From: Hugo Ferreira <hmf@inescporto.pt>
To: Gerd Stolpmann <info@gerd-stolpmann.de>
Cc: Martin Jambon <martin.jambon@ens-lyon.org>, caml-list@inria.fr
Subject: Re: [Caml-list] Efficient OCaml multicore -- roadmap?
Date: Sat, 26 Mar 2011 09:11:37 +0000
Message-ID: <4D8DADC9.4010508@inescporto.pt>
In-Reply-To: <1301084818.8429.435.camel@thinkpad>
On 03/25/2011 08:26 PM, Gerd Stolpmann wrote:
> Am Freitag, den 25.03.2011, 19:19 +0000 schrieb Hugo Ferreira:
>> On 03/25/2011 06:24 PM, Martin Jambon wrote:
>>>> On 03/25/2011 01:10 PM, Fabrice Le Fessant wrote:
>>>>> Of course, sharing structured mutable data between threads will not be
>>>>> possible, but actually, it is a good thing if you want to write correct
>>>>> programs ;-)
>>>
>>> On 03/25/11 08:44, Hugo Ferreira replied:
>>>> I'll stick to my guns here. It simply makes solving certain problems
>>>> infeasible. Case in point: I work on machine learning algorithms. I
>>>> use large data structures that must be processed (altered)
>>>> in order to learn. Because these data structures are large, it
>>>> becomes impractical to copy them to a process every time I start a
>>>> new "thread".
>>>
>>> The solution would be to use get/set via a message-passing interface.
>>>
>>
>> I cannot see how this works. Say I want to share a balanced binary tree.
>> Several processes/threads each take this tree and alter it by adding and
>> deleting elements. Each (new) tree is then further processed by other
>> processes/threads.
>>
>> How can get/set be used in this scenario?
>>
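
For concreteness, here is a minimal sketch (message names and helpers
are invented for the example) of the get/set idea with plain OCaml
threads: a single owner thread holds the tree and clients talk to it
over Event channels. Since the stdlib Set is a persistent balanced
tree, handing back a snapshot is O(1), not a deep copy:

  (* Sketch only; build with: ocamlfind ocamlopt -package threads.posix -linkpkg *)
  module IS = Set.Make (struct type t = int let compare = compare end)

  type msg =
    | Add of int
    | Remove of int
    | Snapshot of IS.t Event.channel   (* carries a reply channel *)

  let owner (inbox : msg Event.channel) =
    let rec loop tree =
      match Event.sync (Event.receive inbox) with
      | Add x -> loop (IS.add x tree)
      | Remove x -> loop (IS.remove x tree)
      | Snapshot reply ->
          (* the tree is immutable, so no copying happens here *)
          Event.sync (Event.send reply tree);
          loop tree
    in
    loop IS.empty

  let () =
    let inbox = Event.new_channel () in
    ignore (Thread.create owner inbox);
    Event.sync (Event.send inbox (Add 1));
    Event.sync (Event.send inbox (Add 2));
    let reply = Event.new_channel () in
    Event.sync (Event.send inbox (Snapshot reply));
    Printf.printf "size = %d\n" (IS.cardinal (Event.sync (Event.receive reply)))

A worker takes a snapshot, adds and deletes on its own (persistent)
version, and passes the resulting tree on to the next stage; only
pointers cross the channel, never the whole structure.
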
>>> From my purely speculative perspective, it seems unavoidable that
>>> message-passing happens at some level in order to keep a shared data
>>> structure in sync between a large number of processors. In other words,
>>> any access to a shared data structure requires some physical copy no
>>> matter what the programming language makes it look like.
>>>
>>
>> I assume you are referring to multi-processing where memory is not shared
>> amongst CPUs, correct?
>
> This is quite normal nowadays when you have several CPUs in a (server)
> system. Each CPU gets its own cache, or even its own bank of RAM (e.g.
> all Opterons have this). And there is indeed message passing on the
> hardware level (HyperTransport, QuickPath, or Infiniband). For the
> software, it still looks as if memory were uniform (cache coherency), but
> under the hood messages are exchanged to get this effect (or even RAM
> accesses are routed over the CPU interconnect).
>
> The messages are only noticeable to software as a speed degradation
> when you access RAM in the wrong way. E.g. you can see this when you
> read/modify/write the same memory cell in a loop from two threads
> running on two cores. This is a lot slower than if only a single core
> did this, because the two cores constantly exchange messages and have
> to wait until each message is delivered (this is also known as cache
> line bouncing).
>
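
As an aside, the effect is easy to provoke. Here is a hypothetical
micro-benchmark, assuming OCaml 5's Domain and Atomic modules (which
postdate this thread) and the unix library for timing; on a multicore
machine the shared-cell case is typically several times slower:

  (* Two domains read/modify/write the same cell -> cache line bouncing;
     one private cell per domain -> (mostly) none. *)
  let hammer cell n = for _ = 1 to n do Atomic.incr cell done

  let time label f =
    let t0 = Unix.gettimeofday () in
    f ();
    Printf.printf "%s: %.3f s\n" label (Unix.gettimeofday () -. t0)

  let () =
    let n = 10_000_000 in
    let shared = Atomic.make 0 in
    time "shared cell  " (fun () ->
      let d = Domain.spawn (fun () -> hammer shared n) in
      hammer shared n;
      Domain.join d);
    let a = Atomic.make 0 in
    let _pad = Array.make 64 0 in   (* best-effort padding between a and b *)
    let b = Atomic.make 0 in
    time "private cells" (fun () ->
      let d = Domain.spawn (fun () -> hammer a n) in
      hammer b n;
      Domain.join d)
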
> When you implement a message-passing API for a high-level language,
> the way to go is to provide message buffers in memory. When a thread
> delivers a message it just writes to the buffer. The real message,
> however, is sent by the hardware under the hood, and must be delivered
> by the time the reading thread synchronizes. The point here, IMHO, is
> that you exploit the hardware best when you work in a way the hardware
> handles best. Thus a program written against the message-passing API
> will ultimately be faster than one that uncritically assumes uniform
> memory, modifies it in place, and ends up suffering from cache line
> bouncing.
>
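
To make the buffer idea concrete, here is a minimal sketch of such a
mailbox with plain OCaml threads (Queue, Mutex and Condition from the
standard library); the hardware-level messages Gerd describes stay
hidden behind the two synchronization points:

  type 'a mailbox = {
    q : 'a Queue.t;           (* the message buffer, in ordinary memory *)
    lock : Mutex.t;
    nonempty : Condition.t;
  }

  let create () =
    { q = Queue.create (); lock = Mutex.create ();
      nonempty = Condition.create () }

  let send mb msg =
    Mutex.lock mb.lock;
    Queue.push msg mb.q;      (* delivering = writing into the buffer *)
    Condition.signal mb.nonempty;
    Mutex.unlock mb.lock

  let recv mb =
    Mutex.lock mb.lock;
    while Queue.is_empty mb.q do
      (* by the time this wait returns, the hardware has finished
         delivering the cache traffic for the sender's buffer write *)
      Condition.wait mb.nonempty mb.lock
    done;
    let msg = Queue.pop mb.q in
    Mutex.unlock mb.lock;
    msg
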
> For current standard server hardware there are usually not enough
> CPUs for the effect to be big. But in a few years it will be very
> important (as it already is for supercomputers), maybe even for
> standard 128-core laptops in 2015 (just guessing).
>
Thanks for the info.
Hugo
> Gerd
>
>
>>
>> Hugo F.
>>
>>
>>>
>>> Martin
>>>
>>
>>
>
>