From: Gerd Stolpmann
To: Jon Harrop
Cc: 'Martin Jambon', Caml List
Date: Wed, 20 Apr 2011 22:30:53 +0200
Subject: RE: [Caml-list] Efficient OCaml multicore -- roadmap?

On Wednesday, 20.04.2011, 20:23 +0100, Jon Harrop wrote:
> Martin wrote:
> > On 03/25/11 08:44, Hugo Ferreira replied:
> > > I'll stick to my guns here. It simply makes solving certain problems
> > > infeasible. Case in point: I work on machine learning algorithms. I
> > > use large data structures that must be processed (altered) in order
> > > to learn. Because these data structures are large, it becomes
> > > impractical to copy them to a process every time I start a new
> > > "thread".
> >
> > The solution would be to use get/set via a message-passing interface.
>
> The main objective when parallelizing a program is to remove focal
> points because they are sources of contention. Funnelling all IO through
> a single getter/setter would add a focal point and, most likely, destroy
> scalability.
>
> > From my purely speculative perspective, it seems unavoidable that
> > message passing happens at some level in order to keep a shared data
> > structure in sync between a large number of processors. In other
> > words, any access to a shared data structure requires some physical
> > copy no matter what the programming language makes it look like.
>
> Yes. The hardware maintains the illusion of shared memory so the
> programmer need only optimize on the basis of an accurate cost model.
> There is no need to burden the programmer with message passing and
> manual memory management.

I don't understand this last sentence. Message passing is just an abstract
concept (i.e. an interface) which can, for example, be implemented with a
shared buffer (or the illusion thereof) or any other data structure that
decouples the sender and receiver processes. Are you against using the
right interfaces when they are appropriate? We don't want to burden the
programmer with too low-level details, do we?
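To make this concrete, here is a minimal sketch of such a channel, assuming
the Mutex, Condition and Queue modules from OCaml's threads library (the
names channel, send and recv are purely illustrative):

  (* A message-passing channel implemented on top of a shared buffer.
     Callers only ever see the send/recv interface; the queue and the
     lock protecting it stay hidden behind the abstraction. *)
  type 'a channel = {
    buf  : 'a Queue.t;      (* the shared buffer *)
    lock : Mutex.t;         (* protects buf *)
    cond : Condition.t;     (* signals "buffer is non-empty" *)
  }

  let create () =
    { buf = Queue.create ();
      lock = Mutex.create ();
      cond = Condition.create () }

  let send ch x =
    Mutex.lock ch.lock;
    Queue.push x ch.buf;
    Condition.signal ch.cond;
    Mutex.unlock ch.lock

  let recv ch =
    Mutex.lock ch.lock;
    while Queue.is_empty ch.buf do
      Condition.wait ch.cond ch.lock
    done;
    let x = Queue.pop ch.buf in
    Mutex.unlock ch.lock;
    x

A receiver simply blocks in recv until some sender has pushed a value;
neither side ever touches the buffer directly, which is exactly the kind of
decoupling I mean.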
Gerd

> Cheers,
> Jon.

--
------------------------------------------------------------
Gerd Stolpmann, Bad Nauheimer Str.3, 64289 Darmstadt, Germany
gerd@gerd-stolpmann.de          http://www.gerd-stolpmann.de
Phone: +49-6151-153855                  Fax: +49-6151-997714
------------------------------------------------------------