Date: Thu, 28 Oct 2010 11:28:41 +0200
From: Jérémie Dimino
To: Goswin von Brederlow
Cc: caml-list@inria.fr
Subject: Re: [Caml-list] Asynchronous IO programming in OCaml

On Thu, Oct 28, 2010 at 11:00:59AM +0200, Goswin von Brederlow wrote:
> Hehe, and you have proven yourself wrong. As you said, when the file
> isn't cached and the syscall actually blocks, there is no time
> difference. The reason is that the blocking takes the majority of the
> time anyway, and the context switch for the thread doing the read
> doesn't cost anything, since the kernel needs to context switch anyway. :)

The kernel always prefetches part of the file, so the first read launches
a thread but subsequent reads do not, until the offset reaches a part of
the file that is not yet cached. And again, for other system calls, such
as fstat, launching a thread is not a good solution.

> NUMA has memory dedicated to each core. Each core can access its own
> memory fast. Accessing another core's memory, on the other hand, is
> slower (and has to compete for access with that core).

So that is a bad solution for Lwt, since it runs on one system thread.

> Nah, if you can detect the error then handling it is easy. The problem
> is detecting it. Write() itself can fail and that is easy to detect.
> But write() succeeding doesn't mean the data has been written
> successfully. You can still get an error after that, which you only see
> on fsync().

Take buffered channels, for example: when you have to flush the buffer,
you launch a thread calling write and let the program continue writing
to the channel, assuming that the write succeeds. But if one write fails,
you only get the error later. Worse, if the program stops using the
buffer after the flush, it will never see the error.

Jérémie
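
[Editor's note: to make the failure mode described in the last paragraph
concrete, here is a minimal OCaml/Lwt sketch of a buffered channel whose
flush starts the write in the background. It is not Lwt_io's actual
implementation; the channel record, the helper names, and the buffer size
are assumptions for illustration, and the Lwt_unix.write signature shown
is the bytes-based one from recent Lwt releases (older versions took a
string).]

(* Minimal sketch, not Lwt_io's real code: a buffered channel whose
   flush starts the write in the background.  An error from that write
   is only observed by whatever later waits on [pending]; if nothing
   ever does, the error is silently lost. *)

type channel = {
  fd : Lwt_unix.file_descr;
  buf : Buffer.t;
  mutable pending : unit Lwt.t;   (* result of the last background write *)
}

let make fd = { fd; buf = Buffer.create 4096; pending = Lwt.return () }

(* Start writing the buffered bytes and return immediately, so the
   program can keep filling [buf] while the write is in flight. *)
let flush ch =
  let data = Bytes.of_string (Buffer.contents ch.buf) in
  Buffer.clear ch.buf;
  ch.pending <-
    Lwt.bind ch.pending (fun () ->
      (* Partial writes are ignored here for brevity. *)
      Lwt.bind (Lwt_unix.write ch.fd data 0 (Bytes.length data))
        (fun _written -> Lwt.return ()))

let write ch s =
  Buffer.add_string ch.buf s;
  if Buffer.length ch.buf >= 4096 then flush ch

(* Only a point that waits on [pending], such as a final close, can
   observe a failure of an earlier background write. *)
let close ch =
  flush ch;
  Lwt.bind ch.pending (fun () -> Lwt_unix.close ch.fd)

[With this scheme, an error from a write started during flush surfaces,
at best, at the next flush or at close; as the mail says, if the program
simply stops using the channel, nothing ever rebinds on [pending] and the
exception disappears.]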