From: Yaron Minsky
Reply-To: yminsky@gmail.com
To: Cedric Cellier
Cc: caml-list@inria.fr
Date: Wed, 26 Oct 2011 06:40:37 -0400
Subject: Re: [Caml-list] [ANN] Async, a monadic concurrency library

On Wed, Oct 26, 2011 at 1:31 AM, Cedric Cellier wrote:
> While comparing async and lwt you forget to mention performances. Didn't
> you run any benchmark with some result to share ? Or are Async performances
> irrelevant to your use cases so you never benchmarked ?

Perhaps surprisingly, while we've benchmarked and tuned the performance of
Async, we've never done the same for Lwt.

This has to do with history as much as anything else.  We wrote the first
version of Async years ago, when Lwt was much less mature.  We looked at Lwt
at the time and decided that there were several design choices that made it
unsuitable for our purposes, so we wrote Async.  We've looked back at Lwt for
inspiration over the years, but we've never again faced the question of
whether to use Lwt or Async, so doing the benchmarking has never been high
on our list.

We do have some guesses about how performance differs.  For example, Lwt's
bind is faster than Async's bind because of different interleaving policies:
Lwt can run the right-hand side of a bind immediately, whereas in Async it is
always scheduled to run later, which is extra work that Async has to do.
That said, we prefer the semantics of Async's bind, even though it has a
performance cost.  (You can easily implement Lwt-style bind on top of Async;
see the sketch at the end of this message.)

Another thing to note for any intrepid benchmarkers is that the released
version of Async is missing a useful feature that is already available in
our development trunk: tail-recursive bind.  In the released version, the
following loop:

let rec loop () =
  after (sec 30.)
  >>= fun () ->
  printf ".%!";
  loop ()

will allocate an unbounded amount of space, creating one deferred value
every 30 seconds.
In our development trunk bind is tail-recursive, but that hasn't gotten to
the public release yet.

You can do the same loop efficiently in Async using the upon operator:

let rec loop () =
  after (sec 30.)
  >>> fun () ->
  printf ".%!";
  loop ()

This is actually more efficient than using the tail-recursive version of
bind, since it allocates one less deferred value every time through the loop.

y
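P.S.  For anyone curious about the parenthetical above, here is a rough,
untested sketch of one way an Lwt-style bind could be built on top of Async.
It assumes the Deferred.peek function and the Core.Std/Async.Std module names
from current releases (the exact names in the version you have may differ),
and lwt_style_bind is just an illustrative name, not part of the library:

open Core.Std
open Async.Std

(* Lwt-style bind: if the deferred is already determined, run the
   continuation immediately within the current cycle; otherwise fall
   back to Async's ordinary bind, which schedules it to run later. *)
let lwt_style_bind d f =
  match Deferred.peek d with
  | Some x -> f x       (* already determined: run right away, as Lwt would *)
  | None   -> d >>= f   (* not yet determined: use Async's scheduled bind   *)

Of course, this gives up exactly the interleaving guarantee that makes us
prefer Async's bind: with lwt_style_bind, the continuation can run before
the code that follows the bind has had a chance to.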