From: Yotam Barnoy <yotambarnoy@gmail.com>
To: Malcolm Matalka <mmatalka@gmail.com>
Cc: Yaron Minsky <yminsky@janestreet.com>,
Jesper Louis Andersen <jesper.louis.andersen@gmail.com>,
Ocaml Mailing List <caml-list@inria.fr>
Subject: Re: [Caml-list] Question about Lwt/Async
Date: Mon, 7 Mar 2016 16:54:02 -0500
Message-ID: <CAN6ygOnKaC7CDz67ToN_8rXs8gau8OKWnZpoK2nPgx16P4=BWw@mail.gmail.com>
In-Reply-To: <86d1r69ho4.fsf@gmail.com>
Out of curiosity, what polling mechanism is available on the lwt side?
On Mon, Mar 7, 2016 at 3:06 PM, Malcolm Matalka <mmatalka@gmail.com> wrote:
> Yaron Minsky <yminsky@janestreet.com> writes:
>
> > Right now, only select and epoll are supported, but adding support for
> > something else isn't hard. The Async_unix library has an interface
> > called File_descr_watcher_intf.S, which both select and epoll go
> > through. Adding support for another shouldn't be difficult if someone
> > with the right OS expertise wants to do it.
> >
> > Is there a particular kernel API you want support for?
>
> kqueue. I run most things on FreeBSD, and select is sadly mostly useless
> for anything serious. I've played with the idea of adding kqueue
> support myself but haven't had the time.
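
For anyone curious what such a backend boils down to: the sketch below is
not the actual File_descr_watcher_intf.S signature from Async_unix -- all
names here are invented -- but it shows roughly the shape of interface a
kqueue (or any other poller) backend has to provide to a scheduler.

  (* Hypothetical sketch of a pluggable file-descriptor watcher.
     Requires the unix library for Unix.file_descr. *)
  module type Fd_backend_sketch = sig
    type t

    (* Create the watcher, e.g. by wrapping kqueue() or epoll_create(). *)
    val create : unit -> t

    (* Declare interest in readability and/or writability of a descriptor. *)
    val watch : t -> Unix.file_descr -> read:bool -> write:bool -> unit

    (* Stop watching a descriptor, e.g. when it is closed. *)
    val unwatch : t -> Unix.file_descr -> unit

    (* Block for at most [timeout] seconds; return the ready descriptors. *)
    val wait : t -> timeout:float -> Unix.file_descr list
  end

select, epoll and kqueue all fit this general shape; the differences are in
how watch/unwatch map onto the kernel API and how cheaply wait scales with
the number of descriptors.
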
>
> >
> > y
> >
> > On Mon, Mar 7, 2016 at 1:16 PM, Malcolm Matalka <mmatalka@gmail.com> wrote:
> >> Yaron Minsky <yminsky@janestreet.com> writes:
> >>
> >>> This is definitely a fraught topic, and it's unfortunate that there's
> >>> no clear solution.
> >>>
> >>> To add a bit more information:
> >>>
> >>> - Async is more portable than it once was. There are now Core_kernel,
> >>> Async_kernel and Async_rpc_kernel, which allow us to do things like
> >>> run Async applications in the browser. I would think Windows
> >>> support would be pretty doable by someone who understands that world
> >>> well.
> >>>
> >>> That said, the chain of dependencies brought in by Async is still
> >>> quite big. This is something that could perhaps be improved, either
> >>> with better dead code analysis in OCaml, or some tweaks to
> >>> Async_kernel and Core_kernel themselves.
> >>
> >> When I last looked at the scheduler it was limited to [select] or
> >> [epoll]. Is this still the case? How difficult would it be to expand on
> >> those?
> >>
> >>>
> >>> - There are things we could contemplate to make it easier to bridge
> >>> the divide. Jeremie Dimino did a proof of concept rewrite of lwt to
> >>> use async as its implementation, where an Lwt.t and a Deferred.t are
> >>> equal at the type level.
> >>>
> >>> https://github.com/janestreet/lwt-async
> >>>
> >>> Another possibility, and one that might be easier to write, would be
> >>> to allow Lwt code to run using the Async scheduler as another
> >>> possible back-end. This would allow one to have programs that used
> >>> both Async and Lwt together in one program, without running on
> >>> different threads.
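
As a rough illustration of the type-level part of such a bridge -- this is
my own sketch, not code from the lwt-async experiment, and it deliberately
ignores the hard part, which is having one event loop drive both libraries:

  (* Assumes the lwt and async libraries, with a modern Async where
     [open Async] exposes Ivar and Deferred. [deferred_of_lwt] is an
     invented name. *)
  open Async

  let deferred_of_lwt (t : 'a Lwt.t) : ('a, exn) result Deferred.t =
    let ivar = Ivar.create () in
    (* When the Lwt thread resolves, complete the Async-side ivar. *)
    Lwt.on_any t
      (fun v -> Ivar.fill ivar (Ok v))
      (fun e -> Ivar.fill ivar (Error e));
    Ivar.read ivar

Even with such a conversion, something still has to run the Lwt engine so
that the Lwt thread actually resolves, which is exactly why a shared
scheduler back-end is the more interesting option.
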
> >>>
> >>> It's worth mentioning that if there is interest in making Async more
> >>> suitable for a wider variety of goals, we're happy to work with
> >>> outside contributors on it. For example, if someone wanted to work on
> >>> Windows support for Async, we'd be happy to help out on integrating
> >>> that work.
> >>>
> >>> Probably the biggest issue is executable size. That will get better
> >>> when we release an unpacked version of our external libraries. But
> >>> even then, the module-level granularity captures more things than
> >>> would be ideal.
> >>>
> >>> y
> >>>
> >>> On Mon, Mar 7, 2016 at 10:16 AM, Jesper Louis Andersen
> >>> <jesper.louis.andersen@gmail.com> wrote:
> >>>>
> >>>> On Mon, Mar 7, 2016 at 2:38 AM, Yotam Barnoy <yotambarnoy@gmail.com> wrote:
> >>>>>
> >>>>> Also, what happens to general utility functions that aren't rewritten
> >>>>> for Async/Lwt -- as far as I can tell, being in non-monadic code, they
> >>>>> will always starve other threads, since they cannot yield to another
> >>>>> Async/Lwt thread. Is this perception correct?
> >>>>
> >>>>
> >>>> Yes.
> >>>>
> >>>> On one hand, your observation is negative: your code now has a "color",
> >>>> in the sense that it is written for one library only, and you have to
> >>>> transform code to the right color before it can be used. This is not
> >>>> the case if the concurrency model is at a lower level[0].
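
A tiny invented example of that transformation, using Lwt (Lwt.pause is a
real Lwt primitive; the function names and the chunk size are just
illustrative): a plain fold never gives the scheduler a chance to run,
while the "colored" version yields periodically.

  (* Plain, non-monadic code: under Lwt or Async this runs to completion
     without ever yielding, starving every other task in the meantime. *)
  let sum_plain xs = List.fold_left ( + ) 0 xs

  (* The same computation rewritten in the Lwt "color": every 1000 elements
     it calls Lwt.pause, which reschedules the rest of the work and lets
     other Lwt threads run in between. *)
  let sum_cooperative xs =
    let rec go acc i = function
      | [] -> Lwt.return acc
      | x :: rest ->
          if i mod 1000 = 0 then
            Lwt.bind (Lwt.pause ()) (fun () -> go (acc + x) (i + 1) rest)
          else go (acc + x) (i + 1) rest
    in
    go 0 1 xs
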
> >>>>
> >>>> On the other hand, your observation is positive: cooperative scheduling
> >>>> makes the points at which the code can switch explicit. This gives the
> >>>> programmer far more control over when one task is finished and the next
> >>>> one starts being processed. You also avoid paying for preemption checks
> >>>> throughout the code. If your code manipulates lots of shared data, it
> >>>> also simplifies things, since in a single-threaded context you don't
> >>>> usually have to protect data with a mutex as much[1]. Cooperative
> >>>> models, if carefully managed, can exploit structure in the problem
> >>>> domain, whereas a preemptive model needs to fit everything.
> >>>>
> >>>> My personal opinion is that the preemptive model eventually wins over
> >>>> the cooperative model, much like it has in most (all popular) operating
> >>>> systems. It is simply more productive to take an up-front performance
> >>>> hit in exchange for a system which is more robust against stray code
> >>>> misbehaving. If a cooperative system fails, it fails catastrophically.
> >>>> If a preemptive system fails, it degrades in performance.
> >>>>
> >>>> But given I have more than 10 years of Erlang programming behind me by
> >>>> now, I'm obviously biased toward certain computational models :)
> >>>>
> >>>> [0] Erlang would be one such example, where the system is preemptively
> >>>> scheduling for you and you can use any code in any place without having
> >>>> to worry about blocking for latency. Go is quasi-preemptive because it
> >>>> checks on function calls, but in contrast to Erlang a loop is not forced
> >>>> to factor through a recursion, so it can in principle run indefinitely.
> >>>> Haskell (GHC) is quasi-preemptive as well, checking on memory allocation
> >>>> boundaries. So the thing to look out for in GHC is latency from
> >>>> processing large arrays with no allocation, say.
> >>>>
> >>>> [1] Erlang has two VM runtimes for this reason. One is single-threaded
> >>>> and can avoid lots of locks, which is far faster for certain workloads
> >>>> or on embedded devices with only a single core.
> >>>>
> >>>> --
> >>>> J.
>
Thread overview: 31+ messages
2016-03-07 1:38 Yotam Barnoy
2016-03-07 7:16 ` Malcolm Matalka
2016-03-07 9:08 ` Simon Cruanes
2016-03-07 14:06 ` Yotam Barnoy
2016-03-07 14:25 ` Ashish Agarwal
2016-03-07 14:55 ` rudi.grinberg
2016-03-07 14:59 ` Ivan Gotovchits
2016-03-07 15:05 ` Ivan Gotovchits
2016-03-08 6:55 ` Milan Stanojević
2016-03-08 10:54 ` Jeremie Dimino
2016-03-07 15:16 ` Jesper Louis Andersen
2016-03-07 17:03 ` Yaron Minsky
2016-03-07 18:16 ` Malcolm Matalka
2016-03-07 18:41 ` Yaron Minsky
2016-03-07 20:06 ` Malcolm Matalka
2016-03-07 21:54 ` Yotam Barnoy [this message]
2016-03-08 6:56 ` Malcolm Matalka
2016-03-08 7:46 ` Adrien Nader
2016-03-08 11:04 ` Jeremie Dimino
2016-03-08 12:47 ` Yaron Minsky
2016-03-08 13:03 ` Jeremie Dimino
2016-03-09 7:35 ` Malcolm Matalka
2016-03-09 10:23 ` Gerd Stolpmann
2016-03-09 14:37 ` Malcolm Matalka
2016-03-09 17:27 ` Gerd Stolpmann
2016-03-08 9:41 ` Francois Berenger
2016-03-11 13:21 ` François Bobot
2016-03-11 15:22 ` Yaron Minsky
2016-03-11 16:15 ` François Bobot
2016-03-11 17:49 ` Yaron Minsky
2016-03-08 5:59 ` Milan Stanojević