From: Simon Cruanes <simon.cruanes.2007@m4x.org>
To: Thomas Gazagnaire <thomas@gazagnaire.org>
Cc: OCaml users <caml-list@inria.fr>
Subject: Re: [Caml-list] [ANN] cconv-0.2
Date: Mon, 1 Dec 2014 14:00:54 +0100
Message-ID: <20141201130053.GM37610@emmental.inria.fr>
In-Reply-To: <20141201105227.GB8862@fuck_yeah>
So, by slightly changing the interface of encoders to avoid an intermediate
structure, it is possible to obtain the following (I also added a decoding
benchmark that compares cconv to ppx_deriving_yojson):
====== BEGIN BENCH ======
% ./run_bench.native
encoding...

benchmark points
Throughputs for "manual", "cconv", "deriving_yojson" each running for at least 4 CPU seconds:
         manual:  4.18 WALL ( 4.18 usr + 0.00 sys = 4.18 CPU) @ 2826676.15/s (n=11818333)
          cconv:  4.20 WALL ( 4.20 usr + 0.00 sys = 4.20 CPU) @  941618.68/s (n=3951032)
deriving_yojson:  4.20 WALL ( 4.20 usr + 0.00 sys = 4.20 CPU) @ 2824134.70/s (n=11867014)

                      Rate  cconv  deriving_yojson  manual
          cconv   941619/s     --             -67%    -67%
deriving_yojson  2824135/s   200%               --     -0%
         manual  2826676/s   200%               0%      --

benchmark terms
Throughputs for "manual", "cconv", "deriving_yojson" each running for at least 4 CPU seconds:
         manual:  4.20 WALL ( 4.20 usr + 0.00 sys = 4.20 CPU) @ 1125111.48/s (n=4723218)
          cconv:  4.21 WALL ( 4.21 usr + 0.00 sys = 4.21 CPU) @  789967.68/s (n=3324184)
deriving_yojson:  4.20 WALL ( 4.20 usr + 0.00 sys = 4.20 CPU) @ 1100920.75/s (n=4626069)

                      Rate  cconv  deriving_yojson  manual
          cconv   789968/s     --             -28%    -30%
deriving_yojson  1100921/s    39%               --     -2%
         manual  1125111/s    42%               2%      --

decoding...

benchmark points
Throughputs for "cconv", "deriving_yojson" each running for at least 3 CPU seconds:
          cconv:  3.16 WALL ( 3.16 usr + 0.00 sys = 3.16 CPU) @  493501.11/s (n=1558970)
deriving_yojson:  3.15 WALL ( 3.15 usr + 0.00 sys = 3.15 CPU) @ 1248812.96/s (n=3932512)

                      Rate  cconv  deriving_yojson
          cconv   493501/s     --             -60%
deriving_yojson  1248813/s   153%               --

benchmark terms
Throughputs for "cconv", "deriving_yojson" each running for at least 3 CPU seconds:
          cconv:  3.12 WALL ( 3.12 usr + 0.00 sys = 3.12 CPU) @  577372.88/s (n=1800826)
deriving_yojson:  3.14 WALL ( 3.14 usr + 0.00 sys = 3.14 CPU) @ 1492303.95/s (n=4688819)

                      Rate  cconv  deriving_yojson
          cconv   577373/s     --             -61%
deriving_yojson  1492304/s   158%               --

./run_bench.native  45.01s user 1.16s system 99% cpu 46.181 total
====== END BENCH ======
Encoding records is still slower (because of the intermediate list), and
decoding also has some overhead. However, that is the cost of translating
between the type and its JSON representation; I think it should be negligible
compared to the actual IO + printing/parsing cost.
The new, more efficient interface will probably appear in a future release.
With ppx_deriving_cconv generating the encoders, that change shouldn't be too
big a problem...
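
To make "avoiding an intermediate structure" concrete, here is a rough sketch
(my own simplification, not the actual CConv interface) of the difference
between a record encoder that builds an intermediate field list and one that
pushes fields through a callback supplied by the backend:

  (* rough illustration only, not the CConv API *)
  type json =
    [ `Assoc of (string * json) list
    | `Int of int
    | `String of string ]

  (* "old" style: the record encoder allocates an intermediate assoc list *)
  let encode_point_old (x, y) : json =
    `Assoc [ "x", `Int x; "y", `Int y ]

  (* "new" style: fields are emitted directly through a callback,
     so no intermediate list is built when the backend can stream *)
  let encode_point_new ~(field : string -> json -> unit) (x, y) : unit =
    field "x" (`Int x);
    field "y" (`Int y)

  (* a backend that does need a JSON value can still build one *)
  let to_json encode v : json =
    let fields = ref [] in
    encode ~field:(fun name j -> fields := (name, j) :: !fields) v;
    `Assoc (List.rev !fields)

  let _example : json = to_json encode_point_new (1, 2)

With the callback style, a backend that writes straight to a Buffer or an
output channel never has to materialize the field list, which is presumably
where the remaining record-encoding overhead sits.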
Cheers,
On Mon, 01 Dec 2014, Simon Cruanes wrote:
> On Mon, 01 Dec 2014, Thomas Gazagnaire wrote:
> > Do you have any benchmarks to compare CConv and similar camlp4 generators?
>
> Hi Thomas,
>
> I didn't have any, but I just wrote very basic ones to compare with
> ppx_deriving_yojson (which should behave similarly to the camlp4
> generators). The code is at
> https://github.com/c-cube/cconv/blob/e80ab0e6c458a01b419ea69c7f41d0a350aebbad/bench/run_bench.ml
>
> It only compares encoding times into JSON right now, with the
> following results (recursive records first, then recursive terms;
> "manual" is a handwritten encoding function, "cconv" is the combinator
> version, and "deriving_yojson" uses @whitequark's nice deriver):
>
> % ./run_bench.native
>
> benchmark points
> Throughputs for "manual", "cconv", "deriving_yojson" each running for at least 4 CPU seconds:
>          manual:  4.20 WALL ( 4.20 usr + 0.00 sys = 4.20 CPU) @ 3057270.82/s (n=12846652)
>           cconv:  4.21 WALL ( 4.21 usr + 0.00 sys = 4.21 CPU) @  784724.92/s (n=3300553)
> deriving_yojson:  4.21 WALL ( 4.21 usr + 0.00 sys = 4.21 CPU) @ 3065779.07/s (n=12891601)
>
>                       Rate  cconv  manual  deriving_yojson
>           cconv   784725/s     --    -74%             -74%
>          manual  3057271/s   290%      --              -0%
> deriving_yojson  3065779/s   291%      0%               --
>
> benchmark terms
> Throughputs for "manual", "cconv", "deriving_yojson" each running for at least 4 CPU seconds:
>          manual:  4.20 WALL ( 4.20 usr + 0.00 sys = 4.20 CPU) @ 1679609.71/s (n=7057720)
>           cconv:  4.20 WALL ( 4.20 usr + 0.00 sys = 4.20 CPU) @  726619.43/s (n=3051075)
> deriving_yojson:  4.20 WALL ( 4.20 usr + 0.00 sys = 4.20 CPU) @ 1624740.65/s (n=6822286)
>
>                       Rate  cconv  deriving_yojson  manual
>           cconv   726619/s     --             -55%    -57%
> deriving_yojson  1624741/s   124%               --     -3%
>          manual  1679610/s   131%               3%      --
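
For reference, the tables above come from the "benchmark" opam package; a
tiny, self-contained sketch in the same spirit as bench/run_bench.ml (the
record type and the two encoders below are illustrative stand-ins, not the
recursive types actually measured) looks like this:

  (* illustrative stand-ins; run_bench.ml measures recursive records/terms *)
  type point = { x : int; y : int; name : string }

  (* handwritten encoder that goes through a Yojson value, then prints it *)
  let encode_yojson p =
    Yojson.Safe.to_string
      (`Assoc [ "x", `Int p.x; "y", `Int p.y; "name", `String p.name ])

  (* handwritten encoder that prints straight to a string
     (%S uses OCaml escaping, close enough to JSON for simple names) *)
  let encode_sprintf p =
    Printf.sprintf "{\"x\":%d,\"y\":%d,\"name\":%S}" p.x p.y p.name

  let () =
    let p = { x = 1; y = 2; name = "origin" } in
    Benchmark.tabulate
      (Benchmark.throughputN 4
         [ "yojson",  encode_yojson,  p;
           "sprintf", encode_sprintf, p ])

Benchmark.throughputN runs each function for at least the given number of CPU
seconds and Benchmark.tabulate prints the relative-rate table, which is exactly
the output format shown above.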
--
Simon
http://weusepgp.info/
key 49AA62B6, fingerprint 949F EB87 8F06 59C6 D7D3 7D8D 4AC0 1D08 49AA 62B6