From: Arne Babenhauserheide
Newsgroups: gmane.lisp.guile.user
Subject: Guile fibers server vs. varnish/lighttpd and SSL:nginx/varnish/lighttpd
Date: Sat, 15 Jul 2017 21:30:21 +0200
Message-ID: <87tw2dcvwt.fsf@web.de>
To: guile-user@gnu.org
Hi,

I did a simple performance test of Guile fibers vs. my local setup, which
(a) serves the lighttpd "it works" reply using varnish (varnish/lighttpd) and
(b) forwards that over SSL using nginx (nginx/varnish/lighttpd).

The Guile fibers server uses the following simple script:
- https://notabug.org/ArneBab/guile-base64server/src/master/server.scm#L135
- https://bitbucket.org/ArneBab/guile-base64server/src/aec8471fdeff/server.scm#server.scm-135
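In case you do not want to follow the links: the relevant part is just a small
Guile web handler. Below is a minimal sketch of that shape, not the actual
server.scm from the links. It sticks to Guile's stock (web server) module, so
the guile-fibers wiring is left out, and the reply text is only a placeholder.

    (use-modules (web server))

    ;; Handler in the shape Guile's (web server) expects: it receives the
    ;; request and the request body and returns the response headers plus
    ;; the response body (the 200 status is filled in by default).
    (define (handler request body)
      (values '((content-type . (text/plain)))
              "it works\n"))   ; placeholder reply, not the real one

    ;; Stock single-threaded backend on port 2342 (the port used in the
    ;; benchmark below); the benchmarked server selects the guile-fibers
    ;; backend instead -- see the linked server.scm for the details.
    (run-server handler 'http '(#:port 2342))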
The system is a simple home server behind a cheap off-the-shelf fritz-box
which was already outdated when we got it around 4 years ago. The server has
two cores, clocked down to 800MHz, runs Gentoo GNU/Linux, and is typically
completely overloaded with 3-4 Freenet instances (only the one with the WoT
plugin and Sone actually overloads it; the others are around 10% load each).

I tested it with wrk. Here are three runs: one with 50 concurrent requests,
one with 100, and one with 200. With 200 concurrent requests the latencies
rise sharply. The requests are fired from a remote box.

$ wrk -c 50 -t 3 -d 60s --timeout 15m http://d6.gnutella2.info:2342; wrk -c 50 -t 3 -d 60s --timeout 15m http://d6.gnutella2.info:80; wrk -c 50 -t 3 -d 60s --timeout 15m https://d6.gnutella2.info

Running 1m test @ http://d6.gnutella2.info:2342
  3 threads and 50 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   383.40ms  385.49ms   4.10s    91.35%
    Req/Sec    41.52      8.37     63.00     64.51%
  7508 requests in 1.00m, 1.13MB read
Requests/sec:    125.13
Transfer/sec:     19.31KB
Running 1m test @ http://d6.gnutella2.info:80
  3 threads and 50 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   500.32ms  505.07ms   6.90s    94.21%
    Req/Sec    26.70      5.40     41.00     66.59%
  4893 requests in 1.00m, 1.60MB read
Requests/sec:     81.55
Transfer/sec:     27.29KB
Running 1m test @ https://d6.gnutella2.info
  3 threads and 50 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   595.61ms  700.75ms   9.28s    92.21%
    Req/Sec    24.84      5.91     41.00     73.22%
  4303 requests in 1.00m, 1.49MB read
  Socket errors: connect 0, read 0, write 0, timeout 161
Requests/sec:     71.72
Transfer/sec:     25.36KB

$ wrk -c 100 -t 3 -d 60s --timeout 15m http://d6.gnutella2.info:2342; wrk -c 100 -t 3 -d 60s --timeout 15m http://d6.gnutella2.info:80; wrk -c 100 -t 3 -d 60s --timeout 15m https://d6.gnutella2.info

Running 1m test @ http://d6.gnutella2.info:2342
  3 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   927.76ms    2.86s    27.74s   97.13%
    Req/Sec    44.08     10.46     73.00     71.20%
  7867 requests in 1.00m, 1.19MB read
  Socket errors: connect 0, read 0, write 0, timeout 11
Requests/sec:    131.11
Transfer/sec:     20.23KB
Running 1m test @ http://d6.gnutella2.info:80
  3 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   605.29ms  874.02ms  11.70s    94.95%
    Req/Sec    29.18      7.80     45.00     61.41%
  5321 requests in 1.00m, 1.74MB read
  Socket errors: connect 0, read 0, write 0, timeout 18
Requests/sec:     88.68
Transfer/sec:     29.68KB
Running 1m test @ https://d6.gnutella2.info
  3 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.10s     3.19s    34.16s   97.20%
    Req/Sec    24.39      7.20     40.00     64.13%
  4320 requests in 1.00m, 1.49MB read
  Socket errors: connect 0, read 0, write 0, timeout 891
Requests/sec:     71.95
Transfer/sec:     25.36KB

$ wrk -c 200 -t 3 -d 60s --timeout 15m http://d6.gnutella2.info:2342; wrk -c 200 -t 3 -d 60s --timeout 15m http://d6.gnutella2.info:80; wrk -c 200 -t 3 -d 60s --timeout 15m https://d6.gnutella2.info

Running 1m test @ http://d6.gnutella2.info:2342
  3 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.63s     8.82s    0.96m    95.40%
    Req/Sec    43.75     10.14     78.00     71.72%
  7743 requests in 1.00m, 1.17MB read
  Socket errors: connect 0, read 0, write 0, timeout 95
Requests/sec:    129.05
Transfer/sec:     19.91KB
Running 1m test @ http://d6.gnutella2.info:80
  3 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     4.51s     7.28s    18.51s   78.77%
    Req/Sec    27.53      8.17     55.00     66.06%
  5106 requests in 1.00m, 1.67MB read
  Socket errors: connect 0, read 0, write 0, timeout 96
Requests/sec:     85.10
Transfer/sec:     28.50KB
Running 1m test @ https://d6.gnutella2.info
  3 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     4.02s    10.62s   49.52s    91.64%
    Req/Sec    23.12      7.30     39.00     68.04%
  3908 requests in 1.00m, 1.35MB read
  Socket errors: connect 0, read 0, write 0, timeout 2981
Requests/sec:     65.13
Transfer/sec:     23.02KB

Guile creates around 18% load (of the 200% available on the 2 cores
combined). Varnish creates around 6% load for varnish/lighttpd (lighttpd does
not create noticeable load here; it likely only gets hit a single time).
nginx adds another 6% load for nginx/varnish/lighttpd (total: 12%).

I hope these results are an interesting datapoint for you. For all
interpretations, keep in mind that this box really is perpetually overloaded
(though the high-load processes run at niceness 10).

Best wishes,
Arne

-- 
Unpolitisch sein heißt politisch sein, ohne es zu merken
(Being apolitical means being political without noticing it)