techempower / frameworkbenchmarks

Source for the TechEmpower Framework Benchmarks project
Home Page: https://www.techempower.com/benchmarks/
License: Other
See if this helps the php-raw test.
Hi,
I've worked through some of the tests and noticed that certain frameworks don't implement all of them.
I see an issue with that, in the sense that it removes part of the program's logic and simplifies request handling.
Taking the netty test (which did best) as an example, I don't see any routing logic in the source, which seems a bit like cheating to me.
I'd suggest not ranking frameworks with an incomplete implementation, or at least requiring them to contain the same routing logic as the other tests do.
Cheers,
Heinz
As suggested in issue #1
see here: playframework/playframework#899
In production, Node.js apps run with cluster. Cluster helps Node.js utilize all processor cores.
A Java servlet is typically multithreaded. That means multiple requests to the same servlet are executed at the same time.
More information: http://nodejs.org/api/cluster.html
I think Node will produce far better results with clustering.
Go is a concurrent systems programming language; its strength is its ability to produce scalable and reliable software. In your example you set GOMAXPROCS high but don't use goroutines to make the work parallel/concurrent. That's like cutting out a feature, and it's not a scenario where the language can show its nature.
As suggested in issue #1.
Changing apc.stat to 0 will prevent APC from checking the mtime of files, improving performance. I have not made this change, since you will also need to change your setup to call apc_clear_cache() or restart the server when you deploy changes, or stale versions of files will be used.
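For reference, the setting being discussed lives in apc.ini; a minimal sketch (the values are assumptions for a deploy-then-restart workflow):

```ini
; apc.ini — skip the per-request stat() call on cached files.
; Requires restarting PHP (or calling apc_clear_cache()) on deploy.
apc.enabled=1
apc.stat=0
```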
We received a number of requests to allow sorting by latency/stddev/etc on the blog posts.
Nazariy suggested using MySQLi in issue #1.
I'm getting an error when trying to run a test with my own machine as both a client and server:
~/FrameworkBenchmarks :master ⚡ ./run-tests.py -u me --max-threads 4 --test flask-python
=====================================================
Preparing up Server and Client ...
=====================================================
net.core.somaxconn = 1024
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_tw_recycle = 1
kernel.shmmax = 134217728
kernel.shmall = 2097152
me@localhost's password:
Welcome to Ubuntu 12.10 (GNU/Linux 3.5.0-26-generic x86_64)
* Documentation: https://help.ubuntu.com/
sudo: no tty present and no askpass program specified
sudo: no tty present and no askpass program specified
sudo: no tty present and no askpass program specified
sudo: no tty present and no askpass program specified
sudo: no tty present and no askpass program specified
sudo: no tty present and no askpass program specified
=====================================================
Parsing Results ...
=====================================================
Time to complete: 6 seconds
Results are saved in results/ec2/20130405200905
How do I prevent this?
Hi there! I'm the creator of an actor system + I/O framework + web server for Ruby: Celluloid, Celluloid::IO, and Reel respectively.
It seems like you guys have created a pretty interesting framework for performance benchmarking. My only complaint is that I don't think your existing tests of Ruby projects are covering the state-of-the-art and therefore make Ruby look rather bad.
I've recently done some major bottleneck elimination in Celluloid::IO and am getting rather promising numbers on JRuby, especially when InvokeDynamic is enabled. It'd be great to get my web server Reel included in your tests. Reel is also supported by the Webmachine framework, which would give a better indicator of the speed when utilized with a useful web framework as opposed to just a bare bones web server. Your benchmarks seem to run the entire gauntlet in this regard so I think including both makes sense.
I'm definitely willing to help out here however possible, and just wanted to open a tracking ticket to let you know ;)
One thing you have to understand about PHP is that nobody except shared web hosts uses the technologies you're using.
First of all, any serious PHP app should be using APC by default; that's the nature of how PHP works compared to other language deployments. Second, use php-fpm behind an nginx web server (or at least the worker MPM in Apache 2.4 with php-fpm or FastCGI).
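The nginx side of that setup is a few lines of config; a minimal sketch, assuming php-fpm listens on a local unix socket (the socket path is an assumption):

```nginx
# Hand .php requests to a php-fpm pool over FastCGI.
location ~ \.php$ {
    include        fastcgi_params;
    fastcgi_pass   unix:/var/run/php-fpm.sock;  # assumed socket path
    fastcgi_param  SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```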
Okay I hope I don't start to get annoying ;)
The concurrency levels used for the benchmark seem ridiculously low; if someone has between 8 and 265 concurrent requests on a web application that does virtually no work, the framework should not be a concern.
Modern frameworks talk about concurrency levels in the range of tens of thousands if not millions of concurrent requests. I'd think something of 10k upwards would be more interesting and valuable information.
PS: I'll get some test hardware in the next days and will try to run these tests under 10k+ conditions and provide some results if there is interest.
Some defaults, for some frameworks, perform better on some machines. tricky.
Or, when that is too much hassle, it should be configured to the same size on all frameworks regardless of the default. Then we at least know the db test outcomes are "at a pool size of n".
As suggested in Issue #5
From meritt at http://news.ycombinator.com/item?id=5500014
"I'd set minimum idle to something like 16 or 32. php-fpm will not create more than 32 workers/sec.
What happens now is 256 workers running and 256 simultaneous requests occur. So php-fpm sees 256 workers busy, 0 idle. The minimum idle is 256, so it attempts to start 256 additional processes."
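The process-manager knobs meritt is describing live in the php-fpm pool configuration; a sketch with assumed values:

```ini
; php-fpm pool config — dynamic process management
pm = dynamic
pm.max_children = 256      ; hard cap on workers
pm.start_servers = 32      ; assumed
pm.min_spare_servers = 16  ; keep idle target modest, per the quote
pm.max_spare_servers = 64  ; assumed
```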
I would be interested in adding tests for ServiceStack (.net framework). To run tests under linux, this would require mono for compilation, and either mono_fastcgi on nginx or mod_mono on apache2 to run. Is this something that could be supported by the test framework?
I don't know weighttp well enough to make any comparison, but I've been recommended this one often and it does work quite well:
The apps currently use Warbler to turn Ruby apps into WAR files on JRuby. While this is certainly a supported option, it's one of the slowest ways to deploy a JRuby application, due to the additional overhead of converting servlet headers back into byte[]-based strings and other things.
This puts JRuby at a disadvantage.
I would recommend running the tests with a purpose-built Ruby server like Puma or Torquebox. For this particular case, either would do a better job, but Puma's probably the easiest to get up and going.
This is in progress.
This makes both gunicorn and the json module significantly faster with no other changes, which should also affect overall Python performance significantly, particularly for long-running processes less affected by JIT warmup.
In the era of multi-core computers, requests per second are still relevant, but things like latency and maximum concurrent connections become much more important. Latency can be horrible while QPS still looks reasonable, and with horrible latency a framework under test won't be usable or acceptable in very many cases.
Sometimes it is beneficial to sacrifice a bit of sequential speed for much better concurrency. Today's 24+-core machines with huge amounts of RAM can handle millions of concurrent connections, but some frameworks are much better at that than others. This will need some work on kernel parameter tuning, e.g.:
http://blog.whatsapp.com/index.php/2012/01/1-million-is-so-2011/
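Typical kernel knobs for very high connection counts look like the following (the values are illustrative assumptions, not the settings from the linked post):

```conf
# /etc/sysctl.conf — raise limits for many concurrent connections
fs.file-max = 1000000
net.core.somaxconn = 65535
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.tcp_max_syn_backlog = 65535
```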
Instead of Korma. Add to existing Compojure "app" but as a "raw" DB test.
As suggested in issue #1.
As the MySQL protocol does not allow parallel queries on one connection, to get real parallel queries one needs to acquire multiple db connections from the pool. If the test tries to emulate "N queries, each depending on the previous result", then async.parallel here is unnecessary; if it's actually "execute N independent queries", then it might be better to get a connection from the pool for each sub-query.
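The independent-queries case can be sketched in Python (illustrative only; the pool and `run_query` are stand-ins, not the actual driver): each in-flight query checks out its own connection rather than serializing on one.

```python
# Sketch: N independent queries run concurrently, one pooled
# "connection" per in-flight query (pool entries are stand-ins).
from concurrent.futures import ThreadPoolExecutor
import queue

POOL_SIZE = 4
pool = queue.Queue()
for conn_id in range(POOL_SIZE):      # fake connections
    pool.put(conn_id)

def run_query(n):
    conn = pool.get()                 # acquire a dedicated connection
    try:
        return n * n                  # stand-in for conn.query(...)
    finally:
        pool.put(conn)                # return it to the pool

with ThreadPoolExecutor(max_workers=POOL_SIZE) as ex:
    results = list(ex.map(run_query, range(10)))
print(results)
```

With a single shared connection the ten queries would have to run back to back; with the pool, up to POOL_SIZE of them are in flight at once.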
I see an issue with comparing 'peak' performance, especially over a set of different tests (i.e. different concurrency levels).
This seems a bit odd to me; I'd have expected to look at the lowest number and do a worst-case "you'll get at least X" comparison instead of a best-case "when you're lucky you might even get X". Especially with concurrency, which is likely not to end up in the framework's sweet spot.
Alternatives would be an average, a weighted average, or multiple benchmarks at different concurrency levels.
As suggested in issue #1
The simple way gunicorn is used right now is more typical of local testing than production. When using CPython in a performance-conscious way, it is customary to serve using uwsgi or if using gunicorn, to have it use an optimized worker like gevent or meinheld. At any rate, these servers are designed to serve from behind something else, e.g. nginx (e.g. uwsgi using uwsgi_pass). This is common knowledge in the Python community.
All of these dependencies can be installed with 'pip install': gunicorn, gevent, uwsgi, meinheld, using CPython 2.7.
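A gunicorn config file is itself Python; a minimal sketch of the setup described above (the bind address, worker count, and choice of gevent are assumptions):

```python
# gunicorn_conf.py — run with: gunicorn -c gunicorn_conf.py app:app
# Assumes gevent is installed and nginx proxies to this in production.
import multiprocessing

bind = "127.0.0.1:8000"      # nginx proxies to this address
workers = multiprocessing.cpu_count() * 2
worker_class = "gevent"      # or "meinheld.gmeinheld.MeinheldWorker"
keepalive = 5
```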
As suggested in issue #29.
This statement is good, but doesn't cover PHP:
We're using the word "framework" loosely to refer to platforms, micro-frameworks, and full-stack frameworks.
PHP is none of those things, it's a language just like Ruby or Python, and it is not a framework for the same reason Ruby, Python, JavaScript or Java are not frameworks.
PHP is not a platform either, and only becomes one when you utilise Apache or other web software such as Nginx. As you are testing with Apache, list "php" as "apache-php", just like you have listed Ruby as "rack-ruby".
Beyond that, PHP 5.4 is the current stable release, so use that; it's much faster than PHP 5.3. Using 5.3 would be like using Ruby 1.8.x, which is considerably slower than 1.9.x.
Also, nobody in their right mind is going to put their site live without APC or Zend Optimizer, so enable one of those instead.
Finally, throw React in there for a Node.js-style setup. The author of this is very adamant it's not a framework, but since this project is already flagrantly disregarding widely accepted definitions of these terms, it fits right in with the other systems you've been benchmarking.
PHP 5.3.x will be entering its end-of-life cycle this month. The testing infrastructure should at least include a runner for PHP 5.4.x. An additional runner for PHP 5.5.x might also be a good idea to start now, given that it is currently winding its way through beta.
MongoDB is currently at version 2.4 and should be upgraded as well.