
php-framework-benchmark's Introduction

PHP Framework Benchmark

This project attempts to measure the minimum overhead (minimum bootstrap cost) of PHP frameworks in the real world.

So I think the minimal applications used for benchmarking should not include:

  • cost of template engine (HTML output)
  • cost of database manipulation
  • cost of debugging information

Components like template engines or ORM/database libraries are out of scope in this project.

Benchmarking Policy

This is the master branch.

  • Install a framework according to the official documentation.
  • Use the default configuration.
    • Don't remove any components/configurations even if they are not used.
    • Make only the minimum changes needed to run this benchmark.
  • Set the environment to production / turn off debug mode.
  • Run the optimizations you normally do in your production environment, such as Composer's --optimize-autoloader.
  • Use a controller or action class if a framework has that functionality.

Some people may think using the default configuration is not fair. But I think a framework's default configuration is an assertion of what the framework is, and it is a good starting point for getting to know it. Also, I can't optimize all the frameworks; if some were optimized and some were not, that would not be fair either. So I don't remove any components or configurations.

But if you are interested in benchmarking with optimization (removing components/configurations which are not used), see the optimize branch.

If you find something wrong with my code, please feel free to send pull requests. But please note that optimizing only for "Hello World" is not acceptable. Building the fastest "Hello World" application is not the goal of this project.

Results

Benchmarking Environment

  • CentOS 6.8 64bit (VM; VirtualBox)
    • PHP 5.6.30 (Remi RPM)
      • Zend OPcache v7.0.6-dev
    • Apache 2.2

Hello World Benchmark

These are my benchmark results, not yours. I encourage you to run the benchmarks in your own (production-equivalent) environments.

(2017/02/14)

Benchmark Results Graph

|framework|requests per second|relative|peak memory|relative|
|---|---:|---:|---:|---:|
|siler-0.6|2,069.69|20.3|0.25|1.0|
|kumbia-1.0-dev|1,753.60|17.2|0.29|1.2|
|staticphp-0.9|1,665.28|16.3|0.27|1.1|
|phalcon-2.0|1,618.39|15.9|0.26|1.1|
|tipsy-0.10|1,376.97|13.5|0.32|1.3|
|fatfree-3.5|965.16|9.5|0.41|1.7|
|ci-3.0|753.09|7.4|0.42|1.7|
|nofuss-1.2|667.24|6.5|0.40|1.6|
|slim-3.0|550.43|5.4|0.61|2.5|
|bear-1.0|502.52|4.9|0.73|3.0|
|lumen-5.1|415.57|4.1|0.85|3.5|
|yii-2.0|410.08|4.0|1.32|5.4|
|ze-1.0|403.34|4.0|0.75|3.1|
|cygnite-1.3|369.12|3.6|0.71|2.9|
|fuel-1.8|344.26|3.4|0.63|2.6|
|silex-2.0|342.81|3.4|0.78|3.2|
|phpixie-3.2|267.24|2.6|1.25|5.1|
|aura-2.0|233.54|2.3|0.88|3.6|
|cake-3.2|174.91|1.7|1.95|7.9|
|zf-3.0|133.87|1.3|2.24|9.1|
|symfony-3.0|131.50|1.3|2.18|8.9|
|laravel-5.3|101.94|1.0|2.83|11.5|
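The "relative" columns appear to be each value divided by the smallest one in its column (the slowest framework's rps and the lowest peak memory). This normalization rule is my inference from the numbers, not something stated in the source, and small discrepancies in the memory column suggest it is computed before the displayed figures are rounded. A quick sketch of the throughput normalization, using three rows from the table:

```python
# Requests-per-second figures copied from the table above.
rps = {
    "siler-0.6": 2069.69,
    "fatfree-3.5": 965.16,
    "laravel-5.3": 101.94,  # slowest entry, used as the baseline
}

baseline = min(rps.values())

# Relative throughput = rps / slowest rps, rounded to one decimal place.
relative = {fw: round(r / baseline, 1) for fw, r in rps.items()}

print(relative)
# siler-0.6 -> 20.3, fatfree-3.5 -> 9.5, laravel-5.3 -> 1.0 (matches the table)
```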

Note (1): All the results were taken on PHP with phalcon.so and ice.so loaded. If you don't load phalcon.so or ice.so, the rps of the frameworks other than Phalcon and Ice will probably increase a bit.

Note (2): These benchmarks are limited by ab performance. See #62.

How to Benchmark

If you want to benchmark PHP-extension frameworks like Phalcon, you need to install the extensions.

Install the source code so that it is served at http://localhost/php-framework-benchmark/:

$ git clone https://github.com/kenjis/php-framework-benchmark.git
$ cd php-framework-benchmark
$ bash setup.sh

Run benchmarks:

$ bash benchmark.sh

See http://localhost/php-framework-benchmark/.

If you want to benchmark only some of the frameworks:

$ bash setup.sh fatfree-3.5/ slim-3.0/ lumen-5.1/ silex-1.3/
$ bash benchmark.sh fatfree-3.5/ slim-3.0/ lumen-5.1/ silex-1.3/

Linux Kernel Configuration

I added the following to /etc/sysctl.conf:

# Added
net.netfilter.nf_conntrack_max = 100000
net.nf_conntrack_max = 100000
net.ipv4.tcp_max_tw_buckets = 180000
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 10

and reloaded the settings with sudo sysctl -p.

If you want to see the current configuration, run sudo sysctl -a.

Apache Virtual Host Configuration

<VirtualHost *:80>
  DocumentRoot /home/vagrant/public
</VirtualHost>

References

Other Benchmarks

php-framework-benchmark's People

Contributors

adsdeploy, cebe, dracony, gintsmurans, jarnix, joanhey, kenjis, koriym, leocavalcante, mruz, mstruebing, nazar-pc, nhymxu, nyholm, pumatertion, rpkamp, sadok-f, samdark, sanjoydesk, seyfer, sing88, spacedevin, taylorotwell, vonjagdstrumel


php-framework-benchmark's Issues

Can you explain what the parameters for the benchmarks are?

First of all, thank you very much for making such a great tool to benchmark all the PHP frameworks. I have a question about the bash script located in _functions.sh; the line

ab -c 10 -t 3 "$url" > "$ab_log"

does the actual test. It's not clear to me how many requests per second are sent to the framework. I have read the documentation for ab, and it says we should specify the number of requests with -n. I didn't see that in your benchmarking; correct me if I am wrong, please!

Could you maybe explain what you are doing? I mean, how many requests, or how many concurrent users?
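For context on this question (my reading of the Apache ab documentation, not the project's own docs): -c 10 keeps 10 requests in flight at a time and -t 3 runs for 3 seconds; if I read the manual correctly, -t also implies an internal cap of -n 50000. So the total number of requests is not fixed up front; it is roughly the measured rate times the duration. A back-of-envelope sketch, using the ~639 rps Symfony figure from this thread as a hypothetical input:

```python
# Hypothetical inputs: ~639 rps is the Symfony figure described in this
# thread; the 3-second window and concurrency come from `ab -c 10 -t 3`.
rps = 639
duration_s = 3
concurrency = 10  # ab keeps 10 requests in flight at once

# ab sends requests as fast as the server answers, so the total count is
# roughly rate * duration rather than a number chosen in advance.
total_requests = rps * duration_s
print(total_requests)  # roughly 1917 requests in one 3-second run
```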

I have run the benchmark for Symfony 3.0 and Laravel 5.2, and I got the following results.

screenshot_3

What I don't understand is what memory usage means here. I didn't know how many requests were sent, and I don't see the relation between these results.
For example, Symfony could handle 639 rps and spent 7.879 ms with 2.92 MB memory usage; 214 files were included.

This is specifications of my laptop where I have done the benchmark.
screenshot_3

These are the results:
screencapture-127-0-1-1-php-framework-benchmark-1461243921694
The number beside the framework name is the value (rps, MB, or ms). I did that because I wanted to take a screenshot of the page; otherwise you would have to save it as an HTML page and hover over it with your mouse to see the values.

Here info about the version of apache and php
screenshot_1

screenshot_2

Why is my Lumen so much slower than Taylor's?

According to http://taylorotwell.com/how-lumen-is-benchmarked/ :

|framework|requests per second|
|---|---:|
|Slim 2|1700-1800|
|Lumen|1800-1900|
|Slim 3|1200-1300|

  • Lumen is a bit faster than Slim 2.

But my benchmarks (2015/04/15) :

|framework|requests per second|
|---|---:|
|slim-2.6|774.31|
|lumen-5.0|372.07|

  • Slim 2 is about two times faster than Lumen.

I've found some differences in code, but overall benchmarking methodology seems to be the same.

> don't be surprised if your numbers are different than my numbers.

Yes, benchmarking environments change the numbers. But the difference (relative to Slim 2) is too big, isn't it?

I simply want to know why. What makes the huge difference?

Use an easily reproducible server stack as basis for the benchmarks

Currently it takes hours to reproduce the stack that was used to produce the benchmark results. This demotivates others from reproducing the benchmarks on their own servers/workstations, causing the results to be biased towards one particular benchmarking environment.

If the underlying software stack were shared, the benchmark results would vary only according to the host machine's hardware specs and differing code implementations.

Also, it simplifies contribution to php-framework-benchmark, since it makes it easier to test PRs locally before submitting them.

Phalcon benchmarking!

Hi Kenjis,

I have a question about Phalcon benchmarking. Since we haven't installed the Phalcon extension and we haven't configured anything on our server, how can Phalcon run?
If you read the documentation of Phalcon https://docs.phalconphp.com/en/latest/index.html
you know what I mean.
So my question is: is the Phalcon benchmarking legitimate? Personally, I don't know, but when I tried to open the link of the phalcon-2.0 app that was generated after I ran the benchmark, I got an error and couldn't open it: http://127.0.0.1/php-framework-benchmark/phalcon-2.0/public/index.php?_url=/hello/index

Thanks,

Peshmerge

typo3f-3.0 setup is broken

I got this error on the first run of bash setup.sh with PHP 7:

> TYPO3\Flow\Composer\InstallerScripts::postPackageUpdateAndInstall
Generating optimized autoload files
Deprecation Notice: The callback TYPO3\Flow\Composer\InstallerScripts::postUpdateAndInstall declared at /home/seyfer/www/php-framework-benchmark/typo3f-3.0/Packages/Framework/TYPO3.Flow/Classes/TYPO3/Flow/Composer/InstallerScripts.php accepts a Composer\Script\CommandEvent but post-install-cmd events use a Composer\Script\Event instance. Please adjust your type hint accordingly, see https://getcomposer.org/doc/articles/scripts.md#event-classes in phar:///usr/local/bin/composer/src/Composer/EventDispatcher/EventDispatcher.php:289
> TYPO3\Flow\Composer\InstallerScripts::postUpdateAndInstall
PHP Fatal error:  Uncaught TypeError: Argument 1 passed to TYPO3\Flow\Error\AbstractExceptionHandler::handleException() must be an instance of Exception, instance of TypeError given in /home/seyfer/www/php-framework-benchmark/typo3f-3.0/Packages/Framework/TYPO3.Flow/Classes/TYPO3/Flow/Error/AbstractExceptionHandler.php:78
Stack trace:
#0 [internal function]: TYPO3\Flow\Error\AbstractExceptionHandler->handleException(Object(TypeError))
#1 {main}
  thrown in /home/seyfer/www/php-framework-benchmark/typo3f-3.0/Packages/Framework/TYPO3.Flow/Classes/TYPO3/Flow/Error/AbstractExceptionHandler.php on line 78
Execution of subprocess failed with exit code 255 without any further output.
(Please check your PHP error log for possible Fatal errors)

  Type: TYPO3\Flow\Core\Booting\Exception\SubProcessException
  Code: 1355480641
  File: Packages/Framework/TYPO3.Flow/Classes/TYPO3/Flow/Core/Booting/Scripts.php
  Line: 528

Open Data/Logs/Exceptions/20161206152544f24afe.txt for a full stack trace.

And this error with PHP 5.6:

***** typo3f-3.0 *****
Loading composer repositories with package information
Installing dependencies from lock file
Nothing to install or update
Generating optimized autoload files
Deprecation Notice: The callback TYPO3\Flow\Composer\InstallerScripts::postUpdateAndInstall declared at /home/seyfer/www/php-framework-benchmark/typo3f-3.0/Packages/Framework/TYPO3.Flow/Classes/TYPO3/Flow/Composer/InstallerScripts.php accepts a Composer\Script\CommandEvent but post-install-cmd events use a Composer\Script\Event instance. Please adjust your type hint accordingly, see https://getcomposer.org/doc/articles/scripts.md#event-classes in phar:///usr/local/bin/composer/src/Composer/EventDispatcher/EventDispatcher.php:289
> TYPO3\Flow\Composer\InstallerScripts::postUpdateAndInstall
ArrayObject::__construct() expects parameter 3 to be a class name derived from Iterator, '' given

  Type: InvalidArgumentException
  File: Data/Temporary/Production/Cache/Code/Flow_Object_Classes/TYPO3_Flow_Mvc_Con
        troller_Arguments.php
  Line: 293

Laravel 5.2

Hi, I can see that you are still active on this project, thank you for that.

I have just stumbled upon the Taylor Otwell quote: "Laravel 5.2 currently sitting about 25% faster than Laravel 5.1 (before PHP 7)..." https://twitter.com/taylorotwell/status/674327734252892161.

So I see that you still have Laravel 5.1 in the benchmarks. Did you plan to put the mentioned statement to the test? :)

Off topic: I have asked you once (I believe) how I can buy you a beer for your efforts?

Phalcon 3.0 + PHP 7

Phalcon 3 was released last week; it would be really great if we could have a benchmark for it using PHP 7.

Add number of included files

What do you think about adding info about included files?

printf(
    "\n%' 8d:%f:%d",  // the file count is an integer, so %d rather than %f
    memory_get_peak_usage(true),
    microtime(true) - $_SERVER['REQUEST_TIME_FLOAT'],
    count(get_included_files())
);

Add a control case?

I think the comparison could gain some useful context if you added a plain vanilla PHP file to the mix. Since the goal is to see how much overhead is placed on a request, having a control case with no overhead at all gives a clear baseline for the capabilities of the testing hardware.

yaf yaf yaf

I really want to see the Yaf framework in this comparison.
Thank you!

Switching the order of the benchmarks or running individually is inconsistent

It may be my VM setup, but I can come close to your output when running the standard benchmarks out of the box. However, changing the order of the tests seems to affect the outcome dramatically, as does running each one individually. If, for example, I run Tipsy or Laravel first, I get results like >900 rps for them, but moving them down the order drops performance by over 50%. Although I am running this with nginx at the moment, I suspect the testing is not accurate at all, since this change should not show such a dramatic difference.

When running these individually I get a similar result to placing them at the top of the list, so I am wondering what part of the ab test is causing this, or possibly the scripting somehow?

nginx, PHP 5.6.16-3+deb.sury.org~trusty+1, Ubuntu 14.04

|framework|requests per second|relative|peak memory|relative|
|---|---:|---:|---:|---:|
|laravel-5.1|933.13|8.3|0.00|0.0|
|laravel-5.2|559.62|5.0|0.00|0.0|
|fatfree-3.5|564.48|5.0|0.00|0.0|
|slim-2.6|520.47|4.6|0.00|0.0|
|ci-3.0|501.95|4.4|0.00|0.0|
|nofuss-1.2|466.65|4.1|0.00|0.0|
|slim-3.0|469.68|4.2|0.00|0.0|
|bear-1.0|408.57|3.6|0.00|0.0|
|lumen-5.1|404.35|3.6|0.00|0.0|
|ze-1.0|352.31|3.1|0.00|0.0|
|radar-1.0-dev|374.23|3.3|0.00|0.0|
|yii-2.0|112.95|1.0|0.00|0.0|
|silex-1.3|351.49|3.1|0.00|0.0|
|cygnite-1.3|351.01|3.1|0.00|0.0|
|fuel-1.8-dev|333.24|3.0|0.00|0.0|
|phpixie-3.2|327.72|2.9|0.00|0.0|
|symfony-2.7|320.02|2.8|0.00|0.0|
|zf-2.5|307.39|2.7|0.00|0.0|
|typo3f-3.0|291.83|2.6|0.00|0.0|
|phalcon-2.0|1,757.50|15.6|0.00|0.0|
|ice-1.0|1,833.87|16.2|0.00|0.0|

|framework|requests per second|relative|peak memory|relative|
|---|---:|---:|---:|---:|
|lumen-5.1|1,038.13|2.0|0.00|0.0|
|laravel-5.2|600.02|1.2|0.00|0.0|
|laravel-5.1|566.68|1.1|0.00|0.0|
|slim-3.0|551.69|1.1|0.00|0.0|
|symfony-2.7|514.67|1.0|0.00|0.0|

|framework|requests per second|relative|peak memory|relative|
|---|---:|---:|---:|---:|
|laravel-5.2|968.84|1.9|0.00|0.0|
|laravel-5.1|593.43|1.2|0.00|0.0|
|slim-3.0|576.18|1.2|0.00|0.0|
|lumen-5.1|531.08|1.1|0.00|0.0|
|symfony-2.7|499.94|1.0|0.00|0.0|

Some others for comparison

|framework|requests per second|relative|peak memory|relative|
|---|---:|---:|---:|---:|
|silex-1.3|345.10|3.0|0.00|0.0|
|laravel-5.1|559.34|4.9|0.00|0.0|
|laravel-5.2|545.42|4.8|0.00|0.0|
|fatfree-3.5|551.04|4.8|0.00|0.0|
|slim-2.6|517.77|4.5|0.00|0.0|
|ci-3.0|488.31|4.3|0.00|0.0|
|nofuss-1.2|468.78|4.1|0.00|0.0|
|slim-3.0|444.96|3.9|0.00|0.0|
|bear-1.0|420.13|3.7|0.00|0.0|
|lumen-5.1|396.25|3.5|0.00|0.0|
|ze-1.0|383.21|3.4|0.00|0.0|
|radar-1.0-dev|371.82|3.3|0.00|0.0|
|yii-2.0|114.07|1.0|0.00|0.0|
|cygnite-1.3|359.93|3.2|0.00|0.0|
|fuel-1.8-dev|342.59|3.0|0.00|0.0|
|phpixie-3.2|333.53|2.9|0.00|0.0|
|symfony-2.7|310.56|2.7|0.00|0.0|
|zf-2.5|319.69|2.8|0.00|0.0|
|typo3f-3.0|304.33|2.7|0.00|0.0|
|phalcon-2.0|2,018.81|17.7|0.00|0.0|
|ice-1.0|1,816.72|15.9|0.00|0.0|

|framework|requests per second|relative|peak memory|relative|
|---|---:|---:|---:|---:|
|laravel-5.2|968.75|8.2|0.00|0.0|
|laravel-5.1|593.21|5.0|0.00|0.0|
|slim-2.6|585.63|5.0|0.00|0.0|
|ci-3.0|528.41|4.5|0.00|0.0|
|slim-3.0|521.56|4.4|0.00|0.0|
|lumen-5.1|473.79|4.0|0.00|0.0|
|yii-2.0|118.21|1.0|0.00|0.0|
|silex-1.3|436.37|3.7|0.00|0.0|
|fuel-1.8-dev|429.55|3.6|0.00|0.0|
|symfony-2.7|393.60|3.3|0.00|0.0|
|zf-2.5|392.15|3.3|0.00|0.0|
|typo3f-3.0|384.68|3.3|0.00|0.0|
|phalcon-2.0|2,013.04|17.0|0.00|0.0|
|ice-1.0|2,051.81|17.4|0.00|0.0|

Raw php echo and include?

It would be interesting to see the frameworks' performance against raw PHP echo and include. I know it should be the fastest on the chart (and is not used in real development), but how much weight does a framework add?

yii 2

How can Yii 2's file cache be turned off?
Yii 2 is quicker with the file cache.

Use another method of communicating the results than trying to echo them

Many frameworks set the Content-Length header based on what is actually returned by the controller method. When curl requests the resource, it will only save output up to the length given by Content-Length. Any remaining output, such as statements printed/echoed after the framework has run its course, will be omitted, with curl in verbose mode warning: Excess found in a non pipelined read.

This means in practice that the "outdata" results are not included in the benchmarking results for these frameworks:

|framework          |requests per second|relative|peak memory|relative|
|-------------------|------------------:|-------:|----------:|-------:|
|slim-3.0           |             608.14|     0.0|       0.00|     0.0|
|lumen-5.1          |             616.32|     0.0|       0.00|     0.0|
|ze-1.0             |             301.87|     0.0|       0.00|     0.0|
|fuel-2.0-dev       |               0.00|     0.0|       0.00|     0.0|
|bear-0.10          |               0.00|     0.0|       0.00|     0.0|
|laravel-5.1        |               0.00|     0.0|       0.00|     0.0|
|typo3f-3.0         |               0.00|     0.0|       0.00|     0.0|

May I suggest another method of communicating the results than trying to echo them? For instance, they could be written to a file. Also, the memory, time, and file results should not be required/communicated when running ab tests.

Adding another framework

I'm in the process of writing my own micro-framework, mainly for learning purposes and to test different functionalities; my goal is to get to the most optimized end product possible.
Although it's really messy at the moment and has barely any functionality (a list of functionalities is available in my repo), I'd like to know how many requests per second it could handle.
Unfortunately I cannot set up a Linux environment and run the benchmark on my personal computer, so I'd really appreciate it if you could add this to your benchmarks and update the repo. My micro-framework is open source and available here:
https://github.com/snopboy/Sporter

Thanks in advance, I really appreciate it.

Persistence / MySQL should be disabled in TYPO3 Flow

Currently TYPO3 Flow tries to connect to MySQL during the benchmark:

Fatal error: Uncaught exception 'PDOException' with message 'SQLSTATE[HY000] [2003] Can't connect to MySQL server on '127.0.0.1' (111)' in /public/typo3f-3.0/Packages/Libraries/doctrine/dbal/lib/Doctrine/DBAL/Driver/PDOConnection.php:43
Stack trace:
#0 /public/typo3f-3.0/Packages/Libraries/doctrine/dbal/lib/Doctrine/DBAL/Driver/PDOConnection.php(43): PDO->__construct()
#1 /public/typo3f-3.0/Packages/Libraries/doctrine/dbal/lib/Doctrine/DBAL/Driver/PDOMySql/Driver.php(45): Doctrine\\DBAL\\Driver\\PDOConnection->__construct()
#2 /public/typo3f-3.0/Packages/Libraries/doctrine/dbal/lib/Doctrine/DBAL/Connection.php(360): Doctrine\\DBAL\\Driver\\PDOMySql\\Driver->connect()
#3 /public/typo3f-3.0/Data/Temporary/Production/Cache/Code/Flow_Object_Classes/TYPO3_Flow_Persistence_Doctrine_EntityManagerFactory.php(116): Doctrine\\DBAL\\Connection->connect()
#4 /public/typo3f-3.0/Packages/Framework/TYPO3.Flow/Classes/TYPO3/Flow/Object/ObjectManager.php(454): TYPO3\\Flow\\Persistence\\Doctrine\\EntityManagerFactory_Original->create()
#5 /public/typo3f-3.0/Packages/Framework/TYPO3.Flow/Classes/TYPO3/Flow/Object/ObjectManager.php(169): TYPO3\\Flow\\Object\\ObjectManager->buildObjectByFactory()
#6 /public/typo3f-3.0/Data/Temporary/Production/Cache/Code/Flow_Object_Classes/TYPO3_Flow_Persistence_Doctrine_PersistenceManager.php(669): TYPO3\\Flow\\Object\\ObjectManager->get()
#7 /public/typo3f-3.0/Packages/Framework/TYPO3.Flow/Classes/TYPO3/Flow/Object/DependencyInjection/DependencyProxy.php(57): Closure$TYPO3\\Flow\\Persistence\\Doctrine\\PersistenceManager::Flow_Proxy_injectProperties
#2()
#8 /public/typo3f-3.0/Packages/Framework/TYPO3.Flow/Classes/TYPO3/Flow/Object/DependencyInjection/DependencyProxy.php(94): TYPO3\\Flow\\Object\\DependencyInjection\\DependencyProxy->_activateDependency()
#9 /public/typo3f-3.0/Data/Temporary/Production/Cache/Code/Flow_Object_Classes/TYPO3_Flow_Persistence_Doctrine_PersistenceManager.php(58): TYPO3\\Flow\\Object\\DependencyInjection\\DependencyProxy->__call()
#10 /public/typo3f-3.0/Packages/Framework/TYPO3.Flow/Classes/TYPO3/Flow/Core/Booting/Scripts.php(480): TYPO3\\Flow\\Persistence\\Doctrine\\PersistenceManager_Original->initialize()
#11 (): TYPO3\\Flow\\Core\\Booting\\Scripts::initializePersistence()
#12 /public/typo3f-3.0/Packages/Framework/TYPO3.Flow/Classes/TYPO3/Flow/Core/Booting/Step.php(49): call_user_func()
#13 /public/typo3f-3.0/Packages/Framework/TYPO3.Flow/Classes/TYPO3/Flow/Core/Booting/Sequence.php(101): TYPO3\\Flow\\Core\\Booting\\Step->__invoke()
#14 /public/typo3f-3.0/Packages/Framework/TYPO3.Flow/Classes/TYPO3/Flow/Core/Booting/Sequence.php(105): TYPO3\\Flow\\Core\\Booting\\Sequence->invokeStep()
#15 /public/typo3f-3.0/Packages/Framework/TYPO3.Flow/Classes/TYPO3/Flow/Core/Booting/Sequence.php(105): TYPO3\\Flow\\Core\\Booting\\Sequence->invokeStep()
#16 /public/typo3f-3.0/Packages/Framework/TYPO3.Flow/Classes/TYPO3/Flow/Core/Booting/Sequence.php(105): TYPO3\\Flow\\Core\\Booting\\Sequence->invokeStep()
#17 /public/typo3f-3.0/Packages/Framework/TYPO3.Flow/Classes/TYPO3/Flow/Core/Booting/Sequence.php(105): TYPO3\\Flow\\Core\\Booting\\Sequence->invokeStep()
#18 /public/typo3f-3.0/Packages/Framework/TYPO3.Flow/Classes/TYPO3/Flow/Core/Booting/Sequence.php(105): TYPO3\\Flow\\Core\\Booting\\Sequence->invokeStep()
#19 /public/typo3f-3.0/Packages/Framework/TYPO3.Flow/Classes/TYPO3/Flow/Core/Booting/Sequence.php(105): TYPO3\\Flow\\Core\\Booting\\Sequence->invokeStep()
#20 /public/typo3f-3.0/Packages/Framework/TYPO3.Flow/Classes/TYPO3/Flow/Core/Booting/Sequence.php(105): TYPO3\\Flow\\Core\\Booting\\Sequence->invokeStep()
#21 /public/typo3f-3.0/Packages/Framework/TYPO3.Flow/Classes/TYPO3/Flow/Core/Booting/Sequence.php(85): TYPO3\\Flow\\Core\\Booting\\Sequence->invokeStep()
#22 /public/typo3f-3.0/Packages/Framework/TYPO3.Flow/Classes/TYPO3/Flow/Http/RequestHandler.php(142): TYPO3\\Flow\\Core\\Booting\\Sequence->invoke()
#23 /public/typo3f-3.0/Packages/Framework/TYPO3.Flow/Classes/TYPO3/Flow/Http/RequestHandler.php(100): TYPO3\\Flow\\Http\\RequestHandler->boot()
#24 /public/typo3f-3.0/Packages/Framework/TYPO3.Flow/Classes/TYPO3/Flow/Core/Bootstrap.php(112): TYPO3\\Flow\\Http\\RequestHandler->handleRequest()
#25 /public/typo3f-3.0/Web/index.php(31): TYPO3\\Flow\\Core\\Bootstrap->run()
#26 {main}

I tried to disable this via configuration, but I must admit that I don't understand much of the TYPO3 Flow documentation.

Reverse the relative throughput results

It makes little sense to use relative numbers where 1.0 represents the worst framework tested. Instead, the best case (either the fastest framework or, preferably, no framework at all; see #34) should be the base of comparison. We can use 100 as the base number for the best case.

Here is an example result that clearly shows how comparing to the worst-case scenario is non-informative:

|framework          |requests per second|relative|peak memory|relative|
|-------------------|------------------:|-------:|----------:|-------:|
|phalcon-2.0        |             975.53| 5,419.6|       0.27|     1.0|
|ice-1.0            |             611.45| 3,396.9|       0.26|     1.0|
|tipsy-0.10         |             752.09| 4,178.3|       0.32|     1.2|
|fatfree-3.5        |             363.96| 2,022.0|       0.42|     1.6|
|slim-2.6           |             578.86| 3,215.9|       0.48|     1.8|
|ci-3.0             |              67.70|   376.1|       0.43|     1.6|
|nofuss-1.2         |             226.13| 1,256.3|       0.59|     2.2|
|slim-3.0           |             386.59| 2,147.7|       0.62|     2.4|
|bear-1.0           |               0.20|     1.1|       0.76|     2.9|
|lumen-5.1          |             178.44|   991.3|       0.00|     0.0|
|ze-1.0             |             136.61|   758.9|       0.79|     3.0|
|radar-1.0-dev      |             153.33|   851.8|       0.71|     2.7|
|yii-2.0            |             252.91| 1,405.1|       1.34|     5.1|
|silex-1.3          |             327.53| 1,819.6|       0.00|     0.0|
|cygnite-1.3        |              72.30|   401.7|       0.74|     2.8|
|fuel-1.8-dev       |              21.44|   119.1|       0.70|     2.7|
|phpixie-3.2        |              24.92|   138.4|       1.27|     4.8|
|aura-2.0           |             149.58|   831.0|       0.89|     3.4|
|cake-3.1           |             166.20|   923.3|       0.00|     0.0|
|symfony-2.7        |              51.79|   287.7|       0.00|     0.0|
|laravel-5.1        |              76.21|   423.4|       0.00|     0.0|
|zf-2.5             |              32.31|   179.5|       2.93|    11.1|
|typo3f-3.0         |               0.18|     1.0|       0.00|     0.0|

A reversed version is more informative, imo:

|framework          |requests per second|relative|peak memory|relative|
|-------------------|------------------:|-------:|----------:|-------:|
|no-framework       |           1,304.82|   100.0|       0.22|     1.0|
|phalcon-2.0        |             685.55|    52.5|       0.27|     1.2|
|ice-1.0            |             630.68|    48.3|       0.26|     1.2|
|tipsy-0.10         |             774.88|    59.4|       0.32|     1.4|
|fatfree-3.5        |             447.52|    34.3|       0.43|     1.9|
|slim-2.6           |             627.34|    48.1|       0.48|     2.1|
|ci-3.0             |             101.57|     7.8|       0.43|     1.9|
|nofuss-1.2         |             216.13|    16.6|       0.59|     2.6|
|slim-3.0           |             441.98|    33.9|       0.62|     2.8|
|bear-1.0           |              42.33|     3.2|       0.77|     3.4|
|lumen-5.1          |               0.00|     0.0|       0.00|     0.0|
|ze-1.0             |             259.65|    19.9|       0.80|     3.6|
|radar-1.0-dev      |             249.12|    19.1|       0.71|     3.2|
|yii-2.0            |             259.94|    19.9|       1.36|     6.0|
|silex-1.3          |               0.00|     0.0|       0.00|     0.0|
|cygnite-1.3        |             116.15|     8.9|       0.76|     3.4|
|fuel-1.8-dev       |              55.94|     4.3|       0.71|     3.2|
|phpixie-3.2        |              77.97|     6.0|       1.30|     5.8|
|aura-2.0           |             121.80|     9.3|       0.90|     4.0|
|cake-3.1           |               0.00|     0.0|       0.00|     0.0|
|symfony-2.7        |               0.00|     0.0|       0.00|     0.0|
|laravel-5.1        |               0.00|     0.0|       0.00|     0.0|
|zf-2.5             |              18.91|     1.4|       3.00|    13.3|
|typo3f-3.0         |               0.00|     0.0|       0.00|     0.0|
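The reversed scheme proposed here amounts to dividing each framework's rps by the best case and scaling to 100. A sketch of that normalization, with a few values copied from the reversed table above:

```python
# rps values copied from the reversed table above; "no-framework" is the base.
rps = {
    "no-framework": 1304.82,
    "phalcon-2.0": 685.55,
    "ci-3.0": 101.57,
}

base = rps["no-framework"]

# Relative throughput: 100.0 means "as fast as no framework at all".
relative = {fw: round(100 * r / base, 1) for fw, r in rps.items()}

print(relative)
# {'no-framework': 100.0, 'phalcon-2.0': 52.5, 'ci-3.0': 7.8}
```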

'ab' is not fast enough

When benchmarking, it is very important to watch the CPU usage in order to find the bottlenecks.
With this benchmark using 3 s per framework, it is not easy to monitor the CPU usage; after changing to 30 s per framework, it is.

The slowest frameworks saturate the CPU (~100% usage), and that is correct. But the fastest frameworks reach only ~70% CPU usage, and the CPU usage increases with the less performant frameworks.
The problem is that ab can't feed enough requests to the fastest frameworks. We are benchmarking ab's performance, not PHP's performance.

Core i7 860 @ 2.80GHz × 8, PHP 5.5.9

ab -c 10 -t 3 http://localhost/hello.php(.html)
wrk -t10 -c10 -d3s --latency http://localhost/hello.php(.html)

|ab|requests per second|transfer/second (MB)|CPU usage %|
|---|---:|---:|---:|
|hello.php|13,373.38|2.80|40-45|
|hello.html|14,254.07|3.48|35-40|

|wrk|requests per second|transfer/second (MB)|CPU usage %|
|---|---:|---:|---:|
|hello.php|35,285.86|6.77|~90|
|hello.html|49,283.88|11.15|~90|

But perhaps this only happens with fast CPUs, so I also tried with:

Core 2 Duo @ 2.53GHz, PHP 5.6.21

ab -c 10 -t 3 http://localhost/hello.php
wrk -t10 -c10 -d3s --latency http://localhost/hello.php

|hello.php|requests per second|transfer/second (MB)|CPU usage %|
|---|---:|---:|---:|
|ab|2,060.37|0.51|~40|
|wrk|5,621.32|1.30|~95|

Exactly the same. If we calculate the ratio:

|CPU model|wrk / ab|ratio|
|---|---|---:|
|Duo|5,621.32 / 2,060.37|2.72831|
|i7|35,285.86 / 13,373.38|2.63851|

The ratio is very similar, so it's purely an ab performance limitation, independent of the CPU.
The ratio decreases in proportion to how slow the framework is; for the slowest frameworks, ab is fast enough.
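The wrk/ab ratios quoted in this issue can be reproduced directly from the measured rps figures (all numbers copied from the tables above):

```python
# (wrk rps, ab rps) for plain hello.php on each machine, from the tables above.
measurements = {
    "Core 2 Duo": (5621.32, 2060.37),
    "Core i7": (35285.86, 13373.38),
}

for cpu, (wrk_rps, ab_rps) in measurements.items():
    ratio = wrk_rps / ab_rps
    print(f"{cpu}: {ratio:.5f}")
# Both ratios come out near 2.6-2.7, matching the table and suggesting the
# client (ab), not the server, is the limiting factor.
```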

For example, with the Core i7, one of the fastest frameworks goes from ~10,000 to ~16,000 req/s (more than hello.html with ab), while one of the slowest goes only from 603 to 634 req/s.

This benchmark, as it stands, is completely limited by ab performance.

Please use wrk, a modern HTTP benchmarking tool.

https://github.com/wg/wrk

https://github.com/wg/wrk/wiki/Installing-Wrk-on-Linux

Separate Framework, Micro-Framework, and Library Charts

It's not really fair to put all of the framework types and even "libraries" up against each other, because they are used for separate roles. If I want to know which is the best performing framework, I don't care about a micro-framework, because in most cases I need the tools that a full framework offers.

For example, it's misleading that Siler looks like a fast and cool framework; according to the author (https://github.com/leocavalcante/siler) it is a library, not a framework, so of course it will outperform the frameworks.

I don't mind seeing libraries in the chart, but at least mark them so everybody can see what level of functionality can be achieved with them.

Laravel-5.0 benchmark is incorrect

Laravel sets an encrypted cookie on each request, which is why it is so slow. You should disable it, or enable an auth layer in the other frameworks; but that is another kind of test (very useful though).

Add Nette framework

Hi,

this project is perfect, but I have one wish. Could you add one Czech PHP framework, Nette?

Thank you

Division by zero

I have this error:

Warning: Division by zero in /var/www/html/php-framework-benchmark/libs/parse_results.php on line 33
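The warning means parse_results.php divides by a value that is zero for some run; my guess (an assumption, not confirmed by the source) is a baseline of 0 rps from a framework that failed to respond. A defensive sketch of such a relative-value computation:

```python
# Sketch of a defensive relative-value computation. The cause is my guess:
# a framework that failed to respond yields 0 rps, making the baseline zero.
def relative(value, baseline):
    if baseline == 0:
        return 0.0  # avoid a division-by-zero warning for failed runs
    return value / baseline

print(relative(500.0, 0))    # 0.0 instead of a warning
print(relative(500.0, 100))  # 5.0
```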

enh: php56 vs php70 graphs/tables

Enhancement request: it would be very interesting to be able to compare PHP 5.6 against 7.0 using your benchmarks. I mean: update the README to show both.

Error

When I try bash benchmark.sh I get an error:
_hello_world.sh: 16: ./functions.sh: Syntax error: Bad for loop variable

Command Not Found?

When I run bash setup.sh, this message is shown:

root@MHR-PC:/mnt/d/Web/GitHub/php-framework-benchmark# bash setup.sh
setup.sh: line 2: $'\r': command not found
setup.sh: line 27: syntax error near unexpected token `$'do\r''
'etup.sh: line 27: `do

Do you have a solution?

Less/No Manipulation

Any kind of framework should use its default "out of the box" features/behaviour. Symfony, for example, should be the result of "composer create-project symfony/framework-standard-edition", using the same AppKernel settings as the demo acme bundle. Something like this is too much manipulation: https://github.com/kenjis/php-framework-benchmark/pull/3/files#diff-916370cff7f64fa71f0e69c922b515aaL17

Setups should follow the framework's documentation to run a sample application or base distribution, imho.

The TYPO3 Flow benchmark, for example, uses its default setup without any further patching or optimization. It's what the user gets by following the documentation, except for a package which provides simple basic settings and does not optimize anything for better performance: https://github.com/pumatertion/typo3.benchmark

Phalcon 2

Could you please add benchmarks for Phalcon 2?

Nova Framework

Wondering what Nova might do in this benchmark (it used to be called Simple MVC):
http://novaframework.com/

I haven't seen it included in many benchmarks, but for CodeIgniter fans: it is organized similarly, with more current naming conventions.

Separate php.ini for phalcon?

It's not clear from the code here whether or not you used a specific php.ini that loads phalcon.so only for the Phalcon benchmarks. Loading a blank page with phalcon.so loaded has non-trivial overhead compared to a blank page without it. As such, the non-Phalcon benchmarks should all make sure that Phalcon is not loaded when they are run.

You may have done this but it's not clear.

Benchmark bad variable name

I just ran this benchmark on my DigitalOcean droplet. I did a git clone, then ran

sh setup.sh laravel-5.1/ ci-3.0/ yii-2.0/

Success, but when I run

sh benchmark.sh laravel-5.1/ ci-3.0/ yii-2.0/

it produces the following error:

benchmark.sh: 12: export: ci-3.0/: bad variable name

But if I benchmark them one by one, it runs well:

sh benchmark.sh laravel-5.1/
