
Conversation

@stancikcom

The Gevent and Bjoern adapters improve performance through asynchronous processing of requests.
Further adapters are to be added soon.
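
For readers who have not used these servers, here is a minimal sketch of how a WSGI callable can be served with gevent and with bjoern. The `app`, `run_gevent` and `run_bjoern` names are illustrative only; the actual adapter wiring in this pull request may differ.

```python
# Minimal sketch of serving a WSGI callable with gevent or bjoern.
# `app`, `run_gevent` and `run_bjoern` are illustrative names, not the
# adapter API added by this pull request.

def app(environ, start_response):
    # Trivial stand-in for the framework's WSGI application.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello World!']

def run_gevent(wsgi_app, host='127.0.0.1', port=8080):
    # gevent handles each request in a lightweight greenlet.
    from gevent.pywsgi import WSGIServer
    WSGIServer((host, port), wsgi_app).serve_forever()

def run_bjoern(wsgi_app, host='127.0.0.1', port=8080):
    # bjoern is a fast WSGI server written in C on top of libev.
    import bjoern
    bjoern.run(wsgi_app, host, port)
```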

Hi, you did a nice job with fiole despite its alpha status.
Before I found your project, I had also thought about merging the speed of wheezy with the lean syntax / simplicity / features of itty, Flask... ;-)

I have tried some simple benchmarks with a demo "hello world" app (a sketch of such an app is shown below).
With the "gevent server" patch, the results were rather impressive.

ApacheBench results (gevent server):
Concurrency Level:      100
Time taken for tests:   2.794 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      1470000 bytes
HTML transferred:       120000 bytes
Requests per second:    3578.54 [#/sec] (mean)
Time per request:       27.944 [ms] (mean)
Time per request:       0.279 [ms] (mean, across all concurrent requests)
Transfer rate:          513.72 [Kbytes/sec] received

Setup: the app server and ApacheBench were both running on the same computer (a Lenovo U310 notebook with an Intel(R) Core(TM) i7-3517U CPU @ 1.90GHz, Ubuntu 14.04 x64, Python 2.7 x64).
With console logging suppressed, it reached about 8k requests per second.
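
On the suppressed console logging: with gevent's pywsgi, one way to silence per-request access logging is to override the handler's `log_request` method. This is only a sketch of one possible approach; the exact change used to reach the 8k figure is not shown in this thread.

```python
# Sketch: disable per-request access logging in gevent.pywsgi.
# One possible approach only; not necessarily what was used for the 8k run.
from gevent.pywsgi import WSGIServer, WSGIHandler

class QuietHandler(WSGIHandler):
    def log_request(self):
        # Skip writing an access-log line for every request.
        pass

def run_quiet(wsgi_app, host='127.0.0.1', port=8080):
    WSGIServer((host, port), wsgi_app, handler_class=QuietHandler).serve_forever()
```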

ms@ms-Lenovo-U310:~$ ab -c 1000 -n 10000 localhost:8080/ 
Server Software:        Bjoern
Server Hostname:        localhost
Server Port:            8080

Document Path:          /
Document Length:        12 bytes

Concurrency Level:      1000
Time taken for tests:   1.218 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      1100000 bytes
HTML transferred:       120000 bytes
Requests per second:    8209.94 [#/sec] (mean)
Time per request:       121.804 [ms] (mean)
Time per request:       0.122 [ms] (mean, across all concurrent requests)
Transfer rate:          881.93 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    3   8.8      0      38
Processing:     9   20  17.5     15     423
Waiting:        9   19  16.3     15     423
Total:          9   23  24.6     15     424

Percentage of the requests served within a certain time (ms)
  50%     15
  66%     15
  75%     15
  80%     15
  90%     67
  95%     89
  98%    103
  99%    112
 100%    424 (longest request)