How to Benchmark Web Server Performance

Do you know the average response time of your website? Do you know how many concurrent users your website can handle?

Load testing is essential for web applications: it tells you your website's capacity. When you are choosing a web server, one of the first things you'll want to do is run a load test and see which one works well for you.

Benchmarking can help you determine:

  • Which web server works best
  • How many servers you need to handle x number of requests
  • Which configuration gives you the best results
  • Which tech stacks perform better
  • When your website slows down or breaks down

There are several online tools for performing a stress test. However, if you are looking for an in-house solution, or just want to benchmark web server performance, you can use ApacheBench or, alternatively, some of the tools listed below.

I used Apache and Nginx web servers hosted on DigitalOcean for the tests.

Apache Bench

ApacheBench (ab) is an open-source command-line tool that works with any web server. In this post, I'll explain how to install this little program and run a load test to compare the results.

Apache

Let's install ApacheBench using yum.

yum install httpd-tools

If you already have httpd-tools, you can skip this step.

Now let's see how it performs with 5000 requests at a concurrency of 500.

[root@lab ~]# ab -n 5000 -c 500 http://localhost:80/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Completed 5000 requests
Finished 5000 requests
Server Software:        Apache/2.2.15
Server Hostname:        localhost
Server Port:            80
Document Path:          /
Document Length:        4961 bytes
Concurrency Level:      500
Time taken for tests:   13.389 seconds
Complete requests:      5000
Failed requests:        0
Write errors:           0
Non-2xx responses:      5058
Total transferred:      26094222 bytes
HTML transferred:       25092738 bytes
Requests per second:    373.45 [#/sec] (mean)
Time per request:       1338.866 [ms] (mean)
Time per request:       2.678 [ms] (mean, across all concurrent requests)
Transfer rate:          1903.30 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0   42  20.8     41    1000
Processing:     0  428 2116.5     65   13310
Waiting:        0  416 2117.7     55   13303
Total:         51  470 2121.0    102   13378
Percentage of the requests served within a certain time (ms)
  50%    102
  66%    117
  75%    130
  80%    132
  90%    149
  95%    255
  98%  13377
  99%  13378
 100%  13378 (longest request)
[root@lab ~]#

As you can see, Apache handled 373 requests per second, and it took a total of 13.389 seconds to serve all the requests (5000 requests / 13.389 s ≈ 373 req/s).

Now you know what the default configuration can handle, so if you make changes to the configuration, you can run the test again, compare the results, and choose the best option.

Nginx

Let's run the same test we did for Apache so you can compare which one performs better.

[root@lab ~]# ab -n 5000 -c 500 http://localhost:80/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking localhost (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Completed 5000 requests
Finished 5000 requests
Server Software:        nginx/1.10.1
Server Hostname:        localhost
Server Port:            80
Document Path:          /
Document Length:        3698 bytes
Concurrency Level:      500
Time taken for tests:   0.758 seconds
Complete requests:      5000
Failed requests:        0
Write errors:           0
Total transferred:      19660000 bytes
HTML transferred:       18490000 bytes
Requests per second:    6593.48 [#/sec] (mean)
Time per request:       75.832 [ms] (mean)
Time per request:       0.152 [ms] (mean, across all concurrent requests)
Transfer rate:          25317.93 [Kbytes/sec] received
Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    6  11.0      2      53
Processing:     5   19   8.2     17      53
Waiting:        0   18   8.2     16      47
Total:         10   25  17.4     18      79
Percentage of the requests served within a certain time (ms)
  50%     18
  66%     21
  75%     21
  80%     22
  90%     69
  95%     73
  98%     75
  99%     76
 100%     79 (longest request)
[root@lab ~]#

WOW!

Did you see that?

Nginx served 6593 requests per second! A clear winner.

So you see that by comparing two web servers, you get an idea of which one to choose for your web application.

The above test was run on CentOS 6.8, 64-bit. You can try multiple combinations of operating system and web server version for the best results.

Don't like ApacheBench for some reason? No worries, there are plenty of other tools you can use to generate HTTP load.

SIEGE

Siege is an HTTP load testing utility supported on UNIX. You can put multiple URLs in a text file and load test them all at once. You can install Siege with yum.

# yum install siege

Let's run a test with 500 concurrent requests for 5 seconds.

[root@lab ~]# siege -q -t 5S -c 500 http://localhost/
Lifting the server siege...      done.
Transactions:                   4323 hits
Availability:                 100.00 %
Elapsed time:                   4.60 secs
Data transferred:              15.25 MB
Response time:                  0.04 secs
Transaction rate:             939.78 trans/sec
Throughput:                     3.31 MB/sec
Concurrency:                   37.97
Successful transactions:        4323
Failed transactions:               0
Longest transaction:            1.04
Shortest transaction:           0.00
[root@lab ~]#

A breakdown of the parameters:

-q – run quietly (don't show request details)

-t – run for 5 seconds

-c – 500 concurrent requests

As you can see, availability is 100% and the response time is 0.04 seconds. You can adjust the load test parameters according to your goal.
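Since Siege can read its targets from a text file, a quick sketch of that workflow might look like the following. The file name and URL paths are placeholders; adjust them to your own site. The siege invocation is shown commented out because it requires Siege to be installed and a server to be running:

```shell
# Create a list of target URLs (hypothetical paths; adjust to your site)
cat > urls.txt <<'EOF'
http://localhost/
http://localhost/about
http://localhost/contact
EOF

# Run Siege against every URL in the file for 30 seconds at 100 concurrency
# (requires Siege to be installed and a web server listening on localhost)
# siege -q -t 30S -c 100 -f urls.txt
```

Siege picks URLs from the file at random by default, which gives a more realistic traffic mix than hammering a single page.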

Ali

Ali is a relatively new load testing tool that can perform real-time analysis. It can be installed on multiple platforms, including via Docker.

Once installed, run ali to view the usage details.

root@lab:~# ali
no target given
Usage:
  ali [flags] <target URL>

Flags:
  -b, --body string         A request body to be sent.
  -B, --body-file string    The path to file whose content will be set as the http request body.
      --debug               Run in debug mode.
  -d, --duration duration   The amount of time to issue requests to the targets. Give 0s for an infinite attack. (default 10s)
  -H, --header strings      A request header to be sent. Can be used multiple times to send multiple headers.
  -k, --keepalive           Use persistent connections. (default true)
  -M, --max-body int        Max bytes to capture from response bodies. Give -1 for no limit. (default -1)
  -m, --method string       An HTTP request method for each request. (default "GET")
  -r, --rate int            The request rate per second to issue against the targets. Give 0 then it will send requests as fast as possible. (default 50)
  -t, --timeout duration    The timeout for each request. 0s means to disable timeouts. (default 30s)
  -v, --version             Print the current version.

Examples:
  ali --duration=10m --rate=100 http://host.xz

Author:
  Ryo Nakao <[email protected]>
root@lab:~#

As you can see above, you have options to send HTTP headers and to set the test duration, request rate, timeout, and more. I did a quick test against Geekflare Tools, and this is what the output looks like.

The report is interactive and gives detailed latency information.

gobench

Gobench is written in Go and is a simple load testing utility for benchmarking web server performance. It supports more than 20,000 concurrent users, which ApacheBench does not.
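Assuming gobench is installed from its GitHub repository, a run comparable to the ab test above might look like the sketch below. The flags follow the project's README (-u for the URL, -c for clients, -t for the test duration in seconds, -k for keep-alive); treat the values as placeholders. The command is commented out because it needs gobench installed and a server running:

```shell
# Hypothetical gobench run: 500 concurrent clients hitting localhost for 10 seconds
# with HTTP keep-alive enabled
# gobench -u http://localhost:80 -k=true -c 500 -t 10
```

Like ab, gobench reports requests per second and success/failure counts when the run finishes, so results are easy to compare side by side.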

Apache JMeter

JMeter is one of the most popular open-source tools for measuring web application performance. JMeter is a Java-based application, and you can use it not only against web servers but also against PHP, Java, ASP.NET, SOAP, REST, and more.

JMeter has a fairly user-friendly GUI, and the latest version 3.0 requires Java 7 or higher to run the application. You should give JMeter a try if your goal is to optimize web application performance.
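Besides the GUI, JMeter can run headless from the command line, which is handy for repeatable benchmarks. A sketch follows; the .jmx test plan and output file names are placeholders (you would first save a test plan from the GUI), and the commands are shown commented out since they require JMeter to be installed:

```shell
# Run a saved test plan in non-GUI mode and write results to a .jtl file
# jmeter -n -t my-test-plan.jmx -l results.jtl

# Optionally generate an HTML dashboard report from the collected results
# jmeter -g results.jtl -o report/
```

Non-GUI mode uses far less memory than the GUI, so it is the recommended way to run the actual load.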

wrk

wrk is another popular performance measurement tool that puts load on your web server and gives you details about latency, requests per second, transfer per second, etc.

With wrk, you can specify the number of threads to use for the load test.

Let's take an example of running a test for 5 minutes with 500 concurrent users using 8 threads.

wrk -t8 -c500 -d300s http://localhost

Autocannon

Inspired by wrk, autocannon is written in Node.js. You can use it programmatically, via its API, or as a standalone utility. All you need as a prerequisite is Node.js installed.

You can control the number of connections, requests, duration, workers, timeout, and connection rate, and it provides numerous options for benchmarking your web applications.
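For example, a run roughly equivalent to the wrk example above might look like this, assuming autocannon was installed globally with npm (npm install -g autocannon). The command is commented out because it needs the tool installed and a server listening on localhost:

```shell
# 500 concurrent connections for 300 seconds against localhost
# (-c sets connections, -d sets duration in seconds)
# autocannon -c 500 -d 300 http://localhost
```

autocannon prints a live-updating latency and throughput table while the test runs, which makes it pleasant for quick interactive checks.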

curl-loader

curl-loader is written in C, simulates application load, and supports SSL/TLS. In addition to testing web pages, you can also use this open-source tool to put load on FTP servers.

You can create a test plan with a mix of HTTP, HTTPS, FTP, and FTPS in a single batch configuration.
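A minimal batch configuration might look like the sketch below. The directive names follow the examples shipped with curl-loader, but treat all the values (interface, IP ranges, URL) as placeholders for your own environment:

```
########### GENERAL SECTION ############
BATCH_NAME=example
CLIENTS_NUM_MAX=100
INTERFACE=eth0
NETMASK=24
IP_ADDR_MIN=192.168.1.1
IP_ADDR_MAX=192.168.1.100
CYCLES_NUM=-1
URLS_NUM=1

########### URL SECTION ############
URL=http://localhost/index.html
URL_SHORT_NAME="index"
REQUEST_TYPE=GET
TIMER_URL_COMPLETION=5000
TIMER_AFTER_URL_SLEEP=100
```

You would then run it with something like curl-loader -f example.conf; each simulated client gets its own IP address from the configured range.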

httperf

httperf is a powerful tool that focuses on both micro- and macro-level benchmarks. It supports the HTTP/1.1 and SSL protocols.

If you have an expected number of concurrent users and want to test whether your web server can handle that many requests, you can use the following command.

httperf --server localhost --port 80 --num-conns 1000 --rate 100

The above command tests with 100 requests per second for a total of 1000 HTTP requests.

Tsung

Tsung is a multi-protocol distributed stress testing tool that can stress HTTP, SOAP, PostgreSQL, LDAP, XMPP, and MySQL servers. It supports HTTP/1.0 and HTTP/1.1, and cookies are handled automatically.

Generating a report is also possible with Tsung.
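Tsung is driven by an XML scenario file. A typical session might look like the sketch below; the scenario file name and the log directory are placeholders, and the path to the report script varies by distribution. The commands are commented out because they require Tsung to be installed:

```shell
# Run a load test described in an XML scenario file
# tsung -f my-scenario.xml start

# Generate the HTML report from the log directory Tsung printed at startup
# (the location of tsung_stats.pl varies by distribution)
# cd ~/.tsung/log/<run-timestamp> && /usr/lib/tsung/bin/tsung_stats.pl
```

The generated report includes graphs of response times, throughput, and simultaneous users over the course of the run.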

Conclusion

I hope the benchmarking tools above give you an idea of your web server's performance and help you decide what works best for your project.

Next, don't forget to monitor the performance of your website.
