On February 12, 2019, I ran a series of LoadImpact stress tests across the test websites I use to collect performance data about web hosts. All tests simulated a peak of 100 virtual users (VUs) over a duration of 10 minutes, with requests originating from Ashburn, VA.
In these tests, the number of VUs was gradually increased to 50. At that point, the number of VUs was held constant for a few minutes before gradually increasing to a peak of 100 VUs. The load was kept at 100 simultaneous users for a few minutes before gradually dropping back to zero.
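For illustration, the ramp profile described above can be sketched as a set of stages with a helper that interpolates the active VU count at any point in the test. The specific ramp and hold durations below are my assumptions chosen to sum to 10 minutes; the tests themselves only pin down the total duration and the 50- and 100-VU plateaus.

```javascript
// Hypothetical sketch of the 10-minute load profile described above.
// The per-stage durations are assumptions; only the total duration
// and the 50/100 VU plateau levels come from the actual tests.
const stages = [
  { duration: 120, targetVUs: 50 },  // ramp up from 0 to 50 VUs
  { duration: 120, targetVUs: 50 },  // hold at 50 VUs
  { duration: 120, targetVUs: 100 }, // ramp from 50 to 100 VUs
  { duration: 120, targetVUs: 100 }, // hold at 100 VUs
  { duration: 120, targetVUs: 0 },   // ramp back down to 0
];

// Linearly interpolate the number of active VUs at second `t` of the test.
function vusAt(t, stages) {
  let elapsed = 0;
  let previousTarget = 0;
  for (const { duration, targetVUs } of stages) {
    if (t <= elapsed + duration) {
      const fraction = (t - elapsed) / duration;
      return Math.round(previousTarget + fraction * (targetVUs - previousTarget));
    }
    elapsed += duration;
    previousTarget = targetVUs;
  }
  return 0; // test finished
}
```

In k6 (the open-source successor to LoadImpact's test runner), a profile like this is expressed declaratively via `options.stages` rather than computed by hand; the function above just makes the shape of the load curve explicit.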
Response times, requests per second, and request statuses were tracked during the tests. That data provides insights about how well different shared web hosts can handle intense, short-duration server loads.
This was the first time I ran identical LoadImpact tests across all of my test websites. I think the results are useful enough to be worth sharing, but there’s room for improvement in my methodology. Many of the test websites hit their peak capacity well before 100 virtual users were reached. I ran the tests over a couple of hours (rather than simultaneously), and it’s possible that some hosts had advantages related to the time of day the tests took place.
Note that these test results won’t necessarily be relevant to all website owners. An awful lot of websites have very modest resource needs. If you run a basic, low-traffic website, these tests may be of little use for evaluating web hosts.
Performance degradation during these LoadImpact tests could be the result of (a) pushing servers to their limits, (b) triggering safeguards that prevent any one shared hosting user from consuming too much of a server’s shared resources, or (c) triggering safeguards that specifically limit the activity of applications like LoadImpact. At this time, I don’t feel as confident as I would like about what is occurring in some of the cases where I see performance degradation.
I expect that my methodology—as well as my ability to interpret the results of LoadImpact tests—will improve in future rounds of testing.
Graphical results & comments
1&1 IONOS – Business Plan
HTTP errors started to occur with fewer than 10 VUs. The failure rate rose as the number of VUs increased.
A2 – Lite Plan
Almost half of the HTTP requests to the site’s domain failed. I’m not sure when during the test these failures occurred.
Bluehost – Basic Plan
HTTP requests failed once there were more than about 10 VUs. Requests continued to fail until the end of the test, when the number of VUs dropped significantly.
DreamHost – Starter Plan
HTTP failures became common at about 25 VUs and the failure rate increased gradually as the number of VUs rose.
FatCow – Original Plan
About 10% of HTTP requests failed. I’m not sure when during the test these failures occurred.
Hawk Host – Primary Plan
Performance declined gracefully after about 50 VUs. No HTTP requests failed during the test.
HostGator – Hatchling Plan
Performance seemed steady until about 25 VUs were active. At this point, HTTP requests started to fail occasionally. The failure rate reached 100% while at 50 VUs.
Hostinger – Single Plan
No readily apparent issues.
InMotion – Launch Plan
As the load increased, throughput appeared limited. HTTP errors were common.
iPage – Foundation Plan
A small portion of requests resulted in HTTP errors. Response times appeared stable until exceeding 50 VUs.
Namecheap – Stellar Plan
No readily apparent issues.
SiteGround – StartUp Plan
Performance appeared stable for most of the test. Stability was maintained at 100 VUs, but response times increased rapidly towards the end of the test. Stability was regained when the number of VUs dropped to about 20. About one-third of 1% of requests resulted in errors. I expect those errors occurred during the period of high response times.