On a day-to-day basis we receive many performance testing project requests, and our team analyzes and works on each one thoroughly. While working on these requests, we have observed that client project managers tend to apply a common strategy to cut the cost of a performance test: reduce the number of vusers, increase the think time, and correspondingly increase the transaction volume on the server. Although, in the customer's interest, we employ every possible approach to reduce cost and deliver the best quality results, we do not recommend this technique, for the following reasons.
1) Every user logged into an application from a client consumes server resources; the session state associated with a user's login is the best example. Session-state information is stored in server memory and remains there for as long as the user is logged in; only when the user logs out does the session expire and its data get released. In many applications this accounts for a significant amount of memory consumption on the servers and is unavoidable. If transactions/sec are increased by decreasing the user load, fewer users are logged in to the test environment during the tests, so less session-related memory is consumed. The development team may therefore never see memory-related issues during the testing phase, and will only learn about them when customers or end users complain about poor performance in production.
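The memory effect can be sketched with a back-of-the-envelope calculation. The per-session size (50 KB) and the user counts below are illustrative assumptions, not measurements from any particular application:

```python
def session_memory_mb(logged_in_users, session_kb=50):
    """Rough estimate of server memory held by session state
    for the given number of concurrently logged-in users."""
    return logged_in_users * session_kb / 1024

# Production target: 2,000 concurrently logged-in users.
production_mb = session_memory_mb(2000)

# Scaled-down test: 200 vusers with shorter think time,
# generating the same transactions/sec.
test_mb = session_memory_mb(200)

print(f"production: {production_mb:.1f} MB, test: {test_mb:.1f} MB")
# The scaled-down test exercises only a tenth of the session-memory
# footprint, so an undersized heap or a session leak can go unnoticed.
```

The transaction rate is identical in both scenarios, yet the memory pressure differs by 10x, which is exactly the risk described above.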
2) Under-representing the number of users also compromises the number of simultaneous connections the clients make to the servers. Application servers and web servers may have connection limits, and the issues that arise when those limits are exceeded may never be exposed during the testing cycle. Furthermore, each simultaneous request may require a database connection, and the database server may limit the number of simultaneous connections. Ideally, the database should be tested against the amount of concurrent reads and writes it will face in production, because this concurrency is the primary cause of many load-related failures.
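A minimal sketch of the connection-limit argument. The limit of 100 connections and the request counts are hypothetical; real servers also queue or time out rather than simply rejecting, but the point about the untested failure path is the same:

```python
DB_MAX_CONNECTIONS = 100  # assumed server-side limit for illustration

def rejected_requests(simultaneous_requests, max_conn=DB_MAX_CONNECTIONS):
    """Requests that fail once the connection limit is exceeded
    (no queueing modeled, for simplicity)."""
    return max(0, simultaneous_requests - max_conn)

# 150 real users hitting the server at once exceed the limit...
print(rejected_requests(150))  # 50 requests fail
# ...but 50 vusers generating the same transaction volume never do,
# so the error-handling path is never exercised in the test.
print(rejected_requests(50))   # 0 requests fail
```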
3) By simulating the desired transaction load with fewer virtual users, connection-pool contention is never exercised, so the associated issues may not be highlighted and may remain unaddressed. In production, users may then experience longer response times while queued waiting for a connection from the pool.
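The queueing cost can be illustrated with a crude model. The pool size, hold time, and request counts are assumptions chosen for the example; a real pool's behavior depends on its scheduling and timeout policy:

```python
def pool_wait_seconds(concurrent_requests, pool_size, hold_time_s):
    """Crude FIFO-batch estimate of how long the last arriving request
    waits: requests beyond the pool size wait for earlier 'waves' of
    holders to release their connections."""
    if concurrent_requests <= pool_size:
        return 0.0
    waves_ahead = (concurrent_requests - 1) // pool_size
    return waves_ahead * hold_time_s

# 250 truly concurrent users against a pool of 100 connections,
# each held for 200 ms, queue noticeably...
print(pool_wait_seconds(250, 100, 0.2))  # 0.4 s of queueing
# ...while a scaled-down test of 80 vusers fits in the pool entirely.
print(pool_wait_seconds(80, 100, 0.2))   # 0.0 s
```

The scaled-down run reports zero queueing even though it drives the same transaction rate, which is why the production wait times come as a surprise.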
4) Issues associated with poor load balancing may also go undetected in such tests. Under high load, balancers sometimes need to move sessions from one server to another. By extrapolating and interpolating from a reduced number of vusers, problems associated with these session movements may never be highlighted, and thus may not be resolved until end users report them in production.
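A sketch of why the migration path can stay untested. The per-server threshold of 500 sessions is a hypothetical balancer setting, not a real product default:

```python
REBALANCE_THRESHOLD = 500  # assumed sessions-per-server limit before
                           # the balancer starts migrating sessions

def sessions_migrated(sessions_on_server, threshold=REBALANCE_THRESHOLD):
    """Sessions the balancer would move off an overloaded node."""
    return max(0, sessions_on_server - threshold)

# At full production load per node, migration fires and the
# session-transfer code (serialization, replication) is exercised...
print(sessions_migrated(800))  # 300 sessions moved
# ...a scaled-down test never crosses the threshold, so that
# code path ships untested.
print(sessions_migrated(80))   # 0 sessions moved
```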
We at AgileLoad understand the budget constraints our customers face. Many other tool vendors charge exorbitant licensing costs, and these costs become unbearable at higher user-load levels. To address this, we have come up with a cloud testing model under which customers pay only when real concurrency is required, and then only a minimal amount. This model allows customers to execute load tests at whatever user load they desire. Since the cost is low, they can run as many user-load scenarios as the business demands and thus get more meaningful results from their tests.
This strategy of putting our customers' benefit ahead of our own has helped us win their confidence, and many teams are now contacting us with their applications' load testing challenges.