Continuing my tests on low-end server environments, I recently set up a stress test of a load-balanced Tomcat webapp. Part of that test consists of reducing the memory available to the webapp (via the JVM's -Xmx switch) and seeing how low it can go before performance suffers.
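For reference, here is one way such a heap cap could be applied to Tomcat 6. This is a sketch, not the exact configuration used in the test: it assumes Tomcat is started via the stock catalina scripts, which pick up a setenv file if one exists.

```shell
# setenv.sh, placed next to catalina.sh in Tomcat's bin/ directory
# (on Windows, the equivalent is a setenv.bat setting CATALINA_OPTS).
# Cap the heap at 16 MB for the test run; pinning -Xms to the same
# value keeps the heap size constant so results are comparable.
export CATALINA_OPTS="-Xms16m -Xmx16m"
```

The same flags can of course be passed directly on the java command line for a standalone test.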
It’s common knowledge that once available memory on the JVM heap runs out, the garbage collector runs more frequently and for longer as it tries to free memory, which results in longer pauses and increased CPU usage. A frequent remedy for unexplained high CPU load is therefore to increase the JVM heap size.
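That GC activity is easy to observe from inside the JVM. As a minimal sketch (not part of the original test setup), the standard `java.lang.management` API reports how often each collector has run and how much time it has spent, so you can see the counts climb as the heap gets tight:

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

public class GcStats {
    public static void main(String[] args) {
        // Churn through some short-lived allocations to provoke collections.
        for (int i = 0; i < 1_000_000; i++) {
            byte[] junk = new byte[1024];
        }
        // Each registered collector (young/old generation) exposes its own stats.
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.printf("%s: %d collections, %d ms total%n",
                    gc.getName(), gc.getCollectionCount(), gc.getCollectionTime());
        }
    }
}
```

Running this with a small `-Xmx` versus a generous one makes the difference in collection counts immediately visible; the GC log flags (`-verbose:gc`) give the same picture without code changes.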
The test setup is a 2 GHz dual-core laptop running Windows Vista and Tomcat 6. A normal stress test of the webapp yields around 400 requests/sec at 100% CPU load. Reducing the available heap to 16 MB drops the throughput, as expected, to a very low 16 requests/sec – but with a surprising side effect: CPU usage drops to 50%. In fact, moderately increasing the heap to 20 MB already takes care of the problem.