Daemonite: Maximum JVM heap size for CFMX Archive

Tuesday, September 14, 2004
Maximum JVM heap size for CFMX

We've been building and setting up a few large CFMX applications of late. When it comes down to optimising and tuning the CFMX setup there's always plenty of debate, so I thought it would be interesting to get folks' feedback on maximum JVM heap size.

In the old days of C++-based CF, the server seemed to take what it needed, but with Java you have to actually set some specific values. Our general approach is to run the application flat-out and see where the JVM sits in terms of memory usage; typically this is much lower than most people suspect. Then we add about 100MB just in case.

I normally look to round the JVM memory setting off to a multiple of 64MB. I suspect this doesn't actually mean anything to the JVM, but it satisfies some bizarre mysticism of mine.
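For a standard CFMX server install, the heap flags live in jvm.config. A minimal sketch of the relevant line, assuming a measured flat-out usage that, with headroom, came to around 512MB (the path and figures are illustrative, so check your own install):

```
# {cfmx_root}/runtime/bin/jvm.config  (location varies by install type)
# -Xms is the starting heap, -Xmx is the ceiling
java.args=-server -Xms512m -Xmx512m
```

Setting -Xms equal to -Xmx avoids the JVM growing the heap in steps under load, though opinions differ on whether that's worth doing.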

You want to make sure you don't set the memory maximum higher than the physical memory available on your machine. So once you account for the operating system and other services you have a theoretical maximum. If it's not enough, demand more RAM -- if your app server starts swapping to disk, your application will nail one foot to the floor and start limping around in a circle.
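The budgeting above is just arithmetic; here's a throwaway sketch of it (the function name and all the figures, including the OS reserve, are made up for illustration):

```python
def max_heap_mb(physical_mb: int, os_reserve_mb: int,
                measured_peak_mb: int, headroom_mb: int = 100) -> int:
    """Pick a JVM max heap: measured peak plus headroom, rounded up to a
    64MB multiple, but never above what's left after the OS and services."""
    ceiling = physical_mb - os_reserve_mb
    wanted = measured_peak_mb + headroom_mb
    rounded = ((wanted + 63) // 64) * 64   # round up to a 64MB boundary
    return min(rounded, ceiling)

# 2GB box, ~512MB for the OS and services, app peaked at ~340MB under load
print(max_heap_mb(physical_mb=2048, os_reserve_mb=512, measured_peak_mb=340))
```

Here the measured peak plus headroom (440MB) rounds up to 448MB, comfortably under the 1536MB ceiling; on a tighter box the ceiling wins and you should be shopping for RAM instead.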

Many folks make the mistake of just throwing as much memory as possible at the JVM. On modern hardware, 1 to 4GB of RAM in a server is not unusual, and it might seem the more the merrier. The technote "Maximum JVM heap size greater than 1.8GB will prevent ColdFusion MX from starting" makes an interesting point.

We found the limit for a JVM on Win2K was actually about 1.6GB on the original CFMX. It seems the JVM wants a contiguous block of memory, and Windows for some reason insists on loading something at about the 1.6GB mark regardless of your physical memory. (Upgrading to Windows Advanced Server and making some configuration changes, I'm told, gives you the upper limit of 1.8GB.) We've not encountered an upper limit on Solaris (not to say there isn't one), and I've heard of folks having trouble with large JVMs on various Linux implementations.
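The "configuration changes" on Windows 2000 Advanced Server are, as I understand it, the /3GB boot switch, which shifts the user/kernel address split and leaves more contiguous address space for the JVM. A sketch of what that looks like in boot.ini (read the technote and Microsoft's docs before touching yours -- the disk/partition details below are placeholders):

```
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINNT="Windows 2000 Advanced Server" /3GB
```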

So I guess the upper limit depends on the JVM version and the platform. But is bigger really better? Well, no. If your application runs in a smaller footprint than the maximum provided to the JVM, the extra memory doesn't appear to make much difference -- it simply goes to waste.

The answer, I think, is to run multiple instances of ColdFusion on the same server, each one set up inside its own comfort zone. That way you make much better use of the memory resources on gruntier servers and gain the benefit of instance clusters to boot.
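On CFMX Enterprise (the multi-server, JRun-based install) each instance can carry its own JVM settings, so the idea sketches out as something like this -- the instance names, sizes and file layout are invented for illustration, and the exact location of each per-instance config depends on your JRun setup:

```
# One modest heap per instance, rather than one giant heap:
#   cfusion1 -> jvm.config with  java.args=-server -Xms512m -Xmx512m
#   cfusion2 -> jvm.config with  java.args=-server -Xms512m -Xmx512m
#   cfusion3 -> jvm.config with  java.args=-server -Xms512m -Xmx512m
# Three 512MB instances put a 2GB box to work without any single JVM
# bumping into the Windows contiguous-memory ceiling.
```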

What do you think? How do you go about choosing the max JVM heap for your installation?

Posted by modius at 10:49 PM | Permalink
Trackback: http://blog.daemon.com.au/cgi-bin/dmblog/mt-tb.cgi/249

Comments

Geoff, very good thoughts...

I usually do not work with Windows-powered systems, so there might be a difference in what I do. My experience is that it's really very helpful to watch memory consumption over a longer timespan and to monitor the JVM's garbage collection as well. A lot of our customers work with non-JRun servers, so in some cases there are brilliant built-in tools to do that (like in WebLogic); in other cases you'd have to find another way (Sun's free jvmstat is great).

In one of those setups (a really huge ecommerce cluster) each WLS instance runs with a max of 512MB JVM heap size. It works fine, and we found a nice GC rhythm under a specific/common load.

Posted by: Kai on September 14, 2004 11:26 PM

Let me correct myself: I DO work with Windows powered systems, but most of our customers are on Linux or Solaris ;)

Posted by: Kai on September 14, 2004 11:27 PM