Java Memory Settings – JVM Heap Size

When you run a Java program, you don’t execute it the way you would a native executable written in a language like C or C++. The program must run inside the Java Virtual Machine (JVM), which interprets the compiled Java bytecode and translates it into the native instruction set of the machine. For example, to execute a Java program called HelloWorld you would type:

java HelloWorld
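
If you want something concrete to experiment with, a minimal HelloWorld.java like the one below will do; the maxMemory() line is my own addition (not required for a hello world) so you can see the heap ceiling the JVM actually picked.

// HelloWorld.java – compile with "javac HelloWorld.java", then run with "java HelloWorld"
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, world!");
        // Report the maximum heap the JVM will allow, in megabytes
        long maxMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap: " + maxMb + "MB");
    }
}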

The amount of memory available to the application is known as the Java heap size and is controlled by passing the -Xms and -Xmx flags to the virtual machine. -Xms sets the starting (initial) heap size and -Xmx sets the maximum heap size. On machines with resource constraints you can set a small starting size and a maximum that is just large enough for what you need to do, such as “-Xms128m -Xmx512m” (the JVM expects lowercase size suffixes like m and g, not “MB”). This frees up more memory for other applications, but it does impact performance, as I’ll discuss in a moment.
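
For example, assuming your application’s main class is called MyApp (just a placeholder), a constrained setup would be started like this:

java -Xms128m -Xmx512m MyApp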

If you don’t pass the flags, you get whatever the default settings are – typically something like 32MB for the starting size and 128MB for the max, but it depends on your OS and the particular implementation of Java you’ve installed. For example, if you want to set the max heap size to 256MB you can type:

java -Xmx256m HelloWorld
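
If you’re curious what defaults your particular JVM chose, recent HotSpot-based JVMs can report them with the command below; the flag names in the output (InitialHeapSize, MaxHeapSize) can vary by version, so treat this as a starting point rather than gospel. On Windows, pipe through findstr instead of grep.

java -XX:+PrintFlagsFinal -version | grep -i heapsize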

However, you can’t just put any number you want there and have it work. A 32-bit architecture limits the amount of memory any single process can address to 4GB, yet in practice a 32-bit JVM caps the maximum heap at quite a bit less than 2GB. Why is that?

When you try to run your JVM with the -Xmx2g flag, you’ll get the following error:

Error occurred during initialization of VM
Could not reserve enough space for object heap

This is a limitation of a 32-bit OS such as Windows XP. A 32-bit process can only use a maximum of 4GB of memory address space, and Windows further splits that in half, allocating 2GB to the kernel and 2GB to the application. The reason you cannot reach even the 2GB limit within the VM is that the VM and OS need some of that address space for their own per-process overhead, so you end up with a bit less than the theoretical 2GB. The solution is to move to a 64-bit machine running a 64-bit OS and a 64-bit JVM.

In tests I have found the overall maximum heap you can use with a 32-bit JVM is OS- and JVM-dependent. Typically it’s about 1696MB on Windows, about 1696MB on Linux, and about 3000MB on Solaris.
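
An easy way to find the ceiling on your own machine is to ask the JVM what it actually got. This little sketch (the class name is mine) prints the limits reported by the Runtime class; run it with progressively larger -Xmx values, for example java -Xmx1600m HeapCheck, until the VM refuses to start.

// HeapCheck.java – prints the heap limits the running JVM settled on
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long mb = 1024 * 1024;
        System.out.println("Max heap (-Xmx):   " + (rt.maxMemory() / mb) + "MB");
        System.out.println("Current heap size: " + (rt.totalMemory() / mb) + "MB");
        System.out.println("Free in current:   " + (rt.freeMemory() / mb) + "MB");
    }
}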

As part of my day job at Parasoft I’ve had the chance to work with a large variety of Java applications on a wide variety of machines, and I’ve found some configurations that make things run better.

On a server machine I recommend you set the min and the max to the same (upper) value. This has a couple of benefits. First, it’s generally faster because you don’t lose any time to the JVM resizing the heap. In addition, in cases where you aren’t completely sure the memory will be available, this lets you know immediately that the program doesn’t have enough memory, because it will refuse to even start. You can test this by starting enough applications that you’re down to less than 1GB of free memory, then typing:

java -Xms1024m -Xmx1024m HelloWorld

and it will fail with the same heap error as before, because it is unable to allocate the 1GB of memory we told it we needed. This is useful on servers because it avoids confusing out-of-memory errors later, when we thought we had enough memory but didn’t.
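
If you want to go a step further and confirm that the heap you asked for is actually usable (not just reserved), a rough sketch like the one below allocates 10MB chunks until it approaches the maximum; the class name and chunk size are arbitrary choices of mine.

// TouchHeap.java – tries to actually use most of the configured heap
import java.util.ArrayList;
import java.util.List;

public class TouchHeap {
    public static void main(String[] args) {
        List<byte[]> blocks = new ArrayList<>();
        long mb = 1024 * 1024;
        long target = (long) (Runtime.getRuntime().maxMemory() * 0.9);
        try {
            // Grab 10MB chunks until we are near the configured maximum
            while ((long) blocks.size() * 10 * mb < target) {
                blocks.add(new byte[(int) (10 * mb)]);
            }
            System.out.println("Touched about " + (blocks.size() * 10) + "MB successfully");
        } catch (OutOfMemoryError e) {
            System.out.println("Ran out of memory after about " + (blocks.size() * 10) + "MB");
        }
    }
}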

For performance reasons, the max heap should be at least 500MB less than your total physical memory on Windows, and at least 200MB less on Linux/Unix, so the OS and other processes have room to work. For example, if your Windows machine has 1GB of RAM, the max setting should be no more than about 512MB.
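
Applying the same rule to a Linux box with 1GB of RAM gives roughly 1024MB − 200MB = 824MB, so something like -Xmx800m is a sensible ceiling there.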

An additional note on contiguous memory: the JVM handles its heap as a single contiguous block. This means that if you give the JVM different values for the initial (-Xms) and max (-Xmx) sizes, the size of the heap may change dynamically while the program is running. However, this has some ramifications that frequently catch people by surprise.

The first is performance. If you have -Xms256m -Xmx1024m then your JVM will start with an initial 256MB. If you’re running along happily and then you need something more like 300MB, here’s what happens:

  • First the JVM allocates a completely new block of 300MB.
  • Then it transfers (memcpy) the existing memory into the new block.
  • Then the original 256MB block is released.

As you can imagine, this can be an expensive operation. You might expect the JVM to simply find an available block of 44MB and tack it on, but because the heap must be contiguous there is no other way to handle it. This means heap expansion tends to be slow. Growing the heap on demand is fine on machines with limited memory, but on large machines you can achieve much better performance by allocating the full size up front.
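
You can watch this resizing happen for yourself. The sketch below is my own example, with arbitrary chunk sizes: it allocates memory 10MB at a time and prints the current heap size reported by Runtime.totalMemory(). Start it with something like java -Xms256m -Xmx1024m GrowHeap and you’ll see the reported heap jump as it grows.

// GrowHeap.java – watch the heap expand when -Xms is smaller than -Xmx
import java.util.ArrayList;
import java.util.List;

public class GrowHeap {
    public static void main(String[] args) throws InterruptedException {
        List<byte[]> blocks = new ArrayList<>();
        long mb = 1024 * 1024;
        for (int i = 1; i <= 50; i++) {
            blocks.add(new byte[(int) (10 * mb)]);   // allocate 10MB at a time
            System.out.println("Allocated " + (i * 10) + "MB, current heap: "
                    + (Runtime.getRuntime().totalMemory() / mb) + "MB");
            Thread.sleep(200);  // slow down so the growth is easy to watch
        }
    }
}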

A second effect is that you can get unexpected crashes if the system is unable to make the allocation for the new block. For example, let’s assume you have a machine with 1GB physical memory and 1GB of swap. You’re running “normally” using about 600MB of memory and want to start a Java app that uses a lot of memory. You give the JVM -Xms512m -Xmx1024m and it starts fine.

Obviously there is a bit of swapping going on, but you’re running and happy. Then at some point your app starts needing more memory and wants to use the full 1024MB you’ve told the JVM is available. To review: you are currently using 600MB (other stuff) + 512MB (the Java app) out of 2048MB (total memory). Now, when the JVM wants to grow from 512MB to 1024MB, you might think it only needs an additional 512MB, but that is incorrect.

What actually happens is that a new block of 1024MB must be allocated, AFTER which the original memory is freed. When the new block is larger than what the OS has available, the JVM is typically unable to handle it cleanly and crashes. This is a perfect example of how a JVM can crash even though you thought you had enough memory. In this case, setting the parameters to -Xms1024m -Xmx1024m on the same machine would have succeeded, even though you might think that is less likely to work than the smaller initial allocation.

Because of this, if you have performance issues or unexplained crashes, I normally recommend figuring out how much memory you really need and setting -Xms and -Xmx to the same size. For typical Java apps this can be 256MB or 512MB, or even less; for large, memory-intensive apps 1024MB is a better choice.
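
One way to figure out how much memory an app really needs (an approach that has worked for me, not the only one) is to run it under a realistic workload with -verbose:gc turned on, watch the heap sizes the collector reports, and then set both flags a comfortable margin above the observed peak. For example, with MyApp as a placeholder:

java -verbose:gc -Xms512m -Xmx512m MyApp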

If you’ve got other tips or more information about this, let me know in the comments or on Twitter.

Resources
Thinking in Java

Java Performance: The Definitive Guide: Getting the Most Out of Your Code

Optimizing Java: Practical techniques for improving JVM application performance

One comment on “Java Memory Settings – JVM Heap Size”
  1. This explanation of how the JVM acts was so helpful. I have looked for YEARS into why my application was having to restart randomly – it seemed that there was no identifiable pattern to it. What you explained about the memory “swapping” made sense. I always thought my problem was that I didn’t have enough memory allocated (I would get random “out of memory” errors). Turns out, I had too much memory allocated, so that it wasn’t able to efficiently do the required swapping, and thus it would restart. So, after reading your explanation, I halved the memory I had allocated to my application, and it hasn’t missed a beat! Thank you so much!
