With apologies to the poetess Elizabeth Barrett Browning, this post is about all the ways a Java process can hold on to memory.
When people talk about how much memory a Java process uses, most think only of heap size. But there's more to it.
Firstly, there's obviously the stack. Instantiating a thread - or indeed any object - in Java does not in itself use any additional system memory; it's only when we start the thread that memory usage increases.
Secondly, there are direct byte buffers.
Thirdly, there are memory-mapped files, and there may also be OS-specific means of sharing data between processes.
1. The Heap
The memory usage of processes shares similar characteristics whatever the language, and the concept of heap space is not unique to the JVM.
In C, allocating memory is usually done with the malloc call. Note that in Linux this is not a kernel call but a library call; it can be found in the glibc library. If you're linking against glibc (the usual case on most Linux distros), this is the code that will be executed. But you can use any memory manager you like. You can even write your own.
"A memory manager is a set of routines that takes care of the dirty work of getting your program memory for you. Most memory managers have two basic functions - allocate and deallocate. Whenever you need a certain amount of memory, you can simply tell allocate how much you need, and it will give you back an address to the memory. When you’re done with it, you tell deallocate that you are through with it. allocate will then be able to reuse the memory. This pattern of memory management is called dynamic memory allocation. This minimizes the number of "holes" in your memory, making sure that you are making the best use of it you can. The pool of memory used by memory managers is commonly referred to as the heap.
"The way memory managers work is that they keep track of where the system break is, and where the memory that you have allocated is. They mark each block of memory in the heap as being used or unused. When you request memory, the memory manager checks to see if there are any unused blocks of the appropriate size. If not, it calls the
brk system call to request more memory. When you free memory it marks the block as unused so that future requests can retrieve it." -
Programming from the Ground Up.
This is what malloc does.
"When a process needs memory, some room is created by moving the upper bound of the heap forward, using the
brk() or
sbrk() system calls. Because a system call is expensive in terms of CPU usage, a better strategy is to call
brk() to grab a large chunk of memory and then split it as needed to get smaller chunks. This is exactly what
malloc() does. It aggregates a lot of smaller
malloc() requests into fewer large
brk() calls. Doing so yields a significant performance improvement. " -
Linux Journal.
So, creating an object in Java does not in itself appear to demand any memory from the OS. That is, running Linux's strace against the process doesn't show any system calls to brk or mmap, and the pmap command doesn't indicate that any more memory is being used (from the point of view of the OS).
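If you want to see this for yourself, something like the following sketch works (the class name and the 1KB-array workload are just my choices): it prints the JVM's own view of the heap before and after allocating, then pauses so you can point pmap or strace at the process from another terminal.

import java.util.ArrayList;
import java.util.List;

// A sketch only: allocate ordinary objects and print the JVM's own view of the
// heap, then pause so pmap/strace can be pointed at the process from outside.
public class HeapAllocationDemo {
    public static void main(String[] args) throws Exception {
        Runtime rt = Runtime.getRuntime();
        System.out.printf("before: total=%dK free=%dK%n",
                rt.totalMemory() / 1024, rt.freeMemory() / 1024);

        List<byte[]> retained = new ArrayList<>();
        for (int i = 0; i < 1000; i++) {
            retained.add(new byte[1024]);  // ordinary heap allocations
        }

        System.out.printf("after:  total=%dK free=%dK%n",
                rt.totalMemory() / 1024, rt.freeMemory() / 1024);
        System.in.read();                  // keep the process alive for pmap
    }
}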
2. The Stack
However, upon starting a Java Thread, the pmap command shows increased memory usage - about 312K on my system. Running pmap before and after the JVM calls Thread.start() and diffing the results shows this:
00eb8000 312K rwx-- [ stack ]
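A sketch of the experiment (the sleeping thread body and the pause-on-Enter mechanism are my own choices): instantiate a Thread, pause so pmap can be run, then start it and pause again so the two outputs can be diffed.

// A sketch only: pause before and after Thread.start() so pmap can be run
// against the JVM's PID at each step and the two outputs diffed.
public class ThreadStackDemo {
    public static void main(String[] args) throws Exception {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(60_000);      // keep the thread (and its stack) alive
            } catch (InterruptedException ignored) {
            }
        });
        System.out.println("Thread instantiated - run pmap, then press Enter");
        System.in.read();

        t.start();                         // the extra [ stack ] mapping appears here
        System.out.println("Thread started - run pmap again and diff");
        System.in.read();
    }
}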
3. Direct Byte Buffers
"O
perating systems perform I/O operations on memory areas. These memory areas, as far as the operating system is concerned, are contiguous sequences of bytes. It's no surprise then that only byte buffers are eligible to participate in I/O operations. Also recall that the operating system will directly access the address space of the process, in this case the JVM process, to transfer the data. This means that memory areas that are targets of I/O operations must be contiguous sequences of bytes. In the JVM, an array of bytes may not be stored contiguously in memory, or the Garbage Collector could move it at any time. Arrays are objects in Java, and the way data is stored inside that object could vary from one JVM implementation to another.
"For this reason, the notion of a direct buffer was introduced. Direct buffers are intended for interaction with channels and native I/O routines." -
Java NIO.
Direct buffer memory is allocated with the call:
ByteBuffer.allocateDirect(capacity);
If you keep allocating memory like this, you won't see any changes to the memory usage as reported by calls to:
Runtime.getRuntime().freeMemory();
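Here's a small sketch of that experiment (the 1 MiB buffer size and the pause between allocations are arbitrary choices of mine): it allocates direct buffers one at a time, printing freeMemory() after each, and pauses so the process can be inspected from outside.

import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.List;

// A sketch only: allocate 1 MiB direct buffers one at a time, printing
// freeMemory() after each, and pause so /proc/PID/smaps can be compared.
public class DirectBufferDemo {
    public static void main(String[] args) throws Exception {
        List<ByteBuffer> retained = new ArrayList<>();  // stop the buffers being GC'd
        for (int i = 0; i < 5; i++) {
            retained.add(ByteBuffer.allocateDirect(1024 * 1024));
            System.out.printf("allocated %d MiB direct, freeMemory=%dK%n",
                    i + 1, Runtime.getRuntime().freeMemory() / 1024);
            System.in.read();              // diff /proc/PID/smaps now, then press Enter
        }
    }
}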
However, you will be using memory. In Linux, you can prove this by comparing the /proc/PID/smaps file between calls (where PID is the ID of the process allocating memory). Typically, diff will show something like:
< b49fc000-b4e00000 rw-p 00000000 00:00 0
< Size: 4112 kB
< Rss: 4112 kB
< Pss: 4112 kB
---
> b48fb000-b4e00000 rw-p 00000000 00:00 0
> Size: 5140 kB
> Rss: 5140 kB
> Pss: 5140 kB
824,825c824,825
< Private_Dirty: 4112 kB
< Referenced: 4112 kB
---
> Private_Dirty: 5140 kB
> Referenced: 5140 kB
These addresses correspond to the mmap2 system calls the JVM is making. From strace:
mmap2(NULL, 1052672, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb49fc000
.
.
.
mmap2(NULL, 1052672, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb48fb000
This is when I am allocating 1 MiB of memory at a time. Why the mapping is slightly more than 1 MiB is currently a mystery to me.
4. Memory Mapped Files
"Calling map( ) on a FileChannel creates a virtual memory mapping backed by a disk file and wraps a MappedByteBuffer object around that virtual memory space. "
- Java NIO.
This allows your JVM to use much more memory than the machine physically has. Peter Lawrey on his
blog demonstrates "
8,000,000,000,000 bytes or ~7.3 TB in virtual memory, in a Java process! This works because it only allocates or pages in the pages which you use. So while the file size is almost 8 TB, the actual disk space and memory used is 4 GB."
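A minimal sketch of a single mapping (the file name and the 1 GiB size are arbitrary choices of mine; note that one map() call is limited to 2 GiB, so reaching multi-terabyte figures like Peter's requires several mappings):

import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

// A sketch only: map a 1 GiB region of a file, touch a single page and pause
// so pmap/smaps can be inspected. The file is created sparse, so little disk
// space or physical memory is actually used.
public class MappedFileDemo {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile file = new RandomAccessFile("/tmp/mapped.dat", "rw");
             FileChannel channel = file.getChannel()) {
            long size = 1L << 30;          // 1 GiB
            MappedByteBuffer buffer =
                    channel.map(FileChannel.MapMode.READ_WRITE, 0, size);
            buffer.put(0, (byte) 1);       // touch only the first page
            System.out.println("mapped " + size + " bytes - check pmap/smaps");
            System.in.read();
        }
    }
}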
5. OS-Specific tmpfs
There is RAM-disk-like functionality on most Linux distros called tmpfs (note: most Unix-like systems have something similar). You can use this shared memory out of the box by reading and writing files under the virtual filesystem mounted at /dev/shm.
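From Java's point of view, /dev/shm is just another directory, so ordinary file I/O (or a memory-mapped file, as in the previous section) works; here's a quick sketch (the file name is arbitrary):

import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// A sketch only: to the JVM, /dev/shm is just another directory, so ordinary
// file I/O lands in RAM rather than on disk.
public class ShmDemo {
    public static void main(String[] args) throws Exception {
        Path path = Paths.get("/dev/shm/demo.txt");
        Files.write(path, "hello via tmpfs".getBytes());
        System.out.println(Files.readAllLines(path));
        Files.delete(path);                // contents would otherwise persist until reboot
    }
}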
The advantages of using this memory are outlined in Michael Kerrisk's The Linux Programming Interface:
"Linux also support the notion of virtual file systems that reside in memory... file operations are much faster since no disk access is involved.
"This file system has kernel persistence--the shared memory objects that it contains will persist even if no process currently has them open, but they will be lost if the system is shut down".
[Disclaimer: I am a Java engineer by trade but have an interest in Linux. I have tried my best to be accurate when talking about Linux but cannot guarantee what I write is error free]