The Heartbleed bug that affected OpenSSL earlier this year was a failure to perform a proper bounds check. It allowed the client to dictate how much memory the server set aside for its response. The server might fill only part of this memory, yet all of its contents were returned to the client. The problem in a language like C is that this memory might be dirty. That is, it may contain other people's discarded data.
The patch was to check the bounds so no superfluous data was returned to the client. But why was the data not clean in the first place?
Basically, it's an optimization. Clearing the content of memory is expensive.
Java is much more secure than C in this respect. If you grab some off-heap memory, it is cleaned before being given to you. If you call ByteBuffer.allocateDirect(int) you are given an instance of a package-private class, DirectByteBuffer. In its constructor you see:
base = unsafe.allocateMemory(size);
where unsafe is an object of class sun.misc.Unsafe. The method allocateMemory takes us to unsafe.cpp in OpenJDK where we see:
void* x = os::malloc(sz, mtInternal);
if (x == NULL) {
    THROW_0(vmSymbols::java_lang_OutOfMemoryError());
}
//Copy::fill_to_words((HeapWord*)x, sz / HeapWordSize);
return addr_to_java(x);
This doesn't clean the memory (that's done later). As an interesting aside, pass -XX:+PrintMalloc on the command line and the JVM will output lines like:
os::malloc 536870912 bytes --> 0x00007f1ce7fff028
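The effect of this zeroing is easy to observe from plain Java. A minimal sketch (the class name here is my own): a freshly allocated direct buffer always reads back as zeros, which is exactly what ByteBuffer.allocateDirect guarantees.

```java
import java.nio.ByteBuffer;

public class ZeroedBuffer {
    public static void main(String[] args) {
        // allocateDirect guarantees the buffer's contents start out as zeros.
        ByteBuffer buf = ByteBuffer.allocateDirect(1 << 20);
        boolean allZero = true;
        for (int i = 0; i < buf.capacity(); i++) {
            if (buf.get(i) != 0) {
                allZero = false;
                break;
            }
        }
        System.out.println(allZero); // prints "true"
    }
}
```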
Anyway, the cleaning of the memory comes here, a few lines later in DirectByteBuffer:
unsafe.setMemory(base, size, (byte) 0);
which is a call to:
UNSAFE_ENTRY(void, Unsafe_SetMemory(JNIEnv *env, jobject unsafe, jlong addr, jlong size, jbyte value))
    UnsafeWrapper("Unsafe_SetMemory");
    size_t sz = (size_t)size;
    if (sz != (julong)size || size < 0) {
        THROW(vmSymbols::java_lang_IllegalArgumentException());
    }
    char* p = (char*) addr_from_java(addr);
    Copy::fill_to_memory_atomic(p, sz, value);
UNSAFE_END
Despite the mention of filling atomically, this call to the Copy class looks no more complicated than something like this:
for (uintptr_t off = 0; off < size; off += sizeof(jlong)) {
    *(jlong*)(dst + off) = fill;
}
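A hypothetical plain-Java analogue of that word-at-a-time fill might look like this (the fill method and its names are my own, not the JDK's): widen the fill byte into an 8-byte pattern, write whole longs while at least 8 bytes remain, then mop up the tail byte by byte.

```java
import java.nio.ByteBuffer;

public class FillSketch {
    // Hypothetical analogue of a word-at-a-time fill: write the fill
    // pattern one 8-byte long at a time rather than byte by byte.
    static void fill(ByteBuffer dst, byte value) {
        long pattern = 0;
        for (int i = 0; i < 8; i++) {
            pattern = (pattern << 8) | (value & 0xFFL); // replicate the byte 8 times
        }
        dst.clear();
        while (dst.remaining() >= 8) {
            dst.putLong(pattern);   // bulk writes, 8 bytes per store
        }
        while (dst.hasRemaining()) {
            dst.put(value);         // remaining tail, byte by byte
        }
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(20);
        fill(buf, (byte) 7);
        boolean ok = true;
        for (int i = 0; i < buf.capacity(); i++) {
            if (buf.get(i) != 7) ok = false;
        }
        System.out.println(ok); // prints "true"
    }
}
```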
The consequence of this security is a loss of performance:
"It's a well known fact that direct bytebuffer allocation is much slower than non direct byte buffers.
"If you think about it when you allocate a non direct byte buffer then it basically just needs to dereference a pointer to some memory on the Java heap, which is very quick. But for a direct buffer, it has to malloc real memory from the OS, and do a bunch of other house keeping.
"So yes, if you're using direct byte buffers it pays to re-use them."
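A minimal sketch of that re-use pattern (the class and method names here are my own): allocate the direct buffer once, then reset it between uses with clear(), which rewinds the position and limit but does not re-zero the contents.

```java
import java.nio.ByteBuffer;

public class BufferReuse {
    // Pay the allocation (and zeroing) cost once, up front.
    private static final ByteBuffer SCRATCH = ByteBuffer.allocateDirect(64);

    // Hypothetical helper: write an int into the shared buffer and read it back.
    static int roundTrip(int value) {
        SCRATCH.clear();       // reset position/limit; contents are NOT re-zeroed
        SCRATCH.putInt(value);
        SCRATCH.flip();        // switch from writing to reading
        return SCRATCH.getInt();
    }

    public static void main(String[] args) {
        System.out.println(roundTrip(42)); // prints "42"
    }
}
```

In real code a shared scratch buffer like this would of course need to be confined to a single thread, or made per-thread.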