[Repost] Memory Leaks, Be Gone!

chauchau0 2006. 12. 7. 18:08

http://dev2dev.bea.com/pub/a/2005/06/memory_leaks.html

Memory Leaks, Be Gone!

by Staffan Larsen
06/27/2005

Abstract

Although the Java Virtual Machine (JVM) and its garbage collector (GC) manage most memory chores, it is possible to have memory leaks in Java software programs. Indeed, this is quite a common problem in large projects. The first step to avoiding memory leaks is to understand how they occur. This article presents some common pitfalls and best practices for writing non-leaking Java code. Once you have a memory leak, it can be very difficult to pinpoint the code that creates the leak. This article also presents a new tool for efficiently diagnosing the leak and pinpointing the root cause. The tool has a very low overhead, allowing you to find memory leaks in production-type systems.

The Role of the Garbage Collector

While the garbage collector takes care of most of the problems involving management of memory, making life easier for the programmer, it is possible for the programmer to make mistakes that lead to memory issues. Stated simply, the GC works by recursively following all references from the “root” objects (objects on the stack, static fields, JNI handles, and so on) and marking as live all the objects it can reach. These are the only objects the program can ever manipulate; all other objects are deleted. Since the program has no way of reaching the deleted objects, deleting them is safe.

Although memory management might be said to be automatic, it does not free the programmer from thinking about memory management issues. For example, there will always be a cost associated with allocating (and freeing) memory, although this cost is not made explicit to the programmer. A program that creates too many objects will be slower than one that does the same thing with fewer objects (provided all other things are equal).

And, more relevant for this article, it is possible to create a memory leak by forgetting to “free” memory that has previously been allocated. If the program keeps references to objects that will never be used again, those objects will hang around and eat up memory, because there is no way for the automatic garbage collector to prove that they will not be used. As we saw earlier, as long as an object is reachable through some chain of references from a root, it is by definition live and therefore will not be deleted. To make sure the object’s memory is reclaimed, the programmer has to make sure the object can no longer be reached. This is typically done by setting object fields to null or by removing objects from collections. Note, however, that it is not necessary to explicitly set local variables to null when they are no longer used; these references are cleared automatically as soon as the method exits.
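As a minimal sketch of these two techniques (the class and field names are invented for illustration), the following code nulls out a field and clears a collection so that the large objects they referenced become unreachable and can be collected:

import java.util.ArrayList;
import java.util.List;

public class ReportGenerator {
    private byte[] lastReport;                              // potentially large
    private final List<byte[]> history = new ArrayList<>();

    void generate() {
        byte[] report = new byte[10 * 1024 * 1024];         // 10 MB of report data
        history.add(report);
        lastReport = report;
    }

    void discardOldData() {
        lastReport = null;   // the report is no longer reachable through this field
        history.clear();     // removing the entries lets the GC reclaim the reports
    }
}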

At a high level, this is what all memory leaks in memory-managed languages revolve around: leftover references to objects that will never be used again.

Typical Leaks

Now that we know it is indeed possible to create memory leaks in Java, let’s have a look at some typical leaks and what causes them.

Global collections

It is quite common in larger applications to have some kind of global data repository, a JNDI-tree for example, or a session table. In these cases care has to be taken to manage the size of the repository. There has to be some mechanism in place to remove data that is no longer needed from the repository.

This may take many different forms, but one of the most common is some kind of cleanup-task that is run periodically. This task will validate the data in the repository and remove everything that is no longer needed.
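A minimal sketch of such a cleanup task is shown below. It assumes a hypothetical session table keyed by session id and uses a ScheduledExecutorService to periodically remove entries that have not been touched within a timeout:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class SessionRepository {
    // Global repository: session id -> time of last access (milliseconds).
    private final Map<String, Long> lastAccess = new ConcurrentHashMap<>();
    private static final long TIMEOUT_MS = 30 * 60 * 1000;

    private final ScheduledExecutorService cleaner =
            Executors.newSingleThreadScheduledExecutor();

    public SessionRepository() {
        // The cleanup task runs once a minute and validates every entry.
        cleaner.scheduleAtFixedRate(this::removeExpired, 1, 1, TimeUnit.MINUTES);
    }

    public void touch(String sessionId) {
        lastAccess.put(sessionId, System.currentTimeMillis());
    }

    private void removeExpired() {
        long now = System.currentTimeMillis();
        // Remove everything that is no longer needed from the repository.
        lastAccess.entrySet().removeIf(e -> now - e.getValue() > TIMEOUT_MS);
    }
}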

Another way to manage this is to use reference counting. The collection is then responsible for keeping track of the number of referrers for each entry in the collection. This requires referrers to tell the collection when they are done with an entry. When the number of referrers reaches zero, the element can be removed from the collection.
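The sketch below illustrates the reference-counting idea; it is simplified (for example, it is not fully race-free) and the class and method names are assumptions, not part of any existing API:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

public class RefCountedRepository<K, V> {
    private static final class Entry<V> {
        final V value;
        // Starts at 1: the producer holds the initial reference and must release() it.
        final AtomicInteger refCount = new AtomicInteger(1);
        Entry(V value) { this.value = value; }
    }

    private final Map<K, Entry<V>> entries = new ConcurrentHashMap<>();

    public void put(K key, V value) {
        entries.put(key, new Entry<>(value));
    }

    // Every acquire() must be paired with a later release() by the same referrer.
    public V acquire(K key) {
        Entry<V> e = entries.get(key);
        if (e == null) return null;
        e.refCount.incrementAndGet();
        return e.value;
    }

    public void release(K key) {
        Entry<V> e = entries.get(key);
        if (e != null && e.refCount.decrementAndGet() == 0) {
            entries.remove(key);     // no referrers left: the element can be removed
        }
    }
}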

Caches

A cache is a data structure used for fast lookup of results for already-executed operations. If an operation is slow to execute, you can cache the result of the operation for common input data and use that cached data the next time the operation is invoked.

Caches typically are implemented in a dynamic fashion, where new results are added to the cache as they are executed. A typical algorithm looks like:

  1. Check whether the result is in the cache; if so, return it.
  2. If the result is not in the cache, calculate it.
  3. Add the newly calculated result to the cache to make it available for future calls to the operation.

The problem (or potential memory leak) with this algorithm is in the last step. If the operation is called with a very large number of different inputs, a very large number of results will be stored in the cache. Since nothing is ever removed, the cache can keep growing until it consumes all available memory.
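A sketch of this leaky pattern might look like the following; the names are hypothetical, and expensiveOperation stands in for whatever slow computation is being cached:

import java.util.HashMap;
import java.util.Map;

public class UnboundedCache {
    // Grows without bound: every distinct input adds an entry that is never removed.
    private final Map<String, String> cache = new HashMap<>();

    public String lookup(String input) {
        String cached = cache.get(input);           // 1. check the cache
        if (cached != null) {
            return cached;
        }
        String result = expensiveOperation(input);  // 2. calculate the result
        cache.put(input, result);                   // 3. cache it, and never evict it
        return result;
    }

    private String expensiveOperation(String input) {
        return input.toUpperCase();                 // stand-in for a slow computation
    }
}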

To prevent this potentially fatal design, the program has to make sure the cache has an upper bound on the amount of memory it will use. Therefore, a better algorithm is:

  1. Check whether the result is in the cache; if so, return it.
  2. If the result is not in the cache, calculate it.
  3. If the cache is too large, remove the oldest result from the cache.
  4. Add the newly calculated result to the cache to make it available for future calls to the operation.

By always removing the oldest result from the cache we are making the assumption that, in the future, the most recently used input data is more likely to recur than the oldest data. This is generally a good assumption.

This new algorithm will make sure the cache is held within predefined memory bounds. The exact bound can be difficult to compute, since the objects in the cache are kept alive along with everything they reference. Sizing the cache correctly is a complex task in which you need to balance the amount of memory used against the benefit of retrieving data quickly.
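One simple way to implement the bounded algorithm in Java, assuming a plain in-memory cache is acceptable, is to use LinkedHashMap with access ordering and an overridden removeEldestEntry method:

import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    private final int maxEntries;

    public BoundedCache(int maxEntries) {
        super(16, 0.75f, true);   // access order: recently used entries stay longest
        this.maxEntries = maxEntries;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // Step 3 of the algorithm above: evict the oldest entry once the bound is hit.
        return size() > maxEntries;
    }
}

A new BoundedCache<String, String>(1000), for example, keeps at most 1,000 results and evicts the least recently used entry whenever the bound is exceeded.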

Another approach to solving this problem is to use the java.lang.ref.SoftReference class to hold on to the objects in the cache. The JVM guarantees that softly referenced objects are cleared before it runs out of memory, so a SoftReference-based cache shrinks automatically when the heap gets tight.
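A minimal sketch of such a cache follows. Note that the GC clears only the referents; a real implementation would also purge the stale map entries themselves, for example with a ReferenceQueue:

import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

public class SoftCache<K, V> {
    private final Map<K, SoftReference<V>> cache = new HashMap<>();

    public void put(K key, V value) {
        cache.put(key, new SoftReference<>(value));
    }

    public V get(K key) {
        SoftReference<V> ref = cache.get(key);
        // The referent may have been cleared by the GC under memory pressure.
        return (ref == null) ? null : ref.get();
    }
}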

ClassLoaders

The use of the Java ClassLoader construct is riddled with chances for memory leaks. What makes ClassLoaders so difficult from a memory-leak perspective is the complicated nature of the construct. ClassLoaders are special in that they are involved not only with “normal” object references but also with meta-level references to fields, methods, and classes. This means that as long as there is a reference to any field, method, class, or object created through a ClassLoader, the ClassLoader stays in the JVM. Since the ClassLoader itself holds on to all the classes it has loaded, as well as all of their static fields, quite a bit of memory can be leaked.
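The sketch below shows one common way this happens; the plugin jar and class name are hypothetical. A single object held in a long-lived collection is enough to keep its defining ClassLoader, and everything that loader has loaded, alive:

import java.net.URL;
import java.net.URLClassLoader;
import java.util.ArrayList;
import java.util.List;

public class ClassLoaderLeakDemo {
    // A long-lived, global collection somewhere in the application.
    static final List<Object> GLOBAL_CACHE = new ArrayList<>();

    public static void main(String[] args) throws Exception {
        URLClassLoader pluginLoader =
                new URLClassLoader(new URL[] { new URL("file:plugins/example.jar") });
        Class<?> pluginClass = pluginLoader.loadClass("com.example.Plugin");
        Object plugin = pluginClass.getDeclaredConstructor().newInstance();

        // Keeping this one object reachable keeps its Class reachable, which keeps
        // the URLClassLoader reachable, which in turn keeps every class that loader
        // loaded (and all of their static fields) from ever being collected.
        GLOBAL_CACHE.add(plugin);

        pluginLoader = null;  // not enough: the loader is still reachable via the object
    }
}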

Spotting the Leak

Often your first indication of a memory leak occurs after the fact; you get an OutOfMemoryError in your application. This typically happens in the production environment where you least want it to happen, with the possibilities for debugging being minimal. It may be that your test environment does not exercise the application in quite the same way the production system does, resulting in the leak only showing up in production. In this case you need some very low-overhead tools to monitor and search for the memory leak. You also need to be able to attach the tools to the running system without having to restart it or instrument your code. Perhaps most importantly, when you are done analyzing, you need to be able to disconnect the tools and leave the system undisturbed.

While an OutOfMemoryError is often a sign of a memory leak, it is also possible that the application really does use that much memory; in that case you either have to increase the amount of heap available to the JVM or change your application to use less memory. In many cases, however, an OutOfMemoryError does indicate a leak. One way to find out is to continuously monitor GC activity and see whether memory usage increases over time. If it does, you probably have a memory leak.

Verbose output

There are many ways to monitor the activity of the garbage collector. Probably the most widely used is to start the JVM with the -Xverbose:gc option and watch the output for a while.

[memory ] 10.109-10.235: GC 65536K->16788K (65536K), 126.000 ms

The value after the arrow (in this case 16788K) is the heap usage after the garbage collection.

Console

Looking at endless printouts of verbose GC statistics is tedious at best. Fortunately there are better tools for this. The JRockit Management Console can display a graph of the heap usage. With this graph it is easy to see if the heap usage is growing over time.

Figure 1. The JRockit Management Console

The management console can even be configured to send you an email if the heap usage is looking bad (or on a number of other events). This obviously makes watching for memory leaks much easier.


Memory Leak Detector Tool

There are also more specialized tools for doing memory leak detection. The JRockit Memory Leak Detector can be used to watch for memory leaks and can drill down to find the cause of the leak. This powerful tool is tightly integrated into the JRockit JVM to provide the lowest possible overhead as well as easy access to the virtual machine's heap.

The advantages of a specialized tool

Once you know that you really have a memory leak, you need a more specialized tool to find out why there is a leak; the JVM by itself cannot tell you that. A number of tools are available. There are essentially two ways these tools can get information about the memory system from the JVM: JVMTI and bytecode instrumentation. The JVM Tool Interface (JVMTI) and its predecessor, the JVM Profiler Interface (JVMPI), are standardized interfaces through which an external tool can communicate with the JVM and gather information from it. Bytecode instrumentation refers to the technique of preprocessing the bytecode with probes that collect the information the tool needs.

In the case of memory-leak detection these techniques have two drawbacks that make them less than ideal for use in production-type environments. First, in terms of both memory usage and performance degradation the overhead is not negligible. Information about the heap usage has to be exported somehow from the JVM and gathered and processed in the tool. This means allocating memory. Exporting the information also has a cost in terms of performance of the JVM. For example, the garbage collector will run more slowly when it is collecting the information. The other drawback is the need to always run with the tool attached to the JVM. It's not possible to attach the tool to a previously started JVM, do the analysis, detach the tool, and keep the JVM running.

Since the JRockit Memory Leak Detector is integrated into the JVM, neither of these drawbacks applies. First, most of the processing and analysis is done inside the JVM, so there is no need to transfer or recreate any data. The processing can also piggyback on the garbage collector itself, which makes it faster. Second, the Memory Leak Detector can be attached to and detached from a running JVM, as long as the JVM was started with the -Xmanagement option (which allows monitoring and management of the JVM over the remote JMX interface). When the tool is detached, nothing of it is left behind in the JVM; the JVM goes back to running code at full speed, just as before the tool was attached.

Trend analysis

Let's take a deeper look at the tool and how it can be used for tracking down memory leaks. After you know you have a memory leak, the first step is to try to figure out what data you are leaking—what class of objects are causing the leak. The JRockit Memory Leak Detector does this by computing the number of existing objects for each class at every garbage collection. If the number of objects of a certain class increases over time (the "growth rate") you've probably got a leak.

Figure 2. The trend analysis view of the Memory Leak Detector

Because a leak may be just a trickle, the trend analysis must run over a longer period of time. During short periods of time, local increases of some classes may occur that later recede. However, the overhead of this is very small (the largest overhead consists of sending a packet of data from JRockit to the Memory Leak Detector for each garbage collection). The overhead should not be a problem for any system—even one running at full speed in production.

At first the numbers will jump around a lot, but over time they will stabilize and show which classes are steadily growing in number of instances.

Finding the root cause

Knowing which classes of objects are leaking can sometimes be enough to pin down the problem. The class is perhaps only used in a very limited part of the code, and a quick inspection of the code shows where the problem is. Unfortunately, it is likely that this information is not enough. For example, it is very common that the leak is objects of class java.lang.String, but since strings are used all over the program, this is not very helpful.

What we want to know is which other objects are holding on to the leaking objects, Strings in this case. Why are the leaking objects still around? Who has references to these objects? But a list of every object that has a reference to a String would be way too large to be of any practical use. To limit the amount of data we can group it by class, so that we see which other classes of objects are holding on to the leaking objects (Strings). For example, it is very common that the Strings are in a Hashtable, in which case we would see Hashtable entry objects holding on to Strings. Working backward from the Hashtable entries we would eventually find Hashtable objects holding on to the entries as well as the Strings (see Figure 3 below).

Figure 3. Sample view of the type graph as seen in the tool

Working backward

Since we are still looking at classes of objects here, not individual objects, we don't know which Hashtable is leaking. If we could find out how large all the Hashtables in the system are, we could assume that the largest Hashtable is the one that is leaking (since over time it will accumulate enough of a leak to have grown to a fair size). Therefore, a list of all the Hashtable objects, together with how much data they are referencing, will help us pinpoint the exact Hashtable that is responsible for the leak.

Figure 4. Screenshot of the list of Hashtable objects and the size of the data they are holding live

Calculating how much data an object is referencing is quite expensive (it requires walking the reference graph with that object as a root) and if we have to do this for many objects it will take some time. Knowing a bit about the internals of how a Hashtable is implemented allows for a shortcut. Internally, a Hashtable has an array of Hashtable entries. This array grows as the number of objects in the Hashtable grows. Therefore, to find the largest Hashtable, we can limit our search to finding the largest arrays that reference Hashtable entries. This is much faster.

Figure 5. Screenshot of the listing of the largest Hashtable entry arrays, as well as their sizes.

Digging in

When we have found the instance of the Hashtable that is leaking, we can see which other instances are referencing this Hashtable and work our way backward up the reference chain to understand where in the application this Hashtable is used.

Figure 6. This is what an instance graph can look like in the tool.

For example, the Hashtable may be referenced from an object of type MyServer in a field called activeSessions. This will often be enough information to dig into the source code to locate the problem.

Figure 7. Inspecting an object and its references to other objects
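The leaking code behind such a finding might look something like the sketch below. The MyServer and activeSessions names come from the example above; the rest is an assumption. Entries are added to the session table, but some code path fails to remove them:

import java.util.Hashtable;

public class MyServer {
    // The Hashtable the tool traced the leak back to.
    private final Hashtable<String, Object> activeSessions = new Hashtable<>();

    public void openSession(String sessionId, Object sessionData) {
        activeSessions.put(sessionId, sessionData);
    }

    public void closeSession(String sessionId) {
        // If this call is skipped for some sessions (or never made at all),
        // closed sessions accumulate in activeSessions and the table grows forever.
        activeSessions.remove(sessionId);
    }
}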

Finding allocation sites

When tracking down memory-leak problems it can be very useful to see where objects are allocated. Knowing how they relate to other objects (that is, which objects refer to them) may not be enough; information about where they are created can also help. Of course, you don't want to create an instrumented build of the application that prints out a stack trace for each allocation. Neither do you want to run your application with a profiler attached in your production environment just in case you want to track down a memory leak.

With the JRockit Memory Leak Detector, the code in the application can be dynamically instrumented at the point of allocation to create stack traces. These stack traces can be accumulated and analyzed in the tool. There is zero cost for this feature as long as you do not enable it, which means you can always be ready to go. When allocation traces are requested, code is inserted on the fly by the JRockit compiler to monitor allocations but only for the specific classes requested. Even better, when you are done analyzing the data, the instrumented code is completely removed and there is nothing left in the code that can cause any performance degradation of the application.

Figure 8. The allocation stack traces for String during execution of a sample program

Conclusion

Memory leaks are difficult to find. The best practices highlighted in this article for avoiding them include keeping track of what you put into global data structures and caches (and removing it when it is no longer needed), and closely monitoring memory usage for unexpected growth.

We have also seen how the JRockit Memory Leak Detector can be used in a production type system to track down a memory leak. The tool has a three-step approach to finding leaks. First, do a trend analysis to find out which class of objects is leaking. Second, see which other classes are holding on to objects of the leaking class. Third, drill down to the individual objects and see how they are interconnected. It's also possible to get dynamic, on-the-fly stack traces for all object allocations in the system. These features and the fact that the tool is tightly integrated into the JVM allow you to track down and fix memory leaks in a safe, yet powerful way.


Staffan Larsen is a staff engineer working on the JRockit product, which he co-founded back in 1998.

