Does writing programs with immutable objects cause performance problems? If a given object is immutable and we need to somehow change its state, we have to map it to a new object with a slightly changed state. Thus, we can find ourselves in a situation where we create a lot of objects which congest memory and, as I understand it, can be problematic for the garbage collector. Is what I have described taking place or is there some aspect that I am not aware of on this topic?
When you repeatedly modify a mutable object, it will likely produce less garbage than repeatedly constructing new immutable objects to represent the intermediate states. But there are several reasons why using immutable objects still does not necessarily impose a performance problem:
In typical applications, this scenario occurs only occasionally, compared to other uses where immutable objects do not suffer or even turn out to win. Most notably:
- getters do not need to create a defensive copy when returning an immutable object
- setters can store the incoming argument by reference if it is immutable
- verification methods can be implemented the simple (or naïve) way, without having to deal with the check-then-act problem, as immutable objects cannot change between the check and the subsequent use
- immutable objects can be safely shared, to actually reduce the amount of created objects or used memory
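The first two points can be illustrated with a small sketch. The `Booking` class and its fields are hypothetical examples, not from the answer: a mutable array forces defensive copies in both the constructor and the getter, while an immutable `LocalDate` can simply be stored and returned by reference.

```java
import java.time.LocalDate;
import java.util.Arrays;

// Hypothetical example class for illustration only.
final class Booking {
    private final int[] seats;    // mutable array: needs defensive copies
    private final LocalDate date; // immutable: safe to share by reference

    Booking(int[] seats, LocalDate date) {
        this.seats = seats.clone(); // copy on the way in
        this.date = date;           // no copy needed, LocalDate is immutable
    }

    int[] getSeats() {
        return seats.clone(); // copy on the way out, too
    }

    LocalDate getDate() {
        return date; // returning the reference is safe
    }
}

public class DefensiveCopyDemo {
    public static void main(String[] args) {
        int[] requested = {1, 2, 3};
        Booking b = new Booking(requested, LocalDate.of(2023, 1, 1));
        requested[0] = 99; // caller mutates its array afterwards
        // the booking is unaffected, because the constructor made a copy
        System.out.println(Arrays.toString(b.getSeats())); // [1, 2, 3]
    }
}
```

Note the asymmetry: the mutable field costs an allocation on every get and set, while the immutable one costs nothing beyond the reference.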
The impact of garbage collection on performance is often overestimated. We routinely use the immutable type `java.lang.String`, benefitting from the advantages mentioned above. Strings are also among the most frequently used hash map keys.
There’s the mutable companion class `StringBuilder` for the scenario of repeated string manipulation, but its main advantage is not about the number of allocated objects. The problem with string construction is that each new object has to create a copy of the contained characters, so operations like repeatedly concatenating characters lead to quadratic time complexity when a new string is constructed at each step. `StringBuilder` still reallocates its buffer under the hood when necessary, but with geometric growth, which yields amortized linear time complexity for repeated concatenation.
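A minimal sketch of the two complexity classes just described (method names are illustrative):

```java
public class ConcatDemo {
    // Each += creates a new String and copies all previously appended
    // characters: total work is 1 + 2 + ... + n, i.e. O(n^2).
    static String concatQuadratic(int n) {
        String s = "";
        for (int i = 0; i < n; i++) {
            s += 'x';
        }
        return s;
    }

    // StringBuilder grows its internal buffer geometrically, so the
    // amortized cost per appended character is constant: O(n) overall.
    static String concatLinear(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append('x');
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // both produce the same result; only the cost differs
        System.out.println(concatQuadratic(5).equals(concatLinear(5))); // true
    }
}
```

For small `n` the difference is invisible; it matters when building large strings in a loop.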
As explained in this answer, the costs of garbage collection mainly depend on the objects that are still alive, so temporary objects usually do not impact garbage collection much unless you create an excessive amount of them. But even the latter scenario should only be addressed if you have an actual performance problem and an impartial profiling tool proves that a particular allocation site truly is the culprit.
Sophisticated applications may have to deal with undo/redo or other versioning features, which require keeping copies of the alternative application state anyway. At this point, using immutable objects may actually become an advantage, as you do not need to copy objects which did not change between two versions of the application state. A changed application state may share 99% of its objects with the previous state.
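The sharing between versions can be sketched as follows, assuming records (Java 16+); the `AppState` and `Settings` types are hypothetical names for illustration. Each snapshot is a cheap shallow structure; only the changed part is newly allocated, and undo is just restoring an earlier reference.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical immutable application state, for illustration only.
record Settings(String theme, int fontSize) {}        // unchanged part, shared
record AppState(Settings settings, String document) { // one immutable snapshot
    AppState withDocument(String newText) {
        // only the changed part is new; the Settings object is reused as-is
        return new AppState(settings, newText);
    }
}

public class UndoDemo {
    public static void main(String[] args) {
        Deque<AppState> history = new ArrayDeque<>();
        AppState v1 = new AppState(new Settings("dark", 12), "hello");
        history.push(v1);
        AppState v2 = v1.withDocument("hello world");
        history.push(v2);

        // undo: drop the newest snapshot and go back to the previous reference,
        // no deep copy needed because snapshots are immutable
        history.pop();
        System.out.println(history.peek() == v1);           // true
        System.out.println(v2.settings() == v1.settings()); // true: shared
    }
}
```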
One of the big reasons why the topic has become popular (again) is that immutable objects provide the easiest way to implement efficient and correct parallel processing. There is no need to acquire a lock to prevent inconsistent modifications. As said above, there’s no need to worry about the check-then-act problem. Further, when the application state can be expressed as a single reference to a compound immutable object, the check for a concurrent update reduces to a simple reference comparison. Compare also with this answer.
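A minimal sketch of that reference comparison, using `java.util.concurrent.atomic.AtomicReference` (the `CasDemo` class and its state are illustrative assumptions): the whole state is one immutable list behind a single reference, and `compareAndSet` detects concurrent updates by comparing references, retrying with a fresh copy when it loses the race.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

public class CasDemo {
    // the whole (hypothetical) application state behind a single reference
    private static final AtomicReference<List<String>> STATE =
            new AtomicReference<>(List.of());

    static List<String> snapshot() {
        return STATE.get(); // safe to hand out: the list is immutable
    }

    // Publish a new immutable state; the check for a concurrent update is
    // just the reference comparison performed by compareAndSet.
    static void add(String item) {
        List<String> prev, next;
        do {
            prev = STATE.get();
            List<String> copy = new ArrayList<>(prev);
            copy.add(item);
            next = List.copyOf(copy); // freeze into an immutable list
        } while (!STATE.compareAndSet(prev, next)); // retry on lost race
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(() -> add("a"));
        Thread t2 = new Thread(() -> add("b"));
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(snapshot().size()); // 2, no update is ever lost
    }
}
```

No lock is ever held; a writer that loses the race simply rebuilds its candidate state from the winner's published snapshot.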
So there are a lot of performance advantages to immutable objects which may compensate for the disadvantages, if there are any. At the same time, potential bottlenecks can be fixed with the established solution of temporarily using mutable objects for an operation while keeping the overall behavior immutable.
Answered By – Holger