Memory Pool
Introduction
A memory pool is a memory abstraction that sits on top of the operating system and below the application software. It is an optimization technique that reduces the memory overhead, i.e., the gap between the amount of memory actually consumed and the theoretical amount of memory required for executing the application, and potentially also improves the application runtime performance.
In this blog post, I would like to discuss the memory pool optimization at a high level.
Memory Allocation Problems
As mentioned above, a memory pool is an optimization, which means it is not required for the correctness of the application. An application that does not use a memory pool and instead uses the native memory API provided by the operating system might have at least one of the following problems.
- Many memory allocations cause memory fragmentation. The unused memory between two adjacent used memory blocks cannot serve a memory request that is larger than itself. If there are many small memory allocations and a few large memory allocations in the application, it is extremely likely that lots of memory is wasted because of memory fragmentation. This might not be a problem if the system has lots of memory. However, it is usually a problem for a system on an embedded device whose memory is very limited.
- Different operating systems might use different algorithms and data structures for memory management, dynamic allocation, and deallocation. Those algorithms can be complicated and their performance can vary across systems. In addition, because those algorithms are designed for very general-purpose memory management, dynamic allocation, and deallocation, their performance is usually a concern, especially for a latency-critical real-time system.
Depending on the use cases, one or more memory pools are implemented on top of the native memory API provided by the operating system, specifically to address the above problems in the application.
Memory Pool Optimization
A memory pool tries to solve the problems mentioned above by amortizing the allocation of small objects over a large chunk of memory, alleviating the memory fragmentation issue. In addition, because the memory allocation is amortized, when the user requests memory from the memory pool, it is usually an amortized $O(1)$ operation, which is much faster and more efficient.
A memory pool implementation has to consider the data structures and algorithms for storage, allocation, and deallocation. Storage is the memory that the memory pool manages and is usually invisible to the memory pool user. Based on the storage data structure, there are algorithms for memory allocation and deallocation. Of course, there can be many different ways to implement a storage data structure and its memory allocation and deallocation algorithms.
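To make the idea concrete, below is a minimal sketch of a fixed-size memory pool. The FixedSizePool class, its interface, and the chunk and block sizes are made up for illustration; real implementations such as Boost Pool are considerably more sophisticated. The storage is one large chunk of memory carved into equal-sized chunks, and a free list threaded through the free chunks gives $O(1)$ allocation and deallocation.

```cpp
#include <cstddef>
#include <cstdlib>
#include <iostream>

// A minimal fixed-size memory pool sketch (illustrative, not production code).
// One large chunk of memory is allocated up front and divided into
// equal-sized chunks that are threaded into an intrusive free list.
class FixedSizePool
{
public:
    FixedSizePool(std::size_t chunk_size, std::size_t num_chunks)
        : m_chunk_size{chunk_size < sizeof(void*) ? sizeof(void*) : chunk_size},
          m_storage{std::malloc(m_chunk_size * num_chunks)},
          m_free_list{nullptr}
    {
        // Thread every chunk into the free list.
        char* p = static_cast<char*>(m_storage);
        for (std::size_t i = 0; i < num_chunks; ++i)
        {
            deallocate(p + i * m_chunk_size);
        }
    }

    ~FixedSizePool() { std::free(m_storage); }

    // O(1): pop the head of the free list.
    void* allocate()
    {
        if (m_free_list == nullptr) return nullptr;
        void* chunk = m_free_list;
        m_free_list = *static_cast<void**>(m_free_list);
        return chunk;
    }

    // O(1): push the chunk back onto the free list.
    void deallocate(void* chunk)
    {
        *static_cast<void**>(chunk) = m_free_list;
        m_free_list = chunk;
    }

private:
    std::size_t m_chunk_size;
    void* m_storage;
    void* m_free_list;
};

int main()
{
    FixedSizePool pool{64, 1024};
    void* a = pool.allocate();
    void* b = pool.allocate();
    std::cout << a << " " << b << std::endl;
    pool.deallocate(a);
    pool.deallocate(b);
}
```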
For example, Simple Segregated Storage is the basic idea behind the Boost Pool library. boost::simple_segregated_storage is the simplest, and probably the fastest, memory allocation/deallocation algorithm. It consists of memory blocks, where each memory block is a contiguous piece of allocated memory that is divided into memory chunks. boost::simple_segregated_storage uses a free list, basically a linked list, to manage all the free chunks. It provides interfaces to malloc a memory chunk and free an allocated memory chunk in $O(1)$. It also provides interfaces to allocate and deallocate a contiguous sequence of memory chunks, as shown in the sketch after this list. There are two scenarios where such an allocation can fail.
- There is no contiguous sequence of memory chunks of the requested size in any memory block in (virtual) memory.
- There is such a contiguous sequence of memory chunks in (virtual) memory, but those memory chunks are not contiguous (ordered) in the free list.
The first scenario can happen when the largest memory block is smaller than the size of the memory requested. The second scenario can happen when the memory chunks have been allocated and deallocated a few times using the $O(1)$ algorithms and the available memory chunks are no longer order-preserving in the free list. A free list is ordered if repeated calls to malloc will result in a constantly-increasing sequence of values, as determined by std::less<void *>. For example, suppose there is a memory block consisting of 4 memory chunks of 256 bytes, but in the free list, the 1st memory chunk points to the 3rd memory chunk, the 3rd memory chunk points to the 2nd memory chunk, and the 2nd memory chunk points to the 4th memory chunk. The $O(N)$ allocation algorithm for a contiguous sequence of memory chunks, where $N$ is the size of the free list, will not be able to find a contiguous sequence of memory chunks if the memory chunks are not order-preserving. So in order to achieve efficient and effective allocation for a contiguous sequence of memory chunks, the free list has to be order-preserving. To preserve the order of the free list, ordered_free and ordered_free_n, which are not $O(1)$ operations, should be called for memory deallocation instead of the $O(1)$ operation free.
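A minimal usage sketch of boost::simple_segregated_storage is given below, assuming the Boost Pool library is available; the 1024-byte block and 256-byte chunk sizes are arbitrary choices for illustration. It shows both the $O(1)$ single-chunk malloc/free interfaces and the ordered interfaces needed for allocating a contiguous sequence of chunks.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

#include <boost/pool/simple_segregated_storage.hpp>

int main()
{
    // The storage does not own memory. The user provides a memory block and
    // the storage segregates it into fixed-size chunks on a free list.
    boost::simple_segregated_storage<std::size_t> storage;
    std::vector<char> block(1024);
    storage.add_block(&block.front(), block.size(), 256);

    // O(1) allocation of single 256-byte chunks.
    void* a = storage.malloc();
    void* b = storage.malloc();
    assert(a != nullptr && b != nullptr);

    // free() is O(1) but may leave the free list unordered. ordered_free()
    // is O(N) but keeps the free list order-preserving so that malloc_n()
    // can still find contiguous sequences of chunks.
    storage.ordered_free(b);
    storage.ordered_free(a);

    // Allocate and release two contiguous 256-byte chunks.
    void* run = storage.malloc_n(2, 256);
    assert(run != nullptr);
    storage.ordered_free_n(run, 2, 256);

    return 0;
}
```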
Simple Segregated Storage is a general-purpose memory pool implementation for any type, and it does not own the memory. To use a type-specific memory pool, the Boost Pool library provides boost::object_pool, which allows not only object allocation but also object construction. The memory that boost::object_pool owns can be fixed at construction or grow dynamically during runtime, with or without a capacity limit. Different from boost::simple_segregated_storage, because the memory management is mostly hidden, instead of allocating a memory chunk upon a malloc call, boost::object_pool allocates memory for one single object upon a malloc call.
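Below is a minimal sketch of boost::object_pool, assuming the Boost Pool library is available; the Point type and its constructor arguments are made up for illustration.

```cpp
#include <iostream>

#include <boost/pool/object_pool.hpp>

struct Point
{
    Point(double x, double y) : x{x}, y{y} {}
    double x;
    double y;
};

int main()
{
    // The pool owns its memory and grows on demand.
    boost::object_pool<Point> pool;

    // malloc() allocates uninitialized memory for exactly one Point.
    Point* raw = pool.malloc();
    // construct() allocates memory and runs the constructor.
    Point* p = pool.construct(1.0, 2.0);
    std::cout << p->x << ", " << p->y << std::endl;

    // destroy() runs the destructor and returns the chunk to the pool;
    // free() returns memory allocated with malloc() without destruction.
    pool.destroy(p);
    pool.free(raw);

    // Any objects still owned by the pool are destroyed automatically when
    // the pool itself goes out of scope.
    return 0;
}
```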
In addition to boost::object_pool, the Boost Pool library also provides boost::singleton_pool, which is something between boost::simple_segregated_storage and boost::object_pool. Similar to boost::simple_segregated_storage, it does not have to be specified with a type. However, it has to know the size of the type at compile time. It also allows allocating memory for multiple contiguous objects. Because it is a singleton class, it is often used by the memory pool allocators, such as boost::pool_allocator and boost::fast_pool_allocator.
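A minimal sketch of boost::singleton_pool and the pool allocators is shown below, assuming the Boost Pool library is available; the tag type and the element counts are arbitrary choices for illustration.

```cpp
#include <list>
#include <vector>

#include <boost/pool/pool_alloc.hpp>
#include <boost/pool/singleton_pool.hpp>

// A tag type distinguishes independent singleton pools that share a size.
struct IntPoolTag {};
using IntPool = boost::singleton_pool<IntPoolTag, sizeof(int)>;

int main()
{
    // Allocate a single sizeof(int)-byte chunk from the process-wide pool.
    void* p = IntPool::malloc();
    // Allocate 16 contiguous chunks, which requires the ordered interfaces.
    void* q = IntPool::ordered_malloc(16);
    IntPool::free(p);
    IntPool::ordered_free(q, 16);
    // Return memory blocks with no allocated chunks back to the system.
    IntPool::release_memory();

    // pool_allocator and fast_pool_allocator are built on singleton pools
    // and can be dropped into standard containers.
    std::vector<int, boost::pool_allocator<int>> v(1000, 0);
    std::list<int, boost::fast_pool_allocator<int>> l(1000, 0);

    return 0;
}
```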
Conclusions
A memory pool optimizes memory utilization by reducing the memory fragmentation caused by small object allocations, and accelerates memory allocation and deallocation through amortization.