In the ST example we never used the thread_id variable present in each block. Let's start by explaining its purpose in an MT application.
The concept of "ownership" was introduced since many MT applications allocate and deallocate memory to shared containers from different threads (such as a cache shared amongst all threads). This introduces a problem if the allocator only returns memory to the current thread's freelist (i.e., there might be one thread doing all the allocation, thus obtaining ever more memory from the system, and another thread whose freelist grows longer and longer - this would eventually consume all available memory).
Each time a block is moved from the global list (where ownership is irrelevant) to a thread's freelist, the thread id is set to the current one. The same happens when a new freelist is built from a chunk directly onto a thread's freelist, and when a deallocation occurs on a block that was not allocated by the same thread id as the one doing the deallocation.
What's the use? Well, when a deallocation occurs we can now look at the thread id and find out whether the block was allocated by another thread; if so, we decrease the used counter of that thread instead, thus keeping the free and used counters correct. And keeping the free and used counters correct is very important, since the relationship between these two variables decides whether memory should be returned to the global pool or not when a deallocation occurs.
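The ownership check can be sketched as follows. This is a minimal, illustrative model - the names block_record, bin_record, and deallocate_sketch are assumptions for the sketch, not the allocator's actual data layout:

```cpp
#include <cassert>
#include <cstddef>

// Hypothetical simplified structures. Index 0 of each per-thread array
// is reserved for the global list, as in the allocator described above.
struct block_record {
    block_record* next;
    std::size_t   thread_id;  // owner: the thread whose used counter was bumped
};

struct bin_record {
    block_record** first;       // per-thread freelist heads
    std::size_t*   free_count;  // per-thread freelist lengths
    std::size_t*   used_count;  // per-thread in-use counts
};

// Sketch of the ownership check on deallocation: credit the used counter
// of the thread the block currently belongs to, which is not necessarily
// the thread calling deallocate().
void deallocate_sketch(bin_record& bin, block_record* block,
                       std::size_t current_thread)
{
    if (block->thread_id == current_thread) {
        --bin.used_count[current_thread];
    } else {
        --bin.used_count[block->thread_id];  // keep the owner's count correct
        block->thread_id = current_thread;   // block changes ownership
    }
    // Push the block onto the current thread's freelist.
    block->next = bin.first[current_thread];
    bin.first[current_thread] = block;
    ++bin.free_count[current_thread];
}
```

Note how the block is always pushed onto the *deallocating* thread's freelist; only the used counter bookkeeping follows the original owner.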
When the application requests memory (calling allocate()) we first look at the requested size; if this is greater than _S_max_bytes we call new() directly and return.
If the requested size is within limits we start by finding out from which bin we should serve this request by looking in _S_binmap.
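The idea behind such a binmap can be shown with a small sketch. The build_binmap function below is illustrative, and the power-of-two bin sizes (8, 16, 32, ...) are an assumption for the example - the point is that a precomputed table turns size-to-bin mapping into a single array lookup:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative reconstruction of the binmap idea: a table mapping every
// request size from 1 to max_bytes directly to its bin index, so that
// allocate() avoids any loop or bit-twiddling on the hot path.
std::vector<unsigned char> build_binmap(std::size_t max_bytes,
                                        std::size_t min_bin = 8)
{
    std::vector<unsigned char> binmap(max_bytes + 1);
    unsigned char bin = 0;
    std::size_t bin_size = min_bin;  // assumed power-of-two progression
    for (std::size_t sz = 1; sz <= max_bytes; ++sz) {
        if (sz > bin_size) {         // outgrew this bin: move to the next
            ++bin;
            bin_size <<= 1;
        }
        binmap[sz] = bin;
    }
    return binmap;
}
```

With min_bin = 8, requests of 1-8 bytes map to bin 0, 9-16 to bin 1, 17-32 to bin 2, and so on.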
A call to _S_get_thread_id() returns the thread id for the calling thread (and if no value has been set in _S_thread_key, a new id is assigned and returned).
A quick look at _S_bin[ bin ].first[ thread_id ] tells us if there are any blocks of this size on the current thread's freelist. If this is not NULL - fine, just remove the block that _S_bin[ bin ].first[ thread_id ] points to from the list, update _S_bin[ bin ].first[ thread_id ], update the free and used counters, and return a pointer to that block's data.
If the freelist is empty (the pointer is NULL) we start by looking at the global freelist (0). If there are blocks available on the global freelist we lock this bin's mutex and move up to block_count blocks (the number of blocks of this bin's size that will fit into a _S_chunk_size) - or until the end of the list, whichever comes first - to the current thread's freelist, at the same time changing the thread_id ownership and updating the counters and pointers. Once the bin's mutex has been unlocked, we remove the block that _S_bin[ bin ].first[ thread_id ] points to from the list, update _S_bin[ bin ].first[ thread_id ], update the free and used counters, and return a pointer to that block's data.
The reason that the number of blocks moved to the current threads freelist is limited to block_count is to minimize the chance that a subsequent deallocate() call will return the excess blocks to the global freelist (based on the _S_freelist_headroom calculation, see below).
However, if there isn't any memory in the global pool we need to get memory from the system - this is done in exactly the same way as in a single-threaded application, with one major difference: the list built in the newly allocated memory (of _S_chunk_size size) is added to the current thread's freelist instead of to the global one.
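The whole allocate() decision chain above - own freelist, then global freelist, then the system - can be modelled with a runnable sketch. Everything here is heavily simplified and the names are assumptions: a single bin, per-slot arrays with slot 0 as the global list, and individually heap-allocated blocks standing in for blocks carved out of a raw chunk:

```cpp
#include <cassert>
#include <cstddef>
#include <mutex>

namespace sketch {

constexpr std::size_t num_slots = 4;  // slot 0 = global list, 1..3 = threads

struct Block { Block* next = nullptr; std::size_t thread_id = 0; };

struct Bin {
    Block*      first[num_slots] = {};
    std::size_t free_cnt[num_slots] = {};
    std::size_t used_cnt[num_slots] = {};
    std::mutex  mtx;  // guards the global list (slot 0)
};

Bin bin;  // one bin is enough to demonstrate the flow

Block* pop(std::size_t tid) {
    Block* b = bin.first[tid];
    bin.first[tid] = b->next;
    --bin.free_cnt[tid];
    ++bin.used_cnt[tid];
    return b;
}

// Stand-in for carving a fresh chunk into blocks directly onto a list.
// (Blocks are leaked at exit; a real pool would keep the chunk pointers.)
void refill_from_system(std::size_t tid, std::size_t block_count) {
    for (std::size_t i = 0; i < block_count; ++i) {
        Block* b = new Block;
        b->thread_id = tid;
        b->next = bin.first[tid];
        bin.first[tid] = b;
        ++bin.free_cnt[tid];
    }
}

Block* allocate(std::size_t tid, std::size_t block_count) {
    if (bin.first[tid])                      // fast path: own freelist
        return pop(tid);
    {
        std::lock_guard<std::mutex> lock(bin.mtx);
        // Move up to block_count blocks from the global list (slot 0),
        // re-stamping thread_id ownership as we go.
        std::size_t moved = 0;
        while (bin.first[0] && moved < block_count) {
            Block* b = bin.first[0];
            bin.first[0] = b->next;
            --bin.free_cnt[0];
            b->thread_id = tid;
            b->next = bin.first[tid];
            bin.first[tid] = b;
            ++bin.free_cnt[tid];
            ++moved;
        }
    }
    if (!bin.first[tid])                     // global list was empty too
        refill_from_system(tid, block_count);
    return pop(tid);
}

} // namespace sketch
```

The fast path never touches the mutex; only refills from the global list take the lock, which is the contention-reducing point of the design.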
The basic process of a deallocation call is simple: always add the block to the front of the current thread's freelist and update the counters and pointers (as described earlier, with the specific ownership check that causes the used counter of the thread that originally allocated the block to be decreased instead of the current thread's counter).
And here the free and used counters come into play. Each time a deallocate() call is made, the length of the current thread's freelist is compared to the amount of memory in use by this thread.
Let's go back to the example of an application that has one thread that does all the allocations and one that deallocates. Both these threads use, say, 516 32-byte blocks that were allocated during thread creation, for example. Their used counters will both say 516 at this point.
The deallocation thread now deallocates 500 of these blocks. For each deallocation made, the used counter of the allocating thread is decreased and the freelist of the deallocating thread gets longer and longer. But the calculation made in deallocate() will limit the length of the freelist in the deallocating thread to _S_freelist_headroom % of its used counter. In this case, when the freelist (given that _S_freelist_headroom is at its default value of 10%) exceeds 52 (516/10) blocks, blocks will be returned to the global pool where the allocating thread may pick them up and reuse them.
In order to reduce lock contention (since this requires the bin's mutex to be locked), this operation is also done in chunks of blocks (just like when chunks of blocks are moved from the global freelist to a thread's freelist, as mentioned above). The "formula" used can probably be improved to further reduce the risk of blocks being "bounced back and forth" between freelists.
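The headroom arithmetic in the example can be sketched in a few lines. The function name excess_blocks and the integer-division rounding are assumptions for the sketch; only the 10% default and the counter relationship come from the text:

```cpp
#include <cassert>
#include <cstddef>

// Sketch of the headroom check made on each deallocation: if a thread's
// freelist has grown past freelist_headroom percent of its used counter,
// the excess should be returned to the global pool. The list splicing
// itself is elided; only the counter arithmetic is shown.
constexpr std::size_t freelist_headroom = 10;  // percent (the default)

// Number of blocks that exceed the allowed freelist length.
std::size_t excess_blocks(std::size_t free_count, std::size_t used_count)
{
    std::size_t limit = used_count / freelist_headroom;  // 10% of used
    return free_count > limit ? free_count - limit : 0;
}
```

With a used counter of 516, the limit is about 51-52 blocks; once the deallocating thread's freelist grows past that, the surplus is a candidate for the global pool.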