What?
Yeah! As I said, the STL is fundamentally bad design. Many implementations have atrocious performance as a direct consequence of badly designed specs, and it is missing abstractions for key concepts which have been widespread in general computing for the last 25 years or so.
Welcome to my unrequested TED talk!
There is a lot to hate about the STL, but today I would like to focus on three offenders, underlying concepts which are, in my opinion, ill-conceived and undermine the usability and usefulness of the STL as a whole:
- Streams
- RAII-centric allocators
- Absolute memory references
Streams
Streams were a good abstraction in the '80s, and still are in some narrow computing scenarios.
The basic assumption is that external storage is much bigger and slower than physical memory, so we perform reads and writes through some kind of buffer, giving us a sliding window that moves over the underlying file.
We can also pipe these windows together and create complex workflows with minimal usage of physical memory.
On paper, it is a good and sensible design, right?
Yes, it was, but alternative approaches never made it into the STL, so most people end up abusing streams for tasks they are not really suited to.
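To make the contrast concrete, here is a minimal sketch of the pattern the rest of this section argues against: streaming a file block by block into a heap buffer, producing a full in-memory copy (the function name and block size are just illustrative):

```cpp
#include <fstream>
#include <string>
#include <vector>

// Classic stream approach: every byte is read from disk and then
// duplicated into `contents`, a second full copy in physical memory.
std::vector<char> read_whole_file(const std::string& path)
{
    std::ifstream in(path, std::ios::binary);
    std::vector<char> contents;
    char block[4096];
    while (in.read(block, sizeof(block)) || in.gcount() > 0)
        contents.insert(contents.end(), block, block + in.gcount());
    return contents;
}
```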
Most modern systems, certainly those used for general computing, are capable of handling virtual memory. Basically, the address space we are given to work with is much larger than any amount of physical memory installed on the system. This allows us to map devices and files into the same address space with no meaningful constraint on size.
So having a file in memory no longer requires streaming it block by block from disk into a memory buffer, effectively generating a full second copy.
We can simply leverage basic functionality any general purpose OS provides to map a file descriptor onto a span of virtual memory.
The crucial advantage is that no actual loading takes place until a page is actually accessed, making the process lazy and allowing us to load files of any size (at the cost of more disk reads if RAM gets depleted).
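Here is a minimal sketch of that approach, assuming a POSIX system; the `mapped_file` struct and `map_readonly` helper are names I made up for illustration, not anything from the standard library:

```cpp
#include <cstddef>
#include <fcntl.h>     // open
#include <sys/mman.h>  // mmap
#include <sys/stat.h>  // fstat
#include <unistd.h>    // close

// Map a file into the address space instead of streaming it: pages are
// only pulled from disk when they are actually touched.
struct mapped_file
{
    const char* data = nullptr;
    std::size_t size = 0;
};

mapped_file map_readonly(const char* path)
{
    mapped_file mf;
    int fd = ::open(path, O_RDONLY);
    if (fd < 0) return mf;

    struct stat st{};
    if (::fstat(fd, &st) == 0 && st.st_size > 0)
    {
        void* addr = ::mmap(nullptr, static_cast<std::size_t>(st.st_size),
                            PROT_READ, MAP_PRIVATE, fd, 0);
        if (addr != MAP_FAILED)
        {
            mf.data = static_cast<const char*>(addr);
            mf.size = static_cast<std::size_t>(st.st_size);
        }
    }
    ::close(fd);  // the mapping stays valid after the descriptor is closed
    return mf;
}
```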
Now, there are systems and platforms for which memory mapping is not an option, either because the hardware or the software doesn't support it. Those are mostly low-power embedded systems with an unconventional operating system or none at all. But guess what? Most of the streams functionality and file-based operations would not work there as they are anyway.
So yes, while streams are not a bad design pattern on their own, the lack of alternative mechanisms in the standard library leads to people abusing them.
Allocators and containers
One other mistake the STL designers made, in my opinion, is that they imposed the RAII model onto everything, selling it as a zero-cost abstraction.
Well, it is not. Have you ever wondered why C has malloc, free and realloc (so easily forgotten) while C++ has only new and delete? Because of RAII.
Trivially moving memory slices around can and probably will break data types that rely fully on C++'s own OOP features; so they did not even bother providing support for the types which don't share those restrictions.
Realistically, there is no way to tag which structures are safe under moving operations and which ones are not. It would have required some built-in traits, a feature which has historically not been present.
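To make the hazard concrete, here is a small made-up example: the `cursor` type below keeps a pointer into its own storage, so a plain bitwise copy to a new address silently leaves that pointer dangling, even though nothing in its declaration flags it as unsafe to relocate:

```cpp
#include <cstdio>
#include <cstring>

// Semantically safe to move byte by byte: plain data, no internal pointers.
struct point { double x, y; };

// Breaks under a byte-by-byte move: holds a pointer into its own storage.
struct cursor
{
    char  buffer[16] = "hello";
    char* current    = buffer;   // points inside the object itself
};

int main()
{
    cursor a;
    cursor b;
    std::memcpy(&b, &a, sizeof(cursor));   // bitwise "relocation" of a into b
    // b.current still points into a.buffer, not b.buffer:
    std::printf("%s\n", b.current == b.buffer ? "ok" : "dangling");
}
```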
Still, this OOP-at-all-costs mindset affected the very concept of an allocator at a core level, and as such it influenced all the STL containers built on top of allocators¹.
Hence, even for trivial data types, we are unable to implement an allocator which uses realloc or other copy-free memory movements like mremap; we just don't have the necessary interface as part of the standard.
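As a sketch of what is missing, here is a hypothetical allocator for trivially copyable types. The `reallocate` member is my own invention, not part of the standard Allocator requirements, and that is exactly the problem: even if we write it, `std::vector` and friends have no hook through which they could ever call it, so growth always goes through allocate, element-wise copy, deallocate:

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>

// Hypothetical realloc-aware allocator for trivially copyable T.
template <class T>
struct realloc_allocator
{
    using value_type = T;

    T* allocate(std::size_t n)
    {
        void* p = std::malloc(n * sizeof(T));
        if (!p) throw std::bad_alloc{};
        return static_cast<T*>(p);
    }

    void deallocate(T* p, std::size_t) noexcept { std::free(p); }

    // Invented extension point: grow (possibly in place) via the C runtime.
    // No standard container will ever call this.
    T* reallocate(T* p, std::size_t /*old_n*/, std::size_t new_n)
    {
        void* q = std::realloc(p, new_n * sizeof(T));
        if (!q) throw std::bad_alloc{};
        return static_cast<T*>(q);
    }

    bool operator==(const realloc_allocator&) const noexcept { return true; }
    bool operator!=(const realloc_allocator&) const noexcept { return false; }
};
```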
The result is C++ making us all pay a heavy bill for an abstraction which is only superficially zero-cost.
Lack of standalone containers
And finally the last ingredients: absolute pointers and split states.
Basically, these two alone prevent any form of trivial relocation for STL containers.
Since relocation is intrinsically incompatible with arbitrary types in a RAII design, they decided no one could enjoy it while working with the standard library.
Originally it might have been a sound decision and one which removed a lot of unsafe behaviour, but in 2025 it has very bad consequences and is hurting high performance computing.
It means we are given no good way to memory map data structures onto files, relocate them in memory or make them available on offloaded devices (unless using unified memory access, which is an even more recent development).
To address this issue one would have to:
- Avoid absolute pointers in the data portion of a container, using only relative pointers based on `this` or some other base address which floats with the container (see the sketch after this list).
- Store the state of the container (cells allocated etc.) as part of the allocated data itself, making sure that if mapped onto a file this metadata is also preserved.
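Here is a minimal sketch of the first point, in the spirit of Boost.Interprocess's offset_ptr; the `rel_ptr` name and interface are made up for illustration:

```cpp
#include <cstddef>

// Self-relative pointer: stores the distance from its own address to the
// pointee instead of an absolute address. A bitwise copy of the whole
// block (to another mapping, another process, a file) stays valid as long
// as pointer and pointee move together.
template <class T>
struct rel_ptr
{
    std::ptrdiff_t offset = 0;   // 0 encodes null

    rel_ptr() = default;
    rel_ptr(T* p) { *this = p; }
    rel_ptr(const rel_ptr& other) { *this = other.get(); }
    rel_ptr& operator=(const rel_ptr& other) { return *this = other.get(); }

    rel_ptr& operator=(T* p)
    {
        offset = p ? reinterpret_cast<const char*>(p) -
                     reinterpret_cast<const char*>(this)
                   : 0;
        return *this;
    }

    T* get() const
    {
        if (offset == 0) return nullptr;
        const char* base = reinterpret_cast<const char*>(this);
        return reinterpret_cast<T*>(const_cast<char*>(base + offset));
    }

    T& operator*()  const { return *get(); }
    T* operator->() const { return get(); }
};
```

Combine this with the second point, keeping size, capacity and any other bookkeeping inside the same allocated block, and the whole structure can be copied, mapped or shipped to a device as one opaque blob.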
Solutions
Nothing widespread, sadly. I recently had a nice conversation on Reddit where several proposals were made.
We have Boost.Interprocess, which has historically provided support for memory mapping.
We also have several experimental packages like decodeless, libraries of data structures whose layout is compatible with relocation like my own vs.xml, and STL replacements like gtl.
And a super secret project I am currently working on which will be released at some point 😊.
¹ At the very least, we finally got polymorphic allocators in one of the more recent revisions.