Capacity Members for vector – shrink_to_fit()

C++11 introduced a new member function called shrink_to_fit() for three sequence containers: vector, deque, and string. Before shrink_to_fit() there was no member that reduced capacity on these containers, not even erase(). In fact, the standard made it very unlikely that any implementation would reduce capacity with any call. The standard guarantees that if reserve() is called, then any subsequent addition to the container will not invalidate iterators (until an addition increases the size of the container above the reserve() point).

Since this guarantee is not relaxed even if the user calls erase(), an implementation that chose to reduce capacity would be required to track whether or not reserve() had previously been called. It would be theoretically possible for an implementation to add this extra state to the container, but in practice no implementation is going to do that.

Perhaps this lack of attention to minimizing capacity isn’t surprising. In most modern code, our concern is usually time performance rather than space requirements. But C++ is the language of choice for demanding situations and sometimes this means minimizing space requirements.

Scott Meyers

So what can users do if they want to reduce the capacity of a container? Is this not possible? It turns out that it is. In Item 17 of Scott Meyers’ Effective STL, Scott explains the “Swap Trick” that we can use to reduce capacity.

Here is a quick synopsis of the Swap Trick:

Container(begin(c), end(c)).swap(c);

where “c” is a container whose capacity we want to reduce and Container is its type.

What is happening is that we are creating a temporary container as a copy of our original, using the iterator range constructor, and then swapping the contents of our temporary container with our original container. This works because most implementations will implement the iterator range constructor to make the capacity no larger than necessary. The reserve() guarantee is swapped to the other container with the call to swap().

In practice we’d almost certainly want to write it something like this:

if (c.capacity() > c.size())
  {
  Container(begin(c), end(c)).swap(c);
  }

or:

if (c.capacity() > (c.size() + some_value))
  {
  Container(begin(c), end(c)).swap(c);
  }

Because this trick involves copying all the items in the container, we don’t want to do it unless it saves enough space to be worthwhile. (In C++11 we could use moves instead of copies. This would make the technique less expensive, but we really only want to do this if the moves are noexcept. This is beyond the scope of what I want to discuss in this post.)

Meyers’ Swap Trick was state of the art until C++11, where we can now do this:

if (c.capacity() > c.size())
  {
  c.shrink_to_fit();
  }

assuming the type of c is vector, deque, or string. For vector and string the standard says “… request to reduce capacity() to size().” (We’ll consider the ellipsis shortly.) For deque the standard says “… request to reduce memory use.”

Why is deque’s description different from that of vector and string? Because deque doesn’t have a capacity() member function. The point of the capacity() member function is to tell us how much we can grow the number of items in the container before an addition would invalidate iterators. This concept doesn’t make much sense for deque, so there is no such call as deque::capacity().

So let’s look at what I left out just now when I quoted the standard. What it really says for vector and string is “shrink_to_fit is a non-binding request to reduce capacity() to size().” The emphasis on “non-binding” is mine. The standard has made shrink_to_fit() non-binding. (This is also true for deque.) It can be implemented as a no-op!

Why would the committee do such a thing? The standard gives an answer: “[Note: The request is non-binding to allow latitude for implementation-specific optimizations. — end note ]” What this means is that the committee knows that there are implementations where doing the right thing might not result in size() == capacity().

Consider the “short string” optimization. A string is an allocated array, and a straightforward implementation would allocate an array even for a string that was only one character long. One-character strings are not particularly common, but short strings are. So a library might choose to implement the string class so that the class itself has (data member) space for “short” strings, so that no allocation is necessary until the size of the string is no longer “short,” for some value of “short.” Given this implementation, how would capacity() be implemented? In particular, would capacity() ever return a value less than the “short” string limit? No, because we can always expand the string to the “short” string limit without invalidating iterators (assuming the current size() is less than this limit).

This means that if we currently have a string whose size() is less than the “short” string limit, a “request to reduce capacity() to size()” is going to fail. So the standard doesn’t want to say that the implementation must reduce capacity() to size().

But in my opinion the committee has let us down. While I’m all for giving implementers latitude for their implementations, I think the committee has underspecified this call. It is acceptable to say that the implementation need not reduce capacity() all the way to size(), but it should say that it will reduce capacity at least as far as the Swap Trick would have.

Since the Swap Trick is potentially expensive in time, it is likely only used in situations where users are very serious about reducing excess capacity. Given this, and faced with the fact that shrink_to_fit() may be a no-op, I think conscientious coders will choose to skip the new shrink_to_fit() and continue to use the Swap Trick.

And that is a shame. With shrink_to_fit() the user is very explicitly stating the desired outcome, so this has the potential to allow implementers to exploit interesting optimizations. This should be better than the somewhat obscure looking Swap Trick. But clarity and interesting optimizations aren’t of much use if users don’t use the new call. And they may avoid it because the old way gives them guarantees that the committee didn’t see fit to apply to the new call.

Please comment on Google Plus or reddit.

Capacity Members for vector – reserve()

I want to talk about a couple of the lesser known member functions of the best known STL container: reserve() and shrink_to_fit(). These members manipulate the vector’s capacity. The reserve() member is original and I discuss it in this post. New with C++11 is shrink_to_fit(). I’ll talk about it in a future post.

Note: The use of kittens in explaining C++ is patented by STL. This image used without permission.
The vector class stores data in an allocated array whose allocation the class handles for us. Each call to push_back() checks whether the array has space for the new item. If not, the class must perform a reallocation. This involves allocating a larger array and copying all existing items into the new array before destroying the originals and releasing the old array. This is expensive, but the cost is amortized so that push_back() runs in amortized constant time.

How can it be constant time when copying all existing items takes time proportional to the number of existing items? The trick is that when push_back() triggers a reallocation, the new array is not a constant-size increase over the existing array, but is proportional to it (e.g. double the size). This approach means that although each reallocation is more expensive (due to the increased number of existing items), reallocations end up being required exponentially less often. The result works out to average (or amortized) constant time.

This is a great way to deal with allocations when we don’t know how many items we will need the vector to hold. But what if we do know how many items will be required? Suppose we know in advance that our vector will need to hold a thousand items. If vector initially allocates enough space for eight items and allocates an array twice as large as the existing one with each reallocation, we’ll need seven reallocations (and copies of all existing items) before we are finished adding our thousand items. This may be optimal if we have no idea how many items we’ll need to handle, but it is wasteful if we know in advance that we will need to hold a thousand items. We’d like a way to tell vector upfront that we need to hold a thousand items so it can create this space with one allocation and avoid needlessly copying items.

One way to do this is to pre-fill our vector. If we are working with a vector of ints we can do something like this:

#include <iostream>
#include <vector>

using Ints = std::vector<int>;
// somehow we know the upper bound of our storage requirement
const int vector_limit(1000);

int main()
{
    Ints v(vector_limit); // pre-fill a thousand ints

    std::cout << v.size() << std::endl;
    std::cout << v.capacity() << std::endl;
}
The output:

1000
some value >= 1000, likely 1000

The drawback to this approach is that we’ve not just changed the vector’s capacity, but its content as well. If we know we need to store a thousand ints, pre-filling the vector may not be a problem, but consider some other cases. Suppose we know only the upper bound of the number of items, but not the exact number and the item type has an expensive constructor. Or no default constructor. In these cases, pre-filling the vector is problematic.

What we need is a way to increase a vector’s capacity without changing its size.

Enter reserve(). reserve() takes a parameter that tells the vector class how many items we expect to hold. In our example we can call reserve() with the value one thousand before pushing back any items. This would trigger a single allocation and since we haven’t added any items to the vector, there are no existing items that need to be copied. Big win.

#include <iostream>
#include <vector>

using Ints = std::vector<int>;
// somehow we know the upper bound of our storage requirement
const int vector_limit(1000);

int main()
{
    Ints v; // creates an empty vector

    v.reserve(vector_limit); // increase capacity only

    std::cout << v.size() << std::endl;
    std::cout << v.capacity() << std::endl;
}
The output:

0
some value >= 1000, likely 1000

One of the reasons that I wanted to discuss reserve() is a discussion I had with some very experienced and knowledgeable C++ programmers. I was surprised to learn that they believed that reserve() is only an optimization-enabling hint to the library and that the library has the option of implementing it as a no-op. This is wrong on two counts. The standard does not allow reserve() to be a no-op (unless the passed value is already less than or equal to the vector’s current capacity).

Further, the standard guarantees that adding items to the vector up to the reserve() limit, will not only not cause a reallocation, but that iterators will not be invalidated. This is not just an optimization, but is an important guarantee because in general, we always assume that push_back() and insert() will invalidate iterators. (You can verify that these calls will not invalidate iterators for a specific call by checking the current capacity before the call.)

Because of this guarantee we can be confident that iterators (and references and pointers) are not invalidated as long as we don’t increase the size of our vector beyond our reserve.

#include <cassert>
#include <iostream>
#include <vector>

using Ints = std::vector<int>;
// somehow we know the upper bound of our storage requirement
const int vector_limit(1000);

int main()
{
    Ints v; // creates an empty vector

    v.reserve(vector_limit); // increase capacity only

    // add a single element and remember its address
    v.push_back(0);
    const int* start_of_allocated_array(&v[0]);

    // fill the vector to our limit
    for (int i(1); i < vector_limit; ++i) v.push_back(i);

    // verify no invalidation of iterators, pointers, or references
    assert(start_of_allocated_array == &v[0]);
}

What if we want to maintain stable access to items in the face of iterator invalidation (where we are adding items beyond the vector's capacity)? The trick is to use index values instead of iterators, pointers, or references. The Nth item in the vector will still be the Nth item after a reallocation.

The other reason I wanted to discuss reserve() is a nasty performance bug that some of my co-workers found within the last few days. Proper use of reserve() can be a nice performance win, but there is a gotcha scenario. In the bug, the code was calculating how many items needed to be added to the vector and then calling reserve() before adding them. This seems like a good use, until you realize that the code is being called in a loop. So what happens is that each time through the loop we find that we need to add a few more items, so we call reserve(), which has the result of copying all existing items. What leads to this result is one of the characteristics of reserve() that isn't mandated by the standard, but is usually the way it is implemented. Let me explain.

As I said before, when a reallocation happens, the new size of the array is some proportion, usually double, of the old size of the array. The standard doesn't mandate any particular ratio. It also doesn't provide an upper limit on the capacity after a call to reserve(). An implementation is free to allocate an array double the value of the reserve() parameter. But that isn't what users would usually want or what most implementations provide. Usually reserve() will create a capacity exactly equal to the value passed. The passed value is a very good hint about the upper bound of the vector's size, and implementations take advantage of this hint.

Consider the implication of this for our performance bug. Without the call to reserve(), the push_back() in the loop would have added items in amortized constant time. But by calling reserve() in the loop, each reserve() resulted in copying every item in the vector every time through the loop, and also pinned the capacity to exactly the reserve() value rather than a proportionally larger value, with disastrous performance consequences.

reserve() is a great tool for optimizing vector performance, but keep in mind the optimal use case for it. We want to call reserve() once, when we can determine the upper bound on the vector's size requirement, before we have added any values to the vector. There are many cases where we can't predict the size requirement's upper bound, so we can't implement this optimal strategy and may have to modify it. But calling reserve() several times with increasing estimates of the required capacity is not a strategy likely to yield success.

Please comment on Google Plus or reddit.

Tutorials for C++Now

The C++Now Call for Submissions for 2014 is out. You might think that after releasing it, we just sit back and wait for all the great submissions to come rolling in. I hope it looks that easy from the outside, but the truth is that encouraging potential speakers to submit sessions is a year-round job that just gets more intense after the Call for Submissions. We are constantly looking for people with interesting things to say about C++ and asking them if they’d like to share at C++Now.

C++Now

In 2012, when we changed the conference name from BoostCon to C++Now, we also added a third track of presentations. We wanted to create space for tutorials, specifically tutorials on the new language and library features. For this track, we are not just looking for good speakers, we also are looking for coverage of specific topics.

This year Scott Schurr is leading the effort to make certain that we present an excellent tutorial track. I have his permission to share some of the ideas that we are pursuing.

Scott Schurr

Although some of the talks have been proposed with specific presenters in mind, most of these talks are “unclaimed” by any particular presenter. Why am I sharing them? Boost is an open-source library and I thought we could experiment with adding some transparency to our conference planning.

I’m also hoping that this list might be inspirational. Perhaps it will inspire you to claim one of these topics for yourself and make a submission of it, or of a completely different topic (the deadline is December 8th). If you have any ideas or questions about making a submission, you can contact the program committee. If you’d just like to comment on the list, you can do that on Google Plus or reddit.

  • “Moving Well.”  Guidance for the best way(s) to design a movable class. This can be complicated by interactions between movability and perfect forwarding (See Scott Meyers’ reflections on this problem at http://scottmeyers.blogspot.com/2013/07/video-for-universal-referenceoverloadin.html)
  • “How and Why to Implement Value Semantics in C++.” This talk might start out with an introduction to the ideas behind value semantics.  A talk that discusses moving, deleting, defaulting, explicit conversion functions, and delegating construction would be very valuable.
  • “The Libraries That You Should Know About.”  Consider the “optional” library (no longer in C++14), “flat_set”, and reference wrappers.  Consider about 5 minutes each on twelve small libraries. This could include standardized but under-appreciated libraries as well as Boost and Adobe Source Libraries.
  • “Unicode Support in Standard C++.”  A lot of work has gone on here. What can you do with Unicode and just standard C++ in real-life situations?
  • “Intro to Functional Programming in C++.”  What’s the point of functional programming?  Don’t I have to do that in Haskell?  Is functional programming an all-or-nothing thing?  Can I benefit from making my C++ “more functional”?  If so, how would I do that?
  • “Intro to Concurrency.” Obviously this could span two or three sessions and would make a terrific workshop.
  • “A Survey of Algorithms.” If you haven’t seen Sean Parent’s talks at Going Native, stop reading this and go watch them. (http://channel9.msdn.com/Events/GoingNative/2013/Cpp-Seasoning and http://channel9.msdn.com/Events/GoingNative/2013/Inheritance-Is-The-Base-Class-of-Evil) One reaction to these talks is that “we” don’t really know what algorithms are available to us. Another point is that even if we know the available algorithms we don’t necessarily recognize them in the wild.  Sean recognized a particular operation as a kind of rotate. Insights into how to recognize algorithms would be invaluable.  If this topic were restricted to std, then it might be a short talk, but if we include Boost and the Adobe Source Library there could be real gems.
  • “How To Do Random Numbers Right (and How Not To Do Them).” We should ask STL to do his talk from Going Native as a tutorial.
  • “RegEx in C++.”  STL did a great tutorial on this at Going Native.  We could ask STL for a repeat, or we could ask someone else to give their take on it.
  • “Building Great Hashing Functions.” Perhaps this could be combined with bucket size/rehashing strategies for unordered containers.
  • “Fixed Size Arrays.”
  • “Alignment Control in C++11.”  Most programmers don’t think about alignment.  This talk could start with an introduction to alignment, including where the requirement comes from.  From there the talk could go into alignof, alignas, and aligned_storage.  It could finish up with a discussion of over-aligned data types (see N3396) and how to manage them.
  • “How To Build a Perfect Function Wrapper.”  Suppose I want to do “something.” (I always use the example of timing how long we spend inside an API. This is conceptually very simple. You just start timing when you are about to make an API call and stop when you finish.) But you want to be able to wrap arbitrary calls. So you need to use variadic template parameters and perfect forwarding. This may be too simple to justify a complete session. Maybe it could be combined with a tutorial on bind, function, and lambda. (Or a tutorial on the time library.)
  • “Managing Object Lifetimes.”  Start with RAII, move on to smart pointers, then show how the combination gives you 90% of the exception-safety that you want.  Finish up with a discussion on passing parameters – how to make your APIs be more explicit about ownership and lifetimes.
  • “Compilation Errors: The Best Kind of Errors.”  When I’ve tried talking to typical programmers about compile-time error checking, they lift their noses and suggest that it couldn’t be useful.  But with careful design many kinds of errors (admittedly not all) can be hoisted to compile- or link-time.  We need to promote finding programming errors early.
  • “SFINAE Boot Camp.”  This is a really important technique that is more of a black art than a science.  If someone can present rhyme and reason for how to use SFINAE in a variety of circumstances I think a lot of people would benefit from that.
  • “Understanding &&.”  You’ve no doubt noticed that && in a signature does not act like & or * in a signature.  A session that helps folks develop an intuition for && and its effects would be great.  All the experts already have this nailed, but this could be a valuable topic for mid-level programmers.  Perhaps this topic could incorporate the first topic, “Moving Well.”
  • “New Features in C++14.”  Surely we ought to cover this topic.
  • “C++11/14 Glossary.”  C++11 extends rvalues and lvalues to include a total of five different values categories.  PODs used to be important, but now aggregates, trivial classes, and standard-layout classes carry more of the semantic load.  The volatile keyword has a new meaning in C++11 that it didn’t have in C++98.  A deep dive into some of these vocabulary changes (and others) could be useful for many mid-level programmers.
  • “What’s My Value?”  To mix it up a bit, we could do C++ related questions in some sort of game show format.  Imagine 45 minutes of glossary followed by a 45 minute game show format presentation with questions about the preceding material.  This could also be applied to a talk about the memory model followed by game show style questions.
  • “Building a Boost Reference Application.”  Boost could have a reference application (or maybe several) which uses many Boost libraries to do something useful. The reference application would be an example of how to use Boost libraries in the real world.
  • Topics that don’t focus on a specific feature.  The Meeting C++ conference (http://meetingcpp.com/index.php/schedule13.html) has several such topics this year:
    • Simpler C++ code through C++11
    • Clean C++ – throw down the gauntlet against software entropy!
    • Generic Programming for the rest of us
    • Efficient Team Development for C++ Projects
  • “Presentations on tools C++ developers typically use or might want to use: Git, CMake, Quickbook (and how to use the tool chain to generate documentation), etc.”
  • Workshop to organize “review teams” (probably easier than finding individuals who have to do all the work 🙂) which start working their way through the libraries in the review queue (“Review manager in a week” instead of “Library in a week”? 😉)
  • “Google Summer of Code.”  This is a successful program.  Can we leverage it into a useful tutorial topic?
  • “C++11/14 Overview: Core Language Features”
    • C++ Timeline
    • Goals for C++11
    • The Simpler Core Language Features
      • auto, decltype, trailing return types
      • nullptr
      • Range for
      • >> in template specializations
      • static_assert
      • noexcept
      • extern template
      • constexpr
  • “C++11/14 Overview: Features Specific to Class Design”
    • Generated functions: default / delete
    • Override control: override / final
    • Delegating constructors
    • Inheriting constructors
    • Increased flexibility for in-class initializers
    • Explicit conversion operators
  • “C++11/14 Overview: Larger Language Features”
    • Initialization
      • Initializer lists
      • Uniform initialization
      • Prevention of narrowing
    • Lambdas
    • Rvalue references, move semantics, perfect forwarding:
      • lvalues and rvalues (the modern view)
      • “move” semantics
      • Universal references
      • Perfect forwarding
  • “C++11/14 Overview: Concurrency”
    • Threads
    • Passing arguments to threads
    • Data lifetime considerations
    • Synchronization with mutexes and locking
    • Returning values from threads using async() and futures
    • Condition variables
    • Thread-local storage
    • Atomics

Please comment on Google Plus or reddit.

Ode to a Flat Set

In my previous post I discussed the most important changes in hardware since the STL was created. These changes are the increasing cost of malloc and the increasing importance of caching considerations.

I’ll group the standard containers into contiguous containers (vector, string, and deque) and node-based containers (list, *set, *map, and the unordered* containers). The implication is that we want to favor contiguous containers over node-based containers, both because they have fewer allocations per item and because they are more cache friendly.

This isn’t earth shaking news. Alex Stepanov, the STL’s creator, has been quoted as saying, “Use vectors whenever you can. If you cannot use vectors, redesign your solution so that you can use vectors.”

Using vector, which has always been good advice, is now even better advice. It has always been true that for small values of N, vector beats containers with better big-O operations. Do we care about performance if N is small? If we don’t have much data, why worry about which container might be faster? It is true that a small data set may mean that the cost of any one operation is insignificant, but we may still be concerned about optimizing container performance. If our look-ups are in a small data set, but we are doing them in a loop, performance is still a consideration. With modern hardware, the values of N for which contiguous containers can beat node-based containers are larger than ever. How large? Follow Andrei Alexandrescu’s advice: measure.

If your entire vector fits in the cache, it will be hard for a node-based container such as list, set, or map to beat, and perhaps even for the unordered containers (with their constant-time accesses).

Scott Meyers

That is, if your vector is sorted.

In Item 23 of Effective STL, Scott Meyers discusses replacing associative containers with sorted vectors. There are trade-offs, of course. If you are doing inserts, erases, and lookups in an unpredictable pattern, then a sorted vector looks less attractive. It also looks less attractive if the container holds a very large number of items. Large numbers of items make cache misses likely even for vector, and also favor the constant-time access of hashes.

A sorted vector looks more attractive as the ratio of lookups to modifications gets larger, or if you can segregate the container-modification phase from the lookup phase. The ideal scenario for a sorted vector is that you populate the vector, sort it once, then read it many times.

Lookups on a sorted vector use lower_bound, upper_bound, or equal_range, which have the same order (logarithmic) as associative container lookups. Although the order is the same, sorted vector will outperform tree-based containers because tree-based containers are dereferencing pointers and often getting cache misses, while sorted vector is calculating offsets and getting cache hits.

The story for populating the container is more nuanced. An arbitrary insert into a sorted vector is order N because vector needs to move all the elements following the insertion point. The logarithmic order of associative containers is better. On the other hand, the associative container is going to allocate a node. Vector can move a lot of elements in the time it takes to allocate a node.

Of course the interface of vector isn’t the same as that of an associative container. For lookups to be logarithmic, the vector must be sorted, and it must stay sorted (or be re-sorted) in the face of modifications. Insertions need to be preceded by calls to lower_bound to find the insertion point. A single lapse may create a hard-to-find bug.


Enter flat_set.

The Boost Container library has a family of flat_* containers that have associative container interfaces and semantics, but are implemented as sorted vectors.

In addition to faster lookup, the flat_* containers have much faster iteration, less memory overhead, fewer allocations, and improved cache performance.

However, as with all things, there are trade-offs. Unlike items in an associative container, which are allocated into their own nodes and are never moved or copied (and thus never throw while being moved or copied), items in a flat_* container must be movable or copyable and may throw when being moved or copied during insertion or erasure. Further, insertion and erasure invalidate iterators and may be slower (particularly for non-movable types).

In short, the smaller your container and the closer your usage is to read-only, the better flat_* containers look as replacements for associative containers.

In a future post I’ll discuss an alternative to consider when the sorted vector’s limitations make it unattractive.

Please comment on Google Plus or reddit.

STL is Dated

Alex Stepanov

I’ve been using the STL since before it became part of the 1998 standard, so I’ve heard a number of opinions on it. Still it was a little surprising to hear Alex Stepanov, the library’s creator, say that it was out-of-date. We both work for A9.com, one of the perks of which is that I have occasional conversations with him. (Submit your resumes to jonkalb (at) a9.com.)

He knew he had my attention.

Alex has devoted his professional life to the intersection of mathematical purity and hardware-based practicality. The STL is the product of that work and is the best expression of applying mathematics to the hardware of its day.

But hardware has changed since the STL was designed, in at least two important ways. The first is that multi-core and even many-core machines are now the common target, so heap allocation, always expensive, now requires a lock. The second is that the assumption that all memory accesses are equally expensive is no longer valid. Caching has a huge impact on performance. Accessing memory in the near vicinity of a recent access (a cache hit) is dramatically less expensive than an arbitrary memory access.

So what are the implications for STL users?

One implication is that allocators are becoming more and more important. Allocators as originally designed were not created to deal with allocation performance, but to deal with near and far pointers (if you don’t know about them, count your 32-bit blessings). For this reason, the standard definition of allocators hasn’t been sufficient to deal with some of the allocation strategies users want to exploit.

Alisdair Meredith

This has changed in C++11.

Alisdair Meredith is the Chair of the Standard Library Subcommittee and an employee of Bloomberg, one of the companies that is really concerned about the impact of allocators on performance. Alisdair spoke on “Allocators in C++” at C++Now 2013. I’ll refer you to that talk because it brings you up to date on what is in C++11 and what is being proposed for future standards, and because referring you to it allows me to pretend that I actually understand it all.

In future posts, I’ll discuss the implications for containers, both contiguous and node-based.

Please comment on Google Plus or reddit.