Introduction
The scalability of ETS tables of type ordered_set with the write_concurrency option is substantially better in Erlang/OTP 22 than in earlier releases. In some extreme cases, you can expect more than 100 times better throughput in Erlang/OTP 22 compared to Erlang/OTP 21. The cause of this improvement is a new data structure called the contention adapting search tree (CA tree for short). This blog post will give you insights into how the CA tree works and show you benchmark results comparing the performance of ETS ordered_set tables in OTP 21 and OTP 22.
Try it Out!
This escript makes it convenient for you to try the new ordered_set implementation on your own machine with Erlang/OTP 22+ installed. The escript measures the time it takes for P Erlang processes to insert N integers into an ordered_set ETS table, where P and N are parameters to the escript. The CA tree is only utilized when the ETS table options ordered_set and {write_concurrency, true} are both active. One can therefore easily compare the new data structure's performance with the old one (an AVL tree protected by a single readers-writer lock). The write_concurrency option had no effect on ordered_set tables before Erlang/OTP 22.
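The following is a minimal sketch in the same spirit as the linked escript (the actual script differs; the important detail is the combination of ordered_set and {write_concurrency, true} in the call to ets:new/2):

```erlang
#!/usr/bin/env escript
%% A minimal sketch, not the escript linked above.
%% Usage: escript bench.escript P N
main([PStr, NStr]) ->
    P = list_to_integer(PStr),
    N = list_to_integer(NStr),
    %% The CA tree is used only with ordered_set + {write_concurrency, true}.
    T = ets:new(t, [ordered_set, public, {write_concurrency, true}]),
    PerProc = N div P,
    Parent = self(),
    Start = erlang:monotonic_time(millisecond),
    Pids = [spawn_link(fun() ->
                %% Each process inserts its own slice of the key space.
                [ets:insert(T, {K}) ||
                    K <- lists:seq(I * PerProc + 1, (I + 1) * PerProc)],
                Parent ! {done, self()}
            end) || I <- lists:seq(0, P - 1)],
    [receive {done, Pid} -> ok end || Pid <- Pids],
    Stop = erlang:monotonic_time(millisecond),
    io:format("~p processes inserted ~p keys in ~p ms~n", [P, N, Stop - Start]).
```

Removing {write_concurrency, true} from the option list makes the table fall back to the old implementation, so you can time both variants with the same script.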
I got the following results when I ran the escript on my laptop with two cores (Intel(R) Core(TM) i7-7500U CPU @ 2.70GHz):
We see that in this particular benchmark, the CA tree has superior scalability to the old data structure. The benchmark ran about twice as fast with the new data structure and four processes as with the old data structure and one process (remember that the machine only has two cores). We will look at the performance and scalability of the new CA tree-based implementation in greater detail later after describing how the CA tree works.
The Contention Adapting Search Tree in a Nutshell
The key feature that distinguishes the CA tree from other concurrent data structures is that the CA tree dynamically changes its synchronization granularity based on how much contention is detected inside the data structure. This way, the CA tree can avoid the performance and memory overheads that come from using many unnecessary locks without sacrificing performance when many operations happen in parallel. For example, let us imagine a scenario where the CA tree is initially populated from many threads in parallel, and then it is only used from a single thread. In this scenario, the CA tree will adapt to use fine-grained synchronization in the population phase (when fine-grained synchronization reduces contention). The CA tree will then change to use coarse-grained synchronization in the single-threaded phase (when coarse-grained synchronization reduces the locking and memory overheads).
The structure of a CA tree is illustrated in the following picture:
The actual items stored in the CA tree are located in sequential data structures in the bottom layer. These sequential data structures are protected by the locks in the base nodes in the middle layer. The base node locks have counters associated with them. The counter of a base node lock is increased when contention is detected in the base node lock and decreased when no such contention is detected. The value of this base node lock counter decides if a split or a join should happen after an operation has been performed in a base node. The routing nodes at the top of the picture above form a binary search tree that directs the search for a particular item. A routing node also contains a lock and a flag. These are used when joining base nodes. The details of how splitting and joining work will not be described in this article, but the interested reader can find a detailed description in this CA tree paper (preprint PDF). We will now illustrate how the CA tree changes its synchronization granularity by going through an example:
- Initially, a CA tree only consists of a single base node with a sequential data structure, as depicted in the picture below:
- If parallel threads access the CA tree, the value of a base node’s counter may eventually reach the threshold that indicates that the base node should be split. A base node split divides the items in a base node between two new base nodes and replaces the original base node with a routing node where the two new base nodes are rooted. The following picture shows the CA tree after the base node pointed to by the tree’s root has been split:
- The process of base node splitting will continue as long as there is enough contention in base node locks or until the max depth of the routing layer is reached. The following picture shows what the CA tree looks like after another split:
- The synchronization granularity may differ in different parts of a CA tree if, for example, a particular part of a CA tree is accessed more frequently in parallel than the rest. The following picture shows the CA tree after yet another split:
- The following picture shows the CA tree after the fourth split:
- The following picture shows the CA tree after the fifth split:
- Two base nodes holding adjacent ranges of items can be joined. Such a join is triggered after an operation sees that a base node counter’s value is below a certain threshold. Remember that a base node’s counter is decreased if a thread does not experience contention when acquiring the base node’s lock.
- As you might have noticed from the illustrations above, splitting and joining result in old base nodes and routing nodes being spliced out from the tree. The memory that these nodes occupy needs to be reclaimed, but this cannot happen immediately after they have been spliced out, as some threads might still be reading them. The Erlang run-time system has a mechanism called delayed dealloc, which the ETS CA tree implementation uses to reclaim these nodes safely.
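To make the adaptation heuristic above more concrete, here is a small Erlang-flavored pseudocode sketch of the counter logic. It is purely illustrative: the real implementation is C code inside the Erlang run-time system, and the record fields, threshold values, and helper functions (try_lock/1, lock/1, split/1, join/1) are hypothetical names invented for this sketch.

```erlang
%% Illustrative pseudocode of the contention-adaptation heuristic.
%% Record fields, thresholds, and helpers are hypothetical.
-record(base, {lock,           %% the base node lock
               stat = 0,       %% contention statistic (the counter)
               seq_ds}).       %% the sequential data structure with the items

-define(SPLIT_LIMIT, 1000).
-define(JOIN_LIMIT, -1000).

%% The statistic is updated when the base node lock is taken:
%% a failed immediate acquisition indicates contention.
lock_base(#base{stat = S} = B) ->
    case try_lock(B#base.lock) of
        true  -> B#base{stat = S - 1};      %% uncontended: decrease slowly
        false -> lock(B#base.lock),         %% block until the lock is acquired
                 B#base{stat = S + 100}     %% contended: increase a lot
    end.

%% Called after an operation has been performed in a base node.
adapt(#base{stat = S} = B) when S > ?SPLIT_LIMIT ->
    split(B);   %% high contention: use finer-grained synchronization
adapt(#base{stat = S} = B) when S < ?JOIN_LIMIT ->
    join(B);    %% little contention: use coarser-grained synchronization
adapt(B) ->
    B.
```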
Benchmark
The performance of the new CA tree-based ETS ordered_set implementation has been evaluated in a benchmark that measures the throughput (operations per second) in many scenarios. The benchmark lets a configurable number of Erlang processes perform a configurable distribution of operations on a single ETS table. The curious reader can find the source code of the benchmark in the test suite for ETS.
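As a rough illustration of what such a benchmark does, here is a minimal sketch of a throughput measurement with a configurable operation mix. It is not the actual benchmark from the test suite; all function names and the key-space size are invented for illustration:

```erlang
%% A minimal throughput-benchmark sketch; the real benchmark in the
%% ETS test suite is more elaborate.
run(NumProcs, Millis, {Ins, Del, _Lookup} = Mix) ->
    T = ets:new(t, [ordered_set, public, {write_concurrency, true}]),
    Parent = self(),
    Pids = [spawn_link(fun() -> worker(T, Mix, 0, Parent) end)
            || _ <- lists:seq(1, NumProcs)],
    timer:sleep(Millis),
    [P ! stop || P <- Pids],
    Counts = [receive {ops, C} -> C end || _ <- Pids],
    lists:sum(Counts) / (Millis / 1000).   %% operations per second

worker(T, {Ins, Del, _Lookup} = Mix, Ops, Parent) ->
    receive
        stop -> Parent ! {ops, Ops}
    after 0 ->
        K = rand:uniform(1000000),          %% random key in a fixed key space
        R = rand:uniform(100),              %% pick an operation from the mix
        if R =< Ins       -> ets:insert(T, {K});
           R =< Ins + Del -> ets:delete(T, K);
           true           -> ets:lookup(T, K)
        end,
        worker(T, Mix, Ops + 1, Parent)
    end.
```

For example, run(64, 10000, {30, 30, 40}) would let 64 processes run for ten seconds with 30% inserts, 30% deletes, and 40% lookups.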
The following figures show results from this benchmark on a machine with two Intel(R) Xeon(R) CPU E5-2673 v4 @ 2.30GHz (32 cores in total with hyper-threading). The average set size in all scenarios was about 500K. More details about the benchmark machine and configuration can be found on this page.
We see that the throughput of the CA tree-based ordered_set improves as processes are added all the way up to 64 (the machine’s hardware thread count), while the old implementation’s throughput often gets worse when more processes are added. The old implementation’s write operations are serialized, as the data structure is protected by a single readers-writer lock. The slowdown of the old version when adding more cores is caused by increased communication overhead when more cores try to acquire the same lock and by the fact that the competing cores frequently invalidate each other’s cache lines.
Further Reading
The following paper describes the CA tree and some optimizations in much more detail than this blog post. The paper also includes an experimental comparison with related data structures.
- A Contention Adapting Approach to Concurrent Ordered Sets (preprint). Journal of Parallel and Distributed Computing, 2018. Konstantinos Sagonas and Kjell Winblad.
There is also a lock-free variant of the CA tree, which is described in the following paper. The lock-free CA tree uses immutable data structures in its base nodes to substantially reduce the amount of time during which range queries and similar operations can conflict with other operations.
- Lock-free Contention Adapting Search Trees (preprint). In the proceedings of the 30th Symposium on Parallelism in Algorithms and Architectures (SPAA 2018). Kjell Winblad, Konstantinos Sagonas, and Bengt Jonsson.
The following paper, which discusses and evaluates a prototypical CA tree implementation for ETS, was the first CA tree-related paper.
- More Scalable Ordered Set for ETS Using Adaptation (preprint). In Thirteenth ACM SIGPLAN workshop on Erlang (2014). Konstantinos Sagonas and Kjell Winblad.
It might also be interesting to look at the author’s Ph.D. thesis if you want to get more links to related work or want to know more about the motivation for concurrent data structures that adapt to contention.
Conclusion
The Erlang/OTP 22 release introduced a new ETS ordered_set implementation that is active when the write_concurrency option is turned on. This data structure (a contention adapting search tree) has superior scalability to the old data structure in many scenarios, and its design gives it excellent performance across workloads that benefit from different synchronization granularities.