This repository was archived on 15 September 2021.

A portable in-place bitwise binary Fredkin trie algorithm which allows for near constant time insertions, deletions, finds, closest fit finds and iteration. It is approximately 50-100% faster than red-black trees and up to 20% faster than O(1) hash tables.

This algorithm has been ported to modern C++ and can be found at https://github.com/ned14/quickcpplib/blob/master/include/quickcpplib/algorithm/bitwise_trie.hpp. This project has been ARCHIVED and will no longer be maintained. Thanks for all the user support throughout the years.


nedtries v1.03 trunk (?)

by Niall Douglas

Web site: http://www.nedprod.com/programs/portable/nedtries/

API Reference: https://ned14.github.io/nedtries/nedtrie_8h.html



Enclosed is nedtries, an in-place bitwise binary Fredkin trie algorithm which allows for near constant time insertions, deletions, finds, closest fit finds and iteration. On modern hardware it is approximately 50-100% faster than red-black binary trees, it handily beats even the venerable O(1) hash table for less than 3000 objects and it is barely slower than the hash table for 10000 objects. Past 10000 objects you probably ought to use a hash table though, and if you need nearest fit rather than close fit then red-black trees are still optimal.

It is licensed under the Boost Software License which basically means you can do anything you like with it. Commercial support is available from ned Productions Limited.

Its advantages over other algorithms are sizeable:

  1. It has all the advantages of red-black trees, such as close-fit finds (i.e. finding an item which is similar but not exactly equal to the search term), without anything like the impact on memory bandwidth of red-black trees because it doesn't have to rebalance itself when adding new items (i.e. it scales far better under memory pressure than red-black trees).
  2. It doesn't require dynamic memory like hash tables, so it can be used in a bounded environment such as a bootstrapper or a tiny embedded systems kernel. It is also a lot faster than hash tables for less than a few thousand items.
  3. Unlike either red-black trees or most hash tables, nedtries can store as many items with identical keys as you like.
  4. Its performance is nearly perfectly stable over time and over the number of items N, with a worst case complexity of O(M) and a mostly linear degradation as the average M increases, where 1 <= M <= 8*sizeof(void *). M is a measure of the entropy between differing keys, so where keys are very similar at the bit level M is higher than where keys are very dissimilar (see the sketch after this list). This leads to an unusual complexity profile: it can behave like O(log N) for some distributions of key, and like O(1) for others. The scaling graphs below are for completely random keys.
  5. Its complexities for find and insert are identical, whereas for deletion it is slightly more constant. Unlike almost any other algorithm, bitwise binary tries have nearly identical real world speeds for ALL its operations rather than being fast at one thing but slow at the others. In other words, if your code equally inserts, deletes and finds with no preference for which then this algorithm will typically beat all others in the general purpose situation.
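
To make the role of M concrete, here is a small illustrative sketch (this is not code from nedtries; diverge_depth() is a hypothetical helper): keys which are similar at the bit level force a much deeper descent below their shared most significant set bit than keys which are dissimilar.

#include <cstddef>
#include <cstdio>

/* Toy model only: counts how many non-differing bits lie below the shared most
   significant set bit of two keys, i.e. roughly how far a bitwise trie descent
   must go before it can tell the keys apart. */
static unsigned diverge_depth(std::size_t a, std::size_t b)
{
  std::size_t diff = a ^ b;
  std::size_t bit = (std::size_t) 1 << (8 * sizeof(std::size_t) - 1);
  unsigned depth = 0;
  while (bit && !(a & bit)) bit >>= 1;                  /* locate the MSB (assumed shared) */
  for (bit >>= 1; bit && !(diff & bit); bit >>= 1) ++depth;
  return depth;
}

int main(void)
{
  std::printf("%u\n", diverge_depth(0x1000, 0x1800));   /* dissimilar below the MSB: prints 0 */
  std::printf("%u\n", diverge_depth(0x1000, 0x1001));   /* very similar keys: prints 11 */
  return 0;
}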

Its two primary disadvantages are that (i) it can only key upon a size_t (i.e. the size of a void *), so it cannot make use of an arbitrarily large key like a hash table can (though of course one could hash the large key into a size_t sized key) and (ii) it is lousy at guaranteed nearest fit finds. It also runs fastest when the key is as unique from other keys as possible, so if you wish to replace a red-black tree which has a complex left-right comparison function which cannot be converted into a stable size_t value then you will need to stick with red-black trees. In other words, it is ideal when you are keying on pointer sized keys where each item has a definitive non-changing key.

Have a look at the scaling graphs below to decide if your software could benefit. Note for non-random key distributions you may get significantly better or worse performance than shown. If you're interested, read on for how to add them to your software.

[Scaling graphs: Bitwise Trees Scaling, Red Black Trees Scaling, Hash Table Scaling]

A. Breaking changes since previous versions:

In v1.02 a breaking change in what NEDTRIE_NFIND means and does was introduced, and the version was bumped to v1.02 to warn users of the change. The problem in v1.01 and earlier was that I had incorrectly documented what NEDTRIE_NFIND does: I said that it found the nearest item to the search term when this was patently untrue (thanks to smilingthax for reporting this). In fact, NEDTRIE_NFIND used to return a matching item, if there was one, but if there was no matching item, it returned any larger item rather than the next largest item. I do apologise to the users of nedtries for this documentation error, and for the lost productivity it surely must have caused some of you.

The good news is that NEDTRIE_NFIND now guarantees to return the next largest item, and it therefore now matches BSD's red-black Nfind. The bad news is that bitwise tries are really not ideal for guaranteed nearest matching, and performance is terrible as you can see from the purple line in the graphs above. If you can put up with non-guaranteed nearest matching, NEDTRIE_CFIND offers much better performance. NEDTRIE_CFIND takes a rounds parameter which indicates how hard the routine should try to return a close item: rounds=0 means return the first item encountered which is equal to or larger than the search key, rounds=1 means try one level down, rounds=2 means try two levels down and so on. rounds=INT_MAX means try hardest, and guarantees that any item with a matching key will be found and that, if there is no exact match, any item returned will have a very close key (though not necessarily the closest).

For a summary of the differences between Nfind and Cfind, see this useful table. Note that if you just want any item with a key larger or equal to the search key, NEDTRIE_CFIND(rounds=0|1|2) is extremely swift and has O(1) complexity as shown in the graphs above.

B. Implementation:

The source makes use of C macros in C and C++ templates in C++ - therefore, unlike typical C-macro-based algorithms, it is easy to debug and in fact the improved metadata provided by the templates lets a modern C++ compiler produce 5-15% faster code through PGO-guided selective inlining. The code is 100% standard C and C++, so it should run on any platform or architecture, though you may need to implement your own nedtriebitscanr() function if you're using neither GCC nor MSVC and want to keep performance high. If you are building a debug build, NEDTRIEDEBUG is turned on by default: this causes a complete state validation check to be performed after each and every change to the trie, which tends to be very good at catching bugs early but can make debug builds a little slow.

So what is "an in-place bitwise binary Fredkin trie algorithm" then? Well, you ought to start by reading and fully digesting the Wikipedia page on Fredkin tries, as what follows won't make much sense otherwise. The Wikipedia page describes a non-in-place trie which uses dynamic memory to store each consecutive non-differing section of a string, and indeed this is how tries are normally described in algorithm theory and classes. nedtries, obviously enough, selects on individual bits rather than substrings, and it uses an in-place rather than a dynamically allocated implementation.

Here is how nedtries performs an indexation: firstly, the most significant set bit X is found using nedtriebitscanr() which is no more than one to three CPU cycles on modern processors. This is used to index an array of bins. Each bin X contains a binary tree of items whose keys are (1<<X) <= key < (1<<(X+1)), so what one does is to follow the tree downwards selecting left or right based on whether the next bit downwards is 0 or 1. If an item has children, its key is only guaranteed to be constrained to that of its bin, whereas if an item does not have children then its key is guaranteed to match as closely as possible its position in the tree.
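
As an illustrative sketch of that indexation (this is not the nedtries source; msb_index() is merely a portable stand-in for nedtriebitscanr()):

#include <cstddef>
#include <cstdio>

/* Index of the most significant set bit, i.e. the bin number. */
static unsigned msb_index(std::size_t key)
{
  unsigned idx = 0;
  while (key >>= 1) ++idx;
  return idx;
}

int main(void)
{
  std::size_t key = 0x2C;                /* binary 101100 */
  unsigned bin = msb_index(key);         /* bin 5, because (1<<5) <= 0x2C < (1<<6) */
  std::printf("bin %u, descent:", bin);
  /* Each bit below the MSB selects the 0 or 1 child in turn. */
  for (unsigned bit = bin; bit-- > 0;)
    std::printf(" %c", ((key >> bit) & 1) ? '1' : '0');
  std::printf("\n");                     /* prints: bin 5, descent: 0 1 1 0 0 */
  return 0;
}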

If you insert an item, nedtries indexes as far down the existing tree as it can to where the new item ought to be and inserts it there. If you remove an item which has no children, it is simply removed. If it has a child, then a nobble function is called to choose the bias used when selecting a childless item to be nobbled and used as the replacement, i.e. one traverses downwards preferring either 1 or preferring 0 until a childless item is found, then delinks it from there and links it in to replace the item being removed.
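
The nobble step can be pictured with a toy sketch (this is not the nedtries data layout; toy_node and find_childless() are purely illustrative): keep descending down the preferred branch, falling back to the other branch where necessary, until a childless node is reached.

struct toy_node
{
  toy_node *child[2];   /* child[0] = "zero" branch, child[1] = "one" branch */
};

/* Walk downwards preferring the given branch (0 or 1) until a childless node is
   found; that is the node which gets delinked and relinked in place of the item
   being removed. */
static toy_node *find_childless(toy_node *n, int prefer)
{
  for (;;)
  {
    toy_node *next = n->child[prefer] ? n->child[prefer] : n->child[1 - prefer];
    if (!next) return n;
    n = next;
  }
}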

If you think about this hard enough, you realise that you will get a "nearly sorted" binary tree i.e. one whose node keys are very nearly in order. In fact, the more randomised the key, the more in order the tree becomes. The tree is usually sufficiently ordered that one can assume it to be so for most operations, but if you need to guarantee order then you can bubble sort per MSB bin as bubble sort performs very well on nearly sorted data (as does smooth sort if you have a very large set of data).

The enclosed benchmark.cpp will run a series of scalability tests comparing the bitwise binary trie implementation from nedtries with others, outputting its results in CSV format:

  • If compiled as C not C++, the C macro version of nedtries is compared to the red-black binary tree implementation from FreeBSD and the O(1) hash table implementation from http://uthash.sourceforge.net/.
  • If compiled as C++, the C++ template version of nedtries is compared to all of the C tests above as well as the STL associative container classes std::map<> and std::unordered_map<>. NOTE THAT YOU NEED A TR1 SUPPORTING COMPILER FOR std::unordered_map<> SUPPORT!

You will also find enclosed a set of precomputed Microsoft Excel spreadsheets which were generated on a 2.67GHz Intel Core 2 Quad Windows 7 x64 machine. They should be representative of performance on modern hardware - though note that the Intel Atom has a 17 cycle nedtriebitscanr(), which makes it the only modern CPU to be so slow. See http://gmplib.org/~tege/x86-timing.pdf for x86 and x64 instruction timings.

C. C Macro Usage:

Usage via C macros follows the FreeBSD rbtree.h format. See the enclosed nedtries.chm for detailed API documentation. Here is some sample code which can be compiled cleanly using gcc -Wall -pedantic -std=c99 test.c (or as C++ via g++ -Wall -pedantic test.c):

#include <stdio.h>
#include <assert.h>
#include "nedtrie.h"

typedef struct foo_s foo_t;
struct foo_s { NEDTRIE_ENTRY(foo_s) link; size_t key; };
typedef struct foo_tree_s foo_tree_t;
NEDTRIE_HEAD(foo_tree_s, foo_s);
static foo_tree_t footree;

size_t fookeyfunct(const foo_t *r) { return r->key; }

NEDTRIE_GENERATE(static, foo_tree_s, foo_s, link, fookeyfunct, NEDTRIE_NOBBLEZEROS(foo_tree_s))

int main(void)
{
  foo_t a, b, c, *r;
  NEDTRIE_INIT(&footree);
  a.key=2;
  NEDTRIE_INSERT(foo_tree_s, &footree, &a);
  b.key=6;
  NEDTRIE_INSERT(foo_tree_s, &footree, &b);
  r=NEDTRIE_FIND(foo_tree_s, &footree, &b);
  assert(r==&b);
  c.key=5;
  r=NEDTRIE_NFIND(foo_tree_s, &footree, &c);
  assert(r==&b); /* NFIND finds next largest. Invert the key function (i.e. 1-key) to find next smallest. */
  NEDTRIE_REMOVE(foo_tree_s, &footree, &a);
  NEDTRIE_FOREACH(r, foo_tree_s, &footree)
  {
    printf("%p, %u\n", (void *) r, (unsigned) r->key);
  }
  assert(!NEDTRIE_PREV(foo_tree_s, &footree, &b));
  assert(!NEDTRIE_NEXT(foo_tree_s, &footree, &b));
  return 0;
}

There isn't really much more to it - if you want to throw away the trie, simply NEDTRIE_INIT() its head. As no dynamic memory is involved, nothing is lost.

Choosing The Nobble Function

I should mention what the nobble function is for: you have three default choices, NEDTRIE_NOBBLEZEROS, NEDTRIE_NOBBLEONES and NEDTRIE_NOBBLEEQUALLY, though you can of course also define your own. The nobble function contributes to tree balance by working against bit bias in your keys, so if your keys contain an excess of non-leading zeros then you should preferentially nobble zeros. Equally, if your keys contain an excess of ones then you should preferentially nobble ones and, as you might have guessed, if the bits after the first set bit are completely random (which is rare) then you should nobble equally.

Sounds complicated? In fact it's very easy if you simply use trial & error. Start with nobbling zeros, which tends to be right in most situations, and then benchmark your code to determine the correct setting.
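
For example, building on the foo_t declarations from the C example above, switching from nobbling zeros to nobbling ones is just a matter of changing the last argument of NEDTRIE_GENERATE (this assumes NEDTRIE_NOBBLEONES takes the same single-argument form as NEDTRIE_NOBBLEZEROS shown earlier):

/* Prefer to nobble ones when keys contain an excess of set bits. */
NEDTRIE_GENERATE(static, foo_tree_s, foo_s, link, fookeyfunct, NEDTRIE_NOBBLEONES(foo_tree_s))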

Nfind versus Cfind

Where the BSD red-black tree implementation has RB_NFIND() for finding items which are nearest to the search term, nedtries provides NEDTRIE_CFIND() and NEDTRIE_NFIND(). What's the difference? Here's a quick table:

Nfind

  • Returns an exact match if there is an exact match in the trie
  • If there is not an exact match, guarantees that the item returned is the next largest
  • Complexity: somewhat worse than O(log N), as it must perform an O(log N) search of the subtree returned by Cfind(rounds=INT_MAX)

Cfind(rounds=INT_MAX)

  • Returns an exact match if there is an exact match in the trie
  • If there is not an exact match, the item returned will be very close to the next largest
  • Complexity: identical to Find, i.e. O(1/DKL(key||average key)). However, because it does much more work, it is approximately four times slower than a straight find

Cfind(rounds=0|1)

  • Returns an item with a key no larger than the next power of two multiple higher than the search key
  • Complexity: approximately O(2^rounds) for small numbers of rounds, so it approaches O(1)
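
As a rough usage sketch, reusing the foo_t trie from the C example above (and assuming NEDTRIE_CFIND takes its rounds parameter as a trailing argument, which the text above does not spell out):

foo_t s, *r;
s.key = 5;                                              /* assume no item with key 5 is present */
r = NEDTRIE_NFIND(foo_tree_s, &footree, &s);            /* guaranteed next largest, but slow */
r = NEDTRIE_CFIND(foo_tree_s, &footree, &s, INT_MAX);   /* very close to next largest, ~4x a plain find */
r = NEDTRIE_CFIND(foo_tree_s, &footree, &s, 0);         /* any item below the next power of two bound, O(1) */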

D. C++ Usage:

NOTE: Enable the C++ STL containers using the NEDTRIE_ENABLE_STL_CONTAINERS macro. You are advised to avoid the C++ STL containers where possible; they were hacked together.

C++ usage is even easier than the C macro usage thanks to nedtries::trie_map<> and nedtries::trie_multimap<>, which are API compatible with the std::map<>, std::multimap<> and std::unordered_map<> STL associative containers. trie_map<> and trie_multimap<> make full use of rvalue construction if either you are running on C++0x according to the value of __cplusplus, or you have defined HAVE_CPP0X. In the general case, simply drop trie_map<> or trie_multimap<> in where your STL associative container used to be and enjoy the speed benefits!

Note that insertion and deletion speed in any STL container is heavily bound by the speed of your memory allocator. You may wish to consider employing nedmalloc, which can deliver some unholy speed benefits if you run it as your system allocator; otherwise it will need some small source changes to employ its advanced v2 malloc API.

In case you are not familiar with STL associative containers, they are very simple e.g.:

nedtries::trie_map<size_t, Foo> foomap;
foomap[5]=Foo();
foomap.erase(foomap.find(5));

You can of course iterate through them and do all the normal things you can do with any STL container.
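
For instance, here is a minimal sketch of filling and iterating a trie_multimap<> (assuming, per the note above, that NEDTRIE_ENABLE_STL_CONTAINERS must be defined before inclusion and that the value type follows the usual std::multimap<> pair convention):

#define NEDTRIE_ENABLE_STL_CONTAINERS 1   /* per the note above */
#include <cstdio>
#include <utility>
#include "nedtrie.h"

int main(void)
{
  nedtries::trie_multimap<size_t, int> m;
  m.insert(std::make_pair((size_t) 2, 10));   /* duplicate keys are permitted */
  m.insert(std::make_pair((size_t) 2, 20));
  m.insert(std::make_pair((size_t) 6, 30));
  for (nedtries::trie_multimap<size_t, int>::iterator it = m.begin(); it != m.end(); ++it)
    std::printf("%u -> %d\n", (unsigned) it->first, it->second);
  return 0;
}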

The trie_map<> and trie_multimap<> Implementation

trie_map<> and trie_multimap<> are actually STL container wrappers rather than proper STL containers in their own right, i.e. they subclass an existing STL container, passing through most of its API but selectively overriding certain members. Their default parameters point at std::list<>, which is the most likely usage model for most people.

The advantages are mainly that it is quick to implement and can in theory be applied to any arbitrary STL container, thus taking advantage of that container's optimisations and customisations. The big disadvantage is that it is hacky, dirty and prone to acquiring bugs, and if you look at the source you'll see what I mean. There are, after all, a number of places where I am doing some very illegal things in C++ which just happen to usually work.

The chances are that this implementation will be good enough for most people. If however you might like to sponsor the development of a full bitwise trie STL associative container for submission to the Boost C++ peer reviewed libraries (and thereafter into the standard C++ language itself), I would be very pleased to oblige. Please contact ned Productions Consulting Ltd. for further details.

E. ChangeLog:

v1.03 trunk (?):

  • Added some support for architectures where CHAR_BIT is not 8. Thanks to Sebastian Ramadan for contributing this.
  • Disabled the STL containers being compiled by default, and stopped suppressing warnings about strict aliasing type punning.

v1.02 Final (9th July 2012):

  • Due to the breaking change in what NEDTRIE_NFIND means and does, bumped version number.
  • [master 910ef60] Fixed the C++ implementation causing memory corruption when built as 64 bit.
  • [master 0ea1327] Added some compile time checks to ensure C++ implementation will never again cause memory corruption when built as 64 bit.

v1.01 RC2 (unreleased):

  • [master bd6f3e5] Fixed misc documentation errors.
  • [master 79efb3a] Fixed really obvious documentation bug in the example usage. Thanks to Fabian Holler for reporting this.
  • [master 99e67e3] Fixed that the example usage in the documentation spews warnings on GCC. Now compiles totally cleanly. Thanks to Fabian Holler for reporting this.
  • [master 537c27b] Added NEDTRIE_FOREACH_SAFE and NEDTRIE_FOREACH_REVERSE_SAFE. Thanks to Stephen Hemminger for contributing this.
  • [master 4b12a3c] Fixed the fact that I had forgotten to implement iterators for trie_map<>. Also added trie_multimap<>. Thanks to Ned for pointing out the problem.
  • [master 8b53224] Renamed Nfind to Cfind.

v1.01 RC1 (19th June 2011):

  • [master 30a440a] Fixed misc documentation errors.
  • [master 2103969] Fixed misoperation when trie key is zero. Thanks to Andrea for reporting this.
  • [master 083d94b] Added support for MSVCs as old as 7.1.
  • [master f836319] Added Microsoft CLR target support.
  • [master 85abf67] I, being a muppet of the highest order, was actually benchmarking the speed of the timing routines rather than much else. Performance is now approx. 10x higher in the graphs ... I am a fool!
  • [master 6aa344e] Added a check for key uniqueness in the benchmark test (hash tables suffer if the key isn't unique). Added cube root averaging to the results output.
  • [master feb4f56] Replaced the use of rand() with the Mersenne Twister ( http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/SFMT/index.html).

v1.00 beta 1 (18th June 2010):

  • [master e4d1245] First release.
