One major part of our product is a data acquisition framework. The framework is responsible for gathering data from devices and sending it to the server. The server then processes the data and eventually stores it in CouchDB.
The CouchDB storage is pretty simple: it's just a single database where the data gets inserted with a random ID. The data is rarely removed or modified after insertion; it only gets read later and displayed as graphs and the like.
Things started to get weird
However, the data server started to behave strangely. The first symptoms were slow inserts and huge disk usage by the database. For the slow inserts we first suspected that we were not using CouchDB's bulk document API, but fixing that gave only a minor speed increase. Compacting the database didn't really help either: it was too slow, peaking at 300 documents per second and averaging about 100 docs/s. With 8 million documents that takes a while. Worst of all, the compaction didn't even reduce the disk usage. The database ate 26 gigabytes of disk for only 8 million documents. That's a whopping 3 kilobytes per document, while the documents themselves were only about 100 bytes in size.
What was even weirder, we didn't really see anything strange in the way the server performed. IO ops were low, CPU usage was low and there was plenty of free memory. Heck, we could even write zeros (dd if=/dev/zero of=/var/somefile.img) to the disk at a rate of 50 MB/s. Killing most of the other services didn't help either: CouchDB just kept being slow. We even upgraded CouchDB in our test environment to see if it helped, but gained only a small performance improvement from that operation.
Let's not be that random
As you might have heard, the problem with randomness is that you can never be sure. So after a week or so of pondering the issue, we stumbled upon a chapter about bulk inserts and monotonic DocIDs in the excellent CouchDB guide. With a tip from the awesome folks at #couchdb we rebuilt our system using sequential IDs instead of purely random ones.
We copied the idea of sequential IDs straight from CouchDB, which uses them for its auto-generated DocIDs. We just needed to implement it in Python and ended up with the following:
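The original listings aren't reproduced here, but based on the description below (and the class names mentioned in the comments), a minimal sketch of the two generators might look roughly like this; the exact increment sizes, rollover constants and method shapes are assumptions:

```python
import random
from datetime import datetime


class SequentialID(object):
    """32-character hex IDs: a 26-char random prefix plus a monotonically
    increasing 6-char suffix. The prefix is regenerated (and the suffix
    reset to a small positive value) roughly every 8000 calls."""

    def __init__(self):
        self._prefix = None
        self._suffix = 0xffffff  # force a reset on the first call

    def __call__(self):
        self._suffix += random.randint(1, 0xffe)
        if self._suffix > 0xffffff:
            # New 26-character random prefix, small positive suffix
            self._prefix = '%026x' % random.getrandbits(104)
            self._suffix = random.randint(1, 0xfff)
        return '%s%06x' % (self._prefix, self._suffix)


class MonotonicID(object):
    """Sortable IDs: an ISO-formatted UTC timestamp with a few random
    digits suffixed to avoid collisions within the same microsecond."""

    def __call__(self):
        stamp = datetime.utcnow().strftime('%Y-%m-%dT%H:%M:%S.%f')
        return '%s%04d' % (stamp, random.randint(0, 9999))
```

Each process creates its own instance (e.g. make_id = SequentialID()) and calls it whenever a new document needs an ID.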
The SequentialID is a random 32-character hexadecimal string. The first 26 characters form a prefix that stays the same for approximately 8000 subsequent calls. The suffix is increased monotonically; after around 8000 calls the prefix is regenerated and the suffix is reset to a small positive value. We also built another generator, MonotonicID, which uses an ISO-formatted UTC timestamp with a few random digits suffixed.

Now you might think that we would get collisions with the SequentialID, especially if multiple processes are writing to the same CouchDB database. We don't, though, since the prefix is a randomly generated string and its entropy (i.e. string length) is big enough to make collisions highly unlikely. Besides, a SequentialID generator is never shared between processes; each process creates its own instead.

Performance rocketed through the roof
No fix would be complete without performance metrics, so Petri wrote a small benchmarking script.
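Petri's actual script isn't included here, but a rough sketch of that kind of benchmark, bulk-inserting documents with random versus sequential IDs through the _bulk_docs API, could look like this (the database name, document shape and batch sizes are made up, and SequentialID refers to the sketch above):

```python
import time
import uuid

import requests

COUCH = 'http://localhost:5984'
DB = 'idbench'
BATCH, BATCHES = 1000, 100


def benchmark(make_id):
    requests.put('%s/%s' % (COUCH, DB))  # create a fresh database
    start = time.time()
    for _ in range(BATCHES):
        docs = [{'_id': make_id(), 'payload': 'x' * 100}
                for _ in range(BATCH)]
        requests.post('%s/%s/_bulk_docs' % (COUCH, DB), json={'docs': docs})
    rate = BATCH * BATCHES / (time.time() - start)
    size = requests.get('%s/%s' % (COUCH, DB)).json().get('disk_size')
    requests.delete('%s/%s' % (COUCH, DB))  # clean up for the next run
    return rate, size


print('random:     %.0f docs/s, %s bytes on disk' % benchmark(lambda: uuid.uuid4().hex))
print('sequential: %.0f docs/s, %s bytes on disk' % benchmark(SequentialID()))
```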
This showed us the huge performance increase we got. Write speed is almost four times as fast with sequential IDs as with random IDs, not to mention that the database takes only a seventh of the space on disk! We didn't measure the compaction speed, but everything indicates it should be a lot faster too.
How does it work?
The reason this works lies in how CouchDB stores its data on disk. Internally CouchDB uses a B+-tree (from now on referred to as a B-tree) to store the data of a single database. If the data inserted into the B-tree (in this case, the DocIDs) is random, it causes the tree to fan out quickly. As the minimum fill rate is 1/2 for every internal node, the nodes are mostly filled only up to 1/2 (since the random data spreads evenly across the key space), generating many more internal nodes than orderly inserts would.
The more internal nodes there are, the more disk seeks CouchDB has to perform to find the correct leaf to write the data into. This is why we didn't see any IO ops stacking up: the disk seeks don't show up in iostat! And this was the single biggest cause of the slowdown in both reads and especially writes of our CouchDB database.
The problems aren't limited to disk seeking, either. The randomness of the DocIDs causes the tree structure to change often (due to the nature of B-trees). With an append-only log, the database has no choice but to rewrite the changed tree structure at the end of the log. As this happens more and more often while random data keeps pouring in, disk usage grows rapidly, eventually hogging a lot more disk than the data in the database actually requires. On top of that, it slows down compaction too, as compaction constantly needs to rebuild the new database.
The B-tree is why using (semi-)sequential IDs is a real life saver. The new model causes the database to be filled in an orderly fashion, and the buckets (i.e. leaves) are filled up instead of being left half full. The best part is that the IDs auto-generated by CouchDB (which were not an option for us) already use a sequential scheme, so if you use those you don't really need to worry about a thing.
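In other words, if you don't need to control the IDs yourself, simply leaving _id out of the document and letting the server assign one is enough. The snippet below is a minimal illustration of that (the database name is made up, and on older CouchDB versions you may need to configure the uuids algorithm as sequential explicitly):

```python
import requests

COUCH = 'http://localhost:5984'
DB = 'mydb'  # hypothetical database name

# No '_id' field in the body: CouchDB assigns the document ID itself,
# using its own (sequential, on 0.11+ with the right configuration)
# UUID algorithm.
resp = requests.post('%s/%s' % (COUCH, DB),
                     json={'type': 'status', 'value': 42})
print(resp.json())  # e.g. {'ok': True, 'id': '...', 'rev': '...'}
```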
So remember kids: if you cram loads of data in your CouchDB, remember to select your document ID scheme carefully!
Two questions.
Do the SequentialID and MonotonicID classes work together to produce one non-purely-random UUID, or are they alternatives between which one can choose?
Why did you write your own ID generator (by means of the SequentialID and MonotonicID classes) when CouchDB already uses sequential IDs for its auto-generated DocIDs?
I mean, why not use the CouchDB API and ask CouchDB for one UUID or a set of UUIDs? Doesn't that API call return UUIDs that would solve your performance problem?
Thanks,
EDH
SequentialID and MonotonicID are alternatives. SequentialID is a bit more random; MonotonicID relies on a datetime with a small random variation to avoid collisions. MonotonicID is intended for cases where the IDs need to be sortable.
The CouchDB built-in ID generator in fact produces results somewhat similar to SequentialID. The reason the built-in mechanism was not fit for our purposes was the requirement to prefix the keys. We use different prefixes inside the database to signal the type of the documents (for example status/aabbccddee, where 'status/' is the prefix).
We could've solved this by asking CouchDB for an ID every time we wanted to create a new doc and then just concatenating the prefix and the ID, but that would be an extra round-trip to CouchDB, adding 10-20 ms of latency depending on the setup. So we ended up writing our own SequentialID generator that we can use without the penalty of a round-trip to our CouchDB server.
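To make the trade-off concrete, here is a rough sketch of both options, using the _uuids endpoint for the round-trip variant and the SequentialID sketch from the post for the local one; the 'status/' prefix is just the example from the comment above, and none of this is the original code:

```python
import requests

COUCH = 'http://localhost:5984'


def prefixed_id_via_couchdb(prefix):
    # Option 1: ask CouchDB for a UUID -- an extra HTTP round-trip per ID
    uuid = requests.get('%s/_uuids' % COUCH).json()['uuids'][0]
    return prefix + uuid


def prefixed_id_local(prefix, generator):
    # Option 2: generate the ID locally, no round-trip needed
    return prefix + generator()


make_id = SequentialID()
print(prefixed_id_via_couchdb('status/'))
print(prefixed_id_local('status/', make_id))
```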
Ok, that makes sense.
So if I don't have these extra requirements (prefixing the keys, IDs that must be sortable, ...) and rely on the CouchDB-generated IDs, I should be fine and shouldn't hit the mentioned performance problems. Is that right?
Thanks,
EDH
Yeah, that's correct.
This blog post merely tries to point out caveats when generating your own IDs for documents.
Ok. Thanks.
EDH
Oh yeah, and another reason is this: COUCHDB-465
CouchDB versions prior to 0.11 just use purely random IDs, and for various reasons we run 0.10 in production. So if you're running a way too old CouchDB, this bug might affect you too :)
This is a stretch, but @Jyrki Pulliainen:
Could I ask for some advice on coding a random UUID generator, based on your experience? I'm about to implement a similar solution after reviewing the low-level CouchDB protocols, similarly impressed with the performance gains seen after implementing custom modifications. I'm new to development in general, so I jump at any opportunity to avoid 'newbie' mistakes where possible :)
Is this still relevant? I did an experiment comparing side-by-side bulk inserts of docs with UUIDs in random order versus ordered UUIDs. There's no substantial difference between the two approaches. Am I missing anything?
See the script and results in this gist:
https://gist.github.com/pts-kevinqiu/eac766ec4e1157301a48457c464e036a