Data object serialization in C++ is something that has been bugging me for a long time. I'm developing a distributed memory management library named Kelpie in C++ that sits on top of Nessie, an RDMA/RPC communication library for HPC written in C. Nessie is a solid library and does a number of low-level things that make my life easier and my code portable to different HPC platforms. However, since it's written in C, there have been a number of times when I've had to do awkward things to interface my C++ code with it. Serialization is one major sore point for me. Nessie uses XDR for serialization, which doesn't work with C++ strings and just feels cumbersome. I decided to write up a little benchmark to measure how well different popular serialization libraries performed so I could convince the Nessie developers to support something else.
Serialization Libraries
Serialization is the process of converting a data structure you're using into a contiguous series of bytes that can be transported over the network and revived into an object on the remote side without any additional information. This process often has to account for hardware architecture differences between the source and destination (byte endianness and word widths), as well as conversions between different programming languages. Some serialization libraries use a definition file to generate code for your application, while others provide hooks for you to define how objects get serialized in your code. I tested four serialization libraries in this experiment.
- XDR: XDR is a C-based serialization standard that was developed in the 1980s for NFS. Users define data structures and RPC handlers in a file that rpcgen translates to C code. Interestingly, XDR defers the actual packing of data until the last possible minute, allowing it to place data directly in an outgoing buffer. Unfortunately, it uses malloc, does not understand C++ strings, and stores all values in a big-endian format (it was developed by Sun).
- Boost: Boost has a handy C++ serialization library that makes it trivial to instrument your code with serialization routines. It supports nesting and can use different encoders (e.g., binary or XML). Like a lot of Boost code, it has a reputation for being flexible but heavyweight.
- Cereal: Cereal is a C++11 header-only library designed to be similar to Boost's serialization library, but faster. The C++11 requirement wasn't a problem for me, though I did find that I had to make some changes to my code to switch from Boost to Cereal.
- Protocol Buffers: PB is an object-oriented serialization library developed by Google that focuses on performance and storage efficiency. Similar to XDR, you write a definition file that a tool translates into (a lot of) code you can compile into your application. PB gives you data objects to use in your code with getters/setters. It feels much more invasive to me, but it gives PB the ability to do more sophisticated things internally (e.g., know when an object is only partially packed). PB has a reputation for being both fast and space efficient.
There are other serialization libraries out there that I've experimented with but excluded from this test (Thrift, Avro, Cap'n Proto, FlatBuffers, etc.). Each library requires a good bit of tinkering to set up and incorporate into a test. I'd like to revisit them at some point for a bigger comparison.
Experiments
I'm only interested in seeing how well these libraries perform with the types of data structures I frequently ship in my Kelpie library. My messages usually send a small number of (integer) default values followed by a series of id/string data entries. The test program generates all the application data at start time and then serializes different amounts of input by changing how many id/string data entries are packed. Packing for me involves all of the steps needed to convert my application's data into a buffer that my RDMA library can transmit. In Boost, Cereal, and PB, this RDMA buffer requirement means copying the packed data to a memory location that is suitable for transmission (XDR lets me write directly into the buffer).
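Just to illustrate the shape of the test, here's a toy Python sketch of the sweep. The real harness is C++ and calls the actual libraries; encode() below is only a stand-in, and the record counts and sizes are illustrative:
#+BEGIN_SRC python
import random
import string
import time

def make_entries(n):
    # Generate n id/string records up front, like the real test data
    return [(i, ''.join(random.choices(string.ascii_letters, k=32)))
            for i in range(n)]

def encode(header, entries):
    # Stand-in for a real serializer (XDR/Boost/Cereal/PB in the C++ harness)
    return repr((header, entries)).encode()

header = (1, 2, 3)  # a few integer default values
for count in (1, 4, 16, 64, 256, 1024):
    entries = make_entries(count)
    start = time.perf_counter()
    buf = encode(header, entries)
    elapsed = time.perf_counter() - start
    print(count, len(buf), elapsed)
#+END_SRC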
Measurements
In separate tests I measure the encode/decode times, as well as the size of the packed data that would be transmitted over the network. The packed message size shows that PB encodes data compactly and achieves a noticeable improvement over the other libraries at all sizes. Boost carries the most overhead. Message size is important to me in Kelpie, as there are sizable communication penalties if you can't fit a message in a single MTU. Beyond fitting everything into an MTU, the differences between the libraries aren't that significant.
Encoding/decoding speed, however, is very important to me, as every step in the communication path counts toward the latency of my operations. I was surprised and impressed by both XDR and PB: they both performed well and were significantly better than Boost and Cereal at the left end of the plot, where I care the most.
Decode times were another interesting story. XDR did very well for short messages and remained competitive with Cereal in larger messages. I was surprised PB did so poorly, even getting beat by Boost after 32 records were sent. Decode isn't as critical as encode (since the sender is free to do more work once the message leaves), but it does add to the communication cost.
Back to XDR?
I was pretty disappointed by these tests because I'd hoped that with all their shiny C++ features, the newer libraries would outperform our aging XDR code. In some metrics that was true, but there was no universal winner across all situations: the air goes somewhere when you squeeze the balloon. It's important to point out that these tests were designed to reflect my particular scenario, and are not meant to be the be-all end-all test for comparing serialization libraries. Most people don't have my buffer challenges, and would therefore see different results. The best thing to do is code it up yourself and try it out with your own data.
2014-10-14 Tue
tracks gis code planes
The other day while riding the rental car shuttle to the Albuquerque International Sunport (ie, the ABQ airport), I started thinking about how some airports are labeled as international while others are not. My first thought was that it didn't take much for an airport to become international- all you'd need is a few flights to Mexico. However, Wikipedia points out that international airports have to have customs and immigration, and generally have longer runways to support larger planes. In any case, it still seems like the international label is on a lot more airports than you'd expect. This got me wondering: how different are the workloads of different airports, anyway?
Since I have a big pile of airline data, I decided to see if I could better characterize airports by analyzing where the outgoing flights were heading. I wrote a Python script that takes a day's worth of flight data, extracts all direct flights that left a particular airport, and plots their altitude/distance traveled on the same graph. The X axis here is the cumulative distance the plane flew, calculated by summing up the distance between the coordinates in its track. The Y axis is altitude in miles. I picked several flights and verified that my distances roughly match the expected optimal path between cities (see AirMilesCalculator.com).
Below are some plots for SFO, ATL, ABQ, and CHS, all of which are international airports. A few interesting things pop out looking at the charts. First, SFO has a broad mix of travel, including local (LAX is about 340 miles), domestic (many stops between here and the east coast), and international (the large gaps in distances are for oceans). ATL is similar, but it has a lot more variety in the under-1,000-mile range (due to the number of airports on the east coast). ATL also has plenty of international flights, but they're shorter since Atlanta is closer to Europe. Interestingly, the longest flights for both SFO and ATL in this sample were both to Dubai. In contrast, the international Sunport (ABQ) and Charleston (CHS) didn't seem to have much range. In ABQ's case, this can partially be attributed to the fact that it's towards the middle of the country (and close to Mexico).
This started out as a fun project that I quickly hacked together to get a first order result. I had the data, so all it took was building something to calculate the distances and plot them. The first plots I did showed a lot of promise, but I also noticed a lot of problems that need fixing. These fixes ate up a good bit of my time.
Distance Problems
The first problem I had was that my distances were completely wrong. I had several instances of flights that were going 30,000 miles, which is bigger than the circumference of the planet. My initial thought was that I wasn't properly handling flights that crossed the international dateline. I went through several variations of the code that looked for dateline crossings (ie, lon jumps from near +/-180 to near -/+180) and fixed their distances. This helped, but the long flights were still 2x longer than they should have been.
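For reference, the crossing check I mean amounts to something like this (a minimal sketch; the helper name is made up for illustration):
#+BEGIN_SRC python
def crosses_dateline(lon1, lon2):
    # A segment whose longitudes jump from near +180 to near -180 (or the
    # reverse) almost certainly crossed the dateline rather than the globe
    return abs(lon2 - lon1) > 180.0
#+END_SRC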
I realized later that the problem was my distance calculator. I'd hacked something together that just found the Euclidean distance in degrees and then converted degrees to miles. That would be ok if the world were flat and lon/lat formed a rectilinear grid, but the world is round and lon/lat grid cells become smaller as you near the poles. I felt pretty stupid when I realized my mistake. A quick look on Stack Overflow pointed me to the Haversine formula and gave me code I could plug in. The numbers started working out after that.
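The haversine version looks roughly like the sketch below (not my exact code; track_distance is a hypothetical helper that shows the cumulative summing I described earlier):
#+BEGIN_SRC python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in miles
    r = 3958.8  # mean Earth radius in miles
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2.0) ** 2 +
         math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2.0) ** 2)
    return 2.0 * r * math.asin(math.sqrt(a))

def track_distance(points):
    # Cumulative distance a plane flew, summed over its (lat, lon) samples
    return sum(haversine_miles(a[0], a[1], b[0], b[1])
               for a, b in zip(points, points[1:]))
#+END_SRC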
Multihop Problems
Another problem I hit was that my original data described what a plane did during the whole day, as opposed to the individual flights it took. At first, I tried to compensate in my plotter by making an inline splicer that used altitudes to chop the plane's data into multiple hops. This partially worked, but unfortunately the data wasn't always sampled at the right time, so some flights never quite go to zero. I raised the cutoffs, but then had to add some logic to figure out whether a plane was landing or taking off. A lot of the time this worked, but there were still a bunch of individual cases that slipped by and made the plots noisy.
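The altitude-based splicer was basically this idea (a rough sketch with made-up names and a made-up cutoff, not the real code; a real version would also need the landing/takeoff logic mentioned above):
#+BEGIN_SRC python
ALT_CUTOFF = 0.1  # miles; hypothetical threshold for "on the ground"

def split_hops(track):
    # Chop one plane's day of (time, lat, lon, alt) samples into separate
    # hops whenever the altitude dips below the cutoff (i.e., a landing)
    hops, current = [], []
    for sample in track:
        current.append(sample)
        if sample[3] < ALT_CUTOFF and len(current) > 1:
            hops.append(current)
            current = [sample]
    if len(current) > 1:
        hops.append(current)
    return hops
#+END_SRC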
The solution was to go back to the original data and just use the source field that sits in the data. I reworked the data and generated all the single hops based on that field. A nice benefit of this approach was that it made it easier to search for the airport of origin. Previously I'd been using the lon/lat boundaries of different airports to detect when a plane was leaving a certain airport; now I can just look for the tag. This solution does have some problems, as there are many flights that don't label their src/dst. I've also found a few where the pilot must have forgotten to change the src/dst for the flight (ie, one SFO flight to Chicago went on to Europe).
Velocity Problems
The last problem (and one I don't have a good fix for) is velocity. Looking at the individual distances and times for track segments, I sometimes see situations where the calculated velocity is faster than the speed of sound (I'm assuming Concordes aren't flying). Often this seems to happen out in the middle of the ocean, when the data becomes more sparse. It could be bad values in the data, or maybe two observers reporting the same thing differently. I'll need to take a closer look at it later. For now I just put in some kludge filters that throw out flights that are out of range.
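The kludge filter amounts to something like this (a sketch that reuses the haversine_miles helper from the earlier snippet; the 700 mph cutoff is just an illustrative number):
#+BEGIN_SRC python
MAX_MPH = 700.0  # rough cutoff around the speed of sound

def plausible(track):
    # Reject a track if any segment of (time_sec, lat, lon) samples
    # implies an impossible ground speed
    for a, b in zip(track, track[1:]):
        hours = (b[0] - a[0]) / 3600.0
        if hours <= 0:
            return False
        if haversine_miles(a[1], a[2], b[1], b[2]) / hours > MAX_MPH:
            return False
    return True
#+END_SRC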
Code
I decided to go ahead and make some of the scripts I've been writing available on GitHub. This one is called Canonball Plotter and can be found at: github:airline-plotters
A few weeks ago Matthew Pugh pointed me to Testing Grounds, a blog he started writing to help motivate him to learn more Python for doing statistical problems. The math is beyond me, but seeing the blog got me thinking about my old websites and how I haven't done anything with them for a long time. I used to write a personal blog and put my publications on CraigUlmer.com, but I stopped updating both of them around the time we had kids. A few years back I started writing things on Google+. As much as people like to hate on G+, it's been a good outlet for me. It's easy to add new things, you can write short or long posts, and it's a great way to hear stuff from all my friends that left the lab and went to Google.
Still, there are plenty of annoyances with G+. The main complaint I have is that it's effectively a "0-day content" site, where anything older than a day or so doesn't matter to anyone (well, except Google's data mining machines). It's easy for good posts to get buried in your stream if you've got a crazy uncle that likes to post about guns, cats, and Obama. When I first started posting, I often found myself wondering if my posts had actually been dropped. It turned out they hadn't- it's just that other friends were posting more stuff that pushed mine down the list. At some level, you either get caught up in the popularity contest, or you write things for yourself and stop caring whether others read them. I'm the latter. The crazy people are also a problem on G+. Comments aren't as bad as YouTube, but there are a lot of people on G+ that have nothing better to do than shout you down for mildly mentioning a view on a touchy subject. The last big concern is what Google will do with the content in the future. I don't think G+ will be going away anytime soon, but then again I didn't think Google Reader would go away either. I'd just as soon have a copy of what I write somewhere else.
Getting the Blog Back Together
So, I started thinking about ways I could pull some of my previous writing out of G+ and host it myself. Getting the content was easy- I just walked through my page and started cutting-and-pasting posts that were technical in nature. I use Emacs's org-mode a great deal at home and work, so I just stuck the content in a flat file and used org's markup to group and tag posts. The images got placed in a local directory, but those were easy enough to link back into the org file. The more stuff I put into the org doc, the more I felt like it was the native content format I wanted to use to host a blog. The next step was looking for something that could render it into a webpage.
People have written tons of tools that convert org-mode data to other things. Org-mode has a built-in exporter to html or latex. While this would be the easiest to get running, I decided against it because static-site generators drive me crazy. I wrote a static generator for my first blog, and it was a pain to maintain. Plus, I don't like all the annotation you have to add to the org file to make it render right. Next, I looked at the wiki-like blogs lots of people build with org-mode. Most of these leaned more toward wikis than blogs, and used org's doc linking to stitch everything together. I wanted everything to get plugged into the same org file. Plus, the styles never looked like what I wanted.
Next I started looking at client-side rendering. I figured I'd push org chunks out from the server and then do something in the client to render them. I'd been meaning to check out Dart, so I picked up the dev kit and started writing simple examples. Dart is pretty cool, especially since it looks like it'd let me skip having to learn JavaScript. However, I ran into a few roadblocks. First, I realized the parsing I'd need to do was way beyond what I could figure out without a lot more work. Second, I realized that it would be dumb to send the entire org file to the client, and that what I really needed was something on the server side to chop up entries and serve them up. That meant parsing on both the client and the server, which sounded like a lot of work. During this time I discovered org-js, a JavaScript parser and renderer for org-mode that could do the rendering. I got it to work, but I had trouble shoehorning it into the page layout I liked, and the JavaScript is complex enough that I didn't want to mess with it.
Back to the 90's
Since I needed some server-side code to chop things up anyway, I decided to fall back to my 1990's view of the web and just write some CGI. I looked into using Go, as there's an easy-to-use CGI lib available. However, my provider doesn't have Go support, and the iterative development process for web work makes compiled languages a pain. So, sadly, after all my hopes of doing something new, I wound up just using Perl again.
Ok, to be fair, parsing and web page generation is the kind of thing Perl does very well. It was satisfying to know that I could just throw in a couple of lines of regex and get genuinely useful features. The bulk of the work was coming up with a simple tokenizer that could drive a state machine and open/close sections properly. Fortunately, org's main keywords are easy to pick off, as they usually start at the beginning of a line. Bulletized lists were a pain (see the sketch after this list) because:
- They require surrounding markup (the list has to be opened and closed)
- You have to know when you're still in a list
- You still have to parse for other markup
- They can jump around a lot
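To make the state-machine idea concrete, here's a rough sketch in Python (rather than the Perl I actually used) of just the list handling, ignoring nesting and the other markup:
#+BEGIN_SRC python
def render_lists(lines):
    # Track whether we're inside a bulleted list so the surrounding
    # markup gets opened and closed at the right times
    out, in_list = [], False
    for line in lines:
        if line.lstrip().startswith("- "):
            if not in_list:
                out.append("<ul>")
                in_list = True
            out.append("<li>" + line.lstrip()[2:] + "</li>")
        else:
            if in_list:
                out.append("</ul>")
                in_list = False
            out.append(line)
    if in_list:
        out.append("</ul>")
    return out
#+END_SRC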
I took advantage of Google's Code Prettify to let me put code stubs in the stream. It's a bit of JavaScript that colorizes code, which meant I didn't have to do anything special to display code (besides escaping the lt/gt kinds of symbols that HTML has problems with). I still need to tweak the colors, but it's good enough for now.
#+BEGIN_SRC bash
#!/bin/bash
echo "Hello, suckers!"
#+END_SRC
The one thing left that I probably should do is put in tables. Org-mode has an awesome table entry system that makes it easy to create tables and add new things to them. I think that should be a simple fix, but I'll get to it when I need it.
I probably spent just as much time writing the parser as I did trying to figure out how to make HTML do what I wanted. I know it's not popular, but I decided to make this page a side scroller. Vertical scrollers are nice, but I get sick of dragging down to get to the next thing. Everyone has widescreen monitors these days- I don't know why more people don't do stuff horizontally. At the last minute I decided not to absolutely position the top heading bar. I like the idea of having navigation fields in a constant location, but I don't like the idea of losing screen space. If you have a better compromise, let me know. For the record, yeah, I know this looks pretty 90s-ish. The 90s were still good times though.
It's Up
If you're reading this, then it looks like I've gotten the whole thing up and running. My intention is to post longer writeups on technical things to craigulmer.com and crosspost to G+, so people can send me comments there. I probably won't post personal blog stuff here, as I intend this to be the place where people can see what kind of work I do (plus, the world has gotten a lot creepier since I first started running a personal blog, so I'm a lot more worried about what I say these days). Here's to hoping this isn't the last post that's made on this blog.
2014-08-24 Sun
tracks gis planes
How does military conflict affect commercial airline flight paths? Below are the daily flight heat maps around Ukraine from late March to early April. Each frame is an aggregate of all flights for a particular day. The darker the color, the more flights went there that day. On 3/31, you can see a lot of flights circling a city in Crimea (maybe Simferopol?). The next day the airlines started steering their flights around Crimea (except for some flights coming from Russia).
#+CAPTION: Flight patterns over Crimea changed after the conflict broke out
#+NAME: fig:crimeaconflict
I know others have plotted this before, but it's been fun writing something to walk through the data on my own. Python/Matplotlib was taking forever, so I wrote something in Go that allowed me to clean up the original data (eg, split dateline-crossing flights) and filter it down to regions of interest. I still plot with Matplotlib, but now that the data is simplified it takes a lot less time to do.
2014-08-14 Thu
tracks gis planes code viz
This week at home I've been parsing through some flight data I acquired, looking for a good way to generate heat maps of where planes fly. I thought of a few elaborate ways to do this, but in the end I just ran a chunk of 10k flights through a simple xy plotter in matplotlib using a low alpha (transparency) value. It's interesting to see the flight corridors emerge as you throw more data at it. The plot below is about half of the data for a single day.
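The core of it is roughly this (a simplified sketch, not my exact script; the flights variable is assumed to hold one list of (lon, lat) points per flight):
#+BEGIN_SRC python
import matplotlib.pyplot as plt

def plot_heatmap(flights, outfile="heatmap.png"):
    # Overlay every flight's track with a very low alpha so heavily
    # traveled corridors darken while one-off paths stay faint
    fig, ax = plt.subplots(figsize=(12, 6))
    for track in flights:  # track: list of (lon, lat) samples
        lons = [p[0] for p in track]
        lats = [p[1] for p in track]
        ax.plot(lons, lats, color="black", alpha=0.03, linewidth=0.5)
    ax.set_xlabel("Longitude")
    ax.set_ylabel("Latitude")
    fig.savefig(outfile, dpi=150)
#+END_SRC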
Code
I've now posted the plotting code on GitHub, under my airline-plotters project: github:airline-plotters