The other day while riding the rental car shuttle to the Albuquerque International Sunport (ie, the ABQ airport), I started thinking about how some airports are labeled as international while others are not. My first thought was that it didn't take much for an airport to become international: all you'd need is a few flights to Mexico. However, Wikipedia points out that international airports have to have customs and immigration, and generally need longer runways to support larger planes. In any case, it still seems like the international label gets applied to a lot more airports than you'd expect. This got me wondering: how different are the workloads of different airports, anyways?
Since I have a big pile of airline data, I decided to see if I could better characterize airports by analyzing where their outgoing flights were heading. I wrote a Python script that takes a day's worth of flight data, extracts all direct flights that left a particular airport, and plots altitude against distance traveled for each of them on the same graph. The X axis is the cumulative distance the plane flew, calculated by summing up the distances between consecutive coordinates in its track. The Y axis is altitude in miles. I picked several flights and verified that my distances roughly match the expected optimal paths between cities (see AirMilesCalculator.com).
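For illustration, here's a minimal sketch of the plotting loop, assuming each flight has already been reduced to a list of (cumulative miles, altitude in miles) pairs; the names are hypothetical, not the actual script:

import matplotlib.pyplot as plt

def plot_departures(flights, airport):
    # flights: one entry per flight, each a list of (cum_miles, alt_miles) pairs
    for flight in flights:
        miles, altitudes = zip(*flight)
        plt.plot(miles, altitudes, linewidth=0.5)
    plt.xlabel("Cumulative distance flown (miles)")
    plt.ylabel("Altitude (miles)")
    plt.title("Departures from " + airport)
    plt.show()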
Below are some plots for SFO, ATL, ABQ, and CHS, all of which are international airports. A few interesting things pop out of the charts. First, SFO has a broad mix of travel, including local (LAX is about 340 miles), domestic (many stops between here and the east coast), and international (the large gaps in distances are for oceans). ATL is similar, but it has a lot more variety in the under-1,000-mile range (due to the number of airports on the east coast). ATL also has plenty of international flights, but they're shorter since Atlanta is closer to Europe. Interestingly, the longest flights for both SFO and ATL in this sample were to Dubai. In contrast, the international Sunport (ABQ) and Charleston (CHS) didn't seem to have much range. In ABQ's case, this can partially be attributed to the fact that it's towards the middle of the country (and close to Mexico).
This started out as a fun project that I quickly hacked together to get a first-order result. I had the data, so all it took was building something to calculate the distances and plot them. The first plots showed a lot of promise, but I also noticed a lot of problems that needed fixing. These fixes ate up a good bit of my time.
The first problem was that my distances were completely wrong. I had several instances of flights that supposedly traveled 30,000 miles, which is more than the circumference of the planet. My initial thought was that I wasn't properly handling flights that crossed the international dateline. I went through several variations of code that looked for dateline crossings (ie, lon jumps from near +/-180 to near -/+180) and fixed their distances. This helped, but the long flights were still twice as long as they should have been.
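The crossing check is just a big longitude jump between consecutive samples; a hedged sketch of the idea (not the actual code):

def crosses_dateline(lon1, lon2):
    # A jump of more than 180 degrees between consecutive samples almost
    # certainly means the track wrapped at +/-180 rather than flying the
    # long way around the planet.
    return abs(lon2 - lon1) > 180.0

def unwrap_lons(lons):
    # Shift longitudes by multiples of 360 so a crossing track is continuous.
    out = [lons[0]]
    for lon in lons[1:]:
        while lon - out[-1] > 180.0:
            lon -= 360.0
        while lon - out[-1] < -180.0:
            lon += 360.0
        out.append(lon)
    return out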
I realized later that the real problem was my distance calculator. I'd hacked something together that just found the Euclidean distance in degrees and then converted degrees to miles. That would be ok if the world were flat and lon/lat formed a rectilinear grid, but the world is round and lon/lat grid cells shrink as you near the poles. I felt pretty stupid when I realized my mistake. A quick look on Stack Overflow pointed me to the Haversine formula and gave me code I could plug in. The numbers started working out after that.
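The formula itself is only a few lines; here's a minimal Python version using the mean Earth radius in miles (this mirrors the standard Stack Overflow snippet rather than my exact code):

from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_MI = 3958.8  # mean Earth radius in miles

def haversine_miles(lat1, lon1, lat2, lon2):
    # Great-circle distance between two lat/lon points, in miles.
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2.0 * EARTH_RADIUS_MI * asin(sqrt(a))

def track_miles(points):
    # Cumulative distance along a track of (lat, lon) samples.
    return sum(haversine_miles(a[0], a[1], b[0], b[1])
               for a, b in zip(points, points[1:]))

A nice side effect: since the formula only depends on the sine of half the longitude difference, it returns the short-way distance even when a segment hops the dateline, so the crossing fix above matters more for plotting than for distance.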
Another problem I hit was that my original data described what a plane did over the whole day, as opposed to the individual flights it took. At first, I tried to compensate in my plotter by adding an inline splicer that used altitudes to chop a plane's data into multiple hops. This partially worked, but the data wasn't always sampled at the right times, so some flights never quite went down to zero altitude. I raised the cutoffs, but then had to add logic to figure out whether a plane was landing or taking off. A lot of the time this worked, but there were still a bunch of individual cases that slipped by and made the plots noisy.
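A stripped-down version of that splicer might look something like this; the cutoff altitude and tuple layout are made up for illustration:

GROUND_CUTOFF_FT = 1000.0  # assumed cutoff; the real value took tuning

def split_into_hops(track, cutoff=GROUND_CUTOFF_FT):
    # track: list of (lat, lon, altitude_ft) samples for one plane's day.
    # Start a new hop whenever the plane drops below the cutoff altitude.
    hops, current = [], []
    for point in track:
        if point[2] < cutoff:
            if len(current) > 1:
                hops.append(current)
            current = []
        else:
            current.append(point)
    if len(current) > 1:
        hops.append(current)
    return hops

As noted above, this falls apart when the samples never dip below the cutoff, which is why the real version also needed landing-versus-takeoff logic.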
The real solution was to go back to the original data and just use the source field that already sits in it. I reworked the data and generated all single hops based on that field. A nice benefit of this approach is that it made it easier to search for the airport of origin. Previously I'd been using lon/lat boundaries for different airports to detect when a plane was leaving a certain airport; now I can just look for the tag. This solution does have some problems, as many flights don't label their src/dst. I've also found a few where the pilot must have forgotten to change the src/dst for the flight (ie, one SFO flight to Chicago went on to Europe).
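In effect, the origin test went from a geometry check to a string compare; roughly the difference below (the bounding box and field names are hypothetical):

ABQ_BOX = (34.99, 35.08, -106.66, -106.56)  # approximate lat/lon bounds

def starts_at_airport_by_box(track, box=ABQ_BOX):
    # Old way: does the first track sample fall inside the airport's box?
    lat, lon = track[0][0], track[0][1]
    return box[0] <= lat <= box[1] and box[2] <= lon <= box[3]

def starts_at_airport_by_tag(flight, airport="ABQ"):
    # New way: trust the source field baked into the reworked data.
    return flight.get("src") == airport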
The last problem (and one I don't have a good fix for) is velocity. Looking at the individual distances and times for track segments, I sometimes see situations where the calculated velocity is faster than the speed of sound (and I'm assuming Concordes aren't flying). Often this happens out in the middle of the ocean, where the data becomes sparser. It could be bad values in the data, or maybe two observers reporting the same thing differently. I'll need to take a closer look at it later. For now I just put in some kludge filters that throw out flights that are out of range.
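The kludge is essentially a per-segment speed check; a sketch reusing haversine_miles() from above (the cutoff is a guess, not my real threshold):

MAX_PLAUSIBLE_MPH = 767.0  # roughly the speed of sound at sea level

def has_plausible_speeds(track):
    # track: list of (lat, lon, epoch_seconds) samples; reject the whole
    # flight if any segment implies supersonic travel.
    for (lat1, lon1, t1), (lat2, lon2, t2) in zip(track, track[1:]):
        hours = (t2 - t1) / 3600.0
        if hours <= 0:
            return False  # duplicate or out-of-order timestamps
        if haversine_miles(lat1, lon1, lat2, lon2) / hours > MAX_PLAUSIBLE_MPH:
            return False
    return True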
I decided to go ahead and make some of the scripts I've been writing available on GitHub. This one is called Canonball Plotter, and it can be found at github:airline-plotters
2014-10-04 Sat web
A few weeks ago Matthew Pugh pointed me to Testing Grounds, a blog he started writing to motivate himself to learn more Python for statistical problems. The math is beyond me, but seeing the blog got me thinking about my old websites and how I haven't done anything with them for a long time. I used to write a personal blog and put my publications on CraigUlmer.com, but I stopped updating both around the same time we had kids. A few years back I started writing things on Google+. As much as people like to hate on G+, it's been a good outlet for me. It's easy to add new things, you can write short or long posts, and it's a great way to hear from all my friends who left the lab and went to Google.
Still, there are plenty of annoyances with G+. My main complaint is that it's effectively a "0-day content" site, where anything older than a day or so doesn't matter to anyone (well, except Google's data mining machines). It's easy for good posts to get buried in your stream if you've got a crazy uncle who likes to post about guns, cats, and Obama. When I first started posting, I often wondered if my posts had actually been dropped. It turned out they hadn't; other friends were just posting more stuff, which pushed mine down the list. At some level, you either get caught up in the popularity contest, or you write things for yourself and stop caring whether others read them. I'm the latter. The crazy people are also a problem on G+. Comments aren't as bad as YouTube, but there are a lot of people on G+ with nothing better to do than shout you down for mildly mentioning a view on a touchy subject. The last big concern is what Google will do with the content in the future. I don't think G+ will be going away anytime soon, but then again I didn't think Google Reader would go away either. I'd just as soon have a copy of what I write somewhere else.
Getting the Blog Back Together
So, I started thinking about ways I could pull some of my previous writing out of G+ and host it myself. Getting the content was easy: I just walked through my page and started cutting-and-pasting posts that were technical in nature. I use Emacs's org-mode a great deal at home and work, so I stuck the content in a flat file and used org's markup to group and tag posts. The images got placed in a local directory, but those were easy enough to link back into the org file. The more stuff I put into the org doc, the more it felt like the native format I wanted to host the blog from. The next step was looking for something that could render it into a webpage.
People have written tons of tools that convert org-mode data to other things. Org-mode has a built-in exporter to HTML or LaTeX. While this would have been the easiest to get running, I decided against it because static-site generators drive me crazy. I wrote a static generator for my first blog, and it was a pain to maintain. Plus, I don't like all the annotation you have to add to the org file to make it render right. Next, I saw that lots of people build wiki-like blogs with org-mode, using org's doc linking to stitch everything together. I wanted everything plugged into the same org file, though, and the styles never looked like what I wanted.
Back to the 90's
Since I needed some server-side code to chop things up anyways, I decided to revert to my 1990's view of the web and just write some CGI. I looked into using Go, as there's an easy-to-use CGI lib available. However, my provider doesn't have Go support, and the iterative development process of web work makes compiled languages a pain. So, sadly, after all my hopes of doing something new, I wound up just using Perl again.
Ok, to be fair, parsing and web page generation are the kind of thing Perl does very well. It was satisfying to know that I could throw in a couple of lines of regex and get useful features. The bulk of the work was coming up with a simple tokenizer that could drive a simple state machine and open/close sections properly. Fortunately, org's main keywords are easy to pick off, as they usually start at the beginning of a line. Bulletized lists were a pain, though, as were multi-line source blocks like:

#!/bin/bash
echo "Hello, suckers!"
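For flavor, here's a toy version of that kind of line-driven converter in Python; the real parser is Perl, and it handles far more than headings and paragraphs:

import re

def org_to_html(lines):
    # Toy state machine: emit headings for org stars, wrap the rest in <p>.
    html, in_para = [], False
    for line in lines:
        heading = re.match(r"^(\*+)\s+(.*)", line)
        if heading:
            if in_para:
                html.append("</p>")
                in_para = False
            level = min(len(heading.group(1)), 6)
            html.append("<h%d>%s</h%d>" % (level, heading.group(2), level))
        elif not line.strip():
            if in_para:
                html.append("</p>")
                in_para = False
        else:
            if not in_para:
                html.append("<p>")
                in_para = True
            html.append(line)
    if in_para:
        html.append("</p>")
    return "\n".join(html)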
One thing I probably still should do is add tables. Org-mode has an awesome table entry system that makes it easy to create tables and put new things into them. I think that should be a simple fix, but I'll get to it when I need it.
I probably spent just as much time writing the parser as I did trying to figure out how to make HTML do what I wanted. I know it's not popular, but I decided to make this page a side scroller. Vertical scrollers are nice, but I get sick of dragging down to get to the next thing. Everyone has widescreen monitors these days; I don't know why more people don't do stuff horizontally. At the last minute, I decided not to give the top heading bar an absolute position. I like the idea of having navigation fields in a constant location, but I don't like the idea of losing screen space. If you have a better compromise, let me know. For the record, yeah, I know this looks pretty 90s-ish. The 90s were still good times, though.
If you're reading this, then it looks like I've gotten the whole thing up and running. My intention is to post longer writeups on technical things to craigulmer.com and crosspost to G+, so people can send me comments there. I probably won't post personal blog stuff here, as I intend this to be the place where people can see what kind of work I do (plus, the world has gotten a lot creepier since I first started running a personal blog, so I'm a lot more worried about what I say these days). Here's to hoping this isn't the last post made on this blog.
How does military conflict affect commercial airline flight paths? Below are daily flight heat maps around Ukraine from late March to early April. Each frame is an aggregate of all flights for a particular day; the darker the color, the more flights went there that day. On 3/31, you can see a lot of flights circling a city in Crimea (maybe Simferopol?). The next day, the airlines started steering their flights around Crimea (except for some flights coming from Russia).
I know others have plotted this before, but it's been fun writing something to walk through the data on my own. Python/Matplotlib was taking forever, so I wrote something in Go that allowed me to clean up the original data (eg, split dateline-crossing flights) and filter it down to regions of interest. I still plot with Matplotlib, but now that the data is simplified it takes a lot less time to do.
This week at home I've been parsing through some flight data I acquired, looking for a good way to generate heat maps of where planes fly. I thought of a few elaborate ways to do this, but in the end I just ran a chunk of 10k flights through a simple xy plotter in matplotlib using a low alpha (transparency) value. It's interesting to see the flight corridors emerge as you throw more data at it. The plot below is about half of the data for a single day.
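The whole trick fits in a few lines of matplotlib; a minimal sketch, assuming each flight is a list of (lon, lat) samples:

import matplotlib.pyplot as plt

def plot_flight_heatmap(tracks):
    # Overplot every track with a tiny alpha so that dense flight
    # corridors accumulate into darker lines.
    for track in tracks:
        lons, lats = zip(*track)
        plt.plot(lons, lats, color="black", alpha=0.02, linewidth=0.5)
    plt.xlabel("Longitude")
    plt.ylabel("Latitude")
    plt.show()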
I've now posted the plotting code on GitHub, under my airline-plotters project: github:airline-plotters
The downside of reading expired-copyright ebooks is that many of them use offensive, racist terms. I first noticed this a few weeks ago when I downloaded "Tom Swift and His Motor Cycle" to read to my son. I'd heard of the Swift books, and the premise sounded fun (young inventor has adventures and fixes anything with mechanical problems). Unfortunately, a short way in, Tom meets a bumbling African American named Eradicate, who equates technology with magic and speaks in broken English. It's pretty awful stuff and has a lot of racist terms in it (I wound up doing a lot of filtering and correction while reading it to my son). In the edit logs of Wikipedia's Tom Swift page, I was also surprised to see that a lot of the "this is offensive" comments were struck down by a moderator's "show me proof" comments.
Anyways, I wanted to see how many other books in Project Gutenberg have racist terms. I downloaded the 2010 ISO with 29k books and then went about writing a Go program to count offensive terms and score each book (1 point for each "could be" term, 100 points for each "definitely" term). The code flagged some 5,900 books, though many of those innocently use terms that have been commandeered by racists. Below is a plot, where each square is a book in the collection (grey is zero, dark is a low score, light is a high score).
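The scoring scheme is simple enough to sketch; here's the idea in Python (the actual program is written in Go, and the word lists below are placeholders rather than the real ones):

import re

MAYBE_TERMS = {"placeholder_ambiguous_term"}  # "could be" words: 1 point each
DEFINITE_TERMS = {"placeholder_slur"}         # "definitely" words: 100 points each

def score_book(text):
    # Tokenize crudely and tally weighted hits across the whole book.
    score = 0
    for word in re.findall(r"[a-z']+", text.lower()):
        if word in DEFINITE_TERMS:
            score += 100
        elif word in MAYBE_TERMS:
            score += 1
    return score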
While there are plenty of books that use the terms for a good purpose (eg, Huck Finn), I suspect many of them use the terms simply because that's how society was a hundred years ago, unfortunately. I think I'll be doing some more greps before I pick another bedtime story.
People asked me for the code for this, so I posted it on GitHub. You can find it (and documentation) in my gut-buster project: github:gut-buster