Celebrating the 4000th git commit!

Few people really know much about the AuraUAS autopilot system, but this post celebrates the 4000th git commit to the main code repository!

The entire AuraUAS system is hosted on github and can be browsed here:


AuraUAS traces its roots back to a simple open-source autopilot developed by Jung Soon Jang to run on the XBOW MNAV/Stargate hardware back in the 2005-2006 time frame.  I worked at the University of Minnesota at that time, and we modified the code to run on an original 400 MHz Gumstix linux computer which talked to the MNAV sensor head via a serial/UART connection.

From the mid-2000s through the 2010s I have been advancing this code in support of a variety of fixed-wing UAS projects.  Initially I called the system "MicroGear" or "ugear," which was a nod of the head to my other long-term open-source project: FlightGear.  Along the way I aligned myself with a small Alaska-based aerospace company called "Airborne Technologies," or ATI for short.  We branched a version of the code specifically for projects developed under NOAA funding as well as for various internal R&D.  However, throughout its development the code always stayed true to its open-source heritage.

In the summer of 2015 I took a full time position in the UAS lab at the Aerospace Engineering Department of the University of Minnesota.  Here I have been involved in a variety of UAS-related research projects and have assumed the role of chief test pilot for the lab.  AuraUAS has been central to several projects at the UAS lab, including a spin-testing project, a PhD project to develop single-surface fault detection and a single-surface flight controller on a flying wing, and several aerial survey projects.  I continue to develop AuraUAS in support of ongoing research projects.

Design choices

What makes AuraUAS different?  What makes AuraUAS interesting?  Why is it important to me?

Big processor / little processor architecture

From the start AuraUAS has been designed with the expectation of a small embedded (Arduino-style) processor to handle all the sensor inputs as well as the actuator outputs.  A "big processor" (e.g. a Raspberry Pi, BeagleBone, Gumstix, or Edison) is used for all the higher-level functionality such as the EKF (attitude determination), flight control, mission management, communication, and logging.  The advantage is that the system can be built from two smaller and simpler programs.  The "little" processor handles all the hard real-time tasks.  This frees up the "big" processor to run a standard linux distribution along with all its available libraries and fanciness.  So AuraUAS is built around two simpler programs, versus the one really complicated program architecture that most other autopilot systems use.

Single thread architecture

Because of the big/little processor architecture, the big processor doesn't need to do hard real-time tasks, and thus can be written using a single-threaded architecture.  This leads to code that is far simpler, and far easier to understand, manage, and maintain.  Anyone who has tried to understand and modify someone else's threaded code might have a small inkling of why this could be important.  How many large applications suffer through a variety of obscure, rare, and nearly impossible to find bugs that maybe trace to the threading system, but no one knows for sure?


The "big" processor in the AuraUAS system runs linux, so we can easily incorporate python in the mission critical main loop of the primary flight computer.  This has the advantage of further simplifying coding tasks and shortening the edit/compile/test development loop, because there is often no need to compile and reflash code changes between test runs.  You can even do development work remotely, right on the flight computer.  For those who are skeptical of python in the critical main loop, I have successfully flown this system for two full flight seasons … all of 2016 and all of 2017.  The main flight computer is hitting its 100 Hz performance target and memory usage is stable.  Speaking for myself, my python code is almost always less buggy than my C code.
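To make the single-threaded, fixed-rate idea concrete, here is a minimal Python sketch of a 100 Hz main loop.  The subsystem functions are hypothetical stand-ins for illustration, not the actual AuraUAS module API:

```python
import time

calls = []  # record of subsystem invocations, for illustration only

# Hypothetical stand-ins for the real flight software modules.
def read_sensors():   calls.append("sensors")
def update_ekf():     calls.append("ekf")
def update_control(): calls.append("control")
def log_data():       calls.append("log")

def main_loop(hz=100, frames=3):
    """Run each subsystem once per frame at a fixed rate, single-threaded."""
    dt = 1.0 / hz
    next_frame = time.time()
    for _ in range(frames):     # a real loop runs until shutdown
        read_sensors()          # ingest the latest little-processor data
        update_ekf()            # attitude estimation
        update_control()        # flight control laws
        log_data()              # logging and telemetry
        next_frame += dt
        delay = next_frame - time.time()
        if delay > 0:
            time.sleep(delay)   # sleep away the rest of the frame

main_loop()
```

Every module runs to completion each frame in a fixed order, so there are no locks, no race conditions, and the data each module sees is always consistent.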

In my experience, when porting C/C++ code to python, the result is a 50, 60, even 70% reduction in the number of lines of code.  I believe that fewer lines of code == more readable code on average.  More readable code means fewer bugs and bugs are often found more quickly.

Python/C++ hybrid

AuraUAS is a hybrid C++/Python application.  Some modules are written in C++ and some modules are written in Python.  My choice of language for a particular module tends to center around performance versus logic.  Our 15-state EKF (also developed in-house at the U of MN) remains C++ code for performance reasons.  The mission manager and individual tasks are all written in python.  Python scripts greatly accelerate coding of higher-level logic tasks.  These typically are not performance critical, and it's a great fit.

What makes this hybrid language system possible is a central property tree structure that is shared between the C++ and Python modules within the same application.  Imagine something like an object in javascript or a dict() in python (or imagine a json structure, or even an xml structure.)  We build an in-memory tree structure that contains all the important shared data within the application, and then modules can read/write to the structure as they wish in a collaborative way.  This property tree fills much the same role as "uorb" in the px4 software stack.  It glues all the various modules together and provides a structured way for them to communicate.
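As an illustration of the idea (this is a minimal sketch, not the actual AuraUAS property tree API), here is a tiny Python version of a shared tree addressed by slash-separated paths:

```python
class PropertyNode:
    """A minimal property-tree sketch: nested nodes addressed by slash paths."""
    def __init__(self):
        self._children = {}
        self._values = {}

    def node(self, path):
        """Return (creating if needed) the node at a slash-separated path."""
        node = self
        for name in path.strip("/").split("/"):
            node = node._children.setdefault(name, PropertyNode())
        return node

    def set(self, name, value):
        self._values[name] = value

    def get(self, name, default=None):
        return self._values.get(name, default)

# One shared root: a sensor module writes, a control module reads.
root = PropertyNode()
root.node("/sensors/imu").set("pitch_deg", 2.5)
pitch = root.node("/sensors/imu").get("pitch_deg")
```

Because every module holds a reference to the same root, any value written by one module is immediately visible to all the others, in either language, without any message-passing machinery.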

Simplicity and robustness

When you hand over control of an airplane to an autopilot (even a small model airplane) you are putting an immense amount of trust in that hardware, firmware and software.  Software bugs can crash your aircraft.  It’s important for an autopilot to be immensely robust, for the code base to be stable and change slowly, for new changes to be extensively tested.  The more complicated a system becomes, the harder it is to ensure robust, bug free operations.

Throughout the development of the AuraUAS project, the emphasis has been on keeping the code and structure simple and robust.  The overall goal is to do a few core simple things very, very well.  There are other autopilot systems that have every feature that anyone has ever suggested or wanted to work on;  they support every sensor, run on every possible embedded computer board, and can fly every possible rotor and fixed wing airframe configuration.  I think it's great that px4 and ardupilot cover this ground and provide a big tent that welcomes everyone.  But I think they do pay a price in terms of code complexity, which in turn has implications for introducing, collecting, and hiding bugs.

The first commit in the AuraUAS repository traces back to about 2006.  So here's to nearly 12 years of successful development!


Failure is not fatal

This post is penned during a moment of extreme frustration, beware!

Kobayashi Maru


One of the reasons I loved the original Star Trek series is because no matter what the odds, no matter how hopeless the circumstances, no matter how impossible the foe, Captain Kirk always found a way to think his way out of the mess.  He never ultimately failed or lost to an opponent, not once, not ever.  That makes a great hero and fun TV!  Fictional super heroes do things that normal human beings could never possibly do … like fly, or be stronger than steel, or always win.

Stress and the Brain

I don’t have time to read stuff like the following link, especially when I’m coming up short of a promised deadline.  Maybe you do?  http://www.health.harvard.edu/staying-healthy/understanding-the-stress-response

I’m told that when we begin to get stressed, the front area of the brain that is responsible for logic and reason starts to shut down, and command functions begin to be transferred back to the “fight or flight” portion of the brain.  I think about standing up in front of a group and speaking, then sitting down and wondering what I even said?  I think about arguments that got out of hand?  Where was the front part of my brain in all of that?  I think about looming deadlines and mounting stress … and … and … and mounting stress!

Recursive Stress

My job largely amounts to puzzle solving.   I love the process and I love finding clever solutions.  But if I ask you a riddle or give you a logic problem, can you give me a specific estimate of how much time it will take you to solve it?  That's not how puzzle solving works; it's not a step-by-step recipe that leads to a solution in a known time.  Failing to solve the problem in time stresses me out!  What is needed in these situations is clear, logical, and calm thinking.  But that is the first part of the brain to turn off during stressful situations!  It's exactly the part of the brain we desperately need the most.  I know all this, and I watch helplessly as it happens.  What does that create?  More stress of course, which accelerates the process of losing the most important part of my brain!

What is the solution?

No, seriously, what is the solution???

People often say they do their best work under pressure.  I know for myself, I do my worst work under pressure.  I strive whenever possible to get a long head start on a complex and difficult task.  I strive whenever possible to identify and solve the hardest parts of the task first.  But that isn’t always possible.

So instead I sometimes see failure coming weeks away, maybe like an asteroid on a collision course with earth.  I’m very serious about the task, I do everything I possibly can, I pour in all my energy and expertise, but it’s not always enough.  Things I thought would be easy turn out to be 10x more difficult than imagined.  Things that were working break for unexpected reasons.  Things that shouldn’t take time, take way too much precious time.

Captain Kirk to the rescue?  Sadly, no … he is a fictional character.  In the real world the asteroid looms bigger and bigger, its trajectory a mathematical certainty.  The physics of the impact can largely be predicted.  At some point it becomes clear my efforts will fall short and there's nothing left to do but watch the asteroid in its last few hours of flight.  Then <boom>.

Is it just me that fails colossally?

It usually seems like I’m the only one that makes a miserable mess of things I try to do, the things I’ve promised I could do, things I’ve been paid to do.  Everyone else is posting about their giant success on facebook.  Everyone else’s resume is a spotless collection of triumphs.  But not me.  Maybe once or twice I got lucky and the rest of the time is a huge struggle!  Honestly though, the only reason I’m posting this is because I know it’s not just me.  Any sports fans out there?  How many teams and players succeed to win the championship every season?  What percentage of players ever win a championship in their whole career?  Political success and failure?  How many new businesses succeed versus failing?

High Profile Failures

By mentioning specific companies, I don't mean to single out specific people or imply anything negative here.  My intent is to show we are all in this together and we all, even the best and most successful of us, suffer setbacks in our work.  I live and work in the world of drones (or small unmanned aerial systems, aka UAS's).  This is a tough business.  For all the hype and excitement, even big companies can struggle.  GoPro recently did a full recall of their new drone product.  Hopefully they'll try again in 2017, and hopefully the process will go better for them.  Recently 3DR, the king of DIY drones, announced they were cancelling all their hardware efforts to focus on a software package for managing data collected by drones.  Parrot (another big name in the small drone market) just announced layoffs.  Edit: 12 Jan, Lily just announced it is dropping out of the drone race and shutting down.  Edit: Facebook Aquila crashed on its first test flight.  Edit: Titan (Google's own solar-powered high-altitude effort) is shut down.  It's tough, even for the big guys with enough money to hire the best engineers and best managers and do professional marketing.

There are even higher profile failures than these … Chernobyl, Tiger Woods, the Titanic, the Exxon Valdez, the Challenger, and most Sylvester Stallone movies except for Rocky I.

The Aftermath

So the asteroid hit.  In the last moments we just threw up our hands, gave up, and watched it come in and do its destruction.  The dust is settling, so what happens next?  Maybe the asteroid wasn't as big as we imagined?  Maybe the damage was not as severe?  Maybe life goes on.  In a work context, maybe you take a big professional hit on your reputation?  Maybe you don't?  Maybe it's about survival and living to fight another day?

Failures suck.  Failures happen to everyone.  Failures are [usually] not fatal.  The sun will still rise tomorrow for most of us.

Survival – Yes?!?

If you are reading this, you are still here and still surviving.  That's great news!  Hopefully I'm here too!  Let's all live to fight another day.  Let's all help each other out when we can!  There is a word called "grace" which is worth looking up if you don't know what it means.  It's a quantity that we all need more of, and we all need to give each other in big healthy doses!

“Success is not final; failure is not fatal. It is the courage to continue that counts.” — Budweiser ad, 1938.  (Not Winston Churchill)

Image Stitching Tutorial Part #2: Direct Georeferencing

What is Direct Georeferencing?


The process of mapping a pixel in an image to a real world latitude, longitude, and altitude is called georeferencing.  When UAS flight data is tightly integrated with the camera and imagery data, the aerial imagery can be directly placed and oriented on a map.   Any image feature can be directly located.  All of this can be done without needing to feature detect, feature match, and image stitch.  The promise of “direct georeferencing” is the ability to provide useful maps and actionable data immediately after the conclusion of a flight.

Typically a collection of overlapping images (possibly tagged with the location of the camera when each image was snapped) is stitched together to form a mosaic.  Ground control points can be surveyed and located in the mosaic to georectify the entire image set.   However, the process of stitching imagery together can be very time consuming, and specific image sets can fail to stitch well for a variety of reasons.

If a good estimate for the location and orientation of the camera is available for each picture, and if a good estimate of the ground surface is available, the tedious process of image stitching can be skipped and image features can be georeferenced directly.
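Under simplifying assumptions, the core projection can be sketched in a few lines of Python.  This sketch assumes a nadir-pointing camera rotated only by aircraft yaw over flat terrain; a real pipeline applies the full roll/pitch/yaw rotation and a terrain model.  The function name and coordinate conventions are illustrative, not from any particular codebase:

```python
import math

def georeference_pixel(u, v, K, cam_north, cam_east, cam_alt, yaw_deg, ground_alt):
    """Project image pixel (u, v) onto flat ground at ground_alt (meters).

    Simplifying assumptions: nadir-pointing camera, yaw-only rotation;
    at yaw 0 the image top faces north, so +u is east and +v is south.
    """
    fx, fy = K[0][0], K[1][1]          # focal lengths in pixels
    cx, cy = K[0][2], K[1][2]          # principal point
    h = cam_alt - ground_alt           # camera height above ground
    # Ground offset in the camera-aligned frame (meters).
    right = (u - cx) / fx * h          # toward image +u
    down  = (v - cy) / fy * h          # toward image +v
    # Rotate the body-frame offset (forward, right) into world north/east.
    fwd, rgt = -down, right
    yaw = math.radians(yaw_deg)
    north = cam_north + fwd * math.cos(yaw) - rgt * math.sin(yaw)
    east  = cam_east  + fwd * math.sin(yaw) + rgt * math.cos(yaw)
    return north, east
```

The principal point maps straight down to the camera's own ground position, and every other pixel lands proportionally farther out as the camera flies higher, which matches the intuition that footprint scales with altitude.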

Where is Direct Georeferencing Suitable for use?


There are a number of reasons to skip the traditional image stitching pipeline and use direct georeferencing.  I list a few here, but if readers can think of other use cases, I would love to hear your thoughts and expand this list!

  • Any situation where we are willing to trade a seamless 'nice' looking mosaic for fast results.  E.g. a farmer in a field who would like to make a same-day decision rather than wait a day or two for image stitching results to be computed.
  • Surveys or counting.  One use case I have worked on is marine surveys.  In this use case the imagery is collected over open ocean with no stable features to stitch.  Instead we were more interested in finding things that were not water and getting a quick but accurate location estimate for them.  Farmers might want to count livestock, land managers might want to locate dead trees, researchers might want to count bird populations on a remote island.



How can Direct Georeferencing Improve the Traditional Image Stitching Pipeline?

There are some existing commercial image stitching applications that are very good at what they do.  However, they are closed-source and don’t give up their secrets easily.  Thus it is hard to do an apples-to-apples comparison with commercial tools to evaluate how (and how much) direct georeferencing can improve the traditional image stitching pipeline.  With that in mind I will forge ahead and suggest several ways I believe direct georeferencing can improve the traditional methods:

  • Direct georeferencing provides an initial 3d world coordinate estimate for every detected feature before any matching or stitching work is performed.
  • The 3d world coordinates of features can be used to pre-filter match sets between images.  When doing an n vs. n image compare to find matching image pairs, we can compare only images with feature sets that overlap in world coordinates, and then only compare the overlapping subset of features.  This speeds up the feature matching process by reducing the number of individual feature comparisons.  This increases the robustness of the feature matching process by reducing the number of potential similar features in the comparison set.  And this helps find potential match pairs that other applications might miss.
  • After an initial set of potential feature matches is found between a pair of images, these features must be further evaluated and filtered to remove false matches.  There are a number of approaches for filtering out incorrect matches, but with direct georeferencing we can add real world proximity to our set of strategies for eliminating bad matches.
  • Once the entire match set is computed for the entire image set, the 3d world coordinate for each matched feature can be further refined by averaging the estimate from each matching image together.
  • When submitting point and camera data to a bundle adjustment algorithm, we can provide our positions already in 3d world coordinates.  We don’t need to build up an initial estimate in some arbitrary camera coordinate system where each image’s features are positioned relative to neighbors.  Instead we can start with a good initial guess for the location of all our features.
  • When the bundle adjustment algorithm finishes, we can compare the new point location estimates against the original estimates and look for features that have moved implausibly far.  This could be evidence of remaining outliers.

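As a sketch of the world-coordinate pre-filter idea from the list above (the function, data layout, and 30 m threshold are all hypothetical, not from any particular codebase):

```python
import math

def prefilter_matches(candidates, coords_a, coords_b, max_dist_m=30.0):
    """Drop candidate feature matches whose direct-georeferenced world
    coordinates are implausibly far apart.

    candidates: list of (i, j) feature index pairs from a descriptor matcher.
    coords_a, coords_b: per-feature (north, east) estimates in meters.
    max_dist_m: hypothetical threshold; tune it to your position accuracy.
    """
    kept = []
    for i, j in candidates:
        na, ea = coords_a[i]
        nb, eb = coords_b[j]
        if math.hypot(na - nb, ea - eb) <= max_dist_m:
            kept.append((i, j))
    return kept
```

Descriptor similarity alone can pair up visually similar features from opposite ends of the survey area; a cheap world-distance check like this rejects those pairs before they ever reach the more expensive geometric filters.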
Throughout the image stitching pipeline, it is critical to create a set of image feature matches that link all the images together and cover the overlapping portions of each image pair as fully as possible.  False matches can cause ugly imperfections in the final stitched result, and they can cause the bundle adjustment algorithm to fail, so it is critical to find a good set of feature matches.  Direct georeferencing can improve this critical feature matching process.

What Information is Required for Direct Georeferencing?

  • An accurate camera position and orientation for each image.  This may require the flight data log from your autopilot and an accurate (sub-second) time stamp for when the image was snapped.
  • An estimate of the ground elevation or terrain model.
  • Knowledge of your camera’s lens calibration (“K” matrix.)  This encompasses the field of view of your lens (sensor dimensions and focal length) as well as the sensor width and height in pixels.
  • Knowledge of your camera's lens distortion parameters.  Action cameras like a GoPro or Mobius have significant lens distortion that must be accounted for.
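For a simple camera, the "K" matrix can be approximated directly from datasheet numbers.  This Python sketch assumes the principal point sits at the image center; a proper lens calibration (e.g. a checkerboard solve) would refine both the focal lengths and the principal point:

```python
def camera_matrix(focal_mm, sensor_w_mm, sensor_h_mm, image_w_px, image_h_px):
    """Build an approximate pinhole camera matrix K from datasheet numbers.

    Assumes the principal point at the image center; a real calibration
    refines these values and also estimates the distortion parameters.
    """
    fx = focal_mm / sensor_w_mm * image_w_px   # focal length in pixels (x)
    fy = focal_mm / sensor_h_mm * image_h_px   # focal length in pixels (y)
    cx, cy = image_w_px / 2.0, image_h_px / 2.0
    return [[fx, 0.0, cx],
            [0.0, fy, cy],
            [0.0, 0.0, 1.0]]
```

For example, a hypothetical 4.5 mm lens on a 6.17 x 4.55 mm sensor producing 4000 x 3000 pixel images gives fx of roughly 2917 pixels.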

Equipment to Avoid

I don't intend this section to be negative; every tool has strengths and times when it is a good choice.  However, it is important to understand which equipment choices work better and which may present challenges.

  • Any camera that makes it difficult to snap a picture precisely (< 0.1 seconds) after the trigger request.  Autofocus commercial cameras can introduce random latency between the camera trigger and the actual photo.
  • Rolling shutter cameras.  This is just about every commercial off-the-shelf camera, sadly, but rolling shutter introduces warping into the image which can add uncertainty to the results.  This can be partially mitigated by setting your camera to a very high shutter speed (e.g. 1/800th or 1/1000th of a second.)
  • Cameras with slow shutter speeds or cameras that do not allow you to set your shutter speed or other parameters.
  • Any camera mounted to an independent gimbal.  A gimbal works nicely for stable video, but if it separates the camera orientation from the aircraft orientation, then we can no longer use the aircraft orientation to compute the camera orientation.
  • Any flight computer that doesn’t let you download a complete flight log that includes real world time stamp, latitude, longitude, altitude, roll, pitch, and yaw.

The important point I am attempting to communicate is that tight integration between the camera and the flight computer is an important aspect of direct georeferencing.  Strapping a gimbaled action cam to a commercial quadcopter very likely will not allow you to extract all the information required for direct georeferencing.