Autopilot Visualization

Blending real video with synthetic data yields a powerful (and cool!) way to visualize your Kalman filter (attitude estimate) as well as your autopilot flight controller.


Conformal HUD Elements

Conformal definition: of, relating to, or noting a map or transformation in which angles and scale are preserved.  For a HUD, this means the synthetic element is drawn in a way that visually aligns with the real world.  For example: the horizon line is conformal if it aligns with the real horizon line in the video.

  • Horizon line annotated with compass points.
  • Pitch ladder.
  • Location of nearby airports.
  • Location of sun, moon, and own shadow.
  • If alpha/beta data is available, a flight path marker is drawn.
  • Aircraft nose (i.e. exactly where the aircraft is pointing).
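Rendering a conformal element boils down to projecting a world-frame direction into the camera image using the EKF attitude estimate plus the camera's mounting orientation and intrinsics.  Here is a rough sketch of the math (not the actual rendering code; a plain pinhole camera with no lens distortion is assumed):

```python
import numpy as np

def project_direction(ned_dir, C_body2ned, C_cam2body, fx, fy, cx, cy):
    """Project a unit direction vector (NED frame) into pixel coordinates.

    C_body2ned comes from the EKF attitude estimate; C_cam2body is the fixed
    camera mounting rotation.  Returns None if the point is behind the camera.
    e.g. the horizon point due north is the unit vector [1, 0, 0] in NED.
    """
    cam_dir = C_cam2body.T @ (C_body2ned.T @ np.asarray(ned_dir))  # world -> body -> camera
    if cam_dir[2] <= 0:                                            # behind the image plane
        return None
    u = fx * cam_dir[0] / cam_dir[2] + cx
    v = fy * cam_dir[1] / cam_dir[2] + cy
    return u, v
```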

Nonconformal HUD Elements

  • Speed tape.
  • Altitude tape.
  • Pilot or autopilot ‘stick’ commands.

Autopilot HUD Elements

  • Flight director vbars (magenta).  These show the target roll and pitch angles commanded by the autopilot.
  • Bird (yellow).  This shows the actual roll and pitch of the aircraft.  The autopilot attempts to keep the bird aligned with the flight director using aileron and elevator commands.
  • Target ground course bug (shown on the horizon line) and actual ground course.
  • Target airspeed (drawn on the speed tape.)
  • Target altitude (drawn on the altitude tape.)
  • Flight time (for referencing the flight data.)

Case Study #1: EKF Visualization

(Note this video was produced earlier in the development process and doesn’t contain all the HUD elements described above.)

What to watch for:

  • Notice the jumpiness of the yellow “v” on the horizon line.  This “v” shows the current estimated ground track, but the jumpiness points to an EKF tuning parameter issue that has since been resolved.
  • Notice a full autonomous wheeled take off at the beginning of the video.
  • Notice some jumpiness in the HUD horizon and attitude and heading of the aircraft.  This again relates back to an EKF tuning issue.

I may never have noticed the EKF tuning problems had it not been for this visualization tool.

Case Study #2: Spin Testing

What to watch for:

  • Notice the flight path marker, which shows actual alpha/beta as recorded by the airdata vanes.
  • Notice how the conformal alignment of the HUD diverges from the real horizon, especially during aggressive turns and spins.  The EKF fits the aircraft attitude estimate through gps position and velocity, and aggressive maneuvers lead to gps errors (satellites go in and out of visibility, etc.)
  • Notice that no autopilot symbology is drawn because the entire flight is flown manually.

Case Study #3: Skywalker Autopilot

What to watch for:

  • Notice the yellow “v” on the horizon is still very jumpy.  This is the horizontal velocity vector direction which is noisy due to EKF tuning issues that were not identified and resolved when this video was created.  In fact it was this flight where the issue was first noticed.
  • Notice the magenta flight director is overly jumpy in response to the horizontal velocity vector being jumpy.  Every jump changes the current heading error which leads to a change in roll command which the autopilot then has to chase.
  • Notice the flight attitude is much smoother than the above Senior Telemaster flight.  This is because the skywalker EKF incorporates magnetometer measurements as well as gps measurements and this helps stabilize the filter even with poor noise/tuning values.
  • You may notice some crazy control overshoot on final approach.  Ignore this!  I was testing an idea and got it horribly wrong.  I’m actually surprised the landing completed successfully, but I’ll take it.
  • Notice in this video the horizon stays attached pretty well.  Much better than in the spin-testing video due to the non-aggressive flight maneuvers, and much better than the telemaster video due to using a more accurate gps: ublox7p versus ublox6.  Going forward I will be moving to the ublox8.

Case Study #4: Results of Tuning

What to watch for:

  • Notice that visually, the HUD horizon line stays pegged to the camera horizon within about a degree for most of this video.  The EKF math says +/-3 standard deviations is about 1.4 degrees in pitch and roll.
  • You may notice a little more variation in heading.  +/-3 standard deviations in heading is roughly 4 degrees.
  • Now that the EKF is tamed a bit better, we can start to tune the PID’s and go after some more subtle improvements in flight control.  For example, this skywalker is kind of a floppy piece of foam.  I estimate that I have to hold a 4-5 degree right bank to fly straight.  We can begin to account for these aircraft specific nuances to improve tracking, autoland performance, etc.

Howto?

These flight visualization videos are created with an automated process using open source tools and scripts. I have started a post on how to create these videos yourself.

Mistakes!


I make thousands of mistakes a day, mistakes typing, mistakes coding software, mistakes driving, mistakes walking, forgetting to order my sandwich without mayo, etc.  Most of the time they are immediately obvious — a red squiggly line under a word I mistyped, a compiler spewing an error message on line #42, a stubbed toe, my gps suggesting a u-turn at the next intersection, etc.


But what happens when the mistake isn’t obvious, isn’t noticed immediately, and doesn’t cause everything around me to immediately fail?  Often these mistakes can have a long lifespan.  Often we discover them when we are looking for something else.

Mistakes from the Trenches.

I wanted to write about a few subtle unnoticed mistakes that lurked in the AuraUAS code for quite some time.

Temperature Calibration #Fail

AuraUAS has a really cool capability where it can estimate the bias (error) of the accelerometers during flight.  The 15-state EKF does this as part of its larger task of estimating the aircraft’s attitude, location, and velocity.  These bias estimates along with the corresponding IMU temperature can be used to build up a temperature calibration fit for each specific IMU based on flight data over time.  The more you fly in different temperature conditions, the better your temperature calibration becomes.  Sweet!  Calibrated accelerometers are important because accel calibration errors directly translate to errors in initial roll and pitch estimates (like during launch or take off where these values can be critical.)  Ok, the EKF will sort them out once in the air, because that is a cool feature of the EKF, but it can’t work out the errors until after flying a bit.

The bias estimates and temperature calibration fit are handled by post-flight python scripts that work with the logged flight data.  Question: should I log the raw accel values or should I log the calibrated accel values?  I decided I should log the calibrated values and then use the inverse calibration fit function to derive the original raw values after the flight.  Then I use these raw values to estimate the bias (errors), add the new data to the total collection of data for this particular IMU, and revise the calibration fit.  The most straightforward path is to log calibrated values on board during flight (in real time) and push the complicated stuff off into post processing.

However, I made a slight typo in the property name of the temperature range limits for the fit (we only fit within the range of temperatures we have flight data for.)  This meant the on-board accel correction was forcing the temperature to 27C (ignoring the actual IMU temperature.)  However, when backing out the raw values in post processing, I was using the correct IMU temperature and thus arriving at a wrong raw value.  What a mess.  That means a year of calibration flight data is basically useless and I have to start all my IMU calibration learning over from scratch.  So I fixed the problem and we go forward from here with future flights producing a correct calibration.
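To make the idea concrete, here is a minimal sketch of the post-flight fitting and inverse-correction steps.  My real scripts differ in the details; the function names and polynomial order here are just for illustration:

```python
import numpy as np

def fit_temp_calibration(temps_C, bias_estimates, order=2):
    """Fit accel bias (m/s^2) as a polynomial in IMU temperature (C).
    The inputs accumulate over many flights; more temperature spread
    in the flight data gives a better fit."""
    coeffs = np.polyfit(temps_C, bias_estimates, order)
    return coeffs, (min(temps_C), max(temps_C))   # fit only valid inside this range

def correct_accel(raw_accel, temp_C, coeffs, temp_range):
    """Onboard correction: subtract the predicted bias at this temperature."""
    t = min(max(temp_C, temp_range[0]), temp_range[1])
    return raw_accel - np.polyval(coeffs, t)

def recover_raw_accel(calibrated_accel, temp_C, coeffs, temp_range):
    """Post-flight inverse: add the bias back to recover the raw value.
    This only works if the temperature used here matches what the onboard
    code actually used -- exactly the assumption my property-name typo broke."""
    t = min(max(temp_C, temp_range[0]), temp_range[1])
    return calibrated_accel + np.polyval(coeffs, t)
```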

Integer Mapping #Fail

This one is subtle.  It didn’t produce incorrect values, it simply reduced the resolution of the IMU gyros by a factor of 4 and the accels by a factor of 2.

Years ago when I first created the apm2-sensor firmware — that converts a stock APM2 (atmega2560) board into a pure sensor head — I decided to change the configured range of the gyros and accels.  Instead of +/-2000 degrees per second, I set the gyros for +/-500 degrees per second.  Instead of +/-8 g’s on the accels, I set them for +/-4 g’s.  The sensed values get mapped to a 16 bit integer, so using a smaller range results in more resolution.

The APM2 reads the raw 16 bit integer values from the IMU and converts this to radians per second.  However, when the APM2 sends these values to the host, it re-encodes them from a 4-byte float to a 2-byte (16-bit) integer to conserve bandwidth.  Essentially this undoes the original decoding operation to efficiently transmit the values to the host system.  The host reads the encoded integer value and reconverts it into radians per second for the gyros (or mps^2 for the accels.)

The problem was that for encoding and decoding between the APM2 and the host, I used the original scaling factor for +/-2000 dps and +/-8g, not the correct scaling factor for the new range I had configured.  This mistake caused me to lose all the resolution I intended to gain.  Because the system produced the correct values on the other end, I didn’t notice this problem until someone asked me exactly what resolution the system produced, which sent me digging under the hood to refresh my memory.

This is now fixed in apm2-sensors v2.52, but requires a change to the host software as well so the encoding and decoding math agrees.  Now the IMU reports the gyro rates with a resolution of 0.015 degrees per second whereas previously the resolution was 0.061 degrees per second.  Both are actually pretty good, but it pained me to discover I was throwing away resolution needlessly.
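For the curious, those resolution numbers fall straight out of the scale factors; a quick back-of-the-envelope sketch (the constants come from the ranges discussed above, not from the firmware source):

```python
INT16_COUNTS = 2 ** 16   # a 16-bit integer spans 65536 counts

def resolution_dps(full_scale_dps):
    """Smallest gyro step when +/-full_scale_dps is mapped onto 16 bits."""
    return (2.0 * full_scale_dps) / INT16_COUNTS

print(resolution_dps(2000))   # ~0.061 deg/sec: the +/-2000 dps scale I mistakenly kept
print(resolution_dps(500))    # ~0.015 deg/sec: the +/-500 dps scale the IMU was set to
```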

Timing #Fail

This one is also very subtle; timing issues often are.  In the architecture of the AuraUAS flight controller there is an APM2 spitting out new sensor data at precisely 100 hz.  The host is a beaglebone (or any linux computer) running its own precise 100 hz main loop.  The whole system runs at 100 hz throughput and life is great — or so I thought.

I had been logging flight data at 25hz which has always been fine for my own needs.  But recently I had a request to log the flight data at the full 100 hz rate.  Could the beaglebone handle this?  The answer is yes, of course, and without any trouble at all.

A question came up about logging high rate data on the well known PX4, so we had a student configure the PX4 for different rates and then plot out the time slice for each sample.  We were surprised at the huge variations in the data intervals, ranging from way too fast, to way too slow, and rarely exactly what we asked for.

I know that the AuraUAS system runs at exactly 100hz because I’ve been very careful to design it that way.  Somewhat smugly I pulled up a 100hz data set and plotted out the time intervals for each IMU record.  The plot surprised me — my timings were all over the map and not much better than the PX4.  What was going on?

I took a closer look at the IMU records and noticed something interesting.  Even though my main loop was running precisely and consistently at 100 hz, it appeared that my system was often skipping every other IMU record.  AuraUAS is designed to read whatever sensor data is available at the start of each main loop iteration and then jump into the remaining processing steps.  Because the APM2 runs its own loop timing separate from the host linux system, the timing between sending and receiving (and uart transferring) can be misaligned so that when the host is ready to read sensor data, there might not be any yet, and next time there may be 2 records waiting.  It is subtle, but communication between two free running processor loops can lead to issues like this.  The end result is usually still ok, the EKF handles variable dt just fine, the average processing rate maybe drops to 50hz, and that’s still just fine for flying an airplane around the sky … no big deal right?  And it’s really not that big of a deal for getting the airplane from point A to point B, but if you want to do some analysis of the flight data and want high resolution, then you do have a big problem.

What is the fix?  There are many ways to handle timing issues in threaded and distributed systems.  But you have to be very careful; often what you get out of your system is not what you expected or intended.  In this case I have amended my host system’s main loop structure to throw away its own free running main loop.  I have modified the APM2 data output routine to send the IMU packet at the end of each frame’s output to mark the end of data.  Now the main loop on the host system reads sensor data until it receives an IMU packet.  Then and only then does it drop through to the remaining processing steps.  This way the timing of the system is controlled precisely by the APM2, the host system’s main loop logic is greatly simplified, and the per frame timing is far more consistent … but not consistent enough.
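Here is a rough sketch of the restructured host main loop.  The object and method names are illustrative, not the actual AuraUAS code:

```python
IMU_PACKET = 18   # example packet id; the real id is defined by the protocol

def run_main_loop(link, ekf, autopilot, logger):
    """Host main loop paced by the APM2: read packets until the IMU packet
    arrives (it is sent last in each frame), then run one frame of work."""
    while True:
        packet = link.read_packet()            # blocking read of the next packet
        link.update_property_tree(packet)      # gps, airdata, pilot inputs, ...
        if packet.id == IMU_PACKET:            # end-of-frame marker
            ekf.update()                       # attitude/velocity/position estimate
            autopilot.update()                 # navigation and PID control laws
            link.write_actuator_commands(autopilot.commands())
            logger.log_frame()
```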

The second thing I did was to include the APM2 timestamp with each IMU record.  This is a very stable, consistent, and accurate timestamp, but it counts up from a different starting point than the host.  On the host side I can measure the difference between host clock and APM2 clock, low pass filter the difference and add this filtered difference back to the APM2 timestamp.  The result is a pretty consistent value in the host’s frame of (time) reference.
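A minimal sketch of that idea (the filter time constant here is a placeholder, not the value I actually use):

```python
class ClockSync:
    """Map APM2 timestamps into the host's time reference by low-pass
    filtering the observed offset between the two clocks."""
    def __init__(self, time_constant_sec=10.0, dt=0.01):
        self.alpha = dt / time_constant_sec   # simple first-order filter gain
        self.offset = None

    def host_time(self, apm2_time, host_receive_time):
        raw_offset = host_receive_time - apm2_time
        if self.offset is None:
            self.offset = raw_offset          # initialize on the first sample
        else:
            self.offset += self.alpha * (raw_offset - self.offset)
        return apm2_time + self.offset        # APM2 stamp shifted into host time
```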

Here is a before and after plot. The before plot is terrible! (But flies fine.)  The after plot isn’t perfect, but might be about as good as it gets on a linux system.  Notice the difference in Y-scale between the two plots.  If you think your system is better than mine, log your IMU data at 100hz and plot the dt between samples and see for yourself.  In the following plots, the Y axis is dt time in seconds.  The X axis is elapsed run time in seconds.

Before: dt using the host’s timestamp when the imu packet is received.

After: dt using a hybrid of host and apm2 timestamps.

Even with this fix, I see the host system’s main loop timing vary between 0.008 and 0.012 seconds per frame, occasionally even worse (100hz should ideally equal exactly 0.010 seconds.)  This is now far better than the system was doing previously, and far, far better than the PX4 does … but still not perfect.  There is always more work to do!

Conclusions

These mistakes (when finally discovered) all led to important improvements with the AuraUAS system: better accelerometer calibration, better gyro resolution, better time step consistency with no dropped frames.  Will it help airplanes get from point A to point B more smoothly and more precisely?  Probably not in any externally visible way.  Mistakes?  I still make them 1000’s of times a day.  Lurking hidden mistakes?  Yes, those too.  My hope is that no matter what stage of life I find myself in, I’m always working for improvements, always vigilant to spot issues, and always focused on addressing issues when they are discovered.

Flight Milestones

Congratulations!

Congrats ATI Resolution 3, Hobby Lobby Senior Telemaster, Hobbyking Skywalker, and Avior-lite autopilot on your recent milestones!


Avior-lite (beaglebone + apm2 hybrid) autopilot:

  • 300th logged flight
  • 7000+ logged flight minutes (117.8 hours)
  • 6400+ fully autonomous flight minutes (107.2 hours)
  • 2895 nautical miles flown (3332 miles, 5362 km)

Hobby Lobby Senior Telemaster (8′ wing span)

  • Actively flight testing autopilot hardware and software changes since 2007!
  • 200th logged flight.
  • 5013 logged flight minutes (83.5 hours)
  • 4724 fully autonomous flight minutes (78.7 hours)
  • 2015 nautical miles flown (2319 miles, 3733 km)

Today (October 7, 2015) I logged the 300th avior-lite flight and simultaneously logged the 200th flight on my venerable Senior Telemaster.  I realize these are just numbers, and they wouldn’t be big numbers for a full scale aircraft operation or even a production uav operation, but it represents my personal effort in the UAV field.

I’m proud of a mishap-free 2015 flying season so far!  (Ok, err, well one mishap on the very first launch of the skywalker … grrr … pilot error … and fixable thankfully.)

Enjoy the fall colors and keep flying!


APM2 Sensor Head


The ardupilot mega is a fairly capable complete autopilot from both the hardware and the software perspective.  But what if your project needs all the sensors but not the full APM2 autopilot code?

Overview

The apm2-sensorhead project provides a quick, robust, and inexpensive way to add a full suite of inertial and position sensors to your larger robotics project.  This project is a replacement firmware for the ardupilot-mega hardware.  The stock arduplane firmware has been stripped down to include just the library code that interrogates the connected sensors, plus the code that reads your RC transmitter stick positions through a stock RC receiver and drives up to 8 servo outputs.  It also includes manual override (safety pilot) code and code to communicate with a host computer.  The next upcoming feature will be onboard mixing for vtail, elevon, and flaperon aircraft configurations.  This gives you the option to fly complicated airframes (and have an autopilot fly complicated airframes) without needing a complicated transmitter or adding complicated mixing code to your autopilot application.

Why?

Speaking a bit defensively: I want to address the “why?” question.  First of all, I needed something like this for one of my own autopilot projects.  That’s really the primary motivation right there and I could just skip to the next section.  If you can answer this question for yourself, then you are my target audience!  Otherwise, if your imagination is not already running off on its own, why should you or anyone else possibly be interested?

  • You are working in the context of a larger project and need to incorporate an IMU and GPS and possibly other sensors.  Do you design your own board?  Do you shoehorn your code onto the ardupilot?  Do you look at some of the other emerging boards (pixhawk, etc.?)  What if you could integrate the sensors you need quickly and easily?
  • The ardupilot mega is a relatively inexpensive board.  There are some APM clones available that are even less expensive.  It would be hard to put together the same collection of sensors for a lower price by any other means.
  • The ardupilot mega is proven and popular and has seen very wide scale testing.  It is hard to find any other similar device on the market that has been tested as thoroughly and under as wide a variety of applications and environments as the ardupilot.
  • The ardupilot mega code (especially the sensor I/O and RC code) is also tremendously well tested and ironed out.
  • By stripping all the extra functionality out of the firmware and concentrating simply on sensor IO and communication with a host computer, the new firmware is especially lean, mean, fast, and simple.  Whereas the full arduplane code is bursting the atmega2560 cpu at the seams with no more room to add anything, compiling the apm2-sensorhead code reports: “Binary sketch size: 36,132 bytes (of a 258,048 byte maximum)”
  • Along with plenty of space for code, removing all the extraneous code allows the CPU to run fast and service all its required work without missing interrupts and without dropping frames.
  • There is a design philosophy that prefers splitting the hard real-time work of low level sensor interrogation from the higher level intelligence of the application.  This can lead to two simpler applications that each do their own tasks efficiently and well, versus a single monolithic conglomeration of everything which can grow to be quite complex.  With all the hard real time work taken care of by the apm2, the host computer application has far less need for a complicated and hard-to-debug thread based architecture.
  • The APM2 board can connect to a host computer trivially via a standard USB cable.  This provides power and a UART-based communication channel.  That’s really all you need to graft a full suite of sensors to your existing computer and existing application.  Some people like to solder chips onto boards, and some people don’t.  Some people like to write SPI and I2C drivers, and some people don’t.  Personally, I don’t mind if once in a while someone hands me an “easy” button. 🙂

Some Technical Details

  • The apm2-sensorhead firmware reports the internal sensor data @ 100hz over a 115,200 baud uart.
  • The apm2-sensorhead firmware can be loaded on any version of the ardupilot-mega from 2.0 to 2.5, 2.6, and even 2.7.2 from hobbyking.
  • The UART on the APM2 side is 5V TTL which you don’t have to worry about if you connect up with a USB cable.
  • This firmware has flown extensively for 3 flying seasons at the time of this writing (2012, 2013, 2014) with no mishap attributable to a firmware problem.
  • The firmware communicates bi-directionally with a host computer using a simple binary 16-bit checksum protocol.  Sensor data is sent to the host computer and servo commands are read back from the host computer.
  • Released under the GPLv3 license (the same as the arduplane library code.)
  • Code available at the project site: https://github.com/clolsonus/apm2-sensorhead  This code requires a minor ‘fork’ of the APM2 libraries available here: https://github.com/clolsonus/libraries
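The exact packet layout isn’t documented in this post, so the following is only an illustrative sketch of the general framing style described above (start bytes, packet id, length, payload, 16-bit checksum); the real protocol in the repository may differ:

```python
START0, START1 = 0x93, 0x42   # example start-of-frame bytes (not the real ones)

def checksum16(data: bytes):
    """Simple 16-bit rolling checksum over packet id, size, and payload."""
    ck0 = ck1 = 0
    for b in data:
        ck0 = (ck0 + b) & 0xFF
        ck1 = (ck1 + ck0) & 0xFF
    return ck0, ck1

def encode_packet(pkt_id: int, payload: bytes) -> bytes:
    """Frame a payload: start bytes, id, length, payload, two checksum bytes."""
    body = bytes([pkt_id, len(payload)]) + payload
    ck0, ck1 = checksum16(body)
    return bytes([START0, START1]) + body + bytes([ck0, ck1])
```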

Questions?

I am writing this article to coincide with the public release of the apm2-sensorhead firmware under the LGPL license.  I suspect there will not be wide interest, but if you stumble upon this page and have a question or see something important I have not addressed, please leave a comment.  I continue to actively develop and refine and fly (fixed wing aircraft) with this code as part of my larger system.

 

Adventures in Aerial Image Stitching

A small UAV + a camera = aerial pictures.

SAM_0079

SAM_0057

SAM_0053

This is pretty cool just by itself.  The above images are downsampled, but at full resolution you can pick out some pretty nice details.  (Click on the following image to see the full/raw pixel resolution of the area.)

SAM_0057-detail

The next logical step of course is to stitch all these individual images together into a larger map.  The questions are: What software is available to do image stitching?  How well does it work?  Are there free options?  Do I need to explore developing my own software tool set?

Expectations

Various aerial imaging sites have set the bar at near visual perfection.  When we look at google maps (for example), the edges of runways and roads are exactly straight, and it is almost impossible to find any visible seam or anomaly in their data set.  However, it is well known that google imagery can be several meters off from its true position, especially away from well travelled areas.  Also, their imagery can be a bit dated and is lower resolution than we can achieve with our own cameras … these are the reasons we might want to fly a camera and get more detailed, more current, and perhaps more accurately placed imagery.

Goals

Of course the first goal is to meet our expectations. 🙂  I am very averse to grueling manual stitching processes, so the second goal is to develop a highly automated process with minimal manual intervention needed.  A third goal is to be able to present the data in a way that is useful and manageable to the end user.

Attempt #1: Hugin

Hugin is a free/open-source image stitching tool.  It appears to be well developed, very capable, and supports a wide variety of stitching and projection modes.  At its core it uses SIFT to identify features and create a set of keypoints.  It then builds a KD tree and uses a fast nearest-neighbor search to find matching features between image pairs.  This is pretty state of the art stuff as far as my investigation into this field has shown.

Unfortunately I could not find a way to make hugin deal with a set of pictures taken mostly straight down and from a moving camera position.  Hugin seems to be optimized for personal panoramas … the sort of pictures you would take from the top of a mountain when just one shot can’t capture the full vista.  Stitching aerial images together involves a moving camera vantage point and this seems to confuse all of hugin’s built in assumptions.

I couldn’t find a way to coax hugin into doing the task.  If you know how to make this work with hugin, please let me know!  Send me an email or comment at the bottom of this post!

Attempt #2: flightriot.com + Visual SFM + CMPMVS

Someone suggested I checkout flightriot.com.  This looks like a great resource and they have outlined a processing path using a number of free or open-source tools.

Unfortunately I came up short with this tool path as well.  From the pictures and docs I could find on these software packages, it appears that the primary goal of this site (and referenced software packages) is to create a 3d surface model from the aerial pictures.  This is a really cool thing to see when it works, but it’s not the direction I am going with my work.   I’m more interested in building top down maps.

Am I missing something here?  Can this software be used to stitch photos together into larger seamless aerial maps?  Please let me know!

Attempt #3: Microsoft ICE (Image Composite Editor)

Ok, now we are getting somewhere.  MS ICE is a slick program.  It’s highly automated to the point of not even offering much ability for user intervention.  You simply throw a pile of pictures at it, and it finds keypoint matches, and tries to stitch a panorama together for you.  It’s easy to use, and does some nice work.  However, it does not take any geo information into consideration.  As it fits images together you can see evidence of progressively increased scale and orientation distortion.  It has trouble getting all the edges to line up just right, and occasionally it fits an image into a completely wrong spot.  But it does feather the edges of the seams so the final result has a nice look to it.  Here is an example.  (Click the image for a larger version.)

SAM_0125_stitch

The result is rotated about 180 degrees off, and the scale at the top is grossly exaggerated compared to the scale at the bottom of the image.  If you look closely, it has a lot of trouble matching up the straight line edges in the image.  So ICE does a commendable job for what I’ve given it, but I’m still way short of my goal.

Here is another image set stitched with ICE.  You can see it does a better job avoiding progressive scaling errors on this set.  However, linear features are still crooked, there are many visual discontinuities, and in one spot it has completely bungled the fit and inserted a fragment completely wrong.  So it still falls pretty short of my goal of making a perfectly scaled, positioned, and seamless map that would be useful for science.

SAM_0372_stitch-reduced

Attempt #4: Write my own stitching software

How hard could it be … ? 😉

  1. Find the features/keypoints in all the images.
  2. Compute a descriptor for each keypoint.
  3. Match keypoint descriptors between all possible pairs of images.
  4. Filter out bad matches.
  5. Transform each image so that its keypoint positions match exactly (maybe closely? maybe roughly on the same planet as ….?) the same keypoints as found in all other matching images.
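For steps 1 through 4, OpenCV makes this surprisingly approachable.  Here is a minimal sketch (not my actual tool chain, just the general pattern) using the ORB detector:

```python
import cv2

def match_image_pair(path1, path2, max_features=2000, ratio=0.75):
    """Detect ORB keypoints in two images and return the filtered matches."""
    img1 = cv2.imread(path1, cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread(path2, cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=max_features)
    kp1, des1 = orb.detectAndCompute(img1, None)   # steps 1 & 2
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Step 3: match descriptors; step 4: filter with a ratio test.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < ratio * n.distance]
    return kp1, kp2, good
```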

I do have an advantage I haven’t mentioned until now:  I have pretty accurate knowledge of where the camera was when the image was taken, including the roll, pitch, and yaw (“true” heading).  I am running a 15-state kalman filter that estimates attitude from the gps + inertials.  Thus it converges to “true” heading, not magnetic heading, not ground track, but true orientation.  Knowing true heading is critically important for accurately projecting images into map space.

The following image shows the OpenCV “ORB” feature detector in action along with the feature matching between two images.

feature-pairs

Compare the following fit to the first ICE fit above.  You can see a myriad of tiny discrepancies.  I’ve made no attempt to feather the edges of the seams, and in fact I’m drawing every image in the data set using partial translucency.   But this fit does a pretty good job of preserving the overall geographically correct scale, position, and orientation of all the individual images.

fit1

Here is a second data set taken of the same area.  This corresponds to the second ICE picture above.  Hopefully you can see that straight line edges, orientations, and scaling are better preserved.

fit2

Perhaps you might also notice that because my own software tool set understands the camera location when the image is taken, the projection of the image into map space is more accurately warped (none of the images have straight edge lines.)

Do you have any thoughts, ideas, or suggestions?

This is my first adventure with image stitching and feature matching.  I am not pretending to be an expert here.  The purpose of this post is to hopefully get some feedback from people who have been down this road before and perhaps found a better or different path through.  I’m sure I’ve missed some good tools, and good ideas that would improve the final result.  I would love to hear your comments and suggestions and experiences.  Can I make one of these data sets available to experiment with?

To be continued….

Expect updates, edits, and additions to this posting as I continue to chew on this subject matter.

Spiraling Under Control

One of the staples of fixed wing autopilots is the circle hold.  A circle hold is the closest thing you can get to a pause button with a vehicle that must be in constant forward motion to stay aloft.  There are a few hidden challenges in the task, including wind compensation and some unexpected coupling that can lead to weird looking oscillations if not handled well.  People have been doing circle holds with UAV’s for a long long time, so I don’t bring anything new to the table, but it is always fun to play, err I mean experiment.

Circles

What happens if you periodically move the center of your circle (maybe every 5 minutes) to a new location, hold for a while, move the circle center again, circle for a while, and repeat?  Well, the short answer is you end up with a bunch of circles, but if you are strategic in the placement of the circle center, some pretty patterns can begin to emerge.  To add visual interest we can also change the circle speed when we move the center of the circle.  Increasing the circling speed increases the circle radius.

fgfs-screen-001

What if each time we move the circle center point, we move it by a fixed angle (let’s try 30 degrees), increase the offset distance by a little bit, and also increase the airspeed by 1 kt?  An interesting spiral pattern begins to emerge: from the original starting point, each circle is a bit bigger than the one before, and also a bit further away.

fgfs-screen-002

A flight simulator isn’t encumbered by battery or fuel limitations, so what happens if we let this run for a long time?  The spiral pattern continues to develop and grow.

fgfs-screen-003

Spirals aren’t that big of a deal of course.  20 minutes of python coding could produce some far fancier spirals.  What interests me is how the patterns are produced … this is the flight track of a simulated UAV and that makes it a bit more challenging and fun.

Continuous Spirals

What happens if we fly a basic circle hold, but instead of changing the circle center point every 5 minutes, we continually move the center target of the circle in sweeping spiral pattern?  We would get something that looks a bit like a stretched spring viewed from a bit of an angle.

At the beginning of the pattern, the airspeed is set to 25 kts.  The radius (not of the circle hold but of the spiral) is set to 0.1 nm.   Each second the position is swept forward 0.125 degrees.  This means the spiral pattern sweeps forward 7.5 degrees per minute, or 450 degrees per hour.  The airspeed and spiral radius are also increased by a tiny bit each second.  The next picture shows the pattern after a bit more than an hour of flight.

fgfs-screen-004
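The external script driving this isn’t reproduced here, but the per-second update amounts to something like the following sketch (the class and the growth increments are my guesses for illustration; the real script simply writes these targets into the autopilot’s property tree):

```python
import math

class SpiralCircleHold:
    """Slowly sweep the circle-hold center along a growing spiral."""
    def __init__(self, origin_lat, origin_lon):
        self.origin_lat = origin_lat
        self.origin_lon = origin_lon
        self.airspeed_kt = 25.0
        self.radius_nm = 0.1       # radius of the spiral, not of the circle hold
        self.sweep_deg = 0.0

    def update(self, dt=1.0):
        self.sweep_deg += 0.125 * dt        # 7.5 deg/min, 450 deg/hour
        self.radius_nm += 0.00002 * dt      # grow the spiral a tiny bit each second
        self.airspeed_kt += 0.0005 * dt     # and creep the airspeed up

        # offset the circle center from the origin (1 nm = 1/60 deg of latitude)
        lat = self.origin_lat + (self.radius_nm / 60.0) * math.cos(math.radians(self.sweep_deg))
        lon = self.origin_lon + (self.radius_nm / 60.0) * math.sin(math.radians(self.sweep_deg)) \
              / math.cos(math.radians(self.origin_lat))
        return lat, lon, self.airspeed_kt
```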

After about 5 hours of flight we get the next picture:

fgfs-screen-006

And if we let the simulation run over night and zoom out the map a couple steps, we get:

fgfs-screen-007

A Few More Details

All these patterns were produced by the flight track of a UAV.  That means we aren’t just plotting mathematical functions, but instead are combatting wind and physics to produce these patterns.  A complex system of sensors, attitude estimation, control theory, navigation and planning blends together to produce the end result.

Simulation vs. Reality

All of this was flown in the FlightGear.org flight simulator.  However, there is a much stronger tie to reality than might first appear.  FlightGear is the ‘stand in’ for reality.  However all the sensor data is sent externally to the “avior” autopilot running in “software in the loop” testing mode.  The avior autopilot code sends all the raw sensor information (simulated IMU and simulated GPS) into a 15 state kalman filter to estimate the actual roll, pitch, and yaw.  It would be possible to cheat and use the simulator’s exact roll, pitch, and yaw, but the “avior” can’t cheat in real life (because this information isn’t directly available) so it faithfully estimates these values in simulation just like it must in the real aircraft.

The roll, pitch, & yaw estimates are then passed to the flight control system and compared with target values for basic flight control and navigation.  The “avior” autopilot computes control surface deflections to eliminate the error between target speed and attitude vs. actual speed and attitude.

Finally, these control surface positions are sent back to FlightGear which then computes the flight physics based on where the avior moves the control surfaces and throttle.  FlightGear computes new gyro, accelerometer, and gps sensor values, sends them to the avior autopilot, and the process repeats.

In this way we can test the “avior” autopilot quite thoroughly using the simulation.  The avior thinks it’s really flying, it receives realistic sensor data, drives control surfaces as if they were real servos, and the simulator responds realistically.  Really the only part of the avior not being thoroughly exercised is the specific code that reads the real world sensors and the specific code that drives the real world servos.

Wind

At first glance, the job of flying a circle hold around a fixed center point amounts to holding a fixed bank angle.  There is a relationship between airspeed, bank angle, and turning radius, so we just need to work out these numbers and we can fly any circle of any radius at any speed, simply by computing the required bank angle.
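For a coordinated, level turn in still air, that relationship is the familiar one; here is a quick sketch:

```python
import math

def required_bank_deg(airspeed_mps, radius_m, g=9.81):
    """Bank angle needed to hold a circle of the given radius at the given
    airspeed, assuming a coordinated, level turn with no wind."""
    return math.degrees(math.atan((airspeed_mps ** 2) / (g * radius_m)))

# Example: 30 kts (~15.4 m/s) around a 100 m radius circle -> about 13.6 degrees
print(required_bank_deg(15.4, 100.0))
```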

But things get a bit more complicated than this because we need to adjust our actual heading and bank angle to compensate for drifting inside or outside the target circle radius.  And there is this whole business of figuring out how to smoothly enter the circle pattern from any starting position and heading inside or outside the circle.

I’ll skip over the boring details (ask if you are curious) but along with all the other things that must be accounted for, the real world almost always has a little (or a lot) of wind.  Flying upwind, small heading changes of the aircraft can yield large changes in your ground track heading.  Imagine a worst case scenario where you are flying at 30 kts, exactly into a 30 kt head wind.  You are hovering relative to the ground.  But even the tiniest heading change (or wind change) will swing your ground track 90 degrees left or right.  The opposite happens when you are flying downwind.  Aircraft heading changes produce proportionally smaller ground track heading changes.

Wind adds some unique challenges to flying circle holds that are actual circles from a ground perspective.

Coupling

I also wanted to say some brief words about coupling between axes, because it can be a bigger issue in circle holds than might first be expected.  Imagine you are flying a perfect circle hold in zero wind.  You are at a 30 degree bank, at your target airspeed, and at your target radius.  Now imagine you are a bit outside of the target radius.  You need to bank a bit more to tighten your turning radius.  But this tighter bank could cause a loss of altitude (basic airplane physics).  If the aircraft responds to the lost altitude with increased elevator, this will tighten your turn even more because you are banked.  It is easy to overshoot and end up inside the circle, which means the flight controller will command the aircraft to fly less of a bank, increasing the circle radius, but that creates more lift, more climb, etc.  Roll, pitch, and throttle can combine in some very awkward ways during a circle hold and that, along with wind and all the other aspects of basic aerodynamics and physics, can make flying an accurate and stable circle hold a bit more of a challenge than you might first expect.

External Scripting

Hah, if you are still reading all the way down here, you are really a UAV geek (or you are following good skimming rules of reading the first paragraph, the last paragraph, and looking at the pictures.)

One of the fun things about the avior autopilot is that all the sensor data, attitude estimations, control surface positions, navigation state, and just about every other interesting variable is published in a giant hierarchical tree of name/value pairs.  If you’ve poked inside the Windows registry, it’s kind of the same basic idea, except all in memory and very fast to access or update.

The cool thing about this is that we can expose external interfaces to this giant data structure and completely command or monitor the autopilot externally.  This enables a developer to write a perl or python script to monitor and command the autopilot.  What sorts of things could such a script do?

  • Command the autopilot to do a circle hold, then smoothly adjust the center point of the circle hold to produce interesting patterns in the UAV flight track.  That’s what this whole article is about.
  • Fly specific flight test maneuvers repeatably and accurately.  Do you want to nail down the exact stall speed of your aircraft under different scenarios?  You could write a script to fly the aircraft into a stall, then recover, climb to safe altitude, repeat.  You could write scripts to put the aircraft into all kinds of different specific flight situations and carefully record the data.  What is the elevator trim and power settings for level flight @ 25 kts, 30 kts, 35 kts, 40 kts, etc.  How about at a 5 degree bank, 10 degree bank, etc.
  • Create higher level intelligence without having to write code inside the core autopilot.  The autopilot does everything for the aircraft, and it must be reliable and robust and never fail or do something dumb.  This comes through long hours of flight testing.  Now you want to go fiddle around under the hood and change something?  That is begging for trouble!  Why not leave the autopilot alone and write your new functions in an external script?  Maybe you could write a fancy engine-out auto-land script that knows the basic performance characteristics of your aircraft and can plot and command an optimal approach path to touch down from any position and altitude to any landing strip in any wind conditions.

I’m Thankful for my Senior Telemaster

This Thanksgiving day I’m thankful for many blessings: family, friends, neighbors, shelter, food, our siberian husky who lived 16 years, and many other things.  The weather was nice Thanksgiving morning so I was thankful to have an hour to run out to the field first thing to do some flying.

On Thursday, November 22 (Thanksgiving day) my Senior Telemaster passed an interesting milestone: 12 cumulative hours of fully autonomous flight. This may not sound like a whole lot, but it represents 58 separate flights over the span of 3 years. In every flight it has performed beautifully and majestically and very scale like. It is not a fast aircraft, but it’s solid, forgiving, and controllable even at very slow airspeeds. It has a lot of room, a lot of carrying capacity, and it’s very tolerant of CG changes. I have flown it on calm days and I have flown it in up to 27 kt winds (31 mph) that were gusting higher at times. With a set speed of 30 kts there have been times it has been blown backwards in flight! She has seen sun, clouds, fog, drizzle, and even rain. Winter, summer, spring, fall, hot days, cold days — she’s seen just about everything. Through all of this it has been a solid, reliable, patient, unfailing work horse — and she’s still going as strong as ever!

She has been a test bed for 4 major autopilot revisions:

  • An original gumstix processor + a Xbow MNAV sensor head.
  • A gumstix “verdex” + a sparkfun 6DOFv4 IMU
  • A gumstix “verdex” + a VectorNav VN-100T temperature calibrated IMU
  • A gumstix “overo” + a 3D Robotics APM2.5 “sensor head”

I recall one close call where I made a code+hardware change and didn’t check it thoroughly enough.  Transitioning to autonomous flight resulted in an immediate full throttle dive towards the earth.  I switched to manual mode (almost not quickly enough) and was able to manually recover from the dive just in the nick of time.  That is the only time I’ve put any serious flex on the wings!

I recall one landing when the farmer’s corn was 8′ tall and completely surrounding our little patch of flying field.  I typically had to come over the edge of the corn with inches to spare in order to get it down and stopped before the end of our flying field.  On this day we had crazy gusting cross winds and as I cleared the corn the wind just fell out from under the telemaster.  It dropped from maybe 10′ of altitude and came to a complete stop no more than 5′ from the approach edge of the corn.  That has to be my all time shortest landing with any aircraft (only counting landings on the wheels.) 🙂

So thanks to my trusty work horse for many good years of service with hopefully many more to come!

Hacking the APM2 Part #5: Flight Testing

This is the payoff video showing the hybrid autopilot system in action in the Resolution 3 airframe. (By the way, this is HD video so watch it full screen if you can!)

I am skipping many details between integrating the hardware and flying, but just as a quick overview:

We first integrated the system into a Senior Telemaster.  After 4 trips to the field over the span of 2 days, numerous flights, and a bunch of work in the evenings, we felt like the system was coming together and working every bit as well as it was supposed to.  There are always more things we could do with the telemaster airframe, but now that we were fairly confident that the system was working end-to-end as designed, we dropped it into our Resolution 3 (blended body, composite flying wing airframe.)  The new hybrid autopilot system needed some gain adjustments versus the old prototype autopilot so we guessed at those based on what changes were needed with the Telemaster configuration.

We enjoyed a flawless bungee launch of the wing, climbed to altitude, flipped over to AP mode, and the aircraft flew off flawlessly and ran through its demo routine without a hitch.  The clouds were dramatic and we noticed a rain squall moving in, so we landed and packed up just as the rain set in.

Here are some high points of the new system:

  • 800Mhz ARM processor (with hardware floating point) running a 15 state kalman filter and all the autopilot and navigation code.
  • WGS-84 math used for heading and route following.
  • Accurate yaw estimation when the filter converges.
  • Accurate wind estimation
  • 100hz sensor sampling, 100hz filter updates, 100hz autopilot and navigation updates, 100hz actuator updates with up to 400hz PWM signal generation rate for digital servos.
  • APM2 sensors.
  • Tablet/smart phone/laptop ground station interface.

If you just like airplanes, here’s some nice footage of our Telemaster landing.  This video was taken during our 2 days of Telemaster integration effort …

Landing in psycho winds:

In calmer winds:

Last flight of the day:

Hacking the APM2 Part #4: Laying out a “hybrid” system

Imagine for one second that you are a UAV developer. The DIYdrones ArduPilot is an awesome piece of hardware; you love all the great sensors that are attached; you love the all-in-one design and the small/light size.  But you are also feeling the limits of its ATMega 2560 processor and the limits of its 256KB of flash.  And maybe, you enjoy developing code within a full blown Linux developers environment with all the supporting libraries and device drivers that Linux has to offer.

Hardware Layout

Here is a picture of a prototype “hybrid” autopilot system for small UAV’s.  What you see in the picture includes:

  • An ArduPilot Mega 2.0 (lower left)  The APM2 includes an MPU-6000 IMU, a Mediatek 5hz GPS, static pressure sensor, compass, and a variety of other items.
  • A 1Ghz Gumstix Overo with a Pinto-TH expansion board. (longer thinner board, upper right).  The Overo runs Linux, has 512Mb of RAM, and has a hardware floating point processor.
  • An MPXV7002DP pressure sensor (upper left).  This is attached to a pitot tube and measures airspeed.
  • A Sparkfun TTL level translator (between the overo and the pressure sensor)
  • A Futaba FASST 6 channel receiver (lower right)
  • A set of power distribution rails (8×3 block of 0.1″ pins.)
  • [not shown/mounted remotely] Digi Xtend 900Mhz radio modem.

The above picture was deliberately taken without most of the wires to show the components.  Once all the wires are connected, things will look much more “busy”.  This is a prototype system so the components will be jumpered together, probably with a bit of hot glue here and there to ultimately secure the wiring before flight.

Software Layout

Just as important as the hardware layout is the software layout.  This is maybe a bit more boring and I can’t really take a picture of it to post here, but I will give a few bullet points for anyone who might be interested.

  • The APM2 runs a custom built firmware that simply collects all the sensor data, packages it up into checksummed binary packets, and sends the data out over a uart.  In addition, the APM2 reads checksummed binary packets inbound over the same uart.  Generally inbound packets are actuator commands.
  • The Overo runs a custom built autopilot application that is portable.  It is not device/hardware independent, but it is designed to talk to a variety of hardware over a common interface.  This Linux-based autopilot app can read all the sensor data from the APM2 (being transmitted at 100hz over a 500,000 baud link.)  The autopilot app runs a computationally intensive 15-state kalman filter.  It manages the mission tasks, computes wgs-84 great circle navigation targets, runs the low level PID code, and ultimately sends servo position commands back to the APM2 over the same 500,000 baud link.
  • The high level autopilot software runs in an environment that is more natural and has more power/space for developing the “high concept” portions of the code.  The APM2 is great for reading a bunch of sensors really quickly.  In a hybrid system both major components are able to do what they do best without having to suffer through tasks they aren’t as good at.
  • The high level autopilot is configurable through a structured XML configuration system.  It includes an advanced 15-state kalman filter.  It runs the same PID code configured in the same way as the FlightGear flight simulator.  It also includes the same “property system” that FlightGear pioneered.  It includes a rich set of IO API’s, and hopefully will soon sport a built in scripting engine.
  • A word about the “property system”:  The property system is an in-memory tree structure that relates “ascii” variable names to corresponding values.  There is loose type checking and on-the-fly type conversion similar to many scripting systems (I can write a value as a ‘string’ and read it back out as a ‘double’ and the system does sensible conversions for me.)  Any internal module can read/write the property tree, so it is a great way to build “loose” interfaces between modules.  Modules simply reference the values they need and there is no compile time enforcement.  This allows for building “robust” interfaces and eliminates much of the cascading changes throughout the code when a class gets updated.  In addition, we can build external network interfaces to the property system which allows external scripts or other code to monitor and even modify the property tree.  This allows external scripts access to the core values of the sim and access to set modes and otherwise “drive” what the autopilot does.  In addition, there is a natural one-to-one mapping between the property tree and an xml config file.  This is handy for easily loading/saving configurations.  At a low level, property system variable access from the core code is simply a pointer dereference, so using these values is almost as fast as referencing a locally scoped variable.  It is hard to describe in a few words what all the “property system” is or does, but it’s an amazing infrastructure tool that helps build flexible, robust, and high functioning code.
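As a rough illustration of the concept (in Python rather than the C++ the autopilot actually uses, and with invented method names), a property tree boils down to something like this:

```python
class PropertyTree:
    """Toy version of the property system: a hierarchical name/value store
    with on-the-fly type conversion on read."""
    def __init__(self):
        self._values = {}

    def set(self, path, value):
        self._values[path] = value          # e.g. "/sensors/airspeed-kt", 31.2

    def get_double(self, path, default=0.0):
        try:
            return float(self._values.get(path, default))
        except (TypeError, ValueError):
            return default

    def get_string(self, path, default=""):
        return str(self._values.get(path, default))

# Any module (or an external script via a network interface) can read or write
# nodes without compile-time coupling to the code that produces them:
props = PropertyTree()
props.set("/autopilot/targets/airspeed-kt", "30")            # written as a string
print(props.get_double("/autopilot/targets/airspeed-kt"))    # read back as 30.0
```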

September 7, 2012 Update:

Here is a picture showing the whole system plugged together with 6″ sparkfun jumper wires.  The wiring gets a little “busy” when everything is connected together so I plan to do some work buttoning things up, simplifying, and cleaning up the wire runs.  I’ve attached 3 servos to test auto vs. manual outputs.  You can see a 900Mhz Xtend modem (1 watt power, 115,200 baud), and the entire system is powered by a single RC receiver battery.  In the aircraft this should all run just fine off the BEC.  And here is a video of everything in action:

That’s about it. The modified APM2 firmware is happy and running.  The overo autopilot code is up and running and talking to the APM2.  Next up is validating the radio modem ground station connection, and then I’m running out of things to do before dropping this into an airplane!

September 17, 2012 Update:

It’s far from a perfect integration, but I spent some time this evening bundling up wires.  Definitely an improvement.  Working towards full integration into a Senior Telemaster (electric powered) and hope to do some first test flights later this week.  Today I test fit everything, powered everything from the main flight battery and verified basic control surface movements in manual mode and autopilot mode.  The radio modem link is running and the mapping and instrument panel views on the ground link up and run.

Hacking the APM2 Part #3: Servos

Here are a few random notes on the APM2 and servos.

As we all know, the servos are controlled by sending a digital pulse on the signal line to the servo.  The length (time) of the pulse maps to the position of the servo.  A 1500us pulse is roughly the center point.  900us is roughly one extreme and 2100us is roughly the other extreme.  Different systems will use slightly different numbers and ranges, but these are good numbers to start with.  Historically, RC systems stacked the pulses for multiple channels on a single radio signal — affectionately called the “pulse train.”  If you crunch the numbers — given a max pulse width of about 2000us and a bit of inter-pulse spacing, we can send about 8-10 channels of data at 50hz on a single signal line.  Standard analog servos are designed to expect 50 pulses per second and can misbehave if we send a faster signal.

With modern 2.4Ghz systems, the servo/channel data is sent digitally and at much faster data rates.  Receivers drive each servo pulse concurrently and the concept of a “pulse train” is largely gone — as is the need to space things out at a 50hz rate.  If you crunch the numbers again and look at a max pulse width of 2000us with a bit of inter-pulse space, we can actually send servo data at 400hz to a digital servo and it will respond as expected.  (8 channels at 50hz, 1 channel at 400hz — makes sense.) 🙂
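As a concrete reference for the numbers above, mapping a normalized command onto a pulse width is just a linear interpolation; here is a sketch (not the APM2 firmware code):

```python
def command_to_pulse_us(cmd, center_us=1500, range_us=600):
    """Map a normalized command in [-1, 1] to a servo pulse width in
    microseconds: -1 -> 900us, 0 -> 1500us, +1 -> 2100us."""
    cmd = max(-1.0, min(1.0, cmd))          # clamp to the valid range
    return int(round(center_us + cmd * range_us))

# At 400hz the frame period is 2500us, which still leaves room for a full
# 2100us pulse plus a bit of inter-pulse spacing on a dedicated signal line.
print(command_to_pulse_us(0.5))   # 1800
```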

What does all this mean to an UAS builder?

  • It means that if we use digital servos, we can send position updates to the servo and expect some response at up to a 400hz rate. In real life these servos have mass, motors that take time to move, on board electronics that take time to process the signal, and other factors, so we can’t expect instantaneous response, but we can expect an improvement over the old 50hz analog servos.
  • It means we have a strong motivation to sample our sensors faster than 50hz, run our attitude filter faster than 50hz, run our guidance, control, and navigation code faster than 50hz, and command our actuators faster than 50hz.
  • It means that if we can increase our throughput to 100hz or 200hz, we can have a far more responsive and “tight” system than we could if everything was running at 50hz.  Faster response to gusts and disturbances, ability to counter more of the aircraft’s natural dynamics.  With proper autopilot tuning, this can lead to a much higher quality system, much more accurate flying, and a much more stable platform to hang our sensors off of.

The APM2 has the ability to set the PWM frequency for the output channels.  Channels are grouped and several hang from a single clock generator so we can’t change the PWM rate individually, but we can do it for groups of channels.  Question: if our system is running at a full 100hz update rate, does it make sense to set the servo output rate to 100hz or something higher?  I argue that we should set the output rate as high as is possible (400hz) because the servo will pick up a change fractionally quicker at a 400hz output rate than at a 100hz output rate.  In the end though, most servos are somewhat noisy and crude and subject to physics, so it may not have a huge noticeable effect, but any little thing that makes the system more responsive and increases the end to end throughput from sensing to responding is welcome.