Last Friday I flew an aerial photography test flight using a Skywalker 1900 and a Sony A6000 camera (with a 20mm lens). On final approach we noticed a pair of deer crossing under the airplane. I went back through the image set to see if I could spot the deer in any of the pictures. I found at least one deer in 5 different shots. Here are the zoom/crops:
Manual UAV sensor calibration is dead!
I know the above statement isn’t exactly true, but it could be true if everyone who develops UAVs would read this article. 🙂
In this article I propose a system that continuously and dynamically self calibrates the magnetometers on a flying UAV so that manual calibration is no longer ever needed.
With traditional UAVs, one of the most important steps before launching your UAV is calibrating the magnetometers. However, magnetometers are also one of the most unpredictable and troublesome sensors on your UAV. Electric motors, environmental factors, and many other things can significantly interfere with the accuracy and consistency of the magnetometer readings. Your UAV uses the magnetometer (electronic compass) to compute its heading and thus navigate correctly through the sky.
We can accurately estimate heading without a compass
Most UAV autopilots do depend on the magnetometer to determine heading. However, there is a class of “attitude estimators” that run entirely on gyros, accelerometers, and gps. These estimators can accurately compute roll, pitch, and yaw by fitting the predicted position and velocity to the actual position and velocity each time a new gps measurement is received. In order for these estimators to work, they require some amount of variation in gps velocity. In other words, they don’t estimate heading very well when you are sitting motionless on the ground, hovering, or flying exactly straight and level for a long time.
A compass is still helpful!
For fixed wing UAVs it is possible to fly entirely without a magnetometer and still estimate the aircraft attitude accurately throughout the flight. However, this requires some carefully planned motion before launch to help the attitude estimate converge. A compass is obviously still helpful to improve the attitude estimate before launch and in the initial moments of the launch, before the inertial-only system sees enough change in velocity to converge well on its own. A compass is also important for low dynamic vehicles like a hovering aircraft or a surface vehicle.
Maths and stuff…
The inertial only (no compass) attitude estimator I fly was developed at the University of Minnesota UAV lab. It is a 15-state kalman filter and has been at the core of our research here for over a decade. A kalman filter is a bit like a fancy on-the-fly least squares fit of a bunch of complicated interrelated parameters. The kalman filter is based on minimizing statistical measures and is pretty amazing when you see it in action. However, no amount of math can overcome bad or poorly calibrated sensor data.
So Here We Go…
First of all, we assume we have a kalman filter (attitude and location estimator) that works really well when the aircraft is moving, but not so well when the aircraft is stationary or hovering. (And we have that.)
We know our location and time from the gps, so we can use the World Magnetic Model 2015 to compute the expected 3d vector that points toward the magnetic north pole. We only have to do this once at the start of the flight because for a line of sight UAV, position and time don’t change all that much (relative to the magnetic pole) during a single flight.
During the flight we know our yaw, pitch, and roll. We know the expected direction of the magnetic north pole, so we can combine all this to compute what our expected magnetometer reading should be. This predicted magnetometer measurement will change as the aircraft orientation changes.
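As a sketch, the predicted measurement is just the expected magnetic field vector (in NED coordinates) rotated into the aircraft body frame by the current attitude. The function and variable names here are illustrative, not the AuraUAS code:

```python
import numpy as np

def predict_mag_body(roll, pitch, yaw, mag_ned):
    """Rotate the expected NED magnetic field vector into the body frame.

    roll/pitch/yaw are in radians; mag_ned is the field vector from a
    model such as WMM.  Uses the standard ZYX Euler NED-to-body DCM.
    """
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    # NED-to-body rotation matrix (transpose of body-to-NED)
    R = np.array([
        [cp * cy,                cp * sy,                -sp     ],
        [sr * sp * cy - cr * sy, sr * sp * sy + cr * cy,  sr * cp],
        [cr * sp * cy + sr * sy, cr * sp * sy - sr * cy,  cr * cp],
    ])
    return R @ mag_ned
```

For example, pointed due east with wings level, a purely northward field vector shows up almost entirely on the body y axis.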
The predicted magnetometer measurement and the actual magnetometer measurements are both 3d vectors. We can separate these into their individual X, Y, and Z components and build up a linear fit in each axis separately. This fit can be efficiently refined during flight in a way that expires older data and factors in new data. When something changes (like we travel to a new operating location, or rearrange something on our UAV) the calibration will always be fixing and improving itself as soon as we launch our next flight.
Once we have created the linear fit between measured and expected magnetometer values in all 3 axes, we can use it to convert our raw magnetometer measurements into calibrated magnetometer measurements. We can feed the calibrated magnetometer measurements back into our EKF to improve our heading estimate when the UAV is on the ground or hovering.
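One minimal way to maintain such a per-axis fit with built-in expiration of old data is an exponentially weighted running linear fit. This is only an illustrative sketch of the idea, not the actual AuraUAS implementation:

```python
class AxisCalFit:
    """Running linear fit (calibrated = a*raw + b) for one magnetometer axis.

    Uses exponentially weighted sums so old data decays away; a 'forget'
    factor close to 1.0 means a long memory.
    """
    def __init__(self, forget=0.999):
        self.forget = forget
        self.n = self.sx = self.sy = self.sxx = self.sxy = 0.0

    def update(self, raw, predicted):
        # decay the accumulated sums, then fold in the new sample
        f = self.forget
        self.n   = f * self.n   + 1.0
        self.sx  = f * self.sx  + raw
        self.sy  = f * self.sy  + predicted
        self.sxx = f * self.sxx + raw * raw
        self.sxy = f * self.sxy + raw * predicted

    def coeffs(self):
        denom = self.n * self.sxx - self.sx * self.sx
        if abs(denom) < 1e-12:
            return 1.0, 0.0      # not enough spread yet: identity fit
        a = (self.n * self.sxy - self.sx * self.sy) / denom
        b = (self.sy - a * self.sx) / self.n
        return a, b

    def calibrate(self, raw):
        a, b = self.coeffs()
        return a * raw + b
```

Three of these (one per axis) give exactly the refine-as-you-fly behavior described above: new data nudges the fit, old data fades out.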
Does this actually work?
Yes it does. In my own tests, I have found that my on-the-fly calibration produces a calibrated magnetometer vector that is usually within 5 degrees of the predicted magnetometer vector. This error is on par with the typical cumulative attitude errors of a small UAS kalman filter and on par with a typical hand calibration that is traditionally performed.
The source code for the University of Minnesota 15-state kalman filter, along with a prototype self calibration system, and much more can be found at my AuraUAS github page here: https://github.com/AuraUAS/navigation
This article is fairly rough and I’ve done quite a bit of hand waving throughout. If you have questions or comments I would love to hear them and use that as motivation to improve this article. Thanks for reading!
Everything in this post shows real imagery taken from a real camera from a real uav which is really in flight. Hopefully that is obvious, but I just want to point out I’m not cheating. However, with a bit of math and a bit of camera calibration work, and a fairly accurate EKF, we can start drawing the locations of things on top of our real camera view. These artificial objects appear to stay attached to the real world as we fly around and through them. This process isn’t perfected, but it is fun to share what I’ve been able to do so far.
For the impatient
In this post I share 2 long videos. These are complete flights from start to finish. In them you can see the entire previous flight track whenever it comes into view. I don’t know how best to explain this, but watch the video and feel free to jump ahead into the middle of the flight. Hopefully you can see intuitively exactly what is going on.
Before you get totally bored, make sure to jump to the end of each video. After landing I pick up the aircraft and point the camera to the sky where I have been flying. There you can see my circles and landing approach drawn into the sky.
I think it’s pretty cool and it’s a pretty useful tool for seeing the accuracy and repeatability of a flight controller. I definitely have some gain tuning updates ready for my next time out flying based on what I observed in these videos.
The two videos
Additional notes and comments
- The autopilot flight controller shown here is built from a beaglebone black + mpu6000 + ublox8 + atmega2560.
- The autopilot is running the AuraUAS software (not one of the more popular and well known open-source autopilots.)
- The actual camera flown is a Runcam HD 2 in 1920×1440 @ 30 fps mode.
- The UAV is a Hobby King Skywalker.
- The software to post process the videos is all written in python + opencv and licensed under the MIT license. All you need is a video and a flight log and you can make these videos with your own flights.
- Aligning the flight data with the video is fully automatic (described in earlier posts here.) To summarize, I can compute the frame-to-frame motion in the video and automatically correlate that with the flight data log to find the exact alignment between video and flight data log.
- The video/data correlation process can also be used to automatically geotag video frames, perhaps to feed them to image stitching software.
If you have any questions or comments, I’d love to hear from you!
On June 1, 2009 Air France flight #447 disappeared over the Atlantic Ocean. The subsequent investigation concluded “that the aircraft crashed after temporary inconsistencies between the airspeed measurements – likely due to the aircraft’s pitot tubes being obstructed by ice crystals – caused the autopilot to disconnect, after which the crew reacted incorrectly and ultimately caused the aircraft to enter an aerodynamic stall from which it did not recover.” https://en.wikipedia.org/wiki/Air_France_Flight_447
This incident along with a wide variety of in-flight pitot tube problems across the aviation world have led the industry to be interested in so called “synthetic airspeed” sensors. In other words, is it possible to estimate the aircraft’s airspeed by using a combination of other sensors and techniques when we are unable to directly measure airspeed with a pitot tube?
Next, I need to say a quick word about basic aerodynamics with much hand waving. If we fix the elevator position of an aircraft, fix the throttle position, and hold a level (or fixed) bank angle, the aircraft will typically respond with a slowly damped phugoid and eventually settle out at some ‘trimmed’ airspeed.
If the current airspeed is faster than the trimmed airspeed, the aircraft will have a positive pitch up rate which will lead to a reduction in airspeed. If the airspeed is slower than the trimmed airspeed, the aircraft will have a negative pitch rate which will lead to an acceleration. https://en.wikipedia.org/wiki/Phugoid
The important point is that these variables are somehow all interrelated. If you hold everything else fixed, there is a distinct relationship between airspeed and pitch rate, but this relationship is highly dependent on the current position of elevator, possibly throttle, and bank angle.
Measurement Variables and Sensors
In a small UAS under normal operating conditions, we can measure a variety of variables with fairly good accuracy. The variables that I wish to consider for this synthetic airspeed experiment are: bank angle, throttle position, elevator position, pitch rate, and indicated airspeed.
We can conduct a flight and record a time history of all these variables. We presume that they have some fixed relationship based on the physics and flight qualities of the specific aircraft in its current configuration.
It would be possible to imagine some well crafted physics based equation that expressed the true relationship between these variables …. but this is a quick afternoon hack and that would require too much time and too much thinking!
Radial Basis Functions
Enter radial basis functions. You can read all about them here: https://en.wikipedia.org/wiki/Radial_basis_function
From a practical perspective, I don’t really need to understand how radial basis functions work. I can simply write a python script that imports the scipy.interpolate.Rbf module and just use it like a black box. After that, I might be tempted to write a blog post, reference radial basis functions, link to wikipedia, and try to sound really smart!
Training the Interpolator
Step one is to dump the time history of these 5 selected variables into the Rbf module so it can do its magic. There is a slight catch, however. Internally the rbf module creates an x by x matrix where x is the number of samples you provide. With just a few minutes of data you can quickly blow up all the memory on your PC. As a workaround I split the range of each variable into n bins. In this case I have 4 independent variables (bank angle, throttle position, elevator position, and pitch rate) which leads to an n x n x n x n matrix. For dimensions in the range of 10-25 this is quite manageable.
Each element of the 4 dimensional matrix becomes a bin that holds the average airspeed for all the measurements that fall within that bin. This matrix is sparse, so I can extract just the non-zero bins (where we have measurement data) and pass that to the Rbf module. This accomplishes two nice results: (1) reduces the memory requirements to something that is manageable, and (2) averages out the individual noisy airspeed measurements.
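A condensed sketch of the binning and training steps might look like this. The function name, bin count, and variable names are illustrative; scipy's Rbf does the heavy lifting:

```python
import numpy as np
from scipy.interpolate import Rbf

def train_synthetic_airspeed(bank, throttle, elevator, q, airspeed, n=12):
    """Bin the 4 independent variables into an n^4 grid, average the
    airspeed per occupied bin, then fit an Rbf through the occupied bin
    centers.  q is pitch rate."""
    data = np.column_stack([bank, throttle, elevator, q])
    lo, hi = data.min(axis=0), data.max(axis=0)
    # map each sample to its 4-d bin index
    idx = np.clip(((data - lo) / (hi - lo + 1e-12) * n).astype(int), 0, n - 1)
    sums = np.zeros((n,) * 4)
    counts = np.zeros((n,) * 4)
    for i, spd in zip(idx, airspeed):
        sums[tuple(i)] += spd
        counts[tuple(i)] += 1
    # keep only the occupied (non-zero) bins
    occupied = np.argwhere(counts > 0)
    centers = lo + (occupied + 0.5) / n * (hi - lo)
    means = sums[tuple(occupied.T)] / counts[tuple(occupied.T)]
    return Rbf(centers[:, 0], centers[:, 1], centers[:, 2], centers[:, 3], means)
```

Usage is then simply `interp = train_synthetic_airspeed(...)` followed by `est_airspeed = interp(bank, throttle, elevator, q)` on new in-flight measurements.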
Testing the Interpolator
Now comes the big moment! In flight we can still sense bank angle, throttle position, elevator position, and pitch rate. Can we feed these into the Rbf interpolator and get back out an accurate estimate of the airspeed?
Here is an example of one flight that shows this technique actually can produce some sensible results. Would this be close enough (with some smoothing) to safely navigate an aircraft through the sky in the event of a pitot tube failure? Could this be used to detect pitot tube failures? Would this allow the pitot tube to be completely removed (after the interpolator is trained of course)?
The source code for this experimental afternoon hack can be found here (along with quite a bit of companion code to estimate aircraft attitude and winds via a variety of algorithms.)
These are the results of a quick afternoon experiment. Hopefully I have shown that creating a useful synthetic airspeed sensor is possible. There are many other (probably better) ways a synthetic airspeed sensor could be derived and implemented. Are there other important flight variables that should be considered? How would you create an equation that models the physical relationship between these sensor variables? What are your thoughts?
NOTICE: This is a draft document and in the process of being written. It is incomplete and subject to change at any time without notice.
In this post I share my process and tools for creating HUD overlays on a flight video. Here is a quick overview of the process: (1) Calibrate your camera, (2) Extract the roll rate information from the flight video, (3) Automatically and perfectly time correlate the video with the flight data, (4) Render a new video with the HUD overlay.
Please be aware that this code has not yet gone through a v1.0 release process or outside review, so you are likely to run into missing python packages or other unanticipated issues. I hope that a few brave souls will plow through this for themselves and help me resolve poor documentation or gaps in the package requirements. All the code is open-source (MIT license) and available here:
Let’s jump right into it. The very first thing that needs to be done is calibrate your specific camera. This might sound difficult and mysterious, but have no fear, there is a script for everything! Every camera (especially cheaper action cameras) has unique lens imperfections. It is best to do your own calibration for each of your cameras rather than trying save some time and copy someone else’s configuration.
The camera calibration process involves feeding several images of a checkerboard calibration pattern to a calibration script. Each image is analyzed to locate the checkerboard pattern. This information is then passed to a solver that will find your camera’s specific calibration matrix and lens distortion parameters.
To make this process easier, the calibration script expects a short 1-2 minute movie. It processes each frame of the movie, locates the checkerboard pattern and stashes that information away. After all frames are processed, the script samples ‘n’ frames from the movie (where ‘n’ is a number around 100-200) and uses those frames to solve for your camera’s calibration. The reason that all frames are not used is because when ‘n’ starts pushing 250-300+, the solver begins to take a long time, where long is measured in hours not minutes.
Here is a sample calibration video. The goal is to always keep the checkerboard pattern fully in view while moving it around, closer, further, to different parts of the frame, and from different offset angles. It is also important to hold the camera steady and move it slowly to avoid the effects of blurring and rolling shutter.
Now save your movie to your computer and run:
calibrate_movie.py --movie <name>
The script will run for quite some time (be patient!) but will eventually spit out a camera calibration matrix and set of lens distortion parameters. Save these somewhere (copy/paste is your friend.)
Extract the Roll Rate Information from the Flight Video
First take the camera calibration and distortion parameters derived in step one, and copy them into the gyro rate estimation script.
Next, run the script. This script will detect features in each frame, match the found features with the previous frame, and compute the amount of rotation and translation from each frame to the next. (It also is a hacky image stabilization toy).
1-est-gyro-rates.py --scale 0.4 --movie <name> --no-equalize
Here is an example of the script running and detecting features. Notice that a very significant portion of the frame is covered by the aircraft nose and the prop covers most of the remaining area. That’s ok! The gyro estimation is still good enough to find the correlation with the flight data.
Correlate the Video with the Flight Data
The next script actually performs both of the last two steps (correlation and rendering.)
2-gen-hud-overlay.py --movie <name> --aura-dir <flight data dir> --resample-hz 30 --scale 0.45
The script loads the flight data log and the movie data log (created by the previous script). It resamples them both at a common fixed rate (30hz in the above example.) Then it computes the best correlation (or time offset) between the two.
Here is a plot of the roll rate estimate from the video overlaid with the actual roll gyro. You can see there are many differences, but the overall trend and pattern leads to only one possible correct time correlation.
This graph is especially noisy because a large portion of the outside view is obscured by the nose and the visible portion is further obscured by the propeller. But that is ok: the correlation process is really good at finding the true time alignment anyway. The next plot shows the results we can attain under more ideal conditions with an unobstructed view. Here is a plot that shows the video roll estimate and the actual roll gyro are almost perfectly in agreement.
Taking a step back, what did we just do there? Essentially, we have created an automated way to align the video frames with the flight data log. In other words, for any video frame number, I can compute the exact time in the flight log, and for any point in the flight log, I can compute the corresponding video frame number. Now all that is left is to draw the exact current flight data (that we now have a way to find) on top of the video.
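The correlation step itself can be sketched in a few lines of numpy, assuming both signals have already been resampled to the same fixed rate (the function name is illustrative):

```python
import numpy as np

def find_time_offset(video_roll, flight_roll, hz):
    """Find the time shift that best aligns two equally-resampled
    roll-rate signals via cross-correlation.

    A positive result means the video signal lags the flight log.
    """
    a = video_roll - np.mean(video_roll)
    b = flight_roll - np.mean(flight_roll)
    corr = np.correlate(a, b, mode="full")
    # index of the peak, converted to a lag in samples, then seconds
    lag = np.argmax(corr) - (len(b) - 1)
    return lag / hz
```

With the offset in hand, any video frame number maps to a flight-log time and vice versa, which is exactly the alignment described above.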
I look forward to your comments and questions!
This tutorial is far from complete and I know there are some built in assumptions about my own system and aircraft cooked into the scripts. Please let me know your questions or experiences and I will do my best to answer or improve the code as needed.
Blending real video with synthetic data yields a powerful (and cool!) way to visualize your kalman filter (attitude estimate) as well as your autopilot flight controller.
Conformal HUD Elements
Conformal definition: of, relating to, or noting a map or transformation in which angles and scale are preserved. For a HUD, this means the synthetic element is drawn in a way that visually aligns with the real world. For example: the horizon line is conformal if it aligns with the real horizon line in the video.
- Horizon line annotated with compass points.
- Pitch ladder.
- Location of nearby airports.
- Location of sun, moon, and own shadow.
- If alpha/beta data is available, a flight path marker is drawn.
- Aircraft nose (i.e. exactly where the aircraft is pointing.)
Nonconformal HUD Elements
- Speed tape.
- Altitude tape.
- Pilot or autopilot ‘stick’ commands.
Autopilot HUD Elements
- Flight director vbars (magenta). These show the target roll and pitch angles commanded by the autopilot.
- Bird (yellow). This shows the actual roll and pitch of the aircraft. The autopilot attempts to keep the bird aligned with the flight director using aileron and elevator commands.
- Target ground course bug (shown on the horizon line) and actual ground course.
- Target airspeed (drawn on the speed tape.)
- Target altitude (drawn on the altitude tape.)
- Flight time (for referencing the flight data.)
Case Study #1: EKF Visualization
(Note this video was produced earlier in the development process and doesn’t contain all the HUD elements described above.)
What to watch for:
- Notice the jumpiness of the yellow “v” on the horizon line. This “v” shows the current estimated ground track, but the jumpiness points to an EKF tuning parameter issue that has since been resolved.
- Notice a full autonomous wheeled take off at the beginning of the video.
- Notice some jumpiness in the HUD horizon and attitude and heading of the aircraft. This again relates back to an EKF tuning issue.
I may never have noticed the EKF tuning problems had it not been for this visualization tool.
Case Study #2: Spin Testing
What to watch for:
- Notice the flight path marker, which shows actual alpha/beta as recorded by physical alpha/beta airdata vanes.
- Notice how the conformal alignment of the hud diverges from the real horizon especially during aggressive turns and spins. The EKF fits the aircraft attitude estimate through gps position and velocity and aggressive maneuvers lead to gps errors (satellites go in and out of visibility, etc.)
- Notice that no autopilot symbology is drawn because the entire flight is flown manually.
Case Study #3: Skywalker Autopilot
What to watch for:
- Notice the yellow “v” on the horizon is still very jumpy. This is the horizontal velocity vector direction which is noisy due to EKF tuning issues that were not identified and resolved when this video was created. In fact it was this flight where the issue was first noticed.
- Notice the magenta flight director is overly jumpy in response to the horizontal velocity vector being jumpy. Every jump changes the current heading error which leads to a change in roll command which the autopilot then has to chase.
- Notice the flight attitude is much smoother than the above Senior Telemaster flight. This is because the skywalker EKF incorporates magnetometer measurements as well as gps measurements and this helps stabilize the filter even with poor noise/tuning values.
- You may notice some crazy control overshoot on final approach. Ignore this! I was testing an idea and got it horribly wrong. I’m actually surprised the landing completed successfully, but I’ll take it.
- Notice in this video the horizon stays attached pretty well. Much better than in the spin-testing video due to the non-aggressive flight maneuvers, and much better than the telemaster video due to using a more accurate gps: ublox7p versus ublox6. Going forward I will be moving to the ublox8.
Case Study #4: Results of Tuning
What to watch for:
- Notice that visually, the HUD horizon line stays pegged to the camera horizon within about a degree for most of this video. The EKF math says +/-3 standard deviations is about 1.4 degrees in pitch and roll.
- You may notice a little more variation in heading. +/-3 standard deviations in heading is roughly 4 degrees.
- Now that the EKF is tamed a bit better, we can start to tune the PIDs and go after some more subtle improvements in flight control. For example, this skywalker is kind of a floppy piece of foam. I estimate that I have to hold a 4-5 degree right bank to fly straight. We can begin to account for these aircraft specific nuances to improve tracking, autoland performance, etc.
These flight visualization videos are created with an automated process using open source tools and scripts. I have started a post on how to create these videos yourself.
I make thousands of mistakes a day, mistakes typing, mistakes coding software, mistakes driving, mistakes walking, forgetting to order my sandwich without mayo, etc. Most of the time they are immediately obvious — a red squiggly line under a word I mistyped, a compiler spewing an error message on line #42, a stubbed toe, my gps suggesting a u-turn at the next intersection, etc.
But what happens when the mistake isn’t obvious, isn’t noticed immediately, and doesn’t cause everything around me to immediately fail? Often these mistakes can have a long lifespan. Often we discover them when we are looking for something else.
Mistakes from the Trenches.
I wanted to write about a few subtle unnoticed mistakes that lurked in the AuraUAS code for quite some time.
Temperature Calibration #Fail
AuraUAS has a really cool capability where it can estimate the bias (error) of the accelerometers during flight. The 15-state EKF does this as part of its larger task of estimating the aircraft’s attitude, location, and velocity. These bias estimates along with the corresponding IMU temperature can be used to build up a temperature calibration fit for each specific IMU based on flight data over time. The more you fly in different temperature conditions, the better your temperature calibration becomes. Sweet! Calibrated accelerometers are important because accel calibration errors directly translate into errors in the initial roll and pitch estimates (like during launch or take off, where these values can be critical.) Ok, the EKF will sort the errors out once in the air, because that is a cool feature of the EKF, but it can’t work them out until after flying a bit.
The bias estimates and temperature calibration fit are handled by post-flight python scripts that work with the logged flight data. Question: should I log the raw accel values, or should I log the calibrated accel values? I decided to log the calibrated values and then use the inverse calibration fit function to derive the original raw values after the flight. Then I use these raw values to estimate the biases (errors), add the new data to the total collection of data for this particular IMU, and revise the calibration fit. The most straightforward path is to log calibrated values on board during flight (in real time) and push the complicated stuff off into post processing.
However, I made a slight typo in the property name of the temperature range limits for the fit (we only fit within the range of temperatures we have flight data for.) This means the on-board accel correction was forcing the temperature to 27C (ignoring the actual IMU temperature.) However, when backing out the raw values in post processing, I was using the correct IMU temperature and thus arriving at a wrong raw value. What a mess. That means a year of calibration flight data is basically useless and I have to start all my IMU calibration learning over from scratch. So I fixed the problem and we go forward from here with future flights producing a correct calibration.
Integer Mapping #Fail
This one is subtle. It didn’t produce incorrect values, it simply reduced the resolution of the IMU gyros by a factor of 4 and the accels by a factor of 2.
Years ago when I first created the apm2-sensor firmware — that converts a stock APM2 (atmega2560) board into a pure sensor head — I decided to change the configured range of the gyros and accels. Instead of +/-2000 degrees per second, I set the gyros for +/-500 degrees per second. Instead of +/-8 g’s on the accels, I set them for +/- 4 g’s. The sensed values get mapped to a 16 bit integer, so using a smaller range results in more resolution.
The APM2 reads the raw 16 bit integer values from the IMU and converts this to radians per second. However, when the APM2 sends these values to the host, it re-encodes them from a 4-byte float to a 2-byte (16-bit) integer to conserve bandwidth. Essentially this undoes the original decoding operation to efficiently transmit the values to the host system. The host reads the encoded integer value and reconverts it into radians per second for the gyros (or mps^2 for the accels.)
The problem was that for encoding and decoding between the APM2 and the host, I used the original scaling factor for +/-2000 dps and +/-8g, not the correct scaling factor for the new range I had configured. This mistake caused me to lose all the resolution I intended to gain. Because the system produced the correct values on the other end, I didn’t notice this problem until someone asked me exactly what resolution the system produced, which sent me digging under the hood to refresh my memory.
This is now fixed in apm2-sensors v2.52, but requires a change to the host software as well so the encoding and decoding math agrees. Now the IMU reports the gyro rates with a resolution of 0.015 degrees per second where as previously the resolution was 0.061 degrees per second. Both are actually pretty good, but it pained me to discover I was throwing away resolution needlessly.
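For the curious, here is the resolution arithmetic in runnable form. The numbers are illustrative; this is not the actual apm2-sensors encoding code:

```python
# Encoding a +/-500 dps gyro reading into a 16-bit integer.  Using the
# correct +/-500 scale keeps full resolution; reusing the +/-2000 scale
# (the bug) quarters it.
def encode(value_dps, full_scale_dps):
    counts = int(round(value_dps / full_scale_dps * 32768))
    return max(-32768, min(32767, counts))   # clamp to int16 range

def decode(counts, full_scale_dps):
    return counts / 32768 * full_scale_dps

res_correct = decode(1, 500.0)    # one count at the +/-500 dps scale
res_buggy   = decode(1, 2000.0)   # one count at the +/-2000 dps scale
```

Here `res_correct` works out to about 0.015 dps per count and `res_buggy` to about 0.061 dps per count: the factor-of-4 resolution loss described above.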
Main Loop Timing #Fail
This one is also very subtle; timing issues often are. In the architecture of the AuraUAS flight controller there is an APM2 spitting out new sensor data at precisely 100 hz. The host is a beaglebone (or any linux computer) running its own precise 100 hz main loop. The whole system runs at 100 hz throughput and life is great — or so I thought.
I had been logging flight data at 25hz which has always been fine for my own needs. But recently I had a request to log the flight data at the full 100 hz rate. Could the beaglebone handle this? The answer is yes, of course, and without any trouble at all.
A question came up about logging high rate data on the well known PX4, so we had a student configure the PX4 for different rates and then plot out the time slice for each sample. We were surprised at the huge variations in the data intervals, ranging from way too fast, to way too slow, and rarely exactly what we asked for.
I know that the AuraUAS system runs at exactly 100hz because I’ve been very careful to design it that way. Somewhat smugly I pulled up a 100hz data set and plotted out the time intervals for each IMU record. The plot surprised me — my timings were all over the map and not much better than the PX4. What was going on?
I took a closer look at the IMU records and noticed something interesting. Even though my main loop was running precisely and consistently at 100 hz, my system was often skipping every other IMU record. AuraUAS is designed to read whatever sensor data is available at the start of each main loop iteration and then jump into the remaining processing steps. Because the APM2 runs its own loop timing separate from the host linux system, the timing between sending and receiving (and uart transferring) can be misaligned, so that when the host is ready to read sensor data there might not be any yet, and the next time there may be 2 records waiting. It is subtle, but communication between two free-running processor loops can lead to issues like this. The end result is usually still ok: the EKF handles variable dt just fine, the average processing rate maybe drops to 50hz, and that’s still fine for flying an airplane around the sky … no big deal, right? And it’s really not that big of a deal for getting the airplane from point A to point B, but if you want to do high resolution analysis of the flight data, then you do have a big problem.
What is the fix? There are many ways to handle timing issues in threaded and distributed systems, but you have to be very careful: often what you get out of your system is not what you expected or intended. In this case I amended my host system’s main loop structure to throw away its own free-running main loop. I modified the APM2 data output routine to send the IMU packet last in each frame’s output to mark the end of data. Now the main loop on the host system reads sensor data until it receives an IMU packet. Then and only then does it drop through to the remaining processing steps. This way the timing of the system is controlled precisely by the APM2, the host system’s main loop logic is greatly simplified, and the per frame timing is far more consistent … but not consistent enough.
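The read-until-IMU-packet idea is simple enough to sketch. The `link` object and packet types here are hypothetical stand-ins for the real serial protocol:

```python
def read_frame(link):
    """Drain sensor packets until the IMU packet arrives, then return
    everything received for this frame.

    Assumes the sender emits the IMU packet last in each frame, so it
    doubles as an end-of-frame marker.  'link' is a hypothetical object
    with a blocking read_packet() -> (packet_type, data).
    """
    frame = {}
    while True:
        ptype, data = link.read_packet()
        frame[ptype] = data
        if ptype == "imu":        # IMU marks end-of-frame
            return frame
```

The host main loop then becomes `frame = read_frame(link)` followed by the EKF and control steps, with the frame rate dictated entirely by the sensor head.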
The second thing I did was to include the APM2 timestamp with each IMU record. This is a very stable, consistent, and accurate timestamp, but it counts up from a different starting point than the host clock. On the host side I measure the difference between the host clock and the APM2 clock, low-pass filter that difference, and add the filtered difference back to the APM2 timestamp. The result is a pretty consistent value in the host’s frame of (time) reference.
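That clock alignment scheme can be sketched as a simple exponential filter on the observed offset (the filter constant here is a made-up value for illustration, not necessarily what AuraUAS uses):

```python
class ClockAligner:
    """Map remote (APM2) timestamps into the host's time reference by
    low-pass filtering the observed host-minus-remote clock offset."""

    def __init__(self, time_constant=10.0, dt=0.01):
        # exponential filter weight for a ~10 second time constant at 100hz
        self.alpha = dt / time_constant
        self.offset = None

    def update(self, host_time, remote_time):
        measured = host_time - remote_time
        if self.offset is None:
            self.offset = measured           # initialize on the first sample
        else:
            self.offset += self.alpha * (measured - self.offset)
        return remote_time + self.offset     # remote stamp in the host frame
```

Because the APM2 clock is steady, the jitter lives almost entirely in the measured offset, and the low-pass filter squeezes it out.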
Here is a before and after plot. The before plot is terrible! (But flies fine.) The after plot isn’t perfect, but might be about as good as it gets on a linux system. Notice the difference in Y-scale between the two plots. If you think your system is better than mine, log your IMU data at 100hz and plot the dt between samples and see for yourself. In the following plots, the Y axis is dt time in seconds. The X axis is elapsed run time in seconds.
Even with this fix, I see the host system’s main loop timing vary between 0.008 and 0.012 seconds per frame, occasionally even worse (100hz should ideally equal exactly 0.010 seconds.) This is now far better than the system was doing previously, and far, far better than the PX4 does … but still not perfect. There is always more work to do!
These mistakes (when finally discovered) all led to important improvements to the AuraUAS system: better accelerometer calibration, better gyro resolution, better time step consistency with no dropped frames. Will it help airplanes get from point A to point B more smoothly and more precisely? Probably not in any externally visible way. Mistakes? I still make them 1000’s of times a day. Lurking hidden mistakes? Yes, those too. My hope is that no matter what stage of life I find myself in, I’m always working for improvements, always vigilant to spot issues, and always focused on addressing issues when they are discovered.
Congrats ATI Resolution 3, Hobby Lobby Senior Telemaster, Hobbyking Skywalker, and Avior-lite autopilot on your recent milestones!
Avior-lite (beaglebone + apm2 hybrid) autopilot:
- 300th logged flight
- 7000+ logged flight minutes (117.8 hours)
- 6400+ fully autonomous flight minutes (107.2 hours)
- 2895 nautical miles flown (3332 miles, 5362 km)
Hobby Lobby Senior Telemaster (8′ wing span):
- Actively flight testing autopilot hardware and software changes since 2007!
- 200th logged flight.
- 5013 logged flight minutes (83.5 hours)
- 4724 fully autonomous flight minutes (78.7 hours)
- 2015 nautical miles flown (2319 miles, 3733 km)
Today (October 7, 2015) I logged the 300th avior-lite flight and simultaneously logged the 200th flight on my venerable Senior Telemaster. I realize these are just numbers, and they wouldn’t be big numbers for a full scale aircraft operation or even a production UAV operation, but they represent my personal effort in the UAV field.
I’m proud of a mishap-free 2015 flying season so far! (Ok, err, well one mishap on the very first launch of the skywalker … grrr … pilot error … and fixable thankfully.)
Enjoy the fall colors and keep flying!
The ardupilot mega is a fairly capable complete autopilot from both the hardware and the software perspective. But what if your project needs all the sensors but not the full APM2 autopilot code?
The apm2-sensorhead project provides a quick, robust, and inexpensive way to add a full suite of inertial and position sensors to your larger robotics project. This project is a replacement firmware for the ardupilot-mega hardware. The stock arduplane firmware has been stripped down to just the library code that interrogates the connected sensors, plus the code that reads your RC transmitter stick positions through a stock RC receiver and drives up to 8 servo outputs. It also includes manual override (safety pilot) code and code to communicate with a host computer. The next upcoming feature will be onboard mixing for vtail, elevon, and flaperon aircraft configurations. This gives you the option to fly complicated airframes (and have an autopilot fly complicated airframes) without needing a complicated transmitter or adding complicated mixing code to your autopilot application.
Speaking a bit defensively: I want to address the “why?” question. First of all, I needed something like this for one of my own autopilot projects. That’s really the primary motivation right there, and I could just skip to the next section. If you can answer this question for yourself, then you are my target audience! Otherwise, if your imagination is not already running off on its own, why should you or anyone else possibly be interested?
- You are working in the context of a larger project and need to incorporate an IMU and GPS and possibly other sensors. Do you design your own board? Do you shoehorn your code onto the ardupilot? Do you look at some of the other emerging boards (pixhawk, etc.?) What if you could integrate the sensors you need quickly and easily?
- The ardupilot mega is a relatively inexpensive board. There are some APM clones available that are even less expensive. It would be hard to put together the same collection of sensors for a lower price by any other means.
- The ardupilot mega is proven and popular and has seen very wide scale testing. It is hard to find any other similar device on the market that has been tested as thoroughly and under as wide a variety of applications and environments than the ardupilot.
- The ardupilot mega code (especially the sensor I/O and RC code) is also tremendously well tested and ironed out.
- By stripping all the extra functionality out of the firmware and concentrating simply on sensor IO and communication with a host computer, the new firmware is especially lean, mean, fast, and simple. Whereas the full arduplane code is bursting the atmega2560 cpu at the seams with no more room to add anything, compiling the apm2-sensorhead code reports: “Binary sketch size: 36,132 bytes (of a 258,048 byte maximum)”
- Along with plenty of space for code, removing all the extraneous code allows the CPU to run fast and service all its required work without missing interrupts and without dropping frames.
- There is a design philosophy that prefers splitting the hard real-time work of low-level sensor interrogation from the higher level intelligence of the application. This can lead to two simpler applications that each do their own task efficiently and well, versus a single monolithic conglomeration of everything, which can grow to be quite complex. With all the hard real-time work taken care of by the APM2, the host computer application has far less need for a complicated and hard-to-debug thread-based architecture.
- The APM2 board can connect to a host computer trivially via a standard USB cable. This provides power and a UART-based communication channel. That’s really all you need to graft a full suite of sensors to your existing computer and existing application. Some people like to solder chips onto boards, and some people don’t. Some people like to write SPI and I2C drivers, and some people don’t. Personally, I don’t mind if once in a while someone hands me an “easy” button. 🙂
Some Technical Details
- The apm2-sensorhead firmware reports the internal sensor data @ 100hz over a 115,200 baud uart.
- The apm2-sensorhead firmware can be loaded on any version of the ardupilot-mega from 2.0 to 2.5, 2.6, and even 2.7.2 from hobbyking.
- The UART on the APM2 side is 5V TTL which you don’t have to worry about if you connect up with a USB cable.
- This firmware has flown extensively for 3 flying seasons at the time of this writing (2012, 2013, 2014) with no mishap attributable to a firmware problem.
- The firmware communicates bi-directionally with a host computer using a simple binary 16-bit checksum protocol. Sensor data is sent to the host computer and servo commands are read back from the host computer.
- Released under the GPLv3 license (the same as the arduplane library code.)
- Code available at the project site: https://github.com/clolsonus/apm2-sensorhead This code requires a minor ‘fork’ of the APM2 libraries available here: https://github.com/clolsonus/libraries
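To give a feel for the shape of such a link, here is a sketch of a framed packet with a simple 16-bit checksum. The start bytes, layout, and the Fletcher-16 variant are assumptions for illustration; see the project source for the actual byte-level protocol:

```python
START0, START1 = 0x93, 0x42   # hypothetical start-of-frame bytes

def fletcher16(data: bytes) -> bytes:
    """Simple 16-bit checksum of the kind used on small UAV serial links."""
    sum1 = sum2 = 0
    for b in data:
        sum1 = (sum1 + b) % 256
        sum2 = (sum2 + sum1) % 256
    return bytes([sum1, sum2])

def frame_packet(packet_id: int, payload: bytes) -> bytes:
    """Wrap a payload: start bytes, id, length, payload, checksum."""
    body = bytes([packet_id, len(payload)]) + payload
    return bytes([START0, START1]) + body + fletcher16(body)

def parse_packet(frame: bytes):
    """Return (packet_id, payload), or None if the frame is corrupt."""
    if frame[:2] != bytes([START0, START1]):
        return None
    body, cksum = frame[2:-2], frame[-2:]
    if fletcher16(body) != cksum:
        return None
    return body[0], body[2:2 + body[1]]
```

The same framing works in both directions: sensor packets flow up to the host, servo command packets flow back down.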
I am writing this article to coincide with the public release of the apm2-sensorhead firmware under the LGPL license. I suspect there will not be wide interest, but if you stumble upon this page and have a question or see something important I have not addressed, please leave a comment. I continue to actively develop and refine and fly (fixed wing aircraft) with this code as part of my larger system.
A small UAV + a camera = aerial pictures.
This is pretty cool just by itself. The above images are downsampled, but at full resolution you can pick out some pretty nice details. (Click on the following image to see the full/raw pixel resolution of the area.)
The next logical step of course is to stitch all these individual images together into a larger map. The questions are: What software is available to do image stitching? How well does it work? Are there free options? Do I need to explore developing my own software tool set?
Various aerial imaging sites have set the bar at near visual perfection. When we look at google maps (for example), the edges of runways and roads are exactly straight, and it is almost impossible to find any visible seam or anomaly in their data set. However, it is well known that google imagery can be several meters off from its true position, especially away from well travelled areas. Also, their imagery can be a bit dated and is lower resolution than we can achieve with our own cameras … these are the reasons we might want to fly a camera and get more detailed, more current, and perhaps more accurately placed imagery.
Of course the first goal is to meet our expectations. 🙂 I am very averse to grueling manual stitching processes, so the second goal is to develop a highly automated process with minimal manual intervention needed. A third goal is to be able to present the data in a way that is useful and manageable to the end user.
Attempt #1: Hugin
Hugin is a free/open-source image stitching tool. It appears to be well developed, very capable, and supports a wide variety of stitching and projection modes. At its core it uses SIFT to identify features and create a set of keypoints. It then builds a KD tree and uses fast nearest-neighbor search to find matching features between image pairs. This is pretty state of the art stuff as far as my investigation into this field has shown.
Unfortunately I could not find a way to make hugin deal with a set of pictures taken mostly straight down and from a moving camera position. Hugin seems to be optimized for personal panoramas … the sort of pictures you would take from the top of a mountain when just one shot can’t capture the full vista. Stitching aerial images together involves a moving camera vantage point, and this seems to confuse all of hugin’s built in assumptions.
I couldn’t find a way to coax hugin into doing the task. If you know how to make this work with hugin, please let me know! Send me an email or comment at the bottom of this post!
Attempt #2: flightriot.com + Visual SFM + CMPMVS
Someone suggested I check out flightriot.com. This looks like a great resource, and they have outlined a processing path using a number of free or open-source tools.
Unfortunately I came up short with this tool path as well. From the pictures and docs I could find on these software packages, it appears that the primary goal of this site (and referenced software packages) is to create a 3d surface model from the aerial pictures. This is a really cool thing to see when it works, but it’s not the direction I am going with my work. I’m more interested in building top down maps.
Am I missing something here? Can this software be used to stitch photos together into larger seamless aerial maps? Please let me know!
Attempt #3: Microsoft ICE (Image Composite Editor)
Ok, now we are getting somewhere. MS ICE is a slick program. It’s highly automated to the point of not even offering much ability for user intervention. You simply throw a pile of pictures at it, and it finds keypoint matches, and tries to stitch a panorama together for you. It’s easy to use, and does some nice work. However, it does not take any geo information into consideration. As it fits images together you can see evidence of progressively increased scale and orientation distortion. It has trouble getting all the edges to line up just right, and occasionally it fits an image into a completely wrong spot. But it does feather the edges of the seams so the final result has a nice look to it. Here is an example. (Click the image for a larger version.)
The result is rotated about 180 degrees off, and the scale at the top is grossly exaggerated compared to the scale at the bottom of the image. If you look closely, it has a lot of trouble matching up the straight line edges in the image. So ICE does a commendable job for what I’ve given it, but I’m still way short of my goal.
Here is another image set stitched with ICE. You can see it does a better job avoiding progressive scaling errors on this set. However, linear features are still crooked, there are many visual discontinuities, and in one spot it has completely bungled the fit and inserted a fragment in entirely the wrong place. So it still falls pretty short of my goal of making a perfectly scaled, positioned, and seamless map that would be useful for science.
Attempt #4: Write my own stitching software
How hard could it be … ? 😉
- Find the features/keypoints in all the images.
- Compute a descriptor for each keypoint.
- Match keypoint descriptors between all possible pairs of images.
- Filter out bad matches.
- Transform each image so that its keypoint positions match exactly (maybe closely? maybe roughly on the same planet as …?) those same keypoints as they are found in all other matching images.
I do have an advantage I haven’t mentioned until now: I have pretty accurate knowledge of where the camera was when each image was taken, including the roll, pitch, and yaw (“true” heading). I am running a 15-state Kalman filter that estimates attitude from the gps + inertials. Thus it converges to “true” heading: not magnetic heading, not ground track, but true orientation. Knowing true heading is critically important for accurately projecting images into map space.
The following image shows the OpenCV “ORB” feature detector in action along with the feature matching between two images.
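The heart of that matching step is comparing binary ORB descriptors by Hamming distance and rejecting ambiguous pairs with a ratio test. Here is a minimal pure-Python stand-in for what OpenCV’s brute-force matcher does (descriptors shortened to small integers for illustration; real ORB descriptors are 256-bit):

```python
def hamming(d1: int, d2: int) -> int:
    """Number of differing bits between two binary descriptors."""
    return bin(d1 ^ d2).count("1")

def match_descriptors(set_a, set_b, ratio=0.75):
    """Brute-force match: for each descriptor in set_a, find the two
    nearest descriptors in set_b and keep the match only if the best
    distance is clearly better than the second best (the ratio test)."""
    matches = []
    for i, da in enumerate(set_a):
        dists = sorted((hamming(da, db), j) for j, db in enumerate(set_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches
```

The ratio test is what throws out features that look like several different spots in the other image, which is exactly the failure mode that puts image fragments in completely wrong places.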
Compare the following fit to the first ICE fit above. You can see a myriad of tiny discrepancies. I’ve made no attempt to feather the edges of the seams, and in fact I’m drawing every image in the data set using partial translucency. But this fit does a pretty good job of preserving the overall geographically correct scale, position, and orientation of all the individual images.
Here is a second data set taken of the same area. This corresponds to the second ICE picture above. Hopefully you can see that straight line edges, orientations, and scaling are better preserved.
Perhaps you might also notice that because my own software tool set understands the camera location when the image is taken, the projection of the image into map space is more accurately warped (none of the images have straight edge lines.)
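The projection into map space boils down to intersecting each camera-frame ray with the ground, given the camera pose. Here is a simplified sketch assuming flat ground at z = 0, a local NED frame, and the standard yaw-pitch-roll rotation sequence; the real pipeline would also account for lens distortion and terrain elevation:

```python
import math

def project_to_ground(cam_pos, yaw_deg, pitch_deg, roll_deg, ray_cam):
    """Intersect a camera-frame ray with flat ground (z = 0).

    cam_pos: (north, east, down) of the camera; down is negative altitude.
    Returns the (north, east) ground hit point, or None if the ray
    points above the horizon.
    """
    cy, sy = math.cos(math.radians(yaw_deg)), math.sin(math.radians(yaw_deg))
    cp, sp = math.cos(math.radians(pitch_deg)), math.sin(math.radians(pitch_deg))
    cr, sr = math.cos(math.radians(roll_deg)), math.sin(math.radians(roll_deg))
    # body-to-NED rotation matrix (Z-Y-X Euler sequence), row by row
    R = [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr],
        [-sp, cp * sr, cp * cr],
    ]
    ray_ned = [sum(R[i][k] * ray_cam[k] for k in range(3)) for i in range(3)]
    if ray_ned[2] <= 0:
        return None                      # ray never reaches the ground
    t = -cam_pos[2] / ray_ned[2]         # scale the ray to reach z = 0
    return (cam_pos[0] + t * ray_ned[0], cam_pos[1] + t * ray_ned[1])
```

Projecting all four image corners this way is what produces the non-rectangular, correctly warped image outlines in map space.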
Do you have any thoughts, ideas, or suggestions?
This is my first adventure with image stitching and feature matching. I am not pretending to be an expert here. The purpose of this post is to hopefully get some feedback from people who have been down this road before and perhaps found a better or different path through. I’m sure I’ve missed some good tools, and good ideas that would improve the final result. I would love to hear your comments and suggestions and experiences. Can I make one of these data sets available to experiment with?
To be continued….
Expect updates, edits, and additions to this posting as I continue to chew on this subject matter.