From Wikipedia: In photography and optics, vignetting is a reduction of an image’s brightness or saturation toward the periphery compared to the image center.
When presenting a collection of images as a mosaic, the vignetting in the imagery can cause visual discontinuities at the image borders. Here I present a simple strategy to model and correct the vignette in a collection of images.
Step 1: Compute the pixel-wise average for a set of images
I start by creating a 2d numpy array of float32 type with the same dimensions as our camera image. Each [u, v] position in the numpy array holds the sum of that pixel position across all the images. The “average image” is created by dividing each pixel sum by the total number of images. Here is an average of 7 images. Features from the individual images show through, but you can already begin to see the darkening at the corners:
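In case it helps, here is a minimal numpy sketch of step 1. The function name and the tiny synthetic "images" are just for illustration; the real code lives in the ImageAnalysis repo.

```python
import numpy as np

def average_image(images):
    """Pixel-wise average of a list of equally sized images.

    Accumulate in float32 so the running sum does not overflow the
    8-bit range of the source pixels, then divide by the image count.
    """
    acc = np.zeros_like(images[0], dtype=np.float32)
    for img in images:
        acc += img.astype(np.float32)
    return acc / len(images)

# quick check with two tiny synthetic "images"
a = np.full((2, 2), 100, dtype=np.uint8)
b = np.full((2, 2), 200, dtype=np.uint8)
avg = average_image([a, b])
```

With real data you would loop over the files on disk, accumulating one image at a time so only one full-resolution frame is in memory at once.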
Now here is the average of the full 1635 image set. The vignetting is clearly visible at the edges and corners. No individual image details are discernible:
Step 2: Compute a best fit function
The camera/lens calibration provides the optical center of the image (which may be offset from the actual center of the image due to lens irregularities.) For each [u, v] pixel coordinate in the average image, the algorithm records the radius from the optical center together with the pixel intensity at that position. (This is done for each BGR color channel individually.) Here is the plot of radius vs. red channel intensity and the best fit function. The fit function is a * x^4 + b * x^2 + c.
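Since the model a * x^4 + b * x^2 + c is linear in its coefficients, the fit can be solved directly with linear least squares rather than an iterative optimizer. A sketch with synthetic data (illustrative names, not the project's actual code):

```python
import numpy as np

def fit_vignette(radius, intensity):
    """Least-squares fit of intensity = a*r^4 + b*r^2 + c.

    The model is linear in (a, b, c), so we build the design matrix
    [r^4, r^2, 1] and solve it directly with lstsq.
    """
    A = np.column_stack([radius**4, radius**2, np.ones_like(radius)])
    (a, b, c), *_ = np.linalg.lstsq(A, intensity, rcond=None)
    return a, b, c

# synthetic radius/intensity samples generated from known coefficients
r = np.linspace(0.0, 1.0, 50)
y = -10.0 * r**4 - 20.0 * r**2 + 200.0
a, b, c = fit_vignette(r, y)
```

In practice a and b come out negative (intensity falls off with radius) and c is roughly the intensity at the optical center.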
Step 3: Generate the idealized vignette correction mask
Finally, the algorithm generates an idealized vignette mask based on the fit function. Dithering is used to hide possible banding artifacts. Here is the final vignette mask. This is summed with the original images to give them approximately even brightness across the entire frame. It may look like a plain black image, but the corners are carefully lightened based on the best fit function determined above.
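Here is a rough sketch of how such an additive mask could be generated, dither included. The correction at radius r is f(0) - f(r), i.e. how much darker that pixel is than the optical center. This is illustrative only; the actual project code may differ in its details.

```python
import numpy as np

def vignette_mask(shape, center, a, b, c, rng=None):
    """Additive correction mask from the fit f(r) = a*r^4 + b*r^2 + c.

    The correction at radius r is f(0) - f(r).  Random dithering of
    +/- 0.5 before rounding hides banding in the quantized 8-bit mask.
    """
    rng = rng or np.random.default_rng(0)
    v, u = np.indices(shape, dtype=np.float32)
    r = np.hypot(u - center[0], v - center[1])
    corr = c - (a * r**4 + b * r**2 + c)        # f(0) - f(r)
    corr += rng.uniform(-0.5, 0.5, shape)       # dither before quantizing
    return np.clip(np.rint(corr), 0, 255).astype(np.uint8)

# tiny example: 4x6 mask, optical center at (u=3, v=2), made-up coefficients
mask = vignette_mask((4, 6), center=(3.0, 2.0), a=-1e-3, b=-1.0, c=200.0)
```

The mask is near zero at the optical center and grows toward the corners, which is exactly the "mostly black, lightened corners" image described above.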
Step 4: Apply the mask to the images
Here is a before mosaic with no vignette correction. I intentionally picked a snowy scene shot in low light because the vignetting is maximally visible under these conditions.
After correction. Here is the same scene with vignetting correction applied. The result is not perfect, but the improvement is substantial.
The code to compute the vignette correction for a set of images and then non-destructively apply the correction for visual presentation (along with code to optimize the fit of the images in the first place) is all part of the ImageAnalysis project developed by the University of Minnesota AEM Department, UAV Lab. The code is written entirely in python and is licensed under the MIT open-source software license: https://github.com/UASLab/ImageAnalysis
Recently I flew a DJI Phantom 4 Pro v2 head to head with an in-house (U of MN AEM UAS Lab) developed fixed wing UAS. This comparison isn’t entirely apples to apples, but maybe someone will find it useful.
DJI is the king of the hill for small UAS aerial surveys. Once you figure out the apps and a few basic things, operating one of these is pretty much click and fly and makes aerial survey work about as easy as it can be. Some quick details of our system:
Camera horizontal field of view: about 67 degrees.
20 Megapixel RGB sensor (4864 x 3648 pixels)
At 400′ AGL the image would cover approximately 162 x 121 meters @ 3.3 cm per pixel.
Useful mission flight time: approximately 20 minutes.
Typical mission cruise speed 13 mph.
Megapixels collected per mission: about 8000.
Vertical take off and landing for operating in constrained areas: Yes!
Our in-house fixed wing survey platform is a full size X-UAV Talon (with the 15 cm wing extensions.) The camera is a Sony A6000 with a 30mm prime lens.
Camera horizontal field of view: about 43 degrees.
24 Megapixel RGB sensor (6000 x 4000 pixels)
At 400′ AGL the image would cover approximately 96 x 63 meters @ 1.6 cm per pixel.
Useful mission flight time: approximately 75 minutes.
Typical mission cruise speed 30 mph.
Megapixels collected per mission: about 60,000 (7.5x more data than a DJI flight.)
Operates out of constrained areas: No! (But the system has auto-launch and auto-land capabilities to minimize pilot workload during operations.)
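The footprint and ground sample distance numbers above follow directly from the horizontal field of view, the altitude, and the sensor dimensions. A quick sketch of the math (assuming a rectilinear lens pointed straight down; function name is just for illustration):

```python
import math

def footprint(alt_m, hfov_deg, h_pixels, v_pixels):
    """Ground coverage and ground sample distance for a nadir camera.

    width  = 2 * altitude * tan(hfov / 2)
    height follows from the sensor aspect ratio
    GSD is meters of ground covered per pixel
    """
    width = 2.0 * alt_m * math.tan(math.radians(hfov_deg) / 2.0)
    height = width * v_pixels / h_pixels
    return width, height, width / h_pixels

# DJI Phantom 4 Pro numbers from above: 67 deg hfov, 4864x3648, 400 ft AGL
w, h, gsd = footprint(400 * 0.3048, 67.0, 4864, 3648)
```

Plugging in the DJI numbers reproduces the roughly 162 x 121 meter footprint at 3.3 cm per pixel quoted above; swapping in 43 degrees and 6000 x 4000 reproduces the Sony A6000 line.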
Side note: our in-house system flies with our in-house “Goldy3” autopilot flight controller.
Recently we flew both of these systems over the same area for a head to head match up. We flew both systems at an altitude of 200′ AGL. If you look at the respective camera specs you will already know what to expect, but I wanted to share the head to head imagery.
The Sony A6000 yields approximately double the pixel resolution at the same altitude (4x the pixels for any area.) In addition, the Sony imaging sensor has about 3x the area compared to the DJI imaging sensor. You can see much better pixel detail in the Sony images as well as more subtle color variations, less washout (saturation), richer colors, and fewer compression artifacts when you zoom way in. So with all that said, here are the image snippets for direct comparison.
Important! For best image pair viewing results: right-click on each image and select “open in new window”. Then drag the windows so you can see the images side by side. You may be able to click in each new window to expand the size to full resolution. If you are reading this on your phone, then I’m sorry!
Van: look for subtle things like the windshield washer nozzles, the wheel rim pattern, gravel pattern.
Stumps: notice the overall details; you can see bark patterns in the Sony image.
Evergreens and snow patch: see how the subtle details are preserved in the snow patch better with the Sony camera and individual pine needles are clearly visible.
Spring buds: notice the details in the fine branching structures and the new buds on all the branches. This is a pair of images that also show the extra jpg compression artifacts in the DJI images.
Gravel road: notice the richer colors in the Sony image, notice the subtle details in the snow patch (the DJI image gets totally washed out in the snow area) and notice the overall sharper detail in the gravel. And again you can see where the extra jpg compression on the DJI image limits how much you can usefully zoom in.
Pile of branches: look for the detail in the bark pattern, the shading detail in the snow patches, the detail vs. compression artifacts in the tiny branches.
Conclusion: This is a comparison of two systems (the Phantom 4, a popular commercial system, vs. our U of MN UAS Lab in-house built fixed wing system) both flying at 200′ AGL. This is not a perfect apples to apples comparison and it may not be representative of your own typical use cases. What I have shown here is that a higher quality camera produces better pictures (duh!). For some types of missions the details and the image quality do matter!
If you have a thought or question about the aircraft, cameras, or images, please leave a comment below. Thanks for reading!
First, if you are interested in doing real time process control on Linux, go watch this awesome presentation. This is way more important than reading my post! Do it now!
Where was I when all this happened?
From the casual way Sandra speaks of SCHED_FIFO, I feel like this is something just about everyone in the Linux world has known and used for the last 10 years except for me! Sadly, I am just hearing about it now, but happily I am hearing about it! Here is a quick report on what I have learned.
First, what do I want to do?
The goal of this entire exercise is to run my linux-based AuraUAS autopilot controller application at a solid 100hz. This gives me a time budget of 10 milliseconds (ms) per frame. Note: AuraUAS is not related to the ardupilot or px4 flight stacks, it has a completely independent development history and code base.
Baseline (Naive) Performance Results
I start by running my autopilot code on a beaglebone (single core Arm Cortex @ 1000Mhz) with a relatively slow micro sd card. The slow SD card has a significant impact on the portion of my app that writes the log file. Note: the AuraUAS autopilot code is single-threaded for logical simplicity, easier code validation, avoidance of tricky thread-related bugs, etc.
By default linux schedules processes with CFS (the Completely Fair Scheduler.) This scheduler tries to maximize cpu utilization and interactive response times while giving every process an equal opportunity to run. It is a good default scheduler, but obviously a bit more fair than I would like.
Launching my app with default priorities and scheduling options, I get an average frame time of about 7 milliseconds (ms.) I get an occasional big miss in the logging module, where it can hang for 0.2, 0.3, even 0.4 seconds (200-400 ms), presumably stuck writing to the SD card. Even many of my sub-modules have surprisingly variable time consumption, and some of them occasionally blow past the 10 ms time budget. This is about as lousy as you can get. No one wants random long delays on top of widely jittering frame times. This is the whole reason you don’t use linux (or python) for real time process control.
As a side note, the PX4 flight stack is an example that shows how naive use of a pretty good real-time system (Nuttx) can also lead to similarly non-deterministic timing intervals (possibly even worse than stock linux.) In the end it pays to know your system and focus on timing if you are doing real time process control. Threads and real-time operating systems gain you nothing if you don’t use them strategically. (I don’t mean to be critical here, PX4 also does some amazing and beautiful stuff, they just didn’t care much about timing intervals, which leads to much worse results than you would expect if you actually measure their performance.)
For extra credit, go watch this awesome video presentation by Andrew Tridgell that explains some of the amazing work they are doing with ChibiOS to achieve extremely tight interval timing results. Tridge also includes some information on why Nuttx couldn’t meet their real-time objectives (really interesting stuff for those that care about these things). Go!
The Linux ‘chrt’ Command
While watching Sandra’s presentation, she referenced something called SCHED_FIFO. I paused the video and googled it to determine that it was ‘a thing’ and how to spell it. After the video finished, I sat down and figured out how to invoke processes with FIFO / real time priority. Doing this gives me much better results than I could achieve with the default linux scheduler. The command is chrt (CHange Real-Time attributes):
sudo chrt -f 99 <my command and options>
The -f says run this with FIFO scheduling, and 99 is the highest possible system priority. If I run ‘top’ in another window, my process now shows up with a special ‘rt’ code in the priority column. Now my process is running at a higher priority than those pesky dumb kernel worker threads. Essentially as long as it has something to do, my app will run ahead of just about everything else on the system. (There are still a couple things with similar top rt status, so I don’t quite ever have exclusive control of the CPU.)
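For what it’s worth, the same thing chrt does can also be requested from inside a python process via os.sched_setscheduler (Linux only, and it needs root or CAP_SYS_NICE.) A hedged sketch; the function name and graceful fallback are my own, not anything from AuraUAS:

```python
import os

def try_set_fifo(priority=99):
    """Attempt to switch the calling process to SCHED_FIFO.

    Roughly equivalent to launching with 'chrt -f <priority>'.  Requires
    root (or CAP_SYS_NICE) on Linux; returns False instead of raising
    when the privilege is missing, so an app can fall back gracefully
    to the default scheduler.
    """
    try:
        os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))
        return True
    except (PermissionError, AttributeError, OSError):
        # not privileged, or not a Linux system
        return False

ok = try_set_fifo(99)
```

Doing it in-process means you don’t need to remember the sudo chrt incantation every time you launch, though launching via chrt keeps the privilege requirement out of your application.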
When I run my app this way, all the timings become rock solid, much more like I would expect/hope to see. All my sub-module timings show exactly what I want, with no overruns or unexplained extra-long intervals. The only exception is the logging module: I see that it periodically gets stuck for 0.6-0.7 seconds (600-700 ms). So even though the overall timing of the system is much, much better, the worst case has actually become worse. It is also very clear this happens exclusively in the logging section when writing to a relatively slow SD card. All the other random/unexplained sub-module delays have gone away.
Writing the log file in a separate process
Most people would simply push their logging work out into a separate thread, but I have a philosophical aversion to threaded application architectures as part of my every day program design.
Conveniently, a year ago I had set up my code to optionally spew log messages out a UDP/socket port, and then wrote a 10 line python script to suck in those UDP packets and write the data to a file. (I have been thinking about the occasional performance hit I take during logging for quite some time!)
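My actual 10 line script isn’t shown here, but the idea is simple enough to sketch. The names are illustrative, and a BytesIO stands in for the real log file so the loopback smoke test below is self-contained:

```python
import io
import socket
import threading

def open_log_socket(host="127.0.0.1", port=0):
    """Bind the UDP log sink; port 0 lets the OS pick a free port."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    return sock

def drain_to_file(sock, f, max_packets):
    """Append each received datagram to an already-open log file.

    This process absorbs any slow SD card writes so the real-time
    autopilot loop never blocks on file IO.
    """
    for _ in range(max_packets):
        data, _addr = sock.recvfrom(65536)
        f.write(data)
        f.flush()

# loopback smoke test: pretend the autopilot sends one log record
rx = open_log_socket()
log = io.BytesIO()   # stands in for the real log file on the SD card
t = threading.Thread(target=drain_to_file, args=(rx, log, 1))
t.start()
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"imu,12345", rx.getsockname())
t.join()
tx.close()
rx.close()
```

The real receiver would loop forever and write to an actual file; the key point is that any blocking happens in this process, not in the flight-critical one.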
Next I activated this remote logging feature, and tested my main autopilot app with the same chrt invocation. I start up the logging script with the normal CFS scheduler but at a slightly lower than default priority using the “nice” command.
Now, without ever needing to write to a log file, my autopilot app is pegged at exactly 100.0 frames per second. The average main loop time is 5.56 ms, and the average wait for the sync packet from the sensor head (FMU) is 4.43 ms. The worst case main loop time interval is 9.79 ms, still just inside my time budget.
Once in a while the kernel still figures out something to do that interrupts my high priority process, but the overall performance and interval timing has now met my original goal. This is really exciting (at least for me!) The separate log writing process seems to be keeping up while running in the slack time. The kernel worker threads that handle uart communication and file IO seem to be happy enough.
So all in all, chrt -f 99 (SCHED_FIFO @ highest real-time priority) seems to be a really good thing. When I push the log writing work out to a separate process, that process can eat the blocking/wait time of a slow SD card. Overall, app timing really tightens up and I’m able to meet my real-time process control goals without ever missing a frame.
I can’t end this post without mentioning that AuraUAS is heavily invested in running python code inside the core flight controller main loop. The flight controller strategically mixes C++ and python code modules into a single hybrid application.
I have intentionally chosen to use some non-traditional approaches to make my code simpler, easier to read, and more flexible and extensible. This project has evolved with education and research as one of the core priorities. Even with a significant amount of python code executing every frame, I am still able to achieve solid 100 hz performance with consistent frame intervals.
My overall goal with this project is to combine: the power of linux, the ease and simplicity of python, and inexpensive hardware to create an open-source UAV autopilot that performs to the highest standards of accuracy and timing. The linux ‘chrt’ command is a huge step forward to achieve precise timing intervals and never missing a frame deadline.
Feb 11, 2018 update: After some amount of tearing my hair out and some help from the beaglebone IRC community, I have the pocketbeagle UART2 configured and working to my external connector. Telemetry is up and running! I also plugged in an SBUS receiver, verified it is getting power correctly, bound it to my transmitter, and then verified the system is seeing my pilot inputs correctly. Next up: verifying all 8 pwm output channels.
Feb 10, 2018 update: Back to my v2.0 board build. I soldered on the gps connector and am now getting gps data into the teensy and relayed to the pocketbeagle. Now my autopilot code has gps + imu and computes its attitude @ 100hz.
I haven’t done much low level circuitry in my life, so the next thing was to solder on two resistors to complete my voltage divider circuit. This lets the board sense its own avionics voltage (output from the onboard regulator.) This is a “nice to know” item. It’s a good preflight check item, and a nice thing to make sure stays in the green during a flight. The result is 5.0 volts spot on, exactly what it should be, cool!
Version 2.2 of this board will also sense the external voltage in the same way. This would typically be your main battery voltage. It is another good thing to check in your preflight to make sure you have a fully charged and healthy battery before launch. During flight it is one piece of information that can help estimate remaining battery.
Feb 8, 2018 update: I have completed v2.2 of my board design using KiCad. I really like the kicad workflow and the results. I uploaded my kicad file to Oshpark and it can generate all the layers for production automatically. I don’t need to handle gerber or drill files unless I want to do it that way. Here is the Oshpark rendering of my board! From other oshpark boards I’ve seen, this is exactly what the real board will look like.
My plan is to finish building up the v2.0 board I have on hand. (Keep moving downwards to see a picture and movie of the v2.0 board.) It is very similar but designed with ExpressPCB and ordered from them. I found a few small issues with the v2.0 board and that led in part to learning kicad and testing out this new path.
Feb 7, 2018 update: I put a momentary pause on board assembly because I discovered I was missing a couple bits. I placed an order with Mouser and those items should arrive shortly. In the meantime I started experimenting with kicad (a fuller featured and open-source pcb design tool.) My long term trajectory has always been away from proprietary, locked-in tools like express pcb.
Here is my schematic redrawn in kicad. Using named wire connections, the schematic really cleans up and becomes easier to decode:
I have also begun the process of arranging the footprints and running the traces. I still have more to learn about the kicad tools and there is much work left to finish with the pcb layout editor, but here is what I have so far:
The benefit of using a tool like kicad is that it outputs standard gerber files which can be submitted to just about any board house in the world. This also means potentially much lower costs to do a board run. It also opens the door for me to experiment with some simple reflow techniques. In addition, there are pick and place shops I could potentially take advantage of in the future. That would potentially allow me to select smaller surface mount components.
Jan 31, 2018 update: It’s alive! I installed the 4 major subcomponents on the board this evening. Everything powers up, the software on the teensy and pocketbeagle runs, everything that is connected is talking to each other correctly. So far so good! Next up is all the external connectors and devices.
Jan 30, 2018 update: First board run is in! Here is a quick video that shows how it will go together:
Jan 14, 2018 update: I have replaced the board layout picture with a new version that reflects significant repositioning of the components, which simplifies the traces and gives better access to the usb ports and microsd cards in the final design.
There is something relaxing about nudging traces and components around on a pcb layout. Last night I spent some more time working on a new board design which I present here:
If you can’t guess from the schematic, this is a fixed wing autopilot built from common inexpensive components. The whole board will cost about $100 to assemble. I figure $20 for the teensy, $20 for the pocketbeagle (on sale?), $15 for a nice voltage regulator, $15 for the IMU breakout, $25 for the board itself. Add a few $$$ for connectors, cables, and other odds and ends and that puts the project right around $100.
The layout is starting to come together. It still requires some more tweaking. I’d like to label the connectors better, thicken the traces that carry main power, and I’m sure I’ll find some things to shift around and traces to reroute before I try to build up a board.
I am doing the design work in ExpressPCB, mainly because I know how to use this tool, and in the small quantities I would need, board prices are not a big factor. The board itself will cost about $25/ea when ordering 3 (that includes shipping.)
This is actually an evolution of an autopilot design that has been kicking around now for more than 10 years. The basic components have been improved over the years, but the overall architecture has remained stable. The teensy firmware and beaglebone (linux) autopilot software are complete and flying — as much as any software is ever complete. This board design will advance the hardware aspects of the AuraUAS project and make it possible to build up new systems.
This new board will have a similar footprint to a full size beaglebone, but will have everything self contained in a single board. Previously the system included a full size beaglebone, a cape, plus an APM2 as a 3rd layer. All this now collapses into a single board and drops at least two external cables.
A few additional parts are needed to complete the system: a ublox 8 gps, a pair of radio modems, and an attopilot volt/amp sensor. The entire avionics package should cost between $200-250 depending on component and supplier choices.
Source code for the teensy firmware, source code for the beaglebone AP, and hardware design files are all available at the AuraUAS github page: https://github.com/AuraUAS
For this episode I present a plot of optimized camera locations. I have a set of 840 images. Each image is taken from a specific location and orientation in space. Let us call that a camera pose. I can also find features in the images and match them up between pairs of images. The presumption is that all the features and all the poses together create a giant puzzle with only one correct solution. When all the features are moved to their correct 3d location, when all the cameras are in their correct poses, the errors in the system go to zero and we have a correct stitch.
However, look at the plot of camera pose locations. You can see a distinct radial pattern emerges. The poses at the fringes of the set have much different elevation (color) compared to the poses in the middle.
I have set up my optimizer to find the best fit for 3d feature location, camera pose (location and orientation) as well as solving for the focal length of the lens and the distortion parameters. (Sometimes this is referred to as ‘sparse bundle adjustment’ in the literature.)
Optimizers are great at minimizing errors, but they often do some strange things. In this case the optimizer apparently came up with wrong lens distortion parameters but then moved all the poses around the fringe of the set to compensate and keep the error metric minimized.
How can I fix this? My first try will return to a set of precomputed camera calibration and lens distortion parameters (based on taking lots of pictures of a checkerboard pattern.) I will rerun the optimization on just the 3d features and camera poses and see how that affects the final optimized camera locations. I can also set bounds on any of the parameters depending on how certain I am of the correct locations (which I’m not.)
Fun side note: the code that estimates camera calibration and lens distortion parameters is itself an optimizer. As such, it can distribute the error into different buckets and come up with a range of solutions depending on the input image set.
Jan 14, 2018 update: I have taken several steps forward in understanding the issues here.
In the original scatter plot, I was plotting something called tvec. Anyone who has done significant camera pose work with opencv will recognize rvec and tvec. But “tvec” is not the camera position itself: it has to be transformed back through the rvec rotation to produce the 3d location of the camera pose, so plotting tvec directly was not useful. I have done the extra work to derive the 3d point location and that makes a huge difference.
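For reference, the camera center works out to C = -Rᵀ·tvec, where R is the rotation matrix built from rvec. A small numpy sketch, implementing Rodrigues’ formula directly to avoid the cv2 dependency (cv2.Rodrigues performs the same conversion):

```python
import numpy as np

def rodrigues(rvec):
    """Rotation matrix from an axis-angle vector (Rodrigues' formula)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = np.asarray(rvec, dtype=float) / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def camera_position(rvec, tvec):
    """3d camera center from opencv-style rvec/tvec: C = -R^T t."""
    R = rodrigues(rvec)
    return -R.T @ np.asarray(tvec, dtype=float)

# with zero rotation the camera simply sits at -tvec
pos = camera_position([0.0, 0.0, 0.0], [1.0, 2.0, 3.0])
```

Plotting camera_position(rvec, tvec) for each pose, rather than tvec itself, gives the scatter plot that actually shows where the cameras are.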
After plotting actual 3d camera pose location it became clear that globally optimizing the camera focal length and distortion parameters does seem to be far more productive and correct than attempting to optimize these parameters for each individual camera pose separately.
Giving the optimizer too much leeway in picking camera calibration and distortion parameters seems to lead to bad results. The closer I can set these at the start, and the tighter I can constrain them during the optimization, the better the final results.
A new issue is emerging. During the optimization, the entire system seems to be able to warp or tip. One corner of the plot seems lower and more compressed. The other side seems higher and more spread out. Here are some ideas for dealing with this:
I could apply a global affine transform to refit the cameras as best as possible to their original locations, however I would need to reproject and triangulate all the feature points to come up with their new 3d locations.
I could apply some sort of constraint to the camera locations. For example I could pretend I know location to +/- 10 meters and add that as a constraint to the camera pose location during the optimization. But do I know the relative positions this accurately?
Here is my latest plot of camera locations:
Jan 15, 2018 update: Here is a quick youtube video showing the optimizer in action. Each frame shows the result of an optimizer step. Altitude is encoded as color. The result isn’t perfect as you can tell from the various artifacts, but this is all a work in progress (and all open-source, built on top of python, numpy, scipy, and opencv.)
Update: 29 November, 2017: The work described below has been connected up to the on board autopilot and tested in simulation. Today I am planning to go out and test with an actual survey aircraft in flight. I can draw and save any number of areas together as a single project (and create and save any number of projects.) Once in flight, I can call up a project, select an area, and send it up to the aircraft. The aircraft itself will generate an optimized route based on planned survey altitude, wind direction, camera field of view, and desired picture overlap. The result (with zero wind) looks like the following picture. 5 areas have been sketched, one area has been submitted to the aircraft to survey, the aircraft has computed its route, the route is trickled back to the ground station and drawn on the map.
Today I am working on auto-generating routes that cover arbitrary (closed, non-self intersecting) polygon areas. The operator is able to draw a collection of polygons on the ground station, save them by project name, and then during flight call up the project/area they wish to survey, send (just the area perimeter) to the aircraft, and it will generate the coverage route automatically on the fly.
The main benefit is that the ground station doesn’t need to upload a 1000 waypoint route, only the area perimeter. The aircraft will already know the camera field of view and mounting orientation. It will know target altitude, wind direction and speed. The operator can include additional parameters like endlap and sidelap percentages.
The end goal is a smart, streamlined, easy to use (fixed wing) survey and mapping system.
There are additional issues to consider such as aircraft turn radius, turn strategies (dog-bone turns versus naive turns), and possibly interleaving transects (a bit like a zamboni covers a hockey rink.)
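The core of the route generation is simple geometry: the camera footprint and the desired sidelap determine the spacing between adjacent survey lines. A sketch under those assumptions (illustrative functions, not the actual AuraUAS code; a real planner would clip lines to the polygon and order the turns):

```python
import math

def swath_spacing(alt_m, hfov_deg, sidelap_pct):
    """Distance between adjacent survey lines for a nadir camera.

    spacing = footprint_width * (1 - sidelap); assumes level flight
    with the camera's wide axis across the flight track.
    """
    width = 2.0 * alt_m * math.tan(math.radians(hfov_deg) / 2.0)
    return width * (1.0 - sidelap_pct / 100.0)

def lawnmower_lines(x_min, x_max, spacing):
    """East coordinates of evenly spaced north-south survey lines
    covering [x_min, x_max], never spaced wider than 'spacing'."""
    n = max(1, math.ceil((x_max - x_min) / spacing)) + 1
    return [x_min + i * (x_max - x_min) / (n - 1) for i in range(n)]

# e.g. 60 m AGL, 67 deg hfov, 30% sidelap, over a 100 m wide box
lines = lawnmower_lines(0.0, 100.0, swath_spacing(60.0, 67.0, 30.0))
```

Since only the perimeter and these few parameters need to go up to the aircraft, the full waypoint list never has to cross the telemetry link.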
AuraUAS traces its roots back to a simple open-source autopilot developed by Jung Soon Jang to run on the XBOW MNAV/Stargate hardware back in the 2005-2006 time frame. I worked at the University of Minnesota at that time and we modified the code to run on an original 400mhz gumstix linux computer which talked to the MNAV sensor head via a serial/uart connection.
From the mid-2000’s and through the 2010’s I have been advancing this code in support of a variety of fixed-wing UAS projects. Initially I called the system “MicroGear” or “ugear” which was a nod of the head to my other long term open-source project: FlightGear. Along the way I aligned myself with a small Alaska-based aerospace company called “Airborne Technologies” or ATI for short. We branched a version of the code specifically for projects developed under NOAA funding as well as for various internal R&D. However, throughout the development the code always stayed true to its open-source heritage.
In the summer of 2015 I took a full time position in the UAS lab at the Aerospace Engineering Department of the University of Minnesota. Here I have been involved in a variety of UAS-related research projects and have assumed the role of chief test pilot for the lab. AuraUAS has been central to several projects at the UAS lab, including a spin testing project, a PhD project to develop single surface fault detection and a single surface flight controller on a flying wing, and several aerial survey projects. I continue to develop AuraUAS in support of ongoing research projects.
What makes AuraUAS different? What makes AuraUAS interesting? Why is it important to me?
Big processor / little processor architecture
From the start AuraUAS has been designed with the expectation of a small embedded (arduino-style) processor to handle all the sensor inputs as well as the actuator outputs. A “big processor” (e.g. a raspberry pi, beaglebone, gumstix, edison, etc.) is used for all the higher level functionality such as the EKF (attitude determination), flight control, mission management, communication, and logging. The advantage is that the system can be built from two smaller and simpler programs. The “little” processor handles all the hard real time tasks. This frees up the “big” processor to run a standard linux distribution along with all its available libraries and fanciness. So AuraUAS is built around two simpler programs, versus the one really complicated program architecture that most other autopilot systems use.
Single thread architecture
Because of the big/little processor architecture, the big processor doesn’t need to do hard real time tasks, and thus can be written using a single-threaded architecture. This leads to code that is far simpler, and far easier to understand, manage, and maintain. Anyone who has tried to understand and modify someone else’s threaded code might have a small inkling of why this could be important. How many large applications suffer through a variety of obscure, rare, and nearly impossible to find bugs that maybe trace to the threading system, but no one knows for sure?
The “big” processor in the AuraUAS system runs linux, so we can easily incorporate python in the mission critical main loop of the primary flight computer. This has the advantage of further simplifying coding tasks and shortening the edit/compile/test development loop because there is often no need to compile and reflash code changes between test runs. You can even do development work remotely, right on the flight computer. For those that are skeptical of python in the critical main loop, I have successfully flown this system for 2 full flight seasons … all of 2016, and all of 2017. The main flight computer is hitting its 100hz performance target and the memory usage is stable. Speaking for myself, my python code is almost always less buggy than my C code.
In my experience, when porting C/C++ code to python, the result is a 50, 60, even 70% reduction in the number of lines of code. I believe that fewer lines of code == more readable code on average. More readable code means fewer bugs and bugs are often found more quickly.
AuraUAS is a hybrid C++/Python application. Some modules are written in C++ and some modules are written in Python. My choice of language for a particular module tends to center around performance versus logic. Our 15-state EKF (also developed in house at the U of MN) remains C++ code for performance reasons. The mission manager and individual tasks are all written in python. Python scripts greatly accelerate coding of higher level logic tasks. These typically are not performance critical and it’s a great fit.
Simplicity and robustness
When you hand over control of an airplane to an autopilot (even a small model airplane) you are putting an immense amount of trust in that hardware, firmware and software. Software bugs can crash your aircraft. It’s important for an autopilot to be immensely robust, for the code base to be stable and change slowly, for new changes to be extensively tested. The more complicated a system becomes, the harder it is to ensure robust, bug free operations.
Throughout the development of the AuraUAS project, the emphasis has been on keeping the code and structure simple and robust. The overall goal is to do a few core simple things very, very well. There are other autopilot systems that have every feature that anyone has ever suggested or wanted to work on; they support every sensor, run on every possible embedded computer board, and can fly every possible rotor and fixed wing airframe configuration. I think it’s great that px4 and ardupilot cover this ground and provide a big tent that welcomes everyone. But I think they do pay a price in terms of code complexity, which in turn has implications for introducing, collecting, and hiding bugs.
The first commit in the AuraUAS repository traces back to about 2006. So here’s to nearly 12 years of successful development!
Location: South Central Ag Lab (near Clay Center, NE)
Aircraft: Skywalker 1900 with AuraUAS autopilot system.
Wing camera with augmented reality elements added (flight track, astronomy, horizon.)
Wind: (from) 160 @ 16 kts (18.5 mph) and very turbulent.
Temperature: 65 F (18 C)
Target Cruise: 25 kts (~29 mph)
The Full Video
Here are a couple of comments about the flight.
The conditions were very windy and turbulent, but it was a long drive to the location so we decided the risk of airframe damage was acceptable if we could get good data.
The wing view was chosen so I could observe one aileron control surface in flight. You might notice that the aileron ‘trim’ location puts the right aileron significantly up from the center point. A 1.3 pound camera is hanging off the right wing, and its weight has twisted the wing a bit, putting the aircraft significantly out of roll trim. The autopilot automatically compensates for the slightly warped wing by finding the proper aileron position to maintain level flight.
Throughout the flight you can see the significant crab angle, short turns up wind, and really wide/long turns down wind.
Because of the winds, the field layout, obstacles, etc., I was forced to spot the airplane landing in a very, very tight area. I mostly managed to do that, and the result was a safe landing with no damage.
Despite the high winds and turbulence, the aircraft and autopilot handled itself remarkably well. The HUD overlay uses simulated RC pilot sticks to show the autopilot control commands.
The augmented reality graphics are added after the flight in post processing using a combination of python and opencv. The code is open-source and has some support for px4 data logs if anyone is interested in augmenting their own flight videos. I find it a very valuable tool for reviewing the performance of the EKF, the autopilot gains, and the aircraft itself. Even the smallest EKF inaccuracies or tuning inefficiencies can show up clearly in the video.
I find it fascinating to just watch the video and watch how the autopilot is continually working to keep the aircraft on condition. If you would like to see how the Skywalker + AuraUAS autopilot perform in smoother air, take a look at Flight #71 at the end of this post: http://gallinazo.flightgear.org/uas/drosophila-nator-prototype/
From Wikipedia, on spins: In aviation’s early days, spins were poorly understood and often fatal. Proper recovery procedures were unknown, and a pilot’s instinct to pull back on the stick served only to make a spin worse. Because of this, the spin earned a reputation as an unpredictable danger that might snatch an aviator’s life at any time, and against which there was no defense.
Even in today’s modern world, spins are disorienting and still can be fatal. This project aims to study spins with a highly instrumented aircraft in order to better understand them, model them, and ultimately create cockpit instrumentation to help a pilot safely recover from a spin.
The test aircraft is an Ultrastick 120 operated by the University of Minnesota UAS Research Labs. It is outfitted with two NASA designed air data booms, one at each wing tip along with a traditional IMU, GPS, pitot probe, and control surface position sensors. The pilot is safely on the ground throughout the entire test flight (as well as before and after.)
These are in-flight videos of two test flights. Flight #14 is the ‘nominal’ configuration. In flight #15 the CG is moved to the aft limit and the plane is repeatedly put into an aggressive spin.
In both videos, the onboard attitude estimate and other sensors are drawn as a conformal overlay. The pilot stick inputs are shown in the lower left and right corners of the display. This aircraft is equipped with alpha/beta probes, so the data from those sensors is used to draw a ‘flight path marker’ that shows angle of attack and side slip. Airspeed units are m/s; altitude units are meters. The last 120 seconds of the flight track are also drawn into the video to help with visualizing the position of the aircraft in the air.
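For a conformal overlay, the flight path marker is offset from the boresight by the alpha and beta angles, projected through the camera model. Here is a minimal sketch assuming a simple pinhole camera (the function name, field of view, and resolution are my assumptions; a real overlay would use the camera’s calibrated intrinsics):

```python
import math

def flight_path_marker_px(alpha_deg, beta_deg,
                          hfov_deg=90.0, width=1280, height=720):
    """Pixel position of the flight path marker relative to image center,
    for a pinhole camera aligned with the aircraft body axis (illustrative)."""
    # focal length in pixels from the horizontal field of view
    f_px = (width / 2) / math.tan(math.radians(hfov_deg) / 2)
    dx = f_px * math.tan(math.radians(beta_deg))   # sideslip: left/right
    dy = f_px * math.tan(math.radians(alpha_deg))  # AoA: marker sits below
    return (width / 2 + dx, height / 2 + dy)

center = flight_path_marker_px(0.0, 0.0)
nose_high = flight_path_marker_px(5.0, 0.0)
```

With zero alpha and beta the marker sits on the boresight; positive angle of attack pushes it below center, which is exactly what makes the marker useful for spotting the aircraft’s actual flight path versus where the nose points.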
These flight tests are conducted by the University of Minnesota UAS Research Labs.
This is Skywalker Flight #74, flown on Sept. 7, 2017. It ended up being a 38.5 minute flight, scheduled to land right at sunset. The purpose of the flight was to carry insect traps at 300′ AGL and collect samples of what might be flying up at that altitude.
What I like about this flight is that the stable sunset air leads to very consistent autopilot performance. The circle hold is near perfect. The altitude hold is +/- 2 meters (usually much better), despite continually varying bank angles which are required to hold a perfect circle shape in 10 kt winds.
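The varying bank angle falls out of basic turn mechanics: in a coordinated turn the radius depends on groundspeed, and in wind the groundspeed swings as the aircraft goes around the circle. As a rough sketch (the 12.9 m/s ≈ 25 kt airspeed and 100 m radius are my assumed numbers, not logged values from this flight):

```python
import math

G = 9.81  # m/s^2

def bank_for_circle(groundspeed_mps, radius_m):
    """Coordinated-turn bank angle (degrees) to hold a given turn radius:
    tan(phi) = v^2 / (g * r)."""
    return math.degrees(math.atan(groundspeed_mps ** 2 / (G * radius_m)))

# With ~12.9 m/s airspeed and a 5.1 m/s (10 kt) wind, groundspeed around
# the circle swings between roughly 7.8 and 18 m/s:
for gs in (7.8, 12.9, 18.0):
    print(round(bank_for_circle(gs, 100.0), 1))
```

So on this assumed geometry the required bank roughly swings between about 3.5° crossing upwind and 18° crossing downwind, every lap, which is why the autopilot has to work continuously to keep the circle round.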
The landing at the end is 100% autonomous and I trusted it all the way down, even as it dropped in between a tree-line and a row of turkey barns. The whole flight is presented here for completeness, but feel free to skip to part 3 if you are interested in seeing the autonomous landing.
As an added bonus, stick around after I pick up the aircraft as I walk it back. I pan the aircraft around the sky and you can clearly see the perfect circle hold as well as the landing approach. I use augmented reality techniques to overlay the flight track history right into the video; I think it’s kind of a “cool tool” for analyzing your autopilot and EKF performance.