New Hardware Design Project

Feb 11, 2018 update: After some amount of tearing my hair out and some help from the beaglebone IRC community, I have the pocketbeagle UART2 configured and wired out to my external connector.  Telemetry is up and running!  I also plugged in an SBUS receiver, verified it is getting power correctly, bound it to my transmitter, and then verified the system is seeing my pilot inputs correctly.  Next up: verifying all 8 pwm output channels.
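For anyone following along, the quickest sanity check once the UART is muxed correctly is simply to open the port and watch bytes arrive.  A minimal python sketch (assuming pyserial is installed; the device name and baud rate are assumptions, and UART2 may show up as /dev/ttyO2 or /dev/ttyS2 depending on the image):

```python
import serial  # pyserial

# Device name and baud rate below are assumptions for this sketch.
port = serial.Serial("/dev/ttyO2", baudrate=115200, timeout=1.0)
while True:
    data = port.read(64)       # read up to 64 bytes, or time out
    if data:
        print(len(data), data.hex())
```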

Feb 10, 2018 update: Back to my v2.0 board build.  I soldered on the gps connector and am now getting gps data into the teensy and relayed to the pocketbeagle.  Now my autopilot code has gps + imu and computes its attitude at 100hz.

I haven’t done much low level circuitry in my life, so the next thing was to solder on two resistors to complete my voltage divider circuit.  This lets the board sense its own avionics voltage (the output from the onboard regulator.)  This is a “nice to know” item.  It’s a good preflight check item, and a nice thing to make sure stays in the green during a flight.  The result is 5.0 volts spot on, exactly what it should be, cool!
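The math behind the divider fits in a few lines of python.  The resistor values below are placeholders for illustration, not the actual board values:

```python
# Scale the divided voltage seen at the ADC pin back up to the rail voltage.
# R_TOP sits between the rail and the ADC pin, R_BOTTOM between the pin and ground.
R_TOP = 10000.0     # ohms (assumed)
R_BOTTOM = 10000.0  # ohms (assumed)

def rail_voltage(adc_volts):
    return adc_volts * (R_TOP + R_BOTTOM) / R_BOTTOM

print(rail_voltage(2.5))   # -> 5.0 with this 2:1 divider
```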

Version 2.2 of this board will also sense the external voltage in the same way.  This would typically be your main battery voltage.  It is another good thing to check in your preflight to make sure you have a fully charged and healthy battery before launch.  During flight it is one piece of information that can help estimate remaining battery.

Feb 8, 2018 update: I have completed v2.2 of my board design using KiCad.  I really like the kicad workflow and the results.  I uploaded my kicad file to Oshpark and it can generate all the layers for production automatically.  I don’t need to handle gerber or drill files unless I want to do it that way.  Here is the Oshpark rendering of my board!  From other oshpark boards I’ve seen, this is exactly what the real board will look like.

The aura hardware designs are ‘open-source’.  All the kicad design files can be checked out here: https://github.com/AuraUAS/aura-hardware

My plan is to finish building up the v2.0 board I have on hand.  (Keep moving downwards to see a picture and movie of the v2.0 board.)  It is very similar but designed with ExpressPCB and ordered from them.  I found a few small issues with the v2.0 board and that led in part to learning kicad and testing out this new path.

Feb 7, 2018 update: I put a momentary pause on board assembly because I discovered I was missing a couple of bits.  I placed an order with mouser and those items should arrive shortly.  In the meantime I started experimenting with kicad (a fuller-featured and open-source pcb design tool.)  My long term trajectory has always been away from proprietary, locked-in tools like ExpressPCB.

Here is my schematic redrawn in kicad.  Using named wire connections, the schematic really cleans up and becomes easier to decode:

I have also begun the process of arranging the footprints and running the traces.  I still have more to learn about the kicad tools and there is much work left to finish with the pcb layout editor, but here is what I have so far:

The benefit of using a tool like kicad is that it outputs standard gerber files which can then be submitted to just about any board house in the world.  This also means potentially much lower costs to do a board run.  It also opens the door for me to experiment with some simple reflow techniques.  In addition, there are pick and place shops I could potentially take advantage of in the future, which would allow me to select smaller surface mount components.

Jan 31, 2018 update: It’s alive!  I installed the 4 major subcomponents on the board this evening.  Everything powers up, the software on the teensy and pocketbeagle runs, everything that is connected is talking to each other correctly. So far so good!  Next up is all the external connectors and devices.

Jan 30, 2018 update: First board run is in!  Here is a quick video that shows how it will go together:

Jan 14, 2018 update: I have replaced the board layout picture with a new version that reflects significant repositioning of the components, which simplifies the traces and gives better access to the usb ports and microsd cards in the final design.

There is something relaxing about nudging traces and components around on a pcb layout.  Last night I spent some more time working on a new board design which I present here:

If you can’t guess from the schematic, this is a fixed wing autopilot built from common inexpensive components.  The whole board will cost about $100 to assemble.  I figure $20 for the teensy, $20 for the pocketbeagle (on sale?), $15 for a nice voltage regulator, $15 for the IMU breakout, $25 for the board itself.  Add a few $$$ for connectors, cables, and other odds and ends and that puts the project right around $100.

The layout is starting to come together.  It still requires some more tweaking.  I’d like to label the connectors better, thicken the traces that carry main power, and I’m sure I’ll find some things to shift around and traces to reroute before I try to build up a board.

I am doing the design work in ExpressPCB, mainly because I know how to use this tool, and in the small quantities I would need, board prices are not a big factor.  The board itself will cost about $25/ea when ordering 3 (that includes shipping.)

Version 2.0

This is actually an evolution of an autopilot design that has been kicking around now for more than 10 years.  The basic components have been improved over the years, but the overall architecture has remained stable.  The teensy firmware and beaglebone (linux) autopilot software are complete and flying — as much as any software is ever complete.  This board design will advance the hardware aspects of the AuraUAS project and make it possible to build up new systems.

This new board will have a similar footprint to a full size beaglebone, but will have everything self contained in a single board.  Previously the system included a full size beaglebone, a cape, plus an APM2 as a 3rd layer.  All this now collapses into a single board and drops at least two external cables.

A few additional parts are needed to complete the system: a ublox 8 gps, a pair of radio modems, and an attopilot volt/amp sensor.  The entire avionics package should cost between $200 and $250 depending on component and supplier choices.

Source code for the teensy firmware, source code for the beaglebone AP, and hardware design files are all available at the AuraUAS github page: https://github.com/AuraUAS

Adventures in Aerial Image Stitching Episode #7

Lens Distortion Issues (or Fun with Optimizers)

For this episode I present a plot of optimized camera locations.  I have a set of 840 images.  Each image is taken from a specific location and orientation in space.  Let us call that a camera pose.  I can also find features in the images and match them up between pairs of images.  The presumption is that all the features and all the poses together create a giant puzzle with only one correct solution.  When all the features are moved to their correct 3d locations and all the cameras are in their correct poses, the errors in the system go to zero and we have a correct stitch.

However, look at the plot of camera pose locations.  You can see a distinct radial pattern emerges.  The poses at the fringes of the set have much different elevation (color) compared to the poses in the middle.

I have set up my optimizer to find the best fit for 3d feature location, camera pose (location and orientation) as well as solving for the focal length of the lens and the distortion parameters.  (Sometimes this is referred to as ‘sparse bundle adjustment’ in the literature.)
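For anyone curious what that looks like in practice, here is a much-simplified (dense, not sparse) sketch of the idea using scipy and opencv.  The parameter packing and names are illustrative only, not my actual optimizer code:

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

def residuals(params, n_cams, n_pts, cam_idx, pt_idx, observed_uv, K, dist):
    # params packs 6 values per camera (rvec + tvec) followed by 3 per 3d point
    cams = params[:n_cams * 6].reshape((n_cams, 6))
    pts3d = params[n_cams * 6:].reshape((n_pts, 3))
    err = []
    for ci, pi, uv in zip(cam_idx, pt_idx, observed_uv):
        rvec = cams[ci, :3].reshape(3, 1)
        tvec = cams[ci, 3:].reshape(3, 1)
        proj, _ = cv2.projectPoints(pts3d[pi].reshape(1, 3), rvec, tvec, K, dist)
        err.append(proj.ravel() - uv)   # reprojection error for this observation
    return np.concatenate(err)

# result = least_squares(residuals, x0, args=(n_cams, n_pts, cam_idx, pt_idx,
#                                             observed_uv, K, dist), verbose=2)
```

In my setup the focal length and distortion parameters get appended to that parameter vector as well, which is exactly where the trouble described below can creep in.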

Optimizers are great at minimizing errors, but they often do some strange things.  In this case the optimizer apparently came up with wrong lens distortion parameters but then moved all the poses around the fringe of the set to compensate and keep the error metric minimized.

How can I fix this?  My first try will return to a set of precomputed camera calibration and lens distortion parameters (based on taking lots of pictures of a checkerboard pattern.)  I will rerun the optimization on just the 3d features and camera poses and see how that affects the final optimized camera locations.  I can also set bounds on any of the parameters depending on how certain I am of the correct locations (which I’m not.)

Fun side note: the code that estimates camera calibration and lens distortion parameters is itself an optimizer.  As such, it can distribute the error into different buckets and come up with a range of solutions depending on the input image set.

Jan 14, 2018 update: I have taken several steps forward understanding the issues here.

  1. In the original scatter plot, I was plotting something called tvec.  Anyone who has done significant camera pose work with opencv will recognize rvec and tvec.  “tvec” has to be rotated back through the rvec transform (and negated) to produce the 3d location of the camera, so plotting tvec itself was not useful.  I have done the extra work to derive the actual 3d camera locations and that makes a huge difference (see the short sketch after this list).
  2. After plotting actual 3d camera pose location it became clear that globally optimizing the camera focal length and distortion parameters does seem to be far more productive and correct than attempting to optimize these parameters for each individual camera pose separately.
  3. Giving the optimizer too much leeway in picking camera calibration and distortion parameters seems to lead to bad results.  The closer I can set these at the start, and the tighter I can constrain them during the optimization, the better the final results.
  4. A new issue is emerging.  During the optimization, the entire system seems to be able to warp or tip.  One corner of the plot seems lower and more compressed.  The other side seems higher and more spread out.  Here are some ideas for dealing with this:
    1. I could apply a global affine transform to refit the cameras as closely as possible to their original locations, however I would need to reproject and triangulate all the feature points to come up with their new 3d locations.
    2. I could apply some sort of constraint to the camera locations.  For example I could pretend I know location to +/- 10 meters and add that as a constraint to the camera pose location during the optimization.  But do I know the relative positions this accurately?
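Here is the tvec-to-camera-location conversion mentioned in item 1, as a small sketch.  Since rvec/tvec express the world-to-camera transform, the camera center in world coordinates works out to -R^T * tvec:

```python
import numpy as np
import cv2

def camera_position(rvec, tvec):
    # rvec/tvec map world coordinates into the camera frame,
    # so the camera center in world coordinates is -R^T * tvec
    R, _ = cv2.Rodrigues(np.asarray(rvec, dtype=float).reshape(3, 1))
    t = np.asarray(tvec, dtype=float).reshape(3, 1)
    return (-R.T @ t).ravel()

# with no rotation the camera center is simply -tvec
print(camera_position([0.0, 0.0, 0.0], [0.0, 0.0, 100.0]))  # [0. 0. -100.]
```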

Here is my latest plot of camera locations:

Jan 15, 2018 update:  Here is a quick youtube video showing the optimizer in action.  Each frame shows the result of an optimizer step.  Altitude is encoded as color.  The result isn’t perfect as you can tell from the various artifacts, but this is all a work in progress (and all open-source, built on top of python, numpy, scipy, and opencv.)

Adventures in Aerial Image Stitching Episode #6

Generating survey area coverage routes

Update: 29 November, 2017:  The work described below has been connected up to the on board autopilot and tested in simulation.  Today I am planning to go out and test with an actual survey aircraft in flight.  I can draw and save any number of areas together as a single project (and create and save any number of projects.)  Once in flight, I can call up a project, select an area, and send it up to the aircraft.  The aircraft itself will generate an optimized route based on planned survey altitude, wind direction, camera field of view, and desired picture overlap.  The result (with zero wind) looks like the following picture.  5 areas have been sketched, one area has been submitted to the aircraft to survey, the aircraft has computed its route, and the route is trickled back to the ground station and drawn on the map.

Original Post

Today I am working on auto-generating routes that cover arbitrary (closed, non-self intersecting) polygon areas.  The operator is able to draw a collection of polygons on the ground station, save them by project name, and then during flight call up the project/area they wish to survey, send (just the area perimeter) to the aircraft, and it will generate the coverage route automatically on the fly.

The main benefit is that the ground station doesn’t need to upload a 1000 waypoint route, only the area perimeter.  The aircraft will already know the camera field of view and mounting orientation.  It will know target altitude, wind direction and speed.  The operator can include additional parameters like endlap and sidelap percentages.
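The spacing between survey transects falls out of a little trigonometry: the camera footprint width on the ground comes from altitude and the lens field of view, and the requested sidelap shrinks the spacing from there.  A quick sketch (the numbers are just examples, not my defaults):

```python
import math

def transect_spacing(agl_m, hfov_deg, sidelap):
    # ground footprint width directly below the aircraft
    footprint = 2.0 * agl_m * math.tan(math.radians(hfov_deg) / 2.0)
    # overlap each neighboring swath by the requested sidelap fraction
    return footprint * (1.0 - sidelap)

print(transect_spacing(agl_m=100.0, hfov_deg=60.0, sidelap=0.3))  # ~80.8 m
```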

The end goal is a smart, streamlined, easy to use (fixed wing) survey and mapping system.

There are additional issues to consider such as aircraft turn radius, turn strategies (dog bone turns versus naive turns), and possibly interleaving transects (a bit like a zamboni covers a hockey rink.)

Celebrating the 4000th git commit!

Few people really know much about the AuraUAS autopilot system, but this post celebrates the 4000th git commit to the main code repository!

The entire AuraUAS system is hosted on github and can be browsed here:

https://github.com/AuraUAS

AuraUAS traces its roots back to a simple open-source autopilot developed by Jung Soon Jang to run on the XBOW MNAV/Stargate hardware back in the 2005-2006 time frame.  I worked at the University of Minnesota at that time and we modified the code to run on an original 400mhz gumstix linux computer which talked to the MNAV sensor head via a serial/uart connection.

From the mid-2000s through the 2010s I have been advancing this code in support of a variety of fixed-wing UAS projects.  Initially I called the system “MicroGear” or “ugear”, which was a nod of the head to my other long term open-source project: FlightGear.  Along the way I aligned myself with a small Alaska-based aerospace company called “Airborne Technologies” or ATI for short.  We branched a version of the code specifically for projects developed under NOAA funding as well as for various internal R&D.  However, throughout the development the code always stayed true to its open-source heritage.

In the summer of 2015 I took a full time position in the UAS lab at the Aerospace Engineering Department of the University of Minnesota.  Here I have been involved in a variety of UAS-related research projects and have assumed the role of chief test pilot for the lab.  AuraUAS has been central to several projects at the UAS lab, including a spin testing project, a phd project to develop single-surface fault detection and a single-surface flight controller on a flying wing, and several aerial survey projects.  I continue to develop AuraUAS in support of ongoing research projects.

Design choices

What makes AuraUAS different?  What makes AuraUAS interesting?  Why is it important to me?

Big processor / little processor architecture

From the start AuraUAS has been designed with the expectation of a small embedded (arduino-style) processor to handle all the sensor inputs as well as the actuator outputs.  A “big processor” (i.e. a raspberry pi, beaglebone, gumstix, edison, etc.) is used for all the higher level functionality such as the EKF (attitude determination), flight control, mission management, communication, and logging.  The advantage is that the system can be built from two smaller and simpler programs.  The “little” processor handles all the hard real time tasks.  This frees up the “big” processor to run a standard linux distribution along with all its available libraries and fanciness.  So AuraUAS is built around two simpler programs, versus the one really complicated program architecture that most other autopilot systems use.
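To give a flavor of the split, here is a sketch of what the big-processor side of the link could look like: a plain loop that drains framed sensor packets from the little processor over a UART.  To be clear, the framing, device name, and baud rate below are all hypothetical, not the actual AuraUAS wire format:

```python
import serial  # pyserial

START0, START1 = 0x93, 0xE0       # made-up start-of-frame bytes

def sync(port):
    # slide through the byte stream until the two start bytes line up
    last = b""
    while True:
        b_ = port.read(1)
        if last == bytes([START0]) and b_ == bytes([START1]):
            return
        last = b_

def read_packet(port):
    sync(port)
    pkt_id = port.read(1)[0]
    length = port.read(1)[0]
    payload = port.read(length)
    checksum = port.read(1)[0]
    if checksum == (pkt_id + length + sum(payload)) & 0xFF:
        return pkt_id, payload
    return None, None             # bad checksum, drop the frame

port = serial.Serial("/dev/ttyO4", 500000)   # device and baud are assumptions
while True:
    pkt_id, payload = read_packet(port)
    # ...decode payload, publish to the property tree, run the main loop...
```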

Single thread architecture

Because of the big/little processor architecture, the big processor doesn’t need to do hard real time tasks, and thus can be written using a single-threaded architecture.  This leads to code that is far simpler, and far easier to understand, manage, and maintain.  Anyone who has tried to understand and modify someone else’s threaded code might have a small inkling of why this could be important.  How many large applications suffer through a variety of obscure, rare, and nearly impossible to find bugs that maybe trace to the threading system, but no one knows for sure?

Python

The “big” processor in the AuraUAS system runs linux, so we can easily incorporate python in the mission critical main loop of the primary flight computer.  This has the advantage of further simplifying coding tasks and shortening the edit/compile/test development loop because there is often no need to compile and reflash code changes between test runs.  You can even do development work remotely, right on the flight computer.  For those that are skeptical of python in the critical main loop, I have successfully flown this system for 2 full flight seasons … all of 2016, and all of 2017.  The main flight computer is hitting its 100hz performance target and the memory usage is stable.  Speaking personally, my python code is almost always less buggy than my C code.
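Since the big processor carries no hard real time duties, the 100hz frame can be paced with nothing fancier than a sleep.  The skeleton below is a toy illustration of that idea, not the real AuraUAS main loop:

```python
import time

DT = 0.01   # 100hz frame period

def main():
    next_frame = time.time()
    while True:
        # read sensor packets, run the EKF, flight control, mission, logging ...
        next_frame += DT
        delay = next_frame - time.time()
        if delay > 0:
            time.sleep(delay)              # idle out the rest of the frame
        else:
            next_frame = time.time()       # fell behind; resync instead of spiraling

main()
```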

In my experience, when porting C/C++ code to python, the result is a 50, 60, even 70% reduction in the number of lines of code.  I believe that fewer lines of code == more readable code on average.  More readable code means fewer bugs and bugs are often found more quickly.

Python/C++ hybrid

AuraUAS is a hybrid C++/Python application.  Some modules are written in C++ and some modules are written in Python.  My choice of language for a particular module tends to center around performance versus logic.  Our 15-state EKF (also developed in house at the U of MN) remains C++ code for performance reasons.  The mission manager and individual tasks are all written in python.  Python scripts greatly accelerate coding of higher level logic tasks.  These typically are not performance critical and it’s a great fit.

What makes this hybrid language system possible is a central property tree structure that is shared between the C++ and Python modules within the same application.  Imagine something like an object in javascript or a dict() structure in python (or imagine a json structure, or even an xml structure.)  We build an in-memory tree structure that contains all the important shared data within the application, and then modules can read/write to the structure as they wish in a collaborative way.  This property tree fills much of the same role as “uorb” in the px4 software stack.  It glues all the various modules together and provides a structured way for them to communicate.
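As a toy illustration of the concept (this is a sketch of the idea only, not the actual AuraUAS property-tree API):

```python
class PropertyNode:
    def __init__(self):
        self._children = {}
        self._values = {}

    def node(self, path):
        """Return (creating if needed) the node at a '/'-separated path."""
        n = self
        for name in path.strip("/").split("/"):
            n = n._children.setdefault(name, PropertyNode())
        return n

    def setf(self, name, value):
        self._values[name] = float(value)

    def getf(self, name):
        return self._values.get(name, 0.0)

root = PropertyNode()
root.node("/filters/filter").setf("roll_deg", 2.3)     # e.g. the EKF writes...
print(root.node("/filters/filter").getf("roll_deg"))   # ...and flight control reads
```

A C++ module linked into the same process sees the same tree of nodes, which is what lets the two languages cooperate inside one application.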

Simplicity and robustness

When you hand over control of an airplane to an autopilot (even a small model airplane) you are putting an immense amount of trust in that hardware, firmware and software.  Software bugs can crash your aircraft.  It’s important for an autopilot to be immensely robust, for the code base to be stable and change slowly, for new changes to be extensively tested.  The more complicated a system becomes, the harder it is to ensure robust, bug free operations.

Throughout the development of the AuraUAS project, the emphasis has been on keeping the code and structure simple and robust.  The overall goal is to do a few core simple things very, very well.  There are other autopilot systems that have every feature that anyone has ever suggested or wanted to work on; they support every sensor, run on every possible embedded computer board, and can fly every possible rotor and fixed wing airframe configuration.  I think it’s great that px4 and ardupilot cover this ground and provide a big tent that welcomes everyone.  But I think they do pay a price in terms of code complexity, which in turn has implications for introducing, collecting, and hiding bugs.

The first commit in the AuraUAS repository traces back to about 2006.  So here’s to nearly 12 years of successful development!

 

Aerial Survey Flight (with Augmented Reality)

Basic Details

  • Date: October 11, 2017
  • Location: South Central Ag Lab (near Clay Center, NE)
  • Aircraft: Skywalker 1900 with AuraUAS autopilot system.
  • Wing camera with augmented reality elements added (flight track, astronomy, horizon.)
  • Wind: (from) 160 @ 16 kts (18.5 mph) and very turbulent.
  • Temperature: 65 F (18 C)
  • Target Cruise: 25 kts (~29 mph)

The Full Video

Notes

Here are a couple of comments about the flight.

The conditions were very windy and turbulent, but it was a long drive to the location so we decided the risk of airframe damage was acceptable if we could get good data.

The wing view was chosen so I could observe one aileron control surface in flight.  You might notice that the aileron ‘trim’ location puts the right aileron up significantly from the center point.  A 1.3 pound camera is hanging off the right wing, and the weight of the camera has twisted the wing a bit and put the aircraft significantly out of roll trim.  The autopilot automatically compensates for the slightly warped wing by finding the proper aileron position to maintain level flight.

Throughout the flight you can see the significant crab angle, short turns up wind, and really wide/long turns down wind.

Because of the winds, the field layout, obstacles, etc. I was forced to spot the airplane landing in a very very tight area.  I mostly managed to do that and the result was a safe landing with no damage.

Despite the high winds and turbulence, the aircraft and autopilot handled itself remarkably well.  The HUD overlay uses simulated RC pilot sticks to show the autopilot control commands.

The augmented reality graphics are added after the flight in post-processing using a combination of python and opencv.  The code is open-source and has some support for px4 data logs if anyone is interested in augmenting their own flight videos.  I find it a very valuable tool for reviewing the performance of the EKF, the autopilot gains, and the aircraft itself.  Even the smallest EKF inaccuracies or tuning inefficiencies can show up clearly in the video.
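At its heart the overlay is just the standard opencv projection step: take a 3d point (a flight track position, a horizon point, a star), transform it by the camera pose for that video frame, and draw it.  A minimal sketch with placeholder camera parameters and pose (not my actual calibration or rendering code):

```python
import numpy as np
import cv2

K = np.array([[1000.0,    0.0, 960.0],     # placeholder camera matrix
              [   0.0, 1000.0, 540.0],
              [   0.0,    0.0,   1.0]])
dist = np.zeros(5)                          # assume no lens distortion here
rvec = np.zeros((3, 1))                     # camera pose for this frame
tvec = np.zeros((3, 1))                     #   (would come from the EKF solution)

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)   # stand-in video frame
track_point = np.array([[10.0, 2.0, 50.0]])         # one 3d flight track point

uv, _ = cv2.projectPoints(track_point, rvec, tvec, K, dist)
u, v = uv.ravel()
cv2.circle(frame, (int(u), int(v)), 4, (0, 255, 0), -1)   # draw the track dot
```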

I find it fascinating to just watch the video and watch how the autopilot is continually working to keep the aircraft on condition.  If you would like to see how the Skywalker + AuraUAS autopilot perform in smoother air, take a look at Flight #71 at the end of this post: http://gallinazo.flightgear.org/uas/drosophila-nator-prototype/

Spin Testing

Wikipedia Spins: In aviation’s early days, spins were poorly understood and often fatal. Proper recovery procedures were unknown, and a pilot’s instinct to pull back on the stick served only to make a spin worse. Because of this, the spin earned a reputation as an unpredictable danger that might snatch an aviator’s life at any time, and against which there was no defense.

Even in today’s modern world, spins are disorienting and still can be fatal.  This project aims to study spins with a highly instrumented aircraft in order to better understand them, model them, and ultimately create cockpit instrumentation to help a pilot safely recover from a spin.

The test aircraft is an Ultrastick 120 operated by the University of Minnesota UAS Research Labs.  It is outfitted with two NASA-designed air data booms, one at each wing tip, along with a traditional IMU, GPS, pitot probe, and control surface position sensors.  The pilot is safely on the ground throughout the entire test flight (as well as before and after.)

These are in-flight videos of two test flights.  Flight #14 is the ‘nominal’ configuration.  In flight #15 the CG is moved to the aft limit and the plane is repeatedly put into an aggressive spin.

In both videos, the onboard attitude estimate and other sensors are drawn as a conformal overlay.  The pilot stick inputs are shown in the lower left and right corners of the display.  This aircraft is equipped with alpha/beta probes so the data from those sensors is used to draw a ‘flight path marker’ that shows angle of attack and side slip.  Airspeed units are in mps, altitude units are in meters.  The last 120 seconds of the flight track is also drawn into the video to help with visualizing the position of the aircraft in the air.

These flight tests are conducted by the University of Minnesota UAS Research Labs.

 

Sunset Flight

This is Skywalker Flight #74, flown on Sept. 7, 2017.  It ended up being a 38.5 minute flight–scheduled to land right at sunset.  The purpose of the flight was to carry insect traps at 300′ AGL and collect samples of what might be flying up at that altitude.

What I like about this flight is that the stable sunset air leads to very consistent autopilot performance.  The circle hold is near perfect.  The altitude hold is +/- 2 meters (usually much better), despite continually varying bank angles which are required to hold a perfect circle shape in 10 kt winds.

The landing at the end is 100% autonomous and I trusted it all the way down, even as it dropped in between a tree-line and a row of turkey barns.  The whole flight is presented here for completeness, but feel free to skip to part 3 if you are interested in seeing the autonomous landing.

As an added bonus, stick around after I pick up the aircraft as I walk it back.  I pan the aircraft around the sky and you can clearly see the perfect circle hold as well as the landing approach.  I use augmented reality techniques to overlay the flight track history right into the video–I think it’s kind of a “cool tool” for analyzing your autopilot and ekf performance.

Field Comparison of MPU6000 vs VN100T

The U of MN UAV Lab has flown a variety of sensors in aircraft, ranging from the lowly MPU-6000 (such as is found on an atmel-based APM2 board) all the way up to an expensive temperature calibrated VectorNav VN-100T.  I wish to present a quick field comparison of these two sensors.

Hobby King Skywalker with MPU-6000.
Sentera Vireo with VectorNav VN-100T onboard.

[disclaimers: there are many dimensions to any comparison, there are many individual use cases, the vn100 has many features not found on a cheap MPU-6000, the conditions of this test are not perfectly comparable: two different aircraft flown on two different days.   These tests are performed with a specific MPU-6000 and a specific VN-100T — I can’t say these predict the performance of any other specific IMU.  Both sensors are being sampled at 100hz externally.  Internally the MPU-6000 is being sampled at 500hz and filtered.  I suspect the VN-100T is outputting unfiltered values — but that is just a guess from the plot results.]

The point of this post is not to pick on the expensive solution; it is difficult to make a comparison on a perfectly level playing field.  But hopefully I can show that in many ways, the less expensive solution may not be as bad as you thought, especially with a little calibration help.

Exhibit A: Raw gyro data in flight

I will mostly let the plots speak for themselves; they share the same vertical scale and cover about the same time span.  The less expensive sensor is clearly less noisy.  This trend holds up when the two sensors are motionless on the ground.

MPU-6000 gyros (100 seconds of flight)
VN-100T gyros (100 seconds of flight)

Exhibit B: Raw accelerometer data in flight

Again, the plots speak for themselves.  Given the same vertical and horizontal scales, the less expensive sensor is far less noisy.

MPU-6000 accelerometers (100 seconds of flight)
VN-100T accelerometers (100 seconds of flight)

Exhibit C: Sensor bias estimates

The UMN UAV lab flies its aircraft with our own high fidelity EKF library.  The EKF code is the fancy mathematical code that estimates the aircraft state (roll, pitch, yaw, position, velocity, etc.) given raw IMU and GPS inputs.

One of the arguments against a cheap sensor versus an expensive temperature calibrated sensor is you are paying for high quality temperature calibration in the expensive model.  This is a big deal with mems IMU sensors, because temperature can have a big effect on the bias and accuracy of the sensor.

This next pair of plots requires a small bit of explanation (please read this first!)

  • For every time step, the UMN EKF library estimates the bias (or error) of each individual gyro and accelerometer (along with estimating aircraft attitude, position, and velocity.)
  • We can plot this bias estimate over time and compare them.
  • The bias estimates are just estimates and other errors in the system (like gps noise) can make the bias estimates jump around.
  • I have developed a temperature calibration process for the inexpensive IMUs.  This process is being used to pre-correct the MPU-6000 sensor values in flight.  This correction process uses past flight data to develop a temp calibration fit; the more you fly, and the bigger the range of temperatures you fly in, the better the calibration becomes (a simple sketch of the fit idea follows this list).
  • Just to be clear: for these final plots, the MPU-6000 is using our external temp calibration process — derived entirely from past flight data.  The VN-100T is running its own internal temp calibration.
  • These bias estimates are not perfect, but they give a suggestion of how well the IMU is calibrated.  Higher bias values suggest a larger calibration error.
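The essence of that external temp-cal step fits in a few lines: fit each sensor's estimated bias (from past flight data) against IMU temperature, then subtract the fit from the raw reading in flight.  The sketch below uses a simple 2nd-order polynomial and made-up numbers; the real process is more involved:

```python
import numpy as np

temps_C   = np.array([5.0, 12.0, 18.0, 25.0, 31.0, 38.0])    # logged IMU temps
gyro_bias = np.array([0.09, 0.07, 0.05, 0.03, 0.02, 0.00])   # EKF bias estimates (deg/sec)

coeffs = np.polyfit(temps_C, gyro_bias, deg=2)   # bias as a function of temperature

def corrected_gyro(raw_deg_sec, temp_C):
    return raw_deg_sec - np.polyval(coeffs, temp_C)

print(corrected_gyro(0.5, 20.0))
```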

MPU-6000 sensor bias estimates from UMN EKF library.

VN-100T sensor bias estimates from UMN EKF library.

What can we learn from these plots?

  • The MPU-6000 gyro biases (estimated) are approximately 0.05 deg/sec.
  • The VN-100T gyro biases (estimated) are as large as -1.0 deg/sec in roll and 0.35 deg/sec in yaw.
  • The MPU-6000 accel biases (estimated) are in the 0.1 m/s^2 range.
  • The VN-100T accel biases (estimated) are also in the 0.1 m/s^2 range.

In some cases the MPU-6000 with external temperature calibration appears to be more accurate than the VN-100T, and in some cases the VN-100T does better.

Summary

By leveraging a high quality EKF library and a bit of clever temperature calibration work, an inexpensive MPU-6000 seems to be able to hold its own quite well against an expensive temperature calibrated mems IMU.

Drosophila-nator (Prototype)

This is a joint Entomology / Aerospace project to look for evidence that Spotted Wing Drosophila (an invasive species to North America) may be migrating at higher altitudes where wind currents can carry them further and faster than otherwise expected.

Skywalker 1900 outfitted with 2 petri-dish sized insect traps.

Skywalker Flight #69

Altitude: 200′ AGL
Airspeed: 20 kts
Weather:  10 kts wind, 22C
Mission: Circle fruit fields with insect traps.

Skywalker Flight #70

Altitude: 300′ AGL
Airspeed: 20 kts
Weather:  12 kts wind, 20C
Mission: Circle fruit fields with insect traps.

Skywalker Flight #71

Altitude: 400′ AGL
Airspeed: 20 kts
Weather:  13-14 kts wind, 20C
Mission: Circle fruit fields with insect traps.

Flying on the Edge of a Storm

This is a follow up to my eclipse post.  I was forced to end my eclipse flight 10 minutes before the peak because a line of rain was just starting to roll over the top of me.  I waited about 20-30 minutes for the rain to clear and launched a post-eclipse flight that lasted just over an hour of flight time.

Here are some interesting things in this set of flight videos:

  • You will see the same augmented reality heads up display and flight track rendering.  This shows every little blemish in the sensors, EKF, flight control system, and airplane!  It’s a great testing and debugging tool if you really hope to polish your aircraft’s tuning and flight performance.
  • IT IS WINDY!!!!  The skywalker cruises at about 20 kts indicated airspeed.  Winds aloft were pushing 16 … 17 … 18 kts sustained.  At one point in the flight I record 19.5 kt winds.
  • At t=2517 (there is a timer in seconds in the lower left corner of the HUD) we actually get pushed backwards for a few seconds.  How does your autopilot navigation work when you are getting pushed backwards by the wind?!?  You can find this about 20 seconds into Part #3.  Check it out. 🙂
  • In the 2nd half of the flight the winds transition from 16-17 kts and fairly smooth, to 18-19 kts and violent.  The poor little skywalker is getting severely thrashed around the sky in places.  Still, it and the autopilot seem to handle it pretty well.  I was a bit white-knuckled watching the flight unfold from the ground, but the on board HUD shows the autopilot was pretty relaxed and handled the conditions without really breaking much of a sweat.
  • When the winds really crank up, you will see the augmented flight track pass by sideways just after we pass the point in the circle where we are flying directly into the wind  … literally the airplane is flying sideways relative to the ground when you see this.
  • Does this bumpy turbulent video give you a headache?  Stay tuned for an upcoming video in super smooth air with a butterworth filter on my airspeed sensor (a quick sketch of that kind of filter follows this list).
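For the curious, a butterworth low-pass on an airspeed channel is only a couple of lines with scipy.  The cutoff frequency and log rate below are assumptions for illustration:

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 100.0       # log rate (hz)
cutoff = 2.0     # low-pass cutoff (hz), assumed
b, a = butter(2, cutoff / (fs / 2.0))    # 2nd-order, normalized to Nyquist

airspeed_kt = 20.0 + np.random.randn(1000) * 1.5    # stand-in noisy airspeed data
airspeed_smooth = lfilter(b, a, airspeed_kt)        # causal (onboard-style) filtering
```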

Note: the hobbyking skywalker (1900mm) aircraft flown in this video has logged 71 flights, 31.44 hours in the air (1886 minutes), and covered 614 nautical miles (1137 km) across the ground.