New Hardware Design Project

Jan 14, 2018 update: I have replaced the board layout picture with a new version that reflects significant repositioning of the components.  This simplifies the traces and gives better access to the USB ports and microSD cards in the final design.

There is something relaxing about nudging traces and components around on a PCB layout.  Last night I spent some more time working on a new board design, which I present here:

If you can’t tell from the schematic, this is a fixed-wing autopilot built from common, inexpensive components.  The whole board will cost about $100 to assemble.  I figure $20 for the Teensy, $20 for the PocketBeagle (on sale?), $15 for a nice voltage regulator, $15 for the IMU breakout, and $25 for the board itself.  Add a few $$$ for connectors, cables, and other odds and ends, and that puts the project right around $100.

The layout is starting to come together, but it still needs some tweaking.  I’d like to label the connectors better and thicken the traces that carry main power, and I’m sure I’ll find more things to shift around and traces to reroute before I try to build up a board.

I am doing the design work in ExpressPCB, mainly because I already know how to use the tool, and at the small quantities I need, board price is not a big factor.  The boards themselves will cost about $25 each when ordering three (shipping included).

Version 2.0

This is actually an evolution of an autopilot design that has been kicking around now for more than 10 years.  The basic components have improved over the years, but the overall architecture has remained stable.  The Teensy firmware and BeagleBone (Linux) autopilot software are complete and flying (as much as any software is ever complete).  This board design will advance the hardware side of the AuraUAS project and make it possible to build up new systems.

This new board will have a similar footprint to a full-size BeagleBone, but everything will be self-contained on a single board.  Previously the system included a full-size BeagleBone, a cape, plus an APM2 as a third layer.  All of this now collapses into a single board and drops at least two external cables.

A few additional parts are needed to complete the system: a u-blox 8 GPS, a pair of radio modems, and an AttoPilot volt/amp sensor.  The entire avionics package should cost between $200 and $250, depending on component and supplier choices.

Source code for the Teensy firmware, source code for the BeagleBone AP, and the hardware design files are all available at the AuraUAS GitHub page: https://github.com/AuraUAS

Adventures in Aerial Image Stitching Episode #7

Lens Distortion Issues (or Fun with Optimizers)

For this episode I present a plot of optimized camera locations.  I have a set of 840 images.  Each image is taken from a specific location and orientation in space; call that a camera pose.  I can also find features in the images and match them up between pairs of images.  The presumption is that all the features and all the poses together form a giant puzzle with only one correct solution.  When every feature is at its correct 3d location and every camera is at its correct pose, the errors in the system go to zero and we have a correct stitch.
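As an aside, the pairwise matching step is standard fare.  Here is a minimal sketch of the idea using OpenCV; the detector choice (ORB), the ratio-test threshold, and the filenames are illustrative stand-ins, not necessarily what my pipeline uses:

```python
# Sketch: detect and match features between one image pair.  The real
# pipeline repeats this for every overlapping pair in the 840-image set.
import cv2

img1 = cv2.imread("image_000.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("image_001.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=5000)   # detector choice is illustrative
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force Hamming matching with a ratio test to drop ambiguous matches.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
pairs = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in pairs if m.distance < 0.75 * n.distance]

print(len(good), "putative matches for this pair")
```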

However, look at the plot of camera pose locations: a distinct radial pattern emerges.  The poses at the fringes of the set have a much different elevation (color) than the poses in the middle.

I have set up my optimizer to find the best fit for the 3d feature locations and camera poses (location and orientation), while also solving for the focal length of the lens and the lens distortion parameters.  (Sometimes this is referred to as ‘sparse bundle adjustment’ in the literature.)
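To make that concrete, here is a toy sketch of the optimization setup using scipy’s least_squares.  Every camera pose, every 3d point, and the shared lens parameters are packed into one parameter vector, and the residual is the reprojection error of every observed feature.  The packing layout, sizes, and variable names are my illustration, not the actual code, and a real run would also pass the Jacobian sparsity pattern (jac_sparsity) or 840 cameras would take forever:

```python
# Toy sparse-bundle-adjustment setup: poses + 3d points + shared lens
# parameters in one vector, reprojection error as the residual.
import numpy as np
import cv2
from scipy.optimize import least_squares

n_cams, n_pts = 840, 100000   # illustrative sizes

def unpack(params):
    poses = params[:n_cams * 6].reshape(n_cams, 6)   # rvec (3) + tvec (3)
    pts3d = params[n_cams * 6:n_cams * 6 + n_pts * 3].reshape(n_pts, 3)
    f = params[-5]                                   # shared focal length
    dist = params[-4:]                               # k1, k2, p1, p2
    return poses, pts3d, f, dist

def residuals(params, cam_idx, pt_idx, observed_uv, cx, cy):
    poses, pts3d, f, dist = unpack(params)
    K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1]])
    errs = []
    for ci in range(n_cams):
        sel = cam_idx == ci                          # features seen by camera ci
        if not np.any(sel):
            continue
        proj, _ = cv2.projectPoints(pts3d[pt_idx[sel]],
                                    poses[ci, :3], poses[ci, 3:],
                                    K, dist)
        errs.append((proj.reshape(-1, 2) - observed_uv[sel]).ravel())
    return np.concatenate(errs)

# result = least_squares(residuals, x0,
#                        args=(cam_idx, pt_idx, observed_uv, cx, cy),
#                        verbose=2)
# Passing bounds=(lo, hi) here is one way to constrain f and dist.
```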

Optimizers are great at minimizing errors, but they often do strange things along the way.  In this case the optimizer apparently came up with the wrong lens distortion parameters, then moved all the poses around the fringes of the set to compensate and keep the error metric minimized.

How can I fix this?  My first try will be to return to a set of precomputed camera calibration and lens distortion parameters (based on taking lots of pictures of a checkerboard pattern).  I will rerun the optimization on just the 3d features and camera poses and see how that affects the final optimized camera locations.  I can also set bounds on any of the parameters, depending on how certain I am of their correct values (which I’m not).
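For reference, those precomputed parameters come out of the standard OpenCV checkerboard procedure, roughly the sketch below.  The board dimensions and file paths are placeholders for whatever calibration set is on hand:

```python
# Sketch of offline checkerboard calibration with OpenCV.
import glob
import cv2
import numpy as np

pattern = (9, 6)   # inner-corner count of the printed checkerboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for fname in glob.glob("calib/*.jpg"):
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
```

Note that cv2.calibrateCamera is itself minimizing reprojection error internally, which is exactly the point of the side note below.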

Fun side note: the code that estimates camera calibration and lens distortion parameters is itself an optimizer.  As such, it can distribute the error into different buckets and come up with a range of solutions depending on the input image set.

Jan 14, 2018 update: I have taken several steps forward in understanding the issues here.

  1. In the original scatter plot, I was plotting something called tvec.  Anyone who has done significant camera pose work with OpenCV will recognize rvec and tvec.  But tvec is not the camera’s position: it has to be rotated back through the inverse of the rvec rotation (the camera center works out to -Rᵀ @ tvec), so plotting raw tvec was not useful.  I have done the extra work to derive the actual 3d position (see the first sketch after this list), and that makes a huge difference.
  2. After plotting the actual 3d camera pose locations, it became clear that globally optimizing a single shared camera focal length and set of distortion parameters seems far more productive and correct than attempting to optimize those parameters separately for each individual camera pose.
  3. Giving the optimizer too much leeway in picking the camera calibration and distortion parameters seems to lead to bad results.  The closer I can set them at the start, and the tighter I can constrain them during the optimization, the better the final results.
  4. A new issue is emerging: during the optimization, the entire system seems to be able to warp or tip.  One corner of the plot ends up lower and more compressed; the other side ends up higher and more spread out.  Here are some ideas for dealing with this:
    1. I could apply a global affine transform to refit the cameras as closely as possible to their original locations (see the second sketch after this list); however, I would then need to reproject and triangulate all the feature points to come up with their new 3d locations.
    2. I could apply some sort of constraint to the camera locations.  For example, I could pretend I know each location to +/- 10 meters and add that as a constraint on the camera pose locations during the optimization.  But do I know the relative positions that accurately?
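Here is the conversion from item 1 above, for anyone playing along at home.  In OpenCV’s convention, rvec and tvec map world coordinates into the camera frame (X_cam = R @ X_world + tvec), so the camera center in world coordinates is -Rᵀ @ tvec:

```python
# Recover a camera's 3d world position from an opencv (rvec, tvec) pair.
import numpy as np
import cv2

def camera_position(rvec, tvec):
    R, _ = cv2.Rodrigues(rvec)   # rotation vector -> 3x3 rotation matrix
    return (-R.T @ np.asarray(tvec).reshape(3, 1)).ravel()
```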
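And here is a sketch of idea 1 for the warp/tip problem.  I have written it as a similarity transform (uniform scale, rotation, translation: a restricted form of affine) fitted with the classic Umeyama alignment; the function is my own illustration, not code from the project:

```python
# Fit one global similarity transform that best maps the optimized
# camera positions back onto their original locations (Umeyama method).
import numpy as np

def fit_similarity(src, dst):
    """Least-squares s, R, t such that dst_i ~= s * R @ src_i + t (Nx3 arrays)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    src_c, dst_c = src - mu_s, dst - mu_d
    U, S, Vt = np.linalg.svd(dst_c.T @ src_c / len(src))
    D = np.eye(3)
    if np.linalg.det(U @ Vt) < 0:   # guard against a mirror solution
        D[2, 2] = -1.0
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / src_c.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# s, R, t = fit_similarity(optimized_positions, original_positions)
# realigned = s * optimized_positions @ R.T + t   # apply row-wise
```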

Here is my latest plot of camera locations:

Jan 15, 2018 update:  Here is a quick YouTube video showing the optimizer in action.  Each frame shows the result of one optimizer step, with altitude encoded as color.  The result isn’t perfect, as you can tell from the various artifacts, but this is all a work in progress (and all open source, built on top of Python, NumPy, SciPy, and OpenCV).