Zombie Door

Run!!!

Zombies are pretty cool.  This post describes something a little less cool, but uses zombies to explain the concept (in a shallow, transparent attempt to capture your attention!)

Zombie Door Method

Imagine we want to generate a uniformly distributed random sampling in some complex space that our random number generator does not directly support.

Let me start with a simple example.  Imagine we have a random number generator that produces a random integer between 1 and 100.  However, we actually want to generate random numbers between 41 and 50.  (I know there are better ways to do this, but stick with me for this example.)  Let’s solve this with the zombie door method.

  • Build a wall and label it 1-100 where 1 is to the far left, and 100 is to the far right.
  • Cut a door in your wall from 41 to 50.
  • Now create random zombies starting anywhere from 1 to 100 and let them walk straight towards the wall.
  • The zombies will lurch across the room and eventually hit the wall, explode, and dissolve in a fizzing puddle of bloody goo … or whatever it is that zombies do when they die.
  • The zombies that are lucky enough to walk through the door survive!

For this example it would be easy enough to just scale and offset the usual random number generator that produces a floating point value between 0 and 1, but it illustrates the basic approach of the ‘zombie door’ method.
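As a concrete illustration, here is a minimal python sketch of that 1-100 wall with a door cut from 41 to 50 (just an illustration of the idea, not the script used for the pictures below):

import random

def zombie_door_sample(lo, hi, door_lo, door_hi):
    # spawn zombies anywhere along the wall and keep only the ones
    # that happen to lurch through the door
    while True:
        z = random.randint(lo, hi)       # a zombie's starting position
        if door_lo <= z <= door_hi:      # it found the door: survivor!
            return z
        # otherwise the zombie splats against the wall; spawn another

# draw ten survivors from the 41-50 door in the 1-100 wall
survivors = [zombie_door_sample(1, 100, 41, 50) for i in range(10)]
print(survivors)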

A More Interesting Example

Imagine we have an arbitrary polygon outline and we want to splatter it with random circles.  However, we only want circles that are completely within the polygon.  (And we want our random circles to be a truly unbiased, uniformly distributed random sample.)  This example is just like the simple ‘wall’ example except now we have gone ‘2D’.

Imagine an arbitrary polygon shape:

We would like to fill this shape with 200 random circles, making sure none of our circles straddle the boundary or lie outside the polygon.  We want an even distribution over the interior area of our polygon.

We can do this by generating random circles within the min/max range of our shape, and then testing if they lie completely inside the polygon.  We reject any outliers and keep the inliers.  It’s very simple, but very cool because … you know … zombies!  Here is a picture of how our circles turned out:

If you are at all curious as to what became of our dead zombies (all the ones that splatted against the wall) here is the picture of that:

If you, the reader, are concerned that I am making light of a very real pending zombie apocalypse issue, then here is my best tip for protecting your residence:

Finally, here is the python script I used to generate the pictures in this posting.  I leverage the very, very, very cool GPC polygon clipping library to test if the outline polygon ‘covers’ the circle.

#!/usr/bin/python
 
import random
 
# polygon (GPC) python package: https://pypi.python.org/pypi/Polygon2
import Polygon
import Polygon.IO
import Polygon.Shapes
 
# this will seed the random number generator with current time and
# force different results on each run.
random.seed()
 
# the outline of the shape that will contain our splatters
outline = [ [0.5, -0.01], [1.01, -0.01], [0.5, 1.01], [-0.01, 1.01] ]
 
# define circle properties
num_circles = 200
min_r = 0.002
max_r = 0.02
 
# no closer than this to the boundary
margin = 0
#margin = 0.002
 
# create the polygon template (outline)
template = Polygon.Polygon(outline)
 
# make the shape a bit more interesting
c = Polygon.Shapes.Circle(radius=0.4, center=(1, 0.5), points=32)
template = template - c
c = Polygon.Shapes.Circle(radius=0.3, center=(0, 1), points=32)
template = template - c
 
# determine max/min of template
min_x = max_x = outline[0][0]
min_y = max_y = outline[0][1]
for p in outline:
    if p[0] < min_x: min_x = p[0]
    if p[0] > max_x: max_x = p[0]
    if p[1] < min_y: min_y = p[1]
    if p[1] > max_y: max_y = p[1]
 
print 'template bounds:', min_x, min_y, 'to', max_x, max_y
print 'radius range:', min_r, max_r
print 'margin:', margin
print 'num circles:', num_circles
 
# generate splats using zombie door method
circles = []
discarded = []
while len(circles) < num_circles:
    x = random.uniform(min_x, max_x)
    y = random.uniform(min_y, max_y)
    r = random.uniform(min_r, max_r)

    # make the circle
    c = Polygon.Shapes.Circle(radius=r, center=(x, y), points=32)

    # make the circle padded with extra margin
    cm = Polygon.Shapes.Circle(radius=(r+margin), center=(x, y), points=32)

    if template.covers(cm):
        # circle + margin fully contained inside the template
        circles.append(c)
    else:
        discarded.append(c)
 
# assemble final polygons and write output
final = Polygon.Polygon()
for c in circles:
    final += c
Polygon.IO.writeGnuplot('in.plt', [template, final])
Polygon.IO.writeSVG('in.svg', [final], fill_color=(0,0,0))

reject = Polygon.Polygon()
for c in discarded:
    reject += c
Polygon.IO.writeGnuplot('out.plt', [template, reject])
Polygon.IO.writeSVG('out.svg', [reject], fill_color=(0,0,0))

Automated Movie Frame Extracting and Geotagging

This is a short tutorial on an automated method to extract and geotag movie frames.  One specific use case is that you have just flown a survey with your quad copter using a 2-axis gimbal pointing straight down, and a gopro action cam in movie mode.  Now you’d like to create a stitched map from your data using tools like pix4d or agisoft.

The most interesting part of this article is the method I have developed to correlate the frame timing of a movie with the aircraft’s flight data log.  This correlation process yields a result such that for any and every frame of the movie, I can find the exact corresponding time in the flight log, and for any time in the flight log, I can find the corresponding video frame.  Once this relationship is established, it is a simple matter to walk through the flight log and pull frames based on the desired conditions (for example, grab a frame at some time interval, while above some altitude AGL, and only when oriented +/- 10 degrees from north or south.)

Video Analysis

The first step of the process is to analyze the frame to frame motion in the video.

features

Example of feature detection in a single frame.

  1. For each video frame we run a feature detection algorithm (such as SIFT, SURF, Orb, etc.) and then compute the descriptors for each found feature.
  2. Find all the feature matches between frame “n-1” and frame “n”.  This is done using standard FLANN matching, followed by a RANSAC based homography matrix solver, and then discarding outliers.  This approach has a natural advantage of being able to ignore extraneous features from the prop or the nose of the aircraft because those features don’t fit into the overall consensus of the homography matrix.
  3. Given the set of matching features between frame “n-1” and frame “n”, I then compute a best fit rigid affine transformation from the feature locations (u, v) in one frame to the next.  The affine transformation can be decomposed into a rotational component, a translation (x, y) component, and a scale component.
  4. Finally I log the frame number, frame time (starting at t=0.0 for the first frame), the rotation (deg), x translation (pixels), and y translation (pixels).  (A rough code sketch of these steps follows below.)
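Here is a rough sketch of those four steps using OpenCV’s python bindings.  For brevity I use ORB with a brute force matcher instead of SIFT/SURF + FLANN, and cv2.estimateAffinePartial2D() (which performs its own RANSAC outlier rejection) to recover the rotation/translation/scale; ‘movie.mp4’ is just a placeholder file name:

import math
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=2000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def frame_motion(prev_gray, gray):
    # 1. detect features and compute descriptors in both frames
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(gray, None)
    # 2. match descriptors between frame "n-1" and frame "n"
    matches = matcher.match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches])
    dst = np.float32([kp2[m.trainIdx].pt for m in matches])
    # 3. best fit rotation + translation + scale, outliers rejected by RANSAC
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    rot_deg = math.degrees(math.atan2(M[1, 0], M[0, 0]))
    return rot_deg, M[0, 2], M[1, 2]

# 4. log frame number, frame time, rotation (deg), x/y translation (pixels)
cap = cv2.VideoCapture('movie.mp4')
fps = cap.get(cv2.CAP_PROP_FPS)
prev_gray = None
frame_num = 0
motion_log = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    if prev_gray is not None:
        rot, tx, ty = frame_motion(prev_gray, gray)
        motion_log.append((frame_num, frame_num / fps, rot, tx, ty))
    prev_gray = gray
    frame_num += 1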

The cool, tricky observation

I haven’t seen anyone else do anything like this before, so I’ll pretend I’ve come up with something new and cool.  I know there is never anything new under the sun, but it’s fun to rediscover things for oneself.

Use case #1: Iris quad copter with a two axis Tarot gimbal, and a go-pro hero 3 pointing straight down.  Because the gimbal is only 2-axis, the camera tracks the yaw motion of the quad copter exactly.  The camera is looking straight down, so the camera roll rate should exactly match the quad copter’s yaw rate.  I have shown I can compute the frame to frame roll rate using computer vision techniques, and the Iris records its own flight log.  If these two signal channels aren’t too noisy or biased relative to each other, perhaps we can find a way to correlate them and figure out the time offset.

iris-tarot

3DR Iris + Tarot 2-axis gimbal

Use case #2:  A Senior Telemaster fixed wing aircraft with a mobius action cam fixed to the airframe looking straight forward.  In this example, camera roll should exactly correlate to aircraft roll.  Camera x translation should map to aircraft yaw, and camera y translation should map to aircraft pitch.

IMG_20150804_073522

Senior Telemaster with forward looking camera.

In all cases this method requires that at least one of the camera axes is fixed relative to at least one of the aircraft axes.  If you are running a 3 axis gimbal you are out of luck … but perhaps with this method in mind and a bit of ingenuity alternative methods could be devised to find matching points in the video versus the flight log.

Flight data correlation

This is the easy part.  After processing the movie file, we now have a log of the frame to frame motion.  We also have the flight log from the aircraft.  Here are the steps to correlate the two data logs.

overlay

Correlated sensor data streams.

  1. Load both data logs (movie log and flight log) into a big array and resample the data at a consistent interval.  I have found that resampling at 30hz seems to work well enough.  I have experimented with fitting a spline curve through lower rate data to smooth it out.  It makes the plots look prettier, but I’m sure it does not improve the accuracy of the correlation.
  2. I coded this process up in python.  Luckily python (numpy) has a function that takes two time sequences as input and does brute force correlation.  It slides one data stream against the other and computes a correlation value for every possible overlap.  This is why it is important to resample both data streams at the same fixed sample rate.  (A short sketch of the resample + correlate step follows this list.)
    ycorr = np.correlate(movie_interp[:,1], flight_interp[:,1], mode='full')
  3. When you plot out “ycorr”, you will hopefully see a spike in the plot and that should correspond to the best fit of the two data streams.
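Here is a short sketch of the resample + correlate step with numpy.  It assumes each log has already been loaded as an (N, 2) array of (time, value) rows; movie_data and flight_data are placeholder names rather than the actual variables in my script:

import numpy as np

dt = 1.0 / 30.0    # resample both streams at 30 hz

def resample(data, dt):
    # data is an (N, 2) array of (time, value) rows
    t = np.arange(data[0, 0], data[-1, 0], dt)
    return np.column_stack((t, np.interp(t, data[:, 0], data[:, 1])))

movie_interp = resample(movie_data, dt)     # e.g. frame to frame roll rate
flight_interp = resample(flight_data, dt)   # e.g. logged gyro yaw rate

# brute force correlation at every possible overlap
ycorr = np.correlate(movie_interp[:,1], flight_interp[:,1], mode='full')

# the index of the spike gives the shift (in samples) between the streams;
# the sign convention depends on the argument order, so sanity check the
# result by plotting the two streams shifted by the computed offset
shift = np.argmax(ycorr) - (len(flight_interp) - 1)
print('time offset = %.3f sec' % (shift * dt))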

correlation

Plot of data overlay position vs. correlation.

Geotagging movie frames

 

GOPR0002-009054

Raw Go-pro frame grab showing significant lens distortion.  Latitude = 44.69231071, Longitude = -93.06131655, Altitude = 322.1578

The important result of the correlation step is that we have now determined the exact offset in seconds between the movie log and the flight log.  We can use this to easily map a point in one data file to a point in the other data file.

Movie encoding formats are sequential and the compression algorithms require previous frames to generate the next frame.  Thus the geotagging script steps through the movie file frame by frame and finds the point in the flight log data file that matches.

For each frame that matches the extraction conditions, it is a simple matter to look up the corresponding longitude, latitude, and altitude from the flight log.  My script provides an example of selecting movie frames based on conditions in the flight log.  I know that the flight was planned so the transects were flown North/South and the target altitude was about 40m AGL.  I specifically coded the script to extract movie frames at a specified interval in seconds, but only consider frames taken when the quad copter was above 35m AGL and oriented within +/- 10 degrees of either North or South.  The script is written in python so it could easily be adjusted for other constraints.
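As a rough illustration, the frame selection test could look something like this (the field names and thresholds below are examples, not the actual script):

def want_frame(t_frame, t_last_grab, alt_agl_m, heading_deg,
               interval=2.0, min_alt=35.0, tol=10.0):
    # only grab a frame every 'interval' seconds
    if t_frame - t_last_grab < interval:
        return False
    # only while above the minimum altitude AGL
    if alt_agl_m < min_alt:
        return False
    # only when oriented within +/- tol degrees of north or south
    hdg = heading_deg % 360.0
    off_north = min(hdg, 360.0 - hdg)
    off_south = abs(hdg - 180.0)
    return off_north <= tol or off_south <= tol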

The script writes each selected frame to disk using the opencv imwrite() function, and then uses the python “pyexiv2” module to write the geotag information into the exif header for that image.

geolocated

A screen grab from Pix4d showing the physical location of all the captured Go-pro movie frames.

Applications?

Aerial surveying and mapping

The initial use case for this code was to automate the process of extracting frames from a go-pro movie and geotagging them in preparation for handing the image set over to pix4d for stitching and mapping.

gopro-stitch

Final stitch result from 120 geotagged gopro movie frames.

Using video as a truth reference to analyze sensor quality

It is interesting to see how accurately the video roll rate corresponds to the IMU gyro roll rate (assume a forward looking camera now.)  It is also interesting to see in the plots how the two data streams track exactly for some periods of time, but diverge by some slowly varying bias for other periods of time.  I believe this shows the variable bias of MEMS gyro sensors.  It would be interesting to run down this path a little further and see whether the bias correlates to g force in a coupled axis.
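For example, once the two streams are time aligned and resampled (as in the correlation step above), a slowly varying bias estimate can be pulled out with something as simple as a moving average of the difference between the gyro rate and the video derived rate (a sketch, with hypothetical array names):

import numpy as np

def estimate_gyro_bias(video_rate, gyro_rate, dt, window_sec=10.0):
    # both inputs are aligned, equally sampled 1-d arrays (rates in deg/sec)
    diff = gyro_rate - video_rate
    # smooth with a moving average to expose the slowly varying bias
    n = max(1, int(window_sec / dt))
    kernel = np.ones(n) / float(n)
    return np.convolve(diff, kernel, mode='same')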

Visual odometry and real time mapping

Given feature detection and matching from one frame to the next, knowledge of the camera pose at each frame, opencv pnp() and triangulation() functions, and a working bundle adjuster … what could be done to map the surface or compute visual odometry during a gps outage?

Source Code

The source code for all my image analysis experimentation can be found at the University of Minnesota UAV Lab github page.  It is distributed under the MIT open-source license:

https://github.com/UASLab/ImageAnalysis

Comments or questions?

I’d love to see your comments or questions in the comments section at the end of this page!

Image Stitching Tutorial Part #1: Introduction

features

Background

During the summer of 2014 I began investigating image stitching techniques and technologies for a NOAA sponsored UAS marine survey project.  In the summer of 2015 I was hired by the University of Minnesota Department of Aerospace Engineering and Mechanics to work on a Precision Agriculture project that also involves UAS’s and aerial image stitching.

Over the past few months I have developed a functional open-source image stitching pipeline written in python and opencv.  It is my intention with this series of blog postings to introduce this work and further explain our approach to aerial image processing and stitching.

Any software development project is a journey of discovery and education, so I would love to hear your thoughts, feedback, and questions in the comments area of any of these posts.  The python code described here will be released under the MIT open-source software license (one of my to-do list items is to publish this project code, so that will happen “soon.”)
feature-matching

Why?

The world already has several high quality commercial image stitching tools as well as several cloud based systems that are free to use.  Why develop yet another image stitching pipeline?  There are several reasons we began putting this software tool chain together.

  • We are looking at the usefulness and benefits of ‘direct georeferencing.’  If we have accurate camera pose information (time, location, and camera orientation of each image), then how can this improve the image analysis, feature detection, feature matching, stitching, and rendering process?
  • One of the core strengths of the UMN Aerospace Engineering Department is a high quality 15-state kalman filter attitude determination system.  This system uses inertial sensors (gyros and accelerometers) in combination with a GPS to accurately estimate an aircraft’s ‘true’ orientation and position.  Our department is uniquely positioned to provide a high quality camera pose estimate and thus examine ‘direct georeferencing’ based image processing.
  • Commercial software and closed source cloud solutions do not enable the research community to easily ask questions and test ideas and theories.
  • We hope to quantify the sensor quality required to perform useful direct georeferencing as well as the various sources of uncertainty that can influence the results.
  • We would love to involve the larger community in our work, and we anticipate there will be some wider interest in free/open image processing and stitching tools that anyone can modify and run on their own computer.

Outline

I will be posting new tutorials in this series as they are written.  Here is a quick look ahead at what topics I plan to cover:

  • Direct Georeferencing
  • Image stitching basics
  • Introduction to the open-source software tool chain
  • Aircraft vs. camera poses and directly visualizing your georeferenced data set
  • Feature detection
  • Feature matching
  • Sparse Bundle Adjustment
  • Seamless 3D and 2D image mosaics, DEM’s, Triangle meshes, etc.

Throughout the image collection and image stitching process there is art, science, engineering, math, software, hardware, aircraft, skill, and maybe a bit of luck once in a while (!) that all come together in order to produce a successful aerial imaging result.

Software Download

The software referenced in this tutorial series is licensed with the MIT license and available on the University of Minnesota UAV Lab public github page under the ImageAnalysis repository.

Credits

Adventures in Aerial Image Stitching

A small UAV + a camera = aerial pictures.

SAM_0079

SAM_0057

SAM_0053

This is pretty cool just by itself.  The above images are downsampled, but at full resolution you can pick out some pretty nice details.  (Click on the following image to see the full/raw pixel resolution of the area.)

SAM_0057-detail

The next logical step of course is to stitch all these individual images together into a larger map.  The questions are: What software is available to do image stitching?  How well does it work?  Are there free options?  Do I need to explore developing my own software tool set?

Expectations

Various aerial imaging sites have set the bar at near visual perfection.  When we look at google maps (for example), the edges of runways and roads are exactly straight, and it is almost impossible to find any visible seam or anomaly in their data set.  However, it is well known that google imagery can be several meters off from its true position, especially away from well travelled areas.  Also, their imagery can be a bit dated and is lower resolution than we can achieve with our own cameras … these are the reasons we might want to fly a camera and get more detailed, more current, and perhaps more accurately placed imagery.

Goals

Of course the first goal is to meet our expectations. 🙂  I am very averse to grueling manual stitching processes, so the second goal is to develop a highly automated process with minimal manual intervention needed.  A third goal is to be able to present the data in a way that is useful and manageable to the end user.

Attempt #1: Hugin

Hugin is a free/open-source image stitching tool.  It appears to be well developed, very capable, and supports a wide variety of stitching and projection modes.  At its core it uses SIFT to identify features and create a set of keypoints.  It then builds a KD tree and uses fast approximate nearest neighbor matching to find matching features between image pairs.  This is pretty state of the art stuff as far as my investigation into this field has shown.

Unfortunately I could not find a way to make hugin deal with a set of pictures taken mostly straight down and from a moving camera position.  Hugin seems to be optimized for personal panoramas … the sort of pictures you would take from the top of a mountain when just one shot can’t capture the full vista.  Stitching aerial images together involves a moving camera vantage point and this seems to confuse all of hugin’s built in assumptions.

I couldn’t find a way to coax hugin into doing the task.  If you know how to make this work with hugin, please let me know!  Send me an email or comment at the bottom of this post!

Attempt #2: flightriot.com + Visual SFM + CMPMVS

Someone suggested I checkout flightriot.com.  This looks like a great resource and they have outlined a processing path using a number of free or open-source tools.

Unfortunately I came up short with this tool path as well.  From the pictures and docs I could find, it appears that the primary goal of this site (and the referenced software packages) is to create a 3d surface model from the aerial pictures.  This is a really cool thing to see when it works, but it’s not the direction I am going with my work.  I’m more interested in building top down maps.

Am I missing something here?  Can this software be used to stitch photos together into larger seamless aerial maps?  Please let me know!

Attempt #3: Microsoft ICE (Image Composite Editor)

Ok, now we are getting somewhere.  MS ICE is a slick program.  It’s highly automated to the point of not even offering much ability for user intervention.  You simply throw a pile of pictures at it, and it finds keypoint matches, and tries to stitch a panorama together for you.  It’s easy to use, and does some nice work.  However, it does not take any geo information into consideration.  As it fits images together you can see evidence of progressively increased scale and orientation distortion.  It has trouble getting all the edges to line up just right, and occasionally it fits an image into a completely wrong spot.  But it does feather the edges of the seams so the final result has a nice look to it.  Here is an example.  (Click the image for a larger version.)

SAM_0125_stitch

The result is rotated about 180 degrees off, and the scale at the top is grossly exaggerated compared to the scale at the bottom of the image.  If you look closely, it has a lot of trouble matching up the straight line edges in the image.  So ICE does a commendable job for what I’ve given it, but I’m still way short of my goal.

Here is another image set stitched with ICE.  You can see it does a better job avoiding progressive scaling errors on this set.  However, linear features are still crooked, there are many visual discontinuities, and in one spot it has completely bungled the fit and inserted a fragment completely wrong.  So it still falls pretty short of my goal of making a perfectly scaled, positioned, and seamless map that would be useful for science.

SAM_0372_stitch-reduced

Attempt #4: Write my own stitching software

How hard could it be … ? 😉

  1. Find the features/keypoints in all the images.
  2. Compute a descriptor for each keypoint.
  3. Match keypoint descriptors between all possible pairs of images.
  4. Filter out bad matches.
  5. Transform each image so that its keypoint positions match exactly (maybe closely? maybe roughly on the same planet as …?) the same keypoints as they are found in all other matching images.

I do have an advantage I haven’t mentioned until now:  I have pretty accurate knowledge of where the camera was when the image was taken, including the roll, pitch, and yaw (“true” heading).  I am running a 15-state kalman filter that estimates attitude from the gps + inertials.  Thus it converges to “true” heading, not magnetic heading, not ground track, but true orientation.  Knowing true heading is critically important for accurately projecting images into map space.
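To make that concrete, here is a simplified sketch of projecting a pixel into map space once the camera pose is known.  It assumes a pinhole camera with no lens distortion, flat ground, a local NED coordinate frame, and one particular straight-down camera mounting; my actual code handles the details differently, so treat this purely as an illustration of why an accurate ‘true’ heading matters:

import numpy as np

def rotation_body_to_ned(yaw, pitch, roll):
    # standard aerospace yaw-pitch-roll rotation (angles in radians)
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz.dot(Ry).dot(Rx)

# camera axes (x right, y down, z out the lens) mapped onto body axes
# (x forward, y right, z down) for a straight-down camera with the top of
# the image facing forward -- one possible mounting, not the only one
CAM_TO_BODY = np.array([[0.0, -1.0, 0.0],
                        [1.0,  0.0, 0.0],
                        [0.0,  0.0, 1.0]])

def pixel_to_ground(u, v, K, cam_ne, agl, yaw, pitch, roll):
    # ray through pixel (u, v) in camera coordinates (pinhole model)
    fx, fy, cx, cy = K
    ray_cam = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
    # rotate the ray into the local NED frame using the camera pose
    ray_ned = rotation_body_to_ned(yaw, pitch, roll).dot(CAM_TO_BODY).dot(ray_cam)
    if ray_ned[2] <= 0:
        return None                  # ray never reaches the ground
    scale = agl / ray_ned[2]         # scale factor out to the ground plane
    # (north, east) ground location relative to the map origin
    return (cam_ne[0] + scale * ray_ned[0],
            cam_ne[1] + scale * ray_ned[1])

A heading error rotates every projected point about the spot directly below the camera, so even a couple degrees of error can smear pixels near the edge of the image by many meters on the ground.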

The following image shows the OpenCV “ORB” feature detector in action along with the feature matching between two images.

feature-pairs

Compare the following fit to the first ICE fit above.  You can see a myriad of tiny discrepancies.  I’ve made no attempt to feather the edges of the seams, and in fact I’m drawing every image in the data set using partial translucency.  But this fit does a pretty good job of preserving the overall geographically correct scale, position, and orientation of all the individual images.

fit1

Here is a second data set taken of the same area.  This corresponds to the second ICE picture above.  Hopefully you can see that straight line edges, orientations, and scaling are better preserved.

fit2

Perhaps you might also notice that because my own software tool set understands the camera location when the image is taken, the projection of the image into map space is more accurately warped (none of the images have straight edge lines.)

Do you have any thoughts, ideas, or suggestions?

This is my first adventure with image stitching and feature matching.  I am not pretending to be an expert here.  The purpose of this post is to hopefully get some feedback from people who have been down this road before and perhaps found a better or different path through.  I’m sure I’ve missed some good tools, and good ideas that would improve the final result.  I would love to hear your comments and suggestions and experiences.  Can I make one of these data sets available to experiment with?

To be continued….

Expect updates, edits, and additions to this posting as I continue to chew on this subject matter.

BackupPC, LVM, Cloning and Resizing Drives

I recently went through the process of trying to move my entire 1.5 TB BackupPC tree to a new drive.  Here are some thoughts and comments from that experience.

  • I spent 40 days (literally) attempting to get various combinations of rsync, tar, cp, etc. to clone the contents of the drives to a new larger drive.  However, the bazillions of small files hard-linked into a pool of randomly named actual files made this practically impossible to do in a finite amount of time.
  • In the end I used the unix utility ‘dd’ for the fastest possible copying.
  • In order to clone to a larger drive I first ran resize2fs to make the target file system match the size of the source file system.  Then I could do a direct dd copy of the source disk, and finally I resized the file system up to consume the full physical space after the clone was complete.
  • Make sure to add “conv=noerror,sync” to your “dd” options to avoid your transfer dying if it hits a bad-block on the source disk (perhaps a dying drive is among the reasons you are transferring to a new drive?)

Reasons to clone your backuppc drive (or any drive)

  • Your data is beginning to outgrow your available space.
  • The drive is starting to fail (showing smart sector errors, read errors, etc.)
  • Backup/redundancy
  • Prepare a copy of your data for offsite storage (extra safety for your important data, like your life long collection of digital photos …)

Cloning a BackupPC storage tree (or any other file system structure that is too big/complex for rsync/tar/cp to handle efficiently)

  1. Physically attach the destination drive and create physical and logical volumes. “system-config-lvm” is a gui tool that can help do this, otherwise there are the myriad “pv…” and “lv…” command line tools if you wish to go that route.
  2. Make (or resize) the destination logical volume so its size matches as closely as possible the size of the source volume. I wasn’t able to get it exact, but I forged ahead anyway and it appears that e2fsck and resize2fs were able to get it all sorted out properly after the fact.  Perhaps making your target volume just slightly larger would be safer than making it slightly smaller.
  3. Make sure the dest volume is not mounted! If you have the option, also unmount the source volume. This isn’t absolutely required, but will avoid the risk of copying a drive in an inconsistent state which could lead to some loss of files or data that are being written at the time of the dd.
  4. Run “dd if=/dev/mapper/source_lv of=/dev/mapper/dest_lv bs=100M conv=noerror,sync”  I can’t say what the optimal block size (bs=xxx) value is. Make it too small and you waste time making endless trips back and forth from one drive to the other. Make it too big and you might get into swap. The specific value that runs fastest on your hardware may be non-intuitive.
  5. “dd” has no output unless there is an error, and on a 1 TB drive or larger this can literally run for many hours. You can find the pid of the dd process and run “kill -USR1 <pid>” and that will signal dd to dump out a status message of how much it has copied so far. With a few brain cells you can figure out how many total blocks there are to copy (at your specified block size) and get a rough estimate of when the copy will finish (see the short back-of-the-envelope sketch after this list).
  6. After the “dd” command completes, run “e2fsck -f -y /dev/mapper/dest_lv”. If you were dd’ing a live drive, or if the source drive had some bad/unreadable blocks, or you couldn’t make a destination volume of the exact size of the original, or … This will (or should) bring the destination volume into full consistency with itself.  The end result is pretty much the best possible copy you can get from your source drive.
  7. Now the beautiful part: either by system-config-lvm or with a cli tool like “lvextend” you can now resize your logical volume to fill up the entire available physical space. system-config-lvm will run e2fsck (again) and resize2fs in the background so it may take some time.
  8. The gui tools make things a bit ‘easier’ but they are silent in their output, so you don’t know what’s going on or how long your operation may take (seconds? hours? days?) The command line tools output useful information and can run in ‘verbose’ mode, so it may be worth it to pull up the man pages on them and run them directly depending on your level of interest and time available.
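As a back-of-the-envelope example of the progress estimate mentioned in step 5 (the numbers here are made up):

# 1.5 TB to copy with bs=100M
total_bytes = 1.5e12
block_size = 100 * 1024 * 1024
total_blocks = total_bytes / block_size          # roughly 14,300 blocks

# suppose "kill -USR1" shows 300e9 bytes copied after 3600 seconds
copied_bytes = 300e9
elapsed_sec = 3600.0
rate = copied_bytes / elapsed_sec                # bytes per second
eta_sec = (total_bytes - copied_bytes) / rate
print('estimated time remaining: %.1f hours' % (eta_sec / 3600.0))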

BackupPC specifics

  • I mount /dev/mapper/logical_volume someplace like /backuppc03 and then I make /var/lib/BackupPC a symbolic link to “/backuppc03/BackupPC”.  So update this link to point to the dest drive, restart BackupPC and you should be fully up and running on the dest drive … now hopefully with more space and a new error free drive.
  • Hopefully if there were some drive read errors, they are few and come at non-critical locations … hopefully only corrupting some random unimportant file in some random unimportant backup that you will never need to restore.
  • If the drive is too shot and your previous backups too corrupted after the copy process, you may be better off just starting from scratch with a brand new backuppc installation and begin accumulating your backup history from this point forward.
  • One more tip. Gnome (and probably other desktops) have software that can show you your hard drive’s ‘smart’ status. In gnome this tool is called “Disks”. If the drive isn’t showing any smart status, you may wish to double check your bios settings (it’s possible to turn off smart in bios.) It’s good to look at your drive status once in a while to make sure it’s not starting to accumulate bad sectors.
  • The drive I just cloned and replaced was up to 283 bad sectors with about 6 bad block unrecoverable read errors. It was running at about 45C which is pretty hot.

Setting up “autologin” on a gumstix running the 3.5 kernel (yocto, poky, et al.)

The gumstix.org wiki has a page on how to configure your gumstix to auto-login on boot.  This can be very nice for “production” systems where the intention is to power on and run some specific firmware/code every time.

However, with the new Yocto/Poky images based on the 3.5 kernel, things have changed and the old instructions no longer work.  Here is a quick recipe to get autologin running again on the newer systems.  First of all credit to http://fedoraproject.org/wiki/Systemd for their section on setting up autologin on a virtual terminal with the new systemd architecture.

Step #1:

Compile this “autologin.c” program and install it to /sbin/autologin (make sure you have executable permissions, etc. etc.)

#include <unistd.h>
#include <stdio.h>
#include <string.h>
 
int main()
{
       int nrv = 0;
       FILE* fptr = 0;
       char user[64];
 
       // clear buffer
       memset(user,'\0',64);
 
       // open autologin profile file
       fptr = fopen("/etc/autologin.profile\0","r\0");
 
       // make sure the file exists and was opened
       if (fptr != 0)
       {
               // the return value from fscanf will be 1 if the autologin profile name is read correctly
               nrv = fscanf(fptr,"%s\0",user);
       }
 
       // only autologin if the profile name was read successfully,
       // otherwise show the regular login prompt
       if (nrv > 0)
               nrv = execlp("login", "login", "-f", user, (char *)NULL);
       else
               nrv = execlp("login", "login", (char *)NULL);
 
       return 0;
}

Step #2

Create the /etc/autologin.profile file by running:

echo "root" > /etc/autologin.profile

The autologin program looks for this file to determine which user id should be autologged in.

Step #3

Setup the systemd configuration.

cp /lib/systemd/system/serial-getty@.service /etc/systemd/system/autologin@.service
ln -sf /etc/systemd/system/autologin@.service /etc/systemd/system/getty.target.wants/serial-getty@ttyO2.service
cd /etc/systemd/system/getty.target.wants/
vi serial-getty@ttyO2.service

Next, change the line that reads:

ExecStart=-/sbin/agetty -s %I 115200

to read:

ExecStart=-/sbin/agetty -n -l /sbin/autologin -s %I 115200

Troubleshooting

  • If you still get a login prompt, make sure you created the /etc/autologin.profile file correctly, the autologin program needs that or it will just execute a standard login prompt.

Step #4

You may find yourself in a situation where you may not always want your code executed automatically.  You may want an option to “break in” and get a prompt.  There are many ways you could do this, but here’s one simple way:

Compile the “pressanykey.c” code and install the executable in /sbin/pressanykey

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/select.h>
#include <sys/types.h>
#include <termios.h>
#include <unistd.h>
 
int kbhit(void)
{
    fd_set rfds;
    struct timeval tv;
    int retval;
    struct termios term, oterm;
    int fd = 0;
    tcgetattr( fd, &oterm );
    memcpy( &term, &oterm, sizeof(term) );
    cfmakeraw(&term);
    tcsetattr( fd, TCSANOW, &term );
    /* Watch stdin (fd 0) to see when it has input. */
    FD_ZERO(&rfds);
    FD_SET(0, &rfds);
    /* Wait up to one seconds. */
    tv.tv_sec = 1;
    tv.tv_usec = 0;
    retval = select(1, &rfds, NULL, NULL, &tv);
    /* Don't rely on the value of tv now! */
    tcsetattr( fd, TCSANOW, &oterm );
    return(retval);
}
 
int mygetch( ) {
    struct termios oldt, newt;
    int ch;
    tcgetattr( STDIN_FILENO, &oldt );
    newt = oldt;
    newt.c_lflag &= ~( ICANON | ECHO );
    tcsetattr( STDIN_FILENO, TCSANOW, &newt );
    ch = getchar();
    tcsetattr( STDIN_FILENO, TCSANOW, &oldt );
    return ch;
}
 
int main(int argc, char **argv) {
    int count = 5;
 
    if ( argc > 1 ) {
	int tmp = atoi(argv[1]);
	if ( tmp > 0 ) {
	    count = tmp;
	}
    }
 
    printf("Press any key ... ");
    fflush(stdout);
 
    while ( count >= 0 ) {
	printf("%d ", count);
	fflush(stdout);
 
	//int result = mygetch();
	if ( kbhit() ) {
	    return 1;
	}
	//sleep(1);
	count--;
    }
 
    return 0;
}

Next create a ~/.profile file that includes the following:

  /sbin/pressanykey 5
  if [ $? != 0 ]; then
    echo "Starting interactive shell"
  else
    echo "Continuing with default"
    /path/to/my/code
  fi

Now, along with autologging in as root (or whichever user you specified) you will then be presented with a count down timer similar to the “u-boot” timer where you can press any key to get a shell prompt, or continue to your firmware code if no interaction is required.

Building Openembedded for Overo on Fedora 14

Building Openembedded from scratch

The primary build instructions for building openembedded for the Overo processor can be found on the gumstix.org web site here:

http://www.gumstix.org/software-development/open-embedded/61-using-the-open-embedded-build-system.html

Fedora 14 Specific Fixes

Follow the instructions at the above link.  However, there are several places where the standard openembedded build breaks.  Here are the Fedora 14 specific problems I encountered, with fixes and workarounds.  This is a moving target so if you run into new issues, feel free to let me know and I’ll update this page.  In all these cases I found solutions by googling, so if you have encountered something not mentioned here, google is your friend!

module-init-tools

Error in module-init-tools:

/usr/bin/ld: cannot find -lc

Solution:

yum install glibc-static (on the host system)

patch: **** rejecting target file name with ".." component

Error in patch (occurs in many places):

patch: **** rejecting target file name with ".." component: ../generic/tclStrToD.c

Solution:

As of 17th March 2011, if you have patch-2.6.1-8.fc14 installed you may need to downgrade to an older version if you are getting patch errors during your build. To downgrade:

# yum downgrade patch

Note: pending a better solution, this will get you by … (as of April 12)

docbook build error …

Problem:

docbook.org changed the link to their source file

Solution:

$ cp ${base}/org.openembedded.dev/recipes/docbook-sgml-dtd/docbook-sgml-dtd-native.inc{,.orig}

$ edit ${base}/org.openembedded.dev/recipes/docbook-sgml-dtd/docbook-sgml-dtd-native.inc

Add the following two lines:

SRC_URI = "http://www.docbook.org/sgml/${DTD_VERSION}/docbook-${DTD_VERSION}.zip"

S = "${WORKDIR}"

And delete the following line.

SRC_URI = "http://www.docbook.org/sgml/${DTD_VERSION}/docbook-${DTD_VERSION}.zip;subdir=${BPN}-${PV}"

Finish building the main images with

$ bitbake omap3-console-image

Building MLO-overo

After the omap3-console-image is finished there is still one missing piece: “MLO-overo”.  To build this, run:

$ bitbake x-load

All the generated images will be found in:

${overo_root}/tmp/deploy/glibc/images/overo

Uploading your new images to the Overo

  1. Follow the instructions in the Create a bootable microSD card article.
  2. Follow the instructions here to copy the images to your SD card and then to the Overo flash:  http://www.gumstix.org/how-to/70-writing-images-to-flash.html

Stay tuned … 🙂

 

Commercial Systems Versus Research

Comments on the use of commercial hardware or software in a university research context.

I’m so glad you asked! 🙂

Research, according to the dictionary, implies an investigation or collection of information.  Research in science and engineering often also involves building, designing, or trying something new that has never been considered before.  If you purchase a closed commercial system as the platform upon which to do your research, there are many risks:

  • Risk that the system won’t do all that you need it to do.
  • Risk that you won’t be able to get the level of internal access to the code that you need, and that even if you do get some access, making your needed changes could be difficult to impossible.
  • Risk that if you do get access to the code, you will find it has been outsourced and may exactly meet the manufacturer’s requirement specs, but not be close to the level of quality you were hoping for.
  • Risk that the vendor will say "oh, no one has ever wanted to do that before" or "no one has ever wanted that many of XYZ before."
  • Risk that the vendor will go out of business, discontinue the product, or radically alter the pricing structure.

There are risks and costs associated with developing code or hardware in house as well. There are risks using open-source code developed by others who don’t necessarily share your own priorities. You have to balance all the factors together and make a decision that considers everything together.

But my advice for a research program is to pay close attention to the “openness” and flexibility of the system that you are considering to support your research. The goals and priorities of commercial systems can be at odds with the needs of a research program.