Orthomosaic Mapping

Introduction – Making some sense out of the mess of words

The title is a bit of a misnomer, since orthomosaic mapping usually refers to images gathered perpendicular to the observed surface, as in aerial photography.  I decided to keep the title because the method for rendering 3D objects from a group of two-dimensional images is the same.  Additionally, the rendering technique has roots in mapping relief.  Although the title may be somewhat inaccurate, the intent of mapping objects allows for some liberty with the verbiage.

Purpose – Getting more details from the data

I have posted before about 3D modeling.  I wanted to revisit the topic because the earlier process was cookie-cutter and highly dependent on cloud providers.  Autodesk’s 123D Catch and Microsoft’s Photosynth produce impressive results for casual users: upload a series of images and the visual experience is quite revealing.  However, there isn’t any granular control over the process, and a one-size-fits-all service imposes limits.  That is why I decided to go into detail using a standalone process.

In this post, I’ll cover a step-by-step method to create texture maps from images using VisualSFM and MeshLab.  Much of the work done here is based on the examples given at flightriot.com.  I’ll also include some background on the software components as well as practical uses.

Details – From a group of images to a 3D object

The first thing to do is get the programs that will be used to process and display the renderings.  VisualSFM is the primary program that processes the two-dimensional images and is available for download from its project page.  The next program is MeshLab.  It is optional, but it allows renderings to be exported and offers some visual advantages over VisualSFM.  There are several online tutorials on the installation steps, so I won’t cover them here.

There are requirements when gathering images of a subject; the camera path should follow one of three patterns.  In the first, the camera moves in a straight line past the subject, pointed perpendicular to the direction of motion.  In the second, the camera follows an arc, capturing the subject from various angles.  In the third, the camera circles the subject and captures images completely around it.  The sketch below shows how evenly spaced viewpoints along an arc might be planned.
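
To make the arc and circle patterns concrete, here is a minimal Python sketch that computes evenly spaced camera positions along an arc around a subject.  The center, radius, and angles are hypothetical values chosen for illustration; nothing in VisualSFM requires them, the point is only the even spacing.

import math

def arc_viewpoints(center_x, center_y, radius, start_deg, end_deg, count):
    """Return evenly spaced (x, y) camera positions on an arc
    around a subject at (center_x, center_y).  count must be >= 2."""
    positions = []
    for i in range(count):
        # Spread the shots evenly between the start and end angles.
        t = start_deg + (end_deg - start_deg) * i / (count - 1)
        a = math.radians(t)
        positions.append((center_x + radius * math.cos(a),
                          center_y + radius * math.sin(a)))
    return positions

# Ten shots along a 120-degree arc, 50 m from the subject.
for x, y in arc_viewpoints(0.0, 0.0, 50.0, 30.0, 150.0, 10):
    print(f"{x:7.1f}, {y:7.1f}")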

If the images do not meet these requirements, the rendering process is likely to fail.  Also, the images should be in JPEG format; if yours are not, the sketch below shows one way to convert them.
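
Here is a rough conversion pass using the Pillow library.  The directory names are placeholders, not anything the tools require.

from pathlib import Path
from PIL import Image

def convert_to_jpeg(src_dir, dst_dir):
    """Convert every image in src_dir to JPEG so VisualSFM can load it."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(src_dir).iterdir()):
        if path.suffix.lower() in (".jpg", ".jpeg", ".png", ".tif", ".tiff"):
            # JPEG has no alpha channel, so flatten to RGB first.
            img = Image.open(path).convert("RGB")
            img.save(dst / (path.stem + ".jpg"), "JPEG", quality=95)

convert_to_jpeg("raw_photos", "jpeg_photos")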

In this example, I took 10 pictures of Haystack Rock at Cannon Beach, OR.  I kept Haystack Rock as the focal point and walked an arc around it.  I attempted to take the pictures at evenly spaced points, but this was mostly guesswork, and some spots were obstructed by beachgoers.

The time of day and weather are also factors.  The sky should be clear of clouds and the subject free of glare from the sun.  You want the images to be as free of artifacts as possible.
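
One way to screen out problem shots before processing is a quick sharpness check.  The sketch below uses OpenCV’s variance-of-Laplacian measure, a common blur heuristic; the threshold is an arbitrary guess that you would tune for your own camera and scene.

from pathlib import Path
import cv2

BLUR_THRESHOLD = 100.0  # rough cutoff; tune for your camera and scene

for path in sorted(Path("jpeg_photos").glob("*.jpg")):
    gray = cv2.imread(str(path), cv2.IMREAD_GRAYSCALE)
    # Variance of the Laplacian is a common sharpness proxy: blurry
    # images have little high-frequency detail, so the variance is low.
    score = cv2.Laplacian(gray, cv2.CV_64F).var()
    flag = "OK   " if score >= BLUR_THRESHOLD else "BLUR?"
    print(f"{flag} {path.name}: {score:.1f}")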

Using VisualSFM, I opened all of the image files for the subject.  Once they loaded, I ran the Compute Missing Matches command, then the Compute 3D Reconstruction command, and finally the CMVS Dense Reconstruction.
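
For repeat runs, VisualSFM also offers a command-line mode that chains the same steps.  Here is a sketch that drives it from Python; the binary name, flags, and paths can vary by build, so treat this as a starting point and check your install’s documentation.

import subprocess

# "sfm" runs matching plus sparse reconstruction, and "+pmvs" appends
# the CMVS/PMVS dense step.  The paths are placeholders, and the
# VisualSFM binary must be on your PATH.
subprocess.run(
    ["VisualSFM", "sfm+pmvs", "jpeg_photos/", "haystack.nvm"],
    check=True,
)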

Now the dense point cloud can be viewed inside VisualSFM.  The data files it generates can also be used to open the rendering inside MeshLab.

From MeshLab, open the generated file with the .ply extension.  MeshLab has more features than I’ll cover here, so consult the MeshLab documentation if you need more.
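
MeshLab is the natural viewer here, but if you want to sanity-check the .ply from a script, the Open3D library can load and display it.  A minimal sketch, assuming the dense cloud was saved as dense.ply (your actual output path will differ):

import open3d as o3d

# Load the dense cloud that the CMVS/PMVS step wrote out.
cloud = o3d.io.read_point_cloud("dense.ply")
print(cloud)  # reports the number of points loaded
o3d.visualization.draw_geometries([cloud])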

Relations – Crowd-sourced images

Using photos that are available on the web seems like a well-suited resource for 3D rendering.  However, it presents some challenges.  Successful renders use image sequences that follow a linear, circular, or semicircular pattern of motion.  They also fare better when the light source is diffused and originates from a uniform point.  In addition, objects that are transparent or highly reflective render poorly.  All of these ideal conditions are absent when using crowd-sourced images, which means the steps used to render Haystack Rock would need to be changed to accommodate the diverse image sources.  VisualSFM has many more features available; this post is just a simple introduction to it.

Summary – Back through the 2 mile tunnel

One of the main reasons I researched this topic was my interest in trail rides.  There is a bike trail here in the Seattle area that runs from North Bend through Snoqualmie Pass, and on that route there is a tunnel that is 2 miles long.  It’s quite an experience to travel through, but it has been closed on occasion due to falling debris.  Because the surface changes over time, mapping it with a visual sensor seemed like the easiest and cheapest approach.

In this post I covered 3D rendering using VisualSFM and MeshLab.  The process shown here is basic and grossly simplified; it is meant as an introduction to these methods.  I also covered some basic image requirements that make the render possible.  It’s my hope that this will help get you started on texture mapping and 3D rendering.
