A couple of years ago I wrote about image enhancements using ImageMagick, though I didn’t cover them in much detail. One of those methods was normalization, which analyzes the histogram of an image and adjusts the black and white levels. The result is a clearer picture with less shadow washout and crisper colors, but the process has its limitations.
In this post I’ll cover some additional methods for image enhancement. I’ll continue to work with ImageMagick and also introduce GIMP. We’ll use the averaging method, which takes multiple images of the same subject and averages their pixels into a final image. We’ll also cover median filtering, which does the same thing but uses the middle pixel value instead. Next I’ll use both of these results as sources for further refinement. Finally, I’ll cover the Retinex process available in ImageMagick as well as GIMP.
I briefly covered normalization in a previous post, https://cloudacm.com/?p=1694, where I used this ImageMagick command.
convert -normalize InputImage.jpg OutputImage.jpg
It’s simple, but what is it really doing? Normalization acts like an automatic contrast and brightness control for an image, driven by the image histogram. The histogram measures the light intensity of an image for each of the base color channels: red, green, and blue. If the histogram occupies only a narrow band, normalization stretches that band to the fullest extent of the range. If the histogram already covers a wide band, however, little is done to it and no noticeable change will occur.
You can see from these two images that little has changed. If we look at the histogram of the InputImage.jpg file, we can see why. The color levels cover a wide range, so we end up with a similar looking image when normalized.
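To make the stretch concrete, here is a minimal sketch in Python of the min-max stretch that normalization performs on a single channel. The pixel values are hypothetical, and this is a simplification of what ImageMagick actually does (which also clips a small percentage of outliers):

```python
def stretch_channel(pixels, lo=0, hi=255):
    """Linearly remap values so the darkest pixel becomes lo and the brightest hi."""
    p_min, p_max = min(pixels), max(pixels)
    if p_max == p_min:          # flat channel: nothing to stretch
        return list(pixels)
    scale = (hi - lo) / (p_max - p_min)
    return [round(lo + (p - p_min) * scale) for p in pixels]

narrow = [100, 110, 120, 130, 140]   # narrow band: expands to the full 0-255 range
wide = [5, 60, 128, 200, 250]        # wide band: changes very little
print(stretch_channel(narrow))       # [0, 64, 128, 191, 255]
print(stretch_channel(wide))
```

Run against both lists, the narrow band is spread across the whole range while the wide band barely moves, which is exactly why the images above look so similar.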
Averaging and Median Processing
Patrick David’s work is a great source for understanding averaging and median processing. I recommend that you check out his work here, http://blog.patdavid.net/
This technique uses multiple images of the same subject, and there are some key things to consider. First, the process reads the pixel values from each image file and either averages them or determines the median value. Second, the pixel alignment between images has to be spot on, otherwise the results will be unexpected. Third, if the images do not align, as was the case with my source files, some work will be needed to align them.
I shot 4 images using my iPhone. Even with a steady hand, I managed to move the camera between shots, both up and down and left and right, and I also tilted it. When I opened all the images in GIMP and layered them, I could easily see how poorly aligned they were.
I won’t cover in detail how I corrected the images; I might do that in another post. I will say it was a combination of moving and tilting layers in GIMP. I doubt my efforts were exact, but they were close enough. My suggestion is to use a tripod and a shutter control that won’t influence the shot.
Now for the fun stuff. Averaging the 4 source images and outputting an average of them is done using this command.
convert IMG_2493.jpg IMG_2494.jpg IMG_2495.jpg IMG_2496.jpg -evaluate-sequence mean OutputFile_Average.jpg
The result looks like a layering of transparent images: it has a soft appearance and a ghosting effect for objects in motion. I’d like to point out that none of these pixel values came straight from the camera; they are purely mathematical results. In that sense the output is a computer-generated image.
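The per-pixel math behind -evaluate-sequence mean is straightforward. This is a small Python sketch of the idea, not ImageMagick’s implementation, using hypothetical 2x2 single-channel frames:

```python
def average_images(frames):
    """Average corresponding pixels across aligned frames of the same size."""
    n = len(frames)
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[round(sum(frame[r][c] for frame in frames) / n)
             for c in range(cols)]
            for r in range(rows)]

# Four hypothetical 2x2 grayscale frames; random noise cancels toward the true value.
frames = [
    [[100, 200], [50, 0]],
    [[104, 196], [54, 4]],
    [[ 96, 204], [46, 0]],
    [[100, 200], [50, 0]],
]
print(average_images(frames))   # [[100, 200], [50, 1]]
```

Because every output pixel is a blend of all four frames, anything that moved between shots contributes partially to several positions, which is where the ghosting comes from.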
Now we’ll take the same 4 source images and output the median pixel values. This is done with this command.
convert IMG_2493.jpg IMG_2494.jpg IMG_2495.jpg IMG_2496.jpg -evaluate-sequence median OutputFile_Median.jpg
The results are kind of funny. No matter how much I adjust the layers, anything in motion influences the result. The clouds, the ocean, and my son were all moving, so they appear distorted. Also, the center point of my tilt has the most clarity, because the tilt correction I did earlier in GIMP was limited. In contrast to averaging, this output is not computer generated; it is merely a pick-and-choose of original pixels from the source images.
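The pick-and-choose behavior falls out of the median directly. A Python sketch of the same idea (again hypothetical frames, not the ImageMagick code) shows how a transient object in one frame simply gets voted out:

```python
import statistics

def median_images(frames):
    """Pick the per-pixel median across aligned frames; outliers are discarded."""
    rows, cols = len(frames[0]), len(frames[0][0])
    return [[round(statistics.median(frame[r][c] for frame in frames))
             for c in range(cols)]
            for r in range(rows)]

# A hypothetical transient brightens one pixel in one frame; the median ignores it.
frames = [
    [[100, 200]],
    [[102, 255]],   # passing object in the second pixel of this frame only
    [[ 98, 198]],
]
print(median_images(frames))   # [[100, 200]]
```

With an odd number of frames the median is always one of the original pixel values, which is why the output is a selection rather than a synthesis; with an even count (like my 4 shots) the two middle values are averaged, so some blending sneaks back in.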
If you want more, check out this post from James Britt about image averaging with ImageMagick, http://jamesbritt.com/posts/imagemagick-image-averaging.html
I’ve used the unsharp command in ImageMagick to bring out some details in the edges.
convert -unsharp 1.2x1.2+5+0 OutputFile_Median.jpg OutputFile_Unsharp.jpg
You can get similar results running the sharpen command. I don’t have all the details about these commands, so reference the online manual and experiment, http://www.imagemagick.org/Usage/blur/#sharpen. I found this command useful and clearer than the previous unsharp command.
convert -sharpen 0x8.0 OutputFile_Median.jpg OutputFile_Sharpen8.jpg
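Both commands are variants of unsharp masking: blur a copy of the image, subtract it from the original, and add the scaled difference back so edges overshoot slightly. A 1-D Python sketch of that principle (my own simplification with a box blur; ImageMagick uses Gaussian blurs and more parameters):

```python
def box_blur(signal, radius=1):
    """Simple box blur, clamping the window at the edges."""
    n = len(signal)
    out = []
    for i in range(n):
        window = signal[max(0, i - radius):min(n, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out

def unsharp(signal, amount=1.0, radius=1):
    """sharpened = original + amount * (original - blurred), clamped to 0-255."""
    blurred = box_blur(signal, radius)
    return [max(0, min(255, round(s + amount * (s - b))))
            for s, b in zip(signal, blurred)]

edge = [50, 50, 50, 200, 200, 200]   # a soft edge between two flat regions
print(unsharp(edge))                 # [50, 50, 0, 250, 200, 200]
```

The dark side of the edge dips and the bright side overshoots, which is the halo effect that makes edges read as crisper.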
Now I’ll cover a technique referred to as Retinex. This process was developed for NASA in the early 1990s for processing remote-sensing image data. The principal participants were Zia-ur Rahman, Daniel J. Jobson, and Glenn A. Woodell, in a joint effort between the College of William and Mary and the NASA Langley Research Center in Virginia. In 2004 they published a paper on the subject titled “A Comparison of the Multiscale Retinex With Other Image Enhancement Techniques”. It is available here, https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/20040110657.pdf
The abstract from that paper contains this first sentence, “The multiscale retinex with color restoration (MSRCR) has shown itself to be a very versatile automatic image enhancement algorithm that simultaneously provides dynamic range compression, color constancy, and color rendition.” It’s not hard to imagine that present-day cameras that employ HDR perform a retinex-like function.
The notion that image quality enhancement is fictional, as pointed out in a DEFCON lecture, stands in contrast to what the creators of retinex processing were able to achieve. NASA released an article in the summer of 2003 about Langley’s help with law enforcement, http://ipp.nasa.gov/innovation/innovation112/4-advtech2.html.
The concepts of retinex pre-date modern computers. The term was coined by Edwin H. Land (https://en.wikipedia.org/wiki/Edwin_H._Land) in 1958 (http://neuronresearch.net/vision/files/retinex.htm).
Another pioneer in the field was John McCann. In this video, McCann gives some background on color constancy and the theory behind retinex, http://spie.org/newsroom/mccann-video
I seem to have given more background on the subject than I intended, but those who developed the technology deserve recognition for their efforts.
The retinex function has been ported over to ImageMagick and GIMP. I ran this command once I downloaded Fred Weinhaus’s script, http://www.fmwconcepts.com/imagemagick/retinex/
sh retinex -m RGB -f 50 "/home/local/Desktop/ImageEnhancements/Retinex/Source.jpg" "/home/local/Desktop/ImageEnhancements/Retinex/Retinex_Output.jpg"
It’s a number cruncher and will take time to complete, but the results should be considerable, especially for images that are dark or obscured by low contrast. GIMP provides a similar tool under the Colors menu, conveniently named Retinex. I like it because it gives a preview of the results without the wait for a full run. The downside is its speed, especially with larger image files.
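At its core, single-scale retinex compares each pixel to a blurred “surround” estimate of the local illumination, in log space: R = log(I) - log(blur(I)). Here is a 1-D Python sketch of that idea only; the actual scripts and GIMP filter use Gaussian surrounds at multiple scales plus a color-restoration step:

```python
import math

def surround_blur(signal, radius=2):
    """Crude surround estimate: a box blur standing in for the Gaussian surround."""
    n = len(signal)
    out = []
    for i in range(n):
        window = signal[max(0, i - radius):min(n, i + radius + 1)]
        out.append(sum(window) / len(window))
    return out

def single_scale_retinex(signal, radius=2):
    """R(x) = log I(x) - log surround(x): reflectance with illumination factored out."""
    surround = surround_blur(signal, radius)
    return [math.log(s + 1) - math.log(b + 1) for s, b in zip(signal, surround)]

# A hypothetical dark region next to a bright one: the output depends on local
# contrast rather than absolute brightness, so detail in the shadows survives.
scene = [10, 12, 10, 11, 200, 210, 205, 208]
print([round(r, 2) for r in single_scale_retinex(scene)])
```

Because the log difference is a ratio in disguise, a small wiggle in a dark area scores as highly as a large wiggle in a bright one, which is why retinex pulls detail out of shadows so effectively.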
There is a lot of credit to give to all the folks who have made the magic of image enhancement possible. It is nearly impossible to name everyone, but I would like to point out those who deserve mention.
Pat David for the easy examples of GIMP and ImageMagick filtering.
Fred Weinhaus for the to-the-point scripts that take ImageMagick to a new level.
The team at NASA’s Langley Research Center for making what seems impossible possible.
The founders of Retinex, Edwin Land and John McCann, for pointing out what we take for granted and furthering our understanding of ourselves and the world around us.