FFmpeg Night Enhancement Processing

In this post I’ll cover how to process footage that was shot at night. Straight out of the camera it is difficult to see what was captured, but the file’s metadata still tells me plenty about it. The clip I’ll be using is 5 minutes long at 1280×720, 30fps, encoded with the H.264/AVC codec. The audio is a single channel (mono) of 16-bit PCM at 32000Hz. I like that the audio is a lossless raw format; that will be useful in later posts when doing audio signal processing.
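All of those details can be read straight from the file with ffprobe, which ships alongside FFmpeg. This is a quick sketch; the filename is the clip from this post, so adjust it for your own footage.

```shell
# Print duration, video codec, resolution, and frame rate for the clip.
# ffprobe comes with FFmpeg; the "|| true" keeps a missing file from
# aborting a larger script.
clip="MINI0492.MOV"
ffprobe -v error \
    -show_entries format=duration:stream=codec_name,width,height,r_frame_rate \
    -of default=noprint_wrappers=1 "$clip" 2>/dev/null || true
```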

Before I start applying filters, I want to select a portion of the footage to use as a filter test.  I’ll take a 5 second portion of the video and split it out to another file, then test my filters on that file instead of wasting time processing all of the video.  To split the video, I’ll choose the point where I want the split to start and set a duration of 5 seconds.  Here is the command I’ll use.

ffmpeg -i MINI0492.MOV -ss 00:03:10 -t 00:00:05 -acodec copy -vcodec copy -async 1 -y MINI0492_Split1.MOV

Now I have a nice short clip with most of the visual attributes I want to test filters against.  Because the command copies both streams rather than re-encoding, the clip retains the format of the source video, so I shouldn’t see any artifacts introduced by FFmpeg.

The first filter I’d like to use is one that brightens the video.  I’ll use the hue filter to do this.  Here is the command I’ll use to test the filter as hue=b=x, where x ranges from -10 to 10.

ffmpeg -i MINI0492_Split1.MOV -vf hue=b=2.11 MINI0492_Brightness.MOV

This command increases the output video brightness by 21.1 percent of the available range, so there is a lot of variance to work with.  There is one problem with the results: the dark areas lack depth.  Instead, I’m going to use the eq filter on my 5 second clip to correct the gamma level; here is the command.

ffmpeg -i MINI0492_Split1.MOV -vf "eq=gamma=1.8" MINI0492_Gamma.MOV
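The 1.8 value came from eyeballing a few renders. A short loop makes that kind of comparison painless; this is just a sketch, and the candidate values and output names here are my own, not from the original post.

```shell
# Render the test clip at several gamma levels so the results can be
# compared side by side on disk. Errors (e.g. a missing input file) are
# swallowed so the loop always completes.
for g in 1.4 1.6 1.8 2.0; do
    out="MINI0492_Gamma_${g}.MOV"
    ffmpeg -y -i MINI0492_Split1.MOV -vf "eq=gamma=${g}" "$out" 2>/dev/null || true
done
```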

The gamma correction results were better than the brightness filter’s: the clip has better dynamic range.  Now I can also see how shaky the video is.  The next filter will stabilize the video using two commands.  The first command creates what I call a motion profile.  The second command applies the motion profile to the source video to create a stabilized clip.  Here are the commands.

ffmpeg -i MINI0492_Gamma.MOV -vf vidstabdetect=shakiness=10:accuracy=15 -f null -
ffmpeg -i MINI0492_Gamma.MOV -vf vidstabtransform=smoothing=40:input="transforms.trf" MINI0492_Stable.MOV
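One detail worth knowing: vidstabdetect writes its motion profile to transforms.trf in the working directory by default, which is why the second command can find it. Naming the file per clip keeps multiple profiles from clobbering each other; the explicit filename below is my own addition, not part of the original workflow.

```shell
# Write the motion profile to a per-clip file (result=), then feed that
# same file to the transform pass (input=).
trf="MINI0492_Gamma.trf"
ffmpeg -i MINI0492_Gamma.MOV \
    -vf "vidstabdetect=shakiness=10:accuracy=15:result=${trf}" \
    -f null - 2>/dev/null || true
ffmpeg -y -i MINI0492_Gamma.MOV \
    -vf "vidstabtransform=smoothing=40:input=${trf}" \
    MINI0492_Stable.MOV 2>/dev/null || true
```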

The profiling filter’s options are already at their highest settings, shakiness=10 and accuracy=15.  The stabilization filter’s smoothing option has a range of 0 to 1000.  The higher you go, the smoother the result, but you sacrifice FOV (field of view).  I found 50 good enough.  Now I want to sharpen the video, since much of it appears to be out of focus.  Here is the command I ran to enhance the edges of the clip.

ffmpeg -i MINI0492_Stable.MOV -vf unsharp=luma_msize_x=9:luma_msize_y=9:luma_amount=3 MINI0492_Sharp.MOV

This gave me good results after experimenting with values across the following parameter ranges.

luma_msize_x range 3 to 23
luma_msize_y range 3 to 23
luma_amount range -2 to 5
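Those ranges can be swept the same way as the gamma values. The grid below is a coarse sketch of my own, not the values from the post; note that the matrix sizes must be odd numbers.

```shell
# Render the stabilized clip at a few unsharp settings for comparison.
# Matrix sizes (luma_msize_x/y) must be odd; amounts are a coarse grid.
for msize in 5 9 13; do
    for amount in 1 2 3; do
        out="MINI0492_Sharp_m${msize}_a${amount}.MOV"
        ffmpeg -y -i MINI0492_Stable.MOV \
            -vf "unsharp=luma_msize_x=${msize}:luma_msize_y=${msize}:luma_amount=${amount}" \
            "$out" 2>/dev/null || true
    done
done
```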

Next I’ll create a side by side composite of both the source and final processed clip.  Here is the command to do that.

ffmpeg -i MINI0492_Split1.MOV -i MINI0492_Sharp.MOV -filter_complex "[0:v]setpts=PTS-STARTPTS, pad=iw*2:ih[bg]; [1:v]setpts=PTS-STARTPTS[fg]; [bg][fg]overlay=w" MINI0492_SideBySide.MOV

The results are impressive: I find myself referencing the clearer clip on the right to see what isn’t visible on the left.  With the commands tested and validated on my clip, I’m ready to apply them to the entire video.

This will be a time-consuming process, but that’s alright.  I’m going to script it and go to sleep; when I wake, the final processed video will be ready.  Here is the script I’ll be using.

#!/bin/bash
# this runs while everyone else sleeps
# use the command "sh ProcessVideos.sh" to run manually from command line

#########################################################
#                                                       #
#                  Night Time Video                     #
#           Enhance and stabilize processing            #
#                                                       #
#########################################################

# Change to the working directory
cd /home/local/Desktop/Clip

# Correct the gamma levels so video is brighter and depth remains
# The commented command will brighten only
# ffmpeg -i MINI0492.MOV -vf hue=b=2.11 MINI0492_Brightness1.MOV
ffmpeg -i MINI0492.MOV -vf "eq=gamma=1.8" MINI0492_Gamma.MOV

# Create a stabilization profile and apply it
ffmpeg -i MINI0492_Gamma.MOV -vf vidstabdetect=shakiness=10:accuracy=15 -f null -
ffmpeg -i MINI0492_Gamma.MOV -vf vidstabtransform=smoothing=50:input="transforms.trf" MINI0492_Stable.MOV

# Sharpen the video
ffmpeg -i MINI0492_Stable.MOV -vf unsharp=luma_msize_x=9:luma_msize_y=9:luma_amount=3 MINI0492_Sharp.MOV

# Create a side by side video of the original on the left and processed on the right
ffmpeg -i MINI0492.MOV -i MINI0492_Sharp.MOV -filter_complex "[0:v]setpts=PTS-STARTPTS, pad=iw*2:ih[bg]; [1:v]setpts=PTS-STARTPTS[fg]; [bg][fg]overlay=w" MINI0492_SideBySide.MOV

Here are the results.  There is visible noise from the camera’s processor; I’m guessing it’s the write burst to the SD media on the camera.  Daylight video doesn’t reveal that artifact, but processed night time video does.  It interferes with the stabilization, so I would dial the level of stabilization down a bit to reduce it.
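Another option, beyond lowering the smoothing, would be to denoise before profiling so the noise doesn’t feed the motion detection. The sketch below uses hqdn3d, a stock FFmpeg denoiser; its default strength and the smoothing value are untested starting points of my own, not values from this post.

```shell
# Denoise while correcting gamma, then stabilize with reduced smoothing.
# hqdn3d runs with its default strengths here; tune by eye.
smoothing=20
ffmpeg -y -i MINI0492.MOV -vf "eq=gamma=1.8,hqdn3d" \
    MINI0492_Denoised.MOV 2>/dev/null || true
ffmpeg -i MINI0492_Denoised.MOV \
    -vf "vidstabdetect=shakiness=10:accuracy=15" -f null - 2>/dev/null || true
ffmpeg -y -i MINI0492_Denoised.MOV \
    -vf "vidstabtransform=smoothing=${smoothing}:input=transforms.trf" \
    MINI0492_StableDenoised.MOV 2>/dev/null || true
```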

Still, it’s nice to be able to take a bunch of night time video and process it in bulk.  The platform doing the work could even be a Raspberry Pi; all you really need is time, disk space, and support for FFmpeg and its filter libraries.  I’ll cover more things FFmpeg can do in later posts, including simple enhancements to daytime video as well as some audio-visual work.  I hope you enjoyed.
