FFmpeg Side-by-Side and Tiling Video
FFmpeg has an interesting ability to join multiple video feeds together. In the past, I had used other video editors to do this, but they were limited to 2 camera views, and the second camera’s view was scaled down much smaller than the first. With FFmpeg, we can join more than 2 camera feeds without resizing. I’ll be quick and to the point in this post, so try not to blink.
In this example I have 4 cameras that were “filming” the start of the STP bicycle classic. I’m using 4 808-16 keychain cameras, all of which have external wide-angle lenses attached. These cameras record 1280×720 30 fps H.264 video with 16-bit PCM mono audio sampled at 32000 Hz. Since all of my videos use the same format, not much else needs to be done before tiling with FFmpeg. Here are the 3 commands I used to tile my 4 videos.
ffmpeg -i "Camera1_Left.MOV" -vf "[in] scale=iw:ih, pad=2*iw:ih [left];movie=Camera2_Right.MOV, scale=iw:ih [right]; [left][right] overlay=main_w/2:0 [out]" "SideBySide_Top.MOV" ffmpeg -i "Camera3_Left.MOV" -vf "[in] scale=iw:ih, pad=2*iw:ih [left];movie=Camera4_Right.MOV, scale=iw:ih [right]; [left][right] overlay=main_w/2:0 [out]" "SideBySide_Bottome.MOV" ffmpeg -i "SideBySide_Top.MOV" -vf "pad=iw:2*ih [top]; movie=SideBySide_Bottome.MOV [bottom]; [top][bottom] overlay=0:main_h/2" "Tiled_4_Camera_View.MOV"
The interesting thing about the movie-filter tiling above is that only one of the cameras supplies the audio: FFmpeg keeps the audio from the input video given with -i, not from the videos pulled in by the movie filter. In this example you will hear the audio from the top-left camera.
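If you would rather hear a different camera, you can swap in its audio track afterwards with -map and stream copies. Another sketch, assuming the file names above and a hypothetical output name:

ffmpeg -i "Tiled_4_Camera_View.MOV" -i "Camera4_Right.MOV" -map 0:v -map 1:a -c copy "Tiled_Camera4_Audio.MOV"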
There are times when the source videos will not all share the same format or orientation. In those cases, the sources must be processed so they all end up in the same format; otherwise, the results will be unpredictable. Here are the commands I used to make this adjustment to 4 video sources that had 3 different formats.
ffmpeg -i 20100717_STP.png -vf 'scale=640:360:force_original_aspect_ratio=decrease,pad=640:360:x=(640-iw)/2:y=(360-ih)/2:color=gray' 20100717_STP_Scaled.png

ffmpeg -i 201207140948.MOV.png -vf scale=iw*.5:ih*.5 201207140948.MOV_Scaled.png

ffmpeg -i MINI0014.MOV.png -vf scale=iw*.5:ih*.5 MINI0014.MOV_Scaled.png

ffmpeg -i REC_0008.MOV.png -vf scale=iw*.3333:ih*.3333 REC_0008.MOV_Scaled.png
I first established the format I wanted to use and started by resizing my first source to 640×360. This camera was one of my earliest helmet cameras and shot 640×480 at 15 fps. Since it had a native aspect ratio of 4:3, I needed to fit it into 16:9. A simple scaling filter would have distorted the video with a stretched or squashed look. To avoid the spaghetti-western effect, I added padding to fill in the dead space that the source video wouldn’t cover in the new aspect ratio: force_original_aspect_ratio=decrease shrinks the frame to 480×360 so it fits inside the 640×360 canvas, and the pad filter centers it, leaving (640−480)/2 = 80 gray pixels on each side.
The last 3 commands simply scale the source videos down to match the 640×360 format I established at the outset. The first 2 videos in this group were from 808-16 1280×720 cameras, so halving both dimensions lands exactly on 640×360. The last source is from a Mobius 1920×1080 camera, which is why it is scaled by one third instead. I’m not using videos, but screenshots from those videos, in this example; the same commands would be used for the videos. Here are the results.
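Once all four sources share the same 640×360 frame, they can be tiled just like the matching clips earlier. As a sketch, a newer FFmpeg build can lay out all four scaled images (or videos) in one pass with the xstack filter; the output name here is just illustrative:

ffmpeg -i 20100717_STP_Scaled.png -i 201207140948.MOV_Scaled.png -i MINI0014.MOV_Scaled.png -i REC_0008.MOV_Scaled.png -filter_complex "[0][1][2][3]xstack=inputs=4:layout=0_0|w0_0|0_h0|w0_h0" Tiled_Mixed_Sources.png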
I’ll cover some additional effects we can get from FFmpeg with the resize and padding filters. With them, we can create borders around videos. This will be helpful for distinguishing the different camera sources as I start to demonstrate overlays.
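As a taste of that, here is a sketch of one way to draw a border with those same two filters: shrink the frame slightly, then pad it back to its original size over a colored background. The 4-pixel red border and output name are just illustrative:

ffmpeg -i "Camera1_Left.MOV" -vf "scale=iw-8:ih-8, pad=iw+8:ih+8:4:4:red" "Camera1_Bordered.MOV"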
I hope you have enjoyed this topic and look forward to covering more about FFmpeg and the features it offers.