Wednesday 20 October 2021

Editing video with open source tools

* Note: I will probably add illustrations to this in the next few days, after the workshops have finished, so check back.

If you haven't installed kdenlive you can download it for Windows here https://kdenlive.org/en/ , on Debian or Ubuntu based distros do sudo apt install kdenlive, and on Arch based distros sudo pacman -S kdenlive.

So let's open kdenlive and look at it!

 


 

Kdenlive is pretty much the same on Windows and Linux. On the left you can see a window with tabs named project bin, effects, undo history, clip properties and so on. Next to that you have the clip monitor, which just displays the clip you are working on at the time, and next to that the project monitor, which displays the whole thing that's on the timelines below. Bottom right you have the audio mixer and effects stack window (more on that later), and below that there's a small slider which took me a while to find which gives us the zoom level, not important until we have a video on the timeline. Pretty straightforward so far.

This is my approach. I don't present it as the last word or a method to use yourself, it's just what works for me.

Personally, when I'm editing I'm not that interested in narrative, as I said in the first session:

Naturalism in Film is a construct and a betrayal of possibility

Narrative is unimportant.

The moving image can be treated as a material like paint

In editing seek what is interesting visually rather than a story

The viewer is allowed to move backwards or forwards through the ‘finished’ piece creating it for themselves

With that in mind

My approach to editing reflects this: I look for what I find visually interesting and what 'flows'. Though what I make ostensibly occupies a linear time frame, I'm not that concerned if people pause or rewind to catch something again; it's also one of the advantages of making work for the web. Though I do make long work, some of the most amazing pieces of glitch art I've seen are relatively short in duration, such as the work of Rui Martins aka feathersmakepplattractive https://www.tumblr.com/tagged/Rui+Martins?sort=recent . A lot of the time these are very, very short, just a moment or a repeating gif of a moment, and I will cover making gifs quickly today. Why do these short videos have such impact? What length should your video be? In Rui Martins' case, and in a lot of datamoshing, they are leveraging the moment of surprise where one thing becomes unexpectedly other, so the video only has to represent that moment.

Consider these things before editing

1) Preparing material beforehand: baking broken files, getting everything the right dimensions, remembering to save periodically and backing up files.

2) Cut out sections that don't do much (unless that's what you want), and avoid repeating sections, which can be especially important in datamoshing. Scan through to find interesting sections and delete the boring stuff, unless that's what you want, adding in new material if needed.

3) Effects or not. I'm fairly on the no-effects side of editing, the glitches are what I've already made. Having said that, sometimes a little bit of sharpness is good, and I will run through some of the more interesting effects you can do with kdenlive.

Let's get into it.

But what if, after having used tomato or ffgac, I have a longer video, say 5, 10 or even 30 minutes to an hour long? Essentially my approach would be to look through the video finding those moments of surprise and editing out the fluff in between, because unless you are using avidemux to make datamoshes by hand, automated scripts will produce a lot of stuff that I don't want.

For instance, this video 'Prisoners of the Lost Universe' https://archive.org/details/PrisonersOfTheLostUniverse1983 run through Kaspar Ravel's tomato v2 using the command 'python tomato.py -i prisoners.avi -m pulse -c 20 -n 7' ended up being 30 hours long from an initial 1 hour and 34 minutes.

To import a video into kdenlive, go to the left, open the project bin, right click and add clip to import the video. Sometimes when importing you will get a warning asking if you want to adjust to the profile of the clip; if you are just working on this one video then click to do that, the other case we will talk about a little later. You might have to wait a while for it to import your video depending on size and other factors. Once it has imported okay, drag and drop it onto your timeline, and then we'll try and play it back. (I've had problems sometimes with videos manipulated using other methods; if it won't import your video you may have to bake it with handbrake or try re-encoding it with ffmpeg, or capture the glitched video using OBS Studio via playback in ffplay or your video player of choice, which may take some time if the video is long.) Now we can see what the zoom control does, handy for zooming in and out of the timeline and finding your way about. We can also scrub back and forwards on the timeline, which gives a wonderful weird sound effect as well (which might be possible to use as an audio scratcher if we had, say, audacity open in the background to record what we are doing). Now, as you can see, we are getting stuttering and seek problems because the video is datamoshed and it needs to be baked. We could bake it from here but it's probably easier to exit and re-encode it using handbrake or ffmpeg.
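If you do need to bake a glitched clip with ffmpeg before importing, a minimal re-encode sketch would be something like this (the filenames are placeholders and CRF 18 is only an example quality setting):

ffmpeg -i glitchedclip.avi -c:v libx264 -crf 18 -c:a aac bakedclip.mp4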

Open handbrake and run through the dimensions tab, altering them and making sure the crop is right (you can check in the summary tab, handy if you've screen grabbed the video). In the video tab choose the encoder, i.e. h264 or not, then choose the quality; at this stage we'll leave it at RF 25, which is fairly high compression, a lower RF gives better quality but a bigger file size, and this file is already 8gb! Audio: it's important to check this tab, especially if you're resizing or finishing to upload to Instagram or Tumblr, as they won't accept anything but aac audio. Also check where the file is going to be written; it will warn you if it's going to overwrite an existing file, but I'm always making different versions, so make sure you stick to some kind of naming plan, be organised!!
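If you'd rather do the same resize and re-encode from the command line, a rough ffmpeg equivalent might look like this (filenames, the 1280 width and the CRF value are just example choices):

ffmpeg -i somefile.avi -vf scale=1280:-2 -c:v libx264 -crf 25 -c:a aac -b:a 160k somefile-720p.mp4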

Naming plans side note

When I'm working on a project I can create a lot of files, so it's important to know where those files are. Be consistent: either save directly to the video folder in your home directory or create a project folder to work from. This is very important if you are going to spend a couple of days putting together a longer project and will be saving an editing timeline constantly, as the video editor will need to know where the files are to be able to reopen the project. There is a save project option in the file menu of kdenlive, and that function exists in pretty much every video editor, but it means you shouldn't move any file you have imported into a project around, or the editor won't be able to find it and any edits you have made will either have to be redone or you will have to locate the media for the video editor (sometimes they will give a menu for relinking).

Organisation is key. Know where your files are, keep them there and don't move them around until you've finished with them. After that I have backup hard drives where I store them in dated folders by year.

When I'm saving files or versions I tend to put in a shorthand for whatever technique I've used, so something like somefilehextomatofinalshort.mp4 would be a typical example (somefile hex edited, sometimes I'll write in the values I've used; datamoshed using tomato; final being an already baked or finished file; short means I've shortened it, possibly to upload to Tumblr, which has a 100mb upload limit, or for Instagram, which prefers video under one minute on the main timeline). Or maybe version 1, 2, 3. I tend not to leave gaps in file names, as it's just easier and quicker when typing file names into the terminal than if they have spaces in them.

With all that checked out and changed to your preference, let's encode. Bear in mind encoding can take a while depending on your hardware, so tailor the output to your machine's capabilities and the proposed destination.

Using handbrake would have taken about four hours to render this file. Why so long? Well, tomato increased the file size to 8gb, but it also stretched the duration of the file to something like 31 hours from 1hr and 37 minutes, obviously because we are adding extra frames, so I used ffmpeg to re-bake instead using

ffmpeg -i prisoners-pulse-c20-n7.avi -c:v copy -c:a copy prisonersbaked.avi

which left me with a usable but large and long file. Can we still import that into kdenlive?

Yes we can, but kdenlive started to struggle because it's such a long file. But also notice the sound; we could extract that now and store it for later by going back to that directory and doing

ffmpeg -i prisoners-pulse-c20-n7.avi -c:a libmp3lame -abr 1 -b:a 256k prisonerpulsesound.mp3

Which gives me a largish and reasonable quality mp3 file. Usually I'd save to wav format, but this file would be too big.
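For shorter clips where size isn't an issue, the wav version would be something along these lines (the filenames are placeholders; -vn drops the video stream):

ffmpeg -i somefile.avi -vn -c:a pcm_s16le somefile.wav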

So even though kdenlive can import the 30 hour long video, let's cut it down to an hour for manageability. The bigger the file, the more the editor will struggle, especially on lower end or older machines or machines lacking in ram (hopefully you will have 4gb or more).

We will do that using ffmpeg (obviously if your clip is shorter you won't be having these problems!)

ffmpeg -ss 00:05:00 -t 01:00:00 -i prisonersbaked.avi -c:a copy -c:v copy prisonersbakedshort.avi

So, back with the clip imported into kdenlive, I'm just going to find an area I like, delete the rest and then work on that. On a small screen, say on a laptop, you can un-dock or close windows you aren't using, and it makes more sense to have the viewports larger.

If you click the scissor icon, move it to where you want it, then click on the timeline, then delete the excess. Now we can stretch the clip on the timeline so we can get a better feel for what's there, using the slider in the bottom right hand corner, and home in on what we want to keep. Generally I want to get rid of original references and keep the datamoshed part, so something like this (show searching, bracketing and deleting).

Now I'm kind of happy with that, so let's export it to h264 with aac audio by pressing this button here (the red button that says render). A window opens up giving us various output formats; we'll stay simple and go for h264 aac. If you flick through the options you can see which ones your computer can handle, if they aren't in red you can use them, plus various options to just export audio or whatever. Remember to name the file as well! And note where it's going (here, videos in my home folder). You can alter profiles as well: if you click on the more options icon at the bottom middle of that window, a new window pops open to the side showing you the various parameters kdenlive will use. They're in a kind of ffmpeg format, so you can adjust them as you would with ffmpeg. You can also make adjustments to these scripts by opening the settings menu on the top bar, going to configure kdenlive and clicking on the transcode button, which gives all the formats and encoding settings for each format, and you can add to or subtract from those.

The format for these is basically that used by ffmpeg but without the input and output parts, i.e. it takes off the ffmpeg -i yourfile.extension at the start and the output.extension at the end, so if you have an ffmpeg profile you like using you can add it in and then use that for rendering.
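As a rough illustration only (the exact parameter names can differ between kdenlive/MLT versions, so treat this as an assumption and compare against one of the built-in profiles), a custom profile line might look something like:

f=mp4 vcodec=libx264 crf=23 preset=medium acodec=aac ab=160k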

Hit the red button, wait, and…

Let's look at that now. Okay, let's reload that as a new file in kdenlive (or we could save that edit as a project before we start again) and run through a few of the filters. Go to the window in the top left, click on effects and let's add something like rgbsplittor. If we drag that onto the timeline then we can control it from the effects stack on the bottom right. I'm not a big fan of adding effects other than sharpening, but some of these are quite fun, rgbsplittor especially, so I'm going to render that to a new file again

and play back …… nice !

And if you don’t like the effect you can just click on the red bin icon in the top right hand corner of that window and it disappears !

Try out different effects and see what you like , or none . ( rgbsplittor with lens correction !)

The boring bit

so what if we have lots of different files that we want to make a long form video from ?

I have an external hard drive with videos of different lengths, different codecs and different sizes; I generate a lot of files! But also remember that when working with found files you will have the same problem: archive.org is not consistent in the dimensions of the videos it hosts, and often dimensions will vary depending on era, i.e. black and white vs colour, or the way the file was encoded. So what if I want to import them into kdenlive, edit them and have a single output size? Let's see what happens if we try that. If we try and add the whole folder at once, what happens is that kdenlive tries to re-render all of the videos at once. As I'd already imported two videos to begin with and the first video was 1280x720, I think it's going to import everything as 1280x720 no matter what size they are. If you look at the top there's a render button giving a time until finished and a slowly moving progress bar. Nope, that's going to take too long to import that many videos (18), let's do one at a time.

(We could use handbrake on the files we want to work with first, which in the end will be quicker, but we will do this first in kdenlive.) To do the size conversions, import one file at a time: first a 1280x720 clip (2021-04-30_16-48-26.flv), then treestorm.mp4, which is 4:3, to show that the edges aren't stretched, so that clip will have black borders. If we click on that in the project bin it opens in the clip monitor; we can then go to the effects menu and use edgecrop to bring it into the same dimensions as the first clip, and we can do that for each subsequent clip if needed. So let's import a few more (vertigosnow.avi), that's 4:3 as well but we have to use edgecrop slightly differently, then add that to the timeline. One more; this one doesn't need any altering. Notice the first two have no audio but the third one does, so I'll click on the audio icon to mute the audio in that clip.
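If you'd rather sort the dimensions out before importing, a rough ffmpeg sketch for fitting a 4:3 clip into a 1280x720 frame with black borders on the sides would be something like this (filenames and sizes are just examples):

ffmpeg -i some4x3clip.avi -vf "scale=960:720,pad=1280:720:(ow-iw)/2:0" -c:v libx264 -crf 20 -c:a copy some4x3clip-720p.mp4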

If we want to add effects we will have to do each clip individually. Open the effects menu, select the one for video, drag and drop the effect onto the clip on the timeline, then alter to taste in the bottom right where the effect menu opens. Now if we look at the top of the kdenlive window we can see that while we've been doing this kdenlive has been rendering stuff in the background, so we can't export the video till that's finished. Is there a simpler way around resizing video to prepare for this? Yes there is: handbrake!

Which also brings up some of my reasoning behind the way I work when editing, i.e. I don't just use one program. It's possible, but each program is a tool which works more efficiently than the others at some things, so I use handbrake a lot for resizing, cropping and getting stuff to the right dimensions, ffmpeg for re-encoding or baking, and finally the editor itself for finishing, proper cutting, adding and deleting. It's not one thing, it's a process. And a lot of the time I'll think, aah, I need that clip but it's in this format, so I'll exit from the editor, saving the edit, go off and find the clip I want, get it to where I want it to be with ffmpeg and handbrake, and then reload the editor and the edit I was working on.

But back to handbrake

I'll show you how quickly now with the Windows version of handbrake; it's slightly different on Linux. In the Windows version we can (if our hardware allows) turn on Intel QSV: go to preferences before loading a file to encode and tick the box which says allow QSV (it might already be ticked for allowing QSV on lower power hardware). I'll talk about QSV in a while because it saves a lot of time when encoding files.

Decide what dimensions you want to work at, i.e. 1920x1080, 1280x720, 960x540 and 640x360 are all 16:9, or 4:3, which is generally 640x480 or PAL 768x576. Open handbrake, open the first video, check the dimensions and codec are what you want, then press add to queue. Then open the next file in your list; if it's smaller or you want to change its dimensions, in the dimensions section (in the Windows version) remember to tick allow upscaling, turn off anamorphic scaling and at the bottom change where it says display size to the long dimension of your desired size (here I'm using 1280x720, so 1280). Go back to the summary window and check you don't need to crop anything; if you do, do that, then add that video to the queue, then add your next video and so on until you have the videos you want ready, then press start queue. If you've queued up a lot of videos to re-encode and resize it could take some while. One of them, vertigosnow, I had to re-encode using ffmpeg first, as it was in a weird codec that handbrake can handle but does so very slowly, and ffmpeg was way quicker, so it's worth playing around with encoding in different applications just to make sure, especially on a slow pc or if you are time constrained (handbrake gave an estimate of 2 and a half hours to encode that one file as it was, whereas first re-encoding to h264 took maybe 15 minutes in ffmpeg, faster with QSV! And it saves time when feeding it back into handbrake).
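A sketch of the kind of pre-encode I mean for that awkward file, with the exact settings as example values only (I didn't note down what I actually used):

ffmpeg -i vertigosnow.avi -c:v libx264 -crf 20 -c:a aac vertigosnow-h264.mp4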

There are interesting discussions on what SAR, DAR and PAR are in ffmpeg with aspect ratios at https://forum.videohelp.com/threads/402501-ffmpeg-and-SAR-and-DAR for further reading, and this page on resizing video and changing aspect ratios with ffmpeg as well: https://ottverse.com/change-resolution-resize-scale-video-using-ffmpeg/

So that took about an hour and a half, compared to 2 and a half hours plus. Could we do it faster? Obviously if you are only working on a small file this isn't going to be much of a problem, but if you are working on a longform video (as I often do) this will be a concern.

Welllll, there is a way if your computer has a newer Intel processor, on Windows 10 (and maybe Linux): using newer versions of ffmpeg we can render faster using a built-in function of the integrated gpu called QSV, or Intel Quick Sync Video. More info on that here: https://www.intel.com/content/www/us/en/architecture-and-technology/quick-sync-video/quick-sync-video-general.html

Not only is it quicker, but because it shunts the encoding from the cpu to the gpu and is hardware based, it reduces cpu load, heat and general wear and tear.

Some applications have it enabled, including ffmpeg and handbrake (depending on version), and in fact re-encoding with ffmpeg from the command line can work out faster than using handbrake.

Simple re-encoding using QSV:

ffmpeg -i somefile.extension -c:v h264_qsv outfile.extension
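If you want some control over quality rather than relying on the default bitrate, the QSV encoders take a quality value; this is an assumption about your particular build, so check what your encoder supports with ffmpeg -h encoder=h264_qsv first. A sketch:

ffmpeg -i somefile.mp4 -c:v h264_qsv -global_quality 25 -c:a copy outfile.mp4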

We could use the other open source video editor, OpenShot, which does have QSV enabled by default if available, but OpenShot on Windows and Linux isn't very stable on longer files. It used to be my editor of choice, but kdenlive works much better and seems to be more stable (though I have managed to crash it more than once). We could also reconfigure kdenlive to use the latest ffmpeg that we installed with Chocolatey (which does have QSV encoding built in) by clicking settings, then configure kdenlive, then clicking on environment and pointing each line where it asks for the exe file to C:/ProgramData/chocolatey/bin/ and each binary asked for in turn, then restarting kdenlive; export then becomes a lot faster. But I couldn't find a way to get QSV encoding to work from kdenlive so far.

To alter an encoding script in kdenlive, click on the codec you want to use in the main window after you press render, and alter the script in the window that opens, replacing libx264 with h264_qsv, if you can find a way for it to work (?), or whichever codec you choose. Give that profile a name and apply, and it will appear in the render options in the main window.

But it's probably simpler and quicker to do this from the command line using ffmpeg.

But as I found out today while helping someone with a problem: they had two videos with the same dimensions, i.e. 1280x720, but they didn't have the same pixel aspect ratios, which wouldn't normally be a problem, but as they were trying to do displacement mapping ffmpeg threw a fit and decided it didn't want to play, so we had to run one of the files through handbrake to get the correct pixel aspect ratio and final display size. So it's probably better to use ffprobe to find the file that is off (ffmpeg will tell you which one when it exits with an error message) and use handbrake to change the bad file to match the good one; it's easier to do this until I've found an ffmpeg formula that works.
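To check a file's pixel aspect ratio with ffprobe, something like this works (the filename is a placeholder):

ffprobe -v error -select_streams v:0 -show_entries stream=width,height,sample_aspect_ratio,display_aspect_ratio -of default=noprint_wrappers=1 somefile.mp4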

more info on sar/dar and par here and why that would be an issue - 
https://en.wikipedia.org/wiki/Pixel_aspect_ratio
 
We can use Intel QSV with ffmpeg on four codecs, and actually you can get by with two of them, i.e. vp8 (vp8 webm, which Google's YouTube and Facebook will accept; it also gives nice results with hex editing but it's quite fragile as well, so experiment with values there) and h264, but also mpeg2, which is an interesting codec for datamoshing. The webm vp8 encoder is vp8_qsv, mpeg2 is mpeg2_qsv, and there is also h265, hevc_qsv (hevc is quite an interesting codec for hex editing, you get some really nice artefacts with it).
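A couple of rough sketches using the mpeg2 and h265 QSV encoders, assuming your hardware and ffmpeg build actually expose them (filenames and the bitrate/quality numbers are just examples):

ffmpeg -i somefile.mp4 -c:v mpeg2_qsv -b:v 8M -c:a mp2 somefile-mpeg2.mpg

ffmpeg -i somefile.mp4 -c:v hevc_qsv -global_quality 25 -c:a aac somefile-hevc.mp4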


So now we have all our material back in the right aspect ratio, back to kdenlive (as I say, you could re-aspect everything in kdenlive, but then it might take hours for it to render, especially with bigger files, as it has to re-aspect each clip, then add effects, then re-encode and render). Thinking ahead and re-encoding to the size you want is often better and quicker before you edit.

Reload the files that are now the correct aspect ratios and dimensions, edit a bit, then add effects, then render as the final video (this gives a chance to reiterate the way effects can be applied on the timeline and to separate clips, where the windows appear, etc.).

Also to talk a little about what I'm looking for and how I look for flow and rhythm: let the material guide you, don't use assumptions from having watched narrative videos, use the untutored eye. I.e. with datamoshing I'm cutting out references to the original real source as is and just trying to isolate the flow of the pixels, excluding repetition if possible, making something new with the flow of the pixels as they move.


With hex editing I'm trying to bring out the artefacts or any delicious pieces of breakage, which is different to my approach to datamoshing, as I'm trying to show things like colour, movement, flow, or a deliberate jaggedness, letting the way the file has broken guide me. If there's no information there I remove it; if I like a texture I'll keep that.

homogeneity is the enemy of glitch art

But remember

Speed of rendering is dictated by the codec chosen and the machine, so tailor the output resolution to the machine: i.e. 1920x1080 is useless to encode on a Core 2 Duo, especially if it's h264; mpeg4 and libxvid work better on older computers; ogv, vp8 and newer are more intensive on anything less than a second gen Core i3 (unless you have a decent graphics card or the processor has QSV extensions). Get to know the capabilities of your machine. What processor do you have? What gpu? Consider shorter videos, or lower resolutions, on older hardware; longer videos take a lot of time to render depending on format and hardware.
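A quick way to see which hardware encoders your ffmpeg build exposes is to list the encoders and filter for qsv (this works on Linux or in Git Bash; on plain Windows cmd swap grep for findstr):

ffmpeg -encoders | grep qsv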

What platform are you posting to? What formats do YouTube etc. allow, and what length and size? Most sites need h264, but YouTube will take a wider range including webm. Tumblr wants h264 with aac audio but restricts size to 100mb max (so you could encode to the size limit but have a 20 minute video). Consider that Facebook and Instagram destroy video quality, and Instagram on mobile works best with a squarish ratio and, on the main feed, again less than a minute of video and under 100mb.
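If a clip comes out over that 100mb limit, one rough way to squeeze it down is to drop the resolution and raise the CRF; a sketch, with the numbers as example values only:

ffmpeg -i somefile.mp4 -vf scale=1280:-2 -c:v libx264 -crf 28 -c:a aac -b:a 128k somefileshort.mp4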


The final thing before gifs, which will help us in the next session with sound.

In the second session I showed you how you could take audio and transform that into video. Using kdenlive we can then transform that video by adding effects and then turn it back into audio for use later. Caveat: it doesn't work with all effects and sometimes all you get is noise, but it's worth it for the times it does come up with something interesting.

So as a quick run through: take some audio as wav and change the file extension to .yuv as before, then make a video out of that using:

(to extract audio from a file first: ffmpeg -i somefile.extension somefile.wav )

ffmpeg -f rawvideo -s 640x480 -r 25 -pix_fmt yuv420p -i somefile.yuv -c:v rawvideo output.avi

Import that into kdenlive, add effects, then export as lossless HuffYUV.

Then open that file in audacity like this: open audacity, go to file, import raw data. Locate the file with the file explorer, click on it and specify unsigned 8-bit, byte order little endian, 2 channels (stereo), press import and then play back and see what it sounds like. If you like it, maybe add a few effects, change speed and pitch etc., export it as wav and then repeat. But I'll cover techniques like this in the next session.

Gifs


Making gifs 

To make a gif from video, select the time segment you want and cut it out of the video into a new file, repeating what we did before, i.e.

ffmpeg -ss 00:00:00 -t 00:00:10 -i somefile.avi -an -c:v copy somefilesegment.avi

( change the -ss 00:00:00 to the point you want to start from in your video ) 

(Remember -ss gives us the point to start from and -t gives the amount to cut out, so sort through the video you have already glitched for a part you want to make into a gif, note down the time start point and how many seconds after that you want to include.)

note the -an which tells ffmpeg to remove the sound as we don’t need sound in gifs 

so then we do 

ffmpeg -i somefilesegment.avi  somefilesegment.gif

This is a good guide for making videos into gifs and gifs into videos: https://engineering.giphy.com/how-to-make-gifs-with-ffmpeg/ and it goes into better depth than we can cover here.
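If you want better looking gifs than the straight conversion above gives, the palette trick that guide describes works well; a sketch, with the frame rate and width as example values:

ffmpeg -i somefilesegment.avi -vf "fps=15,scale=480:-1:flags=lanczos,palettegen" palette.png

ffmpeg -i somefilesegment.avi -i palette.png -filter_complex "fps=15,scale=480:-1:flags=lanczos[x];[x][1:v]paletteuse" somefilesegment-hq.gif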
 
Using ImageMagick and FFmpeg to create a gif from a video

Take video from the work you've done so far and isolate a 10 second clip. Then we will divide that clip into stills using ffmpeg. You might want to put the video into a new folder, then navigate to that folder and open Git Bash or a terminal there.
 

Windows 10 
ffmpeg -i somefile.avi -vf fps=1 image-%05d.png

Linux
ffmpeg -i somefile.avi -vf fps=1 image-%05d.png

Then use ImageMagick to make the stills into a gif via

win 10 

magick convert -delay 10 -loop 5 -dispose previous *.png iamanimating.gif


Linux
convert -delay 10 -loop 5 -dispose previous *.png iamanimating.gif



