Thursday 2 July 2020

Ethics of Sources Day 4 - Divide by stills

Welcome back to part 4 of The Ethics of Sources; the original talk can be found here - Ethics of Sources Day 4


I’m calling this blog Divide by Stills as we will be taking a video, dividing it into stills, and showing how you can work on those stills through hex editing, convolution and glic – and how different formats break (covering dds, png and ppm).

Now why on earth would we want to take a video and divide it into stills when all we are trying to do is glitch the video? Primarily control over the process, plus access to image formats that aren't available when working with video (though it is possible to encode something like xwd or png or jpeg2000 into video). The other reason is to end up with readable video that doesn't require baking. We can also tailor the process at the start by running commands on just one or a small number of images before committing to a batch process, and for some techniques and formats this actually works better – say if we want to use a format like dds or ppm, or even something obscure like sgi. All of these formats have unique qualities when hex edited which you can't get with a video codec.

As an example, this is a film I made back in 2017. The source was an original film from 1968 found on archive.org, chopped into 34,000 individual stills (xwd format), hex edited, then reassembled using ffmpeg. The audio was taken from the original, converted to cavs, hex edited, then captured using Audacity on a separate computer. The final video and audio were put together with Flowblade. Clip below, and the full video is on my PeerTube channel here

Pretty Broken Things-2017






How do we go about this then? Let's choose a video (this is a section of a video of a ballet by Oskar Schlemmer and Elsa Hotzel called ‘The Triadic Ballet’) and put it in its own folder. This is important, as the process generates a lot of stills (I'll often use a scratch disc for larger works), so we will just work on a small 30-second film through this session.







So let's go ahead and divide this up with this command: 'ffmpeg -i ballet.mp4 -vf fps=16 image-%04d.ppm'






Just to explain that command: ffmpeg -i takes ballet.mp4 (or whatever video you are using) as input; -vf creates the filtergraph specified and uses it to filter the stream; fps=16 sets the frames per second (change that to match the framerate of your video); and image-%04d.ppm outputs those frames as a numbered sequence starting from 0001. In simple terms: take this video, divide it into this many stills per second, save them as a series of images in ppm format, and name them image-xxxx.ppm as you create them, starting from 0001. (ppm is an image format akin to bmp; I find the closer to raw you get, the better the results.)
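If you want to match the framerate of your source exactly rather than guessing, ffprobe (installed alongside ffmpeg) will report it; a quick check, assuming the file is called ballet.mp4:

ffprobe -v error -select_streams v:0 -show_entries stream=r_frame_rate -of default=noprint_wrappers=1:nokey=1 ballet.mp4

That prints the framerate as a fraction (for example 25/1), which you can then use for the fps= value above.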


Now we are going to hex edit these stills with a command which basically searches for all files of the type specified in this folder, hex edits them (as in the previous day's talk) and then redirects the output to a new folder – remember, never copy a file onto itself, as then all you have is blank files. I also want to keep the file numbering intact, which is important for the next step when we put the stills back together (ffmpeg likes a nice orderly sequence of stills).
 
As I said previously, never copy a file onto itself. But what if I have a dozen, or say 3000, files to work with? Well, this is what this script achieves – although in this version it's set up for hex editing, it can be modified to do something like pixel sorting or pretty much any command line script you can pipe to.

This is what this script does:

find . -type f -name '*.ppm' | while read filename; do echo ${filename}; xxd -p ${filename} | sed 's/cade/18/g;s/0a/0b/g;s/00/00/g;s/0/0/g' | xxd -r -p > /home/ian/Desktop/ballet/modified/${filename}; done

find . -type f -name '*.ppm' = look in this folder and find files with the .ppm extension
| while read filename; do echo ${filename}; xxd -p ${filename} | sed 's/cade/18/g;s/0a/0b/g;s/00/00/g;s/0/0/g' | = read each filename and feed it into xxd, which turns the file into a hexadecimal stream; that stream is then read by sed, which changes the values you specify between the //
xxd -r -p > /home/ian/Desktop/ballet/modified/${filename} = use xxd to turn that stream back into an ordinary file, then write it to the folder specified after the >
; done = the script is a loop, so it checks whether there are more files to do; if not it exits, and if there are it goes back and finds a new file to work on.
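For readability, here is the same loop laid out across several lines with comments; a sketch assuming the same paths as above (the modified folder has to exist before you run it):

# make sure the output folder exists first
mkdir -p /home/ian/Desktop/ballet/modified

find . -type f -name '*.ppm' | while read filename; do
    # show which file we are on
    echo ${filename}
    # file -> hex stream, sed swaps the chosen values, xxd -r -p turns it back into a file
    xxd -p ${filename} | sed 's/cade/18/g;s/0a/0b/g;s/00/00/g;s/0/0/g' | xxd -r -p > /home/ian/Desktop/ballet/modified/${filename}
done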
Ouch!
So simply put, it looks for files with the extension you choose, hex edits them, then outputs each new file to a separate folder. This is what it looks like in action.






Once that script has run we can go into the output folder and look at the files in an image viewer – I have ImageMagick installed and we could open them up with ImageMagick’s image viewer (if a file is very damaged, ImageMagick will often be the only viewer that works). Get ImageMagick through your package manager on Linux or here - Get Imagemagick here
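ImageMagick's viewer is the display command, so from inside the modified folder you can do something like:

display image-0001.ppm

(image-0001.ppm is just an example name; point it at whichever still you want to check.)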

 

ImageMagick really is one of the best command-line image manipulation programs you will find. Hex edited files are often damaged, and it's a good idea to bake them in a similar way to how we would bake video. To do that we will use this command (mogrify is part of ImageMagick):



Bake files with mogrify
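A minimal sketch of the kind of bake I mean, run inside the modified folder and assuming we convert the damaged ppm files to png (which is what I refer to below):

mogrify -format png *.ppm

Re-saving each still through ImageMagick fixes the glitched data into files that other programs can read, in the same way baking a video makes the glitch permanent and playable.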



 

Let's put all of that back together with 'ffmpeg -i image-%04d.png -c:v libxvid -q 9 ballethex.avi'. This basically reverses the process we did earlier, turning the stills back into video sequentially from the lowest file number to the highest.

The first time I ran this with the pngs I created using mogrify it didn't work; for some reason ffmpeg didn't like the format of the png files and just halted, so I ran the same command again but specified the ppm originals and it worked.
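If the reassembled video plays too fast or too slow, you can also tell ffmpeg the framerate of the still sequence explicitly; a sketch, assuming the 16 fps we extracted at and the ppm originals that worked:

ffmpeg -framerate 16 -i image-%04d.ppm -c:v libxvid -q 9 ballethex.avi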

Some re-assembly required 



 

But what does it look like?






I could go back and change some of those values in the original bash script if I don't like the initial results or the files are just unreadable – I'd tend to run this a couple of times before I get something I really like, but I'm happy with this as an example. Trying different image formats will yield different results, and that's what it's all about: experimentation. Just be aware that if you use longer files you will generate more images, and the script will take longer and fill more hard-drive space. For larger projects I have often generated upwards of 70,000 images and had to divide the video into groups of 10,000 stills just to be able to manage the process without the disc succumbing to a large amount of churn (the disc struggling to read and display the output). A scratch disc is a good idea.
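If you do end up with that many stills, a simple loop will split them into sub-folders of 10,000 (a rough sketch; the batch-N folder names are just an example):

n=0; d=1
mkdir -p batch-$d
for f in image-*.ppm; do
    # start a new sub-folder every 10000 files
    if [ $((n % 10000)) -eq 0 ] && [ $n -gt 0 ]; then d=$((d+1)); mkdir -p batch-$d; fi
    mv "$f" batch-$d/
    n=$((n+1))
done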

Convolution
Having looked a little at that, let's look at something slightly different: convolution. I'm not going to cover it too deeply as it's quite a complex concept and I don't want to muddy the waters of understanding; a good place to start is here though: https://en.wikipedia.org/wiki/Kernel_(image_processing)

Essentially it's moving across an image with a grid, comparing and changing values – the true master of this technique is Kunal Agnohotri – and this next section is an abbreviated form of what he does, but on the command line using ImageMagick and mogrify.
There are a couple of different matrices I've used in my work, for example in this video, where I've used some of the technique I just outlined and a fair few others – nothing I make is just one technique, it's a combination.




But for our purposes today we will just run this (you can cut and paste it into a terminal in the folder where your images are):

for i in {1..2}; do mogrify -morphology Convolve '3x3:
-100, 5, 100,
100, 5, -100,
-100, 5, 100 ' *.ppm; echo $i; done

What does this do? Well, it's a little complicated and I don't want to muddy understanding, so simply put: imagine your image is a grid of squares, say 640x480. The script slides a 3x3 grid across the file starting from the top, multiplying each pixel in that neighbourhood by the matching entry in the matrix, summing the results, and putting the new value back into the file as it passes. Let's look at it at work on the folder of images we created earlier, before we hex edited them. (Notice that this changes the actual files with no backsies – you could redirect them into a new folder, but this is just a quick example.)




 


 We could run this on a batch of already glitched files and get this:



 


Now this isn't the greatest of results, but tweaking the values in the convolution matrix can produce some pretty startling ones. I would suggest looking up Kunal's work if you want to get down and dirty with a nice GUI (I just like doing things basically and on the command line).
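As an example of that tweaking, swapping in the textbook edge-detection kernel from the Wikipedia page linked above gives a very different feel (these values are the standard example, not ones from my own videos):

for i in {1..2}; do mogrify -morphology Convolve '3x3:
-1, -1, -1,
-1, 8, -1,
-1, -1, -1 ' *.ppm; echo $i; done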

Glic (Glitch Image Codec)

What can I say about glic? It is beautiful to work with, lets you save presets and just works. At this point I'll just run it on the stills we created earlier and talk about the interface, what it does and how to use it. Where to get it? First you will need the Processing environment, which is available for Linux, Mac and Windows - Get processing here

And you will need the Glic image processing script - Get glic here

 Using Glic 



 



Now we've made a whole bunch of glitched stills, we need to turn them back into video (well, I do – you could just work with the stills and sort through them for the best ones). Processing has a handy script which will allow you to turn the stills back into video, and it works like this:




 

The end result 



 

In the next post I will be talking about finding and exploiting hardware and operating system flaws to make glitched video, along with a short introduction to circuit-bent webcams and how I use them in my work.











