Thursday 29 June 2017

Pretty broken things

When I'm making the videos and stills that I post on YouTube, Tumblr and my favourite group on Facebook, Glitch Artists Collective, I use three separate forms of source material: my own original content, out-of-copyright or Creative Commons work, or, very rarely, contemporary news video or other work still under copyright. I try to credit if I'm using material I've sourced online, but generally the final work bears no resemblance to the original. That is the nature of glitch art: online content becomes raw material to be used and reused within the context of online culture, and the raw material is just the carrier for what comes after.

I've spent the last week working on a video I ended up calling 'Pretty broken things'. Below is an excerpt; the full video is on my PeerTube channel here.

[ video excerpt ]
I found the original on archive.org (a good source for sauce). I was looking for something to play with that reflected the spirit of '68, in response to an open call for video work I'd found on Artconnect (http://www.artconnect.com/), a good resource for finding exhibitions and opportunities for artists (if you are interested, here's the link for the open call: http://www.artconnect.com/opportunities/vos-6-summer-of-68). This is the link to the sauce video: https://archive.org/details/gov.archives.arc.13260.

So, just for the hell of it and to give an insight into some of the processes I use these days, I thought I'd create a walk-through of the tools and commands that I used.

So first off I sourced the video from the above link, but the resolution was below what I needed to play with, so I had to upscale it in OpenShot (an open-source video editor common to most Linux distros, http://openshot.org/) to 720p (1280x720). I left that running for a while until I had the final file, which I then divided into two films of equal length, as I wanted to split the film into individual frames for the next part of the process.
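As an aside, you can do the upscale and the split from the terminal with ffmpeg instead - this is just a sketch, and the filenames and the eleven-minute halfway point are placeholders rather than the exact commands I used:

# upscale to 720p
ffmpeg -i original.mpg -vf scale=1280:720 upscaled.mpg
# split into two halves, copying the streams rather than re-encoding
ffmpeg -i upscaled.mpg -t 00:11:00 -c copy acidclip1.mpg
ffmpeg -ss 00:11:00 -i upscaled.mpg -c copy acidclip2.mpg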

When I'm outputting to single frames it usually ends up as 1000-plus stills, so to make it easier I'll set up two folders within a master, called a and b. A contains the original stills; b is where I output the worked-on frames. The first command to use (from the terminal) is run within the a folder, where I've also put the video section: 'ffmpeg -i acidclip1 -vf fps=23 image-%03d.xwd' (it took a few hours to do this and resulted in over 17000 frames). That command uses a program called ffmpeg (http://ffmpeg.org/) to divide each second of the video into 23 frames (after a hasty calculation to make sure I had enough space on the external hard drive) and output each frame in xwd format (https://en.wikipedia.org/wiki/Xwd). Xwd is an interesting image format, similar in texture, and in the way it can be made to break, to ppm or bmp.
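A quick way to do that space calculation less hastily: xwd frames are uncompressed, so at 720p each one is roughly 1280 x 720 x 4 bytes, about 3.5MB, which for 17000-plus frames is getting on for 60GB. Something like this will tell you if the drive can take it - the mount point is just a placeholder:

# rough size estimate in GB for 17000 uncompressed 720p frames
echo $(( 1280 * 720 * 4 * 17000 / 1024 / 1024 / 1024 ))
# check what the external drive actually has free
df -h /media/external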

So now I have the raw frames, I want to do something to them, i.e. break them in a particular way (in the video you can see progressive lines, broken multicolours in bands, and edges waving into the black borders left where the video was upscaled - one of my favourite effects). For this I use a bash script that I've worked on for a while. The original was created by Mandy Tonks as a way of creating what is known as the wordpad effect from the command line; I found her website back in 2012 but the links have since disappeared - so Mandy, if you have a new website that I can link to, drop me a line. A useful explanation of what the wordpad effect is can be found here (from 2008): http://blog.animalswithinanimals.com/2008/08/databending-and-glitch-art-primer-part.html. The command I use looks like this:

find . -type f -name '*.xwd' | while read -r filename; do echo "${filename}"; xxd -p "${filename}" | sed 's/ddd/eee/g' | xxd -r -p > "/home/ian/Desktop/ck/b/${filename}"; done

It looks for files with the .xwd extension, ignoring anything else, then converts each to hex (xxd, http://www.linuxcommand.org/man_pages/xxd1.html), then does a global replace (using sed, https://www.gnu.org/software/sed/) of the first hex value (ddd) with the second hex value (eee), then converts the output from hex back into the original file format, but in the new directory I've specified (b). It keeps on doing that until it has worked on each file in the current directory. It's an automated form of hex editing, one of the cornerstones of glitch art. (I will usually try a few different values before settling on a final overall run - it's easy enough to let the command run through a few files then press Ctrl-C to halt it. I use ImageMagick's viewer or XnView MP to check that the results are interesting enough to carry on - some values will stop the file being readable, so it takes a bit of experimentation.) ImageMagick can be found here: http://www.imagemagick.org/script/index.php.
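An even quicker way to audition values is to test them on a single frame before committing to the full run - a sketch, and the value pairs here are just examples, not the ones I settled on:

# try a few search/replace pairs on one test frame
for pair in 'ddd eee' 'aaa bbb' '111 999'; do
  set -- $pair
  xxd -p image-001.xwd | sed "s/$1/$2/g" | xxd -r -p > "test-$1-$2.xwd"
done
# then eyeball the results, e.g. with ImageMagick's display
display test-*.xwd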

When all the files are converted (in this case, for both halves, it took maybe a day to convert to images and then hex edit) I change to the output directory and convert the stills back into video using ffmpeg again: 'ffmpeg -framerate 23 -i image-%03d.xwd acidclip1.mpg' (the -framerate matches the fps=23 used when splitting, so the timing stays the same). I then remove all the xwd files - they take up a lot of space - in each directory with 'rm *.xwd'.
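One small safeguard worth mentioning: chaining the two steps with && means the frames only get deleted if the encode actually succeeded - it saves an unpleasant surprise after a day of processing:

# encode, and only then remove the frames
ffmpeg -framerate 23 -i image-%03d.xwd acidclip1.mpg && rm image-*.xwd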

Having done this to both files, I then open OpenShot and recombine them with the audio from the original. But now I want to change the audio too, so I experimented with converting the original file to cavs (http://xavs.sourceforge.net/) and hex editing it, which produced some intriguing sounds:
[ audio clip ]
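If you want to try the audio experiment yourself, the conversion step would look something like this, assuming your ffmpeg build was compiled with libxavs support - a sketch, not the exact command I ran:

# re-encode the video stream as cavs, keep the audio as-is
ffmpeg -i original.mpg -c:v libxavs -c:a copy acidsound.avi
# then run the same xxd | sed | xxd pipeline from above on the .avi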
I wanted the sound to be almost legible as speech, but not quite - like aliens talking in the next room. The file wouldn't convert back at full length after hex editing, but it would play through using ffplay (part of ffmpeg - you can play a video from the command line, if you have X running, with 'ffplay -i acidclip.avi'). So with the video playing on one computer, I fed the headphone output into the input of another laptop and recorded it using Audacity (which can also be used to glitch images and video) - find it here: http://www.audacityteam.org/.

So to assemble the complete video and audio I then opened the final files in Flowblade (https://jliljebl.github.io/flowblade/), placing the audio over the final video and silencing the audio that I didn't want. The audio I recorded through the second laptop was only ten minutes long, as opposed to the final 22 minutes of the video, so I had to play about a bit to get a full length: I muted the first ten minutes and replaced them with the glitched audio, then placed it again over the final ten-odd minutes so the two play against each other. I kind of like the effect.

And that's pretty much it for the process. It seems involved and complicated, but a lot of it is waiting around for files to process when it's something this long, and often, because all my equipment is second-hand and long in the tooth, I'm carefully watching heat stats and loads so nothing breaks down - you have to know how far you can push your equipment, especially if you can't afford to replace it.
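For the heat-and-load watching, lm-sensors plus watch covers it on most distros - assuming the sensors package is installed and has detected your hardware:

# refresh temperatures and load average every five seconds
watch -n 5 'sensors; uptime'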

Oh yes - I only use Linux, that's it, nothing else. I distro-hop quite a lot, so the machines I use run a variety of Debian-based distros: LMDE 2 and Linux Mint 18, both to be found here: https://linuxmint.com/.
