Tuesday 30 June 2020

The Ethics of sources Day 2 - FFmpeg and datamoshing




So this is part two of The Ethics of Sources. In part one I talked about what kind of sources I like working with, why, and how to download them. Today we are going to start working with those sources, and I'm going to run through some of the basic tools that I use.

The original talk can be found here - Ethics of sources day 2

Let's have a quick look at what we downloaded in part one.

 





I want to convert them to different formats. One of the most important parts of glitch art is working out how each format reacts to different methods, what textures and artifacts you can achieve, and which format suits the material you are working with, using the foundations of glitch art, which to me are hex-editing, datamoshing and misinterpretation.




Let's go ahead and convert one of the files we found yesterday. To do that I am going to be using the command line, FFmpeg and Handbrake.




The command line (specifically the bash shell) is probably the tool that I use the most in all of my work. It is common to most Linux distributions and macOS, and is even available within Windows 10 these days via the Windows Subsystem for Linux.
More information on the bash shell here https://en.wikipedia.org/wiki/Bash_%28Unix_shell%29 
More information on Windows subsystem for Linux here  https://docs.microsoft.com/en-us/windows/wsl/install-win10
Find releases of ffmpeg here https://ffmpeg.org/
Find Handbrake here https://handbrake.fr/


There are GUI-based file conversion programs like Handbrake, but the command line is quicker (these methods, with a little alteration and research, should work on Windows and Mac too; I use Linux almost exclusively, as I believe software and hardware should be free to use and change as you wish, just like video and content). So let's look at the files we downloaded yesterday and open up a command line.






What do I want to do with this file? That determines what I will convert it to. Do I want to datamosh it, or hex-edit it? If I want to datamosh, the best formats for that are libxvid, mpeg4 or h261, though equally zmbv works really nicely (though it takes an age to encode to), and for all methods it's best to use an AVI container. One of the programs I'm going to demonstrate, AviGlitch, really only works with libxvid; the other two can use multiple formats as long as they are in an .avi container. So anyway, let's go ahead and run this command in the directory containing the file.





And we can see FFmpeg doing its stuff. Just to break the command down quickly: -i means input this file to FFmpeg; after the filename, -an means leave the file with no sound (it's often easier to delete the sound, as some programs won't work with it intact); then the -c:v part means use this video codec, libxvid; -q is the quality, which I set to 9 here (experiment though, as sometimes setting a file to a low quality can give interesting results later); and the output file comes last, which is our freshly transcoded file in an AVI container.
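Putting that breakdown together, the command is something along these lines (the file names here are just placeholders, and you'll need an FFmpeg build that includes libxvid):

ffmpeg -i input.mp4 -an -c:v libxvid -q:v 9 output.avi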

Another handy part of FFmpeg is FFprobe, especially if you want detailed information about a file: what codecs it is using for sound and video, bitrate, size (i.e. 640x480, 1280x720), colourspace and metadata. So let's check the file we just created with ffprobe as a quick illustration.
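Assuming the freshly made file is called output.avi, that is simply:

ffprobe output.avi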







I'm going to datamosh this first, then hex-edit it, but to hex-edit it I'll probably convert it to a different format.

Now there are a couple of different ways of datamoshing: the long and hard but most effective way, using Avidemux (there are loads of tutorials on YouTube so I'm not going to cover that), or command-line ways which rely on brute force. My favourite method is a Ruby gem called AviGlitch, find that here http://ucnv.github.io/aviglitch/ (on Debian-based Linux you just have to do sudo gem install aviglitch, so simple).





It works by taking out all the keyframes. Depending on how much and what type of movement you have in the video, this will be more or less effective; you have to learn how to judge that and what type of movement works best. So let's play with the video we just transcoded and run AviGlitch on it. Unlike some of the later techniques I will show you, this works from any folder, i.e. it is universally available in the same way that FFmpeg is on Linux when installed via a package manager like apt or compiled from source: you just navigate to the folder your file is in, open a terminal in that folder and issue the command. It goes like this:
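Roughly, anyway. If I remember the gem correctly, installing AviGlitch also gives you a little command-line tool called datamosh, so a minimal run on our transcoded file would look something like this (file names are placeholders):

datamosh output.avi -o moshed.avi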






And this is what that looks like.





As you can see, frames start to bleed into each other and images break through; the colours also take on that blended, bleeding texture that we love so much, as if the colours have become liquid.

Let's do another one. If we look at the video by the Russian witch house band Icesp3ak that we downloaded yesterday, we can see a similar effect but with different movement and points of breakage.





If we want to upload this or work on it further we will need to bake or fix it. To do that I usually use Handbrake, as it seems to work the best of anything I've found so far, even with quite broken files.

There is a way of doing it through MEncoder on the command line which works very well, a lot better than FFmpeg in the same situation. So we could open a terminal in the folder with the file and do 'mencoder <input_movie> -oac mp3lame -ovc lavc -o output_movie.avi' (for when you want sound) or 'mencoder out.avi -nosound -ovc lavc -o output_movie.avi' (with no sound). I don't know why this works better than FFmpeg; you'll have to take my experience on trust. Find MEncoder via your package manager on Linux, or otherwise here (MEncoder is bundled with MPlayer, so get one and you get the other): http://www.mplayerhq.hu/design7/news.html







But I've recently found the easiest way is just to use Handbrake. While I'm here I'll convert to h265, as we will use that as an example when we come to hex-editing (I'll show you why then). H265 takes a long while to encode compared to other formats but it's worth it.
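If you'd rather stay on the command line, HandBrake also ships a CLI front end; a rough sketch of the same bake, assuming the x265 encoder and default quality settings, would be:

HandBrakeCLI -i moshed.avi -o baked.mp4 -e x265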



So let's see if that worked. Okay, so now we could upload this, or we could just flick through and see if there are any decent stills in there (I must say, use VLC, because it really is the best tool for video playback, related tasks and taking screenshots). Compare this to playback of the original output from AviGlitch shown earlier.

If Handbrake doesn't work (occasionally a file will be so broken it's impossible to put it through any baking program), we may have to use screen-capture software. Generally I use gtk-recordMyDesktop if the file will play in VLC; if not, try mpv (mpv will often play files VLC won't, though it does try to render datamoshed files without the datamoshing) or even gnome-mplayer. Also compare how different media players play the same file; sometimes the difference can be astonishing. For instance, VLC can often refuse to play H261 in an AVI wrapper, and when it does, the playback will leap all over the screen (I usually have VLC play a file on a loop).
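For the looped playback that's just VLC's loop flag (the file name is a placeholder):

vlc --loop moshed.avi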

Find MPV here - https://mpv.io/
Gnome-mplayer can be installed through apt (sudo apt install gnome-mplayer) or your package manager on Linux.



Now for datamoshing there are two other tools we can use, both command-line, and I'll cover Tomato first. Find Tomato here: https://github.com/itsKaspar/tomato

Tomato was written by Kaspar Ravel (find him here https://www.kaspar.wtf/) and quite simply it is one of the best tools there is for datamoshing. It is a simple Python script used on the command line (the video file must be in the same folder as the Python script).



The command 'python tomato.py -i adbreak.avi -m pulse -c 7 -n' run on one of the files we collected yesterday (icespeak.avi) gives us this (playback in VLC). What I hadn't realised was that the new version retains the sound; in previous versions you had to remove the sound, otherwise the script would throw a spanner in the works (and when converting the file with FFmpeg I'd forgotten to use the -an option, so the sound was left in).



And here is the full version:




There is one other tool that's handy for datamoshing, and that's FFglitch. Find FFglitch here -
http://ffglitch.org/


The same provisos as with AviGlitch hold true: get your file transcoded to xvid, without sound. Though it will work with mpeg and mjpeg, libxvid is just easier for our purposes here. The true master of this is Thomas Collet, and his work really is quite astonishing. Find it here https://vimeo.com/chepertom



FFglitch does have a lot of advantages: unlike the other approaches, FFglitch doesn't require you to bake the file after datamoshing. On the other hand it is way more complex to use and quite poorly documented (though it was pointed out to me by Vedran Gligo that if you issue the command ./ffedit -h it will bring up a handy help text), see below.





How does it work? First we have to parse the file we are working on to find the movement values (the motion vectors, which are then saved to a .json file) in our video. You have to move the file you are working on into the same folder as FFglitch, so all commands you see from now on will be issued from a terminal in that folder. We're going to take the ballerina again and work on that with this command:

./ffedit pirouette.avi -f mv -e test.json

This creates a .json file which we can then examine and modify, then re-apply back onto the original video.




If we look at that file we can see the values and think hmmm. We could change one value at a time, but in this case I'm going to do a global search and replace with values I know through trial and error will work, and that command looks like this:

sed 's/0/7/g' test.json > testhex.json

This is a type of command-line text editing which I'm going to cover in the next blog post. Here I'm using sed, which is short for stream editor (a basic GNU tool that most Linux distros come with). The 's/0/7/g' expression replaces every 0 with a 7 throughout the file, and the > redirects the result into testhex.json.







So having changed those values, we have to apply them back onto the original file with this command.

./ffedit pirouette.avi -f mv -a testhex.json pirouettehex.avi




So as you can see that's a lot more controlled, but we could change the initial start values, so let's do that and see how it changes the video. We'll change them to these:

sed 's/-1/-2/g;s/0/32/g;' test.json > testhex.json

Then reapply the values as before

./ffedit pirouette.avi -f mv -a testhex.json pirouettehex.avi



And this is the complete thing.



Notice also how quick it is, once you have the commands in the command line, to just repeat them or alter values. The trick is to look through the initial .json file that ffedit produces and spot values that are changeable and will have an effect.

I touched above on sed and editing a file on the command line, and this is the basis for what I'll be talking about in the next blog post: FFmpeg and hex editing.






Monday 29 June 2020

The Ethics of sources Day 1 - Working with video and open source tools


The Ethics of sources  ( Working with video and open source tools)


(This is the first post in a series of 7 based on the online talks and discussions that I was asked to do as part of an online residency with Hackn3t | hacklab01, here https://hacklab01.org/hackn3t, from 22-28 June 2020.)

The original talk can be found here - Ethics of sources day 1

Introduction

The general theme I've chosen for this series of talks/demonstrations is 'The Ethics of Sources': an exploration of finding sources to work with from online or older legacy media, demonstrations of techniques that I use on an everyday basis in my own work, and some of the ethical considerations and consequences that stem from that. I should also say that all work and programs used in these presentations are being run on a Dell 9010 desktop running Linux Mint 18.3; some of the programs shown have Windows or macOS versions or equivalents, and all are open source. Get Linux Mint here - https://linuxmint.com/

We swim in an ocean of information. Every desktop is a metaphor: the desktop is never real, the files it holds are never real. We move in a space which does not exist, pretending it does. The Internet is the same, as is the content we find there. We can experience it and interact with it, and as neither place is real or separate we can cross between the two and bring back content, because in the digital there is no barrier between here and there, owned and non-owned; each is an extension of the other.



A video can look like this





 But it can also look like this 

 





And it can sound like this 



 


I want to show how the choice of source can decide its final form because of its format or content. When I look at a video source for use in my work, I also look at what codec it is originally encoded in and what wrapper it uses. (By 'format' I mean the codec and the container it is wrapped in, for example h264 in an mp4 container, or something like xvid or zmbv in AVI.)

By 'content' I mean whether there is movement, or perhaps whether it is in black and white or colour. If black and white, what kind of space, outline and shadow is there? If the source is lo-fi, is it grainy and artifact-prone (h264 vs mpeg vs msvideo1 vs h261)? As a quick example, this is the same file encoded in those four formats:

H264






Mpeg2 at a low bit rate

  





From the dawn of time - Msvideo1



 


 H261 ( 352x288)






As you will notice, even though this is the exact same film clip encoded using the same software (FFmpeg), the quality and appearance that each codec gives is quite different.
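For reference, the sort of FFmpeg commands that produce those four versions look roughly like this; the file names are placeholders, the low mpeg2 bitrate is a guess at what I used, and h261 only accepts a few frame sizes, hence the 352x288:

ffmpeg -i clip.mp4 -c:v libx264 clip_h264.mp4
ffmpeg -i clip.mp4 -c:v mpeg2video -b:v 300k clip_mpeg2.mpg
ffmpeg -i clip.mp4 -c:v msvideo1 clip_msvideo1.avi
ffmpeg -i clip.mp4 -c:v h261 -s 352x288 clip_h261.avi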


I also want to show how various techniques can be suited to specific material. During this first talk I will source film, news footage from live broadcast, YouTube content and live capture, and subsequently distress and recreate these in different ways over the course of the following days, ending in a final live, real-time mixed film performance using the techniques covered.

I also want to examine the reasons for, and the ethics and consequences of, sourcing and reusing found material, and our responsibilities to the material from an open source and Creative Commons perspective, and to talk a little about copyright, my attitudes towards it, and how that relates to remix culture and glitch art in particular.

One of the distinguishing features of glitch art is that, arising from the web and net art, it feeds on online content for its source material: it lives within the screen. As a painter I took my source material from real life. As a glitch artist, where do I get my sources from? Let's go to one of the primary resources I use, the Internet Archive.

(I should add there is some controversy around the Internet Archive at the moment, mainly pushed by the usual villains trying to hoover up and control all content for their own profit. Some films uploaded to the Archive are also claimed to be public domain when in fact they are not, but this is no reason to go after the Internet Archive in an effort to close it down in the interests of copyright-holders, as their interests are quite often against the interests of the common good.)

So let's load up archive.org to show what sort of resources are there and then start picking.







So, let's see what we can find on archive.org. What I mainly look for are sci-fi, horror and film noir; I'm a big fan of film noir. I'm looking for something that isn't too long: the longer the film, the longer it takes to re-encode. Things you have to consider when working like this are how fast your computer is and how long each format will take to encode. If we are transcoding (changing from one codec to another, i.e. h264 to h261), it is generally quicker to transcode small sections of video rather than the whole video.
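As a rough sketch of what I mean (the timestamps and file names are just illustrative), you can seek to a point in the film and transcode only a short section in one go:

ffmpeg -ss 00:05:00 -i film.mp4 -t 30 -an -c:v libxvid -q:v 9 clip.avi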

So what am I looking for here? Something short, preferably black and white, with movement and a sharpness to it, with space and outline and high contrast. If I'm looking for colour I'm probably looking for widescreen. These are choices to make as well, widescreen vs 4:3: widescreen has a lot of detail, and the more detail the wilder the glitches usually, but I like 4:3 a lot more as it's a more contained space. Often older black and white films are only available in 4:3 ratio as well.

If we look at this film we can see it's quite grainy, a bit lo-fi. Now that could work well if I want a blocky and grimy look to the end result, so let's take that one and download it with Transmission (a BitTorrent client), as archive.org supports BitTorrent. Get Transmission here - https://transmissionbt.com/




I'll check over what formats I can download it in; I'll probably go for xvid, as that's handy for datamoshing and is just a handy format.

Transmission in action

 





 Looking at video to check out its graininess







Now while that's downloading I will look for something in black and white. Let's look in film noir. I spot a film called 'I Love Trouble'; that sounds good, so let's look at that and then download it.

 I love trouble






As I flick through these videos you can see, if we scroll down a bit, that most if not all of the movies hosted on archive.org are public domain (though as stated before some may not be; check the comments below each movie).

Yes, I could use a pirated torrent of a newer film as material, but for the longer works I make I prefer to use public domain films, because it avoids problems with things like copyright strikes on YouTube. Copyright does start to become an issue when you make work and upload it, even though the internet seems to be a place where you can just take what you want. If you want to present unique works made out of the building blocks of old films then it's best to err on the safe side, unless of course we are making a political point or referencing current events; more on that later when we get into live capture.

  We could also download porn







Glitch art has a big place within it for what is known as erotic glitch. If one of the defining needs of glitch art is movement for datamoshing, then porn has it all. And it's everywhere. One of the best examinations and uses of porn in glitch art has been by Dom Barra.

 
But the prevalence and ease of access to porn can also be ethically problematic (as well as being an insta-ban on Facebook for daring to share a nipple, but let's not ban those Trump white-power videos eh; Facebook's hypocrisy can be quite chilling). Content hosted in those places has sometimes been found to be revenge porn or non-consensual, so when downloading material to work with we have to check our ethics. Is it exploitative? Is it consensual? Is it uploaded with the permission of those involved (revenge porn)? By using it are we just continuing a cycle of exploitation of those already exploited?


We could also lift videos from, say, YouTube if we wanted to be a bit more current, reflect trends or styles, or use something for political commentary purposes. So let's go to YouTube and see what we can find there. I have a thing about Russian witch house at the moment, so let's download some of that. To do that we will need to use the command line and the handy Python script youtube-dl, which works like this.
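At its most basic it is just the script name followed by the video's address (the URL is a placeholder here):

youtube-dl <video_url>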







Now that video at first glance doesn't seem to have much movement in it, so let's just find another one to compare it with when we come to datamoshing and hex-editing it. Maybe some dancing, I don't know, this maybe?






You will notice in all of these that I have left out the sound, and I haven't uploaded the original video, just a video of me showing the downloading. One sure way of getting your work taken down and a copyright strike on YouTube is to include the original video and sound, and I sure as hell don't want to allow some company to monetise the viewing of one of my videos through partnering. So I've taken the original sound off, and the videos as seen are played and recorded on the desktop, so there are a couple of layers of obfuscation to keep ahead of the content police; not a problem once you've actually worked on the video, but at this stage it is. Just to emphasise, I'm not trying to pirate the original content. I'm sourcing material in much the same way as a collage artist would; it's the scaffolding I'm going to build something around, not the thing in itself.

Having downloaded those, we could always take some material from live broadcast, maybe something from the news? So fire up the TV (mine is running from a satellite box into my capture card), let's see that working first in a viewer and then capture it using Cheese webcam booth (which captures in webm format). The main software I use in conjunction with the capture card (which is an ancient analogue Pinnacle PCTV Rave) is Tvtime, find that here http://tvtime.sourceforge.net/
Cheese webcam booth, common to most Linux distributions, is here https://wiki.gnome.org/Apps/Cheese
And last but not least, the one I use most of all these days because it's so flexible is OBS Studio, find that here https://obsproject.com/





The three channels that I use the most are Talking Pictures, the Horror Channel and BBC News, all available on Astra Europe via satellite, all free to view. Commercial TV is a good resource, as adverts often contain a lot of movement and fast cutting; I use those a lot. Cheese webcam booth also records in VP8 webm format, which gives us some nice instant material to hex-edit. For example this:












Now we could also make our own videos and go out and film something. I don't do a lot of that; I used to when I started getting into circuit bending.









As well as being able to capture from webcams or capture cards, with Processing you can distort video in real time with interesting effects like slit-scan, time displacement and spatio-temporal disturbance, as well as capture a series of stills which can later be turned into video after maybe hex-editing or running through the GLIC codec, but that's for a different day.
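As a quick aside, turning a numbered series of stills back into video is a one-liner in FFmpeg; the frame-naming pattern here is only an assumption about how the stills were saved:

ffmpeg -framerate 25 -i frame-%04d.png -c:v libx264 -pix_fmt yuv420p stills.mp4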




 capture from webcam using Processing







And if we can't use youtube-dl or torrents or capture cards or webcams to capture the video we see on the screen, we can always screen-capture using, say, gtk-recordMyDesktop (so for now we will do it quick and dirty with recordMyDesktop).
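gtk-recordMyDesktop is just a front end for the recordmydesktop command, so from a terminal that would look something like this (the output name is a placeholder):

recordmydesktop -o capture.ogv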



So there are certain questions we can ask in the light of what I've been explaining and showing.

The first, of course, is how ethical it is (the ethics of sources, remember) to simply take a source from online and turn it into something new; at what point does a work stop being derivative and start becoming unique in itself?

Firstly, I do acknowledge that copyright exists (as I've already noted, the audio in the video segments I've shown is muted to get around YouTube's copyright bots; my original intention was to do this series of presentations/demonstrations live from my desktop, showing a typical working day of gathering material and working on it, with insights into how and why I do it, but bandwidth problems and dysfunctional broadband companies put paid to that and I had to use YouTube to upload this sequence of videos describing some of my search processes). But in the online environment that argument becomes moot: everything is in one way or another copyable or rippable, either through software or hardware.

Ethically I don't want to steal somebody else's work and present it as my own; that's basically wrong. But I have no problem taking a digital medium and making something from it as a framework, in the same way that in the past I would throw down a large number of shapes and forms onto canvas until I reached the point where something new arose from them. I see taking online material in the same way: I choose what I choose carefully, looking for movement or textures that I know might work with my methods, and proceed from there. Sometimes traces of the original sneak out, but so distorted and abused that they are in themselves something new and unique.

I also feel a responsibility, when sharing what I do, to share it under Creative Commons terms. Online is where I live; it is an ocean of content and software that I can take from, but I must also give back. I must allow my work to be shared and reused under the loosest of terms. In some ways it's a two-way conversation with the medium that is the net: to take, you have to give back.

There is also a responsibility to the material (how so: political, moral, technical?). If I'm using news material there is, I feel, a responsibility on my part not to re-edit or misinterpret political themes, and not to further hate or prejudice of any kind.

I make my work on the back of software that I'm allowed to use for free and give to others for free, to change and use as I will as long as I grant those same rights to others, and I see my work in the same way. Without that software I would not be able to make the work that I do; I'd be stuck with a basic Windows system, wondering how to pay for video editing software or image editing software, or even something as basic as video playback software, all on hardware I already own.

All the software that I use is free and open source, and it is from that philosophy that my own attitudes arise, but also from seeing the uses and abuses that traditional art has been put to: the unique object, the investment-vehicle art of gallery stars like Hirst and Koons, the exploitation of artists such as Jean-Michel Basquiat, and the gatekeeper mentality of the art establishment. Far better to publish my work online and allow anyone to use it and reuse it than to feed into a system which has become just another arm of investment banking. The power is the idea, the power to give away and the power to pass on ideas and techniques that empower others who may never have thought of themselves as artists before. Remix, reuse, recycle, and grant others the right to do likewise.




