Tuesday, 14 December 2021

Auto hex script


Thought I'd post a few of the scripts I've been working on during the year. This one takes a video, divides it into stills (at the framerate you specify) in an image format you specify, hex edits each still in turn with the values you give the script, then reassembles everything back into video at the end and cleans up after itself. Example here. It's fairly unforgiving in that you have to type in file names and numbers precisely; there are no backsies, so if you make a mistake and notice it before the script goes to work you will have to ctrl-c and start again, and also remember to name the output file differently to the input file. Longer videos can generate a lot of stills, so make sure you have the space for that, and depending on computer speed and the speed of your i/o (ssd vs spinning disk) this can take a long time: minutes for a small 1 minute video, to hours for something larger, depending on the frames per second you specify. You will need to have imagemagick and ffmpeg installed for it to work. It should work on Windows 10 from git-bash, and definitely works on Linux; either way, open a terminal in (or navigate to) the folder where your video file is. Anyways, this is the script:


#!/bin/bash
# based on this script
#  find . -type f -name '*.png'|while read filename; do echo ${filename};xxd -p ${filename} | sed 's/de/0a/g' | xxd -r -p >/home/user/transit/${filename}; done

echo -n "File to work on ? : "
read n

echo -n "Image format to work with ? : "
read im

echo -n "Hex value from ? : "
read fr

echo -n "Hex value to ? : "
read ft

echo -n "What do you want to name output file ? : "
read z

echo -n "Framerate as a number between 1 and 30 ? : "
read m

echo -n "crf value for final video ? :"
read o

# convert to image format

ffmpeg -i $n -vf fps=$m image-%05d.$im

#find and hex edit each image

find . -type f -name '*.'$im''|while read filename; do echo ${filename};

#get header
 head -n 3 ${filename} > head.$im

#strip header ( to avoid damaging it)
  sed '1d'  ${filename} > swap.$im;

#copy that to 2nd swap file to avoid overwriting
mv swap.$im swap2.$im
 
xxd -p swap2.$im |sed 's/'$fr'/'$ft'/g'| xxd -r -p > swap.$im;

cat head.$im swap.$im > ${filename};

rm head.$im
rm swap.$im
rm swap2.$im
 

done 


ffmpeg -i image-%05d.$im -vcodec libx264 -r $m -crf $o  -pix_fmt yuv420p $z


rm *.$im
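Since the prompts are unforgiving, here is a purely hypothetical example of what a run might look like (assuming you've saved the script as, say, autohex.sh and made it executable; the file names and values below are just placeholders, not from a real session):

./autohex.sh
File to work on ? : clip.mp4
Image format to work with ? : bmp
Hex value from ? : a5
Hex value to ? : 3c
What do you want to name output file ? : clip-glitched.mp4
Framerate as a number between 1 and 30 ? : 25
crf value for final video ? : 18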







Friday, 22 October 2021

Desktop feedback loops and performance.

 

This is going to be a bit different from the last few posts, as it revolves around ideas I've been having in my own work for the last while. Anyone who caught my performance for Fubar 2021, or reads this blog, knows I've been thinking around ideas of the desktop as a performative space, and as such this is based more around using Linux than Windows; I will give Windows examples, but Linux works a lot more elegantly.

What do I mean by performative space? Well, for me it started pretty much with my experiences over the last year or so (and probably all of your experiences as well) of the pandemic and being stuck at home day after day, generally in front of my desktop or laptop, making work and talking with people online, a lot. It seemed that this was where I lived: other than shopping carefully once a week, walking the dogs, eating and sleeping, this was pretty much everything I did. Around this time I came across this video of Jodi's work my%desktop

https://www.youtube.com/watch?v=CPpUQQ7vMBk

And I'd also started collaborating with the people working in medialab via Format C https://formatc.hr/medialab-2021-diwo/ on the online suda project, which is essentially a collaborative computer space hosted online in the form of a computer desktop which can be altered, changed and remade by the user within certain parameters, and which also embodies a manifesto centred around sustainability and reuse (it's a good manifesto, I advise everyone to read it; find it here https://pads.ccc.de/sda-manifesto). I had to research a different kind of desktop, TWM and subsequently CTWM, which look unlike anything we are used to as a desktop space in the recent past (TWM info here https://en.wikipedia.org/wiki/Twm), and they allow us to do some very odd things indeed. I should state this is running on Linux, and some of the effects I can achieve here are specific to the distribution used, i.e. Parabola Linux, such as this simple command 'xsetroot -b -grey', which on any other distribution of Linux will just set the background to grey, but on Parabola Linux in TWM or CTWM makes the windows leave trails and impressions behind wherever they go (this was incorporated into Suda).

And I can't remember exactly how, but as I was researching TWM and CTWM I was looking into things you could do with ffplay. I'd investigated it before for my work with displacement maps, which led to something else which I demoed at this year's Fubar (but we'll get to that in a moment), and rather than grabbing the video output from a webcam and displaying it via ffplay, I found there was a way to open a window via ffplay which mirrored the desktop around it by focusing on wherever the mouse was, as if the mouse were a lens.

An example of this can be found here exampleX11grab.mp4

https://youtu.be/G_0Q4mLlTfE

that command was this

ffplay -f x11grab -follow_mouse centered -framerate 10 -video_size 640x480 -i :0.0

as you can see the focus of that window follows the mouse

if we run two windows of different sizes we can start to see possibilities for feedback loops
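For instance, something like this: two copies of the same command from above, just with different sizes, each run in its own terminal.

ffplay -f x11grab -follow_mouse centered -framerate 10 -video_size 640x480 -i :0.0

ffplay -f x11grab -follow_mouse centered -framerate 10 -video_size 352x288 -i :0.0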

and this is pretty much the implementation I came up with for Suda ( but run from a script )

Mantissa

Which brings us to mantissa. To show some of the concepts I want to show you before we get to a possible Windows implementation, I will have to use Linux to illustrate what I'm talking about.

So, most of my experimentation at the moment, as I say, is around the desktop as a performance space, and most of this started with mantissa, which is a collection of scripts that utilises two webcams and various windows to create and capture video endlessly.

So, basics - this is all run through shell scripts to begin with. If you don't know what a shell script is, well, a shell script is a text file into which you add the commands that you would normally enter one by one on the command line; a shell script (run from a bash shell or git-bash) means you can automate tasks and avoid pesky typing. The full mantissa scripts are here https://crash-stop.blogspot.com/2021/07/mantissa-self-capturing-displacement.html if you feel like experimenting with it.

As an example, this is a shell script called afish.sh. It calls up two windows in the middle of the screen using x11grab, which act as webcams of a kind, depending on where you place the mouse (the windows show what the mouse is hovering over). It's kind of a test script for me to try out window sizes and placements. Originally this would have been on Parabola Linux, but Linux Mint is better for this type of multimedia so I'm using that (Parabola is also good, but when I start using multiple windows and multiple videos playing back I need a decentish video card to offload some of the graphics processing from the cpu).

Script below (copy and paste into geany, from '#! /bin/bash' to the last 'cif -i :0.0'); on Linux each line is continuous.

#! /bin/bash


mate-terminal --geometry=17x18+0+3 -e 'ffplay -f x11grab -follow_mouse centered -framerate 25 -video_size 640x480 -i :0.0'
mate-terminal --geometry=17x18+0+3 -e 'ffplay -f x11grab -follow_mouse centered -framerate 25 -video_size 352x288 -i :0.0'
#mate-terminal --geometry=17x18+0+3 -e 'ffplay -f x11grab -follow_mouse centered -framerate 25 -video_size 352x288 -i :0.0'
#mate-terminal --geometry=17x18-1-2 -e ffplay -f x11grab -follow_mouse centered -framerate 25 -video_size cif -i :0.0
#mate-terminal --geometry=17x18-2+0 -e ffplay -f x11grab -follow_mouse centered -framerate 25 -video_size cif -i :0.0

 

So that's simple enough, but if we add two webcams and use those as input instead of x11grab, we can start to do more interesting things with feedback.

an example command for each camera would be something like

 ffmpeg -i /dev/video0 -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 1280x720 - | sed 's/00/00/g'  | ffplay -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 1280x720 -  

and then the same command for camera 2 but with smaller resolution

ffmpeg -i /dev/video1 -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 960x540 - | sed 's/00/00/g' | ffplay -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 960x540 -

Now we have the same windows with the same placement but running two webcams

now there is a windows equivalent to

ffplay -f x11grab -follow_mouse centered -framerate 10 -video_size 640x480 -i :0.0 

which is

ffmpeg -f gdigrab -framerate 6 -offset_x 10 -offset_y 20 -video_size vga -i desktop -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 - | ffplay -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 -

If you open two git-bash terminals and run this command in both, it will give you two windows that can, to a degree, give you the same interactivity as on Linux, except you will have to move the windows around to achieve feedback, whereas on Linux you can just move the mouse.

( documented here in the ffmpeg online manual https://ffmpeg.org/ffmpeg-devices.html#gdigrab )

which isn't as elegant, but it is functional and useful. So everyone on Windows try that, and everyone on Linux try the first command (look in the shared notes for the command), i.e.

ffplay -f x11grab -follow_mouse centered -framerate 10 -video_size 640x480 -i :0.0

try opening two of these at the same time ! Or changing the size of one to this ( depending on the size of your desktop)

ffmpeg -f gdigrab -framerate 6 -offset_x 10 -offset_y 20 -video_size vga -i desktop -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 1280x720 - | ffplay -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 1280x720 -

https://www.softpedia.com/get/Desktop-Enhancements/Other-Desktop-Enhancements/Mofiki-s-Coordinate-Finder.shtml

helps to give co-ordinates on the desktop for input on Windows systems (pretty easy to use: just place your mouse over a point on the desktop and press the space bar, then change the x and y offset values in the Windows command to those).

* A word of caution when using sites like Softpedia on Windows: be careful (especially if not using an ad blocker, and why aren't you? I'd personally recommend Ghostery) that you click on the correct link, as these sites are littered with less than trustworthy links and software.

if we then start to add hex editing into the mix it gets more hectic

We don't have to run webcams; we can just use virtual windows we call up through the script instead, and the thing is you can have as many windows like this open as you wish, as many as your computer can handle, and even more importantly you can make them different sizes (up to the total size of your screen, but more on that in a while), depending on your pc or laptop's power and graphics capability. Remember to keep an eye on your temperatures; on a laptop that fan should start whizzing away!
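As a rough sketch of what I mean, on Linux you could background a few of these from one small script (the sizes here are arbitrary; pick ones that suit your desktop):

#! /bin/bash
# open three differently sized desktop-mirroring windows at once
ffplay -f x11grab -follow_mouse centered -framerate 10 -video_size 1280x720 -i :0.0 &
ffplay -f x11grab -follow_mouse centered -framerate 10 -video_size 640x480 -i :0.0 &
ffplay -f x11grab -follow_mouse centered -framerate 10 -video_size 352x288 -i :0.0 &
wait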

So, having discovered all of that, I went on to design a bigger shell script, which took me a couple of weeks to create, called mantissa. It runs two webcams, captures the output of those webcams every thirty seconds or so, records that as a thirty second clip, then runs a displacement map over that file and displays it. This is a work in itself and functions as a standalone, set-it-running-and-walk-away piece, but I found it more interesting to take the techniques I'd learned and start playing with the desktop by opening multiple windows and videos as well, using it as a massive feedback environment and recording that with obs-studio capturing the whole desktop.

Like this: https://youtu.be/matpsXvoP0c. The sound is created using some of the techniques we covered on day 4. And, referring back to what I talked about on the first day, I like that the elements of the desktop itself are still there and visible and that they become part of the work as well, rather than trying to hide them. I think that's important to me; as I said before, I'm not interested in naturalism or the suspension of disbelief.
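If you want a feel for the shape of that capture / displace / display loop without wading through the full mantissa scripts linked above, here is a very stripped down sketch of the idea. The device paths, sizes and the thirty second timing are assumptions, and the real scripts do considerably more:

#! /bin/bash
# stripped down sketch of the mantissa idea: grab ~30 seconds from each webcam,
# displace one clip with the other, show the result, then repeat forever
while true; do
  ffmpeg -y -f v4l2 -t 30 -i /dev/video0 -s 640x360 clip0.mp4
  ffmpeg -y -f v4l2 -t 30 -i /dev/video2 -s 640x360 clip1.mp4
  ffmpeg -i clip0.mp4 -i clip1.mp4 -lavfi '[1]split[x][y],[0][x][y]displace' -f rawvideo -pix_fmt yuv420p - | ffplay -autoexit -f rawvideo -pix_fmt yuv420p -s 640x360 -
done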

Because I'd been talking to people a lot on other platforms like Reddit and Tumblr after leaving Facebook and Instagram earlier in the year, and via Glitchlab and Medialab I was meeting people who were using Windows more than Linux, I'd started to try and recreate some of my methodology on Windows (which is why these workshops are primarily Windows based), and the question I asked myself was: how could I achieve something similar on Windows?

Back to windows

Given the original command line above, it's possible to open windows which act similarly to the Linux version by doing this

ffmpeg -f gdigrab -framerate 6 -offset_x 10 -offset_y 20 -video_size vga -i desktop -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 - | ffplay -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 -

If we alter it a little we can start to build up texture with live hex editing, by doing this on Windows

ffmpeg -f gdigrab -framerate 6 -offset_x 10 -offset_y 20 -video_size vga -i desktop -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 - | sed 's/808/181/g'| ffplay -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 -

You can get more control over the hex-editing by changing that command slightly to this

ffmpeg -f gdigrab -framerate 6 -offset_x 10 -offset_y 20 -video_size vga -i desktop -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 - | xxd -p | sed 's/808/181/g'| xxd -p -r | ffplay -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 -

And as you can see, if we start running a video in the corner of the screen (the top left hand corner, which is where this window grabber is focussed) we can start to do more interesting things again. Unlike Linux, the window doesn't reflect what is under the mouse pointer, so we have to play about with the x and y offsets, i.e. this bit

-offset_x 10 -offset_y 20 to work out where the window will select to display

Or, if you know the size of your screen, you can first make a window of that full size active before adding more, smaller viewports.

We could also try changing the final output codec to see what happens. Some codecs work and some don't; msvideo1 and cljr do, and near-raw codecs are better, so it's worth experimenting with different settings here. If you don't know what codecs you have available, just open a terminal (git-bash etc.), type in ffmpeg -codecs and scroll back through the list.
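As a sketch of what I mean (assuming your ffmpeg build includes the msvideo1 encoder, and with the avi container chosen simply because it will happily hold it when piped), the Windows version might look something like this:

ffmpeg -f gdigrab -framerate 6 -offset_x 10 -offset_y 20 -video_size vga -i desktop -f avi -c:v msvideo1 - | ffplay -f avi -

Try swapping msvideo1 for other encoders from the ffmpeg -codecs list and see which ones survive the trip.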

on Linux this command is

ffmpeg -f x11grab -follow_mouse centered -framerate 5 -video_size cif -i :0.0 -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 - | xxd -p | sed 's/808/181/g' | xxd -p -r | ffplay -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 -

We can also use displacement maps within this command, by doing this

ffmpeg -f gdigrab -framerate 6 -offset_x 10 -offset_y 20 -video_size 640x360 -i desktop -f gdigrab -framerate 6 -offset_x 10 -offset_y 20 -video_size 640x360 -i desktop -lavfi '[1]split[x][y],[0][x][y]displace' -f rawvideo -pix_fmt yuv420p - | xxd -p | sed 's/808/181/g'| xxd -p -r | ffplay -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 -

and by placing different windows within the field of view we can add or alter what the viewport sees and distorts

notice I’ve left the hex editing part in

on linux the equivalent would be

ffmpeg -f x11grab -follow_mouse centered -framerate 25 -video_size 640x360 -i :0.0 -f x11grab -follow_mouse centered -framerate 25 -video_size 640x360 -i :0.0 -lavfi '[1]split[x][y],[0][x][y]displace' -f rawvideo -pix_fmt yuv420p - | xxd -p | sed 's/808/181/g'| xxd -p -r | ffplay -f rawvideo -pix_fmt yuv420p -s 640x360 -

Add webcam

Or if we want to feed in a webcam rather than a video on windows we could do this

first find out what your web cam is called using this command from git-bash

ffplay.exe -list_devices true -f dshow -i dummy

Look for the output which names your camera; on mine it's "Live! Cam Sync HD VF0770". Activate the camera by pasting this into git-bash and pressing enter

ffplay.exe -f dshow -i video="Live! Cam Sync HD VF0770" ( substituting the name of the camera in your output )

Your webcam should now display (though this is still not as flexible as the Linux follow_mouse function in x11grab).

Now we know the camera is operational and we know what dimensions it is going to display in (mine is 640x360, i.e. 16:9 ratio; yours might be different, though if it's newish it will either be like mine or 640x480, 4:3 ratio).

ffmpeg -f dshow -r 5 -i video="Live! Cam Sync HD VF0770" -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x480 - | xxd -p | sed 's/808/181/g' | xxd -p -r | ffplay -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x480 -

Note the -r 5: I have to put that in to reduce the input framerate, otherwise the process stalls; it's not so much of a problem on Linux.

the equivalent Linux command would be

ffmpeg -i /dev/video0 -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 - | xxd -p | sed 's/ff/18/g' | xxd -p -r | ffplay -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 -

(where /dev/video0 is the first attached webcam )

and of course we can use these as a basis for displacement again

on windows 

Replace the name of the webcam I have with your own, found by using

ffplay.exe -list_devices true -f dshow -i dummy

and alter the dimensions to what works with your camera, i.e. replace 640x360 with vga or cif or a dimension you know your camera will work with.

ffmpeg -f gdigrab -framerate 6 -offset_x 10 -offset_y 20 -video_size 640x360 -i desktop -f dshow -i video="Live! Cam Sync HD VF0770" -lavfi '[1]split[x][y],[0][x][y]displace' -f rawvideo -pix_fmt yuv420p - | ffplay -f rawvideo -pix_fmt yuv420p -s 640x360 -

obviously as what we are dealing with here are generated feedback loops we could always point the webcam at the screen rather than ourselves !

If you don't have a webcam but do have obs-studio installed, you can use its built-in virtual webcam function (which actually mimics, in some ways, what we can do on Linux with the 'x11grab -follow_mouse centered' part, allowing us to create a roving window that sees what the mouse sees).

first open obs-studio and add the obs virtual camera to video capture devices , then add window capture

start

ffmpeg -f gdigrab -framerate 6 -video_size vga -i desktop -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 - | ffplay -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 -

( with displacement and hex editing using only virtual windows )  

ffmpeg -f gdigrab -framerate 6 -offset_x 10 -offset_y 20 -video_size 640x360 -i desktop -f gdigrab -framerate 6 -offset_x 500 -offset_y 500 -video_size 640x360 -i desktop -lavfi '[1]split[x][y],[0][x][y]displace' -f rawvideo -pix_fmt yuv420p - | xxd -p | sed 's/808/181/g'| xxd -p -r | ffplay -f rawvideo -pix_fmt yuv420p -s 640x360 -


select the window that just opened as the window capture device

do this

ffplay.exe -f dshow -i video="OBS Virtual Camera"

then start the virtual camera (in the controls section, bottom right hand side of the obs-studio main interface), minimise obs-studio and watch what happens!

Interestingly if you record the output it only shows the virtual webcam which could be exploited in interesting fashions !

Like this , a brief proof of concept

https://youtu.be/MVrsL6L6FiE

And that's pretty much it theory-wise. You can build up layers of these windows and at the same time play pre-recorded video and sound in the background, use it as a performance tool or record the output. To me it allows exploring and remixing in real time, separate from ideas of video editing, and becomes a fluid tool for exploration.

















Thursday, 21 October 2021

Working with sound, Audacity, ffmpeg and sox

 

Day 4


Adding sound: what sort of sound do we want to add to our video? Creative commons options, reusing sound from the video, ways to treat that using Audacity, or cd scratching and recording methods. Turning sound into video, operating on that, then reversing the process to retrieve sound; using Audacity and the Gimp to modify sound, based on Letsglitchit's reverse sonification methodology optimised for open source software.


Today we will be working with sound

We will mainly be working with ffmpeg and sox today. I only added sox to the software list recently, so if you haven't got it installed already do this: open powershell as administrator and do 'choco install sox.portable'. On Linux use your package manager to install it, or from the terminal on Debian based distros do sudo apt install sox, or on Arch based systems do sudo pacman -S sox.

We are also going to be using Audacity today, and there is a known problem with Audacity on Windows (but not Linux) in that to import certain files like ogg audio or m4a we need to have ffmpeg installed. Unfortunately the version it requires is older and slightly different to the one we installed using chocolatey, so we will have to download and install a different version for Audacity to reference. Issue and links here https://manual.audacityteam.org/man/installing_ffmpeg_for_windows.html. Installer is here https://lame.buanzo.org/ffmpeg64audacity.php

One of the biggest influences on my attitude towards sound is probably this music by The Caretaker, and I'm going to play it back so we can kind of get in the mood, not too much of it but some: https://www.youtube.com/watch?v=wJWksPWDKOc. It's the atmosphere that I most like about it, and to a great degree when I'm making sound to go with my videos that's what I'm thinking of: atmosphere.

My general attitude towards sound is: if it doesn't need it, don't add it. Sometimes what is going on visually in a video will be enough in itself, and sound that happens through the process of databending or hex editing a video should stay in, or be the basis for what is added or re-added. Failing that, we can take the original audio from the video, if it has any, and play about with it in various ways. So I'm going to run through a few of those and then let you loose on breaking some sound.

We could find creative commons licenced music to add, or even public domain sources (there are a lot on archive.org); if I use those I generally databend them in various ways. One of the most interesting ways I've found is to transcode to a codec that is very, very compressed, such as codec2 (info on codec2 here https://www.rowetel.com/?page_id=452). Unfortunately the Windows build of ffmpeg we are using doesn't include codec2, so I'll have to encode this on Linux and bring it back; I'll show you that in a second.

Now we’ve talked about extracting audio from video using ffmpeg

( find a video file with audio open git-bash or bash terminal in that folder )

and do this ffmpeg -i somefile.extension prisoners.wav

Now we can listen to this but how would we begin to glitch it ?  One effective way is to transcode it to a codec which uses a lower bit rate , like speex 

On Windows use libspeex at a really low bit rate (this is 8.2 kbits per second), like this

ffmpeg -i somefile.wav -c:a libspeex -q 0 somefile.spx

If you play that back in vlc you can hear we get a more crunchy, distorted texture.

Or we can encode to the gsm codec using sox (install using 'choco install sox.portable'), as our version of ffmpeg won't encode to gsm

sox somefile.wav somefile.gsm

Then play that back. VLC won't play this type of file to my knowledge, but another video player like mpv (choco install mpv) will; or, if you don't want to install yet another media player, just transcode that file back to wav using ffmpeg, or use ffplay, as ffplay will play audio files as well.

ie ffplay somefile.gsm 
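And if you'd rather have a wav back to keep working on, the transcode is a one-liner (the file names here are just placeholders):

ffmpeg -i somefile.gsm somefilecrunchy.wav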


and weird things really happen if we do this with sox

sox -r 1k -e signed -b 8 -c 1 somefile.wav somefiledownsample.wav

playback that file using vlc or ffplay 

but if we encode it to c2 on Linux we get this

using this 

ffmpeg -i somefile.wav -c:a codec2 somefile.c2

we get something far more interesting. I've used this codec a lot myself .

But cross-platform on Windows and Linux there is a newer codec called opus (which is also fully open source) which acts very much like c2; indeed it's even crunchier.

so if we take our test file we can do this

ffmpeg -i somefile.wav -c:a libopus -b:a 500 somefileopus.ogg

Vlc will play opus files , but ffplay will as well

So we can mess with sound without even glitching it, by using sox and ffmpeg to encode to formats unsuited to the material at low bit rates (the final file was encoded at 500 bits per second, whereas a low bit rate mp3 would be 128kbps, and the lowest you can go with mp3 is 32kbps). So if we found a creative commons licenced or public domain music source, it would sound a little like this

( music from here https://archive.org/details/07-rare-and-hot-1925-1930-historical-vol-12) Download that.

Let's play memphisjazzers.mp3 first, as it is, as an mp3.


Now encode the same file to opus at 500bps


ffmpeg -i memphisjazzers.mp3 -c:a libopus -b:a 500 memphisjazzers.ogg

play memphisjazzers.ogg

Now if we open that in Audacity we can play with it a bit more. Open Audacity, add some reverb, play it back; it's starting to sound a little odd. Let's add some tempo changes and normalise it to adjust the volume. I tend not to speed up, more slow down. It's like a nightmare comb bouncing off instruments; the beeps and bops caused by the low bit rate are marvellous. Of course we could edit this more later on, or chop and reverse etc., or take sections out and reuse them. But I like it pretty much as it is.

We can save that and then use it later on , I'd tend to save as wav if I want to work on it more later.

So with a complete soundtrack for a video we could also do the same thing.
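For instance, you could pull the audio out of a video and crush it straight to opus in one step; just a sketch, assuming your video is called somefile.mp4 (-vn drops the video stream):

ffmpeg -i somefile.mp4 -vn -c:a libopus -b:a 500 soundtrack.ogg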


Hex editing audio

Interestingly, it is possible to hex edit the opus codec, which surprised me, as I haven't had much luck with compressed audio files; mp3 in particular is very difficult (there are online guides by Nic Briz, as Sky Goodman pointed out in her recent workshops), so this did surprise me.

doing this 

xxd -p somefile.ogg | sed 's/ff/07/g'| xxd -p -r > somefilehex.ogg

Then import it into Audacity. VLC might recognise that file now I've hex edited it, and ffplay might too, but importing it into Audacity makes it playable, and thus bakeable if we export it.

Speex encoded files aren't openable in VLC after hex editing but will import into Audacity; speex doesn't do anything overly interesting compared to opus.

so going further with the same track but hex edited slightly differently

Try this, and import the result into Audacity:

xxd -p somefileopus.ogg | sed 's/ff/077/g'| xxd -p -r > somefileopushex2.ogg

If at first it doesn't sound promising, dig around in the file's timeline listening for something that catches your ear. Often I've found that by slowing something down, changing the tempo downwards, more interesting sonics can be revealed.

Sometimes I'll have a collection of files that I've worked on and kept just in case, and I'll go through a process of adding them to the video timeline I'm working on and seeing if they match, maybe incorporating some of the original audio if it's been damaged during either the hex edit process or, more often, through tomato, which can give quite interesting files.

So maybe for the next ten minutes pick a short file that you want to work on, extract the sound in a codec of your choice and try hex editing it, or transcoding it to a different codec. To get a list of codecs just issue this command in git-bash: 'ffmpeg -codecs'. Any codec marked D.E.A means that ffmpeg can both decode and encode it, and that it's an audio codec.


Can we edit sound as video and back to sound

What if we want to edit it as video ? 

So let's export that as a wav file, then change the file extension to .yuv.

Then do this:

ffmpeg -f rawvideo -s 640x480 -r 25 -pix_fmt yuv420p -i yourfile.yuv -c:v rawvideo output.avi

Open that in kdenlive, use the RGB split effect, then add the pixelate filter and maybe waves as well; change it to whatever looks good, export as huffyuv with flac audio, then open the resulting file (mkv extension) in Audacity as a stereo track!

It's an interesting one to play around with; I'm not sure if it's a dead end or not, sometimes you get complete noise, sometimes not. Try different video effects on a shorter file, as larger files can be quite difficult to handle, taking a lot of time to load and render, and saving raw files takes up a lot of hard drive space.

Dawnia Darkstone's reverse sonification modified for the Gimp.

This is possibly a more useful method for manipulating sound as image, based on Dawnia Darkstone's reverse sonification methodology, which she gave a workshop on for Hacknet last year, but modified for use with purely open source tools and specifically the Gimp.

The basic methodology goes like this :

1) Import an audio file into Audacity, and import a photograph underneath that file to get an idea of size (for when you import it to Photoshop, as we need to know the width and height). Delete the photo track before saving, then save it as a raw file (i.e. uncompressed, with a .raw extension).

2) Open that raw file in Photoshop (as Photoshop supports raw import), with dimensions close to those found during step 1.

3) Add some effects, cut and paste etc., then export again from Photoshop as raw.

4) Open that file in Audacity, play it back and see what we've got.

Essentially it treats audio as image and allows us to add image effects to audio. So far so simple. But it becomes more complicated if we only want to use open source software (as we have been doing in all the sessions), as I won't use closed source software, and Adobe's business model of software as a service that you don't own and have to keep paying a subscription for is deeply suspect (other than using Windows 10 in these sessions for pragmatism, realizing most people are going to be using Windows 10). We can do something similar with sox on still images, treating images as sound, and I've written about that in blog posts; if you are interested the write up is here https://crash-stop.blogspot.com/2021/05/bash-script-for-sonification-images.html

and here https://crash-stop.blogspot.com/2021/05/quick-and-dirty-guide-to-using-shell.html

To use this approach with only open source software we have to take a slightly different path. For one thing the Gimp, which we will be using, doesn't import raw files, so to get around that, once our wav file is exported (and I'd advise you to shorten any file you are working on) we will change the file extension to .data, which the Gimp will open.

So the revised methodology will be

1) Open the audio in Audacity and shorten it if needed; possibly only try 5 to 10 minutes in length to begin with, depending on the speed and age of your machine.

2) Export that audio as a raw data file: file > export > export audio, then for 'save as type' choose 'other uncompressed files', header raw, encoding unsigned 8-bit.

3) Hit save and remember where the file is being saved to. Find that file with file explorer and change the extension from .raw to .data (a command-line alternative is sketched below).

4) Exit Audacity. Open the Gimp, find the file we just made and open it as .data, but now we have to give dimensions for that file, otherwise we will lose information or change it beyond recognition.
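If you'd rather skip the Audacity export and the renaming (steps 2 and 3), sox can write the same kind of headerless unsigned 8-bit file directly. This is just a sketch: the file names are placeholders, it assumes a stereo source, and the sample rate on the way back is an assumption (match whatever your original used).

sox input.wav -t raw -e unsigned-integer -b 8 -c 2 output.data

sox -t raw -e unsigned-integer -b 8 -c 2 -r 44100 output.data restored.wav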

In Dawnia's tutorial she speaks about placing an image file below the audio you are working on in Audacity as a reference for size and dimensions, and advises cutting one to suit the other. There is a cheat around this, in that if we know the size of standard images we can guess what dimensions we will need to import the raw data into the Gimp.

This is a cheat sheet to help you based on standard digital camera dimensions.


MB of  audio          Resolution

4                             2464x1632

6                             3008x2000

8                             4264x2448

10                           3872x2592

12                           4290x2800

16                           4927x3264

18                           5184x3456

19                            5380x3620

21                            5616x3744

24                            6048x4032 ( 6648x4032 ?)

31                            6496x4872

39                            7216x5412

51                            8754x5836


or we could use an online calculator like this - https://www.scantips.com/mpixels.html and feed in the size of our raw file which will give us dimensions to feed into gimp as width and height

The following is based on the information found here https://www.wikihow.com/Calculate-a-Digital-Camera%27s-Resolution-from-its-Pixel-Count

But we could also calculate it by imagining that the file we want to input to the Gimp is in fact the output of a camera's sensor. The file size gives the first number we need, so a 137mb file we could imagine to be a 3:2 ratio, 137 megapixel camera (3:2 is the most common ratio of dslrs and the same ratio as 35mm film).

If we then multiply that by 1million ( 137 megapixels ) we get 137,000,000

with that number we can get our width by height by doing this -

1) Get a horizontal-to-vertical and a vertical-to-horizontal ratio. You get the horizontal-to-vertical ratio by dividing the first part of your aspect ratio by the second; you get the vertical-to-horizontal ratio by dividing the second part of your aspect ratio by the first.

So with a ratio of 3:2

width = 3/2 =1.5

height = 2/3 = 0.666

2) Multiply your pixel count by the horizontal-to-vertical ratio and then, separately, by the vertical-to-horizontal ratio. Then take the square root of each result.

So 137,000,000 x 1.5 = 205,500,000, and the square root of that is roughly 14,335 = width

137,000,000 x 0.666 = 91,242,000, and the square root of that is roughly 9,552 = height

so with this method for any given file size we can get an appropriate width x height

## I wrote a small basic shell script to run this calculation. It requires that you have bc installed; on Windows you need to have msys2 installed so that you can first do pacman -S bc to install bc, then navigate to the msys2 folder, look in usr/bin for bc.exe, copy that file, then navigate to the Git folder in Programs and paste bc.exe into usr/bin/, otherwise git-bash will throw a wobbler (bc allows floating point operations in bash, as bash only handles whole numbers otherwise). On Linux: Debian based, sudo apt install bc; Arch based, sudo pacman -S bc.

script as follows


#!/bin/bash
#automated calculator for obtaining dimensions to use in gimp
#for reverse sonification
#assuming a 3:2 ratio for pictures

w=1.5
h=0.666

echo -n "Size of file in megabytes (give whole number only) ? : "

#get width
read n
a=$(($n * 1000000 ))
b=$(echo "$a * $w" | bc)
echo $b

# get square root to give width in pixels
width=$(echo "sqrt($b)" | bc)

#get height
c=$(echo "$a * $h" | bc)
echo $c
height=$(echo "sqrt($c)" | bc)

echo " Width is $width Height is $height"

So open the Gimp, go to open file and select your renamed file with the .data extension. Change the image type to indexed (this stops a weird flaw in the Gimp where it doesn't resize the picture correctly if it's rgb, even if you select the right file dimensions). For width and height type in either a rough guess by matching size to megapixels in the table above, or work them out using the formula above, then press open. We should now have a correctly dimensioned, greyish looking image. Now we can get to work.

Just wander around the image, zoom in and have a look at the detail. We could apply effects such as edge (which works really well) or we could just start randomly cutting and pasting; I try to stay away from the edges when doing this. You could even spin the whole file around by 180 degrees, which has the effect of reversing the audio.

Once we've added a few effects and cut and pasted, export that file, giving it a new name, as .data again, to a place you will remember. Minimise the Gimp, open Audacity, then import the file as raw data: 8-bit unsigned, little endian (for some reason big endian just results in a mush of noise), stereo, and import. Then flick through the track and see what it sounds like.

If you don't like it, exit and go back to the file open in the Gimp and either undo the changes and restart, or go for broke and add more, export again as .data and import it again into Audacity. If you do like it, export it as an mp3 or wav from Audacity.

And that's essentially it. Some files work well, others don't, but sometimes you hit on something really interesting. Sometimes if you add effects here, like normalise, time stretch, change tempo, delay etc., you can get something really nice. If you have a slower computer I wouldn't advise going much beyond 50mb in size though.

So now we have some sound, what to do with it? Obviously we want to put it back onto the videos we have been making, so fire up kdenlive, open the project you've been working on, import the file and chuck it on the timeline like so.

open kdenlive and demonstrate this .

Often I'll import multiple files and try each, or combinations, sometimes overlaying, shortening etc. And we can open the audio mixer and adjust levels; often files made using hex editing, reverse sonification etc. will be very loud. Bear this in mind; it's a mistake I've made myself before and it's been complained about by people using headphones.

Notice that with kdenlive we can also add video over the top of other video (i.e. alpha transparency), and we can add audio and video on the timeline wherever we wish; there is no snap-to where all the clips reorganise themselves, which can be handy, or a curse if you have lots of small edits.


Burning and scratching cds


The use of glitch in music probably predates visual glitch art, and our very first experience of glitch may be the sound of a skipping cd. In fact whole albums have been made using this method, especially by the group Oval and this seminal work from 1994, Diskont, and they have influenced my approach to sound in my own work using their methodology: take a cd, mark on it with felt tip pens, then record the stuttering sounds the cd creates.


Extract the sound from a video file using this command: 'ffmpeg -i your.mp4 -vn rippedsound.wav' (I use wav as I want to retain the highest quality file I can for burning to cd). Burn that file to cd, mark the cd with felt-tip pen, then record the playback in the software of your choice (for me, Audacity).

Adjust the sound and edit, add to your newly glitched film.


I use this technique on a lot of the black and white film noir that I sourced from archive.org .


Open source burning software for windows can be found here https://cdrtfe.sourceforge.io/cdrtfe/download_en.html

and for Linux I generally use k3b































