Sunday, 5 July 2020

The Ethics of Sources Day 6 - Sound

Welcome back to part 6 of The Ethics of Sources; the original talk can be found here - Ethics of Sources Day 6


Today I’m going to be talking about sound, and when I say sound, I mean sound: I’m not a musician, and I don't consider the sound I use in my work to be music as such, more a product of the processes and manipulations that I put files and codecs through. I make broken sounds to reflect a broken file and a broken narrative.
Before I made glitch art, when I was still making video, I worked for a long time on an animated film based on my sister Rowena's paintings called Colour Keeper; that was back in 2006/2007. Back then I took a bunch of songs I liked and chopped them into the video in a way which suited the emotional narrative of the film I was making. Rowena's paintings reflected the childhood we had, and so did the songs. Unfortunately, at the time Myspace didn’t take too kindly to this use without permission and decided to pull my video and put me in what they called copyright jail, so I had to prove I was a good person and wouldn’t do it again.



I could see their point and pulled the video; it only exists now as a ghost file sitting on a backup DVD somewhere. Lesson learned. So for the next version of that animation I used the sound of an old musical box that I recorded with a microphone, then stretched, re-edited and did some strange things with, mad with the power of an open-source sound editor (Audacity). I uploaded that to YouTube, thinking that in the wild west it was back then (2010) it would surely be fine.


Aaah, no. A few hours later the video had a copyright claim against it for the soundtrack, which, bizarre as that was, did make me think: well, shall I contest this? In the end I just couldn’t be bothered, so I took that down too, and between those two takedowns lesson no. 2 was learned.
The third version came right at the beginning of my making glitch art, and was the first time that I'd really thought about what the sound could be like (it's also one of the few pieces I’ve made that has been exhibited IRL). I'd begun experimenting with listening to the sound that video files made when imported into Audacity, much to the disgust of our dogs, who howled or barked if I played anything out loud, and came up with a soundtrack that sounded a bit like a monster lurking in a basement chewing on bones. It sounded a little like this:


Glitchkeeper







Strike three happened with a video I’d made just before this one, which had been subject to another takedown because I’d used Iggy Pop's Nightclubbing as a backdrop, basically a small sample of its strange, boozy, reverby drums, looped. So for this one I thought I’d make the sounds as obnoxious and unmusical as possible. Again, I think this is the sound of a video file run through Audacity.
 
Tunnel (2010)





And I suppose the copyright takedowns kind of coloured my approach to sound as well: let the bots chew on this!



One of the things I find fascinating about glitch art is that some techniques can have unintended consequences. This file, when I originally made it before hex editing, had no sound, and somehow the process of hex-editing it (a process using some kind of sort on the command line, though I can't remember which) generated the sound that you hear in the background. The sound does seem to reflect what the video is doing.
 
BDN2
 





The sound on the next video (based on TV news bulletins from the day of 9/11) is a combination of taking the sound from the original video sources and reworking it with Audacity, plus feeding the transcript of that day's data transmissions and pager messages (dumped via WikiLeaks) through a command-line text-to-speech package called Festival, then layering some reversed audio on top of that. Speech synthesis fascinates me in that you can take anything that is text and turn it into sound. Festival speech synthesis website here - http://www.cstr.ed.ac.uk/projects/festival/
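If you want to try the same route, Festival ships with a command-line helper called text2wave; a minimal sketch might look like this (the filenames are just placeholders, and the sox reverse step is only one way of getting the reversed layer, not necessarily how I did it):

text2wave transcript.txt -o transcript_speech.wav
# optionally reverse the synthesized speech before layering it back over the reworked source audio
sox transcript_speech.wav transcript_reversed.wav reverse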



The video is two DVD players mixed through a dirty video mixer based on Karl Klomp's design; find that here - Karl Klomp dirty video mixer


The end result looks like this, and apologies for the poor quality of the render; this was back when I was still learning about these things (the speech synthesis sections are around midway).

The Imaginary Tower





Getting back to speech synthesis, we could use another open-source speech synthesizer: gespeaker. Get it through your package manager on Linux, or compile it from source; more on that here - gespeaker
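On a Debian- or Ubuntu-based distribution that still packages it (an assumption about your setup, not part of the original talk), installing it might be as simple as:

sudo apt install gespeaker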



And with this we could also turn video into text using xxd, as in the slide below.
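The slide isn't reproduced here, but the basic move is just a hex dump redirected into a text file; for example (the filename is only a placeholder):

xxd yourvideo.mp4 > yourvideo.txt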



Taking speech synthesis a bit further, we could use a GUI application like gespeaker (gespeaker is a GUI front end for espeak, so it can also be driven from the command line). Here I’m streaming the contents of the file 'hitcher16bit.mp4' through xxd into a text file of hexadecimal values, then opening that txt file in gespeaker and trying out various ways of playing it back: male or female voices, different speeds and pitches.
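If you'd rather stay on the command line, espeak itself can do the same playback experiments; something along these lines (the voice, speed and pitch values are arbitrary examples, and hitcher16bit.txt is the xxd dump described above):

# female English voice variant, slowed down, pitched up, written out as a wav
espeak -f hitcher16bit.txt -v en+f3 -s 120 -p 70 -w hitcher_voice.wav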

Gespeaker in action
  





Which is what I used in the next video (genhex265.mp4). Here I use a text about Derrida and then play with that in Audacity at different speeds, so in some sense it misinterprets text as sound. The high-pitched squealing is the same text sped up greatly, with a number of effects on top of that.
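I did the speeding up in Audacity, but for anyone who wants a command-line equivalent, a resampling trick in ffmpeg produces a similar squeal; assuming a 44.1 kHz wav source (the filenames and the factor of four are arbitrary examples, not what I actually used):

ffmpeg -i derrida_speech.wav -af "asetrate=176400,aresample=44100" derrida_squeal.wav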

Genhex265






To take the speech synthesis a bit further, I could turn the text of an HTML page into speech and play around with that further.
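I don't go into the exact tools for this step in the talk, but one possible route is to strip the HTML down to plain text and feed that to espeak (lynx is just one option; html2text would also do, and the filenames are placeholders):

lynx -dump page.html > page.txt
espeak -f page.txt -w page_speech.wav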

 Percent percent 3





More important for me sound-wise was the discovery of an application called the vOICe, which in essence is a system designed to teach blind people to see by using a form of audio radar that changes what a webcam sees into sound; it is also the application that created the video element of the previous piece. As well as being able to use a webcam as input, it can use the desktop itself, sounding out and viewing whatever you point the mouse at, so in this case I'm using the desktop as a feedback loop to create sound.


vOICe in action (FIC1)

 







And sometimes, if you get the sound just right, you can achieve some beautiful organ-like bell tones.
 
Maze 4
 






So as well as sonifying our desktop, we could also turn sound into video.



We will take the sound file created by gespeaker earlier and turn that into video using this method.

Take any wav file and change the file extension from .wav to .yuv. In the same folder, open a terminal and enter this command (using ffmpeg): 'ffmpeg -f rawvideo -s 640x480 -r 25 -pix_fmt yuv420p -i yourfile.yuv -c:v libx264 -preset ultrafast -qp 0 output.mp4'

Which gives us this



 

Obviously this is a very basic example, but it does hold possibilities: for example, turning a video into sound as a wav, adding an effect like reverb, then turning that sound back into video, a kind of sonification which is a common technique in glitch art but not one that I use much myself. A rough sketch of that round trip follows below.
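Since it's not really my technique, treat this as a sketch rather than a recipe: here the raw bytes of a video are read as 8-bit audio, sox adds the reverb, and the result goes back through the rawvideo trick from above (all filenames, rates and sizes are placeholders):

# treat the video file's raw bytes as 8-bit mono audio and add reverb
cp input.mp4 bend.raw
sox -t raw -r 44100 -e unsigned-integer -b 8 -c 1 bend.raw bend.wav reverb
# reinterpret the processed audio as raw video frames, as in the command above
cp bend.wav bend.yuv
ffmpeg -f rawvideo -s 640x480 -r 25 -pix_fmt yuv420p -i bend.yuv -c:v libx264 -preset ultrafast -qp 0 bent.mp4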

As I showed before when I talked about datamoshing, there is sound that happens in the process of datamoshing, or even hex editing, when the sound within a file becomes damaged by the process. My favorite format for that is MagicYUV, as in this video:



Eat yer greens
 

 



The use of glitch in music probably predates visual glitch art, and our very first experiences of glitch may be the sound of a skipping CD. In fact whole albums have been made using this method, especially by the group Oval and this seminal work from 1994 - Diskont. They have influenced my approach to sound in my own work, using their methodology: take a CD, mark it with felt-tip pens, then record the stuttering sounds the CD creates.



Extract the sound from a video file using this command: 'ffmpeg -i your.mp4 -vn rippedsound.wav' (I use wav as I want to retain the highest-quality file I can for burning to CD). Burn that file to CD, mark the CD with felt-tip pen, then record the playback in the software of your choice (for me, Audacity).
Adjust the sound and edit, add to your newly glitched film.

I use this technique on a lot of the black and white film noir that I sourced from archive.org. This one was originally black and white, but I ran it through one of my computers running Linux Mint 2.1 'Bea', which colourised it.


Confessions.






I also use this slinky device; mine is actually the only one I have ever seen in real life, as basically it came around at the wrong time. Essentially, you tell it a genre and it organizes beats, bass lines, drums and so on from its sound banks, composing a new and unique track on the fly every time you run it. It can also save what you make, and it has a handy line out and mic in. It is quite the strangest device I have ever used, but as it creates generative music, each track is unique and royalty free, and copyright stays with you, the creator, because you just created it. That's the nature of generative music: given a few algorithms, anything is possible, and it does all of this in real time. I have used it in some of my work as sound accompaniment, for instance on this video, though I've messed around with it afterwards in Audacity; I tend to leave the device running, record the output and then pick out bits I like. More information on this unique device here - Dr Mad

Blood Moon


 

So to conclude, there is no one approach I have to sound in my work; I tend to like broken sound, or sound that fits the motion of the video.

The final day of the residency (Day 7) was a live performance/demonstration with live skipping CDs; find that here - The Ethics of Sources Day 7



















