Friday, 22 October 2021

Desktop feedback loops and performance.


This post is going to be a bit different from the last few, as it revolves around ideas I've been exploring in my own work for a while now. Anyone who caught my performance for Fubar 2021, or who reads this blog, knows I've been thinking about the desktop as a performative space. As such, this is more based around using Linux than Windows; I will give Windows examples, but Linux handles this a lot more elegantly.

What do I mean by performative space? Well, for me it started pretty much with my experiences over the last year or so (and probably all of your experiences as well) of the pandemic and being stuck at home day after day, generally in front of my desktop or laptop, making work and talking with people online, a lot. It seemed that this was where I lived; other than shopping carefully once a week, walking the dogs, eating and sleeping, it was pretty much everything I did. Around this time I came across this video of JODI's work my%desktop:

https://www.youtube.com/watch?v=CPpUQQ7vMBk

I'd also started collaborating with the people working in medialab via Format C https://formatc.hr/medialab-2021-diwo/ on the online Suda project, which is essentially a collaborative computer space hosted online in the form of a computer desktop that can be altered, changed and remade by the user within certain parameters. It also embodies a manifesto centred around sustainability and reuse (it's a good manifesto, I advise everyone to read it: https://pads.ccc.de/sda-manifesto). For this I had to research a different kind of desktop, TWM, and subsequently CTWM, which look unlike anything we are used to as a desktop space in the recent past (TWM info here: https://en.wikipedia.org/wiki/Twm), and they allow us to do some very odd things indeed. I should state this is running on Linux, and some of the effects I can achieve here are specific to the distribution used, i.e. Parabola Linux. Take this simple command, 'xsetroot -b -grey', which on any other distribution of Linux will just set the background to grey, but which on Parabola in TWM or CTWM makes the windows leave trails and impressions behind wherever they go (this was incorporated into Suda).

I can't remember exactly how, but as I was researching TWM and CTWM I was looking into things you could do with ffplay. I'd investigated it before for my work with displacement maps, which led to something else that I demoed at this year's Fubar (we'll get to that in a moment). Rather than grabbing the video output from a webcam and displaying it via ffplay, I found there was a way to open a window via ffplay which mirrored the desktop around it by focusing on where the mouse was, as if the mouse was a lens.

An example of this can be found here (exampleX11grab.mp4):

https://youtu.be/G_0Q4mLlTfE

The command was this:

ffplay -f x11grab -follow_mouse centered -framerate 10 -video_size 640x480 -i :0.0

As you can see, the focus of that window follows the mouse.

If we run two windows of different sizes we can start to see possibilities for feedback loops, as sketched below.
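A minimal sketch of that (the two sizes here are just for illustration; any mismatched pair works): open two terminals and run one of these in each.

ffplay -f x11grab -follow_mouse centered -framerate 10 -video_size 640x480 -i :0.0

ffplay -f x11grab -follow_mouse centered -framerate 10 -video_size 352x288 -i :0.0

With both windows following the mouse at different sizes, each one grabs the other whenever they overlap on screen, and the image starts to recurse.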

And this is pretty much the implementation I came up with for Suda (but run from a script).

Mantissa

Which brings us to mantissa. To show some of the concepts I want to cover before we get to a possible Windows implementation, I will have to use Linux to illustrate what I'm talking about.

So, most of my experimentation at the moment, as I say, is around the desktop as a performance space, and most of this started with mantissa, which is a collection of scripts that utilises two webcams and various windows to create and capture video endlessly.

So, basics - this is all run through shell scripts to begin with. If you don't know what a shell script is, well, a shell script is a text file into which you can add commands that you would normally enter one by one into the command line. A shell script (run from a bash shell or git-bash) means you can automate tasks and avoid pesky typing. The full mantissa scripts are here, if you feel like experimenting: https://crash-stop.blogspot.com/2021/07/mantissa-self-capturing-displacement.html

As an example, this is a shell script called afish.sh. It calls up two windows in the middle of the screen using x11grab, which act as webcams of a kind, depending on where you place the mouse (the windows show whatever the mouse is hovering over). It's a test script for me to try out window sizes and placements. Originally this would have been on Parabola Linux, but Linux Mint is better for this type of multimedia, so I'm using that (Parabola is also good, but when I start using multiple windows with multiple videos playing back I need a decent-ish video card to offload some of the graphics processing from the CPU).

Script below (copy and paste into geany, from '#! /bin/bash' to the last 'cif -i :0.0'; on Linux each line is continuous, so watch for wrapping):

#! /bin/bash

# afish.sh - open two mouse-following x11grab viewports of different sizes
mate-terminal --geometry=17x18+0+3 -e 'ffplay -f x11grab -follow_mouse centered -framerate 25 -video_size 640x480 -i :0.0'
mate-terminal --geometry=17x18+0+3 -e 'ffplay -f x11grab -follow_mouse centered -framerate 25 -video_size 352x288 -i :0.0'
# spare viewports - uncomment to place extra windows at other corners of the screen
#mate-terminal --geometry=17x18+0+3 -e 'ffplay -f x11grab -follow_mouse centered -framerate 25 -video_size 352x288 -i :0.0'
#mate-terminal --geometry=17x18-1-2 -e 'ffplay -f x11grab -follow_mouse centered -framerate 25 -video_size cif -i :0.0'
#mate-terminal --geometry=17x18-2+0 -e 'ffplay -f x11grab -follow_mouse centered -framerate 25 -video_size cif -i :0.0'


So that's simple enough, but if we add two webcams and use those as input instead of x11grab, we can start to do more interesting things with feedback.

An example command for each camera would be something like this (note the sed in the middle substitutes 00 for 00, i.e. it changes nothing; it just keeps a hex-editing slot in the pipeline that you can later swap for a real substitution like 's/808/181/g'):

 ffmpeg -i /dev/video0 -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 1280x720 - | sed 's/00/00/g'  | ffplay -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 1280x720 -  

And then the same command for camera 2, but with a smaller resolution:

ffmpeg -i /dev/video1 -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 960x540 - | sed 's/00/00/g' | ffplay -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 960x540 -

Now we have the same windows with the same placement, but running from two webcams.

Now, there is a Windows equivalent to:

ffplay -f x11grab -follow_mouse centered -framerate 10 -video_size 640x480 -i :0.0 

which is

ffmpeg -f gdigrab -framerate 6 -offset_x 10 -offset_y 20 -video_size vga -i desktop -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 - | ffplay -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 -

If you open two git-bash terminals and run this command in both, it will give you two windows that can, to a degree, give you the same interactivity as on Linux, except you will have to move the windows around to achieve feedback, whereas on Linux you can just move the mouse.

(Documented here in the ffmpeg online manual: https://ffmpeg.org/ffmpeg-devices.html#gdigrab)

This isn't as elegant, but it is functional and useful. So, everyone on Windows try that, and everyone on Linux try the first command (look in the shared notes for the command), i.e.:

ffplay -f x11grab -follow_mouse centered -framerate 10 -video_size 640x480 -i :0.0

Try opening two of these at the same time! Or changing the size of one to this (depending on the size of your desktop):

ffmpeg -f gdigrab -framerate 6 -offset_x 10 -offset_y 20 -video_size vga -i desktop -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 1280x720 - | ffplay -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 1280x720 -

https://www.softpedia.com/get/Desktop-Enhancements/Other-Desktop-Enhancements/Mofiki-s-Coordinate-Finder.shtml

helps to give coordinates on the desktop for input on Windows systems (pretty easy to use - just place your mouse over a point on the desktop and press the space bar, then change the x and y offset values in the Windows command to those).
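For example, if the coordinate finder reports x=300 and y=150 (illustrative values), the grab command above becomes:

ffmpeg -f gdigrab -framerate 6 -offset_x 300 -offset_y 150 -video_size vga -i desktop -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 - | ffplay -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 -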

* A word of caution: when using sites like Softpedia on Windows, be careful (especially if not using an ad blocker - why aren't you? I'd personally recommend Ghostery) that you click on the correct link, as these sites are littered with less than trustworthy links and software.

If we then start to add hex editing into the mix, it gets more hectic.

We don't have to run webcams; we can just use virtual windows we call up through the script instead. The thing is, you can have as many windows like this open as you wish - as many as your computer can handle - and, even more importantly, you can make them different sizes (up to the total size of your screen, but more on that in a while, and depending on your PC or laptop's power and graphics capability - remember to keep an eye on your temperatures; on a laptop that fan should start whizzing away!).
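On Linux, a minimal sketch of opening several of these virtual windows at once (the three sizes are just illustrative) would be a loop like this:

#! /bin/bash
# open three mouse-following viewports of decreasing size, each in the background
for size in 640x480 352x288 176x144; do
  ffplay -f x11grab -follow_mouse centered -framerate 10 -video_size "$size" -i :0.0 &
done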

So, having discovered all of that, I went on to design a bigger shell script, which took me a couple of weeks to create, called mantissa. It runs two webcams, captures the output of those webcams every thirty seconds or so, records that as a thirty-second clip, then runs a displacement map over that file and displays it. This is a work in itself and functions as a standalone, set-running-and-walk-away piece, but I found it more interesting to take the techniques I'd learned and start playing with the desktop, opening multiple windows and videos as well, using it as a massive feedback environment and recording that with obs-studio capturing the whole desktop.

Like this: https://youtu.be/matpsXvoP0c - the sound is created using some of the techniques we covered on day 4. And, referring back to what I talked about on the first day, I like that the elements of the desktop itself are still there and visible, and that they become part of the work as well, rather than being hidden. I think that's important to me; as I said before, I'm not interested in naturalism or the suspension of disbelief.

Because I'd been talking to people a lot on other platforms like Reddit and Tumblr after leaving Facebook and Instagram earlier in the year, and because via Glitchlab and Medialab I was meeting people who were using Windows more than Linux, I'd started to try to recreate some of my methodology on Windows (which is why these workshops are primarily Windows-based). The question I asked myself was: how could I achieve something similar on Windows?

Back to Windows

Given the original command line above, it's possible to open windows which act similarly to the Linux version by doing this:

ffmpeg -f gdigrab -framerate 6 -offset_x 10 -offset_y 20 -video_size vga -i desktop -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 - | ffplay -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 -

If we alter it a little, we can start to build up texture through live hex editing, by doing this on Windows:

ffmpeg -f gdigrab -framerate 6 -offset_x 10 -offset_y 20 -video_size vga -i desktop -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 - | sed 's/808/181/g'| ffplay -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 -

You can get more control over the hex editing by changing that command slightly. Piping through xxd first converts the raw bytes to plain hex text, so the sed substitution operates on predictable characters rather than on arbitrary binary, and xxd -p -r converts it back afterwards:

ffmpeg -f gdigrab -framerate 6 -offset_x 10 -offset_y 20 -video_size vga -i desktop -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 - | xxd -p | sed 's/808/181/g'| xxd -p -r | ffplay -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 -

And, as you can see, if we start running a video in the corner of the screen (the top left-hand corner, which is where this window grabber is focused), we can start to do more interesting things again. Unlike Linux, the window doesn't reflect what is under the mouse pointer, so we have to play about with the x and y offsets, i.e. this bit:

-offset_x 10 -offset_y 20 - adjusting these works out where on the desktop the viewport will grab from.

Or, if you know the size of your screen, you can first open a viewport covering the whole of it before adding more, smaller viewports.
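For instance, assuming a 1920x1080 desktop (substitute your own resolution), a viewport covering the whole screen, scaled down for display, would look something like:

ffmpeg -f gdigrab -framerate 6 -video_size 1920x1080 -i desktop -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 1280x720 - | ffplay -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 1280x720 -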

We could also try changing the final output codec to see what happens. Some codecs work and some don't - msvideo1 and cljr do, and near-raw is better - so it's worth experimenting with different settings here. If you don't know what codecs you have available, just open a terminal (git-bash etc.), type in ffmpeg -codecs, and scroll back through the list.
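As a sketch of swapping in msvideo1 (assuming your ffmpeg build will write AVI to a pipe - it warns that the output is non-seekable but usually carries on), the command becomes something like:

ffmpeg -f gdigrab -framerate 6 -offset_x 10 -offset_y 20 -video_size vga -i desktop -f avi -vcodec msvideo1 - | ffplay -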

On Linux this command is:

ffmpeg -f x11grab -follow_mouse centered -framerate 5 -video_size cif -i :0.0 -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 - | xxd -p | sed 's/808/181/g' | xxd -p -r | ffplay -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 -

We can also use displacement maps within this command, by doing this:

ffmpeg -f gdigrab -framerate 6 -offset_x 10 -offset_y 20 -video_size 640x360 -i desktop -f gdigrab -framerate 6 -offset_x 10 -offset_y 20 -video_size 640x360 -i desktop -lavfi '[1]split[x][y],[0][x][y]displace' -f rawvideo -pix_fmt yuv420p - | xxd -p | sed 's/808/181/g'| xxd -p -r | ffplay -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 -

And, by placing different windows within the field of view, we can add to or alter what the viewport sees and distorts.

Notice I've left the hex-editing part in.

On Linux the equivalent would be:

ffmpeg -f x11grab -follow_mouse centered -framerate 25 -video_size 640x360 -i :0.0 -f x11grab -follow_mouse centered -framerate 25 -video_size 640x360 -i :0.0 -lavfi '[1]split[x][y],[0][x][y]displace' -f rawvideo -pix_fmt yuv420p - | xxd -p | sed 's/808/181/g'| xxd -p -r | ffplay -f rawvideo -pix_fmt yuv420p -s 640x360 -

Add a webcam

Or, if we want to feed in a webcam rather than a video, on Windows we could do this.

First, find out what your webcam is called using this command from git-bash:

ffplay.exe -list_devices true -f dshow -i dummy

Look for the output which names your camera - on mine it's "Live! Cam Sync HD VF0770". Activate the camera by pasting this into git-bash and pressing enter:

ffplay.exe -f dshow -i video="Live! Cam Sync HD VF0770" (substituting the name of your camera from the output)

Your webcam should now display - though this is still not as flexible as the Linux follow_mouse function in x11grab.

Now we know the camera is operational and we know what dimensions it is going to display in (mine is 640x360, i.e. 16:9 ratio; yours might be different, though if it's newish it will either be like mine or 640x480, 4:3 ratio):

ffmpeg -f dshow -r 5 -i video="Live! Cam Sync HD VF0770" -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x480 - | xxd -p | sed 's/808/181/g' | xxd -p -r | ffplay -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x480 -

Note the -r 5: I have to put that in to reduce the input framerate, otherwise the process stalls. This is not so much of a problem on Linux.

The equivalent Linux command would be:

ffmpeg -i /dev/video0 -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 - | xxd -p | sed 's/ff/18/g' | xxd -p -r | ffplay -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 -

(where /dev/video0 is the first attached webcam)
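If you're not sure which device node your camera is on, or what resolutions it supports, ffmpeg's v4l2 input can list them (the rough equivalent of the dshow -list_devices step on Windows):

ffmpeg -f v4l2 -list_formats all -i /dev/video0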

And of course we can use these as a basis for displacement again.

On Windows, replace the name of the webcam I have with your own, found through using:

ffplay.exe -list_devices true -f dshow -i dummy

and alter the dimensions to ones that work with your camera, i.e. replace 640x360 with vga or cif or a dimension you know your camera will work with:

ffmpeg -f gdigrab -framerate 6 -offset_x 10 -offset_y 20 -video_size 640x360 -i desktop -f dshow -i video="Live! Cam Sync HD VF0770" -lavfi '[1]split[x][y],[0][x][y]displace' -f rawvideo -pix_fmt yuv420p - | ffplay -f rawvideo -pix_fmt yuv420p -s 640x360 -

Obviously, as what we are dealing with here are generated feedback loops, we could always point the webcam at the screen rather than at ourselves!

If you don't have a webcam but do have obs-studio installed, you could use its built-in virtual webcam function, like this (it actually mimics, in some ways, what we can do with Linux, i.e. the 'x11grab -follow_mouse centered' part, allowing us to create a roving window that sees what the mouse sees).

First, open obs-studio and add the OBS virtual camera to your video capture devices, then add a window capture.

Start this:

ffmpeg -f gdigrab -framerate 6 -video_size vga -i desktop -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 - | ffplay -f rawvideo -vcodec rawvideo -pix_fmt yuv420p -s 640x360 -

(Or, with displacement and hex editing, using only virtual windows:)

ffmpeg -f gdigrab -framerate 6 -offset_x 10 -offset_y 20 -video_size 640x360 -i desktop -f gdigrab -framerate 6 -offset_x 500 -offset_y 500 -video_size 640x360 -i desktop -lavfi '[1]split[x][y],[0][x][y]displace' -f rawvideo -pix_fmt yuv420p - | xxd -p | sed 's/808/181/g'| xxd -p -r | ffplay -f rawvideo -pix_fmt yuv420p -s 640x360 -


Select the window that just opened as the window capture source, then do this:

ffplay.exe -f dshow -i video="OBS Virtual Camera"

Then start the virtual camera (in the controls section, bottom right-hand side of the obs-studio main interface), minimise obs-studio, and watch what happens!

Interestingly, if you record the output it only shows the virtual webcam, which could be exploited in interesting ways!

Like this - a brief proof of concept:

https://youtu.be/MVrsL6L6FiE

And that's pretty much it, theory-wise. You can build up layers of these windows and at the same time play pre-recorded video and sound in the background, use it as a performance tool, or record the output. To me it allows exploring and remixing in real time, separate from ideas of video editing, becoming a fluid tool for exploration.
