Tuesday 3 January 2023

The treachery of images - Desktop as performance space

* This is a version of the talk I gave online during Fubar 2022
 
 
 
Hello everyone - I'd like to welcome you to my desktop, as this is what I will be talking about. I want to start with a digression into art history. Consider this, "The Treachery of Images" by René Magritte from 1929.
 

It is a painting of a pipe with text in French which reads 'this is not a pipe'. It seems obvious to me, as an ex-painter, that this is indeed not a pipe; it is a painting of a pipe. I'd like you to keep this in mind throughout this talk. My desktop might superficially look like Windows 95, but it isn't - again, bear this in mind.

How do we interact with our computer? In the beginning, after punch cards, was the command line,

be it on mainframes or, later, home computers. At first it travelled by teletype. A teletype is essentially a two-way terminal connected to a computer, using a keyboard and paper roll for input and output. There is a great video of one in action here https://www.youtube.com/watch?v=gMIL2bvUYIs&t=2s

 
After the teletype came the VDU or visual display unit. Rather than a paper roll we now have a screen for interaction, but essentially it is the same as a teletype. It does, however, remove one level of interaction: rather than seeing a continuous physical printed record of what we have typed and received back from the computer, we see a screen with a non-physical electronic readout.
Credit for source image: Trammell Hudson https://www.flickr.com/photos/osr/


But the beginnings of the interface as we understand it came with what is known as 'the mother of all demos', given by Douglas Engelbart on December 9, 1968, so called because it introduced pretty much all of the fundamental ideas behind modern computer graphical interfaces (more info here https://en.wikipedia.org/wiki/The_Mother_of_All_Demos).

To quote from the wikipedia article “ The live demonstration featured the introduction of a complete computer hardware and software system called the oN-Line System or, more commonly, NLS. The 90-minute presentation demonstrated for the first time many of the fundamental elements of modern personal computing: windows, hypertext, graphics, efficient navigation and command input, video conferencing, the computer mouse, word processing, dynamic file linking, revision control, and a collaborative real-time editor. Engelbart's presentation was the first to publicly demonstrate all of these elements in a single system. The demonstration was highly influential and spawned similar projects at Xerox PARC in the early 1970s. The underlying concepts and technologies influenced both the Apple Macintosh and Microsoft Windows graphical user interface operating systems in the 1980s and 1990s.”


Considering that that demonstration was in 1968, let's look at what we call a desktop, be it based on some derivative of the following:

Macintosh classic desktop

Windows 3.11 desktop.
Windows 95 desktop.

Mac OS X 10.5.4 desktop

The GNOME 3 desktop

The interface that we use to interact with our computers or devices - the icons that we click on for a Word document, the images that we manipulate with the GIMP or other software, a paintbox program that gives us brush options and colours, the video editor that shows us a timeline laid out in film-strip fashion - all of these inhabit the same space as the pipe in 'The Treachery of Images'. They are nothing more than signifiers that allow us to navigate an alien and strange landscape which bears no relation to how we see it.

Consider the idea of skeuomorphism - to quote from https://www.interaction-design.org/literature/topics/skeuomorphism

“Skeuomorphism is a term most often used in graphical user interface design to describe interface objects that mimic their real-world counterparts in how they appear and/or how the user can interact with them. A well-known example is the recycle bin icon used for discarding files. Skeuomorphism makes interface objects familiar to users by using concepts they recognize.”

Unlike the pipe in Magritte's painting, we can interact with the 'objects' on our desktops. We have menus to scroll and click through;


 We have buttons to press;

waste-bins to fill and to empty;


file managers to organise our files.

It makes the unreal real, an electronic office in which we can ‘work’.

This paradigm of the desktop metaphor reflects the corporate origins and mindset of most of the software we use. Both Windows and macOS are designed for cubicle dwellers, office drones and ad jockeys.

 

How can we break out of those inbuilt constraints and move away from mimicking 'real-world counterparts'? How can we use the software in a performative and meaningful way, to make art from and in this space, rather than using its tools to create 'work' that is then seen as separate from the environment in which it was created? In other words, how do we stop ourselves from perceiving a desktop when in fact what is in front of us is an entirely different thing?

Jodi's My%Desktop was one of the first hints to me personally that something else could be done within this space (more info here https://www.moma.org/calendar/galleries/5160). As it says on the MoMA site for that exhibition:

"JODI recorded various versions of My%Desktop in front of live audiences, connecting their Macintosh to a camcorder and capturing their interactions with the user-friendly OS 9 operating system. The resulting “desktop performances,” as the artists call them, look at ways that seemingly rational computer systems may provoke irrational behaviour in people, whether because they are overwhelmed by an onslaught of online data, or inspired by possibilities for play. What appear to be computer glitches are actually the chaotic actions of a user. “The computer is a device to get into someone’s mind,” JODI explained, adding, “We put our own personality there.”

Notice that they say 'desktop performances'. This became of primary importance to my own work after working on the Format C medialab project, Suda, which I will come to shortly.

This idea of desktop performance answered for me one of the main problems I'd hit in my own work: a tiredness with reusing found sources. Given the nature of remix culture, the same sources can crop up in the work of other artists. It's not that I don't recognize that work as valid; it was more a case of wanting to explore new avenues, separate from the meanings that found work can carry, and of the idea that the desktop, as well as being a performative space, can also become a way of generating unique images and video which interrogate the space on its own terms. Rather than the desktop being a place to work in, it becomes a space to work with - a material in itself. (Video of Jodi's My%Desktop working here https://www.youtube.com/watch?v=CPpUQQ7vMBk)

What Jodi's My%Desktop illuminates is that the desktop is at its heart a feedback system: if we press a button something usually happens; if we drag a window, that window moves. Actions have consequences and meanings separate from those we assign via skeuomorphic design. It becomes an arena or a performance space by the very act of our presence and our interactions, as you are now within the space of your own screen.

One of the fundamentals of glitch art is feedback. Are there ways to turn the desktop against itself to generate more feedback that we can use? The obvious answer would be to turn a webcam to face the screen. This gives varying results and can also be seen in the context of early experiments with video feedback conducted by the pioneers of video art in the '60s, '70s and '80s. This approach can be fun, especially when used alongside the Processing environment.

Desktop as feedback system and performance space

A lot of what I do at the moment is based around online collaborations with Format C (https://formatc.hr/about/) and a sub-project of Format C, medialab, which at the moment is focused on a project called Suda https://formatc.hr/medialab-2021-diwo/

For Suda we wanted something lightweight which would work online with little lag. Originally I experimented with TWM, which compared to Windows 95 or Mac OS is archaic, first appearing some time in 1989 or so (it has similarities with CDE, the Unix Common Desktop Environment, but crucially is open source whereas CDE was/is proprietary). The first hint that something might be usable came from realising that minimised icons behaved very oddly and looked somewhat glitchy, for example this;


or more simply;


In the end we chose to use ctwm because it's low on resources and fairly easy to configure using a single text file, and more importantly, unlike TWM, it has click-to-focus, which we needed for one of the scripts that Suda runs. Whilst working with TWM on Parabola Linux, trying to find a way of changing the background, I discovered something interesting. Issuing this command, xsetroot -bg grey;

allows us to drag and trail windows across the screen, leaving behind trails and traces, turning icons and windows into brushes. It also reminds me of this from Windows 95/98;


Discovering this was only possible because, as I later found, Parabola Linux uses lxdm as the default display manager in the live version I had installed. If you don't use Linux or haven't delved into it much: on Linux the display manager, which communicates with the X11 server (which controls how things are displayed on screen - screen size, what sort of screen it is, i.e. CRT, LCD, laptop - and input devices like mouse and keyboard), is separate from the desktop environment you use, and is configured using a text file, in this case /etc/lxdm/default.conf (or lxdm.conf on some distros). In default.conf there is this one line
 
 arg=/usr/bin/X -background vt1
 
On every other distro I've looked at this line is commented out (i.e. with a #), but not on Parabola, and once we uncomment that line on other distros we get the glitchify effect. This separation between desktop, window manager and display server is what makes this area of enquiry possible.
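To make that concrete, the relevant fragment of /etc/lxdm/default.conf looks roughly like this (a sketch only; the comments are mine, not Parabola's):

# /etc/lxdm/default.conf (or lxdm.conf on some distros) - sketch, comments are mine
# on most distros the line ships commented out:
#arg=/usr/bin/X -background vt1
# uncommented, as it is on Parabola, dragged windows leave trails behind them
# once the root window has been set with xsetroot - the glitchify effect:
arg=/usr/bin/X -background vt1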

So, as I said earlier, a lot of what I do at the moment is based around online collaborations, specifically around the medialab project Suda. I'll just show Suda running in a short video below.
 
 




How does Suda work? Suda is a set of scripts designed to run, and be interacted with, online, in a kind of server-as-installation piece. Its desktop is dictated and adjusted by the text file .ctwmrc, a human-readable and editable file which can be altered as needed, allowing us to call up scripts which run certain programs. You can view the version of .ctwmrc that my version of Suda, Suda-Se, uses here https://crash-stop.blogspot.com/2023/01/suda-se-ctwmrc.html
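For anyone unfamiliar with ctwm, a menu entry that launches a shell script looks roughly like this - a simplified sketch, not the actual Suda-Se file linked above, and the paths are illustrative:

# sketch of a .ctwmrc fragment; right-clicking the root window opens the "suda" menu
Button3 = : root : f.menu "suda"

menu "suda"
{
    "Suda"        f.title
    "glitchify"   !"/home/user/glitchify.sh &"
    "sudacam1"    !"/home/user/sudacam1.sh &"
    "sudacam2"    !"/home/user/sudacam2.sh &"
}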
 
At the heart of the original online Suda are four scripts: glitchify (already discussed), sudacam1, sudacam2 and manifesto (which calls a second script, suda-ctwm-xdotool.sh). Sudacam 1 and 2 use ffplay to open two windows, one at 640x480 resolution and one at 352x288 resolution, which show whatever the mouse passes over - the mouse acting as a lens - via x11grab and -follow_mouse centered. The exact command line is this (with changes between the two for size):

ffplay -f x11grab -follow_mouse centered -framerate 10 -video_size 640x480 -i :0.0
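Wrapped up as a small shell script, the second window differs only in size; something like this sketch (the file name and shebang are mine, the original may differ):

#!/bin/sh
# sudacam2.sh - sketch of the 352x288 variant; only the window size changes
ffplay -f x11grab -follow_mouse centered -framerate 10 -video_size 352x288 -i :0.0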
 
These simple elements allow me to build up visual complexity quite quickly, for example;


Feedback loops are essentially generative, and what they create is often unexpected, depending on the elements I bring up or delete, open or close. It escapes the trap of looking for material to make glitch art from by becoming the process through which glitch can be created in real time from the desktop. And more than that, it becomes an arena to play within, outside the formal office-space bounds of the GUI-based operating system, or what we expect from narrative linear video, and becomes almost a continuous collage of moving and static elements - almost as if I were playing with hardware glitching techniques.

I should at this point explain that the desktop environment I now use is one I've only just started working with. The previous desktops that I talked about, ctwm and twm, are simpler and use only one configuration file, which allows me to add in menu entries to call up scripts. The desktop environment I use now, IceWM, is also quite configurable: by altering the text file (~/.icewm/toolbar) I can add in the scripts or programs I want, which can be assigned buttons on the toolbar at the bottom, so in effect I am configuring the workspace to trigger the shell scripts I need to generate feedback in real time. I have also come to appreciate the elements of the desktop as integral to what I'm doing - terminal output, icons, window bars, error messages, the trails and movement of the mouse itself - and I try not to hide these. So I can recreate what I was doing in ctwm but also add in new features, like a script which snapshots an area of the screen and then sets the background to it (as well as saving those snapshots for later use); an example of this is below.
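A minimal sketch of that snapshot-then-set-background idea, assuming scrot for the screenshot and feh to set the wallpaper (the script I actually use may do it differently):

#!/bin/sh
# sketch only: grab an area of the screen, save it, set it as the desktop background
mkdir -p "$HOME/snapshots"
SHOT="$HOME/snapshots/snap-$(date +%s).png"
scrot -s "$SHOT"          # -s: select the area to capture with the mouse
feh --bg-fill "$SHOT"     # use the saved snapshot as the new background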



I also have scripts set up which change the colourspace of the windows opened, and thus the quality of the feedback. These are variations on the sudacam scripts which add in hex editing as well as playing with colourspace. For example, rather than
 
lxterminal --geometry=17x18+0+3 -e 'ffplay -f x11grab -follow_mouse centered -framerate 10 -video_size 640x480 -i :0.0' 
 
we could alter it to this
 
ffmpeg -f x11grab -follow_mouse centered -framerate 3 -video_size 640x480 -i :0.0 -f rawvideo -vcodec rawvideo -pix_fmt monob -s 640x480 - | xxd -p | sed 's/802/18f/g' | xxd -p -r | ffplay -f rawvideo -vcodec tmv -pix_fmt monob -s 640x480 -
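To spell out what that pipeline is doing, here it is again broken onto separate lines with comments (the same command, just annotated; not a separate script):

#!/bin/sh
# grab the screen around the mouse as raw 1-bit (monob) video, dump it as hex,
# hex-edit the stream (every '802' becomes '18f'), turn the hex back into bytes,
# then play it back while deliberately telling ffplay to use the wrong decoder (tmv)
ffmpeg -f x11grab -follow_mouse centered -framerate 3 -video_size 640x480 -i :0.0 \
  -f rawvideo -vcodec rawvideo -pix_fmt monob -s 640x480 - \
  | xxd -p | sed 's/802/18f/g' | xxd -p -r \
  | ffplay -f rawvideo -vcodec tmv -pix_fmt monob -s 640x480 -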

An example of these scripts running is below.
 


And we can add in more elements using yet another slightly altered version of the final script triggered above.



By adding a script to ~/.icewm/toolbar which calls a video, I can also introduce a pre-made video into the feedback process, like so (though a better example might be this https://www.youtube.com/watch?v=matpsXvoP0c&t=5s):
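Such a script can be as simple as looping a clip in an ffplay window so the feedback windows can pick it up; a rough sketch (the clip path is illustrative, and my own movie.sh may well differ):

#!/bin/sh
# loop a pre-made clip in a 640x480 window so it becomes part of the feedback
ffplay -loop 0 -x 640 -y 480 "$HOME/clips/seed-video.mp4"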
 
 


And finally we could add words. First I created a script which mashes several texts together line by line; these act as seed texts for the next stage, which takes each seed text and runs it through dadadodo (a text-mangling program akin to William Burroughs' cut-up technique), and then two scripts run in parallel to write lines of text into a text editor's window. We can then use that output as input for the other feedback windows I've already generated, and add in a bit of displacement mapping as well by altering one of the scripts to this -
 
 ffmpeg -f x11grab -follow_mouse centered -framerate 10 -video_size 480x640 -i :0.0 -f x11grab -follow_mouse centered -framerate 10 -video_size 480x640 -i :0.0 -lavfi "[1]split[x][y],[0][x][y]displace" -f rawvideo -pix_fmt yuv420p - | ffplay -f rawvideo -pix_fmt yuv420p -s 480x640 -
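For the text-writing step mentioned above, the general shape is something like this sketch, which mangles a seed text with dadadodo and 'types' the result into whichever window has focus using xdotool (this is not the actual Suda script; the seed file name is illustrative):

#!/bin/sh
# sketch only: generate cut-up text and type it line by line into the focused window
dadadodo -c 20 seed.txt | while read -r line; do
    xdotool type --delay 120 "$line"   # type one mangled line
    xdotool key Return                 # press Enter
    sleep 1                            # pace the typing so it reads as a performance
done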
 
 

 
In essence, I've created a feedback toolbox for the desktop which feeds on its own imagery and generates broken and glitched visuals in a constantly evolving and recirculating way. It can be used live as a performance tool, or saved as video and edited down later, looking for moments and threads which fit well together, as in this longer-form video here;




And this is the modified toolbar file in ~/.icewm/ that provides the functionality I'm using. The entries are written in a format like this:

"sudacam1" application-x-shellscript.png /home/crash-stop/sudacam1.sh 

That is: the name as it appears in the menu or on mouseover of the toolbar icon; the icon to use, i.e. application-x-shellscript.png (for a shell script) - icons must be accessible to IceWM, i.e. in ~/.icewm/icons/ (I cheated and moved some system-wide ones I wanted to use into that folder to make them accessible); and finally where the program or shell script lives (programs are generally in /usr/bin/ whilst shell scripts I keep in my home folder). So below is my ~/.icewm/toolbar as used above:

# This is a default toolbar definition file for IceWM
#
# Place your personal variant in $HOME/.icewm directory.

#prog XTerm ! x-terminal-emulator
#prog FTE fte fte
#prog Netscape netscape netscape
prog     "Processing" pde-32.png /home/crash-stop/processing-3.5.4/processing
prog    "Glitchify" application-x-executable.png /home/crash-stop/glitchify.sh
prog    "Scrot" image-x-generic.png /usr/bin/scrot
prog     "Recordmydesktop" camera-video.png /home/crash-stop/startcam.sh
prog    "Vim" vim /usr/bin/mousepad
prog     "Terminal" terminal.png xterm
prog     "File Manager" file-manager.png pcmanfm
prog    "WWW" ! x-www-browser
prog     "sudacam1" application-x-shellscript.png /home/crash-stop/sudacam1.sh
prog     "sudacam2" application-x-shellscript.png /home/crash-stop/sudacam2.sh
prog     "Norandom" application-x-shellscript.png /home/crash-stop/norandom.sh
prog     "mantissacam1" application-x-shellscript.png /home/crash-stop/mantissacam.sh
prog     "mantissacam2" application-x-shellscript.png /home/crash-stop/mantissacam2.sh
prog     "Norandom2" application-x-shellscript.png /home/crash-stop/norandom2.sh
prog     "Background" application-x-shellscript.png /home/crash-stop/background.sh
prog     "TVM" application-x-shellscript.png /home/crash-stop/tvm.sh
prog     "Displace" application-x-shellscript.png /home/crash-stop/desktopdisplacett.sh
prog     "MVT" application-x-shellscript.png /home/crash-stop/tvm2.sh
prog     "B&W1" application-x-shellscript.png /home/crash-stop/sudacam3.sh
prog     "B&W2" application-x-shellscript.png /home/crash-stop/sudacam4.sh
prog     "V4l2" application-x-shellscript.png /home/crash-stop/v4l2test.sh
prog     "UnholyWriter" accessories-text-editor-symbolic.symbolic.png /home/crash-stop/start.sh
prog    "Movie1" inode-core.png /home/crash-stop/movie.sh
prog    "Movie2" inode-core.png /home/crash-stop/movie2.sh
prog     "Displace2" application-x-shellscript.png /home/crash-stop/desktopdisplace2.sh







 

 






















