Running glic in Processing from the Linux subsystem on ChromeOS Flex; still captured via the ChromeOS Flex screen-capture utility.
I got my first Chromebook back in 2018. A lot of what I was doing was online, and for that they are great; it was one of the first Chromebooks that could run some Android apps and also the subsystem for Linux. What I really bought that Chromebook for, though, was to convert it into a cheap Linux notebook by reflashing the BIOS using the methods laid out by MrChromebox ( https://github.com/MrChromebox ). I'd previously experimented with Chromium OS, the open-source version of ChromeOS that you can run on any computer, and then with Neverware CloudReady. But really I just wanted a cheap, well-built laptop that would run a full Linux distro with a good level of hardware compatibility (something Linux still struggles with), so I converted that laptop into a Linux laptop and thought nothing more about ChromeOS, other than as an interesting OS that wasn't really up to what I wanted to do with it: great for secure browsing and security in general, but not for me.
ChromeOS itself is based on Linux (Gentoo, if I recall correctly), and being stupidly secure and almost idiot-proof it doesn't take much maintenance. That Linux base is where it gets interesting for me. Google bought up Neverware and now supports and develops CloudReady as a free, downloadable and installable OS. You literally just install the Chromebook Recovery Utility in the Chrome browser, run it, insert a USB stick, and it does all the rest, writing to the USB stick and creating the install medium for you.
One of the frustrations of using Linux is that there are all these wonderful tools and methods available that I can use in my work, but they either don't translate well to Windows, even when using a git-bash terminal and a fully open-source tool set (see my previous posts on Windows 10), or people just find Linux too complicated to install. Once it is installed they have problems with hardware compatibility (Wi-Fi not working would be the main one), or find it too difficult to set up and use because they have a Windows mindset. Half the problem of trying to explain or teach the tools I use is the installation of an OS and then unfamiliarity with the environment: it won't connect to Wi-Fi, or won't run the screen resolution correctly because the distro is fully libre, or you have to enable the right repository (non-free in Debian, for instance). So finding Explaining Computers' video on YouTube (he has a great channel, go and subscribe!! https://www.youtube.com/@ExplainingComputers ) about Google's update of CloudReady, ChromeOS Flex, was an eye opener. Watch it here, it explains everything far better than I can: https://www.youtube.com/watch?v=AFAg1FkGgMM&t=588s , along with the next video, which explains how to enable the Linux subsystem, install Linux apps, and share files between ChromeOS Flex and the Linux subsystem: https://www.youtube.com/watch?v=AsWgzH3OzYY&t=48s
It was this video that gave me the hint that maybe there was a way of introducing glitch-art techniques to a wider audience: both those who wanted to install a Linux-based distro but were having problems getting a system up and running, and those who, for budget reasons, might only have a Chromebook to work with. It also suits reusing older computers (via ChromeOS Flex), and taking advantage of the number of Chromebooks floating around after the pandemic, which people were selling off or which shops were discounting due to lack of demand.
This is by way of an introduction to my project of making glitch art on ChromeOS Flex/Chromebooks, as what works on ChromeOS Flex will also run on a Chromebook.
To start off with, I'm running ChromeOS Flex on a Lenovo ThinkPad Edge 535 from 2012 (11 years old!!) with an AMD A8-4500M APU, 16GB of DDR3 RAM and a 250GB SSD; everything else is stock, and it cost 150 euros secondhand. I've installed the subsystem for Linux, which runs in an LXC container, and allowed it to share files with ChromeOS Flex. And that's it. Oh yes, I've installed ffmpeg etc., my usual tools, and am running through what works and what doesn't. There will be more later, but for now here's a peek at what I'm doing. This is a standard hex edit of a film, Haunted Castle, downloaded from archive.org, re-encoded from h264 to Xvid using ffmpeg, then live hex edited using one of my scripts and captured using ChromeOS Flex's screen-capture program (which handily uses webm for capture).
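For reference, the re-encode step boils down to a one-line ffmpeg call. This is a sketch under assumed filenames and quality settings (the post doesn't give the exact flags used); the command is built and echoed here so you can check it before running it on a machine with ffmpeg installed:

```shell
# Sketch of the h264-to-Xvid re-encode (filenames and -qscale value are
# assumptions). Xvid's simple macroblock structure glitches well when the
# resulting file is hex edited.
in="haunted_castle.mp4"
out="haunted_castle.avi"
cmd="ffmpeg -i $in -c:v libxvid -qscale:v 4 -c:a libmp3lame $out"
echo "$cmd"   # run this once ffmpeg is installed and the input file exists
```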
My version takes input in real time from a webcam or, on Linux, from a v4l2loopback virtual camera running a custom script, and also adds convolution; it looks like the script below:
The video above is an example of this working: I had Windows 95 open in a VM using VirtualBox, started the v4l2loopback virtual camera, then opened the script below, hovered the mouse over the Windows 95 VM and started playing Solitaire. The video was captured in real time using SimpleScreenRecorder. (The script below is a work in progress, so it still has notes to myself and additions that I'm playing with, which may or may not be commented out. It is provided as-is; use at your own discretion. It might work on Windows using webcam or OBS Studio virtual-camera input.)
import processing.video.*;

PImage edgeImg;
PImage img;

String imgFileName = "face";
String imgName = "face2";
String fileType = "jpg";
float r = 1;
// change values in the matrices below for different effects
// (the matrix definition itself is not included in this excerpt)

// Calculate the convolution
// change loopend for the number of iterations you want to run
int loopstart = 0;
int loopend = 5;
int xstart = 0;
int ystart = 0;
int xend = 640;
int yend = 640;
// matrixsize is the number of rows down in the matrix
int matrixsize = 6;

// Begin our loop for every pixel in the image
for (int l = loopstart; l < loopend; l++) {
  for (int x = xstart; x < xend; x++) {
    for (int y = ystart; y < yend; y++) {
      color c = convolution(x, y, matrix, matrixsize, edgeImg);
      int loc = x + y * edgeImg.width;
      edgeImg.pixels[loc] = c;
    }
  }
}
image(edgeImg, 0, 0, edgeImg.width, edgeImg.height);
//image(img, 0, 0, img.width, img.height);
} // closes the enclosing draw(), whose opening is not shown in this excerpt

color convolution(int x, int y, float[][] matrix, int matrixsize, PImage edgeImg) {
  float rtotal = 0.0;
  float gtotal = 0.0;
  float btotal = 0.0;
  // change the divisor to set the offset relative to the rows across the matrix
  int offset = matrixsize / 6;
  for (int i = 0; i < matrixsize; i++) {
    for (int j = 0; j < matrixsize; j++) {
      // What pixel are we testing?
      int xloc = x + i - offset; // was "x+1-offset" in the paste; "i" assumed intended
      int yloc = y + j - offset;
      int loc = xloc + edgeImg.width * yloc;
      // Make sure we haven't walked off our image; we could do better here
      loc = constrain(loc, 0, edgeImg.pixels.length - 1);
      // Accumulate the weighted colour channels
      rtotal += red(edgeImg.pixels[loc]) * matrix[i][j];
      gtotal += green(edgeImg.pixels[loc]) * matrix[i][j];
      btotal += blue(edgeImg.pixels[loc]) * matrix[i][j];
    }
  }
  // Make sure RGB is within range
  rtotal = constrain(rtotal, 0, 255);
  gtotal = constrain(gtotal, 0, 255);
  btotal = constrain(btotal, 0, 255);
  // Return the resulting colour
  return color(rtotal, gtotal, btotal);
}
The most interesting part of the video, I think, is the point where the solitaire cards are bounced out of their piles, as below, which has led me to think of using the script with other inputs, like the previously explained sudacam scripts.
* This is a version of the talk I gave online during Fubar 2022
Hello everyone. I'd like to welcome you to my desktop, as this is what I will be talking about. I want to start with a digression into art history. Consider this, “The Treachery of Images” by René Magritte, from 1929.
It is a painting of a pipe with text in French which reads ‘this is not a pipe’. It seems obvious to me, as an ex-painter, that this is indeed not a pipe; it is a painting of a pipe. I'd like you to keep this in mind throughout this talk. My desktop might superficially look like Windows 95, but it isn't. Again, bear this in mind.
How do we interact with our computers? In the beginning, after punch cards, was the command line, be it on mainframes or later home computers, and first it travelled by teletype. A teletype is essentially a two-way terminal connected to a computer, using a keyboard and paper roll for input and output. There is a great video of one here: https://www.youtube.com/watch?v=gMIL2bvUYIs&t=2s
After the teletype came the VDU, or visual display unit. Rather than a paper roll we now have a screen for interaction. Essentially it is the same as a teletype, but it removes one level of interaction: rather than seeing a continuous, physical, printed record of what we have typed and received back from the computer, we see a screen with a non-physical electronic readout.
But the beginnings of the interface as we understand it came with what is known as ‘the mother of all demos’, given by Douglas Engelbart on December 9, 1968, so called because it introduced pretty much all of the fundamental ideas behind modern computer graphical interfaces (more info here: https://en.wikipedia.org/wiki/The_Mother_of_All_Demos ).
To quote from the wikipedia article: “The live demonstration featured the introduction of a complete computer hardware and software system called the oN-Line System or, more commonly, NLS. The 90-minute presentation demonstrated for the first time many of the fundamental elements of modern personal computing: windows, hypertext, graphics, efficient navigation and command input, video conferencing, the computer mouse, word processing, dynamic file linking, revision control, and a collaborative real-time editor. Engelbart's presentation was the first to publicly demonstrate all of these elements in a single system. The demonstration was highly influential and spawned similar projects at Xerox PARC in the early 1970s. The underlying concepts and technologies influenced both the Apple Macintosh and Microsoft Windows graphical user interface operating systems in the 1980s and 1990s.”
Considering that that demonstration was in 1968, let's look at what we call a desktop, be it based on some derivative of the following:
Macintosh classic desktop
Windows 3.11 desktop.
Windows 95 desktop.
Mac OS X 10.5.4 desktop
The GNOME 3 desktop
The interface that we use to interact with our computers or devices, the icons that we click on for a Word document, the images that we manipulate with the GIMP or other software, a paintbox program that gives us brush options and colours, the video editor that shows us a timeline laid out in film-strip fashion: all of these inhabit the same space as the pipe in ‘The Treachery of Images’. They are nothing more than signifiers that allow us to navigate an alien and strange landscape which bears no relation to how we see it.
“Skeuomorphism is a term most often used in graphical user interface design to describe interface objects that mimic their real-world counterparts in how they appear and/or how the user can interact with them. A well-known example is the recycle bin icon used for discarding files. Skeuomorphism makes interface objects familiar to users by using concepts they recognize.”
Unlike the pipe in Magritte's painting, we can interact with the ‘objects’ on our desktops. We have menus to scroll and click through;
We have buttons to press;
waste-bins to fill and to empty;
file managers to organise our files;
It makes the unreal real, an electronic office in which we can ‘work’.
This paradigm of the desktop metaphor reflects the corporate origins and mindset of most of the software we use. Both Windows and macOS are designed for cubicle dwellers, office drones and ad jockeys.
How can we break out of those inbuilt constraints, move away from mimicking ‘real-world counterparts’, and use the software in a performative and meaningful way, to make art from and in this space, rather than using its tools to create ‘work’ that is then seen as separate from the environment in which it was created? In other words, how do we stop ourselves from perceiving a desktop when what is in front of us is in fact an entirely different thing?
Jodi's My%Desktop was one of the first hints to me personally that something else could be done within this space (more info here: https://www.moma.org/calendar/galleries/5160 ). As it says on the MoMA site for that exhibition:
"JODI recorded various versions of My%Desktop in front of live audiences, connecting their Macintosh to a camcorder and capturing their interactions with the user-friendly OS 9 operating system. The resulting “desktop performances,” as the artists call them, look at ways that seemingly rational computer systems may provoke irrational behaviour in people, whether because they are overwhelmed by an onslaught of online data, or inspired by possibilities for play. What appear to be computer glitches are actually the chaotic actions of a user. “The computer is a device to get into someone’s mind,” JODI explained, adding, “We put our own personality there.”"
Notice that they say ‘desktop performances’. This became of primary importance to my own work after working on the Format C medialab project, Suda, which I will come to shortly.
This idea of desktop performance answered one of the main problems I'd hit in my own work: a tiredness with reusing found sources. Given the nature of remix culture, the same sources can crop up in the work of other artists. It's not that I don't recognise that work as valid; it was more a wanting to explore new avenues, separate from the meanings that found work can carry, and the realisation that the desktop, as well as being a performative space, can also become a way of generating unique images and video which interrogate the space on its own terms. Rather than the desktop being a place to work in, it becomes a space to work with: a material in itself. (Video of Jodi's My%Desktop working here: https://www.youtube.com/watch?v=CPpUQQ7vMBk )
What Jodi's My%Desktop illuminates is that the desktop is at heart a feedback system: if we press a button, something usually happens; if we drag a window, that window moves. Actions have consequences and meanings separate from those we assign via skeuomorphic design. It becomes an arena or a performance space by the very act of our presence and our interactions, as you are now, within the space of your own screen.
One of the fundamentals of glitch art is feedback. Are there ways to turn the desktop against itself to generate more feedback that we can use? The obvious answer would be to turn a webcam to face the screen. This gives varying results, and can also be seen in the context of the early experiments with video feedback conducted by the pioneers of video art in the 60s, 70s and 80s. This approach can be fun, especially when used alongside the Processing environment.
For Suda we wanted something lightweight which would work online with little lag. Originally I experimented with twm, which compared to Windows 95 or Mac OS is archaic, first appearing in the late 1980s (it has similarities with CDE, the Unix Common Desktop Environment, but crucially is open source, whereas CDE was/is proprietary). The first hint that something might be usable came from realising that minimised icons behaved very oddly and looked somewhat glitchy, for example this;
or more simply;
In the end we chose ctwm because it's low-resource and fairly easy to configure using a single text file, and, more importantly, unlike twm it has click-to-focus, which we needed for one of the scripts that Suda runs. Whilst working with twm on Parabola Linux, trying to find a way of changing the background, I discovered something interesting: issuing the command xsetroot -bg grey allows us to drag windows across the screen leaving behind trails and traces, turning icons and windows into brushes. It also reminds me of this from Win 95/98;
Discovering this was only possible because, as I later found, Parabola Linux uses lxdm as the default display manager in the live version I had installed. If you don't use Linux, or haven't delved into it much: on Linux the display manager, which communicates with the X11 server (which controls how things are displayed on screen, i.e. screen size, what sort of screen it is, whether CRT, LCD or laptop, and input devices like mouse and keyboard), is separate from the desktop environment you use, and is configured using a text file, in this case /etc/lxdm/default.conf (or lxdm.conf on some distros). In default.conf there is this one line:
arg=/usr/bin/X -background vt1
On every other distro I've looked at this line is commented out (i.e. with a #), but not on Parabola, and once we uncomment that line on other distros we get the glitchify effect. This separation between desktop, window manager and display server is what makes this area of enquiry possible.
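A minimal sketch of enabling this on a distro that ships the line commented out. It works on a local copy rather than editing /etc/lxdm/default.conf in place, and the [server] section name and exact comment style are assumptions; check your own file first:

```shell
# Simulate a distro's shipped default.conf, with the line commented out:
cat > default.conf <<'EOF'
[server]
# arg=/usr/bin/X -background vt1
EOF
# Uncomment the arg= line so X no longer repaints the root window:
sed -i 's|^#[[:space:]]*\(arg=/usr/bin/X.*\)|\1|' default.conf
grep '^arg=' default.conf
```

Copy the edited file back to /etc/lxdm/ (as root) and restart the display manager to see the effect.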
So, as I said earlier, a lot of what I do at the moment is based around online collaborations, specifically around the medialab project Suda. I'll just show Suda running in a short video below.
How does Suda work? Suda is a set of scripts designed to be run and interacted with online, in a kind of server-as-installation piece. Its desktop is dictated and adjusted by the text file .ctwmrc, a human-readable and editable file which can be altered as needed, allowing us to call up scripts which run certain programs. You can view the version of .ctwmrc that my version of Suda, Suda-SE, uses here: https://crash-stop.blogspot.com/2023/01/suda-se-ctwmrc.html
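As a hypothetical illustration of what such an entry looks like (the script names here are invented; the real file is at the link above), a ctwm root-menu fragment calling shell scripts might read like this, using ! as ctwm's shorthand for f.exec:

```
menu "suda" {
    "Suda"       f.title
    "glitchify"  !"sh ~/glitchify.sh &"
    "sudacam1"   !"sh ~/sudacam1.sh &"
}
```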
At the heart of the original online Suda are four scripts: glitchify (already discussed), sudacam1, sudacam2 and manifesto (which calls a second script, suda-ctwm-xdotool.sh). Sudacam 1 and 2 use ffplay to open two windows, one at 640x480 resolution and one at 352x288, which act as lenses showing whatever the mouse passes over, via x11grab with -follow_mouse centered. The exact command line is this (with changes between the two for size):
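The scripts themselves aren't reproduced here, but from the description above a sudacam command boils down to something like the following sketch (the display name and any extra flags are assumptions). The command string is built and echoed so it can be checked before being run inside an X session:

```shell
# Sketch of a sudacam-style "lens": x11grab captures the region around the
# pointer (-follow_mouse centered) and ffplay shows it in a window.
size=640x480   # sudacam1; sudacam2 would use 352x288
cmd="ffplay -f x11grab -follow_mouse centered -video_size $size -i :0.0"
echo "$cmd"    # run inside an X session with ffmpeg/ffplay installed
```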
These simple elements allow me to build up a visual complexity quite quickly, for example;
Feedback loops are essentially generative, and what they create is often unexpected, depending on the elements I bring up or delete, open or close. It escapes the trap of looking for material to make glitch art from, by becoming the process through which glitch can be created in real time from the desktop. More than that, it becomes an arena to play within, outside the formal office-space bounds of the GUI-based operating system, or what we expect from linear narrative video, and becomes almost a continuous collage of moving and static elements, almost as if I were playing with hardware glitching techniques.
I should at this point explain that the desktop environment I now use is one I've only just started working with. The previous desktops that I talked about, ctwm and twm, are simpler and use only one configuration file, which allows me to add menu entries to call up scripts. The desktop environment I use now, IceWM, is also quite configurable: by altering the text file ~/.icewm/toolbar I can add any scripts or programs I want, assigned to buttons on the toolbar at the bottom, so in effect I am configuring the workspace to trigger the shell scripts I need to generate feedback in real time. I have also come to appreciate the elements of the desktop as integral to what I'm doing, such as terminal output, icons, window bars, error messages, and the trails and movement of the mouse itself; I try not to hide these. So I can recreate what I was doing in ctwm, but also add new features, like a script which snapshots an area of the screen and then sets the background to that (as well as saving those snapshots for later use). An example of this is below.
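A sketch of how such a snapshot-to-background script might look, assuming scrot for the area grab and feh for setting the background; the actual script may use different tools. The X-dependent commands are left as comments so the filename logic stands on its own:

```shell
# Snapshot an area of the screen, keep it for later, set it as background.
snapdir="$HOME/snaps"            # snapshots saved here for later reuse
mkdir -p "$snapdir"
shot="$snapdir/snap-$(date +%s).png"
echo "$shot"
# inside an X session you would then run:
#   scrot -s "$shot"             # drag-select a screen region
#   feh --bg-tile "$shot"        # tile that region as the new background
```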
I also have scripts set up which change the colourspace of the windows opened, and thus the quality of the feedback. These are variations on the sudacam scripts which add hex editing as well as playing with colourspace; for example, rather than
And we can add in more elements, using a slightly altered version of the final script triggered above.
By adding a script to ~/.icewm/toolbar which calls a video, I can also introduce a pre-made video into the feedback process, like so (though a better example might be this: https://www.youtube.com/watch?v=matpsXvoP0c&t=5s ).
And finally, we can add words. First I created a script which mashes several texts together line by line; these act as seed texts for the next stage, which takes each seed text and runs it through dadadodo (a text-mangling program akin to William Burroughs' cut-up techniques). Then two scripts run in parallel, writing lines of text into a text editor's window. We can then use that output as input for the other feedback windows I've already generated, and add in a bit of displacement mapping as well, by altering one of the scripts to this:
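The line-by-line mashing stage described above can be sketched like this (the filenames are stand-ins, and dadadodo's -c flag sets how many sentences it emits; the displacement-mapping alteration itself is not reproduced here):

```shell
# Interleave two source texts line by line to make a seed text:
printf 'a1\na2\n' > text1.txt      # stand-ins for the real source texts
printf 'b1\nb2\n' > text2.txt
paste -d '\n' text1.txt text2.txt > seed.txt
cat seed.txt
# then mangle the seed text, e.g.: dadadodo -c 2 seed.txt
```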
In essence, I've created a feedback toolbox for the desktop, one which feeds on its own imagery and generates broken and glitched visuals in a constantly evolving and recirculating way. It can be used live as a performance tool, or saved as video and edited down later, looking for moments and threads which fit well together, as in this longer-form video here;
And this is the modified toolbar file in ~/.icewm/ that provides the functionality I'm using. The entries are written in a format like this:
That is: the name as it appears in the menu or on mouseover of the toolbar icon; then the icon to use, e.g. application-x-shellscript.png (for a shell script), and icons must be accessible to IceWM, i.e. in ~/.icewm/icons/ (I cheated and moved some system-wide ones I wanted to use into that folder to make them accessible); and finally the path of the program or shell script (programs are generally in /usr/bin/, whilst shell scripts I keep in my home folder). So below is my ~/.icewm/toolbar as used above.
# This is a default toolbar definition file for IceWM
#
# Place your personal variant in $HOME/.icewm directory.
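As a hypothetical illustration of the entry format described above (these are not my actual entries, and the paths are invented), IceWM toolbar lines look like:

```
prog "sudacam1" application-x-shellscript.png /home/user/sudacam1.sh
prog "ffplay"   application-x-shellscript.png /usr/bin/ffplay
```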
This is the basic configuration file for the ctwm desktop environment used in the modified version of Suda that I run on my own computers. The text is self-explanatory; it's based on Thomas Eriksson's and Steve Litt's original .ctwmrc files, found via the xwinman website. This is obviously specific to my setup, and you will have to change the things that relate to your own home folder, the shell scripts you want to run and the programs you want to appear in your own menus.
#Modified .ctwmrc based on the work of those mentioned below - Thomas Eriksson and STEVE LITT
#
# $XConsortium: system.twmrc,v 1.8 91/04/23 21:10:58 gildea Exp $
#
# A little $HOME/.twmrc by Thomas Eriksson brummelufs@hotmail.com
#
# Modified (just colors and some menu options) by Istvan Keppler keppler@lajli.gau.hu
#
# twm... the original and the best...
#
#---------------------------------------------------------
# EMERGENCY
# This line is here so that U will be able 2 restart CTWM
# even if the rest of the file doesn't work very well!
# To restart in emergency press shift and F9 !
#---------------------------------------------------------
######## BEGIN STEVE LITT'S MODERNIZATION SETTINGS #########
ClickToFocus               # Prevent mouse movement changing focus
RaiseOnClick               # Window rises when clicked
RaiseOnClickButton 1       # When clicked with button 1, that is.
UsePPosition "on"          # Help kbd instantiated windows get focus
RandomPlacement "on"       # Help kbd instantiated windows get focus
AutoFocusToTransients      # Help kbd instantiated windows get focus
SaveWorkspaceFocus         # Obviously workspace focus should be retained
#WindowRing                # Enable Alt+Tab type window circulation
#WarpRingOnScreen          # Enable Alt+Tab type window circulation
IgnoreCaseInMenuSelection  # Make menus much easier via keyboard
ShortAllWindowsMenus       # Don't show iconmgr and workspace mgr in lists
######## END STEVE LITT'S MODERNIZATION SETTINGS #########
#---------------------------------------------------------
# PIXMAP CONFIG and
# PIXMAPS
#---------------------------------------------------------
PixmapDirectory "/usr/share/ctwm/images" # dir. for xpms
Pixmaps {
  #TitleHighLight "xpm:blue4.xpm"
}
# ---------------
# Managers
# ---------------
# this section for bars on the right side showing open or minimised windows, like the taskbar in win or lxde
ShowIconManager
SortIconManager
IconManagerGeometry "64x128-0+0" 1
#This section shows the workspaces available in the bottom right hand corner; default here is four
# Uncomment to have this function
#DontPaintRootWindow
#ShowWorkSpaceManager
#StartInMapState
#workspacemanagergeometry "260x66-0-0" 4
#MapWindowCurrentWorkSpace {"Red" "SkyBlue" "Blue" "xpm:ASBBlockGreen.xpm"}
#WorkSpaces {
#"One"   {"grey" "black" "Blue3" "blue" "xpm:ASBBlockBlue.xpm" }
#"Two"   {"grey" "green" "black" "blue" "xpm:ASBBlockBlue.xpm" }
#"Three" {"grey" "green" "black" "blue" "xpm:ASBBlockBlue.xpm" }
#"Four"  {"grey" "white" "black" "blue" "xpm:ASBBlockBlue.xpm" }
#}
# This is for hiding the ugly menubar in windows who dont need em.
NoTitle {
# "xosview"
}