Wednesday 3 July 2024

New script I'm working on.


The image above is the output from a new script I'm working on. It's similar to other scripts I've created recently, but rather than working on a folder of images it grabs an image from your screen, hex edits it (if you say yes), divides it into a grid of images, reassembles it and rotates it in a random direction, then grabs another image from the screen and uses gmic's xor blend to combine the first and second images. It saves the result into a new folder and carries on until you Ctrl-C the script. It requires ImageMagick, scrot and the gmic CLI, is Linux-only at the moment, and runs from a bash terminal. The script below is very much a work in progress, so use at your own risk. (PS: my scripts can now be found on codeberg here.)

 

#!/bin/bash
#screen grabbing desktop alter dimensions to suit
#
h=$(pwd)
echo $h
dd=$(date +%Y-%m-%d_%H%M%S)
mkdir $h/$dd
#Questions questions
echo -n "Screen size  1368x768 (s), medium 1600x900 (m), large 1920x1080 (l), box 768x768(1600x900) (b) ? : "
read sc
#
echo -n "Use Stereoscopic colour shifting (y/n) ? : "
read shft
#use splittor
echo -n "Use splittor (y/n) ? : "
read spl
#
echo -n "Make cubes (y/n) ? : "
read cn
#
echo -n "Use Rotation (y/n) ? : "
read ro
if [ "$ro" == "y" ] || [ -z "$ro" ]
then
#echo -n "Degrees to rotate (0-365) ? : "
#read dg
echo -n "Number of rotations ? : "
read numro
fi  
echo -n "Use Contra-rotation (y/n) ? : "
read cro
#
echo -n "Use gmic xor displacement (y/n) ? : "
read gmc
#
echo -n "multiple(m) or single(s) or none(n) hex Editing ? : "
read hex
if [ "$hex" == "s" ]
then
echo -n "Target value ? : "
read tr
echo -n "Image format to use ? : "
read f
fi
#

echo "Press CTRL+C to stop..."
#
for ((;;))
do
sleep 15
d=$(date +%Y-%m-%d_%H%M%S)
if [ "$sc" == "l" ] || [ -z "$sc" ]
then
scrot -z -a 0,0,1920,1080 $h/$dd/$d.png
cd $h/$dd/
elif [ "$sc" == "m" ]
then
scrot -z -a 0,0,1600,900 $h/$dd/$d.png
cd $h/$dd/
elif [ "$sc" == "s" ]
then
scrot -z -a 0,0,1368,768 $h/$dd/$d.png
cd $h/$dd/
elif [ "$sc" == "b" ]
then
scrot -z -a 417,97,768,768 $h/$dd/$d.png
cd $h/$dd/
fi


if [ "$shft" == "y" ] || [ -z "$shft" ]
then
composite -stereo -50+20 $d.png $d.png result.png  
mv result.png $d.png
fi

if [ "$hex" == "s" ]
then
  to=$(openssl rand -hex 1)
mogrify -format $f $d.png
sed '0,/'$tr'/s//'$to'/' $d.$f > swap.$f
mogrify -format png swap.$f
rm swap.$f
rm $d.$f
mv swap.png $d.png
fi
if [ "$hex" == "m" ]
then
from=$(openssl rand -hex 1)
  to=$(openssl rand -hex 2)
mogrify -format ppm $d.png
sed 's/\x'$from'/\x'$to'\x'$from'/g' $d.ppm > swap.ppm
#sed '0,/'$from'/s//'$to'/' $d.ppm > swap.ppm
mogrify -format png swap.ppm
rm swap.ppm
rm $d.ppm
mv swap.png $d.png
fi
if [ "$spl" == "y" ] || [ -z "$spl" ]
then
mogrify -format ppm $d.png

split -n 24 $d.ppm
rm $d.ppm
cat xaa xac xab xae xad xag xaf xai xah xak xaj xam xal xao xan xaq xap xas xar xau xat xaw xav xax  > swap.ppm
mogrify -format png swap.ppm
rm swap.ppm
mv swap.png $d.png
rm xaa xab xac xad xae xaf xag xah xai xaj xak xal xam xan xao xap xaq xar xas xat xau xav xaw xax
fi
#
if [ "$cn" == "y" ] || [ -z "$cn" ]
          then
          gmic $d.png frame_cube , -o swap.png;
          mv swap.png $d.png;
          fi
if [ "$ro" == "y" ] || [ -z "$ro" ]
          then
          i=0
          while [ $i -lt $numro ]
            do
            ((i++))
            rnd=$((1 + $RANDOM % 360))
            convert $d.png -distort SRT $rnd rotate.png
            mv rotate.png $d.png
            done
          fi
#
if [ "$gmc" == "y" ] || [ -z "$gmc" ]
then
#rndx=$((1 + $RANDOM % 450))
#rndy=$((1 + $RANDOM % 800))
scrot -z -a 417,97,768,768 swap.png
if [ "$cro" == "y" ] || [ -z "$cro" ]
then
rnd=$((1 + $RANDOM % 360))
convert swap.png -distort SRT -$rnd rotate.png
            mv rotate.png swap.png
fi
gmic $d.png swap.png  -blend xor -o swap2.png
mv swap2.png $d.png
rm swap.png
 

fi

cd $h/

done
 

 

 


Eigenstate 2 - more experiments with ffmpeg, x11grab and generating feedback on the linux desktop

These are the command lines I used in a recent online demonstration showing how simple tools can lead to visual complexity. They were run from a standard bash terminal on Devuan Linux 4 (not Debian) with ffmpeg 4.4.x. They need to be altered for ffmpeg 5 and above, as there are differences in the way the commands must be written, and they won't work without those changes. They will also only work with X11, not Wayland, and will most probably not work on Windows.

A simple X11 grab using ffplay: 'ffplay -f x11grab -follow_mouse centered -framerate 10 -video_size 640x480 -i :0.0'

Feeding the output of ffmpeg into ffplay:

ffmpeg -f x11grab -follow_mouse centered -framerate 5 -video_size 1920x1060 -i :0.0 -f rawvideo -vcodec rawvideo -pixel_format bgr0 -video_size 1920x1060 - | ffplay -f rawvideo -vcodec rawvideo -pixel_format bgr0 -video_size 1920x1060 -

The more complicated command line also allows us to play with colour space by changing the pixel format:

ffmpeg -f x11grab -follow_mouse centered -framerate 5 -video_size 1280x720 -i :0.0 -f rawvideo -vcodec rawvideo -pix_fmt monob -video_size 1280x720 - | ffplay -f rawvideo -vcodec rawvideo -pix_fmt monob -video_size 1280x720 -

in this case from a colour format (bgr0) to a black and white dithered format (monob).
Or we could change the output codec from rawvideo to a strange format like tmv (created by enthusiasts to enable video playback on the original IBM 8088-powered PC):

ffmpeg -f x11grab -follow_mouse centered -framerate 5 -video_size 1280x720 -i :0.0 -f rawvideo -vcodec rawvideo -pix_fmt monob -video_size 1280x720 - | ffplay -f rawvideo -vcodec tmv -pix_fmt monob -video_size 1280x720 -

Or we could use ffmpeg's displace filter:

ffmpeg -f x11grab -follow_mouse centered -framerate 23 -video_size 640x640 -i :0.0 -f x11grab -follow_mouse centered -framerate 23 -video_size 640x640 -i :0.0 -lavfi '[1]split[x][y],[0][x][y]displace' -f rawvideo -pix_fmt pal8 - | ffplay -f rawvideo -pix_fmt pal8 -vf "rotate=1.23" -s 640x640 -

Final windows used in the demonstration

3 vertical one square rotating

normal vertical

ffmpeg -f x11grab -follow_mouse centered -framerate 5 -video_size 640x1060 -i :0.0 -f rawvideo -vcodec rawvideo -pix_fmt bgr0 -video_size 640x1060 - | ffplay -f rawvideo -vcodec rawvideo -pixel_format bgr0 -video_size 640x1060 -

hflip vertical

ffmpeg -f x11grab -follow_mouse centered -framerate 5 -video_size 640x1060 -i :0.0 -f rawvideo -vcodec rawvideo -pix_fmt bgr0 -video_size 640x1060 - | ffplay -f rawvideo -vcodec rawvideo -pixel_format bgr0 -vf hflip  -video_size 640x1060 -

vflip vertical

ffmpeg -f x11grab -follow_mouse centered -framerate 5 -video_size 640x1060 -i :0.0 -f rawvideo -vcodec rawvideo -pix_fmt bgr0 -video_size 640x1060 - | ffplay -f rawvideo -vcodec rawvideo -pixel_format bgr0 -vf vflip  -video_size 640x1060 -

square monob rotating

ffmpeg -f x11grab -follow_mouse centered -framerate 23 -video_size 640x640 -i :0.0 -f x11grab -follow_mouse centered -framerate 23 -video_size 640x640 -i :0.0 -f rawvideo -pix_fmt monob - | ffplay -f rawvideo -pix_fmt monob -vf "rotate=1.23" -s 640x640 -

Using simple tools we can turn the desktop into a complex environment for experimenting with feedback, and by adding video playback we can make it even more complex and rich.

And experimenting with ffmpeg filters can take it even further.
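For example, a minimal sketch (assuming ffmpeg 4.4's rgbashift filter, which also turns up in the scripts elsewhere on this blog) that colour-shifts the live grab:

ffplay -f x11grab -follow_mouse centered -framerate 10 -video_size 640x480 -i :0.0 -vf "rgbashift=rh=-20:bh=20"

Point the window at the area it is grabbing and the shifted channels feed back on themselves with every pass.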

This was made using the above tools:


 Eigenstate 2

 

Friday 28 June 2024

Basic glitch art tool kit

This is an updated rewrite of an earlier post from 2021, in response to a question I was asked on Tumblr recently about getting started making glitch art using non-proprietary software. Hopefully it will be useful and furnish you, the reader, with a good general toolkit for making glitch art on Windows machines without paying for, or obtaining cracked versions of, proprietary software that restricts your freedom to do what you will with the hardware you own. It's also a basis for starting to move away from closed-source operating systems in general.

* When I talk about Windows I mean Windows 10, not eleven (not doing that), and when 10 becomes EOL in less than a year, that's the end of me writing guides for Windows. I'll just stick to Linux.

I mainly use Linux myself and my work is mainly script based. I keep a Codeberg repository for the scripts I use the most, some of which are also for Windows 10 (details in the readme), but most could be adapted for Windows 10 quite easily. My Codeberg is here: https://codeberg.org/crash-stop


Hardware requirements: I believe in recycling and reusing old equipment as much as possible. My most modern desktop is a 4th gen i3 with 8GB of RAM using that chip's built-in graphics, and I've successfully created and edited video on older equipment. That said, the minimum to achieve anything useful would be a late-generation Core 2 Duo with 4GB of RAM. One of my laptops (I call it a potato), which I used for testing until recently, rocked an elderly Celeron N2840 (essentially a souped-up Atom processor), and that could quite successfully edit and render very short videos and run hex editing programs in real time, running Linux and batch scripts and suchlike with Windows 10 (though these days, i.e. 2024+, a second gen i3 and 8GB of RAM is really the minimum on Windows 10). Know your architecture: are you running 32-bit or 64-bit? I make no judgements, but the links I've given are generally for 64-bit software. There are 32-bit versions available, and most pages will give you links for both, so remember to check before you download and get frustrated that some software won't install.

For both Windows and Linux users it can be helpful to have access to either a built-in or an external webcam.

For Windows 10, when installing software, be aware that a recent update means some software may trigger a 'software from unknown source' alert in a dark blue box; you will have to allow the software to be installed by clicking through that message, even though the software is from a trustworthy source. That being said, the first piece of software to install is

1) Notepad++, which is useful for reading readmes and for creating bash scripts (though my preferred Windows and Linux editor is Geany). This is installed in the usual Windows way; get it here: http://notepad-plus-plus.org/

It will also help if you want to start experimenting with the wordpad effect (the best introduction to that is sTallio's, here: http://blog.animalswithinanimals.com/2008/08/databending-and-glitch-art-primer-part.html - though sTAllio refers to using Photoshop RAW as a format, something like ppm or bmp is near-raw too: just open them in Notepad++, alter some stuff, save with a new name and see what happens!)
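As a rough sketch of the preparation step (assuming ImageMagick is installed; the filenames are just examples):

convert input.jpg output.bmp
convert input.jpg output.ppm

Open output.bmp or output.ppm in Notepad++, skip past the first few lines (the header), change or paste some characters, save under a new name and see what the image viewer makes of it.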

You might also want to add a GUI hex editor (on Linux I use Bless); this is a nice basic one for Windows to start with: https://mh-nexus.de/en/hxd/ Start slowly, don't change too much at once, and remember to avoid the header and save as a new file.

2) After that, install Git for Windows, which will give us the bash-like terminal Git Bash. We can use it to manipulate video with hex editing, ffmpeg and ffglitch/ffgac from the command line; it's better than Windows PowerShell and includes some of the basic Linux/Unix command-line applications we will need. Get Git Bash here: https://www.atlassian.com/git/tutorials/git-bash

3) Windows, unlike Linux, doesn't really come with a package manager, so for things like ffmpeg it's easier to install a third-party package manager like Chocolatey. To install Chocolatey you will need to open Windows PowerShell as administrator (quick guide here: https://www.howtogeek.com/662611/9-ways-to-open-powershell-in-windows-10/): go to the Win icon in the bottom left-hand corner, click on it, type powershell into the search bar, and it comes out top of the list; right-click and open as administrator. Then go here https://chocolatey.org/install and follow the install instructions carefully (you can copy and paste the commands from there into Windows PowerShell). Read the prompts during installation carefully and answer Y when it asks. Keep PowerShell open as we are now going to install more interesting stuff.

4) Having installed that, keep PowerShell open and go to https://community.chocolatey.org/packages and search for ffmpeg (unless you already have it installed, in which case make sure you have it added to your path, of which more in a moment). Find the command to install ffmpeg (you should choose the latest version; I use ffmpeg 4.4, but that's for compatibility with older scripts), then copy and paste it into PowerShell opened as administrator. It should be 'choco install ffmpeg' (without the quote marks). Follow the prompts and answer 'y' when asked.

5) After ffmpeg, search for and install, using Chocolatey and PowerShell, ImageMagick ('choco install imagemagick'), Shotcut ('choco install shotcut') and Python ('choco install python'). Python is a programming language which we need to run Tomato, a datamoshing tool by Kaspar Ravel. Shotcut is an open-source video editor which has some interesting filters and rendering profiles, though Kdenlive is probably easier to use and can be downloaded from its website. Most of these applications have their own installers, but it's easier to download and install them via Chocolatey; for completeness' sake their websites are:
ImageMagick image editor - https://imagemagick.org/index.php
Shotcut video editor - https://www.shotcut.org/
 
You might want to install the SoX audio editor (handy for sonification in scripts if you aren't using ffmpeg, and for Vedran Gligo's megaglitchatron script here), but it's kind of tricky to install on Windows for use via Git Bash. You need to download the portable version sox-14.4.2-win32.zip, extract it, then copy the extracted files from within the folder and paste them into C:\Program Files\Git\usr\bin (presuming you have Git Bash for Windows installed; if not you will need to install that first). We can also do the same for the gmic CLI from here https://gmic.eu/download.html - download the command-line interface (CLI) zip.
 
A word on paths: if you already have Python and ffmpeg installed via their own discrete installers and not Chocolatey, then make sure you have them added to your system path so we can access them anywhere via the terminal shell Git Bash. This is easily done; there is a good guide here: https://medium.com/@kevinmarkvi/how-to-add-executables-to-your-path-in-windows-5ffa4ce61a53
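Alternatively, as a rough sketch (C:\ffmpeg\bin is just an example, adjust to wherever your binaries actually live), you can add a directory to the path for Git Bash only, from inside Git Bash itself:

echo 'export PATH="$PATH:/c/ffmpeg/bin"' >> ~/.bashrc
source ~/.bashrc
ffmpeg -version

If that last command prints version information from any directory, the path change worked. Note this only affects Git Bash, not PowerShell or the rest of Windows.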
 
4a) Video editing software   

If you've read through this previously you will notice I haven't included OpenShot in the list of software to install. OpenShot, though good, is particularly buggy on Windows and crashes a lot; instead, for video editing, download Kdenlive from here https://kdenlive.org/en/ (it's also available for Linux). A good introduction to completely libre and open-source video editors for Linux, Windows and MacOS can be found here: https://itsfoss.com/open-source-video-editors/ Personally on Linux I use Flowblade, and I'd recommend that for Linux users as it's the most stable I've found and copes with larger files really well (but it isn't great with odd dimensions, so I've found it essential to pre-resize all the clips you'll be working with using something like HandBrake). For most people Kdenlive is probably the better choice.

6) Close PowerShell and install, in the normal Windows way, The GIMP from here and Audacity from here,
and HandBrake from here (HandBrake is a really handy GUI for re-encoding and transcoding video; it may ask you to install a .NET library when you first start it - just say yes and follow the links to the download you need, i.e. .NET to run desktop apps), and the Transmission BitTorrent client from here.
On first running Transmission on Windows a Windows Defender pop-up will appear; press the allow access button with the shield to let Transmission connect to the internet. Transmission is a safe BitTorrent client we will use to download video from our source of choice if needed.
(I'm a little wary of recommending Audacity because of certain controversies over its management and direction over the last year, but it is still open source for now, and until a stable fork is created it will do.)
 
 
 
addenda - Audacity 

There is a known problem with Audacity on Windows: to import certain files like ogg audio or m4a we need to have ffmpeg installed, but unfortunately the version it requires is older and slightly different to the one we installed using Chocolatey, so we will have to download and install a different version for Audacity to reference. Issue and links here: https://manual.audacityteam.org/man/installing_ffmpeg_for_windows.html. Installer is here: https://lame.buanzo.org/ffmpeg64audacity.php

 
Audacity and sonification - online tutorial here by @kindred cameras https://www.youtube.com/watch?v=Z_Rut5gjwfE  and here by vaeprism https://www.youtube.com/watch?v=4iSe5qy8VwY&t=70s
 
7) Download and install OBS Studio from here: https://obsproject.com/
It's handy for making desktop videos and for capturing the ephemeral glitches we get when hex editing video and playing it back live using ffplay.

8) If you don't already have the VLC video player installed, do so; find that here. It's useful for video playback and screenshots. If you don't like using VLC and want something simpler you could try mpv. You can install mpv via Chocolatey, i.e. 'choco install mpv', or go to their website and follow the links (but Chocolatey is easier and you don't have to mess with adding it to the path): https://mpv.io/

9) Processing is one of the tools I use a lot, so download that here, but make sure to download version 3.5.4, not the newer 4.0 beta 1. Processing does not come with an installer but instead comes packaged as a zip file, so when downloaded, unzip it and look for the exe file to start it; you can pin that to your taskbar or add a shortcut on your desktop. Once started you will need to install several libraries (I will explain how later on). There is a difference in how Processing stores sketchbook files on Windows compared to Linux. This is important: on Linux, Processing creates a folder called sketchbook in the top directory of your home folder, while on Windows the sketches you create are saved and stored in your home folder here: /c/Users/yourusername/Documents/Processing/

10) Finally go here and download ffglitch/ffgac. Once it's downloaded, unzip it and leave it there until needed, as it's a standalone binary which doesn't need installing (you might need to install 7-Zip to unzip it, as the binary is zipped using that; 7-Zip is here https://www.7-zip.org/). And while we're at it, grab Kaspar Ravel's Tomato as well; download it from here https://github.com/itsKaspar/tomato
(click on the green button on the right-hand side which says Code, then click Download ZIP).


Linux in many ways is so much simpler and already has much of what we need installed, such as bash and the basic GNU coreutils that installing Git Bash on Windows gives us (more info on gnu-coreutils here). It's an altogether more flexible environment for making glitch art, given that the core of glitch art is about finding error through the misuse of tools, or rather using tools in a way in which they were not intended.

Anyway, Linux has diverse distributions and package managers, but chief amongst those are either Debian-based or Arch-based systems. On Ubuntu, Debian and Linux Mint my main method of installing software is from the command line, so fire up a terminal and, depending on what you do or don't have installed, and presuming you are running Debian 10 or 11, Ubuntu 18.04 and above, or Linux Mint 19.3 and above (this should also hold true for MX Linux, Devuan and derivatives), issue this command from any terminal:

sudo apt install ffmpeg imagemagick vlc mpv handbrake sox audacity kdenlive flowblade obs-studio transmission
 
Type in your password when asked, hit enter, and that's pretty much it.

I've added the mpv video player to the list for Linux as I've noticed of late, especially in Linux Mint 19.3 and above, that VLC has become very unreliable in use.

Shotcut is generally not available via Linux package managers, and rather than trying to install from source or adding cumbersome PPAs it's probably easier just to install it via flatpak; find that here: https://flathub.org/apps/details/org.shotcut.Shotcut
 
If you are using Arch-based distros you probably don't need any instructions on how to install programs or use package managers. On Parabola Linux I generally use a simple 'sudo pacman -S packagename' command, or run the Octopi package manager and search for whatever package I want. I install flatpaks manually from the command line; a good guide from the Arch wiki can be found here.
 

Installing Processing on Linux is actually pretty similar to Windows in some ways, except that once it's unpacked you can run the install.sh file, either by right-clicking and opening a terminal in the folder that has just been unpacked, or by opening a terminal, cd-ing to that folder and issuing the command './install.sh'. This will install Processing for the current user and leave a shortcut on your desktop and an entry in the programming section of your applications list; you can start Processing by clicking on either link.

Once you have Processing installed and started, on either Windows or Linux, you will need to install certain libraries. Find Tools in the menu running from left to right at the top of the blank sketch window that opens after startup, and from its drop-down menu choose Add Tool. Another window will open titled 'Contribution Manager'; click on the tab that says 'Libraries', scroll down and find the entry that says 'Video | GStreamer-based video library for Processing', click on that, then find the Install button, click it and wait for the library to install. Then click on the entry just below it titled 'Video Export | Simple video file exporter' and click Install again. This should give you the ability, with the right script, to initialize and use a webcam for input. To test that, use the script below: cut and paste it into the open blank sketch, make sure you have a webcam attached to your desktop PC or laptop, then press the start button above the test window (shaped like a cassette player's play button). All being well, after a short delay you should see video playback and a little glitchiness! This is a basic sketch I use; it needs a webcam.
 
import processing.video.*;

Capture video;

void captureEvent(Capture video) {
  video.read();
}

void setup() {
  size(640, 480);
  video = new Capture(this, width, height);
  video.start();
}

void draw() {
  //background(0);  // uncomment to clear the frame each pass
  video.loadPixels();
  for (int y = 0; y < height; y++) {
    for (int x = 0; x < width; x++) {
      int loc = x + y * video.width;
      // read the colour of this webcam pixel
      float r = red(video.pixels[loc]);
      float g = green(video.pixels[loc]);
      float b = blue(video.pixels[loc]);
      float av = ((r + g + b) / 3);  // average brightness, handy for experiments

      pushMatrix();
      translate(x, y);
      stroke(r, g, b);
      // only mid-red pixels get drawn, as a zero-size stroked square (a point);
      // with no background() call the rest of the frame persists, giving the glitchiness
      if (r > 100 && r < 250) {
        square(0, 0, 0);
      }
      popMatrix();
    }
  }
}
 
 
* You might also want to go to the Contribution Manager and, from the Examples tab, install the first set of examples, 'The Coding Book', and also 'The Nature of Code' by Daniel Shiffman. The Coding Book gives examples of fun things like slit scan and time displacement, which are good starting points for glitch art code in Processing.

Mac OS - I have no real experience of using Macs or any Apple products, and as inclusive as I wish to be software-wise, I can't help finding Apple's walled-garden approach to hardware and software a big turn-off and contrary to a lot of my views on open source and software and hardware freedom in general. That being said, Mac OS being Unix-based, it has some similarities with Linux, and there is a package installer which should help you install some of the programs outlined above; find that here: https://brew.sh/

It might also be helpful to look at and understand the concepts behind bash scripting, for which an earlier blog post of mine was written in response to a question from a user on Reddit about how I make some of my still images, and specifically whether it could be done on Windows; find that here (though it repeats some of the software requirements above, the parts on bash scripting and permissions hold true for Linux as well): https://crash-stop.blogspot.com/2021/05/quick-and-dirty-guide-to-using-shell.html and the next blog post on from that, which illustrates the script itself: https://crash-stop.blogspot.com/2021/05/bash-script-for-sonification-images.html
 
 

Wednesday 20 December 2023

ikillerpulse ( requires tomato.py in working directory)

I've been working on this script for the last week or so. What does it do? It takes input video/s, converts them to a format we can use for datamoshing (in an avi container), and divides the video/s into n-second chunks (in a format we choose and in specific resolutions). Depending on what options are chosen when the script starts, it will run rgb shifting, displacement and datamoshing on each chunk (datamoshing is limited to removing i-frames and tomato's pulse mode), randomising file names, then finally put everything back together as one video in the output folder. It also removes files and cleans up after itself afterwards (having moved the source files into a folder named originals).

It works on Linux and Windows, though to run it on Windows (tested on 10 only) you need to change the references to python3 to python, that's all. You will also need tomato.py in the directory you run the script from, and you will need ffmpeg installed.

Copy and paste the script below, name it something interesting with a .sh extension, and run it from bash on Linux or Git Bash on Windows 10.

#! /bin/bash
#which directory are we in ?
h=$(pwd)
echo $h
mkdir $h/originals
mkdir $h/work
mkdir $h/output
mkdir $h/dispt
mkdir $h/work2
mkdir $h/tmpd
echo -n "Source video extension ? : "
read f
#for f in *\ *; do mv "$f" "${f// /_}"; done
echo -n "Time segment (1-10 seconds in format 00) ? : "
read ts
echo -n "Codec to use (1)mpeg4, (2)h264, (3)mpeg2video, (4)h261,(5)theora, (6)Insta, (7)4:3 ? : "
read cd
#use rgb shift
echo -n "Use rgbashift (n=no, rh(red), gh(green), bh(blue)) ? : "
read rgb
#are we killing iframes?
echo -n "Kill Iframes (y/n) ? : "
read kif
#are we datamoshing ?
echo -n "Are we datamoshing (y/n) ? : "
read dm
# get values for pulse mode only
if [ "$dm" == "y" ] || [ -z "$dm" ]
          then
          echo -n "Number of frames to duplicate ? : "
          read fdup
          echo -n "Every n frames ? : "
          read nfr
          
          elif [ "$dm" == "n" ]
           then
           sleep 2
           fi
   
echo -n "Are we using displacement (y/n) ? : "
read disp
if [ "$disp" == "y" ] || [ -z "$disp" ]
          then
          echo -n "Reverse the first video before displacement (y/n) ? : "
          read rev
          fi      


    
#do we need to split long videos into thirty second chunks?
#echo -n  "Cut up long videos (y/n) : ? "
#read cut
#if [ $cut == "y" ] || [ $cut == "null" ]
 #         then
  #        echo -n "Source video extension ? : "
#read f
#echo -n "Time segment (seconds in format 00) ? : "
#read ts2
#find . -maxdepth 1 -name '*.'$f''|while read filename; do echo ${filename};
#ffmpeg -i ${filename} -c copy -map 0 -segment_time 00:00:$ts2 -f segment -reset_timestamps 1 ${filename%.*}%3d.$f
#mv ${filename} $h/originals/
#done
#          elif [ $cut == "n" ]
#           then
#echo -n " Not cutting Long videos "
#fi

#find videos and convert them to codec and avi container and strip metadata with -map_metadata -1
if [ "$cd" == "1" ] || [ -z "$cd" ]
          then
          for i in *.$f; do ffmpeg -i "$i" -map_metadata -1 -vf "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:-1:-1,setsar=1" -c:v mpeg4 -bf 0 -q 0 "${i%.*}.avi";done
          elif [ "$cd" == "2" ]
           then
           for i in *.$f; do ffmpeg -i "$i" -map_metadata -1 -vf "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:-1:-1,setsar=1" -c:v h264 -bf 0 -crf 23 "${i%.*}.avi";done
          elif [ "$cd" == "3" ]
           then
           for i in *.$f; do ffmpeg -i "$i" -map_metadata -1 -vf "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:-1:-1,setsar=1" -c:v mpeg2video -bf 0 -q 0 "${i%.*}.avi";done
          elif [ "$cd" == "4" ]
           then
           for i in *.$f; do ffmpeg -i "$i" -map_metadata -1 -c:v h261 -bf 0 -q 0 -s 352x288 "${i%.*}.avi";done
          elif [ "$cd" == "5" ]
           then
           for i in *.$f; do ffmpeg -i "$i" -map_metadata -1 -vf "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:-1:-1,setsar=1" -c:v libtheora -qscale:v 7 -bf 0  "${i%.*}.avi";done
          elif [ "$cd" == "6" ]
           then
           #for instagram and square aspect ratio first crop and increase to 1080x1080 then re-encode to get correct sar and dar and aspect ratio
           for i in *.$f; do ffmpeg -i "$i" -map_metadata -1 -vf "scale=1080:1080:force_original_aspect_ratio=increase,crop=1080:1080,pad=1080:1080:-1:-1,setsar=1" -c:v mpeg4 -qscale:v 7 -bf 0 "${i%.*}.avi"; done
           #for i in *.$f; do ffmpeg -i "$i" -map_metadata -1 -vf "scale=1080:1080:force_original_aspect_ratio=decrease,pad=1080:1080:-1:-1,setsar=1" -c:v mpeg4 -qscale:v 7 -bf 0  "${i%.*}.avi";done
            elif [ "$cd" == "7" ]
           then
           for i in *.$f; do ffmpeg -i "$i" -map_metadata -1 -vf "scale=800:600:force_original_aspect_ratio=increase,crop=800:600,pad=800:600:-1:-1,setsar=1" -c:v mpeg4 -qscale:v 7 -bf 0 "${i%.*}.avi"; done
          
           fi
mv *.$f $h/originals/
#Convert video/s into $ts time segments
for i in *.avi; do ffmpeg -i "$i" -c copy -map 0 -segment_time 00:00:$ts -f segment -reset_timestamps 1 $h/work/"${i%.*}%3d.avi";
done

#if we are rgb shifting do it now before everything else
if [ "$rgb" == "rh" ] || [ -z "$rgb" ]
         then
         cd $h/work/
         for i in *.avi; do ffmpeg -i "$i" -vf "rgbashift=rh=-30" -pix_fmt yuv420p -q 0 rgb.avi;
         mv rgb.avi "$i";done         
         elif [ "$rgb" == "gh" ]
           then
           cd $h/work/
           for i in *.avi; do ffmpeg -i "$i" -vf "rgbashift=gh=-30" -pix_fmt yuv420p -q 0 rgb.avi;
            mv rgb.avi "$i";done
           elif [ "$rgb" == "bh" ]
           then
           cd $h/work/
           for i in *.avi; do ffmpeg -i "$i" -vf "rgbashift=bh=-30" -pix_fmt yuv420p -q 0 rgb.avi;
            mv rgb.avi "$i";done
           elif [ "$rgb" == "n" ]
           then
           echo -n "No Rgb vaporwave goodness for u then!!!:"
           fi
cd $h/
#move to $h/work/ and randomise files
cp tomato.py $h/work/
cd $h/work/
find . -maxdepth 1 -name '*.avi'|while read filename; do echo ${filename};
mv ${filename} ${RANDOM}${RANDOM}.avi; done
#use tomato or other to remove iframes in those chunks bar first frame
#find . -maxdepth 1 -name '*.avi'|while read filename; do echo ${filename};
#python3 tomato.py -i ${filename}
#rm ${filename}
#done
#NOTE TO SELF DISPLACE BEFORE IKILLER OR DATAMOSH  
find . -maxdepth 1 -name '*.avi'|while read filename; do echo ${filename};
mv ${filename} ${RANDOM}${RANDOM}.avi; done
#are we using displacement as well ?
if [ "$disp" == "y" ] || [ -z "$disp" ]
         then
        
         #chop pulsed avis down into $ts-second chunks for displacement
for i in *.avi; do ffmpeg -i "$i" -c copy -map 0 -segment_time 00:00:$ts -f segment -reset_timestamps 1 -q 0 "${i%.*}%3d.avi";done

cp *.avi $h/dispt/
cd $h/dispt/
find . -maxdepth 1 -name '*.avi'|while read filename; do echo ${filename};
mv ${filename} ${RANDOM}${RANDOM}.avi; done
i=$(ls *.avi | wc -l)
echo $i
while [ $i -gt 0 ]
do
#do displacement
find . -maxdepth 1 -name '*.avi' | head -n 2 | xargs -d $'\n' mv -t $h/work2/
cd $h/work2/
z=1
#rename files to swap1 and swap2
find . -maxdepth 1 -type f -name '*.avi'|while read filename; do echo ${filename};
mv ${filename} swap$z.avi
((z++));
done
#reverse the first video before displacement
if [ "$rev" == "y" ] || [ -z "$rev" ]
         then
ffmpeg -i swap1.avi -filter_complex "reverse" -an -q 0 reverse.avi
mv reverse.avi swap1.avi
fi
#        
ffmpeg -i swap1.avi -i swap2.avi -lavfi '[1]split[x][y],[0][x][y]displace' -q 0 swap3.avi
#tm=$(date +%Y-%m-%d_%H%M%S)
mv swap3.avi $h/tmpd/${RANDOM}.avi
sleep 1
rm swap1.avi
rm swap2.avi
rm swap3.avi
cd $h/dispt/
i=$((i-2))
done
cd $h/tmpd/
mv *.avi $h/work/
cd $h/work/
elif [ "$disp" == "n" ]
           then
echo -n " No displacement then, right so "
fi
#Are we killing Iframes?
if [ "$kif" == "y" ] || [ -z "$kif" ]
         then
         find . -maxdepth 1 -name '*.avi'|while read filename; do echo ${filename};
python3 tomato.py -i ${filename}
rm ${filename}
done
elif [ "$kif" == "n" ]
then
echo -n " No Iframes were harmed in this process :"
fi
#are we datamoshing as well ?
if [ "$dm" == "y" ] || [ -z "$dm" ]
         then
          find . -maxdepth 1 -name '*.avi'|while read filename; do echo ${filename};
python3 tomato.py -i ${filename} -m pulse -c $fdup -n $nfr;

done
          elif [ "$dm" == "n" ]
           then
echo -n " Not datamoshing on to outputting  Long videos "
fi

#find . -maxdepth 1 -name '*.avi'|while read filename; do echo ${filename};
#mv ${filename} ${RANDOM}${RANDOM}.avi; done

find . -maxdepth 1 -name '*.avi'|while read filename; do echo ${filename};
mv ${filename} ${RANDOM}${RANDOM}.avi; done
#concat
d=$(date +%Y-%m-%d_%H%M%S)
printf "file '%s'\n" ./*.avi > mylist.txt ; ffmpeg -f concat -safe 0 -i mylist.txt -c copy $d.avi ; rm mylist.txt
#clean up
cd $h/
rm *.avi
cd $h/work/
mv $d.avi $h/output/
rm *.avi
cd $h/




 

Monday 4 December 2023

Verlustkontrolle - Losing control - getting lost

GL*T©H ::::
LOO$*NG CONTROL _ GETT*NG LO$T


Introduction


Most of us making or studying glitch art have a glitch art origin story, where we notice and then become fascinated by glitch. It might be through machine malfunction, a blue screen of death, broken images recovered from a failed hard-drive, a rhythmically skipping CD, or a mangled image file downloaded from a camera that suddenly changes from being a banal family photo into something new and compelling, or a satellite signal that drops out and reveals a new landscape in which faces melt into each other and narrative is halted and slowly lost. We only notice the technology that surrounds us when that technology works in a way other than expected. Loss can lead to transformation. Glitch art works by understanding, replicating and expanding those transformations, and in that process (and glitch art is very much a process) reveals the fragility of digital media and the cultural and technological assumptions digital media stems from.


Digital art is inherently fragile; to make digital art is to work with loss. We change computers, we change operating systems, equipment fails, and software we use either becomes ‘updated’ so it doesn’t have the same functionality or won’t work unless we ‘upgrade’ to a newer machine. The environment where our work lives is also fragile: websites and social media companies change moderation policies, social media sites may vanish taking down whole swathes of work, I may not keep up hosting fees and my carefully constructed website might disappear. The environment in which we work is in a state of constant flux.


The one constant in making digital art is change and loss, so part of control must be about archiving: not only work, but software and hardware. A lot of what I do as an artist revolves around researching older versions of Linux and how they interact with various hardware and file formats; to that end I collect and maintain an archive of older machines and software, as well as maintaining an archive of my own work and techniques.


There is a paradox at the heart of what I do, I embrace loss within my own processes but try to reduce it in the archiving of my own work – knowing that all digital work tends towards entropy and that ideas of permanence are futile.


A quick discussion of generational loss


Far from being a perfect record, a digital file is often compressed or lossy, subject to bit rot or corruption over time, and susceptible to being lost, overwritten or destroyed by hard-drive failure. Files may live on in online copies on Google Drive or via Instagram, which can itself inadvertently glitch images,


The classic Instagram glitch

Facebook, or whichever social media network survives the next few years, but those copies are often different to the originals due to the differing ways that social media networks compress images (Facebook's compression algorithm is especially egregious), so in effect these are copies of copies of copies, and many people have experimented with repeatedly uploading and downloading images and videos to demonstrate this or work with it.


For instance, to quote from a Gizmodo article from 2015 on Pete Ashton’s ‘I am sitting in stagram’ (with a nod to loss, this article can only be accessed via archive.org's Wayback Machine): https://web.archive.org/web/20160321010334/http://gizmodo.com/heres-what-happens-when-you-repost-the-same-photo-to-in-1685260122


(See also Pete Ashton’s website on this https://art.peteashton.com/sitting-in-stagram/)


‘Artist and photographer Pete Ashton has sped up this gradual disintegration process in his recent project entitled "I am sitting in stagram." He began with a single photo, uploaded it to Instagram, took an unfiltered screenshot and reposted the resulting image, repeating the process 90 times to produce an effect akin to the real-life aging process.’


Pete Ashton – Lucier grid


This work was in turn inspired by the work of composer Alvin Lucier (thus ‘Lucier grid’), specifically his piece ‘I am sitting in a room’.


From the Wikipedia article on that work (https://en.wikipedia.org/wiki/I_Am_Sitting_in_a_Room): ‘The piece features Lucier recording himself narrating a text, and then playing the tape recording back into the room, re-recording it. The new recording is then played back and re-recorded, and this process is repeated. Due to the room's particular size and geometry, certain frequencies of the recording are emphasized while others are attenuated. Eventually the words become unintelligible, replaced by the characteristic resonant frequencies of the room itself.’


There is also a video homage to Lucier's work by Patrick Liddell which illustrates video decay via YouTube upload and download: ‘VIDEO ROOM 1000’ https://www.youtube.com/watch?v=icruGcSsPp0

 

 




There are technical explanations of ‘generational loss’ with jpegs which explain what is happening each time we save a jpeg (https://photo.stackexchange.com/questions/99604/what-factors-cause-or-prevent-generational-loss-when-jpegs-are-recompressed-mu), but Wikipedia's definition is probably better: https://en.wikipedia.org/wiki/Generation_loss


To quote from that article: ‘Generation loss is the loss of quality between subsequent copies or transcodes of data. Anything that reduces the quality of the representation when copying, and would cause further reduction in quality on making a copy of the copy, can be considered a form of generation loss. File size increases are a common result of generation loss, as the introduction of artifacts may actually increase the entropy of the data through each generation.’
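You can simulate generation loss locally without any social network involved; a minimal sketch, assuming ImageMagick (plain re-saving converges quite quickly, so the effect is far more dramatic if, like a screenshot-and-repost cycle, you also resize or crop between generations):

cp original.jpg gen0.jpg
for i in $(seq 1 90); do
convert "gen$((i-1)).jpg" -quality 85 "gen$i.jpg"
done

Ninety re-saves is the same number of round trips Ashton put his photo through on Instagram.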


In their work, Alvin Lucier, Pete Ashton and Patrick Liddell are not treating loss as a bad thing; rather, loss becomes the basis of the work, in a similar way to the use of feedback in analog video art, where a screen and a camera gradually interact to create new, unique, ever-changing work.


Each copying or sharing of a work changes it subtly, in effect creating a new work or, as I’ll talk about later, a remix. Digital work may be inherently fragile but it is also inherently mutable; loss, rather than being an enemy, can also be a useful tool. Control of that process comes via the environment the work lives in, be it storage, the internet, or the work itself, as well as by setting the conditions under which that work can be reused.


Other forms of loss to consider


Where you display your work determines the quality it's seen at. Facebook has notoriously bad image compression and videos uploaded there can be really badly artifacted. You could ask ‘but don’t we want this?’ Yes, but the artifacts we want to see are the ones we generate; even allowing for the happy accident, and working with the internet as a medium, there are limits to how much we want these happy accidents to remake the work. Instagram stubbornly insists on a square format which in itself influences the work we make and post: a landscape image becomes a square selection of part of an image, so loss is inherent in that platform, which also does not allow for posting gifs and has a narrow range of video codec options, while the texture of some work relies on a specific codec for impact (see the differences in texture between h261, h264 or webp/ogv for example). But to mitigate that loss we begin to use the format to our advantage: see Xavier Dallet's baobab users project - https://www.instagram.com/baobab_users/ (which I’ll talk about shortly), where both the format of Instagram and the format of smartphone galleries, screenshotting and cropping are used to their fullest extent, reflecting online culture in a collision of collected or shared images, personal photographs, memes and underground stars, a stream-of-consciousness poetry which is like watching the internet dreaming and thinking.


Generation loss via platforms and within online communities


Glitch art lives on the internet, but websites disappear or appear, content moderation policies change, website ownerships change. Tumblr (arguably one of the birthplaces and incubators of glitch art as we know it now) passes through different hands and new owners ban content they deem to be NSFW; YouTube content disappears in a haze of copyright strikes or is rendered unviewable by constant ad breaks. The job of an artist who exists to any degree online is to manage what happens when content policies change or websites disappear; control of loss is what we do. When Tumblr's policies changed after it was bought by Verizon, leading to mass takedowns of blogs back in 2018 (articles here https://www.businessinsider.in/tech/tumblr-users-are-leaving-in-droves-as-it-bans-nsfw-images-heres-where-theyre-going-instead/articleshow/67002132.cms and here https://www.fastcompany.com/90277836/meet-the-tumblr-refugees-trying-to-safe-its-adult-content-from-oblivion and many other articles), the Internet Archive swung into action to try and save many of these blogs, but much good online work was lost, distorting the space that was Tumblr and the narrative and conversation going on within glitch art.


Those works might exist on the artists' hard drives, but often those making this work don't back it up. One of our primary responsibilities as artists working on the internet must be to archive and keep our own work safe, and also to start building mechanisms to save the work of others that we and the wider community see as important. We can't expect the traditional art world or art historians to do this for us, because they either don't care, aren't looking in the right places or don't know what to look for in the first place. These spaces move so quickly we can't wait for hindsight, academia or others to write and preserve our history for us, because by then it might be gone.


The frankly odd moderation policies on nudity, or on what is nebulously defined as NSFW, on Facebook and Instagram lead to strange situations like shadow banning.


Shadow bans


Wikipedia's definition of a shadow ban:


‘Shadow banning, also called stealth banning, hellbanning, ghost banning, and comment ghosting, is the practice of blocking or partially blocking a user or the user's content from some areas of an online community in such a way that the ban is not readily apparent to the user, regardless of whether the action is taken by an individual or an algorithm. For example, shadow-banned comments posted to a blog or media website would be visible to the sender, but not to other users accessing the site.’


Many glitch artists will use a pseudonym, not something Meta or other platforms approve of; they will try anything to get you to use a real name or interact more fully with the platform, even though you may only be there for that one group, and it soon becomes obvious to a user when shadow banning is happening to their account. Pseudonyms are an important part of glitch art: they allow artists to take on personas for safety reasons, or personas that better reflect their idea of self or gender identity, or to take on a degree of anonymity which separates personhood from work. Glitch art is a great community for allowing us to be who we are rather than our given or assumed roles IRL; shadow banning becomes problematic when it forces us to give up identities we have fought hard to take on as our own.


These kinds of bans make the communities themselves a less vibrant and inclusive place, as much of what is termed NSFW or falls foul of content moderation algorithms is often work by marginalized and more diverse communities, communities such as queer or trans people which have been the bedrock on which movements like glitch art have been built. For them to be excluded or shadow banned from platforms makes movements like ours poorer; if you don't control the platform, the platform controls you, and we lose the richness and diversity of what made our online communities great in the first place. It's the digital equivalent of gentrification.


  To the Fediverse


There are good decentralized (not in the NFT sense) fediverse alternatives to mainstream social media sites, such as PeerTube (a YouTube alternative), Mastodon (a good alternative to Twitter) and Pixelfed (an Instagram alternative). I'm not going to talk about those now, but they are good directions to go in to take back ownership of our own social online presence.


Further research

pixelfed - https://pixelfed.org/

peertube - https://joinpeertube.org/

mastodon - https://joinmastodon.org/


Working with loss


The work I’m currently making embraces loss by purposefully shrinking images down to a tiny fraction of their original resolution and then blowing them back up again, to work with the artifacts created in that transition, and then turning those images into grids which are mirrored, cropped and rebuilt. Originally these works started as an attempt to replicate the methodology of Xavier Dallet and his baobab users project; find that here: https://www.instagram.com/baobab_users/?hl=en
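The basic move is easy to try; a minimal sketch, assuming ImageMagick (filenames and sizes are just examples):

convert input.png -resize 20x20! -scale 640x480! output.png

-resize squashes the image down to 20x20 pixels, and -scale blows it back up without smoothing, so every surviving pixel becomes a hard-edged block to work with.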


Baobab Users



Xavier Dallet, Baobab users, extreme replication

Xavier Dallet, Baobab users, extreme replication

What I found appealing was specifically the grid structure and repetition, which derives from using smartphones to build sets of images and meaning passed between users via the Messenger platform. I wanted to do that on a desktop rather than a smartphone, so I created a set of scripts to replicate it.


Icewm Icons


Reduced to Pbm 10x10pixels


This led me to find a basic set of images, like these icons from the IceWM window manager, as starting points. These are already at a small resolution, typically between 32x32 and 16x16 pixels. I transcoded them from jpg to pbm (black and white bitmap), then made repeating grids.
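Roughly, as a sketch (assuming ImageMagick; the tile count is arbitrary):

mogrify -format pbm *.jpg
montage *.pbm -tile 8x8 -geometry +0+0 grid.png

mogrify converts every icon in the folder to a 1-bit pbm, and montage tiles them edge to edge into a single grid image.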


Wopa wopa wopa


But the process I created, which takes groups of images, grids them, and crops them randomly into smaller and smaller grids within a larger grid, started me exploring what could be done by deliberately shrinking images down almost to the point where they lose meaning and become just material. Doing this, I've found, creates small islands of distortion around which further techniques anchor themselves (something I'd experimented with before, but I'll explain that later).

As an example I can scrape the images from a news website https://www.spiegel.de/kultur/

 

Spiegel kultur page 26/11/2023



The scraped images


Now reduce those images from their original sizes to 80x80px and make them square.


Original size


Use a script to rearrange blocks of pixels in a specific way, then use gmic to displace the originals against that output.
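I use gmic for that displacement step; as a rough stand-in sketch, ffmpeg's displace filter (the same one used elsewhere on this blog) can displace one still against another, assuming both images are the same size and these hypothetical filenames:

ffmpeg -i original.png -i rearranged.png -lavfi '[1]split[x][y],[0][x][y]displace' -frames:v 1 displaced.png

The second image is split into x and y displacement maps which push the pixels of the first around.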

 


 


Or, using a slightly different technique, run those images through a computer with a specific type of graphics card and an old version of Linux, and record the output using a capture card.


The same Image on a computer with ATI graphics card running LinuxMint Bea 2.1


Japan's 'Ghosts' original


Or we could take the video of Japan's 1982 performance of 'Ghosts', play that through the same computer and record the output.



Ghosts versus ATI graphics card and Mint Bea 2.1. In this version I reduced the original video file down from 640x480 pixels to 352x288 and changed the format of the video from h264 mp4 to h261. h261, one of the earliest codecs created for video, is more blocky and more prone to artifacts, and more useful when used in conjunction with an ATI graphics card and Mint Bea 2.1; the smaller the image, the richer the output.
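That reduction step looks roughly like this as a one-liner (hypothetical filenames; ffmpeg's h261 encoder only accepts sizes like 352x288, and the flags mirror the ones in my scripts):

ffmpeg -i ghosts.mp4 -s 352x288 -c:v h261 -bf 0 -q 0 ghosts_h261.avi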


Or I could combine the techniques I talked about earlier, by turning the original Ghosts video into stills, reducing those stills from 640x480 to 20x20px,

 


then displacing every other still against its previous still, and colourising them in the process,

 


then recombining those into a new work, the audio taken from the original, cut up, stretched and glitched (this was 10,000 images).
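The whole chain, as a sketch (hypothetical filenames, assuming ffmpeg and ImageMagick; this version pairs each still with the one before it, and the hue values are arbitrary):

#!/bin/bash
mkdir -p stills out
#turn the video into numbered stills, then shrink each one right down
ffmpeg -i ghosts.mp4 stills/f%05d.png
mogrify -resize 20x20! stills/*.png
prev=""
for f in stills/f*.png; do
if [ -n "$prev" ]; then
#displace this still against the previous one, then colourise the result
ffmpeg -i "$f" -i "$prev" -lavfi '[1]split[x][y],[0][x][y]displace,hue=h=90:s=2' -frames:v 1 "out/$(basename "$f")"
fi
prev="$f"
done
#recombine the displaced stills into a new video
ffmpeg -framerate 25 -i out/f%05d.png -c:v mpeg4 -q 0 rebuilt.avi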


Ghosts part 1

So, just to illustrate the difference without and with the ATI graphics card:


LinuxMint Bea 2.1 without ATI graphics card


LinuxMint Bea 2.1 without ATI graphics card

LinuxMint Bea 2.1 with ATI graphics

LinuxMint Bea 2.1 with ATI graphics

These images all started as 640x480 pixels, but at lower resolutions, like 256x256 or even 80x80, more interesting things happen.


 

As I change resolution, the artifacts created change, each new technique used adding to those artifacts like wind blowing sand around obstacles in real life.

Lower resolutions = more artifacts

Or we could just cut the original images up and attempt to reassemble them, a technique I've started to use a lot more of late. It's useful as a technique in its own right, but also for creating displacement maps in scripts. This script happened by accident: I was actually trying to create a simple cut-up-and-rejoin routine, and to test that I worked on this process, which didn't work how I intended, but I like what it does.



  Output as displacement maps for originals.




Loss or reduction of quality can be a rich breeding ground for artifacts. And discovering and exploring these flaws had a serious effect on how I viewed resolution and quality; the lower the resolution, the more interesting things became for me when working in this way.

These images were made on a Dell Wyse 5010 Dx0D thin client from 2016 with an AMD G-T48E APU. Early versions of this technique used a standard mid-2000s mini-ATX motherboard and an AMD Radeon HD 3650 AGP graphics card (these give different colours and effects). The hardware was not altered in any way and these images are direct output as seen on screen; this effect can also be replicated, with variations, in more modern versions of Linux such as Gnuinos chimera.

To return to the theme of this talk

Part of our work as artists is to fight against the entropy of generational loss. We keep copies of work on hard drives and have backup strategies to keep our originals intact, while all the while these images are being accessed, changed, uploaded, downloaded and remixed. An image on the internet is always changing, its meaning shifting with each iteration, context, or even the device it is viewed on. Whatever we think about copyright or ownership, an image or video or file on the internet is always a remix away from being turned into something else; it is a thing in itself, but it is also a base material.

To work in this environment means giving up a large degree of control. But how then do we retain authorship if anything can be copied and remixed? Essentially we have to change our ideas of what authorship and ownership mean; controlling work online is a losing game unless we take lessons from the open source and free software movement. One way of controlling authorship is by giving others the right to use a work, remix it, or keep a copy of it, as long as those others grant the same rights onwards if they make work based on what I have made: a culture of sharing rather than denying (and also legally binding) which fosters rather than restricts. To gain control we must lose control, and give up on the now absurd idea of the unique or investment work beloved of the old art establishment.

Scarcity value.

As a side note, NFTs also slyly acknowledge this absurdity whilst still charging for files in editions, similar to the way that artist printmakers print limited editions, giving the illusion of ownership via blockchain ledger entries and the notional passing of notional money (cryptocurrency). Most of the ideas that underpin these ‘markets’ reflect old art world practices and fight against the reality of digital art, which is that if it is on a screen or a device or in an accessible file, it can be copied, remixed and shared as part of a cultural commons.

Traditional printmakers may make a limited edition of prints from their own original plates for sale, the value of those prints reflecting their scarcity, the printmaker could still run off a few hundred extra prints which could be sold but that would reduce the artificial scarcity value on which the old art gallery system/establishment works and which controls artists and what is seen and valued culturally and monetarily.

Scarcity value and greed = New Warhols = Art as commodity

In a particularly interesting turn of events, the artist Paul Stephenson tracked down original Andy Warhol acetates and made ‘new’ Warhol works. From the BBC article from 2017 (https://www.bbc.com/news/entertainment-arts-41634496):

‘Stephenson has made new versions of Warhol works by posthumously tracking down the pop artist's original acetates, paints and printer, and recreating the entire process as precisely as possible.’

‘While Warhol's assistants did many parts of the physical work, the artist, who died in 1987, was the only one who worked directly on these acetates, touching up parts of the portraits to prepare them for printing.’

‘Stephenson took the acetates to one of Warhol's original screenprinters in New York, Alexander Heinrici, who offered to help use them to make new paintings.’

An artist's value as a commodity for investment and speculation increases after death, and being able to print new works to satisfy demand seems logical from a money-making perspective, but what does it say about authorship and the ethical state of the art market itself? In a way I can only applaud the artist doing this, as eventually it will reduce the monetary value of all Warhol works (and Warhol would probably be quite amused by, and approving of, this making of ‘new’ works). But old-world copyright laws might have something to say about it; scarcity value and ownership could be viewed as anti-cultural and anti-society.

To be a digital artist also means acknowledging the absurdity of the ‘original’ unique work, and questioning interaction with the old art world, the need for commercial galleries, or even the notion of curatorship, as in the online communities where we work, such as the glitch artists collective network, we become

Self organising – self curating

See festivals such as The Wrong Biennial, Glitch Art is Dead, Fu:bar and Glitch Art Brazil.

There is parallel self-curating and self-organizing within the NFT space, but my take is that, though laudable from a community aspect, ultimately these are only seeking legitimacy from, or aping, pre-existing structures within the traditional art market; they are less about creating or reflecting a shared cultural commons and more about money, reputation and the next generation of investible art stars.

 

Creative Commons rejects the notion of scarcity and fosters abundance.

I share my work under the Creative Commons licence CC BY-NC-SA 4.0 (find the legal code here: https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode.en#s3).


[Image: Creative Commons licence CC BY-NC-SA 4.0 – terms and conditions]

This aspect of sharing and remixing is fundamental to glitch art (though not many artists use Creative Commons licences, there is an unwritten understanding that sharing and remixing is good), both in source material and final work (in that any digital work can be final), and in the sharing of techniques, software and scripts.

My work is made almost entirely with free or libre software, which relies on an open and free software ecosystem; it's one I strongly believe in and advocate for, and Creative Commons goes hand in hand with it – libre software being founded on the principles of the Free Software Foundation and its list of four software freedoms:

0) The freedom to run the program as you wish, for any purpose (freedom 0).
1) The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
2) The freedom to redistribute copies so you can help your neighbor (freedom 2).
3) The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.

See also the definition of free cultural works here: https://freedomdefined.org/Definition

The four freedoms applied to cultural works

1) the freedom to use the work and enjoy the benefits of using it

2) the freedom to study the work and to apply knowledge acquired from it

3) the freedom to make and redistribute copies, in whole or in part, of the information or expression

4) the freedom to make changes and improvements, and to distribute derivative works

To sum up: control of authorship (other than attribution and crediting the original source) does not exist within digital art, other than through peer pressure, which often states it's okay to remix but wrong to steal and claim work as your own. We could fight to control our work and our ownership of an image or a video or a music file, a document, whatever, but ultimately there is a copy of it somewhere that someone has made for their own use and purposes. As I believe in remix culture and the idea that culture and art should be available, that is a problem only if we adhere to the old ideas of copyright and ownership – if a digital file is not a thing, who can truly own what is intangible?

This attitude also reflects a culture of realism: most of us who make art will not make money from it, or even be able to support ourselves through it, instead taking jobs outside of art, in teaching or research, looking for funding through grants, residencies and the like, or working completely outside of the art world. But that also makes us free to make the work we want to. It does not make the work any less valid – far from it. It allows us to work freely outside of old establishments and paradigms which no longer understand or serve us, free to create newer, more relevant structures which operate along the lines of a culture of abundance, sharing and more open access to the means of artistic production and dissemination. It allows us to build a cultural commons where work has value for what it is rather than for how much it is worth or some received assignment of value.

Because, seriously, who goes to galleries? Contrast that with the number of people who wouldn't be seen dead in a gallery but will quite happily look at work on their devices. Definitions of art have changed with the onset of the internet; the audience has grown, and often the audience makes art in response to what they see rather than just passively receiving it. We, as the makers and consumers, define what art is, rather than old institutions and vested interests – it is an alive thing rather than a dead thing to be studied and dissected.

Towards a cultural commons

Lawrence Lessig is the originator of Creative Commons, and from Creative Commons we can infer the idea of a cultural commons, where art, culture and learning are shared and valued without ideas of scarcity value and ownership – something which already exists at the heart of glitch art, where ideas, techniques and work are shared freely and openly. From Lessig we also get the formulation of ideas and discussion around remix culture, through his book ‘Remix: Making Art and Commerce Thrive in the Hybrid Economy’.

What is remix culture, and why do I say everything is a remix? If we talk about generational loss then the remix element is obvious: a file is physically copied, recopied and remixed by transmission. In terms of culture itself, it is the act of taking a pre-existing work or idea and making something new with it. In glitch art the use of archive.org as a source of material is near ubiquitous, much of it out of copyright or public domain (it's safer to remix a work that's public domain), as are photo libraries like Pixabay. Beyond the inadvertent remixes engendered by device or generational loss, we could also consider collage art a precursor to glitch art, in that pre-existing works are used and re-contextualised through deconstruction and the reassigning of meaning through proximity.
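
As a concrete aside, generational loss is easy to demonstrate with ImageMagick. Below is a minimal, hypothetical sketch (assuming ImageMagick is installed and an input.jpg sits in the current folder – both names are placeholders, alter to suit): it re-encodes the same JPEG over and over, so each generation inherits and compounds the compression artifacts of the one before. As with my other scripts, very much a sketch, use at your own risk.

#!/bin/bash
#hypothetical generational loss demo - assumes imagemagick and an input.jpg
#each pass re-encodes the previous generation at quality 60, compounding the loss
cp input.jpg gen.jpg
for i in $(seq 1 30)
do
convert gen.jpg -quality 60 gen.jpg
cp gen.jpg generation_$i.jpg
done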

Remix culture is at the heart of any art going back to the Renaissance or further – each generation looks at a previous generation's work and borrows themes, iconography, techniques, or just plain copies. As Lawrence Lessig states:

‘I’ve described what I mean by remix by describing a bit of its practice. Whether text or beyond text, remix is collage; it comes from combining elements of RO (read-only) culture; it succeeds by leveraging the meaning created by the reference to build something new. There are two goods that remix creates, at least for us, or for our kids, at least now. One is the good of community. The other is education.’

And:

‘Remixes happen within a community of remixers. In the digital age, that community can be spread around the world. Members of that community create in part for one another. They are showing one another how they can create, as kids on a skateboard are showing their friends how they can create. That showing is valuable, even when the stuff produced is not.’

Glitch art is fundamentally a community: people sharing their latest technique or work, trying to one-up each other – but it is also an inviting community in a way that traditional art communities often are not.

In the example I showed towards the beginning of this talk I scraped the images from a website, downloaded the probably copyrighted images and turned them into something new. Who then owns these images, and at what point does a remix become a new work? If everything can be looked at as source material, who owns it? My answer is that we all do. If we look at cultural hoarding and gatekeeping in the old art world as a cultural loss to all of us, then remixing, allied with the free software movement and Creative Commons, can be seen as cultural gain, expanding as it does the access to tools, audience and participation. But this also implies a duty on us as artists to further participation, inclusivity, curatorship and opportunity.

But we must also be wary of creeping academicisation – the need to study, dissect and classify within terms the establishment understands and can use to co-opt, curate and fit within an old art world narrative. We must forcefully resist this by writing our own histories, our own studies and our own narratives – work which is already under way.

Getting back to the theme, beyond the philosophical implications

With glitch art we often work in ways that turn the idea of control on its head. If control means retaining everything – physical objects, all the information, all the detail – then glitch art reflects the idea that loss is an integral part of existence and that trying to hold on to anything is a losing game. Digital art exists in an inherently fragile ecosystem: without power it doesn't exist, or rather it does, but as the inaccessible content of hard drives or remote servers; without power it cannot be seen, and without networks it cannot propagate. I make work which realistically will sooner or later disappear, and probably more completely than previous generations of artists' work.

We may be one of the first generations of artists in recent history to leave no trace of our work other than the physical and broken devices on which it was made, and an oral history of practices which gets lost or changed over time. It is therefore of primary importance that we archive our work individually and collectively (taking control of those processes) and also record an oral history of practices and timelines, keeping control of that process as well. Asking the traditional art establishment to critique, write about or show glitch art, or digital art in general, risks at the very least misunderstanding, and at worst misrepresentation and caricature. We must also control the narrative around what we do.

Control the means of production

Control the means of digital production: understand your tools, maintain your hardware, become good at managing your files, don't rely on big tech to keep a record of your work, fight the scourge of bit rot and dead links, and record our shared history.
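
On fighting bit rot specifically, below is a minimal, hypothetical sketch (assuming the sha256sum tool from GNU coreutils and a folder of work called archive – alter the folder name to suit). The first run records a checksum for every file; any later run reports files that no longer match, i.e. files that have changed or silently rotted. Again, a work in progress, use at your own risk.

#!/bin/bash
#hypothetical bit rot checker - assumes sha256sum and a folder called archive
cd archive
if [ ! -f checksums.sha256 ]
then
#first run - record a checksum for every file
find . -type f ! -name checksums.sha256 -exec sha256sum {} + > checksums.sha256
echo "checksums recorded"
else
#later runs - print only the files that fail verification
sha256sum -c --quiet checksums.sha256
fi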

I neither use nor endorse Apple products. No proprietary or Apple hardware/software was used in the preparation of this talk, which was written on completely libre operating systems (Trisquel, Gnuinos and PureOS) and partially written on a libre-booted Dell Latitude E6400.
CC BY-NC-SA 4.0