Wednesday, 20 December 2023

ikillerpulse (requires tomato.py in the working directory)

I've been working on this script for the last week or so. What does it do? It takes one or more input videos, converts them to a format we can use for datamoshing (in an AVI container), and divides each video into n-second chunks (in a codec we choose and at specific resolutions). Depending on the options chosen when the script starts, it will run RGB shifting, displacement and datamoshing on each chunk (datamoshing is limited to removing I-frames and tomato's pulse mode), randomising file names, then finally putting everything back together as one video in the output folder. It also removes files and cleans up after itself afterwards (having moved the source files into a folder named originals).

It works on Linux and Windows, though to run it on Windows (tested on 10 only) you need to change the references to python3 to python – that's all. You will also need tomato.py in the directory you run the script from, and ffmpeg installed.
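
If you'd rather not edit the script by hand, a one-liner does the substitution (a sketch, assuming you've saved the script as ikillerpulse.sh):

sed -i 's/python3/python/g' ikillerpulse.sh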

Copy and paste the script below, name it something interesting with a .sh extension, and run it from bash on Linux or Git Bash on Windows 10.

#!/bin/bash
#which directory are we in ?
h=$(pwd)
echo $h
mkdir $h/originals
mkdir $h/work
mkdir $h/output
mkdir $h/dispt
mkdir $h/work2
mkdir $h/tmpd
echo -n "Source video extension ? : "
read f
#for f in *\ *; do mv "$f" "${f// /_}"; done
echo -n "Time segment (1-10 seconds in format 00) ? : "
read ts
echo -n "Codec to use (1)mpeg4, (2)h264, (3)mpeg2video, (4)h261,(5)theora, (6)Insta, (7)4:3 ? : "
read cd
#use rgb shift
echo -n "Use rgbashift (n=no, rh(red), gh(green), bh(blue)) ? : "
read rgb
#are we killing iframes?
echo -n "Kill Iframes (y/n) ? : "
read kif
#are we datamoshing ?
echo -n "Are we datamoshing (y/n) ? : "
read dm
# get values for pulse mode only
#quote the variables so empty input doesn't break the test; empty input defaults to yes
if [ "$dm" == "y" ] || [ -z "$dm" ]
          then
          echo -n "Number of frames to duplicate ? : "
          read fdup
          echo -n "Every n frames ? : "
          read nfr

          elif [ "$dm" == "n" ]
           then
           sleep 2
           fi
   
echo -n "Are we using displacment (y/n) ? : "
read disp
if [ "$disp" == "y" ] || [ -z "$disp" ]
          then
          echo -n "Reverse the first video before displacement (y/n) ? : "
          read rev
          fi      


    
#do we need to split long videos into thirty second chunks?
#echo -n  "Cut up long videos (y/n) : ? "
#read cut
#if [ $cut == "y" ] || [ $cut == "null" ]
 #         then
  #        echo -n "Source video extension ? : "
#read f
#echo -n "Time segment (seconds in format 00) ? : "
#read ts2
#find . -maxdepth 1 -name '*.'$f''|while read filename; do echo ${filename};
#ffmpeg -i ${filename} -c copy -map 0 -segment_time 00:00:$ts2 -f segment -reset_timestamps 1 ${filename%.*}%3d.$f
#mv ${filename} $h/originals/
#done
#          elif [ $cut == "n" ]
#           then
#echo -n " Not cutting Long videos "
#fi

#find videos and convert them to codec and avi container and strip metadata with -map_metadata -1
if [ "$cd" == "1" ] || [ -z "$cd" ]
          then
          for i in *.$f; do ffmpeg -i "$i" -map_metadata -1 -vf "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:-1:-1,setsar=1" -c:v mpeg4 -bf 0 -q 0 "${i%.*}.avi";done
          elif [ "$cd" == "2" ]
           then
           for i in *.$f; do ffmpeg -i "$i" -map_metadata -1 -vf "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:-1:-1,setsar=1" -c:v h264 -bf 0 -crf 23 "${i%.*}.avi";done
          elif [ "$cd" == "3" ]
           then
           for i in *.$f; do ffmpeg -i "$i" -map_metadata -1 -vf "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:-1:-1,setsar=1" -c:v mpeg2video -bf 0 -q 0 "${i%.*}.avi";done
          elif [ "$cd" == "4" ]
           then
           for i in *.$f; do ffmpeg -i "$i" -map_metadata -1 -c:v h261 -bf 0 -q 0 -s 352x288 "${i%.*}.avi";done
          elif [ "$cd" == "5" ]
           then
           for i in *.$f; do ffmpeg -i "$i" -map_metadata -1 -vf "scale=1280:720:force_original_aspect_ratio=decrease,pad=1280:720:-1:-1,setsar=1" -c:v libtheora -qscale:v 7 -bf 0  "${i%.*}.avi";done
          elif [ "$cd" == "6" ]
           then
           #for instagram and square aspect ratio: first crop and increase to 1080x1080 then re-encode to get correct sar and dar and aspect ratio
           for i in *.$f; do ffmpeg -i "$i" -map_metadata -1 -vf "scale=1080:1080:force_original_aspect_ratio=increase,crop=1080:1080,pad=1080:1080:-1:-1,setsar=1" -c:v mpeg4 -qscale:v 7 -bf 0 "${i%.*}.avi"; done
           #for i in *.$f; do ffmpeg -i "$i" -map_metadata -1 -vf "scale=1080:1080:force_original_aspect_ratio=decrease,pad=1080:1080:-1:-1,setsar=1" -c:v mpeg4 -qscale:v 7 -bf 0  "${i%.*}.avi";done
          elif [ "$cd" == "7" ]
           then
           for i in *.$f; do ffmpeg -i "$i" -map_metadata -1 -vf "scale=800:600:force_original_aspect_ratio=increase,crop=800:600,pad=800:600:-1:-1,setsar=1" -c:v mpeg4 -qscale:v 7 -bf 0 "${i%.*}.avi"; done
           fi
mv *.$f $h/originals/
#Convert video/s into $ts time segments
for i in *.avi; do ffmpeg -i "$i" -c copy -map 0 -segment_time 00:00:$ts -f segment -reset_timestamps 1 $h/work/"${i%.*}%3d.avi";
done

#if we are rgb shifting do it now before everything else
if [ "$rgb" == "rh" ] || [ -z "$rgb" ]
         then
         cd $h/work/
         for i in *.avi; do ffmpeg -i "$i" -vf "rgbashift=rh=-30" -pix_fmt yuv420p -q 0 rgb.avi;
         mv rgb.avi "$i";done
         elif [ "$rgb" == "gh" ]
           then
           cd $h/work/
           for i in *.avi; do ffmpeg -i "$i" -vf "rgbashift=gh=-30" -pix_fmt yuv420p -q 0 rgb.avi;
            mv rgb.avi "$i";done
           elif [ "$rgb" == "bh" ]
           then
           cd $h/work/
           for i in *.avi; do ffmpeg -i "$i" -vf "rgbashift=bh=-30" -pix_fmt yuv420p -q 0 rgb.avi;
            mv rgb.avi "$i";done
           elif [ "$rgb" == "n" ]
           then
           echo -n "No Rgb vaporwave goodness for u then!!!:"
           fi
cd $h/
#move to $h/work/ and randomise files
cp tomato.py $h/work/
cd $h/work/
find . -maxdepth 1 -name '*.avi'|while read filename; do echo ${filename};
mv ${filename} ${RANDOM}${RANDOM}.avi; done
#use tomato or other to remove iframes in those chunks bar first frame
#find . -maxdepth 1 -name '*.avi'|while read filename; do echo ${filename};
#python3 tomato.py -i ${filename}
#rm ${filename}
#done
#NOTE TO SELF DISPLACE BEFORE IKILLER OR DATAMOSH  
find . -maxdepth 1 -name '*.avi'|while read filename; do echo ${filename};
mv ${filename} ${RANDOM}${RANDOM}.avi; done
#are we using displacement as well ?
if [ "$disp" == "y" ] || [ -z "$disp" ]
         then
        
         #chop pulsed avis' down to 3 seconds for displacement
for i in *.avi; do ffmpeg -i "$i" -c copy -map 0 -segment_time 00:00:$ts -f segment -reset_timestamps 1 -q 0 "${i%.*}%3d.avi";done

cp *.avi $h/dispt/
cd $h/dispt/
find . -maxdepth 1 -name '*.avi'|while read filename; do echo ${filename};
mv ${filename} ${RANDOM}${RANDOM}.avi; done
i=$(ls *.avi | wc -l)
echo $i
while [ $i -gt 0 ]
do
#do displacement
find . -maxdepth 1 -name '*.avi' | head -n 2 | xargs -d $'\n' mv -t $h/work2/
cd $h/work2/
z=1
#rename files to swap1 and swap2
find . -maxdepth 1 -type f -name '*.avi'|while read filename; do echo ${filename};
mv ${filename} swap$z.avi
((z++));
done
#reverse the first video before displacement
if [ "$rev" == "y" ] || [ -z "$rev" ]
         then
ffmpeg -i swap1.avi -filter_complex "reverse" -an -q 0 reverse.avi
mv reverse.avi swap1.avi
fi
#        
ffmpeg -i swap1.avi -i swap2.avi -lavfi '[1]split[x][y],[0][x][y]displace' -q 0 swap3.avi
#tm=$(date +%Y-%m-%d_%H%M%S)
mv swap3.avi $h/tmpd/${RANDOM}.avi
sleep 1
rm swap1.avi
rm swap2.avi
rm swap3.avi
cd $h/dispt/
i=$((i-2))
done
cd $h/tmpd/
mv *.avi $h/work/
cd $h/work/
elif [ "$disp" == "n" ]
           then
echo -n " No displacement then, right so "
fi
#Are we killing Iframes?
if [ "$kif" == "y" ] || [ -z "$kif" ]
         then
         find . -maxdepth 1 -name '*.avi'|while read filename; do echo ${filename};
python3 tomato.py -i ${filename}
rm ${filename}
done
elif [ "$kif" == "n" ]
then
echo -n " No Iframes were harmed in this process :"
fi
#are we datamoshing as well ?
if [ "$dm" == "y" ] || [ -z "$dm" ]
         then
          find . -maxdepth 1 -name '*.avi'|while read filename; do echo ${filename};
python3 tomato.py -i ${filename} -m pulse -c $fdup -n $nfr;

done
          elif [ "$dm" == "n" ]
           then
echo -n " Not datamoshing, on to output "
fi

#find . -maxdepth 1 -name '*.avi'|while read filename; do echo ${filename};
#mv ${filename} ${RANDOM}${RANDOM}.avi; done

find . -maxdepth 1 -name '*.avi'|while read filename; do echo ${filename};
mv ${filename} ${RANDOM}${RANDOM}.avi; done
#concat
d=$(date +%Y-%m-%d_%H%M%S)
printf "file '%s'\n" ./*.avi > mylist.txt ; ffmpeg -f concat -safe 0 -i mylist.txt -c copy $d.avi ; rm mylist.txt
#clean up
cd $h/
rm *.avi
cd $h/work/
mv $d.avi $h/output/
rm *.avi
cd $h/

Monday, 4 December 2023

Verlustkontrolle - Loosing control - getting lost

GL*T©H ::::
LOO$*NG CONTROL _ GETT*NG LO$T


Introduction


Most of us making or studying glitch art have a glitch art origin story where we notice and then become fascinated by glitch. It might be through machine malfunction: a blue screen of death, broken images recovered from a failed hard drive, a rhythmically skipping CD, a mangled image file downloaded from a camera that suddenly changes from a banal family photo into something new and compelling, or a satellite signal that drops out and reveals a new landscape in which faces melt into each other and narrative is halted and slowly lost. We only notice the technology that surrounds us when that technology works in a way other than expected. Loss can lead to transformation. Glitch art works by understanding, replicating and expanding those transformations, and in that process (and glitch art is very much a process) reveals the fragility of digital media and the cultural and technological assumptions digital media stems from.


Digital art is inherently fragile; to make digital art is to work with loss. We change computers, we change operating systems, equipment fails, software we use either becomes 'updated' so it doesn't have the same functionality or won't work unless we 'upgrade' to a newer machine. The environment where our work lives is also fragile: websites and social media companies change moderation policies, social media sites may vanish taking down whole swathes of work, I may not keep up hosting fees and my carefully constructed website might disappear. The environment in which we work is in a state of constant flux.


The one constant in making digital art is change and loss, so part of control must be about archiving – not only work, but software and hardware. A lot of what I do as an artist revolves around researching older versions of Linux and how they interact with various hardware and file formats; to that end I collect and maintain an archive of older machines and software, as well as maintaining an archive of my own work and techniques.


There is a paradox at the heart of what I do: I embrace loss within my own processes but try to reduce it in the archiving of my own work – knowing that all digital work tends towards entropy and that ideas of permanence are futile.


A quick discussion of generational loss


Far from being a perfect record, a digital file is often compressed or lossy, subject to bit rot or corruption over time, and susceptible to being lost, overwritten or destroyed by hard drive failure. Files may live on in online copies on Google Drive or via Instagram (which can itself inadvertently glitch images),


The classic Instagram glitch

or Facebook, or whichever social media network survives the next few years. But those copies are often different from the originals due to the differing ways that social media networks compress images – Facebook's compression algorithm is especially egregious – so in effect these are copies of copies of copies, and many people have experimented with repeatedly uploading and downloading images and videos to demonstrate this or work with it.


For instance, to quote from a Gizmodo article from 2015 on Pete Ashton's 'I am sitting in stagram' (with a nod to loss, this article can only be accessed via archive.org's Wayback Machine): https://web.archive.org/web/20160321010334/http://gizmodo.com/heres-what-happens-when-you-repost-the-same-photo-to-in-1685260122


(See also Pete Ashton’s website on this https://art.peteashton.com/sitting-in-stagram/)


'Artist and photographer Pete Ashton has sped up this gradual disintegration process in his recent project entitled "I am sitting in stagram." He began with a single photo, uploaded it to Instagram, took an unfiltered screenshot and reposted the resulting image, repeating the process 90 times to produce an effect akin to the real-life aging process.'


Pete Ashton – Lucier grid


This work in turn was inspired by the work of composer Alvin Lucier (thus 'Lucier grid'), specifically his piece 'I Am Sitting in a Room'.


From the Wikipedia article on that work (https://en.wikipedia.org/wiki/I_Am_Sitting_in_a_Room): 'The piece features Lucier recording himself narrating a text, and then playing the tape recording back into the room, re-recording it. The new recording is then played back and re-recorded, and this process is repeated. Due to the room's particular size and geometry, certain frequencies of the recording are emphasized while others are attenuated. Eventually the words become unintelligible, replaced by the characteristic resonant frequencies of the room itself.'


There is also a video homage to Lucier's work by Patrick Liddel which illustrates video decay via YouTube upload and download – 'VIDEO ROOM 1000': https://www.youtube.com/watch?v=icruGcSsPp0

There are technical explanations of 'generational loss' with jpegs which describe what happens each time we save a jpeg (https://photo.stackexchange.com/questions/99604/what-factors-cause-or-prevent-generational-loss-when-jpegs-are-recompressed-mu), but Wikipedia's definition is probably better: https://en.wikipedia.org/wiki/Generation_loss


To quote from that article: 'Generation loss is the loss of quality between subsequent copies or transcodes of data. Anything that reduces the quality of the representation when copying, and would cause further reduction in quality on making a copy of the copy, can be considered a form of generation loss. File size increases are a common result of generation loss, as the introduction of artifacts may actually increase the entropy of the data through each generation.'
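
A crude way to replicate that kind of recompression experiment at the command line is simply to resave a jpeg in a loop – a sketch assuming ImageMagick is installed; the quality value and the 90 iterations (echoing Ashton's count) are illustrative:

cp original.jpg gen.jpg
for n in $(seq 1 90); do
  #each resave re-runs the lossy jpeg encode, compounding the artifacts
  convert gen.jpg -quality 85 tmp.jpg
  mv tmp.jpg gen.jpg
done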


In their work, Alvin Lucier, Pete Ashton and Patrick Liddel are not treating loss as a bad thing; rather, loss becomes the basis of the work, in a similar way to the use of feedback in analog video art, where a screen and a camera gradually interact to create new, unique, ever-changing work.


Each copying or sharing of a work changes it subtly, in effect creating a new work – or, as I'll talk about later, a remix. Digital work may be inherently fragile but it is also inherently mutable; loss, rather than being an enemy, can also be a useful tool. Control of that process is via the environment the work lives in – be it storage, the internet, or the work itself – as well as setting the conditions under which that work can be reused.


Other forms of loss to consider


Where you display your work determines the quality it's seen at. Facebook has notoriously bad image compression, and videos uploaded there can be really badly artifacted. You could ask 'but don't we want this?' – yes, but the artifacts we want to see are the ones we generate; though allowing for the happy accident and working with the internet as a medium, there are limits to how much we want these happy accidents to remake the work. Instagram stubbornly insists on a square format, which in itself influences the work we make and post – a landscape image becomes a square selection of part of an image, so loss is inherent in that platform, which also does not allow for posting gifs and has a narrow range of video codec options, the texture of some work relying on a specific codec for impact or texture (see the differences in texture between h261, h264 or webp/ogv for example). But to mitigate that loss we begin to use that format to our advantage – see the baobab_users project (https://www.instagram.com/baobab_users/, which I'll talk about shortly), where both the format of Instagram and the format of smartphone galleries, screenshotting and cropping are used to their fullest extent, reflecting online culture in a collision of collected or shared images, personal photographs, memes and underground stars – a stream-of-consciousness poetry which is like watching the internet dreaming and thinking.


Generation loss via platforms and within online communities


Glitch art lives on the internet – but websites disappear or appear, content moderation policies change, website ownerships change. Tumblr (arguably one of the birthplaces and incubators of glitch art as we know it now) passes through different hands and new owners ban content they deem to be NSFW; YouTube content disappears in a haze of copyright strikes or is rendered unviewable by constant ad breaks. The job of an artist who exists to any degree online is to manage what happens when content policies change or websites disappear – control of loss is what we do. When Tumblr policies changed after it was bought by Verizon, leading to mass takedowns of blogs back in 2018 (articles here https://www.businessinsider.in/tech/tumblr-users-are-leaving-in-droves-as-it-bans-nsfw-images-heres-where-theyre-going-instead/articleshow/67002132.cms and here https://www.fastcompany.com/90277836/meet-the-tumblr-refugees-trying-to-safe-its-adult-content-from-oblivion and many other articles), the Internet Archive swung into action to try and save many of these blogs, but much good online work was lost, distorting the space that was Tumblr and the narrative and conversation going on within glitch art.


Those works might exist on the artists' hard drives, but often those making this work don't back it up. One of our primary responsibilities as artists working on the internet must be to archive and keep our own work safe, and also to start building mechanisms to save the work of others that we and the wider community see as important. We can't expect the traditional art world or art historians to do this for us, because they either don't care, aren't looking in the right places or don't know what to look for in the first place – these spaces move so quickly we can't wait for hindsight, academia or others to write and preserve our history for us, because by then it might be gone.


The frankly odd moderation policies on nudity, or on whatever is nebulously defined as NSFW, on Facebook and Instagram lead to strange situations like shadow banning.


Shadow bans


Wikipedia's definition of a shadow ban:


'Shadow banning, also called stealth banning, hellbanning, ghost banning, and comment ghosting, is the practice of blocking or partially blocking a user or the user's content from some areas of an online community in such a way that the ban is not readily apparent to the user, regardless of whether the action is taken by an individual or an algorithm. For example, shadow-banned comments posted to a blog or media website would be visible to the sender, but not to other users accessing the site.'


Many glitch artists use a pseudonym – not something Meta or other platforms approve of, and they will try anything to get you to use a real name or interact more fully with the platform, even though you may only be there for that one group; it soon becomes obvious to a user when shadow banning is happening to their account. Pseudonyms are an important part of glitch art: they allow artists to take on personas for safety reasons, or personas that better reflect their idea of self or gender identity, or to take on a degree of anonymity which separates personhood from work. Glitch art is a great community for allowing us to be who we are rather than our given or assumed roles irl; shadow banning becomes problematic when it forces us to give up identities we have fought hard to take on as our own.


These kinds of bans make the communities themselves a less vibrant and inclusive place, as much of what is termed NSFW or falls foul of content moderation algorithms is often work by marginalized and more diverse communities – communities such as queer and trans people, which have been the bedrock on which movements like glitch art have been built. For them to be excluded or shadow banned from platforms makes movements like ours poorer – if you don't control the platform, the platform controls you, and we lose the richness and diversity of what made our online communities great in the first place. It's the digital equivalent of gentrification.


To the Fediverse


There are good decentralized (not in the NFT sense) fediverse alternatives to mainstream social media sites, such as PeerTube (a YouTube alternative), Mastodon (a Twitter alternative) and Pixelfed (an Instagram alternative). I'm not going to talk about those now, but they are good directions to go in to take back ownership of our own social online presence.


Further research

pixelfed - https://pixelfed.org/

peertube - https://joinpeertube.org/

mastodon - https://joinmastodon.org/


Working with loss


The work I’m currently making embraces loss by purposefully shrinking images down to a tiny fraction of their original resolution and then blowing them back up again to work with the artifacts created in that transition and then turning those images into grids which are mirrored and cropped and rebuilt. Originally these works started as an attempt to replicate the methodology of Xavier Dallet and his baobab users project find that here - https://www.instagram.com/baobab_users/?hl=en


Baobab Users



Xavier Dallet, Baobab users, extreme replication


Specifically, I found the grid structure and repetition appealing, which derives from using smartphones to build sets of images and meaning passed between users via the Messenger platform. I wanted to do that on a desktop rather than a smartphone, so I created a set of scripts to replicate it.


Icewm Icons


Reduced to Pbm 10x10pixels


That led me to a basic set of images, like these icons from the IceWM window manager, as starting points. These are already at a small resolution, typically 32x32 or 16x16 pixels – I transcoded them from jpg to pbm (black and white bitmap) then made repeating grids.
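
For anyone wanting to try this, the transcode and grid steps can be sketched with ImageMagick (the sizes and tile counts here are illustrative, not the exact values I used):

#write each jpg out as a 1-bit pbm, shrunk to icon size
for i in *.jpg; do convert "$i" -resize 16x16 "${i%.*}.pbm"; done
#tile the results into a repeating grid
montage *.pbm -tile 10x10 -geometry +0+0 grid.png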


Wopa wopa wopa


But the process I created – which takes groups of images, grids them, and crops them randomly into smaller and smaller grids within a larger grid – started me exploring what could be done by deliberately shrinking images down almost to the point where they lose meaning and become just material. Doing this, I've found, creates small islands of distortion around which further techniques anchor themselves (something I'd experimented with before, but I'll explain that later).

As an example I can scrape the images from a news website https://www.spiegel.de/kultur/

 

Spiegel kultur page 26/11/2023



The scraped images


Now reduce those images from their original sizes to 80x80px and make them square.
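
Something like this, reusing the same ffmpeg filter style as the script above (a sketch – file names are illustrative):

for i in *.jpg; do ffmpeg -i "$i" -vf "scale=80:80:force_original_aspect_ratio=increase,crop=80:80" "${i%.*}_80.png"; done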


Original size


Use a script to rearrange blocks of pixels in a specific way, then use gmic to displace the originals against that output.
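
I use gmic for that displacement step, but the same idea can be sketched with ffmpeg's displace filter (the one ikillerpulse above uses); file names here are illustrative:

#push the original's pixels around using the rearranged copy as the map
ffmpeg -i original.png -i rearranged.png -lavfi "[1]split[x][y],[0][x][y]displace" displaced.png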

Or, using a slightly different technique, run those images through a computer with a specific type of graphics card running an old version of Linux, and record the output using a capture card.


The same Image on a computer with ATI graphics card running LinuxMint Bea 2.1


Japans’ ghosts original


Or we could take the original video of Japan's 1982 performance of 'Ghosts', play it through the same computer and record the output.



Ghosts versus ATI graphics card and Mint Bea 2.1

In this version I reduced the original video file from 640x480 pixels down to 352x288 and changed the format of the video from h264 mp4 to h261. h261, one of the earliest codecs created for video, is more blocky and more prone to artifacts, and more useful when used in conjunction with an ATI graphics card and Mint Bea 2.1 – the smaller the image, the richer the output.
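
The conversion itself is a one-liner (a sketch assuming an input file ghosts.mp4 – note that ffmpeg's h261 encoder only accepts a few fixed frame sizes, 352x288 and 176x144 among them):

ffmpeg -i ghosts.mp4 -s 352x288 -c:v h261 -qscale:v 5 ghosts_h261.avi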


Or I could combine the techniques I talked about earlier, by turning the original Ghosts video into stills, reducing those stills from 640x480 to 20x20px,

then displacing every other still against its previous still, and colourising them in the process,

Then recombining those into a new work, the audio taken from the original, cut up, stretched and glitched (this was 10,000 images).
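
Sketched as a script, that stills pipeline looks something like this – the file names, frame rate and hue shift are illustrative assumptions, and the hue filter stands in for whatever colourising step you prefer:

#!/bin/bash
#explode the source video into stills, shrunk to 20x20 then blown
#back up so the scaling artifacts become the texture
mkdir -p stills out
ffmpeg -i ghosts.mp4 -vf "scale=20:20,scale=640:480:flags=neighbor" stills/%05d.png
#displace every even-numbered still against the one before it,
#colourising in the same pass; copy the others through unchanged
cd stills
n=1
while [ -f "$(printf '%05d.png' $n)" ]; do
  curr=$(printf '%05d.png' $n)
  if [ $((n % 2)) -eq 0 ]; then
    prev=$(printf '%05d.png' $((n-1)))
    ffmpeg -y -i "$curr" -i "$prev" -lavfi "[1]split[x][y],[0][x][y]displace,hue=h=90" "../out/$curr"
  else
    cp "$curr" "../out/$curr"
  fi
  n=$((n+1))
done
cd ..
#recombine the treated stills into a new video with the reworked audio
ffmpeg -framerate 25 -i out/%05d.png -i glitched_audio.wav -shortest -c:v mpeg4 -q 0 ghosts_rebuilt.avi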


Ghosts part 1

 So Just to illustrate the difference without and with ATI graphics card


LinuxMint Bea 2.1 without ATI graphics card



LinuxMint Bea 2.1 with ATI graphics


These images all started at 640x480 pixels, but at lower resolutions like 256x256 or even 80x80 more interesting things happen.


 

As I change resolution the artifacts created change, each new technique used adding to those artifacts like wind blowing sand around obstacles in real life.

Lower resolutions = more artifacts

Or we could just cut the original images up and attempt to reassemble them, a technique I've started to use a lot more of late. It's useful in its own right, but also for creating displacement maps in scripts. This script happened by accident – I was actually trying to create a simple cut-up-and-rejoin routine, and to test that I worked on this process, which didn't work how I intended, but I like what it does.
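
A minimal sketch of that cut-up-and-reassemble idea, assuming ImageMagick (convert and montage) plus GNU shuf – the 8x8 tiling and file names are illustrative:

#slice the image into an 8x8 grid of equal tiles
convert in.png -crop 8x8@ +repage +adjoin tile_%03d.png
#reassemble the tiles in shuffled order
montage $(ls tile_*.png | shuf) -tile 8x8 -geometry +0+0 rebuilt.png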



Output as displacement maps for originals.




Loss or reduction of quality can be a rich breeding ground for artifacts. Discovering and exploring these flaws had a serious effect on how I viewed resolution and quality – the lower the resolution, the more interesting things became for me when working in this way.

These images were made on a Dell Wyse 5010 Dx0D thin client from 2016 with an AMD G-T48E APU. Early versions of this technique used a standard mid-2000s mini-ATX motherboard and an AMD Radeon HD 3650 AGP graphics card (these give different colours and effects). The hardware was not altered in any way and these images are direct output as seen on screen – this effect can also be replicated, with variations, in more modern versions of Linux such as Gnuinos chimera.

To return to the theme of this talk

Part of our work as artists is to fight against the entropy of generational loss: we keep copies of work on hard drives and have backup strategies to keep our originals intact, all the while these images are being accessed, changed, uploaded, downloaded and remixed. An image on the internet is always changing, its meaning shifting with each iteration, context, or even the device it is viewed on. Whatever we think about copyright or ownership, an image or video or file on the internet is always a remix away from being turned into something else; it is a thing in itself, but it is also a base material.

To work in this environment means giving up a large degree of control. But how then do we retain authorship if anything can be copied and remixed? Essentially we have to change our ideas of what authorship and ownership mean – controlling work online is a losing game unless we take lessons from the open source and free software movement. One way of controlling authorship is by giving others the right to use a work, remix it, or keep a copy of it, as long as those others grant the same rights onwards if they make work based on what I have made – a culture of sharing rather than denying (and also legally binding) which fosters rather than restricts. To gain control we must lose control and give up on the now absurd idea of the unique or investment work beloved of the old art establishment.

Scarcity value.

As a side note, NFTs also slyly acknowledge this absurdity whilst still charging for files in editions, similar to the way that artist printmakers print limited editions – giving the illusion of ownership via blockchain ledger entries and the notional passing of notional money (cryptocurrency). Most of the ideas that underpin these 'markets' reflect old art world practices and fight against the reality of digital art, which is that if it is on a screen or a device or in an accessible file, it can be copied, remixed and shared as part of a cultural commons.

Traditional printmakers may make a limited edition of prints from their own original plates for sale, the value of those prints reflecting their scarcity. The printmaker could still run off a few hundred extra prints to sell, but that would reduce the artificial scarcity value on which the old art gallery system/establishment works, and which controls artists and what is seen and valued culturally and monetarily.

Scarcity value and greed = New Warhols = Art as commodity

In a particularly interesting turn of events, the artist Paul Stephenson tracked down original Andy Warhol acetates and made 'new' Warhol works. From the BBC article from 2017 (https://www.bbc.com/news/entertainment-arts-41634496):

'Stephenson has made new versions of Warhol works by posthumously tracking down the pop artist's original acetates, paints and printer, and recreating the entire process as precisely as possible.'

'While Warhol's assistants did many parts of the physical work, the artist, who died in 1987, was the only one who worked directly on these acetates, touching up parts of the portraits to prepare them for printing.'

'Stephenson took the acetates to one of Warhol's original screenprinters in New York, Alexander Heinrici, who offered to help use them to make new paintings.'

An artist's value as a commodity for investment and speculation increases after death, and being able to print new works to satisfy demand seems logical from a money-making perspective – but what does it say about authorship and the ethical state of the art market itself? In a way I can only applaud the artist doing this, as eventually it will reduce the monetary value of all Warhol works (and Warhol would probably be quite amused by, and approving of, this making of 'new' works). But old world copyright laws might have something to say about it; scarcity value and ownership could be viewed as anti-cultural and anti-society.

To be a digital artist also means acknowledging the absurdity of the 'original' unique work, and questioning interaction with the old art world, the need for commercial galleries, or even the notion of curatorship. In the online communities where we work, such as the Glitch Artists Collective network, we become:

Self organising – self curating

See festivals such as The Wrong Biennial, Glitch art is dead, Fu:bar, Glitch art Brazil.

There is parallel self-curating and self-organizing within the NFT space, but my take is that, though laudable from a community aspect, ultimately these are only seeking legitimacy from, or aping, pre-existing structures within the traditional art market; they are less about creating or reflecting a shared cultural commons and more about money, reputation and the next generation of investible art stars.

 

Creative Commons rejects the notion of scarcity and fosters abundance.

I share my work under the Creative Commons licence CC BY-NC-SA 4.0, which states (find the legal code here: https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode.en#s3):


creative commons licence CC BY-NC-SA 4.0


Terms and conditions

This aspect of sharing and remixing is fundamental to glitch art (though not many artists use Creative Commons licenses, there is an unwritten understanding that sharing and remixing is good) – both in source material, in final work (in that any digital work can be final), and in the sharing of techniques, software and scripts.

My work is made using pretty much entirely free or libre software, which relies on an open and free software ecosystem – one I strongly believe in and advocate for; Creative Commons goes hand in hand with it, libre software being founded on the principles of the Free Software Foundation and its list of four software freedoms:

0) The freedom to run the program as you wish, for any purpose (freedom 0).
1) The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
2) The freedom to redistribute copies so you can help your neighbor (freedom 2).
3) The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.

See also the definition of free cultural works here: https://freedomdefined.org/Definition

The four freedoms applied to cultural works

1) the freedom to use the work and enjoy the benefits of using it

2) the freedom to study the work and to apply knowledge acquired from it

3) the freedom to make and redistribute copies, in whole or in part, of the information or expression

4) the freedom to make changes and improvements, and to distribute derivative works

To sum up: control of authorship (other than attribution and crediting the original source) does not exist within digital art, other than through peer pressure, which often states it's okay to remix but wrong to steal and claim work as your own. We could fight to control our work and our ownership of an image or a video or a music file, a document, whatever, but ultimately there is a copy of that somewhere that someone has made for their own use and purposes. As I believe in remix culture and the idea that culture and art should be available, that is a problem only if we adhere to the old ideas of copyright and ownership – if a digital file is not a thing, who can truly own what is intangible?

This attitude also reflects a culture of realism – most of us who make art will not make money from it or even be able to support ourselves through it, instead taking up jobs outside of art, in teaching or research, looking for funding through grants, residencies etc., or working completely outside of the art world. But that also makes us free to make the work we want to. It does not make the work any less valid – far from it; it allows us to work freely outside of old establishments and paradigms which no longer understand or serve us, free to create newer, more relevant structures which operate along the lines of a culture of abundance, sharing, and more open access to the means of artistic production and dissemination. It allows us to build a cultural commons where work has value for what it is rather than how much it is worth or a received assignment of value.

Because seriously, who goes to galleries? Contrast the number of people who wouldn't be seen dead in a gallery but will quite happily look at work on their devices – definitions of art have changed with the onset of the internet, and the audience has grown; often the audience makes art in response to what they see, they don't just passively receive it. We as the makers and consumers define what it is, rather than old institutions and vested interests; it is an alive thing rather than a dead thing to be studied and dissected.

Towards a cultural commons

Lawrence Lessig is the originator of Creative Commons, and from Creative Commons we can infer the idea of a cultural commons, where art, culture and learning are shared and valued without ideas of scarcity value and ownership – something which already exists at the heart of glitch art, where ideas, techniques and work are shared freely and openly. From Lawrence Lessig we also get the formulation of ideas and discussion around remix culture, through his book 'Remix: Making Art and Commerce Thrive in the Hybrid Economy'.

What is remix culture, and why do I say everything is a remix? If we talk about generational loss then the remix element is obvious, as a file is physically copied, recopied and remixed by transmission. In terms of culture itself, it's the act of taking a pre-existing work or idea and making something with it. In glitch art the use of archive.org is near ubiquitous as a source of material, much of it out of copyright or public domain (it's safer to remix a work that's public domain), or say a photo library like Pixabay. We could also consider collage art a precursor to glitch art, in that pre-existing works are used and re-contextualised through deconstruction and the reassigning of meaning through proximity, beyond the inadvertent remixes engendered by device or generational loss.

Remix culture is at the heart of any art, going back to the renaissance or further – each generation looks at a previous generation's work and borrows themes, iconography, techniques, or just plain copies. As Lawrence Lessig states:

I’ve described what I mean by remix by describing a bit of its prac-

tice. Whether text or beyond text, remix is collage; it comes from

combining elements of RO (read-only) culture; it succeeds by leveraging the

meaning created by the reference to build something new.

There are two goods that remix creates, at least for us, or for our kids, at least now. One is the good of community. The other is education.’

And:

'Remixes happen within a community of remixers. In the digital age, that community can be spread around the world. Members of that community create in part for one another. They are showing one another how they can create, as kids on a skateboard are showing their friends how they can create. That showing is valuable, even when the stuff produced is not.'

Glitch art is fundamentally a community – people sharing their latest technique or work, trying to one-up each other – but it is also an inviting community in a way that traditional art communities often are not.

In the example I showed towards the beginning of this talk, I scraped the images from a website, downloaded the probably copyrighted images and turned them into something new. Who then owns these images, and at what point does a remix become a new work? If everything can be looked at as source material, who owns it? My answer is that we all do. If we look at cultural hoarding and gatekeeping in the old art world as a cultural loss to all of us, then remixing, allied with the free software movement and Creative Commons, can be seen as cultural gain, expanding as it does access to tools, audience and participation. But this also implies a duty on us as artists to further participation, inclusivity, curatorship and opportunity.

But we must also be wary of creeping academicization – the need to study, dissect and classify within terms the establishment understands and can use to co-opt, curate and fit within an old art world narrative – we must forcefully resist this by writing our own histories, our own studies and our own narratives – work which is already under way.

Getting back to the theme, beyond philosophical implications

With glitch art we often work in ways that turn the idea of control on its head. If control means retaining everything – physical objects, all the information, all the detail – then glitch art reflects the idea that loss is an integral part of existence and that trying to hold on to anything is a losing game. Digital art exists in an inherently fragile ecosystem: without power it doesn't exist, or rather it does, but as the inaccessible content of hard drives or remote servers; without power it cannot be seen, and without networks it cannot propagate. I make work which realistically will sooner or later disappear, and probably more completely than previous generations of artists' work.

We may be one of the first generations of artists in recent history to leave no trace of our work other than the physical, broken devices on which it was made, and an oral history of practices which gets lost or changed over time. Therefore it is of primary importance that we archive our work individually and collectively (taking control of those processes), and also record an oral history of practices and timelines, and control that process as well – asking the traditional art establishment to critique, write about, or show glitch art or digital art in general risks at the very least misunderstanding, and at worst misrepresentation and caricature. We must also control the narrative around what we do.

Control the means of production

Control the means of digital production: understand your tools, maintain your hardware, become good at managing your files, don't rely on big tech to keep a record of your work, fight the scourge of bit rot and dead links, record our shared history.

I neither use nor endorse Apple products. No proprietary or Apple hardware/software was used in the preparation of this talk, which was written on completely libre operating systems (Trisquel, Gnuinos and PureOS) and partially written on a libre-booted Dell Latitude E6400.
CC BY-NC-SA 4.0

Mark Fisher – ‘Ghosts of my life’, Fukuyama’s ‘End of history’ and rebooting the future with glitch art.

Note- this was the introduction I gave during a recent online discussion with Verena Voigt ( https://www.verena-voigt-pr.de/ ) a...