Wednesday 29 January 2020

Why AI is not the future of art (an artist's view).

 
Impossible Jewellery (GAN and GLIC)

Like all tools, AI becomes fetishised by those pushing it as a great disruptor. The pitch goes: if we can only collect enough data points or images, essentially cannibalise the totality of all art made up until now and feed it back into our model, we can create 'new' artworks that do away with those pesky artists, something akin to a factory churning out new novels, films and film stars, like the machines used by the Ministry of Truth in 1984 to feed the masses glib and sugary media that satisfies a craving but never gives insight. The approach a lot of data scientists take is to aim for something like deepfakes: art which convinces you it might have been made by human hands. Imitation is not the sincerest form of flattery, especially in the case of things like style transfer, where essentially we are playing the game of one tune to the sound of another: https://genekogan.com/works/style-transfer/

If AI art seeks to imitate that which has gone before, it will fail; if it relies on aping human aesthetics, it will fail. What is most interesting to me about AI and machine learning is mistraining, mislearning and failure: the parade of freakish faces or buildings which leads us into a new aesthetic unbounded by old paradigms of beauty or balance. I am not interested in faces which do not exist, or whatever else does not exist, or any other fakery which will subsequently be used as an agent of control, obfuscation, state-sponsored hacking or social engineering. The inherent faultiness of image recognition is a case in point:

Something went wrong - image without background.


You can see the person in the image, I can see the person in the image, the algorithm cannot, and that says something quite important about the technology (though I can see applications for deliberate obfuscation exploits and glitch art in this).

Some of the most distinctive and interesting work being done with AI is by artists themselves, training and subverting the technology.

Artists such as Vadim Epstein (find words and pictures here: https://cdm.link/2019/08/ai-mirror-vadim-epstein/)

Or applications such as Ganbreeder, now Artbreeder: https://artbreeder.com/


My argument as an artist is very simple: as in economics, those who see advantage in using a technology to corner a market will seek to do so. This is not necessarily a good thing, especially if those doing the cornering are the ones who already set the agenda, leading to a moribund aesthetic and the exclusion and impoverishment of large swathes of society. Their agenda is not ours; let's not get caught up in it.

RED plus FFmpeg displacement.

Red.

Lately I've been playing with low-end machines and faulty graphics cards. These stills are made by feeding a video through Legacy OS 2017 with an old PCI graphics card, an Avance Logic with an ALG2302.A chip. First I play the video and capture it using my trusty VGA-to-composite adapter and a Pinnacle TV capture card, then I slice the capture up into stills. What I like most about these is the texture and the colour red, plus the almost GAN-like blurriness and indistinctness.
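If you want to try the slicing step yourself, a minimal sketch with ffmpeg looks something like this (the filename capture.avi is a placeholder; fps=2 pulls two stills per second, or drop the -vf filter entirely to dump every frame):

ffmpeg -i capture.avi -vf fps=2 still_%04d.png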

Though I started work on this series at the beginning of 2020 (this post originally went up on 29/01/2020), I've begun to go back over those captures, partly because of the richness of the colours, which keeps drawing me back. Also, with the ongoing pandemic (it's April 2021 and we are still in lockdown), I've had a chance to start thinking in different ways about manipulating video and about what it is I'm looking for, which in many ways brings me back to my roots as a painter, and specifically to abstraction.

Influenced by the work of rä̻́s aļ̷̦ha̶̡̡̠̟̟̟̟gu̷̢̢̢e (find their work here: https://rasalh.tumblr.com/) and their use of displacement maps on still images, I looked for ways to use that on video. After a simple Google search I found this artist, https://abissusvoide.wordpress.com/2018/05/23/ffmpeg-displacement-map-experiments/, who had already experimented with them following work by Tomasz Sulej (that name will keep cropping up, as they are one of the most innovative toolmakers and thinkers in the glitch art community). That led to some experimenting and playing around with the command line, and reading the ffmpeg documentation led me to this command:

ffmpeg -i video1.mp4 -i video2.mp4 -i video3.mp4 -an -c:v libxvid -q 10 -filter_complex '[0][1][2]displace=edge=smear'  displacedvideo.avi

(I should state, as always, that this is on Linux Mint 19.3 running the version of ffmpeg that apt provides; this may not work on your OS or version of ffmpeg.)
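For anyone puzzling over the filter syntax: displace takes three inputs, treating the first stream as the source and the second and third as the horizontal and vertical displacement maps, which is why three videos go in and one comes out. As a variation (same caveats about versions, and filenames are placeholders), a single video can be made to displace itself by splitting it into three identical streams, something like:

ffmpeg -i video1.mp4 -an -c:v libxvid -q 10 -filter_complex '[0:v]split=3[a][b][c];[a][b][c]displace=edge=smear' selfdisplaced.avi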

The original command led to this long-form video, made over a series of days by taking the output of one session and feeding it back in with additions and variations until I got where I wanted it to be.
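In practice a feedback pass looks something like this (filenames again are placeholders; the previous output becomes the new source alongside fresh material):

ffmpeg -i displacedvideo.avi -i newvideo1.mp4 -i newvideo2.mp4 -an -c:v libxvid -q 10 -filter_complex '[0][1][2]displace=edge=smear' pass2.avi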


Addendum, May 2021: if you want to just pipe the output and watch it working, to make sure before you commit to encoding a lengthy video (or, as asked elsewhere, for live performance), you can do this:

ffmpeg -i video1.avi -i video2.avi -i video3.avi -an -filter_complex '[0][1][2]displace=edge=mirror' -f rawvideo -pix_fmt yuv420p - | ffplay -f rawvideo -pix_fmt yuv420p -s 960x540 -

Make sure that your output size, i.e. the '-s 960x540' after the video is piped through ffplay, matches your input video dimensions.
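One way around matching the size by hand, as far as I know available in any stock ffmpeg build, is to pipe through the nut muxer instead of raw video, since nut carries the dimensions and pixel format along with the frames (same placeholder filenames as above):

ffmpeg -i video1.avi -i video2.avi -i video3.avi -an -filter_complex '[0][1][2]displace=edge=mirror' -c:v rawvideo -f nut - | ffplay -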


ikillerpulse (requires tomato.py in working directory)

I've been working on this script for the last week or so. What does it do? It takes an input video/s,  converts to a format we can use f...