Wednesday 29 January 2020

RED plus FFmpeg displacement.

Red.

Lately I've been playing with low-end machines and faulty graphics cards. These stills are made by feeding a video through legacy os 2017 with an old PCI graphics card, an Avance Logic with an ALG2302.A chip. First I play the video and capture that using my trusty VGA-to-composite adapter and Pinnacle TV capture card, then I slice the capture up into stills. What I like most about these is the texture and the colour red, plus the almost GAN-like blurriness and indistinctness.
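
If you want to do the slicing step with FFmpeg rather than by hand, an extraction pass like this works (capture.avi and the rate of five stills a second are just placeholders, adjust to taste):

ffmpeg -i capture.avi -vf fps=5 still_%04d.png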

Though I started to work on this series at the beginning of 2020 (this was originally posted on 29/01/2020), I've begun to go back over those captures, partly because of the richness of the colours that draws me back. Also, with the ongoing pandemic (April 2021 and we are still in lockdown), I've had a chance to start thinking in different ways about manipulating video and what it is I'm looking for, which in many ways brings me back to my roots as a painter, and specifically abstraction.

Influenced by the work of rä̻́s aļ̷̦ha̶̡̡̠̟̟̟̟gu̷̢̢̢e (find their work here: https://rasalh.tumblr.com/) and their use of displacement maps on still images, I looked for ways to use that on video. After a simple Google search I found this artist, https://abissusvoide.wordpress.com/2018/05/23/ffmpeg-displacement-map-experiments/, who had already experimented with them following work by Tomasz Sulej (that name will keep cropping up, as they are one of the most innovative toolmakers and thinkers in the glitch art community). That led to some experimenting and playing around with the command line and reading the FFmpeg documentation, which led me to this command:

ffmpeg -i video1.mp4 -i video2.mp4 -i video3.mp4 -an -c:v libxvid -q 10 -filter_complex '[0][1][2]displace=edge=smear'  displacedvideo.avi

(I should state, as always, that this is on Linux Mint 19.3 running the version of FFmpeg that apt provides; this may not work on your OS or version of FFmpeg.)
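
The displace filter takes three inputs: the source, then one video whose pixel values push the source horizontally and one that pushes it vertically. If you only have one video to hand, you can displace it with itself by splitting it into three streams, along the same lines (the [a][b][c] labels are just arbitrary names):

ffmpeg -i video1.mp4 -an -c:v libxvid -q 10 -filter_complex '[0]split=3[a][b][c];[a][b][c]displace=edge=smear' displacedvideo.avi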

All of this led to a long-form video, made over a series of days by taking the output of one session and feeding it back in with additions and variations until I got where I wanted it to be.
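
Each feedback pass is essentially the previous output fed back in as the first input, with fresh maps; something like this (filenames here are hypothetical):

ffmpeg -i displacedvideo.avi -i video4.mp4 -i video5.mp4 -an -c:v libxvid -q 10 -filter_complex '[0][1][2]displace=edge=smear' displacedvideo2.avi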


Addendum, May 2021: if you want to just pipe the output and watch it working, to make sure before you commit to encoding a lengthy video (or, as asked elsewhere, for live performance), you can do this:

ffmpeg -i video1.avi -i video2.avi -i video3.avi -an -filter_complex '[0][1][2]displace=edge=mirror' -f rawvideo -pix_fmt yuv420p - | ffplay -f rawvideo -pix_fmt yuv420p -s 960x540 -

Make sure that the output size, i.e. the '-s 960x540' after the video is piped through ffplay, matches your input video's dimensions.
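
If you're not sure of the input dimensions, ffprobe (which ships with FFmpeg) will report them:

ffprobe -v error -select_streams v:0 -show_entries stream=width,height -of csv=s=x:p=0 video1.avi

which prints something like 960x540, ready to drop in after -s.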
