Monday, 14 June 2021

Quick and dirty guide to using ffglitch/ffgac in Windows 10




This post is in response to a question posted in r/glitch_art about using FFglitch in Windows 10. I generally use Linux exclusively, but I've become aware over the last few months that I have to be a bit more inclusive: not everyone wants to run Linux, for one reason or another, so I'm trying to port some of my examples to Windows.

What is FFglitch? Ramiro Polla's FFglitch is basically the best tool for datamoshing there is. Kaspar Ravel's tomato v2 is a beautiful and simple tool but it has limits; FFglitch is more complicated to use but gives much more fine-grained control over the process. Where to get it: https://ffglitch.org/ - it comes in Linux, Mac and Windows flavors. Download it and unpack it. There's no need to install it, as it's a standalone binary, but to use it we will need to take a few steps first.

First things first, download Notepad++. To alter scripts and settings with FFglitch it's easier if you have a good editor - I recommend either Geany or Notepad++ - so download and install one first if you haven't got it installed already: https://notepad-plus-plus.org/

FFglitch is essentially a command line tool, and this is where Windows sucks. PowerShell is okay, but we need a bit more power and some Unix-like tools, which is where the Git Bash terminal comes in (I've written about that in a previous post). Install it from https://gitforwindows.org/ and follow the onscreen prompts carefully. Generally it's okay to go with the defaults, except where it asks for your default editor - I recommend you tick the box that says Notepad++. Once it's installed you will notice that if you right click in a folder you now have an option that says 'Git Bash Here' - and this is what we are after. Bash is a terminal like PowerShell but way more powerful; more on that later.

Optional - if you want to use Python scripts: the developer Ramiro Polla states he is moving away from those in favor of JavaScript, but there's a lot you can do with them (see Jo Grys's posts in the Facebook group Glitch Artists Collective - Tooltime, and also Kaspar Ravel and Thomas Collet's work on implementing the Game of Life as motion vectors with FFglitch). Sooo, download and install Chocolatey from https://community.chocolatey.org/, then open PowerShell as administrator and install Python with Chocolatey as shown below.
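The exact output may drift as Chocolatey and Python move on, but at the time of writing it looks like this; the version check at the end is just my own sanity check, not anything FFglitch requires:

choco install python --pre
# close and reopen PowerShell so the new PATH is picked up, then check it worked:
python --version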

With all of that out of the way, I'm going to walk through the example I was actually asked about - the mv sink and rise example that Ramiro Polla has on his website. First we need to convert whatever file we are using into a format that suits ffgac (the new version of the encoder is called ffgac, not ffglitch), and handily there is an example script to do just that. One of the misconceptions people have about FFglitch/ffgac is that they can prep their files with standard ffmpeg, which leads to trying and failing to use these tools - and I think that explains the renaming. ffgac is a standalone version of ffmpeg designed to be used with this set of tools; all the scripts reference it and not any other version you may have installed, which is why we work within the ffgac/ffglitch directory.

So prep your file with the script Ramiro Polla has on his website, modified here for use with Windows. Open Notepad++, copy-pasta everything below from #!/bin/bash to done, and save that script in the folder where you have ffgac as ffgacprep.sh

#!/bin/bash
mkdir mpeg2

# change mp4 below (in both places) to whatever file extension your source files have.
# for each file: strip the audio (-an), encode as raw MPEG-2 video, quality 6
# (-qscale:v, lower = better), 30 fps, a keyframe every 90 frames (-g 90),
# scaled to 1280x720, and write the result into the mpeg2 folder as a .mpg
for I in *.mp4;
  do
    ./ffgac.exe -i "$I" \
      -an -vcodec mpeg2video -f rawvideo \
      -mpv_flags +nopimb \
      -qscale:v 6 \
      -r 30 \
      -g 90 \
      -s 1280x720 \
      -y mpeg2/"${I/.mp4/.mpg}";
  done
 

Open a Git Bash terminal in that folder, i.e. right click and click on 'Git Bash Here', and the terminal window will appear. Before we can use the script we have to make it executable, so in the terminal type exactly this (without the quote marks): 'chmod u+x ffgacprep.sh' - if all goes well there will be no error messages and the command line just returns. Then, to prep your file, type exactly this in the terminal:

 ./ffgacprep.sh 

The script will create a folder called mpeg2 inside that folder and output your converted file there, all ready for the next stage. Move it into the main ffglitch folder.
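From the same Git Bash window that's one command - yourclip.mpg here is just a stand-in for whatever your converted file is actually called:

mv mpeg2/yourclip.mpg .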

You could also skip the script and use the command shown on the website. With your video in the ffglitch folder, copy and paste that command into the Git Bash terminal (to paste into Git Bash press Shift and Ins). Change input_file to the name of your file, leave temp_file.mpg as your output file name unless you're working on a lot of files, and alter the first part of the command from ffgac to ./ffgac.exe (because we are working on Windows and not Linux), so it will look like this (but typed as one long line). Use the left and right arrow keys to navigate through the command and change it.

./ffgac.exe -i input_file -an -mpv_flags +nopimb+forcemv -qscale:v 0 -g 1000 
-vcodec mpeg2video -f rawvideo -y temp_file.mpg

Go back to the FFglitch website, to https://ffglitch.org/2020/07/mv.html, and look for the section about halfway down where he says 'With all that being said, here’s the script that we will be using'. Open Notepad++, copy the script, and save it in the ffglitch folder as 'mv_sink_and_rise.js' (without the quotes).

Now for the magic: run the command he has on his website, but for Windows we change it to:

./ffedit.exe -i temp_file.mpg -f mv -s mv_sink_and_rise.js -o glitched_file.mpg

Copy and paste (or type!) the above into the Git Bash terminal and press Enter.

Once that has run successfully, look for glitched_file.mpg in the ffglitch folder and play it!
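If you prepped a whole folder of clips with ffgacprep.sh earlier, you can glitch the lot in one go by wrapping ffedit in the same sort of loop. This is only a sketch along those lines - the glitched folder and the glitched_ prefix are my own choices, not part of Ramiro's tutorial:

#!/bin/bash
mkdir glitched

# run the motion vector script over every prepped .mpg in the mpeg2 folder
for I in mpeg2/*.mpg;
  do
    ./ffedit.exe -i "$I" -f mv -s mv_sink_and_rise.js \
      -o glitched/"glitched_$(basename "$I")";
  done

Save it next to ffedit.exe as something like glitchall.sh, chmod u+x it, and run it just like the prep script.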

Once you have this running and are happy with the process, go back to Ramiro Polla's tutorials and start messing about with the values in the mv_sink_and_rise.js script.

Friday, 11 June 2021

Black and white (The future is 1 bit)


A while back I became interested in dithering - an early technique used for creating greyscale images in the days when computer monitors, and sometimes computers, were only black and white - think of the early Macintosh, or that portable black and white TV you used for your ZX81 or any of the early home computers. Very similar to a newspaper's grid of dots, dithering has a certain aesthetic appeal - there's a wonderful article on its roots, with links to modern Processing sketches, at https://www.evilmadscientist.com/2012/dithering/. But those don't quite fill my requirements: they are great, but what if I want to turn a whole video into black and white (not greyscale) rather than just capture it live? Something like this:

There's a simple way to do this in ffmpeg using this command:

ffmpeg -i Input_1280x720.mp4 -f lavfi -i color=gray:s=1280x720 -f lavfi -i color=black:s=1280x720 -f lavfi -i color=white:s=1280x720 -filter_complex threshold output.mp4

which gives us this:


Although I like this, it's not quite what I was looking for. So I looked for some alternative way of creating dither with ffmpeg, but none of the methods I found gave me that look and feel. Then I thought: could I just convert the video to stills and use an old format like pbm? Researching this I found that the netpbm library (find the netpbm page and links here) has a function for creating Atkinson-like dither.

Most Linux distros come with the netpbm library, and that gives us different ways of dithering an image, but the one I was most interested in - pamditherbw (which has the setting for Atkinson dithering) - didn't seem to be included, which sent me down a rabbit hole of finding out why. To get this function you have to uninstall the netpbm that comes with your distro, then download the deb file from here and carefully follow the error messages that installing it with gdebi throws up (there are a couple of packages that need to be uninstalled, and another that needs to be installed, as well as the netpbm package itself). I know this sounds a little vaguer than I usually write, but it's easy enough when you see the process and follow the prompts.
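Once pamditherbw is installed it's worth testing the chain on a single frame before letting a script loose on thousands of them. Something along these lines should do it - the file names are just examples, and the -randomseed value is the same arbitrary one used in the script further down:

# grab one frame from the video as a greyscale pgm still
ffmpeg -i input.mp4 -vframes 1 frame.pgm
# dither it down to pure black and white with Atkinson dithering
pamditherbw -atkinson -randomseed 8 frame.pgm > frame.pam
# convert the pam to a pbm you can open in an image viewer
mogrify -format pbm frame.pam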

So, having that installed, I could now run the script I wanted to run, which gives me this - best viewed full screen (I'll post the script at the end).


This is the script. Caveats: you must remember to add the file extension when answering 'What do you want to name the output file?', i.e. .mp4, and if you want a codec other than h264 you may wish to alter the ffmpeg statement at the end. It creates a lot of files while it's working, so you might want to use a scratch disk (I did a fifty minute video which created 80,000-plus files), but it does clean up after itself, i.e. it removes the stills before exiting.

#!/bin/bash
# to get pamditherbw to work, uninstall your package manager's version of netpbm and
# install the version from here: https://sourceforge.net/projects/netpbm/files/
# more info on pamditherbw here: http://netpbm.sourceforge.net/doc/pamditherbw.html

echo -n "File to work on ? : "
read n

echo -n "What do you want to name the output file (include the extension, e.g. .mp4) ? : "
read z

echo -n "Framerate as a number between 1 and 30 ? : "
read m

echo -n "crf value for final video ? : "
read o

# convert the video to a numbered sequence of greyscale pgm stills
ffmpeg -i "$n" -vf fps="$m" image-%05d.pgm

# dither each still down to 1-bit black and white
i=0
find . -type f -name '*.pgm' | while read filename; do echo "${filename}";
  ((i++))
  # try different random seeds for a different grain
  pamditherbw -atkinson -randomseed 8 "${filename}" > "$i.pam";
  mogrify -format pbm "$i.pam"
  mv "$i.pbm" "$(basename "${filename}" .pgm).pbm"
  rm "$i.pam"
done

# tidy up the intermediate stills
rm -f *.pam
rm *.pgm

# reassemble the dithered stills into a video
ffmpeg -i image-%05d.pbm -vcodec libx264 -r "$m" -crf "$o" -pix_fmt yuv420p "$z"
#ffmpeg -i image-%05d.pbm -r $m -c:v hevc -b:v 2M  $z

rm *.pbm

#ffmpeg -r $m -i %d.jpg -i $n -map 0:0 -map 1:1 -c:a copy -c:v libx264 -tune stillimage -crf $o -r 24 lol.mp4
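Save the script in the same folder as your source video (dither.sh is just what I happened to call it), make it executable and run it from a terminal:

chmod u+x dither.sh
./dither.sh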





Friday, 4 June 2021

Hex editing Tarot cards.


 

I don't often make gifs; this one was made back in 2014 from the following images and a few more. The source images were black and white versions of tarot cards which I'd found online and which, as far as I know, were published under a Creative Commons license. The technique I used was one I talked about in a previous blog post, 'Heads and tails', which gives this cut-up and blocky effect, especially when using black and white images as a base.

Why the Tarot though? Throughout my life as an artist I've come back to the tarot time and again, as there is something about the imagery that tugs at the subconscious. Training as a painter (my degree is in fine art painting) taught me to look for a hint or a suggestion of the other, the thing within a painting that needs to be brought out - a process of listening and interaction, in the same way that reading the tarot requires listening to what the cards are suggesting rather than forcing my will or ego upon them. I apply that philosophy to what I make now: when I'm looking at video or images I'm making, I try to listen to those ticklings from the subconscious and allow them to guide me.

You can see the full set here, on my shiny newish Tumblr blog.
