Thursday 11 May 2023

Burner - video and scripts

 

Two ffplay windows generating feedback via two CLI scripts, plus an extra script to keep the mouse pointer present.


The first script is simple-ish -

ffplay -f x11grab -follow_mouse centered -framerate 23 -vf "rotate=-1.23,vflip" -video_size 640x640 -i :0.0
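
Because the filters sit in a single -vf chain, more can be stacked inside the same string. For example (just an illustration - the hue=h=90 value is arbitrary, not something from the setup above):

ffplay -f x11grab -follow_mouse centered -framerate 23 -video_size 640x640 -i :0.0 -vf "rotate=-1.23,vflip,hue=h=90"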
 

The second script is slightly more complex -

ffmpeg -f x11grab -follow_mouse centered -framerate 23 -video_size 640x640 -i :0.0 -f x11grab -follow_mouse centered -framerate 23 -video_size 640x640 -i :0.0 -lavfi '[1]split[x][y],[0][x][y]displace' -f rawvideo -pix_fmt yuv420p - | ffplay -f rawvideo -pix_fmt yuv420p -vf "rotate=1.23" -s 640x640 -
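
For readability, here is the same pipeline split over several lines - the backslashes (and the trailing pipe) just continue the command, so it should behave identically: two identical screen-grab inputs, the split/displace graph, raw YUV out, piped into ffplay which rotates it the other way:

ffmpeg \
    -f x11grab -follow_mouse centered -framerate 23 -video_size 640x640 -i :0.0 \
    -f x11grab -follow_mouse centered -framerate 23 -video_size 640x640 -i :0.0 \
    -lavfi '[1]split[x][y],[0][x][y]displace' \
    -f rawvideo -pix_fmt yuv420p - |
ffplay -f rawvideo -pix_fmt yuv420p -vf "rotate=1.23" -s 640x640 -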


The most complex script is the one that keeps the mouse pointer in place and active, moving it slightly every so often - it's run as a bash script from the command line -

echo "Press CTRL+C to stop…"
sleep 10
eval $(xdotool getmouselocation --shell)
echo $X,$Y
while :
do

xdotool mousemove $X $Y
sleep 5
xdotool mousemove_relative -- 0 -5
sleep 2
xdotool mousemove_relative -- 5 -5
sleep 2
xdotool mousemove_relative 0 5
sleep 2
xdotool mousemove_relative -- -5 5

#list for user change of mouse position

xdotool click 5
EXIT_CODE=$?
if [ $EXIT_CODE -eq 0 ]; then
eval $(xdotool getmouselocation --shell)
echo $X,$Y
fi
done
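
This assumes xdotool is installed (on Debian/Ubuntu something like sudo apt install xdotool); save it as, say, mousenudge.sh - the name is only a placeholder - and run it in the usual way:

chmod +x mousenudge.sh
./mousenudge.sh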


For topicality, here is a ChatGPT explanation of the second ffmpeg script. I originally fed the script in and asked it to suggest or add different filters, and it just regurgitated the original script I'd asked it to add to, but the explanation is pretty spot on (I do love stochastic parrots). Maybe Bard will do a better job!!

'Here's a breakdown of the filters used in this script:

    Input: -f x11grab -follow_mouse centered -framerate 23 -video_size 640x640 -i :0.0
    This sets up the first x11grab input for capturing the screen at a framerate of 23, with a video size of 640x640 pixels. The follow_mouse centered option captures the region around the mouse cursor and centers it in the output.

    Input: -f x11grab -follow_mouse centered -framerate 23 -video_size 640x640 -i :0.0
    This sets up the second x11grab input, similar to the first one.

    Filter: -lavfi '[1]split[x][y];[0][x][y]displace'
    This filter splits the second input into two streams ([x][y]) and performs the displace filter on the first input using the two streams. The split filter is used to duplicate the second input stream, so it can be used in both the [x] and [y] parts of the displace filter.

    Filter: -vf "rotate=1.23"
    This filter applies a rotation transformation to the video frames. The 1.23 value represents the rotation angle in radians.

    Output: -f rawvideo -pix_fmt yuv420p -
    This sets up the output format as raw video with YUV420 pixel format and pipes it to the next command.

    Player: | ffplay -f rawvideo -pix_fmt yuv420p -s 640x640 -
    This pipes the raw video output to ffplay for playback. The -s 640x640 option specifies the output display size as 640x640 pixels.

Feel free to adjust the parameters or add more filters as per your requirements.'
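
If you just want to see what displace itself is doing, without the screen grab in the loop, something along these lines should work as a quick test (testsrc2 here is only a stand-in source, not part of the scripts above):

ffplay -f lavfi -i "testsrc2=s=640x640:r=23,split=3[a][b][c];[a][b][c]displace"

Each pixel of the first stream gets shifted by an amount derived from the pixel values of the other two, so the test pattern comes out warped by its own brightness.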


I've also experimented with using the mandelbrot filter in place of one of the x11grab inputs, i.e. replacing this: -f x11grab -follow_mouse centered -framerate 23 -video_size 640x640 -i :0.0 with this: -f lavfi -i mandelbrot=s=640x640 -framerate 23 which will give me this:

 


So there is a lot of room for variation and exploration within this space.
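
For reference, written out in full the mandelbrot variant comes to something like this (a sketch rather than a tested command - I've folded the frame rate into the mandelbrot options and swapped it in as the first input, but it could equally replace the second):

ffmpeg -f lavfi -i mandelbrot=s=640x640:rate=23 -f x11grab -follow_mouse centered -framerate 23 -video_size 640x640 -i :0.0 -lavfi '[1]split[x][y],[0][x][y]displace' -f rawvideo -pix_fmt yuv420p - | ffplay -f rawvideo -pix_fmt yuv420p -vf "rotate=1.23" -s 640x640 -

With the mandelbrot as the first input the screen grab drives the displacement; swap the two inputs around and the mandelbrot becomes the displacement map instead.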


