Wednesday 11 January 2023

A short game of solitaire

[embedded video]
For the last few days I've been revisiting an old Processing sketch I based on Daniel Shiffman's image convolution example here: https://processing.org/examples/convolution.html

My version takes real-time input from a webcam or, on Linux, from a v4l2loopback virtual camera fed by a custom ffmpeg script which adds its own convolution pass and looks like the line below:

ffmpeg -f x11grab -follow_mouse centered -framerate 10 -video_size cif -i :0.0 -vf convolution="-0.75 100 -0.1 -1 -5 -75 -1 -1 1:0 -10 0 -2 -5 -1 -1 -1 -2" -f rawvideo -pix_fmt yuv420p -f v4l2 -vcodec rawvideo -s cif /dev/video10

(The best resource I've found for explaining and implementing v4l2loopback is https://narok.io/creating-a-virtual-webcam-on-linux/, and this technique lies at the heart of a lot of my current work.)
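If I'm reading ffmpeg's convolution filter right, the quoted argument is a colon-separated list of per-plane kernels — nine space-separated weights each, row-major for a 3x3 matrix — so the line above sets custom kernels for the first two planes of the yuv420p frame and leaves the rest at their defaults. Splitting the string makes the structure visible (plain Python, just for illustration):

```python
# Split the convolution argument from the ffmpeg line above into
# its per-plane kernels: planes are colon-separated, weights are
# space-separated and listed row-major.
arg = "-0.75 100 -0.1 -1 -5 -75 -1 -1 1:0 -10 0 -2 -5 -1 -1 -1 -2"
planes = [list(map(float, plane.split())) for plane in arg.split(":")]
for kernel in planes:
    # print each 3x3 kernel one row at a time
    for row in range(3):
        print(kernel[row * 3:row * 3 + 3])
```

Each row of three weights multiplies one row of the 3x3 neighbourhood around a pixel in that plane, which is the same per-pixel sum the Processing sketch below computes with its own (larger) matrix.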

The video above is an example of this working: I had Windows 95 open in a VirtualBox VM, started the v4l2loopback virtual camera, opened the sketch below, hovered the mouse over the Windows 95 VM and started playing solitaire. The video was captured in real time using SimpleScreenRecorder. (The sketch below is a work in progress, so it still has notes to myself and additions I'm playing with which may or may not be commented out. It is provided as-is; use it at your own discretion. It might work on Windows using a webcam or an OBS Studio virtual camera as input.)

import processing.video.*;


PImage edgeImg;
PImage img;

String imgFileName = "face";
String imgName = "face2";
String fileType = "jpg";
float r = 1;
// change values in matrices below for different effects
 
float[][] matrix = { { 0, 0.1, 0.7, 0.8, -1.3, 0 },
                     { 1.5, 1.5, 1, -1, 0, 1 },
                     { 0, -0.85, 0, 0, 0.7, -1 },
                     { 1, 0, 1, 1, 1, -1 },
                     { -2, 0, 0, 1, -1, -1 },
                     { 1, -1, -0.9, 1, -1, -1 } };
                     
Capture video;

void captureEvent(Capture video) {
  video.read();
}

void setup() {
  size(640, 640);
  video = new Capture(this, width, height);
  video.start();
}

void draw() {
  image(video, 0, 0);
  saveFrame("face2.jpg");
  img = loadImage("face2.jpg");

  noStroke();
  // background(255);
  imageMode(CENTER);
  // rectMode(CENTER);

  translate(img.width/2, img.height/2);
  filter(POSTERIZE, 2);
  // filter(INVERT);
  // filter(DILATE);

  float n = r++;

  rotate(n);
  image(img, 352, 288);
  saveFrame("face.jpg");

  edgeImg = loadImage("face.jpg");
  img = loadImage("face2.jpg");

  // Calculate the convolution
  // change loopend for the number of iterations you want to run
  int loopstart = 0;
  int loopend = 5;
  int xstart = 0;
  int ystart = 0;
  int xend = 640;
  int yend = 640;
  // matrixsize is the number of rows in the matrix above
  int matrixsize = 6;

  edgeImg.loadPixels();
  // Begin our loop for every pixel in the image
  for (int l = loopstart; l < loopend; l++) {
    for (int x = xstart; x < xend; x++) {
      for (int y = ystart; y < yend; y++) {
        color c = convolution(x, y, matrix, matrixsize, edgeImg);
        int loc = x + y*edgeImg.width;
        edgeImg.pixels[loc] = c;
      }
    }
  }
  edgeImg.updatePixels();
  image(edgeImg, 0, 0, edgeImg.width, edgeImg.height);
  // image(img, 0, 0, img.width, img.height);
}

color convolution(int x, int y, float[][] matrix, int matrixsize, PImage edgeImg)
{
  float rtotal = 0.0;
  float gtotal = 0.0;
  float btotal = 0.0;
  // offset anchors the kernel relative to (x, y); matrixsize / 2 would centre it
  int offset = matrixsize / 6;
  for (int i = 0; i < matrixsize; i++) {
    for (int j = 0; j < matrixsize; j++) {
      // Which pixel are we testing?
      int xloc = x + i - offset;
      int yloc = y + j - offset;
      int loc = xloc + edgeImg.width*yloc;
      // Make sure we haven't walked off our image, we could do better here
      loc = constrain(loc,0,edgeImg.pixels.length-1);
      // Calculate the convolution
      rtotal += (red(edgeImg.pixels[loc]) * matrix[i][j]);
      gtotal += (green(edgeImg.pixels[loc]) * matrix[i][j]);
      btotal += (blue(edgeImg.pixels[loc]) * matrix[i][j]);
    }
  }
  // Make sure RGB is within range
  rtotal = constrain(rtotal, 0, 255);
  gtotal = constrain(gtotal, 0, 255);
  btotal = constrain(btotal, 0, 255);
  // Return the resulting color
  return color(rtotal, gtotal, btotal);
}
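For experimenting with kernel values outside Processing, here is a rough single-channel Python translation of the convolution() routine above (my own transcription, not part of the sketch). Note that I've used the conventional matrixsize / 2 anchoring from Shiffman's original example, which centres the kernel on the pixel; the sketch uses matrixsize / 6 instead, which shifts the anchor off-centre:

```python
def convolution(pixels, width, x, y, matrix):
    """Apply an NxN kernel at (x, y) over a flat, row-major list of
    grayscale values, mirroring the Processing routine above but for
    a single channel instead of separate R, G and B sums."""
    n = len(matrix)
    offset = n // 2          # centre the kernel on (x, y)
    total = 0.0
    for i in range(n):
        for j in range(n):
            xloc = x + i - offset
            yloc = y + j - offset
            loc = xloc + width * yloc
            # clamp like constrain() so we never index off the image
            loc = min(max(loc, 0), len(pixels) - 1)
            total += pixels[loc] * matrix[i][j]
    # keep the result in the valid 0..255 range
    return min(max(total, 0.0), 255.0)

# a 3x3 kernel with a single 1 in the centre returns the pixel unchanged
identity = [[0, 0, 0], [0, 1, 0], [0, 0, 0]]
flat = [10, 20, 30, 40, 50, 60, 70, 80, 90]  # a 3x3 image, row-major
print(convolution(flat, 3, 1, 1, identity))  # 50.0
```

Swapping in rows from the big 6x6 matrix (and dropping the identity kernel) is a quick way to get a feel for which weights blow the sums out to the 0/255 clamp.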

 

The part of the video I find most interesting is the point where the solitaire cards bounce out of their piles, shown below; it has led me to think about feeding the sketch other inputs, like the previously explained sudacam scripts.

[embedded image]