Tuesday 19 July 2022

FFmpeg, virtual webcams and Processing

Following on from the previous post on revisiting convolution in Processing, and trying to tie that in with my previous explorations of the desktop as a performance space and feedback generator: wouldn't it be good if there was a way to pass the desktop as video into Processing scripts, in a similar way to how ffmpeg's x11grab lets us open a window which follows the mouse focus around the desktop, like a virtual webcam?

Turns out there is. A friend had told me about v4l2loopback previously, a way of creating virtual webcams in Linux, and then thinking about it the other day I came across this post which describes how to do it: https://narok.io/creating-a-virtual-webcam-on-linux/ . That author streams a video file, though, rather than what I want, which is the same input as I get from x11grab. Why? If we can create a virtual webcam we can use that as an input for Processing, so we can extend the possibilities of desktop feedback loops.

TL;DR: do this on Debian-based systems

install this

sudo apt install v4l2loopback-dkms

run this command  

sudo modprobe v4l2loopback devices=1 video_nr=10 max_buffers=2 exclusive_caps=1 card_label="Default WebCam"
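
( if you want to check the dummy device actually appeared, either of these should show it - note that v4l2-ctl comes from the v4l-utils package, which is a separate install )

ls /dev/video*

v4l2-ctl --list-devices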

then this 

ffmpeg -f x11grab -follow_mouse centered -framerate 10 -video_size 640x480 -i :0.0 -f v4l2 -vcodec rawvideo -s 640x480 /dev/video10
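
( with that ffmpeg command running you can sanity check the virtual webcam before involving Processing at all - ffplay ships as part of ffmpeg, so something like this should show the grabbed desktop )

ffplay -f v4l2 /dev/video10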

Then open a script in Processing which uses a webcam for input ( i.e. the convolution scripts I've been working on ) and experiment.
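
If Processing doesn't pick the virtual webcam up by default, a quick test sketch like the one below (just my own sanity check, not part of the convolution script) lists whatever capture devices the video library can see and opens one of them - look for the "Default WebCam" label set in the modprobe command above.

import processing.video.*;

Capture cam;

void setup() {
  size(640, 480);
  // print the available capture devices to the console
  String[] cameras = Capture.list();
  printArray(cameras);
  // swap the index for whichever entry is the v4l2loopback device
  cam = new Capture(this, width, height, cameras[0]);
  cam.start();
}

void captureEvent(Capture c) {
  c.read();
}

void draw() {
  image(cam, 0, 0);
}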

Tuesday 5 July 2022

Revisiting Convolution in Processing

Recently I've been revisiting a convolution script I use in the Processing environment, based on this example by Daniel Shiffman: https://processing.org/examples/convolution.html

My version of that is modified to take an input from a webcam or video capture device and perform convolution on the input in real time, outputting video in the Processing sketch window. This is done by taking a snapshot of the video feed, performing the convolution on that image and then outputting the result into the viewport; it happens so quickly (for the most part) that it seems as if you are seeing continuous video.
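
Distilled right down, the per-frame flow of the full script further down looks roughly like this (simplified here - the real version adds the pixelation pass and runs the convolution over several iterations):

void draw() {
  image(video, 0, 0);                // draw the live camera frame
  saveFrame("face.jpg");             // snapshot the frame to disk
  edgeImg = loadImage("face.jpg");   // reload the snapshot as an image
  edgeImg.loadPixels();
  // run the kernel over every pixel of the snapshot
  for (int x = 0; x < edgeImg.width; x++) {
    for (int y = 0; y < edgeImg.height; y++) {
      edgeImg.pixels[x + y*edgeImg.width] = convolution(x, y, matrix, 3, edgeImg);
    }
  }
  image(edgeImg, 0, 0);              // display the convolved snapshot
}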

I've also modified the script by combining it with another example script that comes with Processing which pixelates video - it seems to improve the quality of the convolution. The output of this version also reminds me of the effects I can achieve using some of my circuit-bent cameras.

The kernels (convolution matrices) I'm using are based in part on the work of Kunal Agnohotri, who gave a talk at Fubar 2019 on 'Hacking convolution filters'.
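
For comparison, a 'well behaved' kernel keeps its weights summing to roughly 1 so the overall brightness stays put - a standard sharpen matrix, for instance, looks like this. The kernels in the script below deliberately ignore that, using huge positive and negative weights, which is a big part of the blown-out, glitchy look:

// a conventional sharpen kernel - its weights sum to 1
float[][] sharpen = { {  0, -1,  0 },
                      { -1,  5, -1 },
                      {  0, -1,  0 } };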

Below is the script in its working state that I've been running on Gnuinos Linux (a fully libre version of Devuan Linux that I've started to swap over to). It's messy with my additions, thoughts and working notes. I've recently discovered that you can use the script without a video source attached and it will quite happily generate video if you comment out the line that states 'image(video, 0, 0);' by using //. This is what piqued my interest in revisiting this way of working, as I'm more and more drawn to black and white and its possibilities (video of that below, before the script).


** The script is below this - tested and running on Processing 3.5.4 with the video library installed; you should be able to copy and paste it into a fresh blank sketch in Processing **


import processing.video.*;

PImage img;
PImage edgeImg;

String imgFileName = "face";
String fileType = "jpg";

// from pixelated_video

int signal = 0;
int numPixelsWide, numPixelsHigh;
int blockSize = 2;

color CaptureColors[];

// change values in matrices below for different effects
 //below is reference matrix for bw convolution with no input image ie self generating
//float[][] matrix = { { -0.75, -0.6, -0.4, },
                     //{0.1, -1, 1.9,  },           
                     //{ 0.1, 0.8, 0.7, }  };
                    
 float[][] matrix = { { 0, -192, -1,  },
                     { 1, 2, 0,},
                      {0, -10, 700,  }
                    
                       };
Capture video;                  
                 
void captureEvent(Capture video) {
  video.read();
}
void setup() {
  size(720, 480);
  noStroke();
  video = new Capture(this, width, height);
  video.start(); 
 // saveFrame("face.jpg");
 
}

void draw() {
 
  {
    // comment out the first statement below (starting with image) to get b&w no-image self-generating convolution only
   image(video, 0, 0);
      //filter(POSTERIZE,4);
       //filter(INVERT);
       //filter(DILATE);

    saveFrame("face.jpg");
   
   edgeImg = loadImage("face.jpg");
  
    edgeImg.loadPixels();
  // Calculate the convolution
  // change loopend for number of iterations you want to run
  int loopstart =0;
  int loopend =4;
  int xstart = 0;
  int ystart =0 ;
  int xend = 720;
  int yend = 480;
  //  'int matrixsize = ' is number of rows down in matrix.
  int matrixsize = 3;
   
  // Begin our loop for every pixel in the image
 
  for (int l = loopstart; l < loopend; l++) {
  for (int x = xstart; x < xend; x++) {
    for (int y = ystart; y < yend; y++ ) {
      color c = convolution(x, y, matrix, matrixsize, edgeImg);
      int loc = x + y*edgeImg.width;
     edgeImg.pixels[loc] = c;
    
     
    }
  }
  }
  // image(edgeImg, 0, 0, edgeImg.width, edgeImg.height);
  
   int count = 0;
  
   numPixelsWide = edgeImg.width / blockSize;
  numPixelsHigh = edgeImg.height / blockSize;
 // println(numPixelsWide);
  CaptureColors = new color[numPixelsWide * numPixelsHigh];
 
   // loop for pixelation set block size
   for (int p = 0; p < numPixelsHigh; p++) {
      for (int q = 0; q < numPixelsWide; q++) {
        CaptureColors[count] = edgeImg.get(q*blockSize, p*blockSize);
        count++;
      }
    }
    for (int p = 0; p < numPixelsHigh; p++) {
    for (int q = 0; q < numPixelsWide; q++) {
    fill(CaptureColors[p*numPixelsWide + q]);
      rect(q*blockSize, p*blockSize, blockSize, blockSize);
     
    }
    }
  //saveFrame("face.jpg");
    //saveFrame("image####.jpg");
   }
    }
color convolution(int x, int y, float[][] matrix, int matrixsize, PImage edgeImg)
{
  float rtotal = 0.0;
  float gtotal = 0.0;
  float btotal = 0.0;
  // offset: how far the matrix reaches around the pixel being tested
  int offset = matrixsize / 3;
  for (int i = 0; i < matrixsize; i++){
    for (int j= 0; j < matrixsize; j++){
      // What pixel are we testing
      int xloc = x+i-offset;
      int yloc = y+j-offset;
      int loc = xloc + edgeImg.width*yloc;
      // Make sure we haven't walked off our image, we could do better here
      loc = constrain(loc,0,edgeImg.pixels.length-1);
      // Calculate the convolution
      rtotal += (red(edgeImg.pixels[loc]) * matrix[i][j]);
      gtotal += (green(edgeImg.pixels[loc]) * matrix[i][j]);
      btotal += (blue(edgeImg.pixels[loc]) * matrix[i][j]);
    }
  }
  // Make sure RGB is within range
  rtotal = constrain(rtotal, 0, 255);
  gtotal = constrain(gtotal, 0, 255);
  btotal = constrain(btotal, 0, 255);
  // Return the resulting color
  return color(rtotal, gtotal, btotal);
}
 
