One of my strategies when testing distros is to keep an external drive containing files in different formats, i.e. ogv, avi, mkv, mp4, encoded with a mix of codecs: h264, snow, h261, cavs etc. So when I fire up a new distro I can just mount the hard drive and test each file against a player, and I will have the VGA-to-composite adaptor connected to a secondary computer via an old TV capture card so I can record what I find. Using that method I then use Cheese webcam booth, which records in webm (I love webm because of the way it corrupts when hex edited), or more often I will use gtk-recordmydesktop, which I use to capture from tvtime as I find the capture is marginally better, but even so still lofi.
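If you want to build a similar test drive, the rough shape of it is a handful of ffmpeg one-liners. This is only a sketch: source.mp4 and the output names are placeholders, and it assumes an ffmpeg build with the libx264, libtheora and libvpx encoders enabled (the snow and h261 encoders are built in).

    # build a small mixed-codec test set from one source clip (source.mp4 is a placeholder)
    ffmpeg -i source.mp4 -c:v libx264 -an test_h264.mp4
    ffmpeg -i source.mp4 -c:v snow -strict experimental -an test_snow.avi
    ffmpeg -i source.mp4 -c:v h261 -s 352x288 -an test_h261.avi   # h261 only accepts CIF/QCIF frame sizes
    ffmpeg -i source.mp4 -c:v libtheora -c:a libvorbis -f ogg test_theora.ogv
    ffmpeg -i source.mp4 -c:v libvpx -c:a libvorbis test_vp8.webm
    # command-line equivalent of a gtk-recordmydesktop capture on the recording machine
    recordmydesktop --on-the-fly-encoding -o capture.ogv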
So I was trying out an old version of Mandriva (2009), running through the files that I had on the external hard drive. There's the standard Gnome media player, but it doesn't come with all the codecs installed, and the distro repositories are long since abandoned (as Mandriva no longer exists as an entity, unlike Ubuntu or Debian, you can't alter the sources.list and point the package manager at older repositories for codecs). Then I thought of importing an ogv file into Kino; why, I don't know. Ogv is basically the most open-source file format you can have for video and will play on older computers with little or no problem, or should do. So I hit import, and this is what happened.
Importing ogv into Kino on Mandriva 2009
Not only does Kino corrupt the playback of the file when it imports it, it also saves the file as a dv file in whatever directory the source is in, so it corrupts and bakes: what you see is what you get. And unlike the previous faults I'd found in Legacy OS, this isn't dependent on the graphics card and OS interacting.
So then I was running through some of my old distros to see if I could replicate that, and sure enough I came across a thread on using older versions of Ubuntu. Ubuntu (along with Debian) keeps up repositories for older versions, and you can still access these repositories and build a working system just by altering /etc/apt/sources.list to point at the old repos. Find older versions of Ubuntu here: old versions of Ubuntu.
Adapted /etc/apt/sources.list Ubuntu 10.04
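The adapted entries are along these lines; a rough sketch only, but Ubuntu keeps its retired releases on old-releases.ubuntu.com, 10.04's codename is lucid, and the original entries should be commented out first:

    # /etc/apt/sources.list pointed at the archive for Ubuntu 10.04 (lucid)
    deb http://old-releases.ubuntu.com/ubuntu/ lucid main restricted universe multiverse
    deb http://old-releases.ubuntu.com/ubuntu/ lucid-updates main restricted universe multiverse
    deb http://old-releases.ubuntu.com/ubuntu/ lucid-security main restricted universe multiverse

Then a sudo apt-get update makes the old packages installable again through Synaptic or apt.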
So having worked through Ubuntu 9.10 I eventually settled on 10.04 (as gtk-recordmydesktop wasn't available, as far as I know, in earlier versions of Ubuntu), as I knew it quite well from using it in the past and it had a good variety of video players to experiment with. Searching through Synaptic I found most of them and installed them. Playing ogv in Gnome-mplayer gives the same results as importing into Kino (but without the saving to file: you'll need to use a screen-capture program or device to record this).
Ogv playback in Gnome-mplayer Ubuntu 10.04
Gtk-recordmydesktop captures in ogv format. Ogv, that is Ogg video / Theora, and webm for that matter, are based on On2's VP3 (there are similarities between hex-edited VP3 and webm; more on VP3 here: VP3 on Wikipedia). It isn't a very well organised format, so when you come to replay it, it stutters and is hard to seek in, often stalling, but it is backwards compatible, i.e. you can use it on older versions of Linux with some hope it will play.
Curiously, if you change the file extension from ogv to ogg, the file will play back ordinarily. The following video was screen-captured from Zoe Stawska's Outernet Explorer YouTube channel; find it and subscribe here: https://www.youtube.com/channel/UCweQmrxjQfxbdv2I0mrmjqA
Ogv to Ogg name change and playback in Ubuntu 10.04
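The workaround itself is nothing more than a rename, sketched here with a placeholder filename:

    # same bytes, different extension: on Ubuntu 10.04 this alone changes how the players treat the stream
    mv capture.ogv capture.ogg
    file capture.ogg   # the container and codec are unchanged, only the name differs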
This is one of the more interesting flaws I've found, specifically in Ubuntu 10.04. It's as simple as encoding a file into the CAVS codec (a Chinese audio/video codec designed to replace the patent-encumbered h264 and h265; more on CAVS here: libxavs on SourceForge) and replaying it in VLC or Gnome-mplayer. I screen-captured this again using gtk-recordmydesktop. So first, here is the file as it would be played back ordinarily in VLC on more modern Linux distros like Linux Mint 18.3 or Devuan (the source is a capture from TV of one of the more recent Transformers films).
Cavs encoded File playing as it should play back in newer Linux distros.
And this is the CAVS-encoded file as it plays back in VLC and Gnome-mplayer (this is without hex editing or any technique applied).
Cavs misread in ubuntu 10.04 Vlc and Gnome-mplayer
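For anyone wanting to reproduce this, here is roughly how such a file can be produced. It is only a sketch: it assumes an ffmpeg build compiled with --enable-libxavs, the filenames are placeholders, and the container may need adjusting for your build.

    # encode a clip to CAVS video with the libxavs encoder, then try it in the old distro's players
    ffmpeg -i tv_capture.mp4 -c:v libxavs -an cavs_test.avi
    vlc cavs_test.avi   # misreads on Ubuntu 10.04; plays normally on Mint 18.3 or Devuan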
Recent research into Ubuntu 6.06 on low-end machines
For the last few months I've been looking at Ubuntu 6.06. From previous versions of Ubuntu I've come to expect the ogv misread fault, but it presents itself in different ways depending on the speed of the computer and the graphics card used.
Ogv misread in Ubuntu 6.06: this is from a lower-specced computer (a P3 600MHz) with an ATI Xpert 98 AGP card with 8MB of RAM onboard, so the computer is struggling to actually read the file. I'm looking to exploit this further but haven't had time to research it fully (though I have managed to replicate this on a slightly more powerful Pentium D processor running an ATI 9600 AGP card).
AGP ATI Xpert 98 ogv misread
On a slightly higher-specced computer (a P3 1GHz) with onboard Intel graphics (i810) and a 3dfx Voodoo 2 (because this MSI 6178 ver 1.1 board doesn't have AGP) we get different textures and reads. Some files won't play at all because the processor can't handle them, especially if the file is too big. This is the original file, captured using gtk-recordmydesktop and saved as ogv.
Ogv File as captured on Linux mint 19.1 using gtk-recordmydesktop.
Below is the same file as played on Ubuntu 6.06, given the specs above, with onboard graphics (Intel i810) running through a 3dfx Voodoo 2 PCI with 8MB onboard RAM.
Ogv misread Ubuntu 6.06 on 3dfx voodoo 2 pci graphics card
This is how the file manager on the same computer as used above sees my sources folder on the USB hard drive where I store my test videos.
And this is how Linux Mint 19.1 sees exactly the same folder:
But the most surprising fault I've found is in the playback of h264-encoded files in VLC on Ubuntu 6.06, which gives an almost pixel-sorting effect; this gives the same result independent of video card or processor. First I'll play the original file:
Nosferatu original playback
Now if we play the same file in VLC on Ubuntu 6.06 we get this:
Nosferatu H264 playback error in Ubuntu 6.06
I have captured a complete version of this on YouTube, and if you are interested this is where to find it: https://www.youtube.com/watch?v=h5KL2-juNfQ&t=127s
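Before blaming the old player it is worth confirming on a modern machine that the file itself really is intact h264. A minimal check, assuming ffprobe (which ships with ffmpeg) is installed, with a placeholder filename:

    # confirm the stream is h264 and undamaged before testing it on Ubuntu 6.06
    ffprobe -hide_banner nosferatu_h264.mp4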
And this is about where I am now: focusing on Ubuntu 6.06 and various hardware configurations, investigating the ogv and h264 errors, which seem to differ significantly from the straightforward Gnome-mplayer playback and Kino import in Ubuntu 10.04 seen earlier, especially if I vary the graphics card used.
Throughout the last year, and in planning for this talk, I've worked my way through quite a few Linux distros. Some, like Legacy OS and its derivatives (of which there are many) and the various Ubuntus and their derivatives, have been quite fruitful; others, like Dragora 2.2 or Zenwalk and other Slackware derivatives, less so. There are many others yet to try.
This is a list of some, but not all, of those I have investigated (the useful ones are underlined). Recently I've been looking at pure:dyne and dyne:bolic especially, and these should be considered because, like Legacy OS 2017 and Legacy OS 4 Mini, they are complete in themselves and are usable even without package repositories or updates.
Why not virtualize?
One of the questions you might ask is 'why don't I virtualize this?': fire up VirtualBox and load up Legacy OS or Legacy Mini 4. Well, here's the thing, it might not work:
Some faults might be reproducible in VirtualBox, i.e. the ogv misread and the CAVS misinterpretation.
It might be possible to run Legacy OS 2 if you specify Xvesa as the graphics driver rather than Xorg, and achieve the same results.
But with the ogv misread you get slightly different results depending on graphics card and processor: could this be modelled in VirtualBox? (A rough command-line sketch of such a setup follows below.)
It seems easier to me to pull apart and rebuild a physical computer and run through different operating system and graphics card combinations once rebuilt. Sometimes you get different results on, say, a SiS chipset versus an Intel chipset: the more variables there are, the more possibility of error.
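For what it's worth, the VirtualBox side of such an experiment would look roughly like this from the command line. It is only a sketch: the VM name and ISO filename are placeholders, the small --vram value is my own guess at approximating an old card, and whether the faults reproduce at all is exactly the open question.

    # create a minimal VM with deliberately small video memory and boot the old distro's ISO
    VBoxManage createvm --name "LegacyOS2" --register
    VBoxManage modifyvm "LegacyOS2" --memory 256 --vram 9
    VBoxManage storagectl "LegacyOS2" --name "IDE" --add ide
    VBoxManage storageattach "LegacyOS2" --storagectl "IDE" --port 0 --device 0 --type dvddrive --medium legacyos2.iso
    VBoxManage startvm "LegacyOS2"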
Computational Archaeology: preserving machines and software.
A lot of what I discover and use in my work is through what I see as computational archaeology: that which is left behind due to perceived obsolescence may have some flaw or 'feature' that was not seen due to 'correct' use. Using older software on newer hardware, or older hardware with newer software, may also open up faults not seen at the time.
Which brings me back to the original question I started this talk with, 'what is the minimum spec computer you need to make glitch art?', and the newer question of GANs and computational cost.
GANs, exposure and computational cost
(With digressions into the politics of hardware and the entry costs of making glitch art; the basic costs of exposure, including discussion of more recent technologies such as neural nets; the high price of CUDA-capable video cards (is this the direction glitch art should be taking?); the need for higher and higher definition and quality video; Facebook playback issues; the time it takes to upload HD content on rural broadband versus access to fibre; computational expense versus free access.)
What I like most about glitch art is its wild-west feel. There is no right and wrong way of doing it, no formalised academic path which teaches the primacy of one set of aesthetics over another, because the question of scarcity value, which traditional art relies upon to make a profit and to approve what is and isn't seen, is irrelevant when we have access to the universal gallery of the screen and the internet as play space. We can create as much or as little exposure as we want without needing others to step in and offer us a venue or endorsement. The tools and knowledge to make it are universally free, and the hardware required (as I've shown in this demonstration) can be as cheap or as expensive as you want. I myself come from a deliberately lofi position, and I believe in sharing the work I make and the way that I make it, with the proviso of attribution (and that I state my original sources).
I live in rural Ireland where broadband is patchy and slow (to upload a high-definition file can take anything from an hour to a day depending on size), so there is always a time cost to what I do (especially as rural broadband can't cope with me uploading a large file and using the internet as well), but that is offset by being part of something bigger and more important than just me (it isn't offset by people offering free exposure, because exposure is never free).
Most of what I do is based on using old or recycled machines. Given the recent rise in the use of GANs I decided to finally source a second-hand i5; it didn't cost more than 50 euros, but it does run something basic like Liz Everetts' GAN tools.
Liz Everetts' Gantools repository
But if I want to train my own GAN I'll have to invest a lot more in my setup: a more expensive graphics card and higher electricity bills. If I use the online tools or pre-trained models, I am just working within somebody else's constraints; if I train my own, there is an attendant computational cost. Demands for work on the basis of exposure increase the demand on my resources and warp the direction my work might go in if I worked for exposure.
Cheap access to AI? Though eating is also helpful.
The more you pay the more you get? Yea, cos I can really afford this!
As work made using GANs becomes more ascendant, to play means to pay. Is this true to what glitch art intends to be? Is this the new cost of exposure? Given that glitch art thrives on novelty, that which is current becomes that which is sought after and paid for, in line with the pay-to-play model that I despise in the old art world, and of course before payment comes exposure.
But chasing exposure this way, if the world we work in thrives on novelty and a perceived value of the new, would seem to be pointless. Yes, I can run a GAN, but if it is because that is the latest trend or demand (as chasing exposure feeds the demands for certain types of work), then does that not betray what we do as glitch artists, as more complex computational demands narrow the options for making?
The AI art boom
It's worth hunting down this article because of its implications: https://www.barrons.com/articles/the-ai-art-boom
To quote from the article: 'There was otherwise little debate about the artistic merit of AI art at the summit, which attracted players from across the tech, art, and collecting worlds. The bigger questions instead focused on just how much this new form was poised to disrupt the industry.'
This is my personal view now, so take this as you will, but GANs can be seen as a way for the traditional art market, which has been struggling with how to stay relevant to and make money from our playground (see the awful sub-genre called post-internet art), to corner the market on one or two individuals and exclusive or tweaked algorithms, to define what is seen or defined as art and therefore exclude that which threatens their business model, i.e. us. In the same way that Napster almost destroyed the music industry before the music industry, via Apple, happened along with iTunes and Spotify, GAN art or AI-generated art can be seen as the art world's iTunes moment. Now, I'm not coming out against GANs or the work that people are putting into them, but consider this: the more a GAN can be trained, the more it can create something akin to art, and the more the traditional art world can reclaim its place as the centre of exclusivity, aesthetics and value. Is this what we want, to return ourselves to the world of the academic and the culturally sanctioned, where art is not produced and producible by all but once again consumed by the passive? What price is exposure if we hand our aesthetics to the 1%?
Let's think about this for a minute.