Through Other Eyes

3 responsive agent-based webcam imaging projects

by Joris Slob and Sonja van Kerkhoff, May 2007

Stills from Values of White. Left: His head had just moved. Right: About 20 seconds later.

Three experiments that reveal information in a webcam image which the human eye normally cannot see, through multi-agent perception: the image is the result of what many 'eyes' reveal.


The assignment was to use MaxMSP, but we wanted to work with open source software. After abandoning PureData, because we needed a lot of data (thousands of 'eyes') to create fluid images, we settled on Python. While Joris had some experience with Python and Sonja had none, it was the first time either of us had worked with imaging in this language.

Learning Python: a webcam still of Joris
under the influence of two Python
PIL ImageChops filters.

The above shows 25 seconds
of the agents' vision as
lighting increases
on the cloth.

This YouTube link
has a one-minute
(lower quality) video
of the above.

In the first work, The Values of White, agents (imagine these as being like one pixel ants) change colours in the webcam image pixel by pixel towards purer RGB colour. For example, a pure white image would gradually change towards a balance of reds, greens, and blues.
Their vision of colour is, in this sense, more perceivably ordered than human vision: they see the distinctions, whereas the human eye sees the combinations of the RGB values in a colour. This ordering reveals the primaries within all colour as a constantly changing live stream.

In The Edges of Change the whole image is blacked out, and you only see anything when agents open up, pixel by pixel, windows onto the image behind, and only at an edge of change. Here these 'agents' perceive change as either tonal difference or movement. A passing movement looks like an area of tiny dots which then dissolve if there is no further movement.

The Edges of Change. Left: The TV is on, which explains the splatter of dots.
Right: The same view just after a hand moved in front of the camera.

So for the viewer, movement is visualized momentarily,
just as is the tonal change which we normally see.
These agents don't distinguish between change in time or in space
whereas for the human eye there is a visual and semantic distinction
between movement and tonal change.

This YouTube link
shows 3 minutes, 35 seconds
of The Edges of Change,
with Sonja standing in front of the
mirror swinging a
poi (ball on a string).

In the final work, Shared Experiences, the agents are individualized by their experience when they first interact with the image, according to the pixel they randomly land on.
The agents then move; if they meet (are adjacent to) another agent and the two share a similar level and type of experience, a specific shape and colour appears at the location of the agent-to-agent interaction.
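The meeting rule above can be sketched as follows. This is our illustrative guess, not the original code: the class and function names, the exact similarity measure, and the use of the first pixel's dominant channel and average brightness as an agent's 'experience' are all assumptions.

```python
import random

SIMILARITY = 30  # max brightness difference still counted as a 'shared' experience


class ExperienceAgent:
    """An agent individualized by the pixel it randomly lands on first."""

    def __init__(self, frame):
        h, w = len(frame), len(frame[0])
        self.x, self.y = random.randrange(w), random.randrange(h)
        r, g, b = frame[self.y][self.x]
        # the agent's 'experience': dominant channel (type) and brightness (level)
        self.kind = max(range(3), key=lambda i: (r, g, b)[i])
        self.level = (r + g + b) // 3


def shared_loci(agents):
    """Return locations where adjacent agents share a similar
    type and level of experience."""
    loci = []
    for a in agents:
        for b in agents:
            if a is b:
                continue
            adjacent = abs(a.x - b.x) <= 1 and abs(a.y - b.y) <= 1
            similar = a.kind == b.kind and abs(a.level - b.level) <= SIMILARITY
            if adjacent and similar:
                loci.append((a.x, a.y))
    return loci
```

In the actual work a shape and colour would be drawn at each returned locus; here we only compute where those interactions occur.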

The image on the right is the visualisation of "shared experiences" of agents
of the same webcam image as on the left, after about 20 seconds of run-time.
The 'experiences' of colour still relate more or less to the webcam image.

The same webcam image
about 40 seconds later.

The tiny squares indicate new
"shared experience" loci,
meaning that there was
some change in
the webcam image.

Those 'cultural loci' grow in size for a given time or until newer 'cultural loci' are created. So here the processed image is detached from location and instead simulates the agents' collective memory.

Like a memory landscape, these 'loci' fade in time and intermingle with other 'loci' and can be overwhelmed by new input. But not always: like human collective memory, there is a random element.
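The grow-then-fade behaviour of a locus might be sketched like this; the growth time, the shrink probability, and the class shape are our illustrative guesses, chosen only to show the random element described above.

```python
import random


class Locus:
    """A 'cultural locus': grows for a while, then fades; a random
    element means some survive longer than others."""

    def __init__(self, x, y):
        self.x, self.y = x, y
        self.size = 1
        self.age = 0

    def tick(self, growth_time=20):
        """Advance one frame; return False once the locus has fully faded."""
        self.age += 1
        if self.age <= growth_time:
            self.size += 1            # grow for a given time
        elif random.random() > 0.1:   # then usually shrink...
            self.size -= 1            # ...but occasionally persist (random element)
        return self.size > 0
```

New input would be modelled by creating fresh loci while old ones tick away, so an active scene can overwhelm the fading memory landscape.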


The goal of these experiments is to manipulate a live webcam feed in intelligent ways to show something about the world that is beyond normal human vision.

In all our experiments artificial agents serve to 'order' the live feed while the world, through the webcam image, introduces 'dis-order.'

Human nature tends towards a sense of order and our experiments serve to show how 'other eyes' visualize order in different ways.

Human history is a collection of shared peaks of experience and our final work, Shared Experiences, creates a visual representation of these types of interaction between agents -- in a sense it is an ordering of the highlights (or dramas) of the collective.

1. The Values of White

Figure one: The normal webcam image
and then as the agents show it in The Values of White

In The Values of White each agent begins with a randomly assigned desire for either red, green or blue, at a random location. It then 'reads' the colour at that location. If the value of its primary colour there is intense enough, that primary colour is shown at that location. For example, if the agent has 'red' as its assigned colour and the red component of the location's colour is high enough, then red is shown more intensely. The colour we then see is a 'purified' extraction from the colour in the real world, which then fades (but is reinforced by successive webcam frames if the image does not change). The agent then moves to an adjacent location and the process repeats. So what we see is a continual struggle of the ant-like agents purifying a world that continually brings disorder, in the form of non-primary colours.
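One agent step can be sketched in the spirit of the original. The names, the random-walk details, and the nested-list data layout are our assumptions; the original code worked on a PIL image and a live VideoCapture frame rather than plain lists.

```python
import random

THRESHOLD = 100  # channel values at or below this are ignored


class PurifierAgent:
    """One pixel-sized 'ant' with a fixed preference for R, G, or B."""

    def __init__(self, width, height):
        self.x = random.randrange(width)
        self.y = random.randrange(height)
        self.channel = random.choice((0, 1, 2))  # 0=red, 1=green, 2=blue

    def step(self, frame, canvas):
        """Read the webcam pixel under the agent; if its preferred channel
        is intense enough, paint the pure primary on the output canvas,
        then wander to an adjacent pixel."""
        r, g, b = frame[self.y][self.x]
        value = (r, g, b)[self.channel]
        if value > THRESHOLD:
            pure = [0, 0, 0]
            pure[self.channel] = value
            canvas[self.y][self.x] = tuple(pure)
        # random walk to a neighbouring pixel, staying inside the frame
        self.x = min(max(self.x + random.choice((-1, 0, 1)), 0), len(frame[0]) - 1)
        self.y = min(max(self.y + random.choice((-1, 0, 1)), 0), len(frame) - 1)
```

On a pure white frame every painted pixel comes out as a full-intensity primary, which is exactly the effect described above: white decomposed into reds, greens and blues.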

Figure two.

The most successful result in terms of reaction to a changing image while retaining some recognition of objects, was with about 3000 agents landing randomly over the image and where the threshold for outputting colour was fairly high (see Figure 2).

The human eye is more sensitive to tonal change than to colour change and in fact we tend to see 'white' or 'black' as an absence of colour. White light is full of colour and the ants show this as the purest (most intense) colour, as in Figure 1.

1.1. Technical Description

Figure three.

Figure four.

Responsive agent-based webcam imaging

For our assignment, we used Python to create different experiments in agent-based vision.

The source code is freely available to tinker with. If you base new work on it, we would appreciate a mention of our names somewhere.

To make it work you will need to have Python installed on your computer. You can get Python from here. The programs were tested with Python 2.5, but might work with other versions as well.

The next step is getting the Python Imaging Library (PIL), available here. These programs were tested with Python Imaging Library version 1.1.6. Make sure you have the package that corresponds with your Python version.

Lastly, the nasty part: we're afraid we have used a Windows-specific module, because we weren't able to find a cross-platform webcam module. The Windows-specific module can be found here. It is called VideoCapture and was simple to use. Hopefully people on other platforms can find equivalent Python modules.

If you use a webcam library other than the one mentioned above, you shouldn't need to change more than about 10 lines (those relating to the webcam) in any of our files.

After these steps, you are ready to go. The files should run with Python. In Windows you can just double-click the source files and by default they will start in Python; other operating systems usually require that you run the program from a console with the python command.

The source files will be added here in 2016.

The agents have a predetermined colour preference which is either pure red, green or blue.
The threshold value determines whether pixels are purified or not. The threshold is a value between 0 and 255 (where 255 results in the lowest possible output). In our experiments with these agents on our system, we found that 3000 agents moving through a 320 x 240 pixel webcam stream, with a threshold of 100, gave the best results (see Figure three).

When the threshold is less selective, the output is pretty abstract or a random colour mess, depending on your vision (see Figure four). What this implies is that the degree of selectivity of the threshold directly correlates with our experience of order in the output.

1.2. Discussion

The resulting images look similar to Pointillist paintings, in which the Neo-Impressionists applied tiny dots of pure colour to the canvas. Not only do the changing images look similar, the process is conceptually similar.
Agents, like the painters, order colour to obtain greater purity.
The resulting space seems flattened and mutable.
And the main visual effect works much like a filter, but a filter created by numerous randomly appearing agents.

Here the threshold is low, so you see colours appear
with an even randomness and the visibility of the
path each agent takes (we called this the
scent loss variable) is set so the tails or
trails show the movement of the ant-like agents.
The image that will soon fade completely is the
first image captured by the camera.

The 3 images below show the system after 30 seconds.
All these examples here have 5000 agents
and the image is more or less static.

Threshold = 100. Scent loss = 255.

Threshold = 200. Scent loss = 200.

Threshold = 100. Scent loss = 100.
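The scent loss settings above suggest a per-frame fade something like the following sketch. We are guessing at the exact semantics: a scent loss of 255 would erase unrepainted trails almost at once, while lower values leave the longer-lasting tails visible in the images.

```python
def fade(canvas, scent_loss):
    """Dim every painted pixel by scent_loss each frame; whatever
    the agents do not repaint gradually sinks back to black."""
    return [[(max(r - scent_loss, 0), max(g - scent_loss, 0), max(b - scent_loss, 0))
             for (r, g, b) in row]
            for row in canvas]
```

Each frame the canvas would be faded first and then repainted by the agents, so static parts of the scene persist while trails dissolve.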

This 1 minute YouTube link shows the speed of the changes.

These two images have the same settings. The differences are due to objects being moved around. We 'painted' compositions by moving objects in front of the camera. The diagonal in the image on the right was made by moving an arm across.

10000 agents on April 18th 2007.

128 agents, below: detail

2. Edges of Change


Figure 5. The original image
and a few seconds later.

Next we aimed to represent change as an instantaneous image.

We did this by blacking out the image, and then allowing agents to open windows where they are stimulated by contrasting colour values,
which may be due to either change or contrast in the image.
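One way to sketch that stimulation test is shown below. The contrast threshold, the brightness measure, and the choice of a single right-hand neighbour for spatial contrast are our assumptions; the original may have compared more neighbours.

```python
CONTRAST = 40  # minimum difference that counts as an 'edge of change'


def brightness(p):
    r, g, b = p
    return (r + g + b) // 3


def stimulated(prev_frame, frame, x, y):
    """An agent at (x, y) opens a window if the pixel differs enough
    from the previous frame (change in time) or from its right-hand
    neighbour (contrast in space)."""
    now = brightness(frame[y][x])
    temporal = abs(now - brightness(prev_frame[y][x])) > CONTRAST
    spatial = (x + 1 < len(frame[0]) and
               abs(now - brightness(frame[y][x + 1])) > CONTRAST)
    return temporal or spatial


def render(prev_frame, frame):
    """Black everything out except pixels where agents are stimulated."""
    return [[frame[y][x] if stimulated(prev_frame, frame, x, y) else (0, 0, 0)
             for x in range(len(frame[0]))]
            for y in range(len(frame))]
```

Because the test fires on either kind of difference, a moving hand and a static high-contrast edge open windows in exactly the same way, which is the point of the piece.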


Figure 6. A hand moves over the top left
and the same image a few seconds later.

So in the last image of Figure 6, the area of blue windows indicates that there has been recent movement in the top left, while the red-white windows show areas of high contrast in a static part of the image.
Normal vision sees solid objects in different locations from which we deduce movement, but this system aims to make the movement itself (and not the object) visible.

So here each agent's 'eye' shows where change has occurred or where there is strong contrast in the image. If the image is static you end up seeing lines such as in Figure 7 below.

Read about the rest of this project in our paper in this PDF (314 kb)
