text commissioned by and published in S.M.A.C #2, the new "zine" of the SFMOMA Media Arts Council
                                               
A couple of weeks ago I had to dog-sit Miltos' newborn SONY AIBO. A traumatized baby, it was born on September 10th, 2001, and it had a very hard time learning its name. We kept shouting "NAME REGISTRATION" over and over, but it kept giving us the "I don't understand" signal. Finally we had to resort to the web and download a new life for the baby. Once the new life was installed, AIBO never learned its new name (stupidity?) but suddenly knew how to read our emails, walk around, get tangled up in cables (hungry?) and make lots of noise.
Now it's old, lying unplugged in its clear plastic container.

What is interesting about robots is the way they perceive the physical reality around them. In the same way that a computer does not understand whether a .jpg file depicts a pizza or a building, but understands the pixels that make up the pizza/building, the robot does not understand whether it is stumbling on an Ethernet cable or a potato, but it understands the physical coordinates of the cable/potato. Perhaps it even remembers that in that spot in the room there is a cable/potato to avoid. The cable does not matter to the robot any more than it should. It just gets translated to a set of spatial coordinates, perhaps color, weight and flexibility. This is the software: data and reaction, a pre-programmed understanding of reality, armed with the responses to get out of trouble.
Of course the reality it contains is the one that the designer thought of programming. If the programmer did not think of messy cables on the floor, then your robot is in fact in trouble.
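
A minimal sketch of this "data and reaction" idea in Python (every name and number here is invented for illustration): the cable or potato exists for the robot only as coordinates and a few measured properties, paired with a canned response.

    from dataclasses import dataclass

    @dataclass
    class Obstacle:
        x: float            # position in the room, in metres
        y: float
        color: str          # whatever the camera reports
        flexibility: float  # 0.0 rigid .. 1.0 floppy

    def react(obstacle: Obstacle) -> str:
        # The robot never learns whether it met a cable or a potato; it only
        # knows that something occupies (x, y) and has a scripted way around it.
        return f"back up, steer around ({obstacle.x}, {obstacle.y})"

    cable_or_potato = Obstacle(x=2.1, y=0.4, color="grey", flexibility=0.8)
    print(react(cable_or_potato))  # back up, steer around (2.1, 0.4)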


In the same way, a building does not understand whether it is carrying your weight or a pile of bricks. It just translates your weight into physics, and there is no emotional involvement.

But does a building understand where it is located? Does a building understand whether it sits on a street lined with trees or with garbage trucks?
Some buildings understand when it's nighttime and turn their own lights on, or they understand when it's cold or hot. Most of them understand which floor to take you to, and whether that floor is on fire.

Some buildings look like robots. The Japanese architect Shin Takamatsu comes to mind, with the "Syntax" building amongst others. These buildings look like machines, even robots, but the similarity is purely visual. A building can look like a stucco mini mall, and it will still understand that you want to park your car, go shopping, stay out of the rain and leave.

And almost every building contains a lot of robots: the light switch, the elevator, even a regular piece of hardware could be described as a robot. Of course a lot of buildings contain real robots, like the window that understands when to be opaque and when to be transparent. But could you assemble a building just with robots? A building that is all sensors, interactive spaces, or interactive staircases?


Buildings and robots can exist without interaction. A robot can be preprogrammed to turn on at 7:00 AM and switch off at 11:00 PM just to make you believe it needs an 8-hour sleep. Even if there is nobody watching, it can still pretend. And buildings don't have to sleep, don't need robots, not even electricity.
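
The pretend-sleep could be nothing more than a clock comparison; a minimal sketch, with invented times and names:

    from datetime import time

    WAKE = time(7, 0)    # switch on at 7:00 AM
    SLEEP = time(23, 0)  # switch off at 11:00 PM

    def should_be_awake(now: time) -> bool:
        # Awake for 16 hours, "asleep" for 8, watched or not.
        return WAKE <= now < SLEEP

    print(should_be_awake(time(12, 30)))  # True: performing wakefulness
    print(should_be_awake(time(3, 0)))    # False: performing sleep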

Buildings and robots are scripts.

A simple script that makes a robot move towards a light source could look like this: move-forward, move-to-light, and move-from-bump. Move-forward is always active. Move-to-light overrides the first behavior and changes the notion of "forward": forward is wherever the light is. Move-from-bump overrides both behaviors. It detects whatever is in the way and backs the robot up so it can continue.
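
A minimal sketch of this layering in Python (all names are invented; the priority scheme is only in the spirit of behavior-based robot scripts):

    def move_forward():
        return "forward"

    def move_to_light(light_direction):
        # Overrides move_forward: "forward" becomes wherever the light is.
        if light_direction is not None:
            return f"turn toward {light_direction}, then forward"
        return None

    def move_from_bump(bumped):
        # Overrides both: back up so the robot can continue.
        if bumped:
            return "back up, turn away"
        return None

    def step(light_direction=None, bumped=False):
        # The highest-priority behavior with something to say wins.
        return (move_from_bump(bumped)
                or move_to_light(light_direction)
                or move_forward())

    print(step())                                     # forward
    print(step(light_direction="left"))               # turn toward left, then forward
    print(step(light_direction="left", bumped=True))  # back up, turn away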

A simple script that makes a building protect a car would be a bit more complicated, since the building usually has to adapt itself to its environment even before it exists: make the car inaccessible (provide walls, provide opening). For the car to get there, the building has to be connected to a driveway, so "provide connection to driveway," and so on.
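
As a sketch (an invented list of constraints, not working design software), the building's script would accumulate requirements before anything is built:

    def protect_car(site: dict) -> list:
        plan = ["provide walls",    # make the car inaccessible
                "provide opening"]  # ...except through one opening
        if not site.get("connected_to_driveway"):
            # the car has to get there before it can be protected
            plan.append("provide connection to driveway")
        return plan

    for command in protect_car({"connected_to_driveway": False}):
        print(command)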

Imagine an interface where the physical and visual natures of a robot and a building are completely substituted with their scripts. The robot would look like a cluster of commands, text floating in space, even moving. As this flock of text moves through space, parts of it are highlighted, maybe the "move-forward" string. As it stumbles onto a cable on the floor, the coordinates of the cable would appear, and they would trigger strings of commands (avoid bump) that until then were inactive.

The building would also be a cluster of floating text commands, but it doesn't usually move. When you enter this cloud of text and follow an entrance command, an infinite number of substrings or paths that were waiting for you to perceive them get activated. Sometimes even before you enter, complex commands get briefly activated as you look into sets of behaviors that represent glimpses of space. Even if it is only the sun looking at this cloud of commands, it is active, its volumes disappearing into shadows or getting activated by light.

The robot command-cluster enters the building command-cluster. The robot follows the path that the architect has designed, bumping and activating along the way. In return the building gets activated, its spaces appearing and disappearing, giving reality a scale and installing emotion into movement.

Both of these clusters of commands are in fact pieces of hardware loaded with a set of scripts that comprise an artificial intelligence, a designed reality. The process of accumulating these sets of commands, and prefiguring reaction to circumstance is called architecture.


Andreas Angelidakis, 2001