Thursday, June 26, 2008

This is your brain. This is your brain in a vat. This is your brain in a vat on the EvoGrid.

You're undoubtedly familiar with some form of the brain-in-a-vat argument, which suggests that your brain might not actually be connected to a nervous system, muscles, and sense organs, but instead to a computer that simply receives your brain's outputs, runs them through a giant virtual-reality simulation, and feeds back simulated inputs. Yeah, Descartes first considered something like it, and then there was that little film you might have seen called The Matrix. Sigh.

As tired as this concept is, I think it could be put to great use in a generalized artificial-life platform, one that could play really well on the EvoGrid.

What the model gives us is a layer of abstraction. Let's say our ALifes are simply brains in vats, hooked up to a computer simulation that processes their inputs and outputs. This model lets us separate the evolution of the control code from the simulation, so that the control code could theoretically be connected to any simulation. Perhaps the control code is operating a simulated robot, or perhaps it's responding to musical notes. Maybe it's in a 2D environment, or maybe a 3D one. The point is that the "simulation" is a black box from the point of view of the ALife.

Perhaps this is sounding more like AI than ALife, to which I plead somewhat guilty… although frankly I'm more interested in evolution as a creative force than in these distinctions. But what about the morphology of these ALifes? In our case, we're born with more or less the same kinds of bodies, with the same kinds of inputs and outputs. But anything is possible in simulations, so I propose that some simulations might choose to interpret "outputs" as instructions to change the morphology. Perhaps in one simulation, writing "3" to pin #2 would cause a new limb to be generated, along with new inputs and outputs. In another simulation, every ALife might control the same make of robot.

Here’s my proposal for a new generalized ALife platform, which I’ll call VatLife. The intent is to make the environment as much of a black box as possible, separating evolutionary computation from the simulations themselves.

The main classes/components are:

- VatLifeEnvironment
- VatLife
- VatLifePhenotype
- VatGenome
- VatMind
- VatLifeController

Here's the basic design. A VatLifeEnvironment may contain a number of objects, which can include VatLife artificial life forms. It maintains information about those objects (the VatLifePhenotype) and handles all of their interactions, as well as any physics (if applicable). This is meant to be a “base class”, so there may be many different types of VatLifeEnvironment simulations with different properties. Each VatLifeEnvironment should essentially determine the fitness function of the objects within it. This could be as general as determining that one VatLife has eaten another, or as specific as rewarding individual VatLife objects based on their ability to replay a series of inputs.
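
To make the base-class idea a bit more concrete, here's a minimal Java sketch. Only the name VatLifeEnvironment comes from the design above; the Inhabitant stub, the method names (add, fitness, step), and the ForagingEnvironment example are placeholders of my own, not a settled API.

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-in for a VatLife inhabitant (the real class is sketched later).
class Inhabitant {
    double energy = 1.0;
}

// Base class for simulations. Concrete subclasses supply the physics,
// the interactions, and (crucially) the fitness function.
abstract class VatLifeEnvironment {
    protected final List<Inhabitant> inhabitants = new ArrayList<>();

    void add(Inhabitant life) { inhabitants.add(life); }

    // Each environment decides what "fit" means for the objects inside it.
    abstract double fitness(Inhabitant life);

    // Advance the world one tick: update phenotypes, resolve interactions.
    abstract void step();
}

// Example subclass: rewards inhabitants for the energy they've accumulated.
class ForagingEnvironment extends VatLifeEnvironment {
    @Override double fitness(Inhabitant life) { return life.energy; }
    @Override void step() { /* physics and interactions would go here */ }
}
```

The point of pushing fitness into the environment is that the same ALife could be scored completely differently on different "islands".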

Each VatLife would contain a VatLifePhenotype, a VatGenome, a VatMind, and a reference to a VatLifeController. The VatLifePhenotype is used by the enclosing VatLifeEnvironment, and will be treated (except perhaps for simple initialization purposes) as a black box by the VatLife.

The VatLifeController would be a program that has two main tasks. The first is to take the VatGenome and produce a VatMind, as well as an initial VatLifePhenotype. The second is to respond to inputs and produce outputs by interpreting the VatMind – but not by referencing the VatLifePhenotype directly. Instead, it might read input pin #2 from the VatLifeEnvironment it’s in, which could get data based on the phenotype. For example, this could return the relative brightness that the left “eye” is receiving, the angle of which is controlled by another pin.
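
The two controller tasks might look like this in Java. The class names (VatGenome, VatMind, VatLifePhenotype, VatLifeController) are from the design above, but every method signature here, and the ToyController example, is an assumption for illustration.

```java
// Hypothetical signatures for the genome -> mind -> behavior pipeline.
class VatGenome { byte[] code; VatGenome(byte[] code) { this.code = code; } }
class VatMind   { byte[] program; VatMind(byte[] p) { program = p; } }
class VatLifePhenotype { int limbs; VatLifePhenotype(int limbs) { this.limbs = limbs; } }

interface VatLifeController {
    // Task 1: development -- build a mind and an initial phenotype from the genome.
    VatMind develop(VatGenome genome);
    VatLifePhenotype initialPhenotype(VatGenome genome);

    // Task 2: behavior -- map current input-pin values to output-pin values by
    // interpreting the mind. Note the phenotype never appears here: the
    // environment has already translated it into these pin readings.
    double[] tick(VatMind mind, double[] inputPins);
}

// Trivial controller: copies the genome into the mind and negates each input.
class ToyController implements VatLifeController {
    public VatMind develop(VatGenome g) { return new VatMind(g.code); }
    public VatLifePhenotype initialPhenotype(VatGenome g) { return new VatLifePhenotype(4); }
    public double[] tick(VatMind m, double[] in) {
        double[] out = new double[in.length];
        for (int i = 0; i < in.length; i++) out[i] = -in[i];
        return out;
    }
}
```

Keeping the phenotype out of tick() is exactly what makes the controller portable across environments.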

To make this work across different types of VatLifeEnvironment simulations, the VatLifeController might be coded in something like Java bytecode. This way, when one VatLife enters a simulation from another, the new VatLifeEnvironment might note that the new VatLife uses a different controller and automatically download it.

A word on input and output pins: the basic idea here is very simple. A VatLife object might have a set of hierarchical pins, each of which can be read from and written to. To use a grossly simplified model of the human body, you could say that we have four top-level pins (our two arms and two legs), with our two arms each containing five child pins controlling our fingers. Writing to a pin activates a muscle; reading from it returns the level of strain.
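
Here's one way the hierarchical pin tree could be sketched. The Pin class and its child/write/read methods are hypothetical, and "strain" is just the last value written, a placeholder for real simulation state.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// A pin tree along the lines of the arms/legs/fingers example.
// Writing activates a "muscle"; reading returns the strain, which here
// is simply the last value written (a stand-in for real physics).
class Pin {
    private double strain;
    private final Map<String, Pin> children = new LinkedHashMap<>();

    // Look up a child pin by name, creating it on first access.
    Pin child(String name) { return children.computeIfAbsent(name, n -> new Pin()); }

    void write(double activation) { strain = activation; }
    double read() { return strain; }
}
```

Usage would mirror the body example: `body.child("leftArm").child("finger0").write(0.8)`.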

Each VatLifeController should be given a certain amount of processing time, and can also be called when input pin values change, executing “attached code” (essentially interrupt processing).
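
A tiny sketch of the "attached code" idea, using a listener as a stand-in for interrupt handling. The InputPin class and its attach method are invented for illustration; in the real design the handler would be controller bytecode, not a Java lambda.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.DoubleConsumer;

// An input pin that fires its "attached code" whenever its value changes --
// essentially interrupt processing, as described above.
class InputPin {
    private double value;
    private final List<DoubleConsumer> attached = new ArrayList<>();

    void attach(DoubleConsumer handler) { attached.add(handler); }

    void set(double newValue) {
        if (newValue != value) {          // only fire on an actual change
            value = newValue;
            for (DoubleConsumer h : attached) h.accept(newValue);
        }
    }

    double get() { return value; }
}
```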

While I’m not claiming this should be the design for the EvoGrid, I think it might be a very interesting design for an “island” within it.

One open question I have is how ALifes will breed. Should there be a standard way of breeding different VatGenome objects (and is there a generic representation?), leaving the interpretation open to the VatLifeControllers, or should the VatLifeControllers themselves be responsible for breeding? I’m inclined to go the first route, but perhaps the more flexible solution is to give the VatLifeController a crack at it first and, if it doesn’t handle it, fall back to a generic handler.
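
Here's how the "controller first, generic fallback" option might be wired up. Everything in this sketch (the Breeder interface, the Nursery class, the one-point crossover) is hypothetical, not a settled design.

```java
import java.util.Random;

// Genomes as raw byte strings, as a deliberately generic representation.
class VatGenome {
    final byte[] code;
    VatGenome(byte[] code) { this.code = code; }
}

// A controller that understands its genome layout can return a child;
// returning null means "not handled".
interface Breeder {
    VatGenome breed(VatGenome a, VatGenome b);
}

class Nursery {
    private final Random rng = new Random(42);

    VatGenome breed(Breeder controller, VatGenome a, VatGenome b) {
        VatGenome child = (controller == null) ? null : controller.breed(a, b);
        if (child != null) return child;   // the controller handled it
        return genericCrossover(a, b);     // otherwise, fall back
    }

    // Generic fallback: one-point crossover on the raw byte strings.
    private VatGenome genericCrossover(VatGenome a, VatGenome b) {
        int n = Math.min(a.code.length, b.code.length);
        int cut = (n == 0) ? 0 : rng.nextInt(n);
        byte[] child = new byte[n];
        System.arraycopy(a.code, 0, child, 0, cut);
        System.arraycopy(b.code, cut, child, cut, n - cut);
        return new VatGenome(child);
    }
}
```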

Any feedback would be much appreciated!