In a small, seminal book titled Vehicles: Experiments in Synthetic Psychology, Valentino Braitenberg (1926–2011), an admired neuroanatomist, describes a set of thought experiments in which agents of simple structure behave in human-like ways. Braitenberg boldly put forward the hypothesis that the primitives for realizing such machines are cellular and synaptic processes that are amenable to physiological characterization. The reasoning and results presented in Shahaf et al (2008) make the realization of a Braitenberg vehicle that classifies objects in its visual field, using a large-scale network of biological neurons, a trivial matter. Such a vehicle is demonstrated in the video clip above, which was prepared in Marom’s group by Danny Eytan, David Ben Shimol and Lior Lev-Tov. A low-resolution movie was published in the ‘supporting information’ section of the 2008 paper.
The main text and data of Shahaf et al (2008) show that the physical loci from which stimuli are delivered to a recurrent, large-scale random network of cortical neurons may be fully classified, despite the “noisy” neuronal responses the stimuli evoke, using the temporal order in which neurons are recruited by the different stimuli. Here, an application of this idea, in the form of a Braitenberg vehicle, is demonstrated: inputs from the two (Right and Left) ultrasonic “eyes” of a Lego Mindstorms vehicle are sampled at 0.2 Hz and translated into stimulation of a large random network of cortical neurons at two different sites. The side corresponding to the nearest visual object (relative to the vehicle’s longitudinal axis) is classified using an edit-distance (Levenshtein) metric based on the recruitment order of 8 neurons; these 8 neurons respond equally to stimuli from each of the two eyes, but their recruitment order is unique to the stimulus side. Based on the classified activity, a command is sent to the appropriate motor, each motor being attached to one of the wheels. The red trace on the left side of the movie frame represents total network activity (points depict evoked activity); the blue numbers in front of the vehicle’s “eyes” show distances (in cm) to the objects sensed on the right and left; the edit distances of the evoked recruitment orders from predefined internal representations of the Right and Left objects are shown in red numbers. Top left: time in seconds.
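The classification step described above can be sketched in a few lines of code: the observed recruitment order of the 8 neurons is compared, via Levenshtein (edit) distance, against two predefined template orders, one per “eye”, and the nearer template determines the motor command. This is a minimal illustrative sketch, not the group’s actual code; the neuron labels and template orders below are hypothetical stand-ins.

```python
# Sketch of edit-distance classification of neuron recruitment order.
# Templates and neuron labels are hypothetical, for illustration only.

def levenshtein(a, b):
    """Classic dynamic-programming edit distance between two sequences."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # cost of deleting i items
    for j in range(n + 1):
        d[0][j] = j  # cost of inserting j items
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n]

# Hypothetical internal representations: the order in which the same
# 8 neurons are recruited by a Right-side and a Left-side stimulus.
TEMPLATE_RIGHT = [3, 1, 7, 2, 8, 5, 4, 6]
TEMPLATE_LEFT = [6, 4, 5, 8, 2, 7, 1, 3]

def classify_side(observed_order):
    """Return 'Right' or 'Left' according to the nearer template."""
    d_right = levenshtein(observed_order, TEMPLATE_RIGHT)
    d_left = levenshtein(observed_order, TEMPLATE_LEFT)
    return "Right" if d_right <= d_left else "Left"

# A noisy evoked response (two neurons swapped) is still classified
# correctly, because it remains far closer to the Right template.
print(classify_side([3, 1, 7, 8, 2, 5, 4, 6]))  # prints "Right"
```

The point of the edit-distance choice is robustness: even when individual responses are noisy (neurons dropped, swapped, or recruited slightly out of order), the observed sequence remains closer to the correct side’s template than to the opposite one.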
In Marom et al (2009), we used the vehicle described above to provide a sobering example of the limits of reverse engineering in neuroscience. We demonstrate that applying reverse engineering to the study of the design principles of this functional neuro-system may result in a perfectly valid but wrong induction of the system’s design principle. If, in the very simple setup presented here (a static environment, a primitive task and practically unlimited access to every piece of relevant information), it is difficult to induce a design principle, what are our chances of exposing biological design principles under more realistic conditions?