Abstract
If microperformativity is seen to spatially and temporally destabilize human scales as the dominant plane of reference, as well as to ‘emphasize biological and technological micro-agencies’ that ‘relate the invisibility of the microscopic to the incomprehensibility of the macroscopic’, there may be no better performative example than the neuron. Neurons seem to have become our latest cultural-technical fetish objects that easily jump scale, context and category, from live artistic performances like CellF or Silent Barrage that utilize the electrical spikes of cultured neurons to control the action of robots or sound synthesis to the mathematical behaviour of artificial neural networks.
But strangely enough there is another kind of performativity that the neuron implies – that of economic markets. In 1952, the Austrian economist Friedrich Hayek published The Sensory Order, which put forward a neuronal theory of the mind, arguing that all sense-making, learning and memory is a product of the dynamic connections that form between neurons over time. According to Hayek, the sensory order is constructed from the neuronal connections classifying information – objects external to the mind have no intrinsic properties apart from how the nervous system classifies them. In other words, ‘we live in a sensory order that is created by the central nervous system’.
What then does the neuron have to do with markets? The concept of the ‘performativity’ of markets implies that through its mathematical models, economics shapes social reality rather than objectively describing it. This article thus tries to understand economic performativity through a detour – asking what work the concept of performance can do in attempting to describe ‘living technologies’ (Takashi Ikegami) or lively systems across different scales (brains, machines, economies), whose ontological and operational order challenges a basic epistemological assumption: that we can actually know the world.
Notes
1 Ironically, Google’s Magenta team was developing a similar system – minus the biological neurons – called NSynth, or Neural Synthesizer. As Google claims, ‘unlike a traditional synthesizer which generates audio from hand-designed components like oscillators and wavetables, NSynth uses deep neural networks to generate sounds at the level of individual samples’ (Google 2017).
2 The deathblow to the perceptron and, hence, to early connectionist research was Marvin Minsky and Seymour Papert’s 1969 book Perceptrons, in which they attempted to demonstrate that the perceptron’s claims were exaggerated and that symbolic modelling of AI would be the main path to designing an intelligent machine.