notes from Machine Dreams

Economics as the search for a meaningful definition of “rationality” — the impact of computer science on economics has been to dissolve the illusion of the individual as the “rational subject” or agent, and to instead regard individuals as cellular automata of a larger “rational subject” that subsumes them. Rationality in the economic sense — the sense of the most efficient distribution of resources, etc. — is an aggregate phenomenon that exceeds the individual’s grasp.

The market is then viewed as the “rational agent,” as a cyborg entity that computes and wills outcomes and so on. Individual humans, with their limited and irrational self-directed goals, are subroutines to the market’s higher functioning and purpose.

Mirowski cites a 1993 paper by Gode and Sunder that pits autonomous automata against one another in a double auction, revealing that this framework “had managed to induce ‘aggregate rationality not only from individual rationality but also individual irrationality.’ … aggregate rationality had no relationship to anything the neoclassicals had been trumpeting as economic rationality for all these years.” (554) The most idealized neoclassical market model “produces its hallowed results in experimental settings with severely impaired robots.”
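The Gode and Sunder result can be sketched in a few lines of code. What follows is a minimal Python sketch, not a reproduction of their experiment: the random pairwise matching, the price ceiling, and the particular values and costs are my assumptions; only the "zero-intelligence constrained" rule (buyers never bid above their private value, sellers never ask below their private cost) follows the paper. Even these "severely impaired robots" extract most of the available surplus.

```python
import random

def zi_double_auction(buyer_values, seller_costs, rounds=5000, seed=0, ceiling=200):
    """Double auction with 'zero-intelligence constrained' traders, loosely
    after Gode & Sunder (1993): each trader quotes a uniformly random price,
    constrained only so buyers never bid above their private value and
    sellers never ask below their private cost. Returns realized surplus.
    (Random pairwise matching and the price ceiling are simplifying assumptions.)"""
    rng = random.Random(seed)
    values, costs = list(buyer_values), list(seller_costs)
    surplus = 0.0
    for _ in range(rounds):
        if not values or not costs:
            break
        b = rng.randrange(len(values))
        s = rng.randrange(len(costs))
        bid = rng.uniform(0, values[b])       # budget constraint: bid <= value
        ask = rng.uniform(costs[s], ceiling)  # no-loss constraint: ask >= cost
        if bid >= ask:                        # quotes cross: a trade occurs
            surplus += values[b] - costs[s]   # total gains from this trade
            values.pop(b)                     # both parties exit the market
            costs.pop(s)
    return surplus

def max_surplus(buyer_values, seller_costs):
    """Competitive-equilibrium benchmark: match highest values to lowest costs."""
    total = 0.0
    for v, c in zip(sorted(buyer_values, reverse=True), sorted(seller_costs)):
        if v <= c:
            break
        total += v - c
    return total

# Allocative efficiency: realized surplus as a fraction of the theoretical maximum.
buyers, sellers = [100, 90, 80, 70, 60], [50, 60, 70, 80, 90]
efficiency = zi_double_auction(buyers, sellers) / max_surplus(buyers, sellers)
```

No trader here optimizes anything; each just throws out random prices within a budget constraint. The "rationality" lives entirely in the market institution, not in the agents.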

The individual’s calculations, such as they are, need not be “rational” to yield rational macro outcomes. Motives at the individual level are ultimately inscrutable; their logic cannot be inferred from outside analysis of achieved outcomes. Their unique rational choices are not necessary to the larger outcome, which can be produced by AI agents operating on simple automatic imperatives. People have reasons for what they do, but those reasons can’t be connected with economically rational outcomes.

(Mirowski argues — I think — for regarding multiple market forms themselves as automata in an evolutionary competitive process seeking an emergent “allocative efficiency.” The scary thing is that we are enmeshed in the process, though the “efficiency” it discovers may have nothing to do with our limited human notions of individual thriving or social justice, etc. We may be agents serving the flourishing and reproduction of markets for their own sake and their incomprehensible ends.)

I think there is an analogy to spy fiction and the chaotic behavior of individual spies caught in the infinite regress of double-triple-quadruple agents, simulated opponents, disinformation, and the rest. Personal agency is meaningless in this context; the game is on a whole other level, so to speak. Individual spies may have all sorts of complex reasoning to defend their acts, but it is all local rationalization, irrelevant to the broader outcome or bigger logic. It is just individualist ideology that demands the assumption that their choices are constitutive of outcomes; really, the logic of the rational outcome emerges only when their choices are merged with the reactions and choices of a host of other agents whose moves can’t be anticipated or incorporated into the individual’s thinking process. Spies (like individuals in markets) don’t know how their acts shape the rules of the game they are playing; they think the rules are perhaps already fixed (their limited individual scope: mistaking the way they are programmed in their subroutine for the entirety of the software) when the whole system is calculating something they don’t understand or even know of. Like the humans on Earth in Douglas Adams’s books, they are part of an organic computer program determining the question of meaning.

The rationality of the espionage system perhaps exists at the level of national goals; or perhaps nations are themselves players, automata in a larger game/market of war that has its own agenda, its own equilibrium that has nothing to do with human thriving or human goals.