From Tiziana Terranova, “Red stack attack! Algorithms, capital and the automation of the common”
What interested Marx (and what makes his work still relevant to those who strive for a post-capitalist mode of existence) is the way in which he claims that the tendency of capital to invest in technology to automate and hence reduce its labor costs to a minimum potentially frees up a ‘surplus’ of time and energy (labor) or an exceeding of the capacity to produce with relation to the basic, important and necessary labor of reproduction (a global economy for example should first of all produce enough wealth for all members of a planetary population to be adequately fed, clothed, cured and sheltered). However, what characterizes a capitalist economy is that this surplus of time and energy is not simply released, but must be constantly reabsorbed in the cycle of production of exchange value leading to increasing accumulation of wealth by the few (the collective capitalist) at the expense of the many (the multitudes).
Automation, then, when seen from the point of view of capital, must always be balanced with new ways to control, that is absorb and exhaust, the time and energy thus released. It must produce poverty and stress when there should be wealth and leisure. It must make direct labour the measure of value even when it is apparent that science, technology and social cooperation constitute the source of the wealth produced. It thus inevitably leads to periodic and widespread destruction of the wealth accumulated in the form of psychic burnout, physical destruction of the wealth created or environmental catastrophe. It creates hunger where there should be satiety, it puts food banks next to the opulence of the super-rich.
This is a good reminder that automation can be put in the service of capitalism or anticapitalism; like technology in general, it’s not inherently progressive or exploitative.
Also, it sheds some light on the question of what happens to the labor saved by automation, which hinges on the distinction between “wealth” (the useful things a society produces) and “value” (what a capitalist economy uses to measure and distribute wealth). My understanding of this distinction stems from Moishe Postone’s Time, Labor and Social Domination. Capitalism creates the concept of “value” through capital’s appropriation of labor — what has value is what allows for capitalist accumulation, what allows capitalists to ultimately get control over society’s wealth. Postone argues that “overcoming value and the abstract social relations associated with it are inseparable from overcoming value-creating labor.”
Automation has the potential to do this, but only if it disrupts social relations and not merely the nature of work. As it stands now, automation “releases” human effort without collapsing “value” into “wealth.” The “liberated” effort is social wealth, but capitalism redirects it toward competitive private accumulation. The effort gets indexed to commodities and is translated into “value” up for grabs.
Capitalist interests have succeeded in channeling the “released energy” from automation into the individualized “labor” of consumerism rather than the collective/common social consumption of wealth. The fruits of automation are redistributed unevenly because they are capable of being appropriated by individuals, whereas the liberatory potential of the “general intellect” (to use Marx’s term for the technological potential embedded in a society’s development of machinery) lies in the way it can unify the social as a subject, positing the social as both the producer and consumer of technologically driven wealth. How can the implicit collective intentionality inherent in technology be made explicit?
I think that is what Terranova is talking about in the essay linked above; automation can potentially generate a different sort of transindividual subjectivity, but only if the infrastructure for it is developed, disseminated, and reproduced. Such a subjectivity is posited by the “general intellect” but negated by the individualism and status-seeking sustained by competitive consumerism. The open question is whether algorithms can be deployed in such a way that they produce collective subjects (dissolving the unitary individual into ad hoc ant-colony-like flows, maybe, as Bernard Stiegler describes?) rather than atomized ones (through personalization and filtering, etc.).
“Eve Sedgwick argues that modern literature depends in large part on a dichotomizing tension between what she calls universal v. minoritizing sexualities. I think a similar tension exists in social media cultures between legitimate and suspect notions of the self—the self v. selfie, if you will”
I think that is an interesting dichotomy, but I wonder whether it is useful to cling to the possibility of a “legitimate” self, as if there were selves that were illegitimate, that “lack integrity,” as Mark Zuckerberg infamously claimed.
Selfies don’t delegitimize or undermine the “legitimate self” so much as generate it, posit its existence as “that which is not selfie.”
In other words, selfies assault the notion of autonomous, persistent, transcendent identity; the willingness to take them and share them inadvertently shows you don’t believe in the true, real self inside but instead in the need to continually demonstrate who you wanted to be, who you were, in a given moment, for a particular audience. The “self” implies another’s point of view on it, a perspective that generates it. You don’t exist as a self until someone else recognizes it. The selfie simulates, evokes, that outside point of view. It makes our self real to us, something we can experience, consume, at the expense of pretending to be someone else as we look.
1. Nicholas Carr’s The Glass Cage is an apparent extension of his earlier book The Shallows: an account of recent social science research suggesting that our reliance on computers and technology is deskilling us in ways we don’t seem to be thinking through, so that something integral to the experience of “being human” is in danger of being lost. Google isn’t just making us dumber; it is making us subhuman.
2. But the book is not merely a humanist critique of automation — of how, say, using GPS degrades our ability to know where we are, both literally and figuratively. It is also a conservative critique of capitalism, albeit in code, where “automation” and “technology” and so on generally stand in for capitalism and capitalists as the enemies. Often, critiques of technology will lament the negative effects of it on our lives (making us lonely, depressed, stupid, narcissistic, selfish, etc.) without offering much of an explanation for why it has been successfully foisted on us. You get the impression that it’s the fault of consumers, who are too lazy or short-sighted or weak-willed to resist the blandishments of technology, or it’s the cosmic fault of perverse progress that marches inevitably forward with irresistible inertia. If you substitute “technology” for “culture” in this sentence from Andrew O’Hehir’s critique of A.O. Scott’s “Death of Adulthood” essay, you get the point:
Scott’s essay appears to treat “culture” as a sealed and self-referential system, one that shapes and reflects human consciousness but has only an incidental relationship with economic, political and social factors that lie outside its purview.
Some of Carr’s book works that way. He’ll note things like “New technology, once valued as a means to a greater good, came to be revered as a good in itself” without giving an explanation for why, leaving the implication that technology itself is just weirdly seductive (people are “unwittingly” falling for it) and can mutate itself to be more so, as if it were organic and capable of evolving independent of human will.
But Carr also works in critique of capitalism as the driving force of how tech has been developed and implemented. “Learning requires inefficiency,” he writes in one of his paeans to intellectual struggle, but “businesses, which seek to maximize productivity and profit, would rarely, if ever, accept such a trade-off. The main reason they invest in automation, after all, is to reduce labor costs and streamline operations.” Later, discussing Intel’s enthusiasm for automating everyday life, Carr points out that “instilling such dependency in customers would also, it seems safe to say, bring in a lot more money for Intel and other computer companies. For a business, there’s nothing like turning a customer into a supplicant.”
3. Technology automates and deskills because it benefits companies’ bottom line. It makes consumers more helpless and dependent, and further addicts them to short-term comfort and convenience over the long-term life satisfaction rooted in the “deep skills” of knowing how to make things. In Marxist jargon, this is the “real subsumption” of desire: what human beings within a society learn to want is reformatted to suit what yields opportunities for capitalist profit and growth.
4. Concern over deskilling at the hands of automation leads Carr into some Shop Class as Soulcraft–type celebrations of the dignity of hard work and the flow states stemming from manual labor. But this keeps the burden of resistance at the individual level, which will never address the root of the problem. You might commit to never using GPS, but its hegemony will make it inescapable and ultimately render your personal resistance a private insignificance. In other words, there is no point resisting technology without resisting capitalism, and furthermore, resisting technology is not equivalent to resisting capitalism. Resisting capitalism requires a different form of collective action. It is not a matter of the conservative’s standing athwart history and saying no. To quote O’Hehir again, on the limitations of personal-choice politics: “The freedom and autonomy each perceives in himself is better described by some other term, a force of compulsion or overdetermination … that disguises itself as liberation.”
5. If “true” value rests only with human effort and striving, it’s puzzling to think of automation as both labor-saving and profit-enhancing. If capitalism wanted to produce more value, wouldn’t it find new ways to generate and subsume human labor, the source of all value? Isn’t it among the fatal contradictions of capitalism that capitalists are reliant on living labor yet constrained by competition to try to eradicate it and save on labor costs?
One can argue that capitalism is a warped system that teaches humans to value fetishes and illusions instead of the “true” value of effort, and this opens up the possibility of a system where money can be made from robots making and circulating things for other robots to enjoy. (The endgame here is the Matrix vision of a world where enwombed humans passively provide the power for an essentially automated virtual world.)
Alternatively, one can argue that automation doesn’t save or reduce labor but makes it more abstract and fungible, more a matter of interchangeable clicks and links amid a universe of digitized commodities and processes. Convenience doesn’t lead to us working less; it leads us to do more of a simpler kind of work, work that masquerades as entertainment or work whose essential quality is to render us inert. Automation makes us work at keeping ourselves inert. (The secret of Adderall.)
6. If automation makes work more abstract and generic, it does the same to desire: hence the complaints that convenience and standardization vulgarize people’s palates and tastes and so on. The work to be distinctive in those ways has been rendered less productive; quality has been trumped by quantity in consumption, etc. Deskilling is a matter of acceleration as much as automation: Processing higher volumes of information is more lucrative than investing value in a few things through more sustained attention and care.
Despite the deskilling of desire, one must still work, continually, on self-becoming. Because this work has been standardized and made digitally accessible, it is more exploitable than ever. It is only that the interminable work yields a self that becomes ever more constrained to tokens of its virality, of its ability to push volume in the network. The unlimited growth of the self is constrained to the unlimited number of notes your Tumblr can garner.
Reading Nicholas Carr’s forthcoming The Glass Cage, about the underappreciated ethical dangers of automation, inspired me to read George Orwell’s The Road to Wigan Pier (1937), which contains a lengthy tirade against the notion of progress as efficiency and convenience. Orwell declares that “the tendency of mechanical progress is to make life safe and soft.” It assumes that a human being is “a kind of walking stomach” that is interested only in passive pleasure rather than work: “whichever way you turn there will be some machine cutting you off from the chance of working—that is, of living.” The human addiction to machine-driven innovation and automation, he predicts, will inevitably lead to total disempowerment and dematerialization:
There is really no reason why a human being should do more than eat, drink, sleep, breathe, and procreate; everything else could be done for him by machinery. Therefore the logical end of mechanical progress is to reduce the human being to something resembling a brain in a bottle.
Basically, he sees the Singularity coming and he despises it as a “frightful subhuman depth of softness and helplessness.”
This, he surmises, is why people of the 1930s were wary of socialism, which Orwell regards as being intimately connected ideologically with the theme of inevitable progress. That connection has of course been severed; socialism tends to be linked with nostalgia, and tech’s “thought leaders” tend to champion libertarianism and cut-throat competitive practices abetted by technologically induced asymmetries, all in the name of “innovation” and “disruption.”