Monthly Archives: May 2014

From The Culture of the Copy, Hillel Schwartz (1996)

Seems like the ground was well-prepared for selfies and for normcore. “Our false intimacies lash us on to a narcissism ever enamored of but ever discontented with likeness … Amidst a parade of electronic spectacles, all we make of our selves is suspect … The best we can manage in a world of simulacra is a visage unavoidably generic.”

Schwartz adds in a passage later in the book that “our self-portraits now neither anchor nor extend us because we are no longer sure of ourselves as originals.”


This is a useful paper that attempts to shift the discussion about privacy to a more pragmatic one about “obscurity.” We have been used to taking our obscurity for granted, which afforded a certain amount of safety with respect to information about us. Our lives were not search-engine optimized. But now circumstances have flipped: what we could reasonably expect to remain difficult to discover about us is now relatively easy to uncover, and the burden is on us to somehow encrypt the archived traces we generate in the ordinary course of living.

The key point is that it once required an enormous amount of effort for people to dig up dirt on others; now the invasions of privacy can be automated, with the work offloaded to data scrapers (which make data easy to obtain) and algorithms (which make data easy to understand, or at least process).

As the authors point out, individuals are easily deterred by small levels of friction in information gathering. Our natural narcissistic indifference to anyone but ourselves means we generally won’t try too hard to undo others’ obscurity. Typically we are more invested in ending our own obscurity, trying to get the attention of audiences whose approbation we seek. We often want high visibility in particular contexts, and people who are rendered socially invisible often suffer from having their lives relatively devalued by the culture. People deliberately seek social recognition, not obscurity — at least until that recognition becomes notoriety, or unintended audiences begin to notice.

But companies that sense a profit opportunity aren’t deterred by friction, and certainly their bots aren’t. Privacy invasion scales, and business models can spring up that revolve around organizing information about individuals so it’s ready to sell whenever antagonists (prospective employers, spouses, police, advertisers) decide they need it. “All it takes is a single trigger event and otherwise strong obscurity protections can come undone,” the authors note. These businesses spell the end of functional obscurity.

Such businesses include the obvious malfeasants, like mug-shot websites, but they also include Google, Facebook and Twitter, which collect data and assign it to a profile and track it through traceable networks, looking for patterns. And then there are all the Big Data firms (Acxiom, etc.) that also collect information and process it and sell it.

The authors give a rundown (via Alexis Madrigal’s summary) of Zeynep Tufekci’s assessment of how data can be made into disinformation for Big Data’s algorithms. 

Some of the options to produce obscurity include: referring to folks without tagging them; referring to people while deliberately misspelling their names or alluding to them through contextual clues; sending screenshots of a story instead of directly linking to it; and hate-linking, which introduces noise into a system by making it seem that you approve of a story rather than denounce it.

Disinformation campaigns may protect individuals from having entirely accurate profiles compiled about them. But from the perspective of the Big Data companies, such inaccuracies don’t even matter. They are dealing in general probabilities, not facts. They generally aren’t interested in targeting specific individuals, just types, and the “privacy” harms they are responsible for are at the level of populations, not persons. 

Maintaining personal anonymity is not a defense against the harms caused by predictive analytics and Big Data population profiling — combing through data with algorithms to detect patterns and correlations that can then be used to reshape the digital infrastructure that users experience. If your anonymized data is similar to someone else’s, you may subsequently be treated the same and subjected to the same prejudices. They don’t need to know your name to discriminate against you. It’s safer and more effective if they don’t. The authors point out that “even if one keeps a relatively obscure digital trail, third parties can develop models of your interests, beliefs, and behavior based upon perceived similarities with others who share common demographics.”

Focusing on obscurity thus seems a bit myopic in its emphasis on protecting the specific individual from being known. Likewise, individual action is largely useless for protecting oneself from population-level effects, from the policy decisions that stem from Big Data. 

Obscurity and Privacy by Evan Selinger, Woodrow Hartzog :: SSRN


This is another good essay about the foibles of blaming technology for problems of asymmetric power. The technology doesn’t invent power discrepancy and discrimination; it merely “foregrounds” it, which can be a good thing, making hidden injustices suddenly blatant.

But at the same time, such technology usually exacerbates the inequities it reveals, allowing the powerful to leverage their advantages and privilege, widening the gap. (Those who fund technological innovation are the same people who are already privileged; they assure it exacerbates asymmetries, which, after all, fuel capitalism.) What makes the injustice visible is that an equilibrium has been upset, and bad but tolerable arrangements have been made intolerably worse.

Anyway, the point that privacy can’t be imposed unilaterally, can’t be “chosen” by an individual, seems crucial. That notion reflects the technologically abetted fantasy about how social life works, as an opt-in situation you can manage from a device with a variety of settings and filters. But privacy is produced socially; it is a collective phenomenon. It emerges as a result of a group of people subscribing to the same norms about disclosure and information circulation, about what sorts of things should not be exploited, commodified, and marketized.

The capitalists who fund and run technology companies want nothing more than for privacy to be commodified; that is why they are so aggressive about chipping away at the norms that once constituted it. Once those norms are destroyed, they can peddle stop-gap solutions they control to keep information from circulating in public view while amassing all the “private” data for themselves, to put to whatever profitable uses they can devise.

The “privacy problem” is a problem with information capitalism in general, obviously. It is a problem of fostering asymmetries to create profit opportunities. (Mug-shot blackmail sites, discussed here, are just an egregious example of a widespread business model.) It is ultimately a problem of social production: How do we organize to produce privacy norms in the face of the social media companies that organize us in such a way as to destroy them?

Google Glass Doesn’t Have a Privacy Problem. You Do.



The bigger issue is that there’s a tendency to look at what people are doing online, especially teens, especially teen girls, as inherently dangerous and corrupting; to treat sexting as something risky done to them rather than something at times pleasurable that they participate in with a degree of agency; and to frame it as something caused by technology, part of the longer trend of ignoring or even erasing teens’ and women’s sexual agency.

Society is uncomfortable with such sexual agency, and the “problem” is solved when it is displaced onto an app or a device: just put the phone away and we can pretend something that long predates the phone can be “fixed” away.

As such, “social media” or “technology” is treated as a morally disinhibiting toxin (more on that). I think this stems from what I call “digital dualism,” the fallacy that the web is some separate, other, virtual world; it gives us an easy culprit and solution for difficult social problems, but it’s a misunderstanding of the deeply enmeshed nature of digital technologies in our everyday lives.

I’ve been guilty of scapegoating technology in this way, taking a bit of a determinist view of what using social media does to users’ perception of risk, their appetite for it, the potential pleasures they can develop through it that might not have otherwise seemed possible or desirable. Some of these are about consuming “risk” on demand, as if it were a commodity in isolation, a means to escape less controllable forms of risk. This risk on demand is marketed in a way that is designed to be compulsive, as with machine gambling.

Others are the pleasures of agency, of having a demonstrable and measurable effect in the world — the pleasures of virality. But some of these are the ordinary pleasures of desire that would exist without media to chart them.

But because these ordinary “natural” desires are charted in media that don’t belong to the ones expressing desires, because they are made manifest in the particular form that social-media platforms require, they become reified and exploitable by third parties. That’s the basic business model of social-media companies: induce users to surrender control of the messages they communicate in order to be able to broadcast them to a measurable audience. The idea that desires must be broadcast to be “real” sustains that business model, and that seems an ideology worth combatting.

on sexting


This (via Nathan Jurgenson) seems like a useful way to contextualize the pejorative overtones that tend to attach to “selfie.” If we regard the selfie as a shorthand way to describe not just images you take of yourself but instead the “self in social media” — which is regarded as illegitimate, contrived, narcissistic, false, uncontrolled, viral, etc. — the ideological significance of the pejorative selfie becomes clearer. A lot more on the disciplinary uses of “selfie” can be found at Anne L. Burns’s excellent blog.

The Epistemology of the Second Selfie

Another passage from Baudrillard’s Symbolic Exchange and Death:

We pass from injunction to disjunction through the code, from the ultimatum to solicitation, from obligatory passivity to models constructed from the outset on the basis of the subject’s ‘active response,’ and this subject’s involvement and ‘ludic’ participation, toward a total environment model made up of incessant spontaneous responses, joyous feedback and irradiated contacts. According to Nicolas Schaffer, this is a ‘concretisation of the general ambience’: the great festival of Participation is made up of myriad stimuli, miniaturised tests, and infinitely divisible question/answers, all magnetised by several great models in the luminous field of the code.

This anticipates the oft-cited Deleuzean essay about “societies of control,” which is essentially a restatement of Foucault’s claims about “governmentality.”

The general point is that the “code” — the immediate mediatization of experience — implements control through participation rather than prohibition.

The medium’s effectiveness is the message

If I had known Baudrillard’s Symbolic Exchange and Death was basically an extended investigation of the ramifications of McLuhan’s “the medium is the message” claim, I probably would have dumped a bunch of it into my paper on virality.

Virality isn’t a medium, exactly, but a media effect, so it takes us one step further: “the medium’s effectiveness is the message.” Or “circulation is the message.” Arguably, using the medium, as opposed to the medium’s static formal qualities, is eventually what all content boils down to in a quantified network. 

What started me thinking about this is what Baudrillard has to say about public opinion, which he says “is par excellence both the medium and the message.” Public opinion does not refer to some pre-existing opinion that it successfully measures. It refers to itself, already set in motion as a supposedly real and objective thing, and polls merely measure how much the people polled already know about it.

Opinion polls are situated beyond all social production of opinion. They now refer only to a simulacrum of public opinion. This mirror of opinion is analogous in its way to that of the Gross National Product: the imaginary mirror of productive forces without regard for their social finality or counter-finality, the essential thing being merely that ‘it’ [ça] is reproduced. The same goes for public opinion, where what matters most is that it grows incessantly in its own image: this is the secret of mass representation. 

Nobody need produce an opinion any more, but everyone must reproduce public opinion, in the sense that all opinions are swallowed up in this kind of general equivalent and proceed from it thereafter (reproduce it, or what they take it to be, at the level of individual choice).

I argue that virality is basically an extension of that process. It posits reproduction of a meme as the centrally important fact about a meme, swallowing its putative content. Its massiveness speaks to its importance, and all reiterations of it refer to that, not to “what it’s about.”

Baudrillard suggests the function of polling is to simply perpetuate polling, the imposition of questions with already prepared answers on top of reality to delimit it. Polls “belong to the same order as TV and the electronic media, which … are also a perpetual question/answer game, an instrument of perpetual polling.”

Virality too is an “instrument of perpetual polling,” inviting us to engage with reality only through an ongoing referendum on how certain things are being circulated. In a world where representations are governed by virality, perpetual polling — assessing the momentum of a thing’s spread — is the only way to process and interact with the real.

When Baudrillard describes what happens as polling supplants more open-ended forms of expression, he could be describing our quantified communications on social media. 

There is a jubilation proper to this spectacular nullity, and the final form that it takes is that of statistical contemplation. Such contemplation, moreover, is always coupled, as we know, with a profound disappointment — the species of disillusion that the polls provoke by absorbing all public speaking, by short-circuiting every means of expression. They exert fascination in proportion to this neutralization through emptiness, to the vertigo they create by anticipating every possible reality in the image.

Social media use turns out to be not a liberation from the top-down tyranny of old media, with its techniques of manipulating and corralling public opinion, but instead extends the hegemony of such techniques. Social media turn everything we say or do within them into a moment of polling, kicking off “statistical contemplation” as the only means of thinking about it all. Once statistical contemplation starts, how can we make it stop?