
The overload

This New York article by Casey Johnston about the death of the chronological feed colors within the lines of these sorts of pieces: It takes for granted that people suffer from information overload as if it were some sort of act of God, and that algorithmic curation is therefore an inevitable and necessary attempt to fix the problem. Users are implicitly presumed to be incapable of curating their own feeds, because they are too lazy, too passive, or too indiscriminate — presumably users follow or friend people whose posts they have no interest in seeing out of politeness or a desire to curry favor with them. As Johnston writes, “It’s difficult for users to adequately curate their own feeds. Most people just follow their friends.”

That claim strikes me as conceding to social-media-company ideology about how platforms are supposed to be used. What makes such personal curation difficult is not the effort involved in doing it (make a Twitter list, start a Facebook group) but the effort it takes to overcome everyone assuming and insisting that it is so, so difficult.

Platforms like to promulgate the idea that users are not inclined to decide for themselves what they want, and that they are instead eager to be persuaded and served things they haven’t chosen, like ads. Not only can’t we curate our information feeds, but we can’t curate our personal desires, so we welcome ads and algorithms to solve the overwhelming problem for us. 

Johnston acknowledges that maybe companies shouldn’t be trusted to do this sorting and don’t have our interests in mind, but then basically shrugs: “It’s an understandable fear. But, well, that ship has sailed.” We should just give up and roll with it, apparently. Sweet surrender.

Using social media that implements an algorithmically curated feed reinforces for users that they shouldn’t be expected to deliberate over any desires or guide their own information-search processes. Such platforms teach users helplessness. Deliberately staging information overload helps with the lessons. The point is to make the surrender pleasurable, as Will Davies suggests here. As with the “sublime” in aesthetic theory, we are overloaded with information so that we can enjoy being overpowered.

That is why platforms have always tried to saturate users with information and encourage them to constantly add more people and services to their feeds. The overload is intentional. Overload is the point, just like “too many channels (and nothin’ on)” is the whole point of having cable. Social media platforms foreground the metrics that drive overload, opting people in when possible and encouraging them to friend and follow everyone and everything they can. 

Such promiscuity leads to the kinds of “context collapse” that companies are invoking to explain why users are posting less. But clearly the platforms prefer “context collapse” to communication. Their business model relies on having a lot of users spending a lot of time on the site, not necessarily on users posting a lot about themselves. Context collapse may make users post less, but it also generates a prurience about what others post; it salts all posts with a sense of risk that makes them more compelling. It also orients users toward consumption rather than production; or rather, it encourages them to limit their own “prosumption” to safe practices — sharing links to signal their own identity, endorsing other people’s content with likes, and so on. 

This suits social media platforms just fine; the more programmatic your engagement is with their platform, the better. Ideally you watch your feed like television. Just as algorithmic sorting is posited as something users demand to deal with information overload (when really it allows platforms to slip ads in with content), “context collapse” is deployed to make it seem like users’ sinking into passivity is their own fault and not the platform’s — and meanwhile social media follow the path of all previous mass-media technologies, toward emphasizing the few broadcasting to the many.

We’re supposed to believe that users posting less constitutes some sort of threat to Facebook: If we stop posting, they won’t have as much data about users to use to target ads better. But that is not necessarily the case: Facebook gets the data it needs about users by spying on their browsing activity and keeping track of their likes and other sorts of non-posting behavior. The only thing that user posts are good for, from the platform’s point of view, is keeping other people engaged with the site.

But a site that is made up only of friends talking to friends is an uncomfortable place to serve ads — the primary business of Facebook. (It doesn’t exist primarily to facilitate connection or even data collection on individuals; those are subordinate to gluing eyes to screens and guaranteeing they see ads.) Hence Facebook seeks a blend of friend-to-friend recognition (the social glue that makes checking Facebook nearly mandatory) with the ordinary sort of culture-industry product that we are well-accustomed to seeing ads with — the sort of content that people typically link to and share, the “quality” content that Facebook optimizes its feed (with constant tinkering and rejiggering) to prioritize.  

In re-sorting users’ feeds, however, feed-curation algorithms aren’t supposed to solve information overload; they are meant to prolong it and make it more enjoyably overwhelming. The sublime overload inculcates in users a passivity toward their own curiosity. The procedures that pretend to manage the overload direct users’ surrendered attention toward ads. With their lowered resistance and learned helplessness, users should be more easily persuaded than ever.

Both information overload and context collapse are deliberately induced — they are features masquerading as bugs. Both help us enjoy a more passive attitude toward consuming social media, offering plausible deniability to ourselves when we see the ship of active engagement has sailed.


It’s stated as if it were a commonsense fact that people decide to “turn to social media as a source of inspiration and comfort,” but that strikes me as the last thing a person would do intentionally. 

It is not the quest for “inspiration” or “comfort” that drives us to social media but the terrifying suspicion that those things can’t really be found anywhere. 

The idea that one “turns” to social media implies that we’re not always already embedded in it. Treating the instrumentalization of sociality that social media so well exemplifies as something we opt into preserves the illusion that there is still some “good” form of social being that can remain unsullied. We want to “use social media” (rather than take it for granted as part of social life and ordinary communication) because we want it to be responsible for the shortcomings that afflict our social relations in general. We want to contain the way we use one another to that “space,” but the containment walls never hold. The reactor has melted down.

contortions of self-consciousness

In his book Sour Grapes, Jon Elster has a chapter about “willing what cannot be willed,” or what he also calls “states that are essentially by-products.” He offers the example of spontaneity: you cannot try to be spontaneous; you can only recognize that you had been acting spontaneously after the fact. 

“When we observe that some such state is in fact present,” Elster notes, “it is tempting to explain it as the result of action designed to bring it about — even though it is rather a sign that no such action was undertaken.” This Elster calls the “intellectual fallacy of by-products,” which presumably leads to a belief that we can reverse-engineer the pleasure we take in certain conditions that can’t otherwise be pursued directly.

Reading about ASMR, as in this article about Buzzfeed’s Facebook Live show ASMR News Now, made me think of this fallacy, and how ASMR seems to hinge on defying the idea that you can’t manufacture inexplicable pleasures. ASMR is usually explained as a kind of brain tingle brought on by sounds that conjure intimacy and monotony in equal measure: “soft voices, kind words, a conceit of caregiving,” as Nitin Ahuja explains it in this essay. The sensation seems to steal upon those who experience it, yet it apparently can be triggered reliably by ASMR practitioners who can slur their sibilants in the right rhythm while performing some mundane activity chosen for its unobtrusiveness, its lack of capacity to bear deeper meaning. The ASMR practitioner often performs concentration — through such routines as folding towels, say — so that listeners can let their own need to concentrate dissolve. 

The typical ASMR scenario thus seems to stage meditative conundrums of concentrating on not concentrating, dramatizing how the care we often yearn for must be an expression both of special attention and of being taken for granted. It’s about using technological mediation to will an unwillable state, to make our approach to a desirable “by-product” state suitably indirect. The frisson of ASMR is thwarting the principle that you can’t tickle yourself, that you can’t plan to give yourself goosebumps. ASMR says you can.

ASMR suggests there is a way out of the contortions of self-consciousness that come from trying to be natural. Elster cites Stendhal’s diary on this recursive desire to act natural and claims Stendhal “turned to fiction” as a “way of enacting his desire by proxy.” 

I wonder if we sometimes hope that our social-media profiles could function in a similar way, allowing us to actively experience what happens to that profile as a kind of radical passivity that passes for “naturalness.” Our data gets processed, and what we really want to know or how we really want to be is presented to us not as an artifact of our consciousness, of our deliberate consideration, but as somehow implicit in our past activities.

This desire to have our “real selves” captured behind our backs and revealed to us becomes an alibi for permitting extensive surveillance of the self, for embracing the “inevitability” of surveillance as a prerequisite to self-knowledge. Finally surveillance will let us chart the path to “being natural” without immediately feeling unnatural about it. Inherent in this is our ability to take for granted that “naturalness” is less a state of being than a commodity and, like other emotional commodities, is available on demand by consuming the appropriate goods. When I want to feel “authentic,” I can look at a list of books Amazon recommends for me and simultaneously delight in how well my data pegs me and in how much of me escapes Amazon’s understanding.

Stendhal, Elster notes, didn’t try to “make an impression on others by faking qualities that he does not have.” Rather, he wanted to become “a person who could not care less about making an impression.” One of the seductive things about surveillance is that you know you are making an impression — as so much data — regardless of whatever effort you make. You can trick yourself into thinking that the effort to be natural has become superfluous, and your “naturalness” will be constructed for you from that data for your later consumption.

DRM for clothing

This Fortune article points to how “Internet of Things”–style surveillance is coming to clothing, so what you wear can be tracked just like your phone is. There is also a blockchain-like component to this tagging: each article of clothing will be assigned a unique number that presumably can guarantee a branded good’s provenance and prevent fraud. It’s essentially DRM for clothing.
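To make the “DRM for clothing” analogy concrete, here is a minimal sketch of the kind of per-item provenance scheme the article describes: a unique ID is minted for each garment at manufacture, and “authenticity” later means nothing more than “the registry recognizes this ID.” Everything here (the registry class, the scanner parameter) is hypothetical, a toy model rather than Evrythng’s actual system:

```python
# Toy model of per-item tagging: a central registry mints unique IDs at
# manufacture and later vouches for provenance. Hypothetical names throughout;
# this is an illustration, not Evrythng's actual API.
import uuid
from datetime import datetime, timezone

class TagRegistry:
    """Central registry mapping unique tag IDs to provenance records."""

    def __init__(self):
        self._records = {}   # tag_id -> {"brand": ..., "product": ...}
        self._scan_log = []  # every verification doubles as a tracking event

    def register(self, brand: str, product: str) -> str:
        """Assign a fresh unique ID at the point of manufacture."""
        tag_id = uuid.uuid4().hex
        self._records[tag_id] = {"brand": brand, "product": product}
        return tag_id

    def verify(self, tag_id: str, scanner: str) -> bool:
        """A tag is 'authentic' only if the registry knows it; the registry,
        in turn, remembers who scanned what, and when."""
        self._scan_log.append((tag_id, scanner, datetime.now(timezone.utc)))
        return tag_id in self._records

registry = TagRegistry()
tag = registry.register("ExampleBrand", "denim jacket")
print(registry.verify(tag, scanner="retail-kiosk-12"))            # True: recognized ID
print(registry.verify("counterfeit", scanner="retail-kiosk-12"))  # False: unknown ID
```

Note that even in this toy model, the anti-fraud check and the tracking are the same operation: every authenticity lookup leaves a scan record behind.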

But really, the apparel and data-hoarding industries don’t even know what they are going to use this technology to do yet. “Doing this” — integrating unique ID tags with RFID capability — “by default at the point of manufacture means the brands don’t need to weigh up the benefits before deciding how to integrate the functionality.” Just impose surveillance now, and figure out what you are actually looking for later! 

Sounds great, right? What consumer wouldn’t want that? Naturally, the article is full of risible quotes from industry people about “what consumers want” (apparently “they want to have product suggestions from the retailer based on what they personally want,” which seems a bit tautological) and about corporate self-regulation with respect to potential abuses of this technology.

The CEO of Evrythng, the nightmare factory responsible for this “Facebook of Things,” told Fortune, “I think brands have an increasingly important responsibility to be transparent with the uses of the data that they’re providing.” 

Murphy added that no one should be tracked without their consent. “Brands have to nurture trust with the consumer,” he said.

So the solution to the rampant invasions of privacy that this technology enables is “trust companies not to.” 

We should probably get used to accepting that our clothes will generate data much as our phones do, and we will have no control over how that data is collected, used, and resold. Chances are we won’t know those trackers are in the clothes in the first place. We won’t even have the opportunity to check yes on an incomprehensible 99,000-word terms-of-service agreement before we put them on.