I was complaining about the representation of care robots earlier on Twitter, which reminded me of an essay I wrote in 2005 about the ideology of “customer service.” After all, customer service is the best example of the strategic deployment of “skill-ified” emotional labor, of emotionality reduced to manipulation for profit. When I think of care robots, I tend to recontextualize the discussion in a retail scenario, where customers are “cared for” in a way that is meant to alienate them by wearing down their resistance.

Back then I wrote that customer service is deployed to promote a spirit of “heroic consumption” and accustoms us to the “notion that we deserve social recognition only when we buy something, as well as make us accept the idea that unless we have money to spend, we are invisible in the public sphere.”

I argued that customer service “is typically a way of making shoppers feel more important than they really are for an activity that should in no way be thought to dignify them.” 

Taking this to the logical extreme, I claimed that bad customer service is actually a form of kindness:

A clerk’s rudeness is really a gift that knocks you out of the complacent, compliant role of customer and thrusts you back into the more fundamental, sentient role of responding to what’s really around you; it disrupts the narcotic haze of a shopper lost in their private fantasies of acquisition and self-aggrandizement. It undermines the self-centeredness of consumerism; it affirms that, in contemporary capitalism, the customer is always wrong, always reifying the good things in life, always content to purchase rather than experience pleasure. The anger that many feel at bad customer service is a displaced anger; they are angry at themselves and how what they expect from life has been reduced to such squalid, petty demands as a smile on the face of the person who pours their coffee … When you are given the suck-up service with a smile, you are immediately made to feel like you belong, but you have to wonder what you belong to, and if that’s a club you really want to be in.

In 2005, I thought that commodified emotion made people skeptical of spontaneous friendliness outside of commercial encounters. Today, I’m more inclined to think that commercial pseudo-friendliness sets up the backdrop against which a “real” friendliness can be defined and experienced. Care robots may well function in a similar way, further refining the perception of what “real” emotional connection should be.

But it may also work to authorize treating the humans who provide robot-like care (under compulsion from employers) as though they had no feelings, with contempt and abuse. The problem is not so much how robots treat us but how we feel permitted to treat robots.

Trained to Like

A year ago, when the emotional manipulation paper became a scandal, it became clear that many Facebook users hadn’t before considered how their News Feeds were manipulated; they were inclined to instinctively accept the feed as a natural flow. That seems to reflect consumers’ attitude toward television, where content simply flows from channels and we don’t generally stop to think about the decisions that led to that content being there and how it might have been different. We just immerse ourselves in it or change the channel; we don’t try to commandeer the broadcasting tower.

Social media’s advent was supposed to do away with central broadcast towers, but it hasn’t really turned out that way. Instead, mass media companies distribute their products through the platforms, and consumers’ role is to boost the signal for them. The fact that consumers create content of their own is insignificant; it doesn’t compete with the mass media product.

On Facebook, the entirety of the News Feed flow is the mass-media product, and it is made not by executive producers but by an algorithm.

Will Oremus wrote a history of Facebook’s News Feed algorithm for Slate, which seems to tell the story Facebook wants told about how it came about. (There isn’t, for instance, much discussion of how the News Feed accommodates advertisers, as Nathan Jurgenson points out.) Oremus sets up the idea of omnipotent and autonomous machine-learning-driven algorithms as a straw man that can be blown away by the revelation that people actually program them. Get this: “the intelligence behind Facebook’s software is fundamentally human.” Who knew? I thought the algorithms were found inscribed on tablets of gold in Menlo Park.

But Oremus does raise a significant question about that human intelligence, about the logic (or lack thereof) on which the News Feed algorithm is premised: “What if people ‘like’ posts that they don’t really like?” What if engagement is not the same as enjoyment? 

To answer that, you have to accept that there are many different kinds of attention and many different shades and purposes behind affirmation. But advertisers don’t subscribe to that affective economy. Any attention is good attention when attention is monetized. Attention is measured in time; it doesn’t have any additional dimensions. It doesn’t matter if you are gaping in horror; you are still paying attention, and that is all that matters. As far as advertisers are concerned, you like what you see, because you will see an ad that is stuck on the thing your eyes are glued to.

Any examination of the logic behind the News Feed has to look at not only how it is designed to hold users’ attention, but why. What sort of attention does Facebook want to cultivate in its users so they fit the world view of the advertisers who fund its existence? 

What advertisers demand is one-dimensional attention. Part of what the algorithm achieves is this flattening of attention: it reduces the potential meaning of a like (or a click or a comment or any other behavior that is metricized) to an uncomplicated affirmation, a revealed preference that obviates any ambiguity. This distorts what is actually happening, since people like things for a complex and irreducible set of reasons, but when the distortion is fed back into the News Feed, shaping what appears there, it renders those other reasons irrelevant and disincentivizes them. It starts to produce the oversimplification as a simple fact about users’ actual behavior. The algorithm shapes the behavior it purports to reflect. It produces the sort of attention that it factors into its sorting processes.
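To make that feedback loop concrete, here is a minimal sketch in Python. It is not Facebook’s actual system; the Post class, the event log, and the exposure rule are all invented for illustration. The point is only that once every kind of reaction collapses into one score, and that score determines exposure, the ranking starts manufacturing the preferences it claims to measure:

```python
# Toy sketch of an engagement-ranked feed (hypothetical, not Facebook's code).
# Every event counts as +1 engagement, whatever the user meant by it.
from dataclasses import dataclass, field

@dataclass
class Post:
    title: str
    events: list = field(default_factory=list)  # likes, hate-clicks, horrified comments: all the same here

    def engagement(self) -> int:
        # One-dimensional attention: the score discards *why* anyone reacted.
        return len(self.events)

def rank_feed(posts):
    # Sort by the flattened score; all ambiguity has already been thrown away.
    return sorted(posts, key=lambda p: p.engagement(), reverse=True)

def simulate(posts, rounds=3, exposure=2):
    # The feedback loop: top-ranked posts get shown more, so they accrue
    # more events, so they rank even higher in the next round.
    for _ in range(rounds):
        for post in rank_feed(posts)[:exposure]:
            post.events.append("click")  # any attention counts, good or bad
    return rank_feed(posts)

posts = [Post("nuanced essay"), Post("LIKE THIS", ["like"]), Post("sad news", ["angry"])]
for p in simulate(posts):
    print(p.title, p.engagement())
```

In this toy loop the “nuanced essay” never accumulates a score, so it never gets exposure, no matter what users actually wanted; the ranking produces the behavior it purports to reflect.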

Oremus rightly points out that the Like button was created to put Facebook users to work at producing Facebook’s programming:

The like button wasn’t just a new way for users to interact on the site. It was a way for Facebook to enlist its users in solving the problem of how best to filter their own news feeds. That users didn’t realize they were doing this was perhaps the most ingenious part. If Facebook had told users they had to rank and review their friends’ posts to help the company determine how many other people should see them, we would have found the process tedious and distracting. Facebook’s news feed algorithm was one of the first to surreptitiously enlist users in personalizing their experience—and influencing everyone else’s.  

He places appropriate emphasis on Facebook’s ingenuity in surreptitiously putting users to work in this way. But what makes it so ingenious? It is the fact that Facebook could so easily reorient users’ experience of Facebook away from “sharing” and toward ranking. Suddenly the reason to go on Facebook was primarily to register likes, not to enjoy or engage with anything in particular or to keep in touch with friends or whatever the site’s original purpose might have been misunderstood to be. With the Like button, the site found its true purpose. The button was a machine for turning users into workers, and for converting their multifaceted engagement with people in their network into a unidimensional game of attention-boosting. 

Facebook tends to represent itself as trying to serve users by fine-tuning its algorithms to give them what they really want; unfortunately, evil content providers figure out how to game those algorithms and ruin it all, forcing Facebook to rejigger the variables to please its fickle customers. Because algorithms can be gamed, they reorient attention and effort toward how to game them rather than toward making quality content.

Oremus’s account fits with that picture. Once Facebook started using likes to populate News Feeds, 

many began to tailor their posts to get as many likes as possible. Social-media consultants sprung up to advise people on how to game Facebook’s algorithm: the right words to use, the right time to post, the right blend of words and pictures. “LIKE THIS,” a feel-good post would implore, and people would do it, even if they didn’t really care that much about the post. It wasn’t long before Facebook users’ feeds began to feel eerily similar: all filled with content that was engineered to go viral, much of it mawkish or patronizing. Drowned out were substance, nuance, sadness, and anything that provoked thought or emotions beyond a simple thumbs-up.

In the narrative Oremus offers, this gaming of the system is an unfortunate by-product of Facebook’s benign, well-intentioned algorithm. But creating a system to game is the whole point of deploying algorithms; Facebook then becomes the master rulemaker that controls the fates of those who choose to play (Zynga, Upworthy, newspaper publishers, etc.).

So Facebook is not responding to the demands of fickle customers who don’t like Upworthy links. Instead, Facebook is effectively producing fickle consumers (whose one-dimensional attention span is ever shallower) to drive harder bargains with those who want to sponsor News Feed posts and have them show up more prominently. By design, the algorithm exhausts us on successful content types and renders them boring, redundant, overexposed. The News Feed algorithm destroys organic reach so that Facebook can sell reach in the form of sponsored posts.

The eradication of “substance, nuance, sadness, and anything that provoked thought” from News Feed content is the prerequisite for making ads acceptable as content there. It is not an unfortunate unintended consequence; it is the premise that makes the News Feed work as it is supposed to, as an ad conduit.