This New York article by Casey Johnston about the death of the chronological feed colors within the lines of these sorts of pieces: It takes for granted that people suffer from information overload as if it is some sort of act of god, and that algorithmic curation is therefore an inevitable and necessary attempt to fix the problem. Users are implicitly presumed to be incapable of curating their own feeds, because they are either too lazy, too passive, or too indiscriminate — presumably users follow or friend people whose posts they have no interest in seeing out of politeness, or out of a design to curry favor with them. As Johnston writes, “It’s difficult for users to adequately curate their own feeds. Most people just follow their friends.”
That claim strikes me as conceding to social-media-company ideology about how platforms are supposed to be used. What makes such personal curation difficult is not the effort required to do it (make a Twitter list, start a Facebook group), but the effort it takes to overcome everyone assuming and insisting that it is so difficult.
Platforms like to promulgate the idea that users are not inclined to decide for themselves what they want, and that they are instead eager to be persuaded and served things they haven’t chosen, like ads. Not only can’t we curate our information feeds, but we can’t curate our personal desires, so we welcome ads and algorithms to solve the overwhelming problem for us.
Johnston acknowledges that maybe companies shouldn’t be trusted to do this sorting and don’t have our interests in mind, but then basically shrugs: “It’s an understandable fear. But, well, that ship has sailed.” We should just give up and roll with it, apparently. Sweet surrender.
Using social media that implements an algorithmically curated feed reinforces for users that they shouldn’t be expected to deliberate over any desires or guide their own information-search processes. Such platforms teach users helplessness. Staging information overload deliberately helps with the lessons. The point is to make the surrender pleasurable, as Will Davies suggests here. As with the “sublime” in aesthetic theory, we are overloaded with information so that we can enjoy being overpowered.
That is why platforms have always tried to saturate users with information and encourage them to constantly add more people and services to their feeds. The overload is intentional. Overload is the point, just like “too many channels (and nothin’ on)” is the whole point of having cable. Social media platforms foreground the metrics that drive overload, opting people in when possible and encouraging them to friend and follow everyone and everything they can.
Such promiscuity leads to the kinds of “context collapse” that companies are invoking to explain why users are posting less. But clearly the platforms prefer “context collapse” to communication. Their business model relies on having a lot of users spending a lot of time on the site, not necessarily on users posting a lot about themselves. Context collapse may make users post less, but it also generates a prurience about what others post; it salts all posts with a sense of risk that makes them more compelling. It also orients users toward consumption rather than production; or rather, it encourages them to limit their own “prosumption” to safe practices — sharing links to signal their own identity, endorsing other people’s content with likes, and so on.
This suits social media platforms just fine; the more programmatic your engagement is with their platform, the better. Ideally you watch your feed like television. Just as algorithmic sorting is posited as something users demand to deal with information overload (when really it allows platforms to blend ads in with content), “context collapse” is deployed to make it seem like users’ sinking into passivity is their own fault and not the platform’s — and meanwhile social media follow the path of all previous mass-media technologies, toward emphasizing the few broadcasting to the many.
We’re supposed to believe that users posting less constitutes some sort of threat to Facebook: If we stop posting, they won’t have as much data about users to use to target ads better. But that is not necessarily the case: Facebook gets the data it needs about users by spying on their browsing activity and keeping track of their likes and other sorts of non-posting behavior. The only thing that user posts are good for, from the platform’s point of view, is keeping other people engaged with the site.
But a site that is made up only of friends talking to friends is an uncomfortable place to serve ads — the primary business of Facebook. (It doesn’t exist primarily to facilitate connection or even data collection on individuals; those are subordinate to gluing eyes to screens and guaranteeing they see ads.) Hence Facebook seeks a blend of friend-to-friend recognition (the social glue that makes checking Facebook nearly mandatory) with the ordinary sort of culture-industry product that we are well accustomed to seeing ads with — the sort of content that people typically link to and share, the “quality” content that Facebook optimizes its feed (with constant tinkering and rejiggering) to prioritize.
In re-sorting users’ feeds, however, feed-curation algorithms aren’t supposed to solve information overload; they are meant to prolong it and make it more enjoyably overwhelming. The sublime overload inculcates users with passivity toward their own curiosity. The procedures that pretend to manage the overload direct the users’ surrendered attention toward ads. With their lowered resistance and learned helplessness, users should be more easily persuaded than ever.
Both information overload and context collapse are deliberately induced — they are features masquerading as bugs. Both help us enjoy a more passive attitude toward consuming social media, offering plausible deniability to ourselves when we see the ship of active engagement has sailed.