Monthly Archives: October 2014

Automatic democracy

Behind the libertarian propaganda for a direct (live) democracy, capable of renovating party-based representative democracy, the ideology of an automatic democracy is being put in place, in which the absence of deliberation would be compensated by a ‘social automatism’ similar to that found in opinion polls or the measurement of TV audience ratings.

That’s from Paul Virilio’s The Information Bomb, most of which consists of his dire warnings against technological developments we’ve yawningly come to take for granted. (It reads like an ultraparanoid Nicholas Carr book.)  

But the passage above struck me as a way to understand Facebook and other platforms’ preoccupation with automated personalization. Currently they use archives of previous behavior, predictive analytics, and ever-shifting proprietary algorithms to shape the user’s information environment, target ads, and redraw the horizon for what the user may think is possible in the world. Zeynep Tufekci, among others, has pointed out the way this can be used to shape voters’ choices or engineer consent. Virilio anticipated this possibility and takes it to its logical conclusion, in which the surveillance data on a user will be algorithmically processed to yield their opinions on various political questions and will effectively cast their vote for them, without their having to bother, let alone consent to participate. This would simply be a totalization of the logic of governance by opinion poll — only no one would need to be polled, no conscious deliberation on the part of the polis would be necessary, and preferences would be presumed to be revealed through patterns of ordinary behavior.

According to the positivist epistemology of Big Data, this would allow citizens’ “true” political desires to be captured without being manipulated by advertising and so on. But of course it would also license politicians to pursue a perpetual campaign of psychic manipulation. Instead of freeing individuals from politics, this would inescapably establish the latent politics in every last gesture a person makes; it would force people to recognize that anything that can be captured as data is always already a political matter, a matter of power. (Measurement connotes a power relation.)

That is already true, but automated, algorithmic politics would make it unmissable.


Algorithmic gender, etc.

As more information about ourselves is captured within “Big Data” platforms and systems alongside the data about other individuals, algorithms will be used to assign identity markers to us and place us in categories, regardless of whether these correspond to how we think of ourselves. The system will infer our identity, according to categories it defines or invents, and use these to shape our environments, further guide our behavior, sharpen the way we have been classified, and make the data about us denser and deeper. “A New Algorithmic Identity: Soft Biopolitics and the Modulation of Control” (gated link boo-hiss) by John Cheney-Lippold examines some of the ramifications of this.

1. Our self-identifications will just be one data point contributing to how the system identifies us. Within the data platform, in other words, we won’t have the last word on what our identity is; the platform will ascribe it on the basis of statistically determined parameters derived from all the users’ data in aggregate. 

We might see ourselves as “male,” for example, but if our generated data doesn’t fit the markers that the system uses to identify maleness, or corresponds to patterns that non-males have established, this won’t matter. The system will not regard us internally as male, and it won’t treat us the way it treats other “males.” (The same process holds for religious affiliation, where self-identifying Christians may not be regarded and treated as such based on their behavior within a platform.)

As Cheney-Lippold argues, “How a variable like X comes to be defined, then, is not the result of objective fact but is rather a technologically-mediated and culturally-situated consequence of statistics and computer science.” What is “male” will not be a biological or voluntaristic condition within Big Data, but will instead depend on how the algorithm for “maleness” is programmed, what variables it takes into account and what patterns it searches for and imposes on data and populations. These algorithms will then shape users’ interactions with any environments the system touches. 
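To make this concrete, here is a minimal sketch in Python (my own illustration, not Cheney-Lippold’s method or any actual platform’s code; the behavioral features and the labels drawn from past ascriptions are invented) of how a statistically ascribed “maleness” might be produced:

```python
# Hypothetical sketch: a platform infers "maleness" from aggregate behavior.
# Features and labels are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Invented behavioral features per user: hours on sports pages, hours on
# cooking pages. The choice of features is itself a cultural/political act.
n = 1000
sports = rng.exponential(2.0, n)
cooking = rng.exponential(2.0, n)
X = np.column_stack([sports, cooking])

# The "ground truth" here is just the platform's own past ascriptions; any
# bias in that history is laundered into the model as statistical fact.
y = (sports > cooking).astype(int)  # 1 = ascribed "male"

model = LogisticRegression().fit(X, y)

# A user who self-identifies as male but whose behavior matches the
# patterns the model associates with non-males:
user = np.array([[0.5, 4.0]])  # little sports, much cooking
p_male = model.predict_proba(user)[0, 1]
print(f"self-report: male; ascribed P('male') = {p_male:.2f}")
# The self-report is at most one more data point; the ascription wins.
```

The point of the toy is only that “male” here is nothing but a decision boundary fit to the population’s past behavior; change the history and you change the category.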

2. The definition of a particular identity marker may be much more fluid within a data-based system. It can adapt to the emergence of new patterns or be reprogrammed to accommodate certain data points. The categories themselves can be algorithmically dictated, so that their meaning evolves with the data collected and the processes used to organize and analyze it all. So “maleness” may change from moment to moment, depending on changes in the data that feeds into its algorithm as well as on direct code changes to the algorithm itself. Traditional categories may seem static, but they are dynamically redefined moment by moment, context by context, by algorithms. And algorithms also throw up new categories and subcategories that inflect the traditional ones contingently.

3. Because the definitions of categories are constantly changing, individual users’ identities are also perpetually in flux within the system. You may be male one minute, not male the next, male again tomorrow, and so on. Cheney-Lippold writes:

a user’s ascribed gender can and may change as new user information arrives into the cybernetic system ... algorithms allow a shift to a more flexible and functional definition of the category, one that de-essentializes gender from its corporeal and societal forms and determinations while it also re-essentializes gender as a statistically-related, largely market research-driven category. Gender becomes a vector, a completely digital and math-based association that defines the meaning of maleness, femaleness, or whatever other gender (or category) a marketer requires.
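A second toy sketch (again my own invention, reusing the same made-up features) illustrates this flux: the user’s behavior is held fixed while the population shifts underneath them, and the ascription drifts accordingly:

```python
# Hypothetical sketch: one user's behavior never changes, but their ascribed
# gender drifts as new aggregate data re-trains the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

def training_batch(n, sports_scale):
    sports = rng.exponential(sports_scale, n)
    cooking = rng.exponential(2.0, n)
    X = np.column_stack([sports, cooking])
    # The platform defines "male" relative to the current population:
    # above-median sports viewing. The category is population-relative.
    y = (sports > np.median(sports)).astype(int)
    return X, y

user = np.array([[3.0, 2.0]])  # fixed individual behavior

# As the population's viewing habits shift batch by batch, the decision
# boundary moves, and the same user's ascription flips.
for scale in (1.0, 3.0, 6.0):
    X, y = training_batch(2000, sports_scale=scale)
    model = LogisticRegression().fit(X, y)
    p_male = model.predict_proba(user)[0, 1]
    print(f"population scale {scale}: same user, P('male') = {p_male:.2f}")
```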

4. This changes the nature of stereotyping. “The capacity for cybernetic categorization to regulate certain categories’ meaning according to algorithm marks a move away from offline stereotypes and into a form of statistical stereotyping.” The stereotypes may be seen as quasi-objective if they are derived from a data set (and then fed back into a system to regulate users’ behavior). Data may mirror pre-existing social biases and then launder them into objectivity by letting those biases shape the definition of derived categories. 

The way biases are programmed into algorithms — what data is collected, what patterns they are trained to look for, etc. — may be hidden and laundered this way as well. Sexism can be reified into code and disappeared, if code is seen as some sort of neutral processing of data.

As Cheney-Lippold notes, drawing on Foucault, “ontologies are embedded within power relations” — what defines any particular social category is a matter of politics, but that politics can be hidden, circumvented, or superseded by code, automation, and algorithms.

5. The platforms may not ascribe gender (or any other classification) as an either-or, but as a place within a matrix, an intersecting point on many different continuums. Any category may be endlessly nuanced into subclassifications. Within the system “gender” can be expanded to admit any number of possibilities beyond “male” and “female,” each with its distinct set of data patterns. It becomes a matter of human users deciding to recognize those refined, expanded ascriptions as “gender.” If we let algorithms deduce our gender, we may be open to letting gender be something beyond binary. 

More likely, platforms could report our deviation from what they regard as “true maleness” or “femaleness” and invite us to contribute more data or perform more actions to try to “correct” this. Gender can be gamified, and we can be pushed to achieve 100% maleness, or what have you, according to the shifting determinations of what “maleness” is.

The platforms and algorithms can reify certain identity markers that were once defined more nebulously and indeterminately in social interaction, but then black-box those concretized definitions, so that individuals are forced to continually police themselves and modify their behavior in an effort to meet specifications they cannot see. The algorithm can thereby implement social control by holding the secret truth of a socially vaunted categorization. We may strive to perfect our maleness according to what a platform’s copious data “proves” it should be, but we will then become dependent on the platform that collects that data and analyzes it to produce an “objective” definition of what “male” is. We will have to continually interact with that system, give it more information, submit to its tests.

6. The way identity markers are defined within data platforms will always serve the extension of the platform, not the personal preferences of the individual users. Platforms will impute an identity that is conducive to perpetuating their influence over users and sustaining their interaction. (This is the lesson one can derive from the way gambling machines are engineered to prolong the time players spend on them.)

7. One can’t hack the algorithms that control us and shape the social categories that govern us (or how we are individually categorized). The categories are inferred from populations, not from individual behaviors; platforms implement control at the level of populations, and these are constituted ad hoc, depending on what sort of control is desired. 
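A toy clustering example (my own illustration, with invented two-dimensional features) shows why: the category an individual lands in is a fact about the whole population, and it changes when the population changes even though the individual does not:

```python
# Hypothetical sketch: categories as properties of populations. The same
# individual, with identical data, is grouped differently depending on who
# else is in the dataset.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
user = np.array([[5.0, 5.0]])  # one individual's (invented) feature vector

# Population A: most users sit low; a smaller mass sits high.
pop_a = np.vstack([rng.normal(1, 0.5, (200, 2)),
                   rng.normal(8, 0.5, (200, 2)),
                   user])
# Population B: the whole distribution has shifted upward.
pop_b = np.vstack([rng.normal(4, 0.5, (200, 2)),
                   rng.normal(12, 0.5, (200, 2)),
                   user])

for name, pop in (("A", pop_a), ("B", pop_b)):
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(pop)
    center = km.cluster_centers_[km.predict(user)[0]]
    print(f"population {name}: user grouped around {center.round(1)}")
# In A the user falls in with the high cluster; in B, the low one. Nothing
# about the user changed -- only the population did.
```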

As Cheney-Lippold explains:

Control then works at levels far past the purview of liberal individualism, situating subjects within networks of power that govern indirectly and without proximity. The individual user is incapable of really experiencing the effect that algorithms have in determining one’s life as algorithms rarely, if ever, speak to the individual. Rather, individuals are seen by algorithm and surveillance networks as members of categories. … The identifications that make us as subjects online are becoming more opaque and buried, away from our individual vantage points and removed from most forms of critical participation. They are increasingly finding mediation outside the realm of traditional political intervention and inside the black boxes of search engines and algorithmic inference systems (Becker and Stalder, 2009).

We don’t even know what our identity within the platform really is, let alone the levers by which to change it — and that’s assuming there is a static “it” that could be changed. There is no use complaining that an algorithm imposes gender discrimination if it is impossible to know how that algorithm defines gender at any moment. The definition is always being modulated on a case-by-case basis, depending on the other factors in play. I might be “male” to an algorithm being run for law enforcement and “female” to one being run by a food company, and so on.
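That scenario can be sketched directly (another toy of my own, with the same invented features): two hypothetical clients train on the same behavioral data but define “male” by whatever proxy suits their purposes, and the same user is classified in opposite ways:

```python
# Hypothetical sketch: the "same" category, operationalized for two invented
# clients, classifies one fixed user in opposite ways.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.exponential(2.0, (2000, 2))  # invented features: [sports, cooking]

# Each client defines "male" by the proxy that suits its purpose.
client_labels = {
    "law-enforcement model": (X[:, 0] > 2.5).astype(int),  # heavy sports use
    "food-company model":    (X[:, 1] < 1.0).astype(int),  # light cooking use
}

user = np.array([[1.0, 0.5]])  # one user, one set of behavioral data

for client, y in client_labels.items():
    model = LogisticRegression().fit(X, y)
    p_male = model.predict_proba(user)[0, 1]
    print(f"{client}: P('male') = {p_male:.2f}")
# The same data yields "not male" for one client and "male" for the other.
```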

The full range of our identity markers can be algorithmically reconstituted depending on our context; our imputed identity, even in terms of any of the familiar and seemingly fixed categories (gender, class, race, religious affiliation, etc.), can change from web page to web page. Our data is reprocessed from moment to moment, positing a different self for us to inhabit and imposing a different set of culturally inflected prejudices on us. Trying to wrest control of these from within the system only refines the data by which the process is implemented. “We are effectively losing control in defining who we are online, or more specifically we are losing ownership over the meaning of the categories that constitute our identities.”

A fatalistic response to this is to embrace the way identity is imposed on us, to consume our “selves” as perpetually novel, ultra-personalized consumer goods. (Baudrillard prescribed this as a kind of “hyperconformity.”) One can accept the ready pleasure of consumerism rather than pursue the freedom of autonomy, which is always imperfect and requires boundless innovation in our techniques of resistance.

Link

A critique of K-Hole’s “normcore.” After differentiating normcore from the common misinterpretations (which have culminated in the “Dress Normal” Gap campaign), the author of the e-flux essay, Rory Rowan, goes on to argue that normcore fails to accommodate the sociological realities of power and overlooks the obstacles to K-Hole’s utopian proposition of continually blending in. (Inclusion and exclusion are not mere matters of personal effort.) Also, normcore is entirely consistent with the demands of neoliberal subjectivity that one be continually adaptable.

insofar as they fail to contextualize Mass Indie in relation to broader socioeconomic or political forces, K-Hole misses an opportunity to examine the demand for differentiation in the domain of pop culture in relation to wider patterns of neoliberal subjectification … Indeed, by defining Normcore in relation to adaptability and empathy—both admirable traits in and of themselves—K-Hole risks framing their solution to chronic differentiation in terms that replicate rather than challenge the ideological Trojan horses of neoliberal subjectification. It is, after all, the same ideological framework that insists on an adaptive labor force and the economic importance of affects such as empathy, that channels subjectification into the isolating vectors of differentiation. 

The pressures of both fitting in and developing a personal brand remain strictly the individual’s problem — stress and risk imposed on isolated individuals whose atomization is perfected in social media. It is still work, whether you are working to be different or working to be the same. Conforming requires as much effort as striving for uniqueness.

Social media has made this work a source of profit for platform-owning companies, and this has provided an incentive for maintaining isolating individualism in the face of the potential collectivities that connectivity could foster. Rowan writes:

Although K-Hole claims that today, individuals must find their communities—and K-Hole associates Normcore with this process—no details of the forms of community that might be found or produced through this individual search are offered…

Normcore is best understood as a coping mechanism to help individuals deal with the stresses of differentiation, rather than a means to address the wider social conditions that demand it. In such an individualist account of social relations, there is not much need to address the contents of social norms.

The work of forming collectives in social media is often captured by the platforms that facilitate them; meanwhile the platforms continually refine their interfaces to encourage re-individualization, with scoreboards, metrics, and the like, as well as personalized, big-data-driven reshaping of the social reality users see on these platforms.

But of course, that doesn’t mean people should stop doing the affective labor necessary to build and sustain relationships and collectives. Empathy is exploitable but necessary and nearly unpreventable, just as cooperation inside capitalist-controlled workspaces is. Belonging is still the only solution to not belonging, even though it is in danger of being commodified, subsumed into capitalist processes of production for profit.

Rowan ends on a note of embracing antagonism in the name of a new normativity: Find enemies rather than worry about fitting in; deal with the realities of power by seeking to wield it. Don’t interpret the world; change it, etc. Presumably this would foster collective subjects unified by their shared objects of contempt. The danger is that this admonition too can be neoliberalized, so that all feel obliged to fight a war against all, at an increasing pitch of paranoia. 

SO NOW!: On Normcore | e-flux