
This is a useful paper that attempts to shift the discussion about privacy to a more pragmatic one about “obscurity.” We have been used to taking our obscurity for granted, which afforded a certain amount of safety with respect to information about us. Our lives were not search-engine optimized. But now the circumstances have flipped: what we could once reasonably expect to remain difficult to discover about us is now relatively easy to uncover, and the burden falls on us to somehow encrypt the archived traces we generate in the ordinary course of living.

The key point is that it once required an enormous amount of effort for people to dig up dirt on others; now the invasions of privacy can be automated, with the work offloaded to data scrapers (which make data easy to obtain) and algorithms (which make data easy to understand, or at least process).
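To get a sense of how little effort that offloaded work requires, here is a minimal sketch (mine, not the authors’) of the sort of scraper that does it; the URLs and the harvested field are purely hypothetical:

```python
import re
import requests

# Hypothetical public profile pages; any list of URLs would do.
PROFILE_URLS = [
    "https://example.com/people/1",
    "https://example.com/people/2",
]

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def scrape_profiles(urls):
    """Fetch each page and pull out every email address it contains."""
    records = []
    for url in urls:
        html = requests.get(url, timeout=10).text
        records.append({"source": url, "emails": EMAIL_RE.findall(html)})
    return records

if __name__ == "__main__":
    for record in scrape_profiles(PROFILE_URLS):
        print(record)
```

A few lines like these, pointed at enough pages and left to run, replace what once took hours of manual digging.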

As the authors point out, individuals are easily deterred by small levels of friction in information gathering. Our natural narcissistic indifference to anyone but ourselves means we generally won’t try too hard to undo others’ obscurity. Typically we are more invested in ending our own obscurity, trying to get the attention of audiences whose approbation we seek. We often want high visibility in particular contexts, and people who are rendered socially invisible often suffer from having their lives relatively devalued by the culture. People deliberately seek social recognition, not obscurity — at least until that recognition becomes notoriety, or unintended audiences begin to notice.

But companies that sense a profit opportunity aren’t deterred by friction, and certainly their bots aren’t. Privacy invasion scales, and business models can spring up that revolve around organizing information about individuals so it’s ready to sell whenever antagonists (prospective employers, spouses, police, advertisers) decide they need it. “All it takes is a single trigger event and otherwise strong obscurity protections can come undone,” the authors note. These businesses spell the end of functional obscurity.

Such businesses include the obvious malfeasants, like mug-shot websites, but they also include Google, Facebook, and Twitter, which collect data, assign it to profiles, and track it across traceable networks, looking for patterns. And then there are all the Big Data firms (Acxiom, etc.) that likewise collect, process, and sell information.

The authors give a rundown (via Alexis Madrigal’s summary) of Zeynep Tufekci’s assessment of how the data we generate can be turned into disinformation for Big Data’s algorithms.

Some of the options for producing obscurity include: referring to people without tagging them; deliberately misspelling their names or alluding to them through contextual clues; sending screenshots of a story instead of linking to it directly; and hate-linking, which introduces noise into the system by making it appear that you approve of a story you are in fact denouncing.
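A toy example (mine, not from the paper) of why the misspelling and allusion tactics work: the cheap exact-match filtering that scrapers and indexers rely on catches the properly spelled name and misses the obscured variants.

```python
posts = [
    "Great talk by Jane Doe yesterday",      # exact name: easily indexed
    "Great talk by J@ne D0e yesterday",      # deliberate misspelling
    "Great talk by you-know-who yesterday",  # contextual allusion
]

TARGET = "jane doe"

def exact_match(post, name=TARGET):
    """The kind of cheap string matching a scraper or indexer might rely on."""
    return name in post.lower()

for post in posts:
    print(f"{exact_match(post)!s:<5}  {post}")
# Only the first post is flagged; the obscured references slip through,
# at least until the matcher is made more sophisticated.
```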

Disinformation campaigns may protect individuals from having entirely accurate profiles compiled about them. But from the perspective of the Big Data companies, such inaccuracies don’t even matter. They are dealing in general probabilities, not facts. They generally aren’t interested in targeting specific individuals, just types, and the “privacy” harms they are responsible for are at the level of populations, not persons. 
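A quick back-of-the-envelope illustration of that point (the numbers are invented): poisoning your own record with disinformation barely moves the population-level estimate these firms actually trade in.

```python
import random

random.seed(0)
# 10,000 simulated users; 30% of them "truthfully" exhibit some trait.
population = [1 if random.random() < 0.30 else 0 for _ in range(10_000)]
honest_rate = sum(population) / len(population)

# One individual poisons their own record with disinformation.
population[0] = 1 - population[0]
poisoned_rate = sum(population) / len(population)

print(f"estimated rate before: {honest_rate:.4f}")
print(f"estimated rate after:  {poisoned_rate:.4f}")
# The estimate shifts by 0.0001 -- invisible to a model trading in probabilities.
```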

Maintaining personal anonymity is not a defense against the harms caused by predictive analytics and Big Data population profiling — combing data with algorithms to detect patterns and correlations that can then be used to reshape the digital infrastructure that users experience. If your anonymized data is similar to someone else’s, you may subsequently be treated the same and subjected to the same prejudices. They don’t need to know your name to discriminate against you. It’s safer and more effective if they don’t. The authors point out that “even if one keeps a relatively obscure digital trail, third parties can develop models of your interests, beliefs, and behavior based upon perceived similarities with others who share common demographics.”
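Here is a hedged sketch of what that population-level inference looks like in practice (the features and segments are invented): no names appear anywhere, yet an anonymous visitor inherits whatever treatment their nearest demographic neighbors already receive.

```python
from math import dist

# (age, zip-code prefix, hours online per day) -> inferred interest segment.
# All values are invented for illustration.
profiled_population = {
    (34, 941, 2.0): "luxury travel",
    (22, 112, 6.5): "payday loans",
    (51, 606, 1.0): "insurance",
}

def infer_segment(anon_features):
    """Assign the segment of the nearest known profile; no identity needed."""
    nearest = min(profiled_population, key=lambda known: dist(known, anon_features))
    return profiled_population[nearest]

# An anonymous visitor who merely *resembles* the second profile
# gets treated like them, for better or worse.
print(infer_segment((23, 113, 6.0)))  # -> "payday loans"
```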

Focusing on obscurity thus seems a bit myopic in its emphasis on protecting the specific individual from being known. Likewise, individual action is largely useless for protecting oneself from population-level effects, from the policy decisions that stem from Big Data. 

Obscurity and Privacy by Evan Selinger, Woodrow Hartzog :: SSRN
