A post at O’Reilly Radar by consultant Alistair Croll makes the case that everyone henceforth will be born into consolidated blanket surveillance and will be nurtured by a symbiotic relationship with their own data timeline. “An agent with true AI will become a sort of alter ego; something that grows and evolves with you … When the machines get intelligent, some of us may not even notice, because they’ll be us and we’ll be them.”
In other words, our cyborg existence will entail fusion not with an entirely alien entity or some sort of Borg-like hive mind that coordinates our behavior with a larger collective, but with a machine powered by our own personal data that represents itself as already part of ourselves. The alien algorithms ease themselves into control over us by working with data that seems to belong uniquely to us because it is about us (even though the very fact of its collection indicates that it now belongs to someone else).
Croll argues that this kind of data-driven social control will be “the moral issue of the next decade: nobody should know more about you than you do.”
That sounds all well and good, if you take it to mean that no one should use against you data that you don’t know has been collected about you.
But beyond that, the conflation of data with knowledge here is problematic. I don’t think self-knowledge can be reduced to matters of data possession and retention; it can’t be represented as a substance that someone can have more or less of. Self-knowledge is not a matter of having the most thorough archive of your deeds and the intentions behind them. It is not a quality of memories, or an amount of data. It is not a terrain to which you are entitled to own the most detailed map. Self-knowledge is not a matter of reading your own permanent record.
Knowledge comes with a point of view; there are no objective facts that have the same meaning no matter who is grasping them. So other people are always producing knowledge about me, from their perspective and for their own purposes, that I can never access. They will always know more about me than I do by virtue of their having a point of view on the world that I can’t replicate. The ability to impose your own self-concept on others is a matter of power — you can demand it as a matter of customer service. (This won’t change what they know and think about you, but it will allow you to suspend disbelief about it.)
The stakes in defining what “self-knowledge” means are bound up with the incentives for using social media and submitting to increased surveillance of various forms. If we accept that self-knowledge is akin to a permanent record, we will tolerate or even embrace Facebook’s keeping that record for us.
Social media sites are central to both data collection (they incite us to supply data as well as help organize what is collected across platforms into a single profile) and the use of data to implement social control (they serve algorithmically derived content and marketing while slotting us into ad hoc niches, and they encircle us in a panoptic space that conditions our behavior with the threat of observation). But for them to maintain their central place, we may have to be convinced to accept the algorithmic control they implement as a deeper form of self-knowledge.
But what if we use social media not for self-knowledge but for self-destruction? What if we use social media to complicate the idea that we could ever “know ourselves”? What if we use social media to make ourselves into something unknowable? To the degree that identity is a prison, self-knowledge makes the cell’s walls.
The most luxurious and privileged condition may be one in which you get to experience yourself as endlessly surprising — a condition in which you hardly know yourself at all but have complete confidence that others know you as they should.