From the transcript of an Edge.org interview with Frank Schirrmacher:
information is fed by attention, so we have not enough attention, not enough food for all this information. And, as we know — this is the old Darwinian thought, the moment when Darwin started reading Malthus — when you have a conflict between a population explosion and not enough food, then Darwinian selection starts. And Darwinian systems start to change situations.
Not sure how this works in practice, how information adapts to claim attention. Or is it that the information that survives is meant not to describe reality but to compel attention, and the strategies it reproduces replicate not a structure of the Real but a structure of desire. The information that will shape how we understand the world will have nothing to do with reality per se and more to do with what can fascinate humans.
now you encounter, at least in Europe, a lot of people who think, what in my life is important, what isn’t important, what is the information of my life. And some of them say, well, it’s in Facebook. And others say, well, it’s on my blog. And, apparently, for many people it’s very hard to say it’s somewhere in my life, in my lived life.
The deep internal structures of identity are being externalized in computer networks, but why? Is it merely that they become more explicitly instrumental, operational when reified that way? Is it that their significance seems amplified, the self externalized is free to become grandiose? Is it the market rationality that we have absorbed from living under capitalism seeking to find application in the deeper psychological structures, so that it can dictate extra-economic decisions, work with attention and emotion as neoclassical economics worked with resource distribution? The social-networking developments seem to answer this question: How can we apply ideas of productivity and innovation to the production of the self?
when you have a generation — in the next evolutionary stages, the child of today — which are adapted to systems such as the iTunes “Genius”, which not only know which book or which music file they like, and which goes farther and farther in predictive certain things, like predicting whether the concert I am watching tonight is good or bad. Google will know it beforehand, because they know how people talk about it. What will this mean for the question of free will?
The field for our identity production is beginning to be circumscribed by the data we ourselves generate — we archive past iterations of ourselves, and these hem us in for our own supposed good. The original choices that set us on a particular path recede into the domain of original sin. This is a digitization of the cliché about the butterfly effect. If only I hadn’t bought that Adam and the Ants song on iTunes so long ago, I wouldn’t be this person that I am now.
maybe the future will be that the Twitter information about an uproar in Iran competes with the Twitter information of Ashton Kutcher, or Paris Hilton, and so on. The question is to understand which is important. What is important, what is not important is something very linear, it’s something which needs time, at least the structure of time. Now, you have simultaneity, you have everything happening in real time. And this impacts politics in a way which might be considered for the good, but also for the bad.
The time needed to hierarchize the significance of information appears to have collapsed. New information supplants the old before the old can settle into networks of relevance. What will be relevant to us must already be given in advance, predicted by the Googles and Amazons, etc., before information is disseminated. The simultaneity of all information means that we need a premade set of rules to create, at each moment, a bounded set that we can cognitively assimilate. We will see only what is preordained as important to us, but we will be convinced that we designed the filters for our own good.
From Douglas Rushkoff’s response:
I would argue we humans are not informavores at all, but rather consumers of meaning. My computer can digest and parse more information than I ever will, but I dare it to contend with the meaning. Meaning is not trivial, even though we have not yet found metrics capable of representing it. This does not mean it does not exist, or shouldn’t.
The danger is that we outsource the meaning to these systems that do the thinking outside ourselves. That we trust the meanings supplied by the hive mind, by the search engine, by the wisdom of crowds and so on, because we end up demanding quantified versions of everything, along with a quantified, data-driven sense of self, with immediate metrics to tweak and calibrate.
From Nick Bilton’s response:
Free will is not a prediction engine, it’s not an algorithm on Google or Amazon, it’s the ability to share your thoughts and your stories with whomever wants to consume them, and in turn for you to consume theirs. What is important is our ability to discuss and present our views and listen to the thoughts of others.
A very strange conception of free will as the ability to share things and impose one’s self on others. But that reciprocity is not free will; free will is a matter of not being circumscribed or determined by preexisting contexts. The cant of sharing is here used to distract us from the real problems posed by prediction engines and the archived self.