The Artifice of Intelligence: an introduction to the research project on AI and algorithms
This is a slightly edited version of my presentation of the research project as part of the study days at Rietveld Academy.
A caveat before I begin: I tried to keep this presentation short. I will use it mainly to talk about my approach to the theme of Artificial Intelligence and the questions and problems central to my research. I am still discussing with Critical Studies at Sandberg Institute the exact form the project will take, as well as its outcomes.
In Hollywood and the End of the Cold War: Signs of Cinematic Change, Bryn Upton writes:
Communism was not the only focal point for American anxieties. During the Cold War, the American people also had an uneasy relationship with intellectuals: on the one hand, increased technology and access to higher education were allowing more and more people the chance to demonstrate their intelligence; on the other hand, being an intellectual was often equated with being elitist, anti-American, effeminate, or subversive. Thus, it is not surprising that a new supervillain called Brainiac should appear in Superman comics in 1958. Brainiac is similar to Ultra-Humanite in his usefulness as a stand-in for intellectuals corrupted by the quest for power, responding to lingering postwar anti-intellectualism that was still a part of American popular culture at the end of the decade.
Brainiac was the first posthuman villain in pop culture history. Half man, half machine, Brainiac embodied the unease that the common man felt toward the thinking classes: academics, public intellectuals, writers and artists who were increasingly engaged with new ways of thinking. It is no coincidence that these anti-intellectual anxieties peaked simultaneously with the nascent civil rights movement in the US and the decolonial efforts in the old Empires. 1958 was the year of our first artificial intelligence supervillain, but it was also the year that Morocco declared independence from Spain and Guinea from France. The anxieties around new ways of knowledge production became embodied in an evil machine bent on destroying everything good that Superman represented. Paradoxically, sixty years later, artificial intelligence continues to elicit similar cultural anxieties: a day doesn’t go by that algorithms, databases and different forms of machine learning are not in the news due to their use for political goals, surveillance, control and discipline. Much has happened in the sixty years since and yet, humans continue to be perplexed by machines.
But what does “Artificial intelligence” mean? I would like to complicate the notion of artificial intelligence by, instead, referring to “the artifice of intelligence”. IQ testing was first developed from the research of the English Victorian statistician Francis Galton. Through the publication of his book Hereditary Genius in 1869, Galton spurred interest in the study of mental abilities, particularly as they relate to heredity and eugenics. He attempted to estimate the intelligence of various racial and ethnic groups based on observations from his and others’ travels, on the number and quality of intellectual achievements of different groups, and on the percentage of “eminent men” in each of these groups.
In 1905, building on Galton’s work, Alfred Binet and Théodore Simon developed the first IQ test “to measure the difference in intellect between races”. Since its very first scientific conception, intelligence has been inextricably tied to notions of race, gender and geography.
One might argue that we are living in a world where these notions no longer hold value. However, a mere two months ago, in the heat of the Dutch municipal election campaign, mainstream media devoted extensive space to discussing the IQs of Black people and migrant groups in the country. The ideas of a 19th-century Victorian statistician, still shaping the political landscape of Europe.
It is in this rarified context that I want to complicate the notion of discussing the intelligence of machines when human beings of color are still not granted full intellectual capabilities. Intelligence, then, is neither a natural nor a neutral category; it is a tool of social, cultural and racial demarcation.
In the project I am developing with the Critical Studies department, I would like to take this “artifice of intelligence” as a starting point to explore how our bodies become the learning materials of machines: our bodies are the data that feed the surveillance apparatus. The databases that feed the algorithms of the surveillance State operate on the body through facial recognition, voice recognition, the prevalence of cameras in public spaces, etc. Through this constant intervention, the State AND corporations create demographic data that classifies us as suspects, criminals, potential perpetrators or dangers to public safety. And here, I would like to make an aside to recall my previous point: there is a Venn diagram that illustrates how the same people whose intelligence is constantly under discussion are also the first suspects in these surveillance databases: the immigrant, the Black person, the refugee, the former colonial subject. Their mere presence in public feeds the databases of discipline that train the machines to predict our behaviors with algorithms.
In turn, this data can travel freely across borders and be used to train machines in a transnational network that can span continents. Our bodies, as data, can transcend the geographic limitations imposed by border controls, passports or citizenships in ways that are not afforded to the same human beings who generate the data. The refugee, denied a residence permit that would allow her to visit relatives in another country, can very well become the dataset that trains the machine to identify specific physical characteristics such as hair structure, the use of specific garments, body language, intonation or a certain way of “being” while in public. Our bodies as data, our intelligence under constant inquiry, always suspect, always a subject of debate. We become points affixed to a geography, unable to travel across borders while we simultaneously teach machines how to be afraid of us.
This artificial intelligence, however, is trained following tried and true patterns of resource accumulation: resource extractivism applied to the accumulation of data and the intimate details of our lives.
What is extractivism? I will use the commonly accepted definition as a starting point for this exploration:
Extractivism is the process of extracting natural resources from the Earth to sell on the world market. It exists in an economy that depends primarily on the extraction or removal of natural resources that are considered valuable for exportation worldwide. Some examples of resources that are obtained through extraction include gold, diamonds, lumber and oil. This economic model has become popular in many Latin American countries but is becoming increasingly prominent in other regions as well.
The news lately has been inundated with the use of what I like to call “intimate data” for political purposes. Intimate data is not merely private data but the kind of data made up of the intimate details of our lives: our relationships, our likes and dislikes, our social connections and sentimental attachments. Cambridge Analytica extracted this data from Facebook profiles and indexed it, creating voter demographics that can predict behaviors based on past preferences and ideological affiliations. Our intimate data has become a resource that can be commoditized, extracted, sold and utilized in very similar ways to other tangible commodities. Again, our bodies as data. “We” become a resource that can be extracted, categorized, indexed and used to train machines, sold to any interested party that can then repurpose this intelligence to sell us products or influence our political outcomes.
None of these extractivist practices have happened with our consent, though. So here is another point of exploration: how do we teach consent to machines? In most discussions around machine learning and artificial intelligence, consent is hardly ever mentioned. As we move into more complex situations where algorithms make ethical decisions on our behalf, how are we going to train the machines to understand consent beyond yes or no binaries? Who is going to be teaching this consent to algorithms when our entire culture is regularly interpellated due to the non-consensual nature of our very human interactions? Can a culture based on the non-consensual extractivism of intimate data ever produce ethical robots?
I started this brief presentation by discussing how the first posthuman supervillain embodied our cultural anxieties about intellectuals. In his most recent incarnation, released earlier this year, Brainiac is a powerful artificial intelligence consuming entire planets and civilizations. Much has happened in the sixty years since we first met him and yet, we continue to be anchored by some of those same anxieties. In my research, I hope to explore the cultural and political implications of creating algorithms and artificial intelligences that continue reproducing our very own flawed models of human organization. The machines, after all, can never be very different from the cultures that create them.