The ‘dark figures’: of AI, absent students, and safe playgrounds (in a de-bounded world)
what we can(not) seem to think about; what we seem unable to do (together, in global cities, now)
How are we to live together, somehow – with AIs, in cities, in classrooms?
Here, I’m trying to think through this conjunction of factors, in the absence of any certainty, clear argument, opinion, or ‘hot take’.
I offer this to you in the meandering spirit of Montaigne, in recognition of my commitment to thinking-writing and hitting publish (too soon), and in the knowledge that, thus far, what follows is not a style of thinking that the AIs I have seen tend to do – yet.
~
Yesterday, I was struck by footage broadcast on TV news. Putin, being led through the floodlit background of a conspicuously new children’s playground in Mariupol. There was I, on a rowing machine in a gym in Melbourne, working on my health and fitness from a safe and comfortable distance, nothing but blue skies in the autumn morning out the window to the right of screen. There was Putin, and his people leading him around, getting strolled through the Potemkin pretence of a city, having it pointed out where children could play safely now, on equipment that was new, or replaced, or hadn’t been bombed.
In class yesterday, I was teaching with ChatGPT. We were trying to “think what ‘it’ could think for us”, about indigenous dispossession in Australia. What semblance of knowing can the AI show?
So far, at least, what we notice is that it is very capable of generating decent-ish ‘entries’ on nearly any question we ask it, on the fly. What’s conspicuous about it, the trick to its seeming magic, is its swift generativity: no matter what you ask, it doesn’t flake out, say ‘I don’t know’, stammer, switch the topic to its own concerns[1], or vague out on an irrelevant tangent. So, at least, it seems more willing and less insecure, narcissistic and ‘all caught up’ than most of us are apt to be.
In sum: ChatGPT is programmed to unfailingly and ‘dutifully’ respond in real time, and does so, usually with something beigely coherent and generic.
Of course, as with even the best human interlocutors, the quality of the AI’s response depends greatly on what you ask it. So far, it can’t do deeper meaning; it does not understand (eg) justice. When you ask the AI pointed questions such as ‘can there be justice if the land was stolen?’, or ‘what is justice for the people of Mariupol?’, March 2023 responses have the pat coherence of a politician’s boilerplate.
And even if/when the AIs’ responses can tackle deeper-semantic and fundamental-normative questions, can they intend ethically, and can they orient their life’s actions and meanings around that intention or imagined vision (do AIs have utopias?), and form groups to carry that vision out, ‘as if their life depended on it’ (would they go on Crusades?)? No; they are nonhuman. Whatever emergent powers and interests they may have, they’re unlikely to be those of ethics and politics (and quandaries and conflicts) we humans can think about now, if we choose to. And ‘we’ ‘can’ ‘choose’ ‘to’.
On the ride home from class, I was thinking about the disquieting haste with which the newer gen AIs are being pulled down the pipes by the world’s deepest pockets – formidably resourced, organised, capitalised developers tied to the surveillance capitalist titans as well as the world’s leviathans and their militaries – and listening to Ezra Klein’s human take on it all. I was struck by what Klein was, I think, intending to impress upon me as one of his concerned, attentive human listeners: the ways we can no longer understand, control, or stop these nonhuman entities we are coaxing into operativity; the way they give us back the semblance of a coherent ‘understanding’ that seems intelligible to us, yet how this seeming, this semblance, belies something really other in the style of cognition operating ‘below the hood’, of which we sometimes understand fuck all, already. All of this is striking.
This leads back (again) to perennial metaphysical questions. These programmed systems do not have ‘Being’, nor do they ‘think’ as we can if we can (if we try). But they are actively engaged in an emergent mode of cognition – that is not thought or consciousness as we can experience it – that we have also programmed to evolve, change and learn (and whose learning is now evolving on its own, very fast indeed).
Here, I was struck by the heedless, ‘headlong rush’ of it all, powered by capitalism and geopolitics: between the US tech giants, duking it out for supremacy; and between the US and China… duking it out for supremacy.
(I wanted to listen back to Oppenheimer Analysis; I thought about the way the ‘prawns’ are kept in Johannesburg in District 9, until their military technology can be extracted and harnessed.)
~
What brings these together, for me, is the tension in what we can(not) seem to do in each case, what we seem to be (un)able to do, and all the dark figures here in our blindspots. In the background is the concerned question of human agency, and (indicatively), William Gibson’s professed nonunderstanding of the world now, and how (he half confesses), he kind of trainwrecked his novel, Agency, which grappled with and failed to grasp this present and its future.
And where that leads my reflection is in how these seemingly disparate worlds might overlap and interact in ways both mundane and surreal: given a majority urban global population who have to live, somehow, in the service economies of (post) industrial economies, where resources are distributed by job markets and global logistics, and redistributed by states and families.
First, back to Putin, and what he is (un)able to do.
~
As for Putin, he can order the Russian military to invade and occupy Ukraine, and Russian forces have enough ordnance that they are capable of wrecking cities like Mariupol by carpet bombing them. The same military is capable of making Mariupol ‘safe enough’ for Putin’s flying visit, to carefully chosen spots, and his people are capable of finding, building, and floodlighting a children’s playground for the cameras, to confect the tragicomic spectacle of reconstruction, safety, and care for children[2].
Of course, the playground in Mariupol, and Putin in the playground in Mariupol, also ‘says’ what Putin and the Russian military are unable to do: they are unable to provide protection, welfare, let alone happiness for the population – not even the Russian population in Russia. The Russian military could not take Kiev, only destroy some of Ukraine’s cities; and aside from propaganda, this is only a destructive force. It wrecks things, and can only dominate based on the real fear that it can and will do this, and by doing it.
Thus, aside from propaganda, as wreckers they are unable to build playgrounds and their implied world – a liveable future for young people. Putin’s Russia, a postmodern totalitarian mafia state in all its kitsch absurdity, is not capable of making anyone safe or happy (not even themselves). All they have is force enough to destroy nearly everything around them. And all this can achieve is a ‘stalemate domination’ that can then only work by keeping everyone unsafe and unhappy. Safety and happiness itself become threats here.
~
As for playing around with ChatGPT in the safe, climate-controlled classroom of Melbourne, the contents of Pandora’s Box, so far, are decidedly mixed.
The question it poses for me – so far – is less:
can the recursive operations of an algorithmically programmed system ‘think’ (are ‘operations’ ‘thinking?’)
but more….
can we think? (Do we tend to?)
Who(m) among us tends (not to) think, and do we humans tend to think – together, alone, alone-together – under prevailing societal conditions?
…and who(m) or what are the ‘dark figures’ of these tendencies of thinking?
I would split this into the following few communities, because like all futures, this one is unevenly distributed, and here already.
Firstly, there are those who are paid to work on the AIs, or invested in (funding, profiting from, and dominating by) their better working and development. As above, this ‘research and development’ is powered by capitalism and geopolitics, the need to gain a competitive edge in this gigantomachy, the impossibility of stopping and slowing down, now the ‘race’ is underway. This is shadowed by the ‘all or nothing’ ‘last giant standing’ aspects that make it accurately describable as a ‘death match’[3]: between the tech giants, between Silicon Valley and American Democracy, between the United States and China. From where I sit, this program is running, has already been let rip, cannot be stopped – can it be slowed down (please)? Moreover, we are unable to say or know where this will lead, while our intuitive understanding apprehends that this is, as a Trumpbot might say: ‘not good’, ‘very bad’, and ‘SAD’.
Are the R&D people, the AI people, are they thinking about what they are doing? If 20C history is any indicator, what proceeds here is usually not thinking. I think (there you go) of Arendt’s excoriation of America’s Cold War bureaucracy (Robert McNamara, really),
“The trouble is not that they are cold-blooded enough to ‘think the unthinkable’, but that they do not think. Instead of indulging in such an old-fashioned, uncomputerizable activity, they reckon with the consequences of certain hypothetically assumed constellations without, however, being able to test their hypotheses against actual occurrences” (Crises of the Republic, ‘On Violence’, 108, emphasis in original).
We thus remain in that venerable military-industrial lineage that is only-mostly the escalating entanglement where R&D turns into its domination applications; where the Manhattan Project became Fat Man and Little Boy; where the science of 1900s German chemistry became gas warfare, on April 22nd, 1915. Then as now, these groups, passionately invested in their project, and co-powered by narrow technocratic reasoning and (here) a cosmic and fuzzy technophilic libertarianism, appear as those strangely inverted gnostic nihilist believers in magic we know from myth and literature, from the Sorcerer’s Apprentice to J Robert Oppenheimer. I’m not sure I agree with Arendt (though as usual I enjoy the provocation in her thinking): they may or may not be able to think about what they are doing.
But thanks to capitalism and geopolitics – and some kind of demiurge that wishes or wills the bringing forth of whatever is technically possible, to hell with the possible consequences – they are unable to stop, they wish to continue. Their manias become science.
Secondly, there are those of us who are trying to work with the AIs in a number of different ways, with varying degrees of conscious intentionality, ethical concern and ‘success’. As might be apparent, my take on this for teaching is that ‘it’s here already’, and there’s as little point pretending that’s not the case as there is pretending that Mariupol is safe for children to play (and seeing as I haven’t been ordered to pretend, I won’t). We don’t really know where this will lead and what its settled meanings might be: as Zhou Enlai never said about the long-term effects of the French Revolution, ‘it’s too soon to say’.
So/then… the AIs are developing predictive power, but we have none and are unable to get any (including from the AIs, so far). The dark age might be upon us, but we are in the dark about this. Cybernetes was the helmsman, but we have no way, it seems, of steering this cybernetic emergence[4]. These are not trivial machines.
And as we are unable to deny the presence of the AIs or put them back in Pandora’s box, and must try to think with and work with them, so we are unable not to think with and work with them. This means they have grasped our working lives before we have grasped them, though they are not our familiars, assistants, pets, or friends. There is thus a new ‘they’ ‘in here now’ alongside ‘us’, whether we like it or not – like 21C weather, we must weather the weather, whether we like it or not. Is this death by Clippy, or just being subjected to some kind of Microsoft-mediated purgatory? ‘It’s too soon to say’.
Harking back to another earlier post: as AIs are already debounded and debounding, so we will live in the midst of grappling with the strange weather they produce as they transgress, ingress, and instantly render silly or strange the many things we humans continue to invest meaning in, or pretend to invest meaning in. Consider the essay, or the online quiz, as two among the ordinary assessment tools of ‘drive through’ neoliberalised higher ed: these are already-instantly absurd practices now… yet they will continue for… a few more years?
Thirdly and finally, I’d like to give some visibility to the mundane background of my current professional use of ChatGPT described above. I spent perhaps 30 minutes of class time yesterday asking the AI questions, on the basis that I was instructing the students to do so for their assessment, then think critically about the responses it gave, and whether these responses can actually help us understand the deeper meanings at stake (here: in indigenous dispossession, and indigenous people’s struggles for recognition and justice. But we could also ask it about Putin and Mariupol).
However, what is probably ‘dark’ to you so far, my still human reader, are the social facts that comprise the enabling condition of any possible classroom thinking. In the absence of people to think with, there is no thinking together, no learning, whether machine learning or human learning. I was in the classroom with six or so conscious and attentive students. There are 20 or so people in this small cohort. There are four who can’t turn up as they’re fitting study in around work commitments. This means there are a further ten – fully half the group – whom I have not yet seen and do not yet know, and who have not yet responded in any way to the couple of email reach-outs I’ve sent. We’re at week four in a twelve-week semester. If this were an election, it would be rendered invalid. In 2023, it’s just higher education.
In discussing crime statistics, it’s common to talk about the so-called ‘dark figure’ of crime: all the (wronging, bad, harmful, damaging) stuff that happens, everywhere, everyday that we don’t know about, because it never gets noted, written up, turned into a number, aggregated and correlated against others &c. In higher education now, many of us are dealing with the ‘dark figure’ of a substantial percentage of students in our cohorts – 30–50% – who are ‘not there’ (in ways big and small), some such that we are unable to meaningfully know them in any way. ChatGPT always responds to my summons. Yet this portion of those enrolled, they do not respond. We are not only faced with AIs; we are also living in a world where we are unable to reach – and teach – a substantial portion of the people who nonetheless enrol and pay (including, as here, about the possible effects of AI on higher education, and whether it can be of assistance to us in solving society’s basic problems of dispossession and disadvantage).
We can turn this dark figure into stories, to some extent. When people do make contact, their motivations and aspirations – the struggles and strivings of the ‘dark figure’ – turn out to be real and diverse. Yet in trying to teach this group that set of transferable skills known as critical thinking, I’m left wondering how on earth the living majority, the more than four billion of us now living in growing, still un-destroyed cities, are going to grapple with the AIs, who will easily be able to accomplish a lot of the low end customer service work that comprises ‘jobs’ or precarious ‘work’ in our post-industrial service economies. Moreover: those privileged to live in cities where children can play safely and legally in playgrounds (except when there are lockdowns): what happens once you add environments degraded by disaster and war to all of this?
There is the perennial ‘Skynet’ worry-fantasy about this in AI’s military applications, once the Putins and Xis and Bidens of this world order their arrays of AI applications against one another, sometime around 2030. Perhaps even in Ukraine, given it is effectively a proxy war now between the US and China. But already and in the more prosaic ways, with more mundane phrasing: what happens to cities full of service workers working with words and content in post-industrial service economies, once this rolls through, given all the other things we seem to be (un)able to do?
I wonder about this, in the dark; and we should think about this now, together. But are we (un)able to?
~
In all of this, AI and its dark figures provoke a question about people and their intelligence.
My students who still show: why are they there?
Those who do not: why aren’t they there? What do they think? Do they think about this? What are they capable of knowing and doing? And so?
The question of AI, for me, fundamentally remains a social question, to do with what we’re supposed to do with one another. What is the meaning of a human life in a capitalist system in a global city? Very often, my students who aren’t attending are absent because they’re working, preoccupied, gaming, or dealing with complex personal issues, including a substantial percentage who have mental health issues affecting their motivation and concentration. Many of those still ‘in contact’ are already struggling; what happens to such groups and individuals once AIs intervene where resources are distributed and redistributed – in job markets, states, and families? We are the people who somehow have to live together, and muddle our way through the Anthropocene. I’m not quite sure how this is going to work, if it’s going to work out, but I do think this is something we must actively think about together now.
1. Supporting the topic, proactively listening, asking relevant and appropriate follow-up questions, and knowing how and when to time an interjection: these comprise the higher-level skills of amazing listeners; also, super rare in our cultures! Derber’s great book is worth a read on this.
2. If you do a Google Images search for ‘Mariupol playground’, you will see the extent to which ‘destroyed playgrounds’ is not only a bombed fact of any armed conflict targeting civilians and cities, but also a news media trope (hence the ‘counter playground’ Putin’s people are showing us).
3. The term is Zuboff’s in her latest piece on surveillance capitalism, here. A key point I’d like to come back to is the ‘void’ she opens here in the opening pages. That is: China’s Party (and in its way, Putin’s mafia state) do have a ‘vision’ and a strategic plan, but as she captures in the opening pages:
“This condition reflects a larger pattern. From the dawn of the public internet and the world wide web in the mid-1990s, the liberal democracies failed to construct a coherent political vision of a digital century that advances democratic values, principles, and government. This failure left a void where democracy should be, a void that was quickly filled and tenaciously defended by surveillance capitalism. A handful of companies evolved from tiny startups into trillion-dollar vertically integrated global surveillance empires thriving on an economic construct so novel and improbable, as to have escaped critical analysis for many years: the commodification of human behavior. These corporations and their ecosystems now constitute a sweeping political-economic institutional order that migrates across sectors and economies. The institutional order of surveillance capitalism is an information oligopoly upon which democratic and illiberal governments alike depend for population-scale extraction of human-generated data, computation and prediction (Cate & Dempsey, 2017)…
…The political failure of the void forfeited the critical first decades of the digital century to surveillance capitalism. It deprived an increasingly connected world community of a clear alternative to the Chinese vision of the digital century. Without a path to a democratic and digital future, the democracies abandoned whole societies to new forms of digitally mediated violence from both state and market actors. Most treacherous is the potential fusion of these spheres in a digital-century incarnation of the surveillance state defined by unprecedented asymmetries of knowledge about people and the instrumentarian powers of behavioral control that accrue to such knowledge (Zuboff, 2019). Without new public institutions, charters of rights, and legal frameworks purpose-built for a democratic digital century, citizens march naked, easy prey for all who steal and hunt with human data. In result, both the liberal democracies and all societies engaged in the struggle to build, defend and strengthen democratic rights and institutions now stumble toward a future that their citizens did not and would not choose: an accidental dystopia owned and operated by private surveillance capital but underwritten by democratic acquiescence, cynicism, collusion, and dependency” (3).
4. This ‘observer position’ is already there in second-order cybernetics: given that we’re not dealing with trivial machines here (where inputs and outputs are aligned [coins go in, Coke comes out]), operations might drift toward values which are *entirely* unforeseen and unpredictable. And here, the irony of a ‘cybernetic’ (steered) system might be its unsteerability – or one that can steer itself.
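The trivial/non-trivial distinction can even be sketched in a few lines of code – a toy illustration of my own, not anything from von Foerster or the cybernetics literature; the machine names and the state-update rule are invented purely for demonstration:

```python
# Trivial machine: input fully determines output, every time.
# Coins go in, Coke comes out - an observer can infer the whole mapping.
def trivial_machine(coin: str) -> str:
    return {"coin": "Coke"}.get(coin, "nothing")


# Non-trivial machine: each input also silently updates an internal
# state, so the same input yields different outputs over time. From
# the observer's position, the input-output mapping appears to drift.
class NonTrivialMachine:
    def __init__(self) -> None:
        self.state = 0

    def step(self, x: int) -> int:
        out = (x + self.state) % 3
        self.state = (self.state + x) % 5  # hidden state change
        return out


if __name__ == "__main__":
    m = NonTrivialMachine()
    print([trivial_machine("coin") for _ in range(3)])  # identical outputs
    print([m.step(1) for _ in range(5)])  # same input, drifting outputs
```

Even this toy version makes the point: the non-trivial machine is still perfectly deterministic, yet an observer who sees only inputs and outputs cannot recover a stable mapping – and the systems in question are vastly less inspectable than five lines of state.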