'What Wotan Wants': how to make a – ? – we won't know is deadly, or fun, or janky, but probably exploitative and stupid, in five easy recursions
rule of recursions in HI-driven podosphere hype, 'Mechanical Turks', global logistics, and some ranty bits on the nontrivial chance of domination by planetary stupidity
As of April 2023, two things seem noticeable about the Anglocapitalist AI discourse that I glimpse.
Firstly, we appear to be hitting peak hype. I say this based on my podcast feed, which is flooded with episodes featuring guests promising hot takes that *finally* make sense of it all.
Secondly and relatedly, this follows a metastable year-on-year pattern. That is to say, there is a tendency for the podosphere to gush in, flood, pump, then dump any given year’s Topic: rewind to April 2022, and it was Ukraine; April 2017, it was the Trump Presidency; April 2020, covid onset. &c &c.
Thus, using Luhmann’s systems theory just a little to observe the unfolding of distinctions, we can see how HIs (trying to bring forth AIs) are currently involved in their own open complex system, whose communications, as a ‘rule’, tend to drift in the following fourfold way.
There Will Be Hype (just for one year): we cannot predict what this year’s hype topic will be (even in Jan-Feb), but we can predict there will be a hype topic (usually emerging by Mar-Apr), and it will really, really preoccupy ‘casts and guests – for a limited time, usually of less than a year. This is because HI attention is fickle, and tends to ‘swarm pile’ onto the hot button topic, ‘knowing’ it only has currency in its neural network for a limited time, and that the hottest take will have made some career inroads. May the hottest take win (but only for a limited time).
Swerve: topics and events ‘swerve’ in some direction, usually due to terminal boredom, or because the topic is replaced by an extrinsic factor no one was paying any attention to or predicted in any way (think: Chicxulub, for the dinosaurs). This is because the world itself is a nontrivial horizon of events ‘outside’ the system of communication, that we continually try to trivialise, in order to ‘command’ and ‘control’ it. Second-order systems theory is totally woke to this; first generation Wieners, like Norbert, and John (von Neumann), tend to be better at the maths and not as good at reflexivity or awareness. John von: “All stable processes we shall predict. All unstable processes we shall control.”
Eigenvalues emerge: over time (3-6 months down the track), discourse will have drifted toward a metastable set of ‘positions’, which, for most HIs in most cultures and media ecosystems, typically binarise and split into for/against camps, usually based on one group’s (undeclared) interests, values, stakes, and book contract. In another dialect: the paranoid/schizoid nature of human psychopolitical apprehension tends to overwhelm indeterminacy, ambiguity and ambivalence, producing a false for/against, in/out, good/bad clarity we overidentify with and invest in (as a salvational or dooming narrative), along with our in-group. Where tech is concerned, the usual binary is hyping and boosting (tech good!), or dooming and damning (tech bad!).
Oops (the world again): usually about a year on, a ‘completely unpredicted’ event happens, at which point, the New Topic will emerge, and the HIs will go and swarm that, making a new set of confident pronouncements about it, thus also proving a kind of ‘cycle’ in play. We ‘know’ there’s a hype cycle, but this itself is seldom acknowledged in discourse, because we are usually passionately absorbed in the truthiness of the new hype topic.
Thus, if you’re sick of discourse about AI, don’t worry, just wait a little. In April 2024, you can look forward to not hearing about AI (even if you’re still worried about it, and rightly so), because your podcast feed will be swamped with, I dunno, the US presidential candidates and election, the imminent invasion of Taiwan, the ‘unforeseen’ devastation wrought by human-to-human bird flu, the ‘unforeseen’ devastation wrought by the implosion of global bond markets, or, perhaps, whatever Clippy tells us we are allowed to be interested in, from now on.
Or, if the planetary sourdough of some Silicon Valley bro’s self-programming AI really does go bad banana breadz and ‘gets out’ of the lab, maybe our phones and internet and electricity grid won’t work, leading to the tragicomic irony of 2024 being the year in which, exceptionally, no one was capable of talking online about the scary bad new hype topic, as a supply shock no one can communicate about hits us, and none of us can get food or fuel, after 4-6 days. These are just a few nontrivial probabilities for what we can look forward to – next year.
But in the meantime, We Need to Talk about AI... just a little longer. In what follows, I try to think about what the problem is, in each case, and why that’s a problem. The methodology here follows that of the Exquisite Corpse; thus by the final bit, I’m hoping to have disclosed ‘what rough beast’ is slouching toward Bethlehem to be born here (before I try to hit publish, around midday).
Mechanical Turk, one
In his recent piece for the New Yorker, Cal Newport did his usual good service to the Anglophone upper middle class and clearly explained AI for us, to us. Or, more specifically, he gave an illuminating descriptive analysis of the kind of ‘mind’ that text-generative, prompt-based ‘bots like Chat GPT do/do not have. It’s worth a read in full, but as often (spoiler imminent), Newport manages to ‘give it’ in the final paragraph, where he writes:
“Imitating existing human writing using arbitrary combinations of topics and styles is an impressive accomplishment. It has required cutting-edge technologies to be pushed to new extremes, and it has redefined what researchers imagined was possible with generative text models. With the introduction of GPT-3, which paved the way for the next-generation chatbots that have impressed us in recent months, OpenAI created, seemingly all at once, a significant leap forward in the study of artificial intelligence. But, once we’ve taken the time to open up the black box and poke around the springs and gears found inside, we discover that programs like ChatGPT don’t represent an alien intelligence with which we must now learn to coexist; instead, they turn out to run on the well-worn digital logic of pattern-matching, pushed to a radically larger scale. It’s hard to predict exactly how these large language models will end up integrated into our lives going forward, but we can be assured that they’re incapable of hatching diabolical plans, and are unlikely to undermine our economy. ChatGPT is amazing, but in the final accounting it’s clear that what’s been unleashed is more automaton than golem”.
Reading Newport here, I was immediately reminded of the opening paragraph from Benjamin’s ‘Theses on the Philosophy of History’:
“The story is told of an automaton constructed in such a way that it could play a winning game of chess, answering each move of an opponent with a countermove. A puppet in Turkish attire and with a hookah in its mouth sat before a chessboard placed on a large table. A system of mirrors created the illusion that this table was transparent from all sides. Actually, a little hunchback who was an expert chess player sat inside and guided the puppet’s hand by means of strings. One can imagine a philosophical counterpart to this device. The puppet called ‘historical materialism’ is to win all the time. It can easily be a match for anyone if it enlists the services of theology, which today, as we know, is wizened and has to keep out of sight”.
Tapping a soft mallet or brush against Benjamin’s incomparably allusive resonance, in light of Newport, allow me to begin to make out this Exquisite Corpse by striking the first of a few momentary LARPs and considering the hunchback (and the puppet). I’ll return to the ‘wizened theologian’ in the conclusion.
It’s quite interesting to me that very little of the broader Chat GPT discourse focuses on the fact that it’s Microsoft. I raised this in a previous post, and I’m writing this text into Microsoft Word, which is still my chosen word processing software, both because it has a global de facto monopoly on AngloWestern word processing (remember WordPerfect?), and because the previous VC of the university – who apparently used to be a software salesperson, for Microsoft – switched us from Google Tools to Office 365.
Leave aside the boost/doom/too fast/let’s stop/we can’t stop discourse on AI for a sec and notice what this, just this, actually (probably) means for everyone in my organisation (and probably yours): not only must I/we grapple with socio-culturo-probable ‘low rent’ uses of Chat GPT (and similar) by students, we will, most likely, be trapped in an organisational-professional ‘software suite’ situation where we have no choice but to use, engage with, and negotiate some kind of agencement and rapport with whatever ‘Mechanical Turk’ the ‘hunchback’ pushes down the pipes at us. So here, ask not what your ‘bot can do for/to you, but what you are very likely to be doing for your ‘bot, given that this is Microsoft. Think of what life was like during covid, if you had to use Teams every day. Most likely, then, the particular AI future for HIs like us will not be ‘magical’ or automate drudgery away; it will just add some clunky, janky, mandatory-use platform into our already administratively bloated heteronomy, and the licensing fees will be ‘up the ass’, as they’re whatever the ‘hunchback’ decides, forever. There will be other, better ‘bots written by other companies, but (y)ours will have already signed on with the hunchback.
In sum, forget about the vertiginous terror of ‘takeoff’ for just a wee bit, think about our incredible professional dependence on the Tech Titans, consider the cultural styles of those organisations, the tics of their software, and how those companies make a profit from us, and notice where this will almost inescapably lead.
Mechanical Turk two
After spending a moment with the Mechanical Turk of Benjamin’s oblique, fabular Marxian eschatology, click to the hard contrast: Amazon’s Mechanical Turk.
Amazon Mechanical Turk (MTurk) is a crowdsourcing marketplace that makes it easier for individuals and businesses to outsource their processes and jobs to a distributed workforce who can perform these tasks virtually. This could include anything from conducting simple data validation and research to more subjective tasks like survey participation, content moderation, and more. MTurk enables companies to harness the collective intelligence, skills, and insights from a global workforce to streamline business processes, augment data collection and analysis, and accelerate machine learning development.
While technology continues to improve, there are still many things that human beings can do much more effectively than computers, such as moderating content, performing data deduplication, or research. Traditionally, tasks like this have been accomplished by hiring a large temporary workforce, which is time consuming, expensive and difficult to scale, or have gone undone. Crowdsourcing is a good way to break down a manual, time-consuming project into smaller, more manageable tasks to be completed by distributed workers over the Internet (also known as ‘microtasks’).
On first glance, curious, bazaar (virtual, simulation); on closer examination, sad, bizarre (dissimulation).
I mean, like: on first blush, Mechanical Turk seems to be about getting Amazon’s AIs to whisk away the shit or bullshit aspects of contemporary office work (like SalesForce, only not about sales and without a ‘force’), and somehow-or-other send a set of ‘solutions’ back atcha, more efficiently-cheaply-flexibly than a staff of temps could. As it says, ‘global, on-demand, 24x7 workforce’. As it says, ‘Mechanical...’
Look a little closer.
MTurk’s microtasks secrete a microaggression.
MTurk is actually about getting non-mechanical ‘Turks’ from the meatspace to do all the shit bits (and see here my own attempt to explain the distinction between shit jobs and bullshit jobs), leaving those employed by the core company to draw much greater benefits by doing much cushier ‘work’.
To be specific, what it tends to actually mean is paying Kenyans two dollars an hour to do content moderation for Chat GPT.
With the previous section’s points on specific, durable corporate cultural patterns and styles in mind, we need to remember that Bezos wanted very badly to call it Relentless, and was talked out of it by friends, who (unlike him!) clocked the vibe it gave off. Let’s be honest, a part of us *admires* Amazon’s ruthless efficiency nearly as much as ‘we’ enjoy one-click fulfilment: Amazon are ‘the best’ at doing American capitalism. AmIwrong?!
(here’s where I make you regret agreeing so quickly)
Yet as Adorno and Horkheimer always maintained, the affinities between the norms of 50s America and 40s Germany were always much closer than most Americans would enjoy admitting. At its dystopian horizon (which is pretty close, wherever it’s Bezosism, wherever it’s Relentless), what we actually end up with, in the absence of a countervailing source of collective action, is something little better than the German V2 rocket effort in the later stages of the war[1], the ‘facade’ of an ultra high-tech affordance hiding the grim reality of Jewish slave labour, in windowless factories where people were worked to death, with the corpses of the deceased hung from the walls to coerce labour from the still barely living.
The post literate, suburban GorzWorld version of this is precisely what George Saunders managed to nail (then hang up) with the Semplica Girls.
More broadly, the grim example of people nearly killing themselves to deliver the sovereign consumer their whim – whether a weapon system or a daughter’s birthday present – is just the nth degree of all modern industrial-scale production, all the more so in the wake of ‘whatever’ length supply chains enabled by containerisation.
If you’re privileged enough to be a sovereign consumer who can purchase services from K Mart or M Turk, you don’t have to notice the exploitation and suffering of workers (ethics and regard for the suffering of others is itself an individual consumer choice, as well as a niche market), because it has been totally separated from the insulated, profit-taking owners and efficiency-seeking clients.
We VaporSpace end users (here: using Chat GPT) see nothing, know nothing, and need care nothing for the Mechanical Turk BioKenyans at the other end of the supply chain, in this global division of labour.
To quote Zizek:
“I remember a cruel joke from Ernst Lubitsch's To Be Or Not to Be. When asked about the German concentration camps in occupied Poland, the Nazi officer snaps back:
‘We do the concentrating, and the Poles do the camping’”.
What I’m trying to draw attention to here is that the ‘front’ or ‘facade’ of Benjamin’s chess match neatly hides – and so entrenches, as it ‘secretes’ – the perennially negated suffering and exploitation of living people, those who have no real choice but to serve as the Deliveroos and fungible labour inputs maintaining profitability for the many post-industrial services of our servile service economy.
Perhaps this is actually the problem with Benjamin’s metaphor, if it’s to be a truly Marxist thought (time for some real talk about real Turk): the Turk is seldom actually mechanical, and (in the 2000s and 2010s) has usually been the urbanising daughter of peasants from China, Vietnam, Bangladesh, or the Philippines, whose new factory job requirements include her pretending to be a “puppet in Turkish attire and with a hookah in its mouth”, because she is trying to labour her way into a slightly better future than that of her parents by letting herself get inserted into a global, on-demand, 24x7 workforce where Amazon has inserted itself into her, in exchange for what is calculated – down to the lowest cent – as the lowest possible wage that can still command the labour of that population.
Back ‘here’, VaporSpace’s hard shiny surfaces do not sparkle up and down the supply chain; the affordances of VaporSpace, the ‘back end’ of every stack, no matter how high tech, tend to be assembled by actual people holding a much shittier allotment in the global birthright lottery.
Those VaporSpace surfaces won’t shine themselves.
Back to work, ‘Turk’.
...and back to AI in the podosphere here: most podcast discussions spiralling into AI doom-coursing seldom wanna talk about the dirty back end (even less than they wanna talk about our actual 4+ decade dependence on Microsoft’s second-rate products).
We should talk about the dirty back end much more, and we should go back to those nice Marxist questions again and again: who benefits? Who owns the ‘bots?
Peter Frase’s Four Futures, and Alec MacGillis’ Fulfilment, give us two different styles of entry point into how we might start thinking about this *every* time we talk about AI. Allow me to abuse the pull quote function one last time:
Think ChatGPT, think BioKenyans.
Use ChatGPT? Use BioKenyans.
global logistics one
Alongside Frase and MacGillis, another author whose work really deserves close attention is Christopher Mims. Like Marc Levinson, Allan Sekula and Rose George (all of whom he has read carefully), Mims’ work in Arriving Today is to ‘see the whole’ that emerges when we look from end-to-end across a contemporary supply chain for a ubiquitous object (a USB charge cable). So although the book isn’t *about* AI – and certainly does not get caught up in hyping and boosting, or dooming and damning – different styles of networked computing-enabled, sensor-enhanced, teraflop-devouring ICT affordances do crop up, and are of great importance in making and keeping the lazy convenience of the one-click sovereign ‘on top’. Cos like: what if we had to, like, go to a shop to buy food (during a weather event?!), wait a few days to have a hookah delivered, or stitch our own ‘Turkish attire’?
One of the places where Mims’ account takes us further than Levinson and George did (with containerisation and shipping, respectively) is the nuance and detail it gives to trucking. Like MTurk, trucking actually relies overwhelmingly on human labour, which, in the case of the US, tends to take place on terms so exploitative and unprofitable (for truckers) that they’re barely above those of the indentured. In his podcast appearances and op eds, Mims is at pains to point this out to us: what was framed as a ‘labour shortage’ in trucking was actually a labour market so unappealing that it could not attract and retain (m)any truckers. The same thing is starting to happen with sessional marking at universities, by the way.
The irony Mims notices with trucking (given its remuneration and status) is that it’s actually not a bullshit job: doing it well, and safely, and ‘backing it up’ day after day, involves a high level of skill, precision, discipline, and patience, as well as an ability to withstand hardship and loneliness. Trucking, as its rhyme suggests easily to any dad, is fucking hard work, yet unlike remote mining or offshore oil rigs, these rigs don’t command the FIFO-style pay that acknowledges actual terms and conditions of labour. Thus trucking is visited by a compound irony, insofar as it’s *because* we need trucking so much, and *because* we need so many (trucks and truckers), ‘we’ ‘can’t afford’ to ‘pay them’ ‘what they’re worth’ (like childcare). As the pandemic revealed, they are some of the most essential essential workers. If 90% of everything in VaporSpace comes by way of shipping, in a global supply chain, Mims reminds us, 100% of it comes by way of trucking. This is really worth thinking about, but I can almost guarantee that nearly none of the AI hype crowd will.
As Mims also explores, trucking in the US has tended to remain independent or be held by small companies, as it does not scale well (in strongest possible contrast to global shipping). At the same time, there are (of course) a number of US-based tech startups who are (of course) racing one another to automate trucking. We’re still a long way from that incredible scene in Logan with the horse trapped on the freeway with fully automated truck-containers rushing by (kind of the most ‘Stiegler-verse’ thing imaginable), but we need to clock that Hollywood clearly imagined this near future ‘for us’ by 2017, and start-ups are out there as I type trying to LARP it into existence on America’s freeways.
As it turns out, it’s really hard to actually create driverless trucking (hmm… maybe this is like… a fantasy… that *we* have?). It requires years and years of development, and huge investment in energy-intensive computer processing of an array of sensors: you need computing as powerful as the world’s most powerful supercomputer just a decade ago, you need to ‘teach’ the truck to drive the route with a Borgesian map bigger than the territory it travels over, and you need to code and code and code and then check and check and check and check everything, until it’s 99.9999% reliable. ‘Cos as soon as you put people – or escaped, spooked horses – into the picture, driverless will struggle, where a trucker’s experience, common sense, and intuitive, split-second response to an unforeseen and rapidly emerging reality will handle this without needing all the electricity and capital ‘fed into it’ first. The gain we get, if and when we do, is the freeway truck train, a set of trucks capable of travelling in tandem, at ‘tailgater’ distance from one another, without needing to sleep, piss, or have a coffee.
With chops as a WSJ-style ‘research impeccably, explain lucidly’ journalist[2], Mims is a long way from the low-hanging Marx/socialist ranty moralising that, say, Michael Moore or John Pilger would go to without even pausing for breath. However, let me put on a blonde wig and/or a trucker’s cap (or some Turkish attire, if you prefer) and play this role for just a tic.
Like: what is a society that exploits human truck drivers for decades (and doesn’t see that as inefficient and wasteful, even from a strictly zweckrational perspective), then turns around and tries to replace them with driverless trucks, which need vastly more resources and capital (than it would have taken to pay BioTruckers a decent wage) to bring them to a point where they could maybe get from California to Minnesota a day or so faster... just so we can have our hookah on Monday, instead of Tuesday (and doesn’t see this as a screwed up set of priorities)?
The pull quote point I’d like to take from this meaty, exploited, essential point in the global supply chains we’re all dependent on is to think:
is this any good, actually?
Even once all the technical hurdles have been surmounted, having consumed vast amounts of OPM and electricity (in an era of catastrophic climate change and institutional corruption and involution), once we ‘had it’, what would ‘we’ ‘have’, actually, and will it have been worth it?
On this point, it’s worth thinking back a decade to the 2010s Silicon Valley disruptors, and especially Uber, the ‘always be hustlin’ start up from the pre #metoo era of 'blurred lines’.
Looking back on Uber’s 2010s from the early 2020s, we notice a process of involution, of very little value added. For all the hype and pump in the early-to-mid 2010s, all ‘we’ got was slightly better taxis for a few years in Sydney[3] in the mid 2010s, and a new precarious workforce (that endures to the time of writing). Ubers got worse as they became more like taxis, and taxis got a tiny bit better as the companies embraced app-based hailing that was more like Uber. Eventually, Ubers were being driven by taxi drivers, taxis were being used as Ubers, Ubers had become leased fleets of vehicles run and managed by cartels (in many cases, the same ones who were running the taxi companies), and riders, regardless of which vehicle they were waiting for, were waiting longer and paying more for a less nice vehicle driven by a driver who was paid and supported less than taxi drivers had been before Uber disrupted urban transport. It’s hard to see this involution as any kind of solution, a big win, or even, really, any kind of value add.
The point from Uber, a decade on, is that those caught up in the style of AI discourse now flooding my pod feed do what all of us do when we obsessively point and yap in one direction: they generate a big fat constitutive blindspot. Then as now, there’s not only an unwillingness to see corporate interest (and mandatory-janky software suites as outcomes), and the planetary necessity of exploitation (everyone can be a BioTurk, 24/7); there’s a constitutive inability to ask basic normative questions about whether this is a good idea in any case, tied to an unwillingness, almost a disability, to use ordinary common sense questions as a way of steering and stopping whatever great new solution is being LARPed into being[4], but that should just be nipped in the bud – because it’s actually wasteful and stupid, won’t get us anywhere, and takes scarce, precious, finite resources away from fixing our actual problems. To stick with the context of the US highway network: potholes, bad roads, degraded infrastructure let go by the selfsame capitalist totalitarianism failsystem.
In other words, when thinking about AI, we need to think about the actual division of labour in contemporary America, and its uncontrollable dynamic of galloping inequality (and bolted horses). This is a world of gaping regional inequality so beautifully captured by MacGillis, a world where people would rather pay to have a Baltimore house moved, brick by brick, an hour up the freeway to Washington DC, rather than pay a fraction of that to buy, keep, renovate, and live in it – in Baltimore. In this division of labour, the truckers (and the Amazon pickers) live in Baltimore, while up the road in DC, at Amazon’s new headquarters, someone is being paid much much much more to dream whatever new MTurk into being.
Before getting to the nth point of this stupidity (as we return and conclude by thinking about the wizened theologian, as promised), we should go back to Smil, because this passage, already quoted, really nails this division of labour.
“The other major reason for the poor, and declining understanding of those fundamental processes that deliver energy (as food or as fuels) and durable materials (whether metals, non-metallic minerals, or concrete) is that they have come to be seen as old-fashioned – if not outdated – and distinctly unexciting compared to the world of information, data, and images. The proverbial best minds do not go into soil science and do not try their hand at making better cement; instead they are attracted to dealing with disembodied information, now just streams of electrons in myriads of microdevices. From lawyers and economists to code writers and money managers, their disproportionately high rewards are for work completely removed from the material realities of life on earth.
Moreover, many of these data worshippers have come to believe that these electronic flows will make those quaint old material necessities unnecessary. Fields will be displaced by urban high-rise agriculture, and synthetic products will ultimately eliminate the need to grow any food at all. Dematerialization, powered by artificial intelligence, will end our dependence on shaped masses of metals and processed minerals, and eventually we might even do without the Earth’s environment: who needs it if we are going to terraform Mars? Of course, these are all not just grossly premature predictions, they are fantasies fostered by a society where fake news has become common and where reality and fiction have commingled to such an extent that gullible minds, susceptible to cult-like visions, believe what keener observers in the past would have mercilessly perceived as borderline or frank delusions”, Smil, How the World Really Works, ?4?
nth point
As for the wizened theologians, there’s not as much that needs to be said here. Okay, Kurzweil, whatever. But really, it’s the bro army of third-rate Kurzweils, cooking up ‘darkness sourdough’ (and giving it access to the web), that really bothers me in all this.
Chris has spent a lot of time listening to them; a patience I do not have. By and large, they are not deep and careful thinkers. Or, as Arendt would say: they do not ‘think’.
In a sense, VaporSpace has also colonised the minds of a small group of people, wielding incredible and fateful power over all of us, using some of the shallowest HIs available on the market.
In reading for an adjacent topic, I was thinking about Nazis (another habit of mine), and conspiracy theories (mea culpa), and Adorno… there’s a great bit in his recently published posthumous transcription of a speech he gave in 1967 in Austria on the ‘new’ right-wing extremism. It’s a wonderful place to end off, not just because of the emergence of the death drive in all this – as Prince says, some say man ain’t truly happy until he truly dies – but because of the fundamental stupidity of ending in a situation where we say ‘I’ll have what Wotan’s having, I want WOTAN’:
“I think this reference to anticipating terror touches on something very central that, as far as I can see, is given far too little attention in the usual views about right-wing extremism, namely the very complex and difficult relationship with the feeling of social catastrophe that prevails here. One might speak of a distortion of Marx’s theory of collapse that takes place in this very crippled and false consciousness. On the one hand, on the rational side of things, they ask, ‘What will happen if there is a big crisis?’ – and that is where these movements are attractive. On the other hand, they also have something in common with the type of manipulated astrology one finds today, which I consider an extremely important and typical socio-psychological symptom, because, in a sense, they want the catastrophe, they feed off apocalyptic fantasies of the kind that, as it happens, could also be found among the Nazi leadership, as documents show.
If I had to speak psychoanalytically, I would say that, of the forces mobilized here, the appeal to the unconscious desire for disaster, for catastrophe, is by no means the least significant in these movements. But I would add – and I am speaking especially to those of you who are rightly sceptical about any merely psychological interpretation of social and political phenomena – that this behaviour is by no means purely psychologically motivated; it also has an objective basis. Someone who is unable to see anything ahead of them and does not want the social foundation to change really has no alternative but, like Richard Wagner’s Wotan, to say, ‘Do you know what Wotan wants? The end.’ This person, from the perspective of their own social situation, longs for demise – though not the demise of their own group, as far as possible, the demise of all”.
[1] See Tooze’s description in The Wages of Destruction.
[2] I mean, like Carreyrou and his great book on Theranos, not like their op-ed schmoes.
[3] I say this empirically… there was a magical time when I went to Sydney and all my friends could suddenly afford to travel places, in a city where driving totally fucking sucks and everywhere takes an hour and taxis cost 100 bucks to get anywhere… of course, it was all VC paying the difference, propping up the fantasy, until Uber had a market share big enough that we were dependent and habituated, then they could fuck riders and drivers, and we wouldn’t blink.
[4] I just read a book on internet conspiracy cults, where the LARP got out of hand, then was listening to Quinn Slobodian’s great appearance on Bunga, where I found out how DEEP the LARP goes in terms of neoliberalism. Slobodian appears to be doing well what Mezzadra and Neilson seem unable to do without bamboozling this reader.