Facing the content monster, from the coinpumping parallel world of academia
a bridge post on what happened once LLMs were added to what was already adding up to the annulment of information, knowledge, and maybe truth, by the system 'we' serve
This is a small-ish bridge post, the initial purpose of which was to be a paragraph getting to the listicle I promised at the end of the previous post. Said listicle has been generated, but it ran long, and the cumulative post would have been ‘beyond email limit’, as Substack warns me from time to time. Here, I woke up feeling… not so much ‘blue’ as ‘grey’ at the darkness of the darkest-ever beige which is upon us. So this post is more like a kind of journaling of the feelings I get from the system I face as I head, anything but super pumped, into another year’s worth of work.
We now live in the 2020s, the era of Large Language Models (LLMs) and their generative transformers, now superadded to phone-based scrolling platforms (2010s) and the Content Monster (2000s). For another forthcoming post, but just quickly – we also live (together, somehow) in 2025. So let’s add that Nick Clegg has just quit Meta, to be replaced by a right-winger, while Zuckerberg has come online to shoot some more skeet (see ‘get low’, below) by getting Meta ‘back to its free expression roots’ (ie, rolling over for the alt-right, like a neo-Nazi’s golden retriever wanting a pat). Never has beige been so dark. 2025, the year where Darkest Ever Beige gets added to 3S.
2025: beige tragedy, black comedy. Get low; get low.
In the previous post I glanced at the content monster, and wondered what it means to write (a book), in this 2025.
I also do so as I face returning to the parallel world of academic knowledge production, which I’m thinking through in what follows. It’s difficult to convey how 3S this realm has become by this new year. Peer review had been sputtering for some time, but post-Covid, something has collectively… I want to say some combination of ‘ramped up’, ‘broken out’, and ‘busted wide open’. In short, most of us face internal conditions where – apace with social acceleration and societal involution – we’re expected to pump even more gold coins1, coining lemonade with an outbreak of busted lemons (and you can quote me on that).
The metaphorics you just read were tortured and preposterous – but the phrasing fit my feeling. At the same time, the pumping process of crowning the coins appears to have become ‘even more futile’ and onerous: value capture captures all the value, and in doing so, seems to destroy all the value it captures, while robbing everyone of the interest – but not the task. Like mining Bitcoin, producing ‘knowledge’ for the parallel world is an incredibly energy-intensive and time-consuming process; unlike mining Bitcoin, or spruiking your personal brand MemeCoin, it ‘produces’ something that is of little interest to most, viewable by a tiny minority of people, most of whom, like Melville’s scrivener, Bartleby, would prefer not to. The net effect is a paywalled mausoleum of inscriptions for all to (not) behold: a frozen space of indifference jealously guarded against incursion by the rare individual who might be curious about what another person wrote about X, Y, or Z. ‘No’, it says, ‘you’re not allowed to read it’. Never before has so much text been – now generatively! – generated so quickly and freely, alongside so little ability-and-inclination to actually get and read the stuff. Imagine explaining this tragicomic outcome to Gutenberg in 1454, the year he put his press to commercial use, “producing thousands of indulgences for the Church”.
But never mind our fates as individual nulled scriveners or Bartlebys: are the journals, is the system even viable now? For how much longer? As Chris has captured aptly here, in a paragraph prudence told him to cut from an inscription intended for this parallel universe:
“Underpinning and informing this… is a deep sense of doubt about the viability and value of academic journals as they presently exist and operate. It is increasingly difficult to rationalise the gap between the ideal of peer review and the tawdry form it often takes in practice. When properly undertaken, the process of peer review and revision can greatly improve the quality of a manuscript, but it is valid to ask how much this accords with what prevails in the increasingly involuted, neoliberal conditions that shape academic knowledge production. Once published, the vast majority of articles are destined for irrelevance, with most barely read, banished behind paywalls or lost amidst an ever-expanding information glut. This sorry state of affairs is largely overlooked and accepted, however, as the value of publishing for the author tends to be simply in being published. These are conditions of what CT Nguyen describes as ‘value capture’, whereby the purpose of scholarship becomes less about the pursuit of knowledge, and more about achieving metrics that count towards jobs, tenure, promotion and prestige. These challenging conditions are exacerbated by the rapid advances of large language models (LLM), which can replicate many of the tasks undertaken by academics. Insofar as there are tools that are effectively capable of completing all stages of the publication process from idea generation, literature review, data collection and analysis, through to generating text, drafting, formatting, reviewing and editing, these tools are creating immediate and deeply challenging questions for academic practice. Indeed, it is worth noting that the manuscripts that reviewers and editors tend to prefer, with an obvious logical structure and content that largely conforms to common methodological and theoretical practices, are the kind easiest for LLMs to replicate. While one response could be for scholars to consciously adopt more creative, less linear modes of argumentation, these are much less likely to make it through the peer review process. When these problems are placed within a wider context of socio-economic stresses, as well as ceaseless social acceleration, there are strong grounds for questioning the viability of the current model of academic knowledge production. Much of this points towards a world of more information, but less knowledge”.
Well put – I only wish to add one more thing here.
I’ve been dwelling a lot on the words ‘worthwhile’ and ‘good’ when thinking about the value of work – here, of the written kind, though we could reflect on whatever we do.
Is it worthwhile? Is it good?
This is a kind of short algo that can generate an LLM for an HI who is curious about the value of her work. In other words, by asking these simple questions, we can quickly learn responses to the question: why bother?
Where the parallel world of academia I’m focusing on here is concerned, it’s clear (for the above reasons) that most of it is not worthwhile, and not good. If we follow Bateson’s definition of information as ‘a difference that makes a difference’, it became clear to me by about 2021 that most of what might make a difference – those moments in an article where we actually speak the truth directly, and in our own language – is usually pointed out, quibbled with, requested to be cut, edited out, simply because, as Chris notes, reviewers and editors prefer the smoothed mediocrity of LLM-style text. Whatever you do, don’t say anything. Reticulate smoothly, but never tell the frank truth in plain English. Cut, cut, cut. Fade to beige.
It only becomes clear once you’re terminally involved with all the mining and minting and crowning and pumping of coins, but so-called academic knowledge production is actually/cumulatively anti-information, as well as anti-dissemination, and thus in a sense anti-truth, anti-phronesis. The whole thing produces mountains and reams of content, yet is somehow the most anti-Gutenberg system imaginable, producing thousands of indulgences for the Church that not even the Church reads. 2500 years to build on Aristotle, only to negate practical wisdom at every turn.
No one sat down and designed this tragicomic outcome, yet we’ve ended up with a system that is profoundly Luhmannian in its bleak irony: it elaborates itself to perpetuate itself by eliminating and annulling anything that would perturb its smooth operation. We become smooth operators by producing the content equivalent of silicone-based lube forever sliding out the orifice of the interface, into a black hole of nullity. Pace Luhmann, we need to remember: it is the system, and we are its environment; it feeds on us because we feed it, of course, so we can continue to feed and maybe get promoted – so it continues to expand and entrench its terms of reference, so it colonises our lifeworld, so we serve it, and it serves nobody and nothing except its nothing and nobody. Pump and dump, churn and burn, mine and dine – then enjoy an after-dinner minting? Brüm brüm.
This is my idiosyncratic phrasing for the process of ‘academic knowledge production’, as hopefully disclosed by the GIFs in this post. What might not be clear to all readers is that, more or less, academics get one ‘point’ (read: coin) for each publication they produce, and at most universities, they have to produce X or Y # of coins/year in order to remain competitive for promotion (on the carrot side), and in order not to be buried in teaching, marking and admin (on the stick side). The minority who are brainbendingly good at pumping coins usually become research scholars, whose job it is to pump coins and only pump coins. Long story short, they do this ‘as’ the confluence of their own careers/stats and the university rankings, which are based on a very complicated working out of X # of coins and Y amount of impact/citations and so on. Pump and pump and pump – and dump. Then pump. Brüm brüm.