Are We Losing Our Minds?
A new position paper warns against the uncritical adoption of AI in academia
A recent position paper by an international group of experts from the Netherlands, the US, Germany, and Denmark is well worth reading as an antidote to excessive AI hype, which seems to be deflating for many of the reasons explored in the work. That deflation is another sign that “the AI winter” may be just around the corner.
The paper’s audience is academia, including scientists and science practitioners, a group that also includes scientific publishers:
In this position paper, we expand on why universities must take their role seriously to a) counter the technology industry’s marketing, hype, and harm; and b) safeguard higher education, critical thinking, expertise, academic freedom, and scientific integrity.
Weighty topics, but such are the days we are in.
The authors know their stuff, and start with a shockingly good pull-quote from 1985:
The culture of AI is imperialist and seeks to expand the kingdom of the machine.
Yes, that is definitely still true . . .
The authors have plenty of ammo of their own, however:
AI has always been a marketing phrase that erodes scientific integrity and scholarly discussion by design, leaving the door open for pseudoscience, exclusion, and surveillance.
They shine a light on the definitional problems of the phrase “artificial intelligence,” noting that an artificial heart pumps blood, doing the job of the organ it mimics; AI does nothing comparable. They also note that the term “intelligence” is vague and loaded with a history of eugenics, racism, and ableism, so when we accept the anthropomorphic claims of AI, we make ourselves susceptible to being fooled by a system with problems at its core.
Terms like “false frames” and “garbage software” made me want to cheer while reading the paper, as the latter is a phrase I used just the other day to counter an AI enthusiast. After all, AIs can’t add reliably, can’t play chess without breaking rules stored in their own systems, and so forth. Complete garbage.
Describing AI as a “colonizing technology,” meaning it seeks to take over any field in which it’s used, the authors reject the “debasing and dismantling of expertise, and the dehumanization of scholars,” noting that AI proponents tacitly reject “self-determination free from industry forces.”
As for using AI as a crutch, they write, “LLMs do not improve one’s writing ability much like taking a taxi does not improve one’s driving ability.”
On a related note, one author quipped on Bluesky, “Correlation is not cognition.”
The authors also know we have to move forward, so they offer five principles for a better future in which AI remains viable but is put in its place:
- Honesty — don’t secretly use AI, and don’t make claims about it you can’t prove
- Scrupulousness — only use AI in well-specified and validated scientific ways
- Transparency — make all AI open source and computationally reproducible
- Independence — ensure any research is unbiased by AI companies’ agendas
- Responsibility — don’t use AI products that harm animals, people, or the environment
With AI skepticism mounting since GPT-5 landed with a thud despite tremendous hype, it’s great to read a position paper by people who treat AI as a technology, one that must be evaluated at every turn as a technology, and not a particularly good, compatible, efficient, inevitable, or safe one.
For academia, AI is part of a techno-utopian attack on the entire system. It’s not a great new technology designed to help us avoid chores. As one of their sources writes, “When you outsource the thinking, you outsource the learning.”
Hopefully, this position paper serves as a wake-up call.