The Misleading Semantics of AI

If we describe it more accurately, the bargain and implications shift and shrink

Would you like to play a game?

Systems and imaginings called “AI” have been with us a long time in various guises. There have been numerous hype cycles: powerful scripts met databases for the first time, then were introduced to Big Data, which begat the useful child called “machine learning.” Now, large language models have been bred with “neural nets” (another misleading term) and pushed into the world as “AI” again.

“AI” is a term with problems and pitfalls revealed by even a minute of careful consideration and examination of assumptions, exactly the kind of time the hypesters don’t want us to spend.

Academia has been largely uncritical of the term and the hype, prompting a group of cognitive scientists to publish an open letter earlier this year. We spoke with two of them not long ago.

Publishers’ and editors’ reactions have ranged from mindless hype to deep skepticism, but in our industry the hype is dominating. Someone smells money, which is a necessary but not sufficient reason for making a business decision.

The bottom line of this post is an argument for more precise wording, so that we can perceive more accurately what these systems actually offer.

I’ll argue that, given the state of the technology, we shouldn’t use the term “AI” but instead the more accurate and descriptive term “large language model.”

The terminology downshift can be revealing.

It can also help avoid what at this point I think are PR and business mistakes, judging from the tittering going on behind the scenes after various AI-related public dances.
