Is ChatGPT All But Dead?

Courts, users, and authors have wised up fast, while the LLMs themselves continue to deteriorate

Microsoft invested $10 billion in OpenAI, the maker of ChatGPT, in order to make Bing sing again. What has happened since? Bing's market share has actually slipped slightly from its traditional 3%.

Companies are struggling to implement LLM-based solutions due to data management, security, and computing-resource issues, with implementation timelines now stretching out to five years or more. Worse, many companies worry that their internal energy-use targets are now under strain because of how much computing power LLMs require.

Google’s LLM continues to do embarrassing things, like citing Hitler and Stalin as great leaders, and touting the benefits of genocide and slavery. Meanwhile, Google search has become polluted by LLM outputs to such a degree that users are turning to Reddit and TikTok for search.

Ted Gioia assembled a list of the developing problems in a post published yesterday, from which I’ve picked a few items to highlight — some of which are exogenous threats to LLMs in general, some of which are endogenous problems with the models themselves:

In other news, a potential lawsuit from the New York Times could force OpenAI to wipe ChatGPT's training data and start over. If successful, the suit could also expose OpenAI to fines of up to $150,000 for each piece of infringing content. As Ars Technica reports:

NPR reported that OpenAI risks a federal judge ordering ChatGPT’s entire data set to be completely rebuilt—if the Times successfully proves the company copied its content illegally and the court restricts OpenAI training models to only include explicitly authorized data. OpenAI could face huge fines for each piece of infringing content, dealing OpenAI a massive financial blow just months after the Washington Post reported that ChatGPT has begun shedding users, “shaking faith in AI revolution.” Beyond that, a legal victory could trigger an avalanche of similar claims from other rights holders.

Techno-utopianism has been guilty of overreach for decades, but it is finally being caught with its hand in the cultural cookie jar — first, with the implosions of blockchain, crypto, and Web3, and now with the easy and rapid puncturing of the AI, LLM, and ChatGPT bubble.

Meanwhile, workers, authors, copyright holders, and financial regulators are reasserting their significant rights and powers.

Are we finally asserting some degree of control over these technologies? Will scholarly publishers also begin to assert more control over their technological, financial, and existential futures? Or will we continue to cater to techno-utopian hopes and silly models that don't factor in such realities?

