Microsoft invested $10 billion in OpenAI, the maker of ChatGPT, in order to make Bing sing again. What has happened since? Bing’s market share is down slightly from its traditional 3%.
Companies are struggling to implement LLM-based solutions due to data management, security, and computing resource issues, with implementation timelines now extending out to five years or more. Worse, many companies worry their internal energy-use targets are under strain because of how much computing power LLMs require.
Google’s LLM continues to do embarrassing things, like citing Hitler and Stalin as great leaders, and touting the benefits of genocide and slavery. Meanwhile, Google search has become polluted by LLM outputs to such a degree that users are turning to Reddit and TikTok for search.
Ted Gioia assembled a list of the developing problems in a post published yesterday, from which I’ve picked a few items to highlight — some of which are exogenous threats to LLMs in general, some of which are endogenous problems with the models themselves:
- A US federal court ruled that AI work cannot be copyrighted—because “human authorship is a bedrock requirement of copyright.”
- AI is getting worse at doing math over time.
- AI is getting more sycophantic and willing to agree with false statements over time.
- The Federal Trade Commission is investigating OpenAI over “unfair or deceptive privacy or data security practices.”
- Book authors have filed a class action suit against OpenAI, alleging “industrial strength plagiarism.”
In other news, a potential lawsuit from the New York Times could force OpenAI to wipe ChatGPT and start over — if successful, the suit could also expose OpenAI to damages of $150,000 for each piece of infringing content. As Ars Technica reports:
> NPR reported that OpenAI risks a federal judge ordering ChatGPT’s entire data set to be completely rebuilt—if the Times successfully proves the company copied its content illegally and the court restricts OpenAI training models to only include explicitly authorized data. OpenAI could face huge fines for each piece of infringing content, dealing OpenAI a massive financial blow just months after the Washington Post reported that ChatGPT has begun shedding users, “shaking faith in AI revolution.” Beyond that, a legal victory could trigger an avalanche of similar claims from other rights holders.
Techno-utopianism has been guilty of overreach for decades, but it is finally being caught with its hand in the cultural cookie jar — first, with the implosions of blockchain, crypto, and Web3, and now with the easy and rapid puncturing of the AI, LLM, and ChatGPT bubble.
Meanwhile, workers, authors, copyright holders, and financial regulators are reasserting their significant rights and powers.
Are we finally asserting some degree of control over these technologies? Will scholarly publishers also begin to assert more control over their technological, financial, and existential futures? Or will we continue to cater to techno-utopian hopes and silly models that don’t factor in such realities?