“Openwashing”

Calling something “open” doesn’t mean it is — because there are limits, and because we don’t agree on what it means

OpenAI and other current technology initiatives are being accused of “openwashing” — using the term “open” to burnish their images with a kindly and benevolent shine while the initiatives themselves remain proprietary and inaccessible in significant ways.

The term was coined in 2009 by Michelle Thorne, an Internet and climate policy scholar. Unlike “sportswashing” — a practice in which nations, companies, or individuals seek to improve tarnished reputations by sponsoring sports teams or events — “openwashing” soon ran into a familiar problem: nobody could agree on what “open” actually meant. That problem persists to this day around OA, as revealed in the just-released thought piece from OASPA.

In many ways, invoking the term “openwashing” simply became a way to take cheap shots at organizations by complaining about a lack of access or some form of imagined impurity.

That’s not to say it doesn’t exist. Many scholarly and scientific publishers have been “openwashing” for more than a decade now — launching token OA journals inside their subscription portfolios to pacify OA advocates and protect their brands, editors, and staff from online attacks and derision, while leaving their subscription products relatively untouched.

In the world of technology, the term “open” has a more definitive meaning, one related to free sharing, access to code, and limitless reuse, which makes the use of the term by OpenAI and others more deceptive than the norm, as a recent article in the New York Times explains:

For example, OpenAI, the startup that launched the ChatGPT chatbot in 2022, discloses little about its models (despite the company’s name). Meta labels its LLaMA 2 and LLaMA 3 models as open source but puts restrictions on their use. The most open models, run mainly by nonprofits, disclose the source code and underlying training data, and use an open source license that allows for wide reuse. But even with these models, there are obstacles to others being able to replicate them.
