OpenEvidence Targets Users

A hot medical LLM relies on a revenue model known to corrupt

OpenEvidence (OE) is a problem. It has proven susceptible to eugenics-adjacent pay-to-play misinformation, pay-to-play articles placed to help shill colostrum, and pay-to-play articles published to promote “functional medicine,” a MAHA-related pseudo-specialty.

Our Gold OA ad model has proven a corrupting influence, and an inference engine is now making that clear in a new, overvalued way. The notion that you can chum together journal articles into meaningful medical meatloaf has never made sense. Journal articles describe research claims. They don’t tell you how to treat patients. It takes a different kind of expertise and editorial know-how to distill, describe, and maintain trustworthy and effective treatment guidance. Not everything physicians need to know is in journals.

Fundamental epistemic problems may be enough to warn savvy users away from such beasts, but OE’s business model raises another big red flag: it is susceptible to Silicon Valley’s deepest source of corruption and bad incentives — targeted advertising.

Characterized as “the Internet’s original sin,” the targeted advertising model has arguably ruined the Internet, establishing a regime of surveillance capitalism that eroded personal privacy while leading us to the edge of a militaristic, Skynet surveillance dystopia enabled by engagement-hungry LLMs and creepy Ring doorbells.

OE appears to be following the same problem-plagued path to riches blazed by Google, Facebook, and Twitter/X. As an information source, it also adopted the signifier of producer-pays in our market, stitching “open” into its name — those of us following the money recognize this means users aren’t paying, are the product, and have no leverage beyond their utility as targets.

How OE Targets Users

For OE, user targeting depends on getting each individual’s national provider identifier (NPI).

NPIs apply to all individual HIPAA-covered healthcare providers or organizations, mostly to validate and facilitate insurance transactions. Individual HIPAA-covered healthcare providers include physicians, pharmacists, physician assistants, midwives, nurse practitioners, nurse anesthetists, dentists, denturists, licensed opticians, optometrists, chiropractors, clinical social workers, professional counselors, physical therapists, occupational therapists, prosthetists, orthotists, pharmacy technicians, and athletic trainers. Organizations such as hospitals, home health care agencies, nursing homes, residential treatment centers, group practices, laboratories, pharmacies, and medical equipment companies are also assigned NPIs.

NPIs are publicly available and can be downloaded in batch, enhancing the potential for fraud. At OE, users only need to pinky-swear they are who they claim to be.
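To illustrate how trivially accessible NPIs are, consider that CMS operates a public, no-authentication NPI Registry with a JSON API (in addition to offering full downloadable data files). The sketch below builds a lookup query against that registry; the person's name and state used here are purely illustrative, and the helper function is my own, not anything OE-specific.

```python
# Minimal sketch: querying the public NPPES NPI Registry API.
# Anyone can look up a provider's NPI by name and state, with no login.
from urllib.parse import urlencode

NPPES_API = "https://npiregistry.cms.hhs.gov/api/"

def npi_search_url(first_name: str, last_name: str, state: str = "") -> str:
    """Build a query URL for the public NPPES NPI Registry API (v2.1)."""
    params = {"version": "2.1", "first_name": first_name, "last_name": last_name}
    if state:
        params["state"] = state
    return f"{NPPES_API}?{urlencode(params)}"

# Fetching this URL (e.g., with urllib.request.urlopen) returns JSON
# containing a "result_count" and a "results" list; each result includes
# the provider's NPI, name, and practice address.
url = npi_search_url("Jane", "Doe", "CA")
```

The point is not the code but the exposure: if a ten-line script can resolve a clinician’s name to an NPI, so can anyone building a fraud scheme or a marketing profile.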

The AAFP warns its members to protect their NPIs. One site warning healthcare workers about NPI risks writes:

Thousands of NPIs are stolen from healthcare professionals and used for further fraudulent schemes every year, particularly Medicaid and Medicare fraud. Indeed, one of the hallmarks of healthcare fraud is the theft or misuse of a healthcare provider’s NPI.

Crosswalking from NPIs to EINs to SSNs is often a simple matter for hackers with access to data from the dark web, making identity theft a well-known problem.

OE suggests via its privacy policy that it is doing some crosswalking itself:

We may collect NPI numbers from third parties and match those numbers to Personal Information in our system to provide you with content and advertisements tailored to your individual interests and needs if you use the OpenEvidence Platform and or access our Services.

This suggests that if you have an NPI, you may already have a profile in OE, as the company also states it gathers other third-party information from available sources.

OE bases its user targeting on topic and device data. Ads appear while the system processes a prompt, suggesting that the most recent query is being leveraged to target advertising. Once the system’s response is generated, the ad disappears. This creates an incentive for delay, since ad exposure time is exactly the kind of metric OE might be tempted to optimize.

OE is already boasting $100 million in annualized revenue from ads targeting users. This helped it recently close a $250 million Series D funding round on a $12 billion valuation. Whether the very soft financial metric of “annualized revenue” stands the test of time remains to be seen.

Concerns with the Ad Model

With an LLM aimed at prescribers and interventionalists, there are a lot of concerns, some of which get articulated in an otherwise glowing article about OE’s business prospects:

With all due respect to [Daniel] Nadler [OE’s founder] and entrepreneurs intending to make a difference in healthcare, it could be argued that faking it is a rite of passage for many startups. Plenty have taken the “fake it til you make it” approach, and for worse offenders, faking it has landed them in prison, like Elizabeth Holmes. The founders of Outcome Health are another example, with one sentenced to prison and another sent to a halfway house. Both companies were once healthcare unicorns, and the latter’s revenue model is somewhat similar to that of OpenEvidence. 

Holmes is of Theranos fame, perpetrating a memorable med-tech fraud. Outcome Health is less well-known. Its executives used an advertising model to perpetuate a $1 billion fraud scheme. According to the Department of Justice in 2024:

Outcome, which was founded in 2006 and known as Context Media prior to January 2017, installed television screens and tablets in doctors’ offices across the United States and then sold advertising space on those devices to clients, most of which were pharmaceutical companies. [The executives implicated] sold advertising inventory the company did not have to Outcome’s clients and then under-delivered on its advertising campaigns. Despite these under-deliveries, the company still invoiced its clients as if it had delivered in full.

The three main executives received multi-year prison sentences.

Now, it’s not likely that OE will engage in fraudulent behavior, and I hope it doesn’t, but the question is: How will we know? LLMs hide how they work, and OE’s early claims about financial success and user adoption may or may not be accurate or complete. They certainly lack detail and nuance. What ad-tech tools will OE use to verify that ads display, that clicks register, and so forth?

There is also the issue of the “thumb on the scale” that may occur as the system realizes that OE makes more money (i.e., sells more ads) if responses are framed in certain ways or made more engaging in some manner. These effects could be subtle or obvious — silent omissions or blatant emphases — but the tendency of “smart” systems to optimize toward profit is well-established.

Even without these ghost-in-the-machine concerns, OE mentions that it will allow “sponsors” to provide content:

We automatically collect and store information about your use of the OpenEvidence Platform and Services, such as your engagement with particular content including editorial, usage patterns, advertisements, sponsored informational programs from our advertisers, which may include pharmaceutical companies (“Sponsored Programs”) . . .

All of this is occurring in the context of overweening confidence given off by any LLM — there is no wrong answer or hesitation, only confidently rendered and structured plausibility even if the evidence is thin or singular.

OE Doesn’t Fit In

Despite all the hype surrounding it, OE is the odd duck in the clinical information space — the evidence-based medicine products world occupied mainly by UpToDate, DynaMed, and ClinicalKey. OE’s competitors are more deeply integrated into various systems and workflows, and they have scrupulously eschewed the ad model. EHRs are also bringing some point-of-care game to the table.

The established products and their associated product lines (paired with drug databases or other add-ons) are subscription-based, don’t take ads, and don’t deal in their subscribers’ data. They are B2B workflow tools paid for by the employers of the physicians and HCWs using them, typically universities, hospital systems, and clinical networks.

As a competitor, OE is out of step with the market. It is baking in a set of problematic financial incentives that we know trigger the worst aspects of the Internet economy — the attention economy, the producer-pays economy, the ad slop economy, the OA slop economy, and ultimately the AI slop economy. In an LLM, these can all work in conjunction without being seen for what they are.

How these incentives might distort OE is unclear. Will advertisers’ information gain additional weighting in the tokenized system the LLM uses to generate responses? Will ad targeting be uncomfortably specific for users? Will a desperate need to meet their annualized revenue claims lead to misbehavior that may be insanely difficult to detect, as auditing an LLM’s advertising model is new territory?

As a wise person once said, “Show me the incentives, and I’ll show you the outcome.” OE has user-targeting incentives in its business model, ones we’ve seen turn interesting and useful social media platforms, search engines, and scientific journals into trash heaps. Its LLM is already compromised by Gold OA “articles as ads,” and its entire future is based on the OG bad business model for medical information.

It’s almost like we’ve learned nothing from the past 25 years of Big Tech misadventures.
