The FDA Should Regulate Algorithms

From measles to plastic surgery to depression, some algorithms are making us sick

Plastic surgeons have noted that the majority of their teenage patients want to look like their Instagram-beautified selves, because the “beautification” algorithms processing their selfies have made them insecure about how they actually look.

Measles is resurgent due to search algorithms that have for decades promoted anti-vaccination viewpoints, a problem only amplified by social media, which pushed misinformation out to gullible parents and fueled outbreaks that have killed dozens. Russia has been able to use social media exactly as designed to spread measles misinformation and destabilize society even further.

Depression and alienation are growing problems. Social media has led to a surge in self-harm, depression, and suicide among teens, particularly girls aged 10-14. Studies have shown that quitting Facebook improves happiness, delivering 40-60% of the benefits of psychotherapy.

Unregulated advertising of e-cigarettes on social media is likely a major contributor to the rapid adoption of these devices among teens.

Anger and stress are major contributors to mental and physical health problems, yet social media rewards these conditions. A recent study found that each inflammatory word added to a tweet increases its retweets by about 17%. This has led to attacking and demeaning online behavior, bullying, and threats, all of which contribute to depression and alienation. Twitter profits from all of it, thanks to its algorithm.
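To make that incentive concrete, here is a minimal sketch assuming, per the study above, a roughly 17% engagement lift per inflammatory word. The word list, the flat multiplier, and the function names are my own illustrative inventions, not any platform's actual ranking code; the point is only that a ranker optimizing predicted engagement will mechanically promote the most inflammatory post:

```python
# A deliberately toy model of engagement ranking, not Twitter's real code.
# Assumptions (mine, for illustration): a fixed word list and a flat ~17%
# engagement lift per inflammatory word.

INFLAMMATORY_WORDS = {"attack", "disgrace", "shame", "traitor", "destroy"}
LIFT_PER_WORD = 1.17  # assumed ~17% more retweets per inflammatory word

def predicted_engagement(post: str, base_rate: float = 1.0) -> float:
    """Score a post by expected retweets under the assumed per-word lift."""
    words = (w.strip(".,!?").lower() for w in post.split())
    hits = sum(1 for w in words if w in INFLAMMATORY_WORDS)
    return base_rate * (LIFT_PER_WORD ** hits)

def rank_feed(posts: list[str]) -> list[str]:
    """An engagement-maximizing ranker: most provocative posts rise first."""
    return sorted(posts, key=predicted_engagement, reverse=True)

feed = rank_feed([
    "Lovely walk in the park this morning.",
    "This policy is a disgrace and its author is a traitor!",
    "New study on vaccine schedules published today.",
])
print(feed[0])  # the inflammatory post takes the top slot
```

When outrage reliably outscores everything else, the top of the feed selects for it without anyone ever explicitly choosing anger.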

Social media has been designed to be addictive, and may have been a major contributor to the opioid epidemic in the US, not only spreading misinformation about the safety of these drugs but also stigmatizing addiction in ways that prevented many from seeking treatment or interventions.

What is the role of government in regulating technology? There is a growing recognition that its regulatory umbrella can cover some emerging technologies. For example, the US Food and Drug Administration (FDA) recently released a white paper outlining how it plans to regulate artificial intelligence (AI) and machine learning in medicine.

Given its willingness to fill a gap in regulatory coverage in this manner, I’d argue that the FDA should be regulating social media and search algorithms now. In their unregulated form, algorithms are causing harm to human health on a massive scale. We are in the midst of a public health crisis. Each problem may appear distinct, but all share a common inciting or amplifying element: the algorithms used by dominant social media and search companies.

This seems a natural course for the FDA, which regulates all sorts of information about health and medicines. But somehow, the technological veneer of Silicon Valley and social media makes regulators pause, as if these mega-corporations are immune from regulation, laws, and expectations.

(Oh wait, they are. I will use this opportunity to join the growing chorus urging lawmakers to repeal Section 230 of the Communications Decency Act. With this single change, social media would immediately become far less dangerous and perverse.)

In a fascinating interview on the “Recode” podcast, tech ethicist and co-creator of the “Time Well Spent” movement, Tristan Harris, spoke with Kara Swisher about the exceptionalism that protects information promulgated on social media, to our detriment:

Doctors, you want to hand them as much information about you so they have more to use to diagnose you. So it’s fine to have asymmetric power insofar as it is in our interest. It represents our interest. . . . Clearly, from a regulatory perspective, this has to change. The easiest thing to change, the thing that fundamentally has to change, is that we’re moving from an extractive attention economy that treats human beings as resources . . . [f]or our data, for our attention, to a regenerative attention economy where we just don’t drill. Why in the world would we say, “Let’s profit off of the self-esteem of children”? . . . This is so wrong. And we used to protect [kids from things like] this.

Harris speaks of what he calls the tech firms’ efforts to “downgrade humans.” Short soundbites, newsfeeds that work like slot machines of attention, and algorithms that exploit our weaknesses are all part of downgrading humans so the machines can upgrade their information about us. Harris also hits on another reason these firms downgrade humans: it takes less computing power to manipulate people when they are merely reacting, using their lizard brains rather than their prefrontal cortices. Keeping people soaking in emotional reactions not only works, it is also cheaper from a computational perspective.
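The slot-machine comparison is literal: intermittent, unpredictable rewards (a variable-ratio schedule) are among the most compulsive reinforcement patterns known to behavioral psychology. A minimal sketch of the mechanic, with an invented payout probability since no platform publishes its schedule:

```python
import random

# Hypothetical sketch of a "slot machine" feed refresh. The 30% payout
# probability is an invented illustration; the mechanic, not the number,
# is the point: unpredictable rewards drive compulsive re-pulling.
REWARD_PROBABILITY = 0.3

def refresh_feed() -> str:
    """Each pull-to-refresh is a gamble on a variable-ratio schedule."""
    if random.random() < REWARD_PROBABILITY:
        return "novel, emotionally charged post"  # the intermittent payout
    return "nothing new"  # the near-miss that prompts another pull

for pull in range(5):
    print(f"refresh {pull + 1}: {refresh_feed()}")
```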

Meanwhile, regulators like the FDA are missing the big picture here. Deleterious health effects are all around us, spurred, amplified, and even caused by search and social algorithms that exploit our weaknesses.

The most infuriating part to me is that the major culprit behind all this, Mark Zuckerberg, is supposedly using the Chan Zuckerberg Initiative (CZI) to invest in improving human health and well-being. If he truly wanted to do that, he’d shut down his algorithms today. As Harris says:

Roger [McNamee] and I had this saying about a year-and-a-half ago, it was this Tylenol example. When it was found that there was poison in Tylenol, Johnson & Johnson took it off the shelf until it was safe. And their stock price tanked, but then it went up even higher because people trusted them. The problem is that the harms now are not as simple as whether or not we’re all just getting poisoned from Tylenol. It’s this diffused climate change harm. Maybe it doesn’t affect you, but it’s causing genocides around the world, or it’s causing millions of people to believe conspiracy theories and debasing our social fabric, but because that doesn’t affect you, people don’t have that same level of urgency of, “We have to shut it down”. . .

So, even as his platform and its competitors amplify misinformation about vaccines, opioids, and e-cigarettes, and cause depression, anxiety, and stress across all age groups, especially among children, Zuckerberg is working to encrypt his platform to shield himself from liability rather than fix it for users. All the evidence keeps pointing to him and his ilk as some of the most damaging, greedy, and soulless businesspeople in history. They are causing millions to suffer so they can get rich.

The business model of dominant social and search companies is clearly at fault. It’s an extractive business model, built on technology designed to mine our weaknesses for information that will help its owners make money. But regulators have been too slow to act. We are now in the midst of multiple public health crises thanks to social media and search engines that depend on surveillance capitalism and behavior modification technologies.

Information is powerful. Its power should be used to make us healthier, calmer, and more resilient. Instead, it is being used systematically to make us sicker, more stressed, and more reactive. We’re baffled by it, which may be its greatest defense system. We can’t even agree on what a fact is, where the truth resides. As Harris puts it:

. . . conspiracy theories magnified by these platforms times billions of people . . . [flip] everyone’s mind into this kind of questioning mindset that questions institutions and trust. . . . People don’t believe the media. They don’t believe in government anymore. . . . free is the most expensive business model we’ve ever created because if people can’t agree on shared facts and truth, that’s it. That’s it.

So, FDA, while the siren song of AI and machine learning may be the subject of white papers and future plans, the current drumbeat of social media algorithms gone awry is far more urgent. These algorithms are causing harm in the here and now. They are causing a multi-layered public health crisis. I say it’s time to start regulating social media and search algorithms so they no longer misinform, exploit, and victimize users. Don’t wait for AI and machine learning to become mainstream. We have big problems now, from functional technology deployed by billionaires to exploit the people you’re charged with protecting.

It’s time to do your job.

