Interview: Anita Chan, Author of “Predatory Data”
Why eugenics is the most important concept on the table today — from Silicon Valley to RFK
When we began writing our forthcoming book about Big Tech and science, one word neither Joy nor I would have imagined playing an important role was “eugenics.” But it’s a topic our research inevitably led us to — where we were fortunate to encounter Anita Say Chan, author of Predatory Data: Eugenics in Big Tech and Our Fight for an Independent Future. Chan is an Associate Professor in the School of Information Sciences and Department of Media and Cinema Studies at the University of Illinois Urbana-Champaign.
Impressed by her work, and intrigued by our accidental intersection with a topic she knows far better than we ever will, I asked her for an email interview. The result is remarkable — both in how generous she was with her responses, and in the quality and depth of her insights into what’s going on. It makes me think we may need to stop talking mostly about “authoritarianism” and start talking more about “eugenics.”
Q: Can you tell us how your academic career started, and how you arrived at this point in your pursuits?
Chan: I was trained as an anthropologist and historian of technology, and my first book traced the outsized power that the US tech sector had in driving international economic policy, which I attributed to the “myth of digital universalism” — that is, the mistaken conviction that Silicon Valley represents the best, most evolved technological future that the rest of the world can only hope to strive for. But it was obvious how broken this idea was (i.e., that Silicon Valley could and should design the future for the rest of the world) when the kinds of products the US tech sector was pushing into international schools and rural work settings simply didn’t work, or only made the work and goals of global educators and workers substantively harder.
And yet despite such misalignments and outright failures, Silicon Valley’s outsized influence kept everyone operating as if it were the single best future to bet on, meaning other local alternatives for addressing local needs were consistently overlooked. So when US tech leaders say they aim to “move fast and break things,” we should keep in mind that what they are doing is projecting everyone else and every other sector but their own as flawed, inefficient, and breakable, while omitting how broken and ill-suited their own products often are. Yet again with AI, we see how willing they are to flood global markets with flawed products, even as they put the most vulnerable populations and our own kids at risk to advance their own technological survival.
Q: You have an interesting research focus. Can you walk us through it? What’s your most popular class?
Chan: My favorite class to teach right now is one that allows students to examine various innovation case studies through the lens of our own local campus in Illinois. So we look at cases like the history of US land-grant policy, which fundamentally redefined higher education in the US following the Civil War. We also look at the origins of disability resources and accessibility design in higher ed (which popularized accessibility design on US campuses in the mid-20th century, before national regulations were passed), and at student movements to desegregate US campuses and housing. It immediately allows students to see how a diverse range of actors — not just engineers and technologists — were behind innovations that fundamentally reshaped the nation and the world beyond. It also makes apparent how students themselves were not just consumers of higher ed, but real agents in shaping innovations for accessible and inclusive campuses, including the campus infrastructures they interact with regularly today.
Q: Why do these topics matter right now?
Chan: I wrote this book as a love letter to my students in information sciences and media studies, who are set up to graduate and begin careers at data science and AI companies, many of whom, especially now, feel DEEPLY ambivalent about that prospect. There is a reason they have misgivings. The way data-driven systems stratify human life, reify racial and class-based hierarchies, and operate on logics of continual assessment to evaluate populations in ways that punish minoritized groups has a history over a century old. Students in computer science and data science are just generally not taught that history.
But the methods of correlation, prediction, and statistical regression they are instead asked to learn were established by eugenic researchers — men like Francis Galton, Charles Davenport, Henry Goddard, David Starr Jordan, and Madison Grant (all discussed in the book), among so many others. It was these eugenic researchers who, generations ago, seeded what I describe as likely the world’s first cross-continental movement to datafy populations and monitor “unfitness,” with the goal of literally remaking democratic societies into ones that embraced natural hierarchy, so that the survival and interests of White Western techno-elites could be prioritized. And like the contemporary techno-eugenicists I describe in the book, 19th-century eugenicists profited enormously — both politically and financially — from spreading eugenic disinformation and decrying the dangers of pluralistic social life.
The book is a reminder to students like mine of how far eugenicists got in popularizing their movement, especially in the US. Because of their manipulation of data to demonstrate the supposed futility of supporting the unfit — through things like public education and welfare — US eugenicists were able to get sterilization laws for the mentally, physically, and morally “unfit” passed in over 30 states. They were able to pass the world’s first immigrant exclusion laws with the Page Act of 1875 and the Chinese Exclusion Act of 1882. They were able to install the first laws requiring photographic monitoring of immigrants. And by 1917 they had remade US immigration policy to exclude even political dissenters, who they argued were spreading “mental unfitness,” with the national-origins quotas of 1924 going on to prioritize “Nordic” populations.
Q: Melding the surveillance capitalism of the platform economy with eugenics seems a bit counterintuitive, but I thought it was quite insightful. What was your first clue that something like this was going on?
Chan: The work I did for my first book trained me to recognize the deeply destructive pathology and insistence on its own supremacy that pervade the US tech sector. I came to recognize, however, that this was not just driven by an extreme version of neoliberal capitalism. Like eugenicists of the past, today’s techno-eugenicists clearly hold a pathologically existential paranoia that sees the resourcing of anything but themselves as an attack on “civilization” and a profound threat to the rightful world order. And like past eugenicists, they channel this paranoia through an obsession with data and the datafication of human difference.
But it was past eugenicists who were the original advocates for data-driven surveillance and population monitoring at scale. A century ago, they used data drawn from family studies, immigrant surveys, physical exams, and IQ tests to argue they could predict criminal behavior, productivity, and a population’s social value and cost. Eugenicists then claimed that collecting large amounts of data could enhance social engineering and lead to more perfect human futures and national economies. So it’s no coincidence that Silicon Valley has now built an invasive data collection infrastructure that scales unchecked experimentation on human populations around the world. It is still defended as a force that will allow “civilization” to progress and evolve, even as it continually sacrifices the lives and well-being of those its builders deem less efficient.
But I also argue in the book that what AI is really producing is a kind of monoculturalist world, one that relies on the spread of and acquiescence to probable, majoritarian values and on the active empowerment of dominant classes for AI’s growth. I argue that by biasing toward probable outcomes, AI’s prediction systems have provided a means to actively amplify majoritarian worldviews into the future. So it’s no wonder that the overblown existential anxieties of majoritarian populations, nationalists, and far-right radicals across the globe have gone from fringe to front page overnight.
Q: How does the personalization of online spaces and data tie into eugenics? Is our current pop culture focus on individual health and wellness harkening back to folks like Kellogg?
Chan: Today, we are so conditioned to believe that data collection drives personalization, and that we really can know ourselves better by allowing ourselves to be tracked by tech companies, that we often forget people had to learn that datafication could be pleasurable. People had to be taught, in other words, that giving up their data and allowing themselves to be hyper-monitored and tracked could yield a kind of promise. And eugenics was among the earliest movements to tap into this link between personalization and data. It recognized that the discomfort one should feel at being surveilled could be inverted into pleasure, and that people’s willingness to be monitored and datafied could be turned into a political tool and strategy to popularize the movement. And in fact, publics really did line up and pay to have their attributes measured and documented by Galton, and people continued to write to the Eugenics Record Office to ask for the “secrets” of their family surveys to be unlocked for them. There was a real desire, in other words, to see data — and evidence — that supposedly verified subjects’ place among the “wellborn” or in the social hierarchy.
I argue that this pleasure function in data was defining for what it meant to experience oneself as part of an emergent information class a century ago. Today, platforms from Spotify to Alexa to, of course, LLMs sell themselves on the notion of personalizing content. What eugenics history reminds us is how much of the power of self-datafication depends on the insight and supposed distinction it promises to impart. For eugenicists, this promise of comparison was about quantifying relative value — about allowing people to “objectively” see and gauge themselves on the spectrum of the wellborn against the unfit. Today, datafication’s pleasure is still based on the promise of knowing and verifying individual users’ “true” distinction — but this distinction rests only on comparing and defining users against someone else who has already been tracked and profiled in the archive.
Q: Are there other parallels between early 20th century America and today that are contributing to a resurgence of eugenics-related ideas and policies?
Chan: A century ago as now, eugenics was driven by an information and tech elite who were deeply convinced of their own superiority, and who believed their exceptionality should grant them the unchecked power to re-engineer society as they saw fit. This class used their status as men of science — as well as their access to data — to influence national policymakers and wider publics alike. And they were able to prey upon growing public insecurities over rapid market transformations (brought on by industrialists’ imposition of new technologies into economic sectors, no less), and over the new forms of global movements and independence struggles that were changing urban demographics.
In many ways, though, I see the attempt today to revive eugenics as just that — an attempt to win over and persuade publics that they WOULD be better off in a world run by the majoritarian, monoculturalist agenda of eugenicists. So it’s telling that DOGE’s website claims to itemize data and receipts, when its real function is arguably theatre — to claim that there IS data demonstrating the supposed wastefulness and inefficiency of supporting the unfit and poor, and to demonstrate the supposed objectivity of the virtue of further empowering techno “elites.” The numbers don’t add up, and they never will — in part because what they are trying to convince audiences of is the fundamentally false claim that eugenicists worked to popularize over a century ago: that, as they put it, “some lives were born to be a burden on others.”
Q: Technology seems to provide good cover for a lot of bad ideas. Why do you think this is?
Chan: It’s because we’ve empowered the most inappropriate monoculture — one with a profound and dangerous self-awareness problem — to singularly control the conversation we have on technology. Silicon Valley represents a very narrow version of the world, but it never acknowledges itself as out of touch. The leaders of Silicon Valley — men like Elon Musk, Peter Thiel, Sam Altman, Marc Andreessen, and Mark Zuckerberg — are white men who hail from very privileged backgrounds and whose defining conviction is that they are exceptional and superior to ordinary people. They see their purported genius and exceptionality as entitling them to special privileges — beginning with the power to determine the appropriate technologies the rest of the world should use, and now expanding to the right to directly govern the lives of others in realms well beyond technology use.
These are elite men who — we shouldn’t forget — have already built the largest infrastructure for real-time, continuous human experimentation at scale. And they have done so while arguing that they should be able to run it with little to no oversight — even when their products have been shown to harm publics, escalate mental health crises, and intensify social division and political violence. And it’s these kinds of figures — those who have shown little capacity for self-restraint and zero regard for public safety — who are convinced their own survival requires taking over government.
Q: You mention Chris Anderson’s claim that “data speaks for itself,” and in your conclusion you point to “the inherent multiplicity of potentials for interpretation that surrounds any dataset that users are often encouraged to only see as given and predetermined by the lens of probability.” Most of the tech billionaires are funding science and supporting unfettered access to datasets with “it’s for the greater good” messaging. Would you have any advice for today’s younger generation of digital-native scientists about what it means to contribute data to an infrastructure designed for others to give it voice and context?
Chan: Predatory data systems are not inevitable. And lessons from history tell us a lot. Even as US eugenicists sought to justify social hierarchies and racial segregation at the turn of the century, feminist and immigrant researchers — such as those based at Chicago’s famed Hull House — aimed to use the data they collected to evidence labor oppression and the exploitation of raced, gendered, and ethnic minorities. Their research provided evidence that structural oppression, not biology or individual failure, explained poverty and urban inequality. And these justice-oriented data practitioners eventually succeeded in winning the nation’s first anti-sweatshop and anti-child-labor laws, as well as workplace safety regulations that are still in place today. We have seen traditions of feminist and anti-racist data collection lead to landmark protections and legal innovations, and change the way national infrastructures and technologies are designed. These successes show that by centering public accountability and working in solidarity with marginalized communities, we can reimagine a new status quo for technological design and infrastructure.
But we need to directly challenge Big Tech’s myth “that they are the only path to the future” in order to dismantle the status quo. And higher ed — particularly university data science and computer science programs, which directly shape the future of the US tech sector — is an entirely viable place to start. The programs that train data science graduates often operate as if their only priority were to keep tech companies happy. But what if these programs instead taught students how to build and fortify systems that strengthen the public good, pluralism, and equitable societies? What if lessons from history and the humanities — and the contemporary work of organizations like the Global Indigenous Data Alliance and Data for Black Lives, which demonstrate how data can empower justice-based reforms for minority communities — were taught instead? And what if students could demand that the programs they invest their futures in treat public safety and technological sustainability as real priorities for education, rather than simply funneling graduates into tech companies that have openly accelerated social division, stratification, and political violence?
Q: Why did you choose to publish your book open access?
Chan: So much of my work has been in the global south, and I’m dedicated to making sure that my work is as accessible as it can be to global audiences — and to the many global research networks and educators who have fortified my work along the way.
Q: What do you make of the recent dysfunction around DOGE?
Chan: The recent feud between Trump and Musk might have left some with the impression that Musk’s interest in politics was just about conventional profit motive. But there’s no question that, whether with Trump or other MAGAites, the tech sector is openly aligning itself with the far right for its eugenic crusade, which to them directly translates to eroding democratic institutions and eradicating protections (what they see as a degrading force) for minority populations. Musk’s sharing of “great replacement” and “deep state” conspiracy theories, which project a latent threat within democratic governments, is tied to the pro-authoritarian, white supremacist content he regularly posts on X. Peter Thiel has long proclaimed the belief that “democracy and technological advance” are simply not compatible. And Marc Andreessen muses that unless we defend AI against “socialist enemies” and “deranged” regulations, we will guarantee a “stagnated” future for the West.
Well beyond Musk and Trump, tech elites will continue to channel vast sums of individual wealth to reprogram governments — down to the engineering of local elections — in a direct and unabashed revival of a blatantly eugenic playbook. Techno-elites today see unbridled AI innovation and investment in elites as the ONLY track to evolution. They see accelerating AI development as an interest that should supersede all other priorities, and they see competing investments in less “efficient,” poor, or “wasteful” populations and institutions as threats to US AI supremacy. So whatever the state of the Trump-Musk relationship, eradicating democratic governments and regulations that “waste” resources on protecting the inefficient and “unfit” will remain fundamental to techno-eugenicists’ ultimate goal.
Q: What’s next for you?
Chan: Thank you for asking. Folks can keep up with me at anitachan.org and with the ongoing work of my students and my lab, the Community Data Clinic, at communitydata.illinois.edu. We have a number of collaborations with community organizations and local data researchers in Illinois whose work reminds us how ridiculous and abominable Silicon Valley’s vision for a tech future actually is, when national data has told us for over a decade that 50% of US families don’t have more than $500 saved in the bank. The data collaborations we host are with organizations that reimagine tech futures and build tech infrastructures for actual working families in the US. People like Danielle Chynoweth of Cunningham Township, Julie Pryde of the Champaign-Urbana Public Health District, Kimberly David of Project Success of Vermilion County, and Stephanie Burnett of the Housing Authority of Champaign County are my personal heroes in tech.
We’d have a very different world now if we invested in work like theirs, rather than empowering techno-elites in Silicon Valley.