Interview: Kathryn Harrison, DeepTrust

A new coalition to help battle deepfakes and misinformation is moving forward

The battle against misinformation requires contributions and leadership from people with business training, international experience, and technology chops. In our domain, Elisabeth Bik is one such person.

Another such person is Kathryn Harrison — founder and Executive Director of the DeepTrust Alliance, a 501(c)(6) organization devoted to creating the standards to combat deepfakes and other forms of misinformation.

Harrison spent nearly a decade at IBM, where she served as Director of Global Product Management for the IBM Blockchain Platform, helping to develop and deliver IBM’s open source contributions to Hyperledger. Before that, Harrison led the company’s Payments and Blockchain team for IBM Middle East and Africa, and the Smarter Cities team for IBM’s Global Business Services practice.

Harrison began her career in financial services at Gerson Lehrman Group in New York and Hong Kong. As a founding member of GLG Asia, she first led legal, political, and regulatory research and then business development for the firm’s clients in Asia.

She is a term member of the Council on Foreign Relations, an American Council on Germany Young Leader, a French-American Foundation Transatlantic Forum member, and a frequent conference speaker on topics related to identity, cybersecurity, deepfakes, and blockchain.

Harrison has a Bachelor of Science from the School of Foreign Service at Georgetown University, an MBA from the Wharton School, and an MA in International Studies from the Lauder Institute, both at the University of Pennsylvania.

The following interview was conducted via email. Enjoy.

Q: Please tell us what led you to found the DeepTrust Alliance.

Harrison: I lived in Istanbul from 2014 to 2016 while on assignment for IBM. It was an incredibly eventful time. There were multiple bombings, an attack on the airport, an attack on the soccer stadium, and even a coup. I watched as Turkey's president controlled public opinion by jailing journalists and educators, shutting down newspapers and magazines, and even taking control of Wikipedia and social media sites. At the time, I thought it was simply an authoritarian regime. After a few years back in the US, I saw many similar phenomena begin to happen here: the undermining of mainstream media, “alternative facts,” and then the deepfake of President Obama that Jordan Peele made.

Today, many of us get our news by scrolling through Facebook or Twitter before coffee. We are each shown a unique and specific set of information based on our past consumption habits, and the provenance of that information is rarely clear. I felt that the average person has limited tools to help them evaluate that information. It’s easy to see the implications for major societal events like elections, but misinformation also affects individuals on a very personal level. Images, video, and audio can now be created of a person doing and saying things they have never done or said. You can even create pictures and videos of entirely fictional human beings with the click of a button.

Neither the developers of this technology, nor governments, nor individuals have thought through the consequences, and it is a very difficult set of problems to solve. I started the DeepTrust Alliance to connect the stakeholders who are best positioned to solve it.

Q: What are some ways you engage people in the problems emerging around disinformation, misinformation, and deepfakes?

Harrison: The DeepTrust Alliance is focused on convening stakeholders to solve the problems of disinformation, misinformation, and deepfakes. It takes a broad coalition, ranging from industry experts to academics and social scientists to government and policymakers to civil society. We began in 2019 by convening a series of symposia to surface the key problems and identify potential solutions. Unfortunately, Covid-19 slowed the rollout of our programming, but in 2020 we published two reports: first, a summary of the content from the FixFake Symposia, and second, an ecosystem view of the key private-sector stakeholders building solutions to tackle deepfakes. Before the end of the year, we will also publish an ecosystem view that highlights government and non-profit actors.

We think it is critical to educate decision-makers and engage them to ensure efficient adoption of solutions.

Q: A lot of people were worried about deepfakes in the 2020 US national election. Were there any incidents? Was the level of activity anticipated or unanticipated?

Harrison: A handful of deepfakes did surface. Fortunately, they were debunked quickly, and the policies of Facebook, YouTube, and Twitter helped slow their distribution. While that is good news, the unfortunate side of the story is that deepfakes aren’t really necessary. Many of the most potent stories and conspiracy theories that emerged during the election didn’t require AI or any high-level technical skill to create; they simply used tools like Photoshop, mis-contextualization of real images and video, or, even worse, just pure lies. Two examples of deceptive editing from the election are shown and explained in this video:

[Video: two examples of deceptive editing from the 2020 election]

One of society’s biggest challenges is how people react to misinformation and disinformation that comes from those they know, love, and trust.

People over the age of 65 have been shown to be seven times more likely to share misinformation than other age groups. This isn’t simply a question of unfamiliarity with digital platforms. Instead, there are two drivers. One, people in that age group tend to be more confirmed in their beliefs, so they are more likely to share information that conforms to those beliefs. Two, social media is a new form of social connection: sharing and liking posts, especially in the time of Covid, is a new way to connect with one’s friends and community. We shouldn’t underestimate the impact this is having on the spread of disinformation.

There is also the question of blatantly false information coming from elected officials. Twitter and Facebook took steps to help identify these mistruths, but we should not underestimate the negative impact they have had on confidence in elections, local officials, and government as a whole.

Q: What are some of the things people might be surprised to learn about deepfakes, misinformation, or disinformation?

Harrison: Most people think about deepfakes in the context of political news or elections. However, 96% of all deepfakes are actually deepfake porn, where the face of a woman, most often an actress or a musician, is transposed onto a porn video without the consent of either woman. This is a new form of gender-based violence that is largely misunderstood and underappreciated but has real impacts on individuals. For example, the Indian journalist Rana Ayyub had a pornographic deepfake made of her, which led to significant death threats. It took UN intervention to force the Indian government to provide her with security and to take active steps to remove the video from major Internet portals. Even so, the video can still be found if you look hard enough.

Q: What government agencies deal with deepfakes? What are their capabilities?

Harrison: In the United States, most of the work around deepfakes today is handled by the Department of Defense and the national intelligence agencies. DARPA, the innovation arm of the Department of Defense, has just launched its second multi-year program focused on developing tools to detect deepfakes and misinformation. The first program, called Media Forensics, ran from 2016 to 2020 and was already underway when the technology behind deepfakes was invented. Just this year, DARPA launched Semantic Forensics, which focuses on identifying signs of deception within the content itself. The Department of State, through its Global Engagement Center, has developed a program to identify companies focused on solving the problem of misinformation and disinformation. The Federal Election Commission is also significantly involved in questions and risks related to misinformation and federal elections. Finally, the legislative branch has held numerous hearings in both the House and the Senate on the threats posed by malicious deepfakes and potential solutions. The Senate has passed an initial bill, and the House has seen numerous pieces of legislation introduced over the last 18 months.

Q: Who are the main actors behind these kinds of manipulations?

Harrison: There are numerous actors behind deepfakes. The first are sovereign states. Certainly, in the 2016 election, Russia was extremely active in creating and disseminating misinformation and disinformation designed to sow chaos and discord in the US, UK, and other nations. Countries like Iran and China have also been shown to run information campaigns that are disruptive to democratic societies.

Second, there are criminal rings focused on using misinformation and disinformation for economic benefit. Famously, in the 2016 election, a group of Macedonian teenagers created completely fake news websites, with links distributed through numerous right-wing Facebook groups, that helped them net several million dollars in Google ad payments over just a few days. While the operation was ultimately identified and shut down, it misled a wide public for profit.

Third, there are special interest groups focused on developing and delivering conspiracy theories to advance their own agendas, whether for political, social, or financial gain.

Finally, there are lots of technology hobbyists and enthusiasts who create deepfakes for entertainment or fun. Nicolas Cage and Elon Musk are two popular figures whose faces have been inserted into movies from 2001: A Space Odyssey to Indiana Jones to James Bond. There are also traditional industries like entertainment and media that view deepfake technology as an incredible tool for content creation and cost savings.

Q: You talk about “deepfakes and cheapfakes,” and note that the cost of creating fake information that can fool people is dropping. Talk about that for a moment.

Harrison: The deepfake-cheapfake dichotomy was particularly powerful in the last several years, when the cost, technical skill, and effort required to create a credible deepfake were still incredibly high.

Just to clearly define those two terms:

  • Deepfakes are manipulated content created using machine learning algorithms. This cutting-edge technology uses a source set of training data to create wholly new images and video (a minimal sketch of the core architecture appears after this list).
  • Cheapfakes, on the other hand, cover all of the audio-visual manipulations created with tools that have existed for the last hundred years. This can include tools like Photoshop, slowing or speeding up a video, or simply miscontextualizing information.
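
To make the machine-learning half of that distinction concrete, here is a minimal, hypothetical PyTorch sketch of the shared-encoder, dual-decoder autoencoder design that underlies most face-swap deepfakes. The toy dimensions, random stand-in data, and short training loop are assumptions made for illustration, not the code of any real tool:

```python
# Toy sketch of the classic face-swap deepfake architecture: one shared
# encoder learns a common representation of pose and expression, and one
# decoder per identity learns to render that person's face. The "swap"
# is encoding person A and decoding with person B's decoder.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, 256),  # shared latent code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(256, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16 -> 32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32 -> 64
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 64, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

# Random stand-in "face crops"; real systems train on thousands of
# aligned photos of each person, for hours or days of GPU time.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(100):  # a real run takes many thousands of steps
    opt.zero_grad()
    # Each decoder learns to reconstruct its own person from the shared code.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()

# The swap: person A's pose and expression, rendered as person B's face.
with torch.no_grad():
    swapped = decoder_b(encoder(faces_a[:1]))
```

The trick is that the single encoder is forced to represent features common to both faces (pose, expression, lighting), while each decoder specializes in one identity; feeding A’s encoding into B’s decoder therefore renders B’s face performing A’s expression.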

As I mentioned before, cheapfakes are still generally sufficient to deceive and mislead the majority of the public. What is significantly more concerning is the difficulty of identifying deepfakes and the level of credibility that good deepfakes can achieve. Today you still need a large source of data and significant computer processing time to create a deepfake, but that barrier is coming down rapidly. You already have deepfake-like capabilities in apps on your phone today, like Snapchat and Zoom. In the near future, you will be able to create a deepfake from a single image. While this has incredibly powerful implications for communication and storytelling, it has equally concerning consequences when used maliciously.

Q: What kind of uptake has the DeepTrust Alliance had so far?

Harrison: In 2020, we have been focused on educating a broad set of stakeholders, which has led to incredible engagement with numerous branches of government, the private technology sector, academia, and civil society. We have surfaced common challenges and identified areas for collaboration. We are excited to move on to our next stage in 2021.

Q: How has the Covid-19 pandemic affected the organization’s ability to execute and grow?

Harrison: Covid-19 certainly hindered our ability to do in-person organizing; however, I think it has only accelerated the public’s awareness and fear of the malicious use of deepfakes. Most people are focused on the technology side of deepfakes, but there are important policy implications as well, and we are excited to pursue these angles in 2021 and beyond.

Q: What’s the future of deepfakes and the DeepTrust Alliance?

Harrison: Deepfakes are here to stay, whether we like it or not. The technology is available in multiple open-source packages and libraries that almost anyone with some basic technical skills can download. And as I mentioned before, the ability to tell and create stories with deepfake technology is already available in some of the apps on your phone. I don’t think we need to despair yet about the potential of deepfakes. However, we need to act with purpose and solidarity in order to drive an important set of solutions that can mitigate the potential damage. I would put these into four large buckets.

First, there are technology solutions. These apply not only to the scientists creating deepfake algorithms but also to the social media platforms that accelerate the distribution of problematic content. In 2020 we saw some initial steps from Facebook and Twitter, but far more can be done to develop signals and tools that help people understand the source of a piece of content, how it may have been edited or manipulated, and the potential intentions of its creator. Projects like Adobe’s Content Authenticity Initiative are working hard to build standards to solve this problem; a toy illustration of the underlying provenance idea is sketched below.
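
As a rough illustration of that provenance idea (bind a signed record to a file’s exact bytes so that any later edit breaks verification), here is a small, hypothetical Python sketch using the widely available cryptography package. It is not the CAI/C2PA specification; the record fields, tool name, and key handling are invented for the example:

```python
# Concept demo: sign a hash of a media file plus a small provenance
# record. Any edit to the bytes changes the hash, so verification fails.
import hashlib
import json

from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
)

def make_claim(media: bytes, creator: str, key: Ed25519PrivateKey):
    """Build a provenance record and sign it with the creator's key."""
    record = {
        "sha256": hashlib.sha256(media).hexdigest(),
        "creator": creator,
        "tool": "example-camera-app",  # hypothetical capture tool
    }
    payload = json.dumps(record, sort_keys=True).encode()
    return record, key.sign(payload)

def verify_claim(media: bytes, record: dict, signature: bytes, public_key) -> bool:
    """Check the signature, then check the bytes still match the hash."""
    payload = json.dumps(record, sort_keys=True).encode()
    public_key.verify(signature, payload)  # raises InvalidSignature if forged
    return hashlib.sha256(media).hexdigest() == record["sha256"]

key = Ed25519PrivateKey.generate()
photo = b"...raw image bytes..."
record, sig = make_claim(photo, "alice@example.com", key)

assert verify_claim(photo, record, sig, key.public_key())             # intact
assert not verify_claim(photo + b"!", record, sig, key.public_key())  # edited
```

A real standard also has to survive benign transformations like resizing and recompression, and to carry an edit history, which is exactly the hard part such initiatives are working through.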

Second, there is a critical set of policy and governance initiatives which can be spearheaded at both the local and national level. There are already many laws on the books which can be applied to the phenomenon of deepfakes. I think it is most important to focus on regulating behavior rather than technology. However, there are major gaps in terms of the understanding of deepfakes from lawmakers and other policy stakeholders, which must be overcome.

Third, intensive efforts are required around education. While media literacy is an important part of those efforts, people of all ages need to understand how large social media platforms work, why they are seeing what they are seeing, and how to quickly evaluate the trustworthiness of a specific piece of content. I think there is room for improvement around transparency in content distribution.

Fourth, new social norms need to be developed about what information is shared. I often tell people that if you would not stand in the middle of your town and shout the news from the rooftops, you probably shouldn’t share it on Facebook. On Facebook, you would be amplifying it to ten times the number of people in your town.

The DeepTrust Alliance is really focused on tackling this problem from a policy angle. We are using the existing problem of deepfake porn, and its harms to women, as an initial test case for building policy responses. While there are existing laws regulating the use of non-consensual content, the remedies are far too slow, expensive, and inaccessible for the majority of women today. Because most of the victims so far are actresses or musicians, they often have the resources to combat this type of terrible content. But imagine if this happened to your teenage daughter. What would you do, and what would be the impact on her life?

The DeepTrust Alliance is committed to building a sensible policy landscape to protect individuals while retaining freedom of speech and freedom of information consumption. In 2021, we will be re-launching more of our active programming on Capitol Hill and in other national capitals around the world.


Thank you to Amelia Leopold for arranging this interview.
