Artificial Intelligence and the Path Forward for Technology Policy

Q&A with Chris Lewis

Chris Lewis, Public Knowledge

Chris Lewis (Harvard AB ‘01-’02) is President and CEO at Public Knowledge, a DC-based public interest digital rights organization that promotes freedom of expression, an open internet, and access to affordable communications tools and creative works. Before joining Public Knowledge, Chris worked at the Federal Communications Commission as Deputy Director of the Office of Legislative Affairs. He is a former U.S. Senate staffer for the late Sen. Edward M. Kennedy and has over 20 years of political organizing, policy, and advocacy experience.


Chris Lewis, President and CEO of the nonprofit digital rights group Public Knowledge, participated in a panel discussion, “The Path Forward: Policy, Industry, Innovation, and Incentives,” at the ALI’s Technology Deep Dive on October 20. Deep Dive sessions highlight one major global or community challenge that ALI Fellows might tackle. In this Q&A, Chris talks with 2019 ALI Fellow Lisa Macpherson about the current public fascination with generative artificial intelligence, the policy solutions we really need for AI, and the digital technology regulation that is long overdue.

Lisa Macpherson: Chris, when you last wrote for the Social Impact Review, you were calling on social impact leaders to join with academic, civil society, and advocacy groups to grow a movement to build a better internet for all. Can you update us on that effort?

Chris Lewis: The Movement for a Better Internet is a collaborative effort to ensure that the internet’s evolution is guided by public interest values. Our goal is to bring together diverse voices, facilitate connections, share resources, and drive policy change based on a shared public interest vision. Since we launched in 2022, we have signed up about 70 organizations from all over the globe to receive information about the movement. We’ve set up a digital hub to share learnings and perspectives, including from workshops and convenings hosted by Movement for a Better Internet members and at tech and civil society conferences like RightsCon, MozFest, DWeb Camp, and the Creative Commons Summit. And we recently hosted our first policy lab, a virtual convening designed to engage organizations in identifying shared public interest values, collaborating on policy positions, and planning and executing policy campaigns. The topic for the lab was generative artificial intelligence and the future of creativity. We wrestled with questions like: How might policy ensure that generative AI contributes to a thriving commons of widely accessible knowledge and creativity that people may build upon? And how do we address the concerns creators have about being exploited by generative AI, so that they don’t stop sharing their works publicly on the Web just to avoid AI training?

Macpherson: Very timely, since, as the moderator noted in the introduction to your panel, artificial intelligence seems to be shaping the technology policy agenda right now. How are you approaching policy development to ensure artificial intelligence is shaped in the public interest?

Lewis: Public Knowledge has always been mission-driven. Whether the topic is internet access, the rise of social media, or new innovations like AI or the metaverse, we apply our mission and our public interest values to communications tools and technology as they evolve. We were traditionally techno-optimists, but as we’ve acknowledged the harms developing online over the years, we’ve evolved into cautious optimists! Our role is to counter the voice of industry in policy circles and to make sure technology works for society. We think in terms of values first – in fact, our values ladder up to the idea of dignity, which was discussed in the Deep Dive session prior to our panel. We’ve relied on the same values since our founding. They include free expression, along with individual control and dignity. We also value safety: both the safety of online communities and the safety of participating in the conversation online, which includes ensuring privacy. Equity – that is, in a pluralistic society with diverse voices, how do we ensure everyone has a chance to speak, and how do we ensure equitable access to the benefits of technology? We advocate for competition, which ensures consumer choice. And we explicitly seek to support, not undermine, democratic institutions and systems.

Macpherson: Relative to those values, what do you see as the major threats of generative AI? And to what extent are these threats new?

Lewis: Some of the threats posed by generative AI are not new at all, nor are they unique to generative AI. In fact, some researchers have been trying to convey the risks of AI for a very long time. For example, a lack of diversity in the datasets used to train the models, or poor-quality datasets, can reflect historical bias, encoding and reinforcing societal prejudices and stereotypes. Models may violate our privacy when they are trained and tested on information about individuals who were never given the choice to be included. And generative AI may wipe out certain types of jobs, especially creator jobs, and cause or perpetuate a great deal of economic dislocation. But we also need to hold on to the extraordinary benefits AI will bring, including in virtually every sector of social impact. Climate science, health care, education, urban planning, equitable delivery of services… AI can help us make sense of vast and complex systems in all of these areas, and more. The challenge of policymaking is to promote these benefits while mitigating the harms as we continue to research and discover them.

Macpherson: So you’re not buying into the idea that the risks of generative AI are “existential,” or that it could wipe out humans?

Lewis: The seemingly human quality of generative AI, especially chatbots like ChatGPT, has led to tension between the researchers who’ve been pointing out AI’s risks all along and the so-called “AI Doomers” – those who warn about the existential risks of generative AI. But some of these warnings are disingenuous, meant to raise the valuations of their companies and to convince Congress that it needs to tread carefully around other, more basic regulations. We can’t let the industry’s fixation on purely speculative harms take our eyes off the risks that AI systems already in use pose every day. More pragmatically, and I think more realistically, we need to address the anxiety and uncertainty around the implications of generative AI for copyright law, and therefore its threats to the creative labor market. The real uncertainty among creatives must not push us to legislate away basic open internet principles – like fair use and the open sharing of online content through hyperlinks – for average internet users. And, of course, there are concerns about its use by bad actors, including to create harmful disinformation narratives.

Macpherson: Given both the extraordinary promise of AI and those potential pitfalls, how are you thinking about public policy for AI?

Lewis: We’re not – at least, we’re not thinking in terms of “AI policy” alone. There are so many potential applications of AI, each with its own potential benefits, risks, considerations, and, yes, politics. We favor an approach of highly targeted and incremental regulation; that is, regulation that recognizes and accounts for a breadth of use cases and potential benefits as well as harms. This regulatory power should be rooted in an expert agency that has the authority to research and study the continuing development of digital platforms, including AI, and to create rules that provide basic standards and consumer protection. We should focus policy on specific applications of the technology, not on bans or restrictions on the technology itself.

Since AI technology is developing so quickly, an easy place to begin regulation is with risk mitigation. That means things like requirements for risk assessment frameworks and mitigation strategies, transparency on algorithmic decision-making and its outcomes, access to data for expert public sector and academic researchers, and impact assessments that show how algorithmic systems perform against tests for bias. The European Union is taking the lead here with the EU AI Act. We also hope Congress will work closely with civil society groups like ours to pursue legislation that centers accountability for upholding human and civil rights. Existing legal regimes, like civil rights protections, can be used and expanded, but with collaboration and input from an expert regulator. An expert agency for digital platforms, including AI, would allow consumer protection and other existing regimes to keep up with the rapid pace of innovation.

Data privacy protections based on data minimization principles are also relevant here, as they would limit the collection and use of data for training models and for targeting their output. Comprehensive privacy legislation like the American Data Privacy and Protection Act (ADPPA) would implement protections that cover all data collection and reduce exploitative and surveillance uses, rather than focus exclusively on AI-related use cases.

We also need to use competition policy to ensure that AI doesn’t entrench existing technology monopolies or create new ones. A regulatory regime for AI built on expertise could create a baseline expectation for products without locking in the dominance of the largest companies. Unlike in the early decades of the internet, AI is developing largely in the private sector, exacerbating the anticompetitive potential of large datasets and language models. Researchers should continue to study the potential for a public AI resource that provides equitable access and competition with the largest companies.

Macpherson: What about voluntary standards, like the ones the White House recently secured from seven leading AI firms? Can they be effective?

Lewis: It’s true that until or unless there are government regulations, AI will be governed largely by the ethical frameworks, codes, and practices of its developers and users. There are exceptions where existing law already applies, like when AI systems produce discriminatory outcomes. The good news is that virtually every AI developer has articulated its own principles for responsible AI development. These principles can cover each stage of the product development process, from pretraining and training of datasets to setting boundaries for outputs, and they incorporate values like privacy and security, equity and inclusion, and transparency. Developers also articulate use policies that ostensibly govern what users can generate. But these policies, no matter how well-intentioned, have significant limits. As in every other industry, voluntary standards and self-regulation are subject to daily trade-offs with growth and profit motives. Bad actors will find ways to undermine or work around usage policies. And let’s face it: some of these AI firms are the same companies – even some of the same people – whose voluntary standards have proven insufficient to safeguard our privacy, moderate content that threatens democracy, ensure equitable outcomes, and prohibit harassment and hate speech. If those standards were sufficient, we wouldn’t have needed to initiate the Movement for a Better Internet.

Macpherson: You talked about the use of generative AI by bad actors, including to create disinformation. What is the nature of that risk?

Lewis: Well, if social media made it cheaper and easier to spread disinformation, generative AI will now make it easier to produce. That means it may increase the number of parties that can create credible disinformation narratives. It will make those narratives less expensive to create and more difficult to detect, because the traditional cues that alert researchers to false information, like language and syntax issues and cultural gaffes in foreign intelligence operations, will be missing. There’s a lot of focus and discussion right now on technological solutions for digital provenance and content authenticity – that is, tools to help detect which content was created with AI. Each of these solutions has its own strengths and weaknesses. Detectors suffer from low accuracy, and bad actors may copy, resave, shrink, or crop images, which obscures the signals that AI detectors rely on. And they all struggle with writing that is not in English.

Macpherson: Speaking of disinformation: When this piece runs in the Social Impact Review, the 2024 U.S. presidential election will be exactly one year away. Are we ready?

Lewis: Well, first, I want to give election officials some credit for what they accomplished in past elections. They learned a lot about foreign interference in 2016, and about the need to protect information integrity after the election itself in 2020. As a result, in our analysis, election officials and the trusted community sources they worked with did a much better job pushing back on misleading narratives in the 2022 elections. So far, experts in the trust and safety community are saying that the threats from generative AI are roughly similar to the ones we already see on social media: the same themes, driven by the same motives, just at higher volume and maybe with more credibility. But there will be new challenges next year. Virtually all of the major platforms have rolled back disinformation policies ahead of the 2024 election cycle. They’re still pretty terrible at enforcing the policies they do have in languages other than English. Tech sector downsizing has reduced resources for online trust and safety. Social media platforms, and media in general, seem to be fragmenting, sometimes to alternative platforms that have less robust policies, fewer resources for enforcement, or explicit “anything goes” approaches to content moderation. And there’s a dangerous new counter-narrative in Congress and the judicial system about the government’s role in content moderation: some are equating consultation between platforms and the government on topics related to national security and public health and safety with censorship.

Macpherson: What is the likelihood that AI regulations will be adopted before the 2024 election?

Lewis: There’s some bipartisan momentum toward doing something in this space. The White House, House, and Senate are holding hearings or calling for comments about the risks of generative AI, in particular, in order to steer potential policy interventions. Both chambers have said they hope to have regulatory frameworks in place before the 2024 election. But the optimism is guarded: as one senator put it, “... in broad strokes, I think that it’s not unreasonable to expect to get something done next year.” I would guess that any regulations focused on AI by that point will be very narrowly targeted, and one area of focus will absolutely be information integrity and elections. We’ve already seen several proposals related to political ads and other revisions to the Federal Election Campaign Act. Measures like that, which work within existing legal frameworks, may be easier to pass before the election. But we shouldn’t let all the buzz around AI keep us from pursuing technology policy solutions we’ve needed for years. A comprehensive data privacy law – like ADPPA – is desperately needed and would be a foundation for legislation focused specifically on AI. It incorporates civil rights principles and empowers an agency for rulemaking, since Congress doesn’t have the agility required to keep up with innovation in this sector. We also need more assertive antitrust enforcement and competition policy to ensure consumers have more choice. Lastly, we advocate for a dedicated digital regulator.

Macpherson: Chris, thank you very much for participating in the panel, and for meeting with me.

Lewis: Thank you.


About the Author:

Lisa Macpherson was a consumer marketing executive with a specialty in digital marketing transformation before participating in the Harvard Advanced Leadership Initiative as a Fellow in 2019. She was an ALI Senior Fellow in 2020 and 2021 while also working as a Senior Policy Fellow, and now Senior Policy Analyst, at Public Knowledge. Lisa focuses on democratic information systems, including disinformation, content moderation, and policies to support local news.

This Q&A has been edited for length and clarity.
