A Historical Model for AI Regulation and Collaboration

What the Human Genome Project can teach us about channeling a revolutionary technology for public benefit—and why nuclear weapons are a counterproductive analogy.

In May 2023, the CEOs of top AI companies signed a public statement comparing the risk of extinction from artificial intelligence (AI) to the risk of nuclear war. The statement succeeded in making headlines, dredging up imagery from the Terminator franchise, but it failed to steer the world's approach to AI in a helpful direction. In fact, many of the ideas about what AI can achieve have been influenced by the notion that it is as powerful as a nuclear weapon. And by "weaponizing" the technology this way, we have made it much harder to regulate; the framing has undoubtedly led to policies aimed at stockpiling resources to achieve national supremacy over the tech.

Instead of hoarding access to AI and focusing solely on risk mitigation, universities, national laboratories, and industries from around the world need to work together to advance the technology's benefits. This may seem like an overly hopeful, even impossible task, but not long ago humanity accomplished just such a collaboration and advanced the benefits of another controversial technology: genetic sequencing. In 1990, governments around the world, with the leadership of the United States, began a 13-year effort to map human DNA through the Human Genome Project (HGP). I believe we can achieve such cooperation again to ensure AI advancements help humanity thrive.

A Look to the Past: The History of the Human Genome Project

Initially funded through the National Institutes of Health and the US Department of Energy, under the leadership of individuals like biomedical scientist Charles DeLisi, the HGP became an international effort in response to global concerns around the ethical, legal, and societal implications of mapping the human genome. Given these broad concerns, an interdisciplinary team of biologists, physicists, chemists, computer scientists, mathematicians, and engineers was engaged in the project. The sequencing was done by universities across the US, UK, Japan, France, Germany, and China, and the effort also included a complementary industry program led by Celera Genomics (now part of Quest Diagnostics).


As with AI, one of the main concerns around genome mapping was privacy. People were afraid that employers and health insurance companies would use the data from genome mapping to discriminate, and they demanded a public policy response. In 1996, the United States passed the Health Insurance Portability and Accountability Act (HIPAA), which protects Americans from the unauthorized, non-consensual release of individually identifiable health information to any entity not actively engaged in providing health-care services to a patient. Ethical and legal concerns around genetic sequencing were also addressed and mitigated through the creation of the Ethical, Legal, and Social Implications (ELSI) Research Program, which dedicated up to five percent of the annual HGP budget to studying these emerging issues.

Today, with generative AI, we face similar unknowns and anxieties. We have reached a level of scientific breakthrough that points to immense promise, and to real risk should the technology fall into the wrong hands. Through the Human Genome Project, humanity chose to rally a multidisciplinary set of global stakeholders to co-develop the technology and share it for the betterment of society, and that choice paid off significantly, both scientifically and economically. We should follow this instinct again.

The HGP provides a playbook for developing successful co-governance mechanisms and public policy that fosters cooperation among governments, productive innovation across industries, and investment in ethical, legal, and social safeguards. At a moment when governments are trying varying approaches to AI without much ability to put concrete, consistent governing rules in place, the HGP points to specific steps to take.

The Human Genome Project as an Actionable Roadmap: US Leadership Through Global Collaboration

The first actionable takeaway from the HGP is that the US can lead while simultaneously fostering global collaboration that shapes public policy as well as research and development. Initiated by the NIH in partnership with the Department of Energy, the Human Genome Project became a whole-of-government effort once it was codified through budgetary allocations and public policy, such as the passage of HIPAA, which carried the initiative's commitments onto the global stage.

There has been movement on US leadership in AI, and that trend should continue alongside establishing a lever for global collaboration. In October 2023, the White House released an executive order on AI that aimed to accomplish three things: shaping AI safety standards through the might of government procurement, boosting the AI workforce, and addressing national security concerns. Likewise, congressional leaders have vowed to do "years of work in a matter of months" to ensure that America leads the world in advancing policies that encourage innovation. Policymakers from both sides of the aisle have spoken about AI's potential to reshape industries, revolutionize daily life, and drive economic growth. Yet more than six months later, the US still has much work to do on responsible AI regulation, and the National Institute of Standards and Technology, the agency tasked with overseeing a new generation of artificial intelligence models, is plagued by budget cuts and leaky roofs. To advance its position, the US government should institutionalize AI within a department that can drive global cooperation and public policy development while serving as the budgetary home to a global AI effort, one that moves beyond risk mitigation toward harnessing the power of AI for social good.

The second milestone of the HGP with implications for the current moment is the level of global cooperation codified in the Bermuda Accord, which laid out rules for the rapid, public release of DNA sequence data. The multinational effort to map the human genome generated vast quantities of data about the genetic makeup of humans and other organisms, and shared principles were required to manage it. Governments agreed that all DNA sequence data would be released in publicly accessible databases within 24 hours of generation, and that the entire sequence would be freely available in the public domain for research and development, to maximize benefits to society. The Bermuda Accord shaped the practices of an entire industry and established rapid pre-publication data release as the norm in genomics and other fields.

In the nuclear space, a similar convening occurred in Pugwash, Nova Scotia, which is why leading scholars have called for AI's "Pugwash Moment." It has become clear that country-specific gatherings, like the UK's AI Safety Summit, are not sufficient for developing a shared vision and approach. AI requires global action, and thus a convening similar to the one that produced the Bermuda Accord is necessary, at which governments and industry would set out rules for deploying new large language models (LLMs), establish principles for keeping those LLMs open, identify ways to mitigate potential harms, and form a vision for how public LLMs can further research and development that maximizes benefits to society. To responsibly capitalize on AI, we should drive toward international cooperation that enables us to govern the development of safe and reliable foundational models; provide industry, academia, and civil society with access to the best possible data; and invest in mitigating ethical, legal, and social concerns.

Third, the HGP showed us that the government can be a responsible and innovative actor when it invests in its own capacity, and that this need not impede private sector growth. Congress allocated $3.8 billion to the Human Genome Project from 1988 to 2003, an investment that helped drive $796 billion in economic impact and generate $244 billion in total personal income, a return to the US economy of more than 141 to 1. In 2010 alone, that impact included 310,000 jobs, $20 billion in personal income, and $3.7 billion in federal taxes. This significant ROI demonstrated that the private sector could thrive even as the government led an open and transparent effort to map the human genome. The investment was also tied to ethical considerations: up to 5 percent of the budget supported research on the ethical, legal, and social implications of mapping the human genome.

In the case of AI, Congress should again allocate funds to enable an effort that has national scope and involves cooperation among all three components of the R&D community: universities, national laboratories, and the private sector. The effort should also entail international cooperation with a consortium of like-minded institutions across the global North and South, and that consortium should form the network of scientists, ethicists, and technologists who will develop ethical approaches to using AI and carry out the subsequent research and continuous development of models and applications. Without true investment in research and development that ensures AI serves the public interest, it is unlikely that we will see the same kind of positive societal impact from AI that we saw from the HGP.

With the economic prosperity that can stem from global cooperation on AI, this is not the time to be frugal. The US government must invest in the promise of AI with as much gusto as it invested in the Human Genome Project all those years ago.

Is There Promise in AI?

To be sure, the comparison between generative AI and genetic sequencing isn't perfect. Capitalizing on generative AI doesn't yet have a neat endpoint like the HGP's, at which we can call the "project" complete. And in many ways, we will need to govern globally from behind, because much of the technology and many of the foundational models are already out there. But even here, all is not lost. An HGP-like effort could create synergy across the various global approaches to regulating the technology, create rules for the continuing development and deployment of LLMs, and, most importantly, start to build the public-interest AI equivalent of the HGP. This would be an open, public LLM developed with the values of public interest technologists and openly supported and maintained for the civic institutions that would use it to work on the grand challenges of our time. The HGP showed us that completing the genome map was just the beginning of large-scale advances in medicine, agriculture, energy, and environmental conservation.

Some could also argue that the recently established US AI Safety Institute and its global consortium might be the answer. The institute's mission is to advance the science of AI safety and set rigorous standards for testing models and ensuring their safety for public use. But here again, compared with the HGP, the focus is limited to safety and risk mitigation; it does not endeavor to build out the public AI infrastructure that would seed the kind of scientific innovation and economic development the HGP produced.

Yes, it is difficult to steer away from mentally framing a technology that prompts anxieties about the unknown as the next "nuclear weapon," and the importance of safety is not to be minimized. But our current AI moment requires public leadership that guides us toward a vision of AI serving our public and social interest, one modeled on global cooperation and co-governance, not pure weaponization.

What once took humanity 134 years, the identification of a new kind of cell, now takes six weeks thanks to developments in generative AI. The discovery was made last summer by researchers at Stanford who trained AI systems similar to ChatGPT on raw data from real cells to see if the systems could teach themselves biology. This is just one of many breakthroughs that AI has helped, and can help, us achieve. We cannot be paralyzed by the daunting nature of regulating AI or downplay its risks; nor should we approach the technology the way we have approached nuclear weapons. Instead, we must look to our own history for how to move toward a collaborative process that will enable us to govern the development of safe and reliable foundational models; foster equitable access and opportunity for innovation built on the best possible data; and invest in mitigating the ethical, legal, and social concerns that stem from AI.

Lilian Coral
Lilian Coral is vice president of New America's Technology and Democracy programs and head of the Open Technology Institute. She is a Public Voices Fellow on Technology in the Public Interest with The OpEd Project in partnership with The MacArthur Foundation.