AI leaders reveal their hopes and concerns for the technology in conversations with GMF.

In a first-of-its-kind initiative, GMF launched a series of interviews with leaders in artificial intelligence (AI) from the worlds of policy, business, and technology. This series reflects the organization’s commitment to creating unique dialogue to foster action across technology and policy silos and on issues of transatlantic importance. AI has rapidly emerged as one of these issues.

The conversations kicked off with leading investors Marc Andreessen, Reid Hoffman, and Vijay Pande. Interviews were also held with Mistral’s Arthur Mensch, who provided a European perspective, and with leading scholars Anu Bradford and Chris Miller. Policymakers included Dragos Tudorache, member of the European Parliament and AI Act rapporteur; Tsiporah Fried, senior adviser to the vice chair of the French joint chiefs of staff; and Cybersecurity and Infrastructure Security Agency (CISA) Director Jen Easterly.

Discussions with leaders from DeepMind, US and European diplomats, and those working on AI’s military applications are forthcoming.

These interviews reveal many differences but some consensus. There are five takeaways of note.

First, AI offers enormous opportunity.

  • Economic growth: AI can lower costs and improve the quality of services, including education and health care. The technology can benefit millions who lack reliable access to these services today. 
  • Drug discovery: AI can usher in a new “industrial revolution” in pharmaceuticals. Today, researchers who suspect that a given chemical can treat a disease run experiments on it. AI will enable greater understanding of the underlying biology and, therefore, put researchers in a better position to predict effective chemicals and test them.
  • Health care delivery: An “industrial revolution” in delivery means shifting from one-off meetings with doctors to systematic diagnoses and matching of drugs with an individual’s condition and genetic makeup. This can expand health care access for millions worldwide who currently lack a general practitioner.
  • Educational access on learners’ terms: AI could provide every child with a world-class tutor who can determine individual learning needs. 

Second, the geopolitical contest with China is a critical factor in thinking about AI.

  • Competition: Although China lags the West in key elements of AI, it is striving to close the gap. China today spends as much money importing semiconductor chips as it does importing oil, and its investment in chip innovation dwarfs that of the United States, Europe, and Japan combined. Given China’s long-term commitment to subsidies, its large domestic market, aggressive private companies, and giant research complex, Washington cannot be complacent. 
  • Innovation: Over the long run, the West should try to out-innovate China through new chip architectures, efficiencies, and the training and deployment of AI models at scale. The US government has a good track record of funding prototypes that it then transfers to the private sector to become breakthrough products.
  • Access: China’s Digital Silk Road is building infrastructure that carries cyber vulnerabilities and enables governments to spy on their citizens. The West must respond to this challenge and make AI globally available. Mensch argues that open models will allow entrepreneurs to create local differentiation.

Third, a gulf exists between technology leaders and policymakers, but a transatlantic consensus is emerging on methods for interacting to foster innovation and mitigate risk.

  • Understanding regulatory outlook and risk: Technology investors often see policymakers as overly risk-averse and too quick to regulate in ways that may stunt innovation.
  • Government action is inevitable and essential: Even the technology leaders interviewed acknowledge that existing regulations and enforcement capabilities may need updating to confront new versions of problems for which laws already exist. They also concede the need to address emerging challenges, including facial recognition bias and cybersecurity risks such as AI-enabled malware that can evade defenses.
  • New modes of cooperation enable government and industry to co-create guidelines: Policymakers are creating processes to bring together stakeholders to develop and fine-tune guardrails. CISA, for example, in coordination with international partners and AI companies, created nonbinding guidelines for infrastructure developers. 
  • Agreement to focus guardrails on uses rather than on underlying technology is growing: In the United States, a Biden administration executive order instructs expert agencies to update rules for and enforcement of current laws concerning AI applications. The EU’s AI Act takes a “risk-based approach” that imposes greater obligations on applications as their potential harms increase. In both jurisdictions, however, information on large language models must be shared with regulators, and risks associated with these models will be evaluated with industry representatives.

Fourth, interviewees highlighted specific examples of smart cooperation.

  • New approach to health regulation: Many regulations, including those from the Food and Drug Administration, will need updating to take advantage of AI. 
  • Chip subsidies boost competitiveness: The driver of AI advances is better and cheaper computing power, enabled by more capable chips. To remain competitive, governments will need to continue supporting the chip industry. 
  • Securing the vote: AI is likely to be used to challenge election integrity. Deepfakes deceive voters into believing hoaxes and denying truths. Foreign adversaries remain intent on using deepfakes and other tools to interfere in US elections. Election officials must fortify systems and be ready to respond to rumors about election integrity. Industry, working with government, should deploy watermarking, and citizens must be wary of the information they receive.
  • Upgrade cybersecurity: To mitigate risks from terrorists, cyber criminals, or rogue nations using AI systems for malicious purposes, government experts are working with industry to develop guidelines and engage in red teaming. 
  • Defensive use of technology: The White House is cryptographically verifying official content so that citizens can check its authenticity. Generative AI and large language models are finding vulnerabilities commonly introduced by older, less secure programming languages and rewriting code to improve cybersecurity.
  • Information as a battlefield: The importance of controlling the narrative means that war could be won even before any weapons are fired. Democracies must address this without impinging on free speech rights and without engaging in propaganda. That will be a major challenge.

Fifth, Europe has strengths in the technology sector but faces challenges to increasing its competitiveness. 

  • Unleash talent: Europe has great talent pools for generative AI, which requires small, creative teams. France and the United Kingdom have clusters of AI startups, and key AI technologies, including those at DeepMind, were developed in London. Still, many Europeans come to the United States to work in technology.
  • The lack of a single market hinders economies of scale: Europe has yet to harmonize national regulations and lacks deep, integrated capital markets. Individual countries are beginning to compete by offering lower taxes, lighter regulation, or lower wages.
  • A less dynamic framework: Immigration is a big part of the innovation story in the United States, where most technology unicorns have an immigrant founder, but Europe lacks comparable pathways to success for immigrants. Risk-taking is also hindered by punitive bankruptcy laws.

These are the early days of AI, and the speed of change is without precedent. GMF aims to continue fostering debate on the best options for unleashing the technology’s potential—and addressing the serious concerns that accompany it.