AI Tool Builders and Their Users: What Should We Expect From the Tools and Who Is Responsible When They Fail?
Earlier this month, President Donald Trump signed an executive order on the U.S. artificial intelligence strategy. The plan suggests the need for careful policy attention to this new technology. Microsoft and Amazon have called for regulating facial recognition technology – an especially sensitive application of AI. Google has recently addressed the need for a policy framework for AI overall in its new Perspectives on Issues in AI Governance. This contribution raises important issues and helps to kick off a critical discussion about how to manage the rapid rise of new AI technologies. However, experience with Internet regulation shows that it will be essential to ensure that incentives for responsible treatment of social risks are incorporated before widespread deployment.
What is important about Google’s document is not that it has specific answers to AI policy questions such as the future of work, security, privacy, or fairness; it does not offer much detail on those questions. Rather, it proposes a distinctive set of high-level strategies for addressing AI policy questions. Teasing out these strategies helps clarify the roles that government, industry, and civil society can play, individually and collaboratively, in moving AI policy forward. First and foremost, Google urges us to rely on the laws, institutions, and broad values that we have as a society, rather than trying to reinvent the wheel. The challenge, according to the company, will be in the implementation of those broad values. We are urged not to get distracted by trying to write new principles for AI regulation from scratch. This is sensible as far as it goes, but we can learn a lot from how Google positions AI within this framework of existing rules and values.
On the positive side, Google seems prepared to offer technical expertise in addressing the challenges of deploying AI with policy goals in mind. Yet it is not clear how much responsibility Google is prepared to take when things go wrong with the AI services it offers to the world. Lurking in the background of the Google paper and the entire AI governance debate is the question of how to allocate responsibility and liability between those who make AI tools and services, and those who use the tools. These are critical questions because, as we saw in the first MIT AI Policy Congress, governments around the world are trying to figure out how to ensure that AI deployments across many parts of society are trustworthy.
Google’s opening approach to AI governance leans heavily on self-regulatory models and the hope that the existing regulatory frameworks will address key problems:
There are already many sectoral regulations and legal codes that are broad enough to apply to AI, and established judicial processes for resolving disputes. For instance, AI applications relating to healthcare fall within the remit of medical and health regulators and are bound by existing rules associated with medical devices, research ethics, and the like. When integrated into physical products or services, AI systems are covered by existing rules associated with product liability and negligence. Human rights laws, such as those relating to privacy and equality, can serve as a starting point in addressing disputes. And of course there are a myriad of other general laws relating to copyright, telecommunications, and so on that are technology-neutral in their framing and thus apply to AI applications.
In many ways, this reads like a call to repeat the Internet regulatory model that the United States and other OECD countries adopted in the 1990s.
To date, self- and co-regulatory approaches informed by current laws and perspectives from companies, academia, and associated technical bodies have been largely successful at curbing inopportune AI use. We believe in the vast majority of instances such approaches will continue to suffice, within the constraints provided by existing governance mechanisms (e.g., sector-specific regulatory bodies).
There were many benefits from the mix of regulations that led to the Internet revolution,[1] but we now see that there were also unaccounted-for social costs in the areas of privacy and cybersecurity. There is already evidence that some of those same externalities are reappearing in the deployment of AI technologies, so it is too early to claim that existing structures will meet all of the challenges that AI and data analytics will pose. We can already see examples of AI technologies being deployed without adequate public oversight.
So before we declare that existing policy structures are “curbing inopportune AI use,” as the Google document states, consider the following two well-analyzed and widely discussed examples of the failure of existing law to control harm from AI systems.
COMPAS criminal recidivism prediction system: Thanks to outstanding data-driven, tech-savvy journalism by Julia Angwin and her team at ProPublica (now at The Markup), it became clear that a system widely used by courts to predict criminal recidivism in parole and sentencing decisions has significant racial bias built into its model.[2] While there is a dispute about just what standard of fairness should apply in these cases,[3] there is no question that the system raises unresolved concerns about fairness and discrimination. Despite the evidence of harmful discrimination, many courts continue to use this and other similar systems. None of the legal structures currently in place are doing much to curb this particular risk from automated decision-making. It is true that this particular system is not even sophisticated enough to have a machine-learning model; it is just a set of hard-coded rules. But if the law cannot control these simpler systems in this area, it is hard to imagine how it would tackle more complex AI. Some policymakers have called for a halt in using these systems,[4] but it appears that they are still in widespread use around the country.[5]
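To make concrete what an audit of this kind involves, here is a minimal sketch in Python of the statistic at the center of the ProPublica analysis: the false positive rate, compared across groups. The records, group labels, and rates below are synthetic and invented purely for illustration; they are not drawn from the COMPAS data or model.

```python
# Illustrative sketch only: synthetic records, not the COMPAS data or model.
# It demonstrates the kind of check a disparity audit performs: comparing
# false positive rates (people labeled "high risk" who did not reoffend)
# across demographic groups.
import random

random.seed(0)

def make_record(group, flag_rate):
    """One hypothetical defendant: group label, risk flag, observed outcome."""
    return {
        "group": group,
        "high_risk": random.random() < flag_rate,   # tool's prediction
        "reoffended": random.random() < 0.35,       # observed outcome
    }

# Synthetic disparity baked in for illustration: group B is flagged more often.
records = [make_record("A", 0.30) for _ in range(5000)] + \
          [make_record("B", 0.55) for _ in range(5000)]

def false_positive_rate(rows):
    """Share of non-reoffenders who were nonetheless flagged high risk."""
    innocent = [r for r in rows if not r["reoffended"]]
    return sum(r["high_risk"] for r in innocent) / len(innocent)

for group in ("A", "B"):
    fpr = false_positive_rate([r for r in records if r["group"] == group])
    print(f"group {group}: false positive rate = {fpr:.2f}")
```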
Face Recognition: Studies by MIT and Stanford computer science students have revealed dramatic racial and gender bias in several of the most widely used and technically sophisticated face recognition systems.[6] We now know that these systems, some of which are still on the market, are far less accurate for women and people with darker skin. To the extent that we rely on face recognition for identification in connection with any important professional or personal opportunity, the biases shown in these systems do real harm to women and people of color. Recent news reports suggest the buck is being passed between users (the police) and the toolmaker (Amazon).[7] Some developers, such as Microsoft and IBM, deserve credit for acting quickly to correct these errors. Google has also taken services offline quickly when similar errors were pointed out. But other companies, including Amazon, have instead blamed these harmful mistakes on user error, calling into question whether we should rely on market pressure as a corrective.[8]
These cases challenge Google’s assertion that existing regulatory structures are sufficient in their current form to address hard AI policy questions. However, Google is still correct that we should not try to reinvent the wheel with brand new policy categories, or worse yet, attempt to create a whole new stand-alone field of regulation specially designed to address AI technology.
Bringing these existing regulatory and institutional structures to the point that they can handle the new governance challenges of AI requires:
- More technical expertise in the existing enforcement bodies
- Better tools to assess the safety, robustness, and fairness of AI systems (a minimal sketch of what such an assessment might look like follows this list)
- Sufficient legal authority to compel responsible behavior from those who design or deploy AI without adequate regard for society’s priorities
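As an illustration of the second item, the following sketch shows the shape such assessment tooling might take: a simple harness that probes a model for accuracy disparities across subgroups and for sensitivity to small input perturbations. The toy model, the synthetic data, and the group labels are all hypothetical; a real audit would run the same kinds of probes against the deployed system and its actual evaluation data.

```python
# A minimal sketch of the kind of assessment harness enforcement bodies
# would need. The model, data, and subgroup labels are hypothetical.
import numpy as np

rng = np.random.default_rng(0)

def toy_model(x):
    """Stand-in for a deployed classifier: threshold on a single score."""
    return (x[:, 0] > 0.5).astype(int)

# Hypothetical evaluation set: one feature, true label, subgroup tag.
X = rng.random((2000, 1))
y = (X[:, 0] + rng.normal(0, 0.1, 2000) > 0.5).astype(int)
group = rng.choice(["A", "B"], size=2000)

def accuracy(model, X, y):
    return float((model(X) == y).mean())

# Fairness probe: compare accuracy across subgroups.
for g in ("A", "B"):
    mask = group == g
    print(f"accuracy on group {g}: {accuracy(toy_model, X[mask], y[mask]):.3f}")

# Robustness probe: does accuracy hold up under small input perturbations?
X_noisy = X + rng.normal(0, 0.05, X.shape)
print(f"clean accuracy:     {accuracy(toy_model, X, y):.3f}")
print(f"perturbed accuracy: {accuracy(toy_model, X_noisy, y):.3f}")
```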
Google is doing a commendable amount of research and development on technical tools that contribute to making AI more trustworthy, including work on explainability, interpretability, and fairness assessment.[9] Yet its governance framework is largely silent on the broader question of how we can be sure that these tools are available across the range of AI application areas and how governments and others will learn to use them. Indeed, the paper strikes a somewhat pessimistic note on our current ability to assess the trustworthiness of AI systems. While it is widely accepted that neither the public nor regulators should trust AI systems that are not explainable and subject to interrogation by human users, Google is forthright about the fact that, at least with today’s state of the art:
There are technical limits as to what is currently feasible for complex AI systems. With enough time and expertise, it is usually possible to get an indication of how complex systems function, but in practice doing so will seldom be economically viable at scale, and unreasonable requirements may inadvertently block the adoption of life-saving AI systems.
As long as these technical limitations remain, there will be real gaps in our ability to govern AI deployments in a way that earns public trust. While explanation, interpretation, and methods for assessing robustness are lacking, it will be hard to account for externalities associated with the use of these systems.
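For a sense of what is feasible today, here is a sketch of one widely used interrogation technique, permutation importance: shuffle one input feature at a time and measure how much the model’s agreement with its reference predictions drops. It yields only the kind of rough "indication of how complex systems function" that Google describes, which is part of the gap. The black-box model and data below are toy stand-ins invented for illustration.

```python
# Sketch of permutation importance on a toy "black box" classifier.
# The model and data are hypothetical, chosen only to show the technique.
import numpy as np

rng = np.random.default_rng(1)

# Toy black box: depends heavily on feature 0, weakly on feature 1,
# and not at all on feature 2.
def black_box(X):
    return (2.0 * X[:, 0] + 0.3 * X[:, 1] > 1.1).astype(int)

X = rng.random((5000, 3))
y = black_box(X)  # use the model's own outputs as the reference labels

def accuracy(pred, y):
    return float((pred == y).mean())

baseline = accuracy(black_box(X), y)
for j in range(X.shape[1]):
    X_perm = X.copy()
    X_perm[:, j] = rng.permutation(X_perm[:, j])  # break feature j's signal
    drop = baseline - accuracy(black_box(X_perm), y)
    print(f"feature {j}: accuracy drop when shuffled = {drop:.3f}")
```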
Despite the trustworthiness and reliability gaps evident in our current governance framework, Google seems to want to reduce its own legal responsibility for any harms that may flow from AI-related faults. Per the last section of the paper:
Google recommends a cautious approach for governments with respect to liability in AI systems since the wrong frameworks might place unfair blame, stifle innovation, or even reduce safety. Any changes to the general liability framework should come only after thorough research establishing the failure of the existing contract, tort, and other laws.
With this, Google seems to want us to treat AI technology as if it were a wholly new species of technology, as if the software and data that make up neural nets and other AI systems had nothing to do with the software, hardware, and networks that surround us today. Google’s case for limited regulation and liability could apply to more or less any new technology. On the contrary, the AI systems that are the subject of these governance discussions have many similarities, in technical, business, and social terms, to the Internet, software, and service infrastructure we have today: they are tools. While innovative in many respects, AI will be offered as software, as platforms, or as part of vertically integrated services. As with today’s Internet-based software, platforms, and services, users of AI tools will depend on them for a wide range of business, personal, and public sector functions. But, also like today’s Internet tools, users will have relatively little knowledge of or ability to shape the fundamental features of the tools. And like today, the platform providers (Google, Amazon, Facebook, and others) will be in a position to shape the way that new AI tools are delivered.
By contrast, the liability limits imposed by the US Congress (Section 230 of the Communications Decency Act of 1996), the European Commission (the e-Commerce Directive), and other legislatures to protect Internet platforms in the 1990s were based on some very particular technical limitations of those platforms at the time and on the nascent state of the Internet marketplace. What is more, there was a real risk to online free speech: if platforms were held responsible for the speech of their third-party users, those same platforms would have strong incentives to restrict that speech. The corresponding risk for AI is that tool providers might discourage or prevent uses that are not verifiably safe and trustworthy. Perhaps that would actually be a good thing.
Before we preemptively declare that AI toolmakers need protection from liability, we should consider experience from today’s Internet environment.
There is clear evidence that early liability limits in software licensing led to radical underinvestment in security during the PC era. It was not until major, systemic vulnerabilities were identified in platforms such as Microsoft Windows and Google’s online services that those companies launched major security engineering efforts. Those efforts produced dramatic improvements in the security of those products and services. Yet today many other software and platform offerings suffer from major security weaknesses, and as the industry as a whole struggles to adopt effective security practices, societies around the world bear substantial monetary and non-monetary losses from those failures.
Following a number of serious privacy scandals, including but not limited to the Facebook and Cambridge Analytica affair, policymakers around the world now recognize that national laws have been late to take privacy risks seriously.
Careful, light-touch regulation of early Internet platforms produced a mix of innovation and great economic and social growth, but also real, unaddressed externalities. So where does that leave us as to the responsibilities of AI tool makers and users? The approach outlined in much of the Google paper, relying on existing sectoral regulation rather than new AI-specific laws, is a good start toward trustworthy and innovative deployment of AI tools. But it must include taking a clear-eyed view of the risks and externalities that will come along with some AI applications, and making sure that the rules, whether existing or new laws, will address them. There are many different approaches to analyzing and allocating liability for complex systems such as these. We should look at the options carefully, but make sure that we do not leave this critical question unanswered.
Going forward, let us avoid the mistakes we made in the software and Internet marketplaces, where rapid deployment was emphasized to the exclusion of incentives for responsible treatment of the social risks inevitable in any complex new system. To avoid those mistakes, we should start by assuring that tool builders have the incentive to build mechanisms that provide clear, empirical measures of the trustworthiness, safety, and reliability of the tools and systems they provide. These technical capabilities are an essential prerequisite to the thoughtful application of the existing safety and liability rules that Google cites. Accountability and transparent operation of AI tools are a must for any widespread deployment of AI systems in which there is a risk of harm.
Google and many, but not all, other AI tool builders have shown an early commitment to research in the areas of interpretability and explainability. The path forward to broad adoption and acceptance of AI tools is to deepen that research and figure out how these tools can be deployed in the service of making risk more transparent. This will give industry, civil society, and governments a clear view of the risks that may arise, enabling thoughtful decisions about where responsibility for harm lies. Getting those decisions right, and having them enforced in law, are essential to building public trust in AI applications.
NOTE: An earlier almost identical version of this text was originally published at https://internetpolicy.mit.edu/ai-tool-builders-and-their-users/
[1] Weitzner, D.J. (2018), “Promoting Economic Prosperity in Cyberspace,” Ethics & International Affairs, 32(4), 425-439.
[2] Angwin, J., Larson, J., Mattu, S. and Kirchner, L., “Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks.” ProPublica, May 23, 2016.
[3] Kleinberg, J., Mullainathan, S., and Raghavan, M. (2016), “Inherent trade-offs in the fair determination of risk scores,” arXiv preprint arXiv:1609.05807.
[4] Guest opinion, Rep. Greg Chaney, “Idaho must eliminate computerized discrimination in its criminal justice system,” Idaho Press, February 6, 2019.
[5] Lapowsky, I., “Crime-predicting Algorithms May Not Fare Much Better Than Untrained Humans,” Wired, January 17, 2018.
[6] Buolamwini, J., and Gebru, T., “Gender shades: Intersectional accuracy disparities in commercial gender classification,” in Conference on Fairness, Accountability and Transparency, January 2018, pp. 77-91.
[7] Menegus, B., “Defense of Amazon’s Face Recognition Tool Undermined by Its Only Known Police Client,” Gizmodo, January 14, 2019.
[8] Farivar, C., “Amazon: Cops should set confidence level on facial recognition to 99%,” Ars Technica, July 30, 2018.
[9] For example, Doshi-Velez, F., and Kim, B., “Towards a rigorous science of interpretable machine learning,” arXiv preprint arXiv:1702.08608, 2017.