GOVERNING AI
Scott J. Shackelford* & Rachel Dockery**
Artificial intelligence (AI) is increasingly pervasive and essential to
everyday life, powering everything from apps and smart devices to autonomous
vehicles and medical devices. Yet along with the promise of an increas-
ingly interconnected and responsive Internet of Everything, AI is usher-
ing in a host of legal, social, economic, and cultural challenges. The
variety of stakeholders involved – spanning governments, industries, and
users around the world – presents unique opportunities and governance
questions for how best to facilitate the safe and equitable development,
deployment, and use of innovative AI applications. Regulators around
the world at the state, national, and international levels are actively con-
sidering next steps in regulating this suite of technologies, but with little
sense of how their efforts can build on and reinforce one another. This
state of affairs points to the need for novel approaches to nested govern-
ance, particularly among leading AI powers including the United States,
European Union, and China. This Article provides an overview of AI and
the numerous challenges it presents with special attention being paid to
autonomous vehicles, along with exploring the lessons to be learned
from polycentric governance frameworks and how to apply such social
science constructs to the world of AI.
INTRODUCTION
I. WELCOME TO THE MACHINE LEARNING REVOLUTION
   A. Defining Artificial Intelligence
   B. Technical Approaches to AI
   C. Applications and Benefits of AI
      1. Health & Medicine
      2. Transportation
      3. Cybersecurity
   D. Challenges Presented by AI
      1. Economic Challenges
      2. Social and Cultural Challenges
      3. Legal & Ethical Challenges
* Chair, IU-Bloomington Cybersecurity Program; Director, Ostrom Workshop; Associate Professor of Business Law and Ethics, Indiana University Kelley School of Business. Special thanks go to Kalea Miao for her outstanding and invaluable research on this project, particularly with helping to prepare the autonomous vehicles case study.
** Research Fellow in Cybersecurity and Privacy Law, IU Maurer School of Law; Acting Executive Director, IU Cybersecurity Clinic.
      4. Summary
II. U.S. APPROACH TO REGULATING AI
   A. Applicable U.S. Federal Law
   B. Applicable U.S. State Law
   C. Applicable U.S. Case Law
III. COMPARATIVE APPROACHES TO AI GOVERNANCE
   A. Private Initiatives & Ethical Frameworks
   B. European Union
   C. China
   D. International Law & Organizations
   E. Summary
IV. TOWARD A POLYCENTRIC MODEL FOR AI GOVERNANCE
V. AI GOVERNANCE CASE STUDY: AUTONOMOUS VEHICLES
   A. U.S. Approach to AV Governance
   B. EU Approach to AV Governance
   C. Chinese Approach to AV Governance
   D. Other Notable International AV Governance Actions
   E. Summary
VI. IMPLICATIONS FOR MANAGERS AND POLICYMAKERS
CONCLUSIONS
“People worry that computers will get too smart and take over the world,
but the real problem is that they’re too stupid and they’ve already taken
over the world.”^1
INTRODUCTION
History was made in a Phoenix suburb during the winter of 2018—
customers began paying for “robot rides” from Waymo, the self-driving
car company that has emerged from Google’s efforts.^2 Competing auton-
omous shuttle services such as May Mobility quickly followed suit,^3 as
have major automobile manufacturers such as Ford, which has pledged
to have a fully autonomous car available by 2021 in a bid to create a new
industry that, according to some estimates, will be worth some $7 trillion
to the global economy while potentially saving thousands of lives.^4
But this is just the tip of the iceberg when considering the myriad poten-
1 PEDRO DOMINGOS, THE MASTER ALGORITHM: HOW THE QUEST FOR THE ULTIMATE LEARNING MACHINE WILL REMAKE OUR WORLD 286 (2015).
2 See Alex Davies, The Wired Guide to Self-Driving Cars, WIRED (Dec. 13, 2018), https://www.wired.com/story/guide-self-driving-cars/.
3 Id.
4 Id.; Ford Targets Fully Autonomous Vehicle for Ride Sharing in 2021; Invests in New Tech Companies, Doubles Silicon Valley Team, FORD MEDIA CENTER, https://media.ford.com/content/fordmedia/fna/us/en/news/2016/08/16/ford-targets-fully-autonomous-vehicle-for-ride-sharing-in-2021.html (Aug. 16, 2016).
underappreciated and scarcely attempted in the literature. Indeed, there
has not yet been a single effort in a legal publication to unpack the bene-
fits and drawbacks of polycentric governance as applied to AI, despite
the fact that it has been successfully applied to a variety of related issue
areas.^11
Since it was first used in the 1951 book The Logic of Liberty by
Professor Michael Polanyi, polycentric governance has become a widely
discussed concept built by scholars from around the world, including
Lon Fuller, Nobel Laureate Elinor Ostrom, and Professor Vincent Os-
trom, to name a few.^12 Although some confusion continues in the litera-
ture about the exact contours of the concept, in general it is an
overlapping multidisciplinary, multi-level, multi-purpose, multi-func-
tional, and multi-sectoral model^13 and, as such, “may be capable of strik-
ing a balance between centralized and fully decentralized or community-
based governance.”^14 It is noteworthy both for its breadth (given that it
has been used to analyze everything from fishery management to orbital
debris mitigation) as well as for the fact that it challenges orthodoxy,
such as by demonstrating the benefits of self-organization and network-
ing regulations “at multiple scales.”^15 One key finding is that, often due
to the existence of free-riders in a multipolar world, “a single govern-
mental unit” or treaty regime is incapable of managing “global col-
lective action problems”^16 such as cyber-attacks. Instead, a polycentric
approach can promote “flexibility across issues and adaptability over
time”^17 by recognizing both the common but differentiated responsibili-
11 Cf. Ryan Calo, Artificial Intelligence Policy: A Primer and Roadmap, 3 U. BOLOGNA L. REV. 180 (2018) (analyzing policy responses to AI challenges); Michael Guihot, Anne F. Matthew, & Nicolas P. Suzor, Nudging Robots: Innovative Solutions to Regulate Artificial Intelligence, 20 VAND. J. ENT. & TECH. L. 385, 414 (2017) (exploring the myriad challenges of governing AI systems).
12 MICHAEL POLANYI, THE LOGIC OF LIBERTY (Karl Mannheim ed. 1951).
13 Michael D. McGinnis, An Introduction to IAD and the Language of the Ostrom Workshop: A Simple Guide to a Complex Framework, 39 POL’Y STUD. J. 169, 171 (2011) (“Polycentricity is a system of governance in which authorities from overlapping jurisdictions (or centers of authority) interact to determine the conditions under which these authorities, as well as the citizens subject to these jurisdictional units, are authorized to act as well as the constraints put upon their activities for public purposes.”).
14 Keith Carlisle & Rebecca L. Gruby, Polycentric Systems of Governance: A Theoretical Model for the Commons, 47 POL’Y STUD. J. 927, 928 (2019).
15 See Elinor Ostrom, Polycentric Systems as One Approach for Solving Collective-Action Problems 1 (Ind. Univ. Workshop in Political Theory and Policy Analysis, Working Paper Series No. 08–6, 2008), http://dlc.dlib.indiana.edu/dlc/bitstream/handle/10535/4417/W08-6_Ostrom_DLC.pdf?sequence=1.
16 Elinor Ostrom, A Polycentric Approach for Coping with Climate Change 35 (World Bank, Policy Research Working Paper No. 5095, 2009), http://www.iadb.org/intal/intalcdi/pe/2009/04268.pdf.
17 Robert O. Keohane & David G. Victor, The Regime Complex for Climate Change, 9 PERSP. ON POL. 7, 15 (2011); cf. Julia Black, Constructing and Contesting Legitimacy and Accountability in Polycentric Regulatory Regimes, 2 REG. & GOVERNANCE 137, 157 (2008)
ties of public and private sector stakeholders in AI, which can generate
positive network effects that could, in time, result in the emergence of a
norm cascade improving AI governance.^18 Yet, these systems are also
prone to certain “syndromes” that can lead to dysfunction,^19 and even
fragmentation, meaning that both the benefits and drawbacks of this ap-
proach must be critically assessed.
This Article provides an overview of the development of AI and the
numerous challenges it presents with special attention being paid to
healthcare, autonomous vehicles, and cybersecurity, focusing on lessons
to be learned from polycentric governance frameworks. We argue that
the hybrid governance structures emerging to manage a range of AI applications
may, in fact, be best-case scenarios, but that the divergence between the
major AI powers—namely the United States, China, and the European
Union (particularly when it comes to AI regulation)—threatens the
integrity of this system absent deeper coordination. The Article is struc-
tured as follows. Part I briefly summarizes the ML revolution, including
coverage of the myriad benefits and potential economic, legal, and cul-
tural impacts associated with AI. Part II summarizes the regulatory ap-
proaches to AI that have been tried to date at the international, federal,
and state levels. Part III then introduces the field of polycentric govern-
ance, delving in particular into the dominant principles, layered govern-
ance structures, and frameworks that the field has generated, including
the Ostrom Design Principles as well as references to the Institutional
Analysis and Development (IAD), the Social-Ecological-Systems (SES),
and the Governing Knowledge Commons (GKC) Frameworks in order to
see how these may be useful in addressing governance gaps. Part IV then
applies this approach through a case study focusing on autonomous vehi-
cle governance between the AI powers. Part V finally summarizes impli-
cations for policymakers and managers. Ultimately, we find that
polycentric governance is a helpful, though imperfect, lens through
which to view AI governance, but that additional research is required to
update these social science principles and frameworks to better fit the
ML revolution.
(discussing the legitimacy of polycentric regimes, and arguing that “[a]ll regulatory regimes are polycentric to varying degrees”).
18 See generally Martha Finnemore & Kathryn Sikkink, International Norm Dynamics and Political Change, 52 INT’L ORG. 887, 902–03 (1998). For a deeper dive on this topic, see Chapter 2 in SCOTT J. SHACKELFORD, MANAGING CYBER ATTACKS IN INTERNATIONAL LAW, BUSINESS, AND RELATIONS: IN SEARCH OF CYBER PEACE (2014).
19 See Michael D. McGinnis et al., When is Polycentric Governance Sustainable? (Ostrom Workshop Working Paper, Sept. 14, 2020), https://ostromworkshop.indiana.edu/pdf/seriespapers/2020fall-colloq/mcginnis.pdf.
lack of funding, AI did not achieve substantial breakthroughs until there
were significant changes to these factors over the past decade.^29 The re-
mainder of this part will provide a brief overview of the field of AI to-
day, its application across various sectors, and its multitude of potential
benefits before going on to explain some of the challenges and concerns
surrounding its widespread deployment.
A. Defining Artificial Intelligence
The authors of Stanford University’s One Hundred Year Study on
Artificial Intelligence describe artificial intelligence as a “[s]cience and a
set of computational technologies that are inspired by—but typically op-
erate quite differently from—the ways people use their nervous systems
and bodies to sense, learn, reason, and take action.”^30 Defining “artificial
intelligence,” however, is a more difficult task due to a general lack of
agreement about what it means for something to be in-
telligent.^31 Another possible explanation behind the difficulty of defining
AI is the fact that “[f]rom a technical perspective, [AI] is not a single
technology, but rather a set of techniques and sub-disciplines ranging
from areas such as speech recognition and computer vision to attention
and memory, to name just a few.”^32 Some researchers suggest that the
absence of a universal definition of AI has allowed the field to flourish,^33
while others note that practical definitions may more appropriately state
a measure for intelligence still referred to as the Turing Test. “Restated in modern terms, the ‘Turing Test’ puts a human judge in a text-based chat room with either another person or a computer. The human judge can interrogate the other party and carry on a conversation, and then the judge is asked to guess whether the other party is a person or a computer. If a computer can consistently fool human judges in this game, then the computer is deemed to be exhibiting intelligence.” Preparing for the Future of Artificial Intelligence, NAT’L SCI. & TECH. COUNCIL COMM. ON TECH. at 5, n.4 (Oct. 2016), https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf.
29 See David Bollier, Artificial Intelligence, The Great Disruptor: Coming to Terms with AI-Driven Markets, Governance, and Life, ASPEN INST., at 3 (2018), http://csreports.aspeninstitute.org/documents/AI2017.pdf (identifying the three primary drivers of AI growth in today’s market as “dataset sizes, Moore’s law [computing efficiency], and demand.”).
30 STANFORD UNIV., supra note 27, at 4.
31 JACOB TURNER, ROBOT RULES: REGULATING ARTIFICIAL INTELLIGENCE 7 (2019); see JERRY KAPLAN, ARTIFICIAL INTELLIGENCE: WHAT EVERYONE NEEDS TO KNOW 1 (2016) (noting that defining AI is “an easy question to ask but a hard one to answer” due to “little agreement about what intelligence is.”).
32 Urs Gasser & Virgilio A.F. Almeida, A Layered Model for AI Governance, 21 IEEE INTERNET COMPUTING 58, 59 (2017).
33 STANFORD UNIV., supra note 27, at 12 (“Curiously, the lack of a precise, universally accepted definition of AI probably has helped the field to grow, blossom, and advance at an ever-accelerating pace. Practitioners, researchers, and developers of AI are instead guided by a rough sense of direction and an imperative to ‘get on with it.’”).
the goals of intelligence – either to achieve human-like characteristics or
to behave rationally.^34
While agreeing upon a universally accepted definition of AI has not
been critical for advancements in the field, it will be an essential part of
regulating or attempting to govern AI. For the purposes of this Article,
we will utilize the definition of AI provided by Nils J. Nilsson in The
Quest for Artificial Intelligence: “[A]rtificial intelligence is that activity
devoted to making machines intelligent, and intelligence is that quality
that enables an entity to function appropriately and with foresight in its
environment.”^35 This definition is broad enough to capture the wide ar-
ray of computational technologies and applications enabled by artificial
intelligence, while narrow enough to differentiate AI from big data or
other analytics. It also goes beyond defining AI by a desire to achieve
human-like characteristics, recognizing that AI can surpass human per-
formance in certain tasks, and approach others from a distinct
perspective.
Within the broad scope of AI, there are two commonly recognized
types: narrow AI and general AI, also called weak and strong AI, respec-
tively. Narrow AI is the type we often think of today, where machines or
algorithms are designed to perform a specific task or a set of specific
tasks.^36 Narrow AI influences much of our daily life, “[f]rom using a
virtual personal assistant to organise our working day, to travelling in a
self-driving vehicle, to our phones suggesting songs or restaurants that
we might like.”^37 By contrast, artificial general intelligence refers to sys-
34 For an exploration of various definitions of AI and their implications, see Bernard Marr, The Key Definitions of Artificial Intelligence (AI) That Explain Its Importance, FORBES (Feb. 14, 2018, 1:27 AM), https://www.forbes.com/sites/bernardmarr/2018/02/14/the-key-definitions-of-artificial-intelligence-ai-that-explain-its-importance/#48d0f5464f5d (“[W]e’re not all operating from the same definition of the term and while the foundation is generally the same, the focus of artificial intelligence shifts depending on the entity that provides the definition.”); TURNER, supra note 31, at 16 (“Artificial intelligence is the ability of a non-natural entity to make choices by an evaluative process.”); NILS J. NILSSON, THE QUEST FOR ARTIFICIAL INTELLIGENCE: A HISTORY OF IDEAS AND ACHIEVEMENTS xiii (2010) (“[A]rtificial intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment.”); ENGLISH OXFORD LIVING DICTIONARIES, https://en.oxforddictionaries.com/definition/artificial_intelligence (last visited Sept. 24, 2019) (defining artificial intelligence as “the theory and development of computer systems to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.”).
35 NILSSON, supra note 34, at xiii.
36 Narrow AI is the technology underpinning various “commercial services such as trip planning, shopper recommendation systems, and ad targeting,” and it is “finding important applications [in] medical diagnosis, education, and scientific research.” Preparing for the Future of Artificial Intelligence, supra note 28, at 7.
37 ARTIFICIAL INTELLIGENCE FOR EUROPE, COMM. FROM THE COMM’N TO THE EUROPEAN PARLIAMENT, THE EUROPEAN COUNCIL, THE COUNCIL, THE EUROPEAN ECONOMIC AND SOCIAL
explicitly programmed,”^44 and the technology is “[s]o pervasive today that
you probably use it dozens of times a day without knowing it.”^45
Machine learning is the technology underpinning many of the tech-
niques to achieve AI. Reinforcement learning, for example, “shifts the
focus [of machine learning] to decision-making, and is a technology that
will help AI to advance more deeply into the realm of learning about and
executing actions in the real world.”^46 Reinforcement learning is the
technology behind the AlphaGo AI, for example, which defeated the
human Go champion,^47 as well as Cue the “basketball bot” that has per-
fect accuracy shooting a basketball and recently set a Guinness World
Record for successfully making 2,020 consecutive free throws.^48
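To make the learning-from-feedback idea concrete, the following is a minimal, illustrative sketch of tabular Q-learning, one of the simplest reinforcement learning methods, applied to a hypothetical five-state corridor. The environment, parameters, and reward are invented for illustration; systems such as AlphaGo and Cue combine reinforcement learning with deep neural networks, simulation, and vastly more training experience.

```python
# A minimal, illustrative sketch of tabular Q-learning on a hypothetical
# five-state corridor. It is not the method used by AlphaGo or Cue, which
# pair reinforcement learning with deep neural networks and far more data.
import random

N_STATES = 5          # states 0..4; state 4 holds the reward
ACTIONS = [-1, +1]    # move left or move right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

# Q[state][action] estimates the long-run reward of taking that action there.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        # Explore occasionally; otherwise exploit the current best estimate.
        if random.random() < EPSILON:
            action = random.randrange(2)
        else:
            action = 0 if Q[state][0] >= Q[state][1] else 1
        next_state = min(max(state + ACTIONS[action], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Nudge the estimate toward the reward plus the discounted future value.
        Q[state][action] += ALPHA * (
            reward + GAMMA * max(Q[next_state]) - Q[state][action]
        )
        state = next_state

print(Q)  # the right-moving action should now score highest in each state
```

The same trial-and-error logic, scaled up with neural networks and simulated play, is what allows far more sophisticated systems to improve at a task without being given explicit instructions for every situation.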
Deep learning is another type of ML that “uses structures loosely
inspired by the human brain, consisting of a set of units (or ‘neurons’).”^49
Researchers have been studying deep learning since the 1960s, but ad-
vancements were largely impractical without the vast amount of data
available today.^50 Recent advancements in neural networks have helped
to create some of the most impressive achievements in AI, contributing
to self-driving cars, medical image analysis, and language translation.^51
Deep learning advancements have provided new life to research in other
areas of AI, especially computer vision and natural language
processing.^52
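The layered, “factory line” structure described above can be sketched in a few lines of code. The example below is purely illustrative: the weights are random rather than learned, and real deep learning systems tune millions of such parameters against large training datasets.

```python
# An illustrative sketch of the layered structure of a neural network.
# Weights here are random rather than learned; real deep learning systems
# train millions of such weights on large datasets.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, bias):
    # Each layer combines its inputs with weights and applies a simple
    # nonlinearity (ReLU), loosely analogous to neurons "firing."
    return np.maximum(0, inputs @ weights + bias)

x = rng.random(8)                                      # raw input features
h1 = layer(x, rng.random((8, 16)), rng.random(16))     # first hidden layer
h2 = layer(h1, rng.random((16, 16)), rng.random(16))   # second hidden layer
scores = h2 @ rng.random((16, 3))                      # output: three classes
print(scores.argmax())  # the class this (untrained) network would pick
```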
Computer vision enables machines to recognize, compare, identify
patterns amongst, and draw conclusions about images—often with
greater acuity and accuracy than human eyes.^53 This technology has ben-
44 Machine Learning, Coursera, https://www.coursera.org/learn/machine-learning?action=enroll (last visited Sept. 27, 2019).
45 Id.
46 STANFORD UNIV., supra note 27, at 15.
47 Id.
48 Luke Dormehl, Swish! Toyota’s Basketball Bot earns a Guinness Record with 2,020 Perfect Throws, DIGITAL TRENDS (June 25, 2019), https://www.digitaltrends.com/cool-tech/guinness-record-basketball-robot/. The number of perfect free throws – 2,020 – was in honor of the 2020 Olympics, which will be held in Japan – the same country where Toyota built and trained Cue. Id.
49 Preparing for the Future of Artificial Intelligence, supra note 28, at 9. Artificial neural networks are a brain-inspired concept that attempts to replicate the way humans learn. Neural networks rely on large amounts of input data to find patterns within data and create useful output layers of data. “For a basic idea of how a deep learning neural network learns, imagine a factory line. After the raw materials (the data set) are input, they are then passed down the conveyer belt, with each subsequent stop or layer extracting a different set of high-level features.” Luke Dormehl, What Is an Artificial Neural Network? Here’s Everything You Need to Know, DIGITAL TRENDS (Jan. 6, 2019), https://www.digitaltrends.com/cool-tech/what-is-an-artificial-neural-network/.
50 Keith D. Foote, A Brief History of Deep Learning, DATAVERSITY (Feb. 7, 2017), https://www.dataversity.net/a-brief-history-of-machine-learning/.
51 STANFORD UNIV., supra note 27, at 8–9, 20, 27.
52 Id. at 14–15.
53 Id. at 4.
efited the healthcare industry in a variety of ways, including, for exam-
ple, assessing chest x-rays to diagnose pneumonia and mapping the
motor cortex to aid in identification of neurological diseases.^54 Computer
vision has also found a place in entertainment and sports, as it can be
useful to improve player safety, augment human officiating, assist in
training, and overall improve player experiences.^55 It is the technology
behind facial recognition software, applications to help visually-impaired
individuals better engage with their surroundings,^56 and other technolo-
gies that rely on image recognition and analysis.
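As a hedged illustration of the underlying technique, the sketch below classifies a photograph with an off-the-shelf pretrained model from the open-source torchvision library. It assumes a recent release of that library and a locally saved image file (“example.jpg” is a hypothetical file name); deployed medical-imaging and facial-recognition systems are, of course, built on specialized models, curated training data, and extensive validation.

```python
# A hedged sketch of image classification with a pretrained model from the
# open-source torchvision library (assumes a recent torchvision release;
# "example.jpg" is a hypothetical local photo).
import torch
from PIL import Image
from torchvision import models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

image = Image.open("example.jpg").convert("RGB")
batch = preprocess(image).unsqueeze(0)   # add a batch dimension
with torch.no_grad():
    scores = model(batch)
print(int(scores.argmax()))              # index of the predicted ImageNet class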
Natural language processing (NLP), as the name suggests, involves a ma-
chine processing and responding to human language, including real-time dialogue. NLP enables digital assistants
such as Amazon’s Alexa or Apple’s Siri, as well as real-time translation
between languages, also referred to as natural language translation
(NLT).^57 Google Translate helps 500 million people understand more
than 100 languages on a daily basis.^58 This language translation has re-
cently taken strides in accuracy as a result of more training data.^59 NLP
technologies are often the first line of customer service interactions, and
the use of chatbots is expected to expand, with estimates of up to 85
54 Jennifer Bresnick, Top 5 Use Cases for Artificial Intelligence in Medical Imaging, HEALTH IT ANALYTICS (Oct. 30, 2018), https://healthitanalytics.com/news/top-5-use-cases-for-artificial-intelligence-in-medical-imaging.
55 Charlotte Edmond, This AI Just Invented a New Sport, WORLD ECON. F. (Apr. 30, 2019), https://www.weforum.org/agenda/2019/04/artificial-intelligence-invented-sport-speedball/. Not only is AI transforming the sports we watch today (monitoring NASCAR races for faults affecting safety by using deep learning and helping coaches and trainers monitor player performance), but AI is also capable of creating rules for new sports – such as Speedgate, a sport created by an AI using combinations of rules from soccer, croquet, and rugby. Id.
56 Kyle Wiggers, Here Are the Ways AI Is Helping to Improve Accessibility, VENTUREBEAT (May 17, 2018, 12:53 PM), https://venturebeat.com/2018/05/17/here-are-the-ways-ai-is-helping-to-improve-accessibility/ (noting the benefits to the blind and visually-impaired provided by screen-reading programs, photograph classification, smart glasses, and more).
57 See, e.g., Ronak Vijay, A Gentle Introduction to Natural Language Processing, TOWARDS DATA SCI. (June 29, 2019), https://towardsdatascience.com/a-gentle-introduction-to-natural-language-processing-e716ed3c0863?gi=768262956aa1.
58 Bernard Marr, 5 Amazing Examples of Natural Language Processing (NLP) In Practice, FORBES (June 3, 2019, 12:23 AM), https://www.forbes.com/sites/bernardmarr/2019/06/03/5-amazing-examples-of-natural-language-processing-nlp-in-practice/#23e7b1981b30.
59 While large training data sets have helped improve translations dramatically, some NLP processes are being deployed that rely on less data – or at least less intensive data. For example, typical AI translators are trained on two identical texts in two different languages, requiring accurate human translators for the training data. The training data sets for languages used frequently in news, social media, and entertainment are more readily available than low-resource languages. Researchers at Facebook set out to change this trend, hoping to provide translation to their customers in less common languages. Their research findings suggest a way to translate between English and other languages (such as Urdu) by having access to different texts in both languages, without accurate translators. Sam Shead, Facebook Develops New AI Technique for Language Translation, FORBES (Aug. 31, 2018, 11:00 AM), https://www.forbes.com/sites/samshead/2018/08/31/facebook-develops-new-ai-technique-for-language-translation/#639307f62f71.
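Returning to the natural language processing discussion above, the sketch below performs English-to-French machine translation with an open-source pretrained model through the Hugging Face transformers library. The choice of that library is an assumption made here for illustration; this is not the system behind Google Translate or commercial digital assistants, and the call downloads a default model on first use.

```python
# A hedged sketch of machine translation with an open-source pretrained
# model via the Hugging Face "transformers" library (downloads a default
# model on first use). Illustrative of the general NLP technique only.
from transformers import pipeline

translator = pipeline("translation_en_to_fr")
result = translator("Artificial intelligence is transforming governance.")
print(result[0]["translation_text"])
```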
dividual and societal well-being and the common good, as well as bring-
ing progress and innovation.”^67 Some have even argued that, like
cybersecurity,^68 AI will become an increasingly important component of
sustainable development.^69 Similarly, the Norwegian Data Protection
Authority highlighted AI’s potential in its report on Artificial Intelli-
gence and Privacy: “The development of AI has made some major ad-
vances in recent years and its potential appears to be promising: a better
and more efficient public sector, new methods of climate and environ-
mental protection, a safer society, and perhaps even a cure for cancer.”^70
This section will provide an overview of some key advantages enabled
by AI, as well as a brief but nonexclusive catalogue of the sectors bene-
fiting from AI’s deployment. We then balance this discussion with an
examination of the downsides and challenges posed by various AI appli-
cations, which in turn drive the regulatory and governance discussions of
Parts III–V.
Often, for users of AI, the most tangible benefits derived from AI
adoption are as simple as speed, convenience, or accuracy. For individu-
als, AI recommendations for songs, movies, products, or traffic routes
provide efficiency and convenience to improve daily routines.^71 For or-
ganizations, utilizing AI technologies can provide efficiencies and speed
in resource and personnel management, employee performance monitor-
ing, and customer service management.^72 These narrow AI applications
providing daily conveniences have allowed AI technologies to become
an integral part of everyday life.
67 Ethics Guidelines for Trustworthy AI, HIGH-LEVEL EXPERT GROUP ON ARTIFICIAL INTELLIGENCE (Apr. 8, 2019), at 4 [hereinafter HLEG AI Ethics Guidelines].
68 See Scott J. Shackelford, Timothy L. Fort, & Danuvasin Charoen, Sustainable Cybersecurity: Applying Lessons from the Green Movement to Managing Cyber Attacks, 2016 U. ILL. L. REV. 1995, 1995–96 (2016).
69 See Michael Chui, Rita Chung, & Ashley van Heteren, Using AI to help achieve Sustainable Development Goals, U.N. DEV. PROGRAMME (Jan. 21, 2019), https://www.undp.org/content/undp/en/home/blog/2019/Using_AI_to_help_achieve_Sustainable_Development_Goals.html; Andrew Ware, Does AI Present the Potential to Mitigate Resource Scarcity?, CRASSH (Dec. 7, 2017), http://www.crassh.cam.ac.uk/blog/post/does-ai-present-the-potential-to-mitigate-resource-scarcity (making the case that AI can address resource scarcity by increasing trustworthiness, including “reliability, competency, and honesty.”).
70 Artificial Intelligence and Privacy, NORWEGIAN DATA PROTECTION AUTHORITY (Jan. 2018), at 5 [hereinafter NDPA AI and Privacy].
71 Rhonda Bradley, 16 Examples of Artificial Intelligence (AI) in Your Everyday Life, THE MANIFEST (Sept. 26, 2018), https://themanifest.com/development/16-examples-artificial-intelligence-ai-your-everyday-life.
72 See, e.g., The Workplace of the Future, ECONOMIST (Mar. 28, 2018), https://www.economist.com/leaders/2018/03/28/the-workplace-of-the-future (“In 2017 companies spent around $22bn on AI-related mergers and acquisitions, about 26 times more than in 2015.”).
Beyond the basic benefits provided by improving everyday conve-
nience, AI provides a multitude of additional benefits to developers, gov-
ernments, and users. The McKinsey Global Institute, for example, has
estimated that the economic value of “applying AI to marketing, sales
and supply chains” could add up to some $2.7 trillion by the 2030s.^73
To harness this potential, more than thirty nations have developed
or are developing national AI strategies—many of them since 2017.^74
Governments perceive AI as a key component of global influence, and
for good reason. According to a 2017 study by PricewaterhouseCoopers
(PwC), AI is expected to contribute $15.7 trillion to the global economy
by 2030, constituting a 14 percent increase in global GDP and more than
the 2017 output of China and India combined.^75 The “value of AI en-
hancing and augmenting what enterprises can do” is enormous, and may
even “be larger than automation.”^76 The estimated growth spans across
73 Id.
74 The countries with a national AI strategy are geographically, politically, and economically diverse. Some of these plans focus only on research and development to realize the potential of AI, while others are more comprehensive governance plans. For a few examples of the variety of national AI strategies, see New Generation of Artificial Intelligence Development Plan (promulgated by the St. Council, effective July 8, 2017), https://flia.org/wp-content/uploads/2017/07/A-New-Generation-of-Artificial-Intelligence-Development-Plan-1.pdf (China’s AI Strategy); EMMA MARTINHO-TRUSWELL, HANNAH MILLER, ISAK NTI ASARE, ANDRÉ PETHERAM, RICHARD STIRLING, ET AL., TOWARDS AN AI STRATEGY IN MEXICO: HARNESSING THE AI REVOLUTION (June 2018), http://go.wizeline.com/rs/571-SRN-279/images/Towards-an-AI-strategy-in-Mexico.pdf (Mexico’s AI Strategy); NITI AAYOG, NATIONAL STRATEGY FOR ARTIFICIAL INTELLIGENCE (June 2018), https://niti.gov.in/writereaddata/files/document_publication/NationalStrategy-for-AI-Discussion-Paper.pdf (India’s AI strategy); NAT’L SCI. & TECH. COUNCIL COMMITTEE ON ARTIFICIAL INTELLIGENCE, THE NATIONAL ARTIFICIAL INTELLIGENCE RESEARCH AND DEVELOPMENT STRATEGIC PLAN: 2019 UPDATE (June 2019), https://www.whitehouse.gov/wp-content/uploads/2019/06/National-AI-Research-and-Development-Strategic-Plan-2019-Update-June-2019.pdf (United States’ AI Strategy); QATAR CENTER FOR ARTIFICIAL INTELLIGENCE, NATIONAL ARTIFICIAL INTELLIGENCE STRATEGY FOR QATAR (Jan. 30, 2019), https://qcai.qcri.org/wp-content/uploads/2019/02/National_AI_Strategy_for_Qatar-Blueprint_30Jan2019.pdf (Qatar’s AI Strategy). A common trend is for countries to develop an AI Task Force or related entity prior to the development and publication of the official strategy. Brazil, Estonia, Finland, Kenya, Sri Lanka, Tunisia, and several other nations currently have such task forces working on national AI strategies. See also THOMAS A. CAMPBELL, ARTIFICIAL INTELLIGENCE: AN OVERVIEW OF STATE INITIATIVES (Aug. 18, 2019), https://www.futuregrasp.com/artificial-intelligence-an-overview-of-state-initiatives (describing the AI initiatives—both formal and informal—of 41 nations).
75 AI to Drive GDP Gains of $15.7 Trillion with Productivity, Personalisation Improvements, PRICEWATERHOUSECOOPERS (June 27, 2017), https://www.pwc.com/gx/en/news-room/press-releases/2017/ai-to-drive-gdp-gains-of-15_7-trillion-with-productivity-personalisation-improvements.html.
76 Id. (quoting Anand Rao, Global Leader of AI at PwC).
augmenting decision-making by doctors; (5) helping clinicians provide
more comprehensive treatment; (6) improving end of life care; (7) facili-
tating research; and (8) aiding healthcare training.^81 Researchers also es-
timate that AI will dramatically reduce costs in the healthcare industry,
with some estimating AI will save up to $269.4 billion annually.^82 From
detecting skin cancer with a smartphone,^83 to apps that can already an-
swer an array of medical questions as well as physicians do eighty percent
of the time,^84 AI continues to make important strides in the healthcare
context. The improvements in health and medicine will be significant,
and, though likely adopted incrementally and haphazardly at least ini-
tially, there seems little doubt that AI will dramatically change the
healthcare landscape.
2. Transportation
Both public and private transportation stand to be revolutionized by
AI technology. Modern vehicles already commonly contain AI-assistive
features, including brake assist, park assist, and lane-change assist.^85 In
fact, the spectrum of autonomous vehicles extends from no automation,
through partial and conditional, to full automation.^86 Cameras and sen-
sors on vehicles are also often used to augment human drivers, though
many manufacturers aim to develop fully autonomous vehicles. Compa-
nies such as Ford, General Motors, Tesla, Uber, and Waymo are invest-
ing in AI to develop driverless vehicles, with Waymo announcing that its
cars have driven over ten million autonomous miles on public roads.^87
Autonomous vehicles’ promises are numerous: improving traffic in cit-
ies, shortening commutes, improving the efficiency of public transporta-
tion systems, and perhaps most importantly, improving driver safety by
reducing the number of accidents.^88 Yet regulating such vehicles and
their manufacturers remains a thorny challenge, as we discuss below.
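The spectrum the preceding paragraph describes is commonly expressed as the six automation levels of SAE International's J3016 standard, a framing added here for illustration. The short sketch below encodes those levels; the example features in the comments and the simple supervision rule are illustrative simplifications, not a regulatory classification.

```python
# A sketch of the automation spectrum using the six SAE J3016 levels.
# Feature examples and the supervision rule are illustrative simplifications.
from enum import IntEnum

class AutomationLevel(IntEnum):
    NO_AUTOMATION = 0       # human performs the entire driving task
    DRIVER_ASSISTANCE = 1   # e.g., adaptive cruise control OR lane keeping
    PARTIAL = 2             # combined steering and speed support; driver monitors
    CONDITIONAL = 3         # system drives in limited conditions; driver takes over on request
    HIGH = 4                # no driver needed within a defined operating domain
    FULL = 5                # no driver needed anywhere

def driver_must_supervise(level: AutomationLevel) -> bool:
    # Below Level 3 the human driver remains responsible for monitoring the
    # road at all times, a distinction central to liability debates.
    return level < AutomationLevel.CONDITIONAL

print(driver_must_supervise(AutomationLevel.PARTIAL))  # True
```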
81 What Doctor? Why AI and Robotics Will Define New Health, PWC (June 2017), https://www.pwc.com/gx/en/industries/healthcare/publications/ai-robotics-new-health/ai-robotics-new-health.pdf.
82 AI and Healthcare: A Giant Opportunity, FORBES INSIGHTS (Feb. 11, 2019), https://www.forbes.com/sites/insights-intelai/2019/02/11/ai-and-healthcare-a-giant-opportunity/.
83 See, e.g., Amanda Capritto, 4 Ways to Check for Skin Cancer with Your Smartphone, CNET (Jan. 1, 2020, 5:30 AM), https://www.cnet.com/news/how-to-use-your-smartphone-to-detect-skin-cancer/.
84 See Christopher McFadden, These 7 AI-Powered Doctor Phone Apps Could Be the Future of Healthcare, INTERESTING ENG’G (Jan. 24, 2019), https://interestingengineering.com/these-7-ai-powered-doctor-phone-apps-could-be-the-future-of-healthcare.
85 STANFORD UNIV., supra note 27, at 18–19.
86 See Davies, supra note 2.
87 Id.
88 See STANFORD UNIV., supra note 27, at 18–24.
3. Cybersecurity
As the cyber-threat landscape continues to expand for individuals,
businesses, and governments, enhancing cybersecurity is becoming in-
creasingly critical to protecting sensitive data and critical systems. AI
applications have the potential to “help cope with the sheer complexity
of cyberspace.”^89 Individuals can use AI applications like email filtering
and credit monitoring to manage their online life and protect their iden-
tity.^90 AI can also help users decipher and manage their privacy and se-
curity policies online.^91 Researchers are also developing AI applications
to help combat the increasing threat of deep fakes.^92 In addition to aug-
menting human online interactions, AI can aid in cybersecurity’s more
technical aspects, such as helping organizations detect and respond to cyber
threats more quickly and effectively. A recent survey of busi-
nesses demonstrated AI’s importance in breach detection: 60 percent of
respondents said that they would be unable to detect breaches without the
aid of AI.^93 Yet such reliance on AI also increases vulnerability to hack-
ers, who can exploit ML systems through the emerging field of “Adver-
sarial AI.”^94 Relatedly, AI is also fueling a renewed arms race among the
cyber powers that has the potential not only to buttress defense-in-depth
but also to contribute to cyber insecurity, as discussed next.^95
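As a hedged illustration of AI-assisted breach detection, the sketch below flags unusual network activity with an unsupervised model from the open-source scikit-learn library; the traffic features and values are synthetic and invented for illustration, and real enterprise tools are far more elaborate.

```python
# A hedged sketch of AI-assisted breach detection: flagging anomalous
# network activity with an unsupervised model from the open-source
# scikit-learn library. Features and values are synthetic and illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated "normal" traffic: [bytes sent per minute, connections per minute]
normal_traffic = rng.normal(loc=[500, 10], scale=[50, 2], size=(1000, 2))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

# A suspicious burst of activity (hypothetical values).
suspect = np.array([[9000.0, 120.0]])
print(detector.predict(suspect))  # -1 means the sample is flagged as anomalous
```

As the adversarial-AI point above suggests, models of this kind can themselves be probed and evaded by attackers, which is one reason heavy reliance on automated detection cuts both ways.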
D. Challenges Presented by AI
While AI offers numerous remarkable improvements to daily life,
several experts have expressed concern about the “potential threat [AI]
could pose to humankind.”^96 These challenges include economic, legal,
social, and cultural concerns, though there are also practical issues such
89 Preparing for the Future of Artificial Intelligence, supra note 28, at 36.
90 Bernard Marr, The 10 Best Examples of How AI Is Already Used in Our Everyday Life, FORBES (Dec. 16, 2019, 12:13 AM), https://www.forbes.com/sites/bernardmarr/2019/12/16/the-10-best-examples-of-how-ai-is-already-used-in-our-everyday-life/.
91 Andy Greenberg, An AI that Reads Privacy Policies So That You Don’t Have To, WIRED (Feb. 9, 2018, 7:00 AM), https://www.wired.com/story/polisis-ai-reads-privacy-policies-so-you-dont-have-to/.
92 Bernhard Warner, Fighting Deepfakes Gets Real, FORTUNE (July 24, 2019), https://fortune.com/2019/07/24/fighting-deepfakes-gets-real/ (explaining how companies are trying to develop tools to help users know when they are interacting with a modified photo or video).
93 Louis Columbus, Why AI Is the Future of Cybersecurity, FORBES (July 14, 2019), https://www.forbes.com/sites/louiscolumbus/2019/07/14/why-ai-is-the-future-of-cybersecurity/.
94 The New Cyberattack Surface: Artificial Intelligence, ACCENTURE (Apr. 1, 2019), https://www.accenture.com/us-en/insights/artificial-intelligence/adversarial-ai.
95 See, e.g., Matt Bartlett, The AI Arms Race in 2019, TOWARDS DATA SCI. (Jan. 28, 2019), https://towardsdatascience.com/the-ai-arms-race-in-2019-fdca07a086a7.
96 INST. ELEC. & ELECS. ENG’RS, ARTIFICIAL INTELLIGENCE: CALLING ON POLICY MAKERS TO TAKE A LEADING ROLE IN SETTING A LONG-TERM AI STRATEGY 3 (Oct. 15, 2017), http://globalpolicy.ieee.org/wp-content/uploads/2017/10/IEEE17021.pdf (position statement by the IEEE’s European Public Policy Initiative).
uting to longstanding concerns about a tragedy of the AI commons.^102
Such alarming statistics may be partially offset by the fact that AI will
also create new jobs, with some estimates projecting that it could contrib-
ute as many or even more jobs than it displaces (though transitions would
be far from seamless with extensive retraining required to support dis-
placed workers pursuing these new occupations).^103 Though the precise
impact of AI on labor markets is unknown, the EU Commission report
on The Future of Work? Work of the Future! emphasized one central
important point: “Automation outcomes are not pre-determined but are
shaped by the policies and choices we make.”^104 While AI will have a
profound impact on the skills needed in the workforce, the direct eco-
nomic impact of AI will depend on governance decisions made today.^105
2. Social and Cultural Challenges
Beyond these concerns over how AI will affect the workforce land-
scape, researchers are increasingly concerned with how to navigate the
social and cultural aspects of living and working alongside AI. As indi-
cated by the title of a recent Washington Post article—”As Walmart
turns to robots, it’s the human workers who feel like machines”—it is
important to remember the human aspect of AI deployment.^106 When
Walmart introduced robots to automate janitorial and shelf-stocking
tasks, it was met with concerns from human co-workers feeling devalued
at work, while customers were unsurprisingly also thrown off by the six-
foot tall robot meandering throughout the store.^107 Despite efforts to cre-
102 See Roy M. Turner, The Tragedy of the Commons and Distributed AI Systems, PROC. 12TH INT’L WORKSHOP ON DISTRIBUTED AI 370, 371 (1993).
103 John Hawksworth, AI and Robots Could Create as Many Jobs as They Displace, WORLD ECON. F. (Sept. 18, 2018), https://www.weforum.org/agenda/2018/09/ai-and-robots-could-create-as-many-jobs-as-they-displace/ (highlighting the importance of businesses and governments in fostering “increased investment in retraining workers for new careers, boosting their digital skills but also reframing the education system to focus on human skills that are less easy to automate: creativity, co-operation, personal communication, and managerial and entrepreneurial skills.”).
104 Michel Servoz, The Future of Work? Work of the Future! On How Artificial Intelligence, Robotics and Automation Are Transforming Jobs and the Economy in Europe, EU COMM’N 3 (2019), https://ec.europa.eu/digital-single-market/en/news/future-work-work-future.
105 As stated by Bank of America’s Chief Technology Officer Cathy Bessant, “The effect of AI on jobs is totally, absolutely within our control... This isn’t what we let AI do to the workforce, it’s how we control its use to the good of the workforce.” AI and the Future of Work, WIRED (Apr. 2018), https://www.wired.com/wiredinsider/2018/04/ai-future-work/.
106 Drew Harwell, As Walmart Turns to Robots, It’s the Human Workers Who Feel Like Machines, WASH. POST (June 6, 2019, 8:00 AM), https://www.washingtonpost.com/technology/2019/06/06/walmart-turns-robots-its-human-workers-who-feel-like-machines/.
107 Id. (“This awkward interplay of man vs. machine could become one of the defining tensions of the modern workplace as more stores, hotels, restaurants and other businesses roll in robots that could boost company reliability and trim labor costs.”).
ate “human-friendly” robots, there is currently no “agreed-upon etiquette
for how robots and people should communicate.”^108
The Walmart dilemma is only one example of the variety of social
and cultural challenges arising from the development of AI. “We may
thrill to the idea of AI systems helping us to filter information to suit
personalized wants and needs, but belatedly discover the same technolo-
gies can produce fake news, closed echo chambers of public opinion, and
the erosion of a shared public reality.”^109 This creates concerns for the
very foundation of democracy as deep fakes become more prevalent and
difficult to detect.^110 It also raises important questions about human auton-
omy and market manipulation, such as the extent to which we are com-
fortable with robots nudging what we buy, who we date, who we vote
for, and what we watch.^111 Some of these concerns are not specific to AI,
but each of them raises important questions as the field continues to
develop.
3. Legal & Ethical Challenges
One approach to mitigating or managing the concerns mentioned
above is through the legal system, yet the judiciary is managing its own
tensions with applications of AI.^112 AI is exacerbating many of the same
challenges that new technologies bring to existing bodies of law, includ-
ing competition law, information security, and privacy law, while also
adding new complexity to issues such as negligence and products liability.^113
Privacy law may be the most apt example of this challenge. As
countries around the world are developing new privacy laws, there are
numerous reports detailing the tensions between some of the fundamen-
tal data protection principles and the capabilities of AI.^114 Data protec-
tion regulation attempts to minimize bias, discrimination, and unfairness
regardless of specific technologies involved.^115 Regulators have height-
ened concerns with AI because of what is referred to as the “black box”
108 Id.
109 Bollier, supra note 29, at 5.
110 See Chesney & Citron, supra note 10, at 1754, 1757.
111 Id. at 1769, 1806, 1808.
112 See, e.g., Anjanette H. Raymond & Scott J. Shackelford, Jury Glasses: Wearable Technology and Its Role in Crowdsourcing Justice, 17 CARDOZO J. CONFLICT RES. 115, 116 (2015).
113 See, e.g., Michael Froomkin, Ian Kerr & Joelle Pineau, When AIs Outperform Doctors: Confronting the Challenges of a Tort-Induced Over-Reliance on Machine Learning, 61 ARIZ. L. REV. 33, 51, 58, 66, 94 (2019) (highlighting the arguments for negligence in the event that AIs offer care that is proven to be more accurate and effective than a human doctor).
114 See, e.g., Cate & Dockery, supra note 65, at 115–20 (highlighting the tensions between fair information practice principles such as data minimization, purpose and use limitations, transparency, etc. and artificial intelligence).
115 Id. at 119.