The Politics and Governance of Artificial Intelligence

By Dr. Justin Longo (PhD) and Amy Zarzeczny (LLM), Associate Professors, Johnson Shoyama Graduate School of Public Policy, University of Regina

Advances in artificial intelligence (AI) and related technologies offer important opportunities for society, from medical care to driverless cars. But these technologies also raise troubling implications, including the potential for hidden biases, unexplainable decisions, the undermining of individual rights, widespread job displacement, and environmental impacts. At many points as we move forward, society will be presented with enticing technological opportunities that mask deeper challenges. Along the way, we will be building our future through the choices we make. It is possible for Canada to harness the power of innovative uses of AI, but with a principled approach that maintains the responsible exercise of discretionary authority—one that considers the justice and fairness implications of increasingly powerful machines.

A Future AI Scenario

The year is 2031. You have a hospital appointment for a CT lung scan ordered by your GP (whom you’ve only met virtually). A facial recognition technology (FRT) scan identifies you upon arrival, and a series of digital displays welcomes you and directs you to the correct floor and department. The scanner also takes a few discreet biomedical readings, such as your temperature and blood pressure, and makes a preliminary measure of your mood; patients who appear either ill or ill-tempered are flagged by the system, ready to be intercepted by a medical staff member or security officer where appropriate.

You are greeted at the Digital Imaging Department by a very adorable little robotic assistant, who helps you get ready for your scan. Your CT scan is completed without difficulty, and your robotic assistant walks you to the door and thanks you for visiting. From there, you are (digitally) escorted out of the building by a series of personalized signposts. A wrong turn is quickly corrected: a camera identifies you and alerts a nearby robot orderly and the security room to the mistake. As you drive away, your digital wallet is charged for parking based on your licence plate, which was read going in and out.

On your way home, you get a notification from the hospital oncology department advising you that your scan has been reviewed by an AI radiologist, which has detected an abnormal nodule that requires a biopsy (the finding was quickly confirmed by a human radiologist). The system has also reviewed your schedule and identified an available time next week that looks like it should work for you. You respond to the prompt and the appointment is booked. The booking system has also used your personal health record to match you with people like you who have gone through this procedure and can provide you with tips and support—anonymously, of course, and only if you want. All of this has happened while you were driving home, with most of the information and your responses communicated by voice—though your car has been watching the road for you, alerting you whenever it needs your full attention.

As you turn onto your street, the neighbourhood watch system identifies you and your car, alerting you to the presence of an unknown vehicle on the street. While several packages were delivered via drone to some of your neighbours, no abnormal activity has been detected in the surrounding area since you’ve been gone. As you pull into your driveway, your personal digital assistant has already responded to the news you’ve received by activating your household systems—lights, heating, music, tea kettle, diffuser, security—helping to reduce your anxiety.

Deploying AI Systems in the Public Sector

Artificial intelligence (AI) appears to have reached the gates of science fiction. Driverless cars, facial recognition technology (FRT), natural language processing (NLP)1, robotics, and smart internet-of-things (IoT) devices are increasingly commonplace.

This past year saw AI-driven developments in fields as diverse as pharmaceutical discovery; the independent generation of text, audio, and imagery; and techniques for image classification, facial recognition, video analysis, and voice identification. AI applications in everyday use now include: road transportation (navigation assistance, ride-sharing system management, smart traffic lights, and driver-assisted cars); air transportation (logistics management, pricing and passenger routing, and autopilot control of aircraft); text management (predictive text when typing a message, email categorization, and spam filtering); voice-activated devices; visual recognition, including facial recognition; and recommendation systems used for things like shopping and entertainment.

From a public policy perspective, governments around the world are seeking to maintain and sharpen their country’s competitive edge by funding further research and development. The Government of Canada has allocated targeted AI funding aimed at increasing the number of AI researchers domestically2 and to “develop global thought leadership on the economic, ethical, policy and legal implications of advances in artificial intelligence.”3 To date, more than 30 other countries and regions have published AI strategy documents. Governments are working to respond to the regulatory challenges that AI raises, and are collaborating with civil society organizations, academics, and private firms to develop appropriate governance frameworks for AI both within and outside of government. 

We are at the early stages of witnessing how government agencies might deploy AI systems in public administration. Sousa et al. (2019) recently reviewed research related to AI use in the public sector, identifying applications in areas including general public services, economic affairs, and environmental protection. They also found AI tools being used to support government operations across a range of tasks, including: regulatory enforcement in areas such as market competition, workplace safety, healthcare, and environmental protection; determining eligibility for government social welfare benefits and assessing applications for conferring rights such as patent protections; monitoring and analyzing public health and safety risks; and extracting useful information from massive stores of public sector information and data. An OECD survey (Berryhill et al., 2019) noted advances in AI for administrative efficiency, public decision-making, healthcare, transportation, security, citizen and stakeholder relationships, regulations, and achieving the Sustainable Development Goals.

The “future AI scenario” sketched above imagines a common health system interaction only 10 years from now, where incremental advances across several technologies come together in an appealing futuristic state. Whether that future is 10 years away or more, one thing is certain: AI-based interpretation of the masses of data harvested from ubiquitous digital technology, and the normalization of the automation that flows from those embedded, connected, smart systems, will mean a future of increased convenience, speed, safety, personalization, and system performance.

When deciding whether and how to adopt emerging AI technologies, public sector actors will need to consider not only the cost/benefit assessment and the appeal of enhanced citizen services, but also how to balance competing interests to ensure that society benefits fully from new technologies. These interests include promoting domestic technology development; fostering productivity gains through private sector adoption of world-leading technologies; and supporting competitiveness and improvements in service quality to ensure consumer utility, while mitigating potential risks and negative effects in areas such as workers’ rights, environmental sustainability, fairness, and strategic economic development.

This policy brief describes three specific technologies being developed and adopted by public sector actors, each of which is also the subject of ongoing research at JSGS: AI in health care, FRT in policing, and algorithmic decision-making in immigration and refugee applications. In each case, we briefly describe the technology, provide a current public sector use case, and outline some emerging concerns and implications for governance strategies.

Artificial Intelligence for Better Healthcare

AI is expected to have a major impact on healthcare. Smartphones and IoT devices offer options for monitoring with real-time feedback and personalized intervention. AI and health system data can be used to inform decisions regarding policies, programs, and operations with the aim of improving the effectiveness and efficiency of health systems. AI can predict when a patient’s condition might deteriorate, and health surveillance and analysis (including NLP analysis of social media) can provide early identification of public health concerns.

Imaging diagnostics are another promising application of AI in healthcare. AI-based imaging analytics are highly accurate at characterizing lung nodules as benign or malignant, with the potential to improve patient outcomes and health system efficiencies. JSGS researchers were part of a recent study investigating efforts to improve lung cancer diagnosis using novel AI imaging analytics.4 Through a knowledge exchange workshop with local experts and health system stakeholders, where we explored ethical, legal, clinical, and organizational issues, we identified that advancing this technology will require data sourcing, financial investment, collaboration, and privacy protections (Zarzeczny et al., 2020).
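
To make the mechanics concrete, the sketch below trains a toy classifier to separate synthetic “nodules” into benign and malignant classes. It is an illustration only: the feature names, values, and labels are invented for demonstration, and the analytics described above rely on deep learning over large, curated image sets rather than anything this simple.

```python
# Illustrative sketch only: a toy classifier standing in for the deep-learning
# imaging analytics described above. The "radiomic" features, values, and labels
# are invented for demonstration purposes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=42)

# Synthetic features per nodule: [diameter_mm, spiculation_score, density_hu]
n = 500
benign = rng.normal(loc=[6.0, 0.2, -650.0], scale=[2.0, 0.1, 60.0], size=(n, 3))
malignant = rng.normal(loc=[14.0, 0.6, -400.0], scale=[4.0, 0.15, 90.0], size=(n, 3))

X = np.vstack([benign, malignant])
y = np.array([0] * n + [1] * n)  # 0 = benign, 1 = malignant

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")

# Probability that a new 12 mm, moderately spiculated nodule is malignant
prob = model.predict_proba([[12.0, 0.5, -450.0]])[0, 1]
print(f"Estimated malignancy probability: {prob:.2f}")
```

Even in this toy form, the example surfaces the governance questions raised in this brief: the model’s accuracy depends entirely on how representative its training data are, and its probability output must still be translated into a clinical decision by someone accountable for it.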

FRT for Improved Public Safety

FRT identifies individuals by comparing an image of their face against a database of known faces. FRT can be used to facilitate citizen services, including air travel security and immigration control, but its most prominent applications are in public safety. While current FRT systems are often promoted as impartial and efficient at identifying persons of interest, research is revealing weaknesses, biases, and poor performance, particularly in relation to gender, skin tone, and underrepresented populations. These issues often follow from algorithms trained on unrepresentative data sets. Importantly, FRT can also be employed without the individual’s knowledge.
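
The core matching step behind FRT can be sketched in a few lines. The example below assumes face images have already been converted to embedding vectors by a trained neural network (not shown, and the usual source of the accuracy and bias problems noted above); the gallery names, random embeddings, and match threshold are illustrative inventions, not any vendor’s actual system.

```python
# Illustrative sketch of the FRT matching step described above. In a real system,
# a deep network maps each face image to an embedding vector; here the embeddings
# are random stand-ins and the names and threshold are invented for demonstration.
import numpy as np

rng = np.random.default_rng(seed=7)
EMBEDDING_DIM = 128

def unit(v: np.ndarray) -> np.ndarray:
    """Scale a vector to unit length so dot products equal cosine similarity."""
    return v / np.linalg.norm(v)

# Gallery of "known faces": name -> unit-length embedding vector
gallery = {name: unit(rng.normal(size=EMBEDDING_DIM))
           for name in ["alice", "bob", "carol"]}

def identify(probe: np.ndarray, threshold: float = 0.6):
    """Return the best-matching identity, or None if no match clears the threshold."""
    probe = unit(probe)
    best_name, best_score = max(
        ((name, float(emb @ probe)) for name, emb in gallery.items()),
        key=lambda pair: pair[1],
    )
    return best_name if best_score >= threshold else None

# A probe image close to "bob" (a small perturbation of his embedding)
probe = gallery["bob"] + rng.normal(scale=0.1, size=EMBEDDING_DIM)
print(identify(probe))                            # likely "bob"
print(identify(rng.normal(size=EMBEDDING_DIM)))   # likely None: below threshold
```

The threshold choice trades false matches against missed matches; if the embedding network was trained on unrepresentative data, error rates at any fixed threshold can differ sharply across demographic groups, which is precisely the bias concern raised above.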

In early 2020, privacy protection authorities revealed that the Toronto Police Service and the RCMP had been using FRT to identify suspects in CCTV imagery during ongoing investigations (Office of the Privacy Commissioner of Canada, 2020). Leading suppliers of FRT have since suspended sales to Canadian law enforcement agencies pending regulatory clarity. JSGS researchers are exploring which demographic and experiential characteristics help explain whether a person finds the use of FRT appropriate for meeting public safety objectives (Sahlu, 2021).

Algorithms for Fairer Adjudications

Rules-based algorithms, or step-by-step computer-executed instructions, can evaluate applicants for a benefit, position, or status against criteria articulated a priori. By analyzing data on past applicants against measures of subsequent performance, AI can potentially identify hidden features in those past applications that predict future success and thus provide a basis for adjudicating applications. The Canadian federal department of Immigration, Refugees and Citizenship—which was already applying codified immigration rules through a programmed algorithm to pre-process and triage applications—is investigating more advanced AI approaches to improving the efficiency and effectiveness of the immigration system (Molnar & Gill, 2018).5
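
A minimal sketch of such a rules-based triage step follows. The application fields, rules, and routing categories are invented for illustration and do not reflect Immigration, Refugees and Citizenship Canada’s actual criteria; the point is that every rule is explicit and inspectable, in contrast to learned models whose criteria are encoded in statistical weights.

```python
# Illustrative sketch of a rules-based triage step, in the spirit of the
# pre-processing described above. The fields, rules, and routing categories are
# invented for demonstration and do not reflect any department's actual criteria.
from dataclasses import dataclass

@dataclass
class Application:
    complete: bool            # all required documents submitted
    fee_paid: bool
    years_experience: int
    language_test_score: int

def triage(app: Application) -> str:
    """Apply codified rules in order and route the file accordingly."""
    if not app.complete or not app.fee_paid:
        return "return-to-applicant"    # fails threshold requirements
    if app.years_experience >= 3 and app.language_test_score >= 7:
        return "fast-track-review"      # clearly meets the codified criteria
    return "full-officer-review"        # everything else goes to a human

print(triage(Application(complete=True, fee_paid=True,
                         years_experience=5, language_test_score=8)))
# -> "fast-track-review"
```

Replacing these hand-written rules with weights learned from past decisions is what the more advanced approaches contemplate, and it is where the fairness concerns discussed below enter.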

More than 20 years ago, Barth and Arnold (1999) wrote about the public administration implications of using earlier-generation AI as autonomous agents that make administrative decisions, addressing ongoing dilemmas of responsiveness, judgement, and accountability. Today’s critics argue that emerging public sector uses of algorithmic decision-making raise administrative law concerns, including the right to be heard; the right to a fair, impartial, and independent decision-maker; the right to reasons or an explanation; and the right of appeal.

In collaboration with international colleagues, JSGS researchers are exploring how enhanced fairness protections can be integrated into public sector use of algorithmic approaches to decision-making. Other ongoing JSGS research on the implications of AI for Canadian ombudsman offices—which exist, in part, to ensure that citizens’ rights to an explanation of administrative decisions are protected—seeks to identify principles and safeguards against a possible Kafkaesque future where the sole reason for the denial of an application is that ‘the computer said no’ (Longo, 2021).

Conclusion

Expanded uses of AI raise diverse considerations, including: job displacement and the new skills humans will need to work alongside AI; consumer rights protection; privacy and data security; unintended uses (e.g., “deepfake” video and audio, where images, audio, and video are manipulated to produce realistic forgeries); gaps in public digital and AI literacy; and the need for appropriate regulation and legislation to ensure transparency and accountability. At minimum, related policy development will require public consultation to garner a broad representation of perspectives in the development and governance of AI.

In an effort to take a principled, rather than episodic, approach to considering AI adoption, governments have started to develop guidelines and governance frameworks to protect citizens’ rights and ensure government accountability and transparency. In March 2019, the Government of Canada released the “Directive on Automated Decision-Making”6 to guide automated decision systems it develops or procures, including a requirement for an “Algorithmic Impact Assessment” prior to the production or deployment of any automated decision system. More ambitiously, the “Algorithm Charter for Aotearoa New Zealand”7 formalizes a commitment across the New Zealand public sector to use algorithms that are transparent and accountable. Principles to guide the adoption of innovative, trustworthy, and responsible AI have also been developed by the OECD,8 the European Commission,9 and across a range of countries (Berryhill et al., 2019). As AI-related public administration functions are being developed, important questions will emerge about the proper design of data collection systems and machine learning algorithms, the implications for fairness and the protection of civil and privacy rights within an algorithmic approach to data analysis, and the appropriate balancing of human and machine decision-making.
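
Mechanically, an impact assessment of this kind scores answers to a standardized risk questionnaire and maps the total to an impact level that dictates oversight requirements. The toy sketch below illustrates only that mechanic; the questions, weights, and cut-offs are our own inventions, not those of the Directive’s actual Algorithmic Impact Assessment questionnaire (which does, however, assign systems to impact Levels I through IV).

```python
# Illustrative sketch of the general mechanic behind an algorithmic impact
# assessment: questionnaire answers are scored and mapped to an impact level
# that dictates oversight requirements. The questions, weights, and cut-offs
# here are invented; Canada's actual AIA uses its own standardized questionnaire.

QUESTIONS = {
    "decision_is_fully_automated": 3,   # no human in the loop
    "affects_rights_or_benefits": 3,
    "uses_personal_information": 2,
    "model_is_unexplainable": 2,        # e.g., an opaque deep-learning model
}

def impact_level(answers: dict) -> str:
    """Sum the weights of 'yes' answers and map the total to an impact level."""
    score = sum(weight for q, weight in QUESTIONS.items() if answers.get(q))
    if score >= 8:
        return "Level IV: highest impact; strongest oversight required"
    if score >= 5:
        return "Level III: qualified oversight and public notice required"
    if score >= 2:
        return "Level II: basic transparency measures required"
    return "Level I: little to no impact"

print(impact_level({
    "decision_is_fully_automated": True,
    "affects_rights_or_benefits": True,
    "uses_personal_information": True,
    "model_is_unexplainable": False,
}))  # -> Level IV (score 8)
```

Under the federal Directive, higher impact levels trigger correspondingly stronger requirements, such as peer review, public notice, and human intervention in decisions.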

Artificial intelligence (AI) has made amazing gains in recent years. It can be tempting to view advances in isolation and judge them by their near-term benefits rather than by their longer-term implications. However, we suggest it is important to develop a strategic framework that brings longer-term risks into closer view, to be evaluated alongside the immediate benefits, so that we create our future governance of technology intentionally rather than episodically.

Democratic governments can harness the power of AI with a principled approach that values our shared humanity, maintains the responsible exercise of discretionary authority, and considers the justice and fairness implications of increasingly powerful machines, while also allowing room for policy exceptionalism. Policy exceptionalism allows for the adoption and implementation of policy instruments and approaches that acknowledge the unique circumstances of a sector, technology, or moment in time. For example, while intrusions on privacy may be justified on broad public interest grounds, such as combatting a pandemic, these intrusions should be treated as exceptions rather than new norms. Such exceptions would still be subject to the overarching principles in frameworks such as the proposed EC “Artificial Intelligence Act”, and their use evaluated for broader impacts following their exceptional application.

We can also create room for principles-based experimentation in developing AI capabilities. With the freedom to try new approaches, potential advances in citizen service delivery, public sector operations, and public policy development might be realized. Such experimentation need not permit unethical or careless practices: testing in spaces like AI “sandboxes” can illuminate ethical, data security, and privacy challenges before widespread deployment. The speed with which AI technologies are developing, and the appeal of many of their applications, require proactive governance without delay.

ISSN 2369-0224 (Print) ISSN 2369-0232 (Online)

Footnotes

1 NLP systems can quasi-independently write text, such as news stories or short essays, with very little human input or guidance. These systems gather information from online resources, filter biased data, synthesize a large volume of text, and even mimic a style of writing. Current research at JSGS is investigating progress in NLP and the possibility that parts of the policy analyst’s skill set—specifically, briefing note writing—can be supplemented or even replaced by AI (Safaei & Longo, 2021).

2 The Canadian federal government published the world’s first national AI strategy in 2017 (Canada, 2017). In the 2021 federal budget (Canada, 2021), $444 million was earmarked over ten years in support of the Pan-Canadian Artificial Intelligence Strategy aimed at research, training, commercialization, and “the development and adoption of standards.”

3 Canadian Institute for Advanced Research (CIFAR), CIFAR Pan-Canadian Artificial Intelligence Strategy <https://www.cifar.ca/ai/pan-canadian-artificial-intelligence-strategy>. The Pan-Canadian AI Strategy was reauthorized in the 2021 federal budget (Canada, 2021).

4 The Principal Investigator was Dr. Paul Babyn (Department of Medical Imaging, Saskatoon City Hospital), and the research was funded by the Saskatchewan Health Research Foundation and Saskatchewan Centre for Patient-Oriented Research. See Adams et al. (2021).

5 The 2021 federal budget included $429 million to modernize Canada’s digital immigration platform and to enhance client support services (Canada, 2021), which might support the development of predictive analytics.

6 <https://www.tbs-sct.gc.ca/pol/doc-eng.aspx?id=32592>

7 <https://www.data.govt.nz/use-data/data-ethics/government-algorithm-transparency-and-accountability/algorithm-charter>

8 <https://oecd.ai/ai-principles>

9 In April 2021, the European Commission published a proposed legal framework on AI referred to as “The Artificial Intelligence Act” <https://eur-lex.europa.eu/legal-content/EN/TXT/HTML/?uri=CELEX:52021PC0206&from=EN>.

References

Adams, S. J., Mondal, P., Penz, E., Tyan, C.-C., Lim, H., & Babyn, P. (2021). Development and Cost Analysis of a Lung Nodule Management Strategy Combining Artificial Intelligence and Lung-RADS for Baseline Lung Cancer Screening. Journal of the American College of Radiology. https://doi.org/10.1016/j.jacr.2020.11.014

Barth, T. J., & Arnold, E. (1999). Artificial Intelligence and Administrative Discretion: Implications for Public Administration. American Review of Public Administration, 29(4), 332–351.

Berryhill, J., Heang, K. K., Clogher, R., & McBride, K. (2019). “Hello, World: Artificial intelligence and its use in the public sector.” OECD Working Papers on Public Governance 36. OECD. https://doi.org/10.1787/726fd39d-en

Canada, Department of Finance. (2017). “Growing Canada’s Advantage in Artificial Intelligence,” in Building a Strong Middle Class, Budget 2017, 22 March 2017 https://www.budget.gc.ca/2017/docs/plan/budget-2017-en.pdf.

Canada, Department of Finance. (2021). A Recovery Plan for Jobs, Growth, and Resilience, Budget 2021, 19 April 2021 https://www.budget.gc.ca/2021/home-accueil-en.html.

Longo, J. (2021). Ombudship in the Digital Era: Protecting the Right to an Explanation Under Decision Making by Artificial Intelligence. https://docs.google.com/document/d/e/2PACX-1vQkiyd_UHrZZZKLinbw1eUhKQl0ubtWl0uluUnO398Ie1AxK8p--A7VVj5_JFJO7yvsR82bajcgejIo/pub

Molnar, P., & Gill, L. (2018). Bots at the Gate: A Human Rights Analysis of Automated Decision-Making in Canada’s Immigration and Refugee System. https://tspace.library.utoronto.ca/handle/1807/94802

Office of the Privacy Commissioner of Canada. (2020, July 6). Clearview AI ceases offering its facial recognition technology in Canada [News release]. https://www.priv.gc.ca/en/opc-news/news-and-announcements/2020/nr-c_200706/

Safaei, M., & Longo, J. (2021). The End of the Policy Analyst? Testing the Capability of Artificial Intelligence to Generate Plausible, Persuasive, and Useful Policy Analysis. 9th Annual CAPPA Conference, June 3, 2021. Ottawa (virtual conference). http://bit.ly/CAPPA_Safaei

Sahlu, K. (2021). Public Acceptance of Facial Recognition Technology: Surveying Attitudes, Preferences, and Concerns to Inform Policy Development (J. Longo (supervisor)) [MPP Thesis Proposal, University of Regina]. https://docs.google.com/document/d/e/2PACX-1vSY7KvB0iZjekCk1pkg3b9FI5qfmFDHNA4D1Mi_8Rmf800sukTl1Duj7LAYtZfxlT3W9ns8_YP5naXd/pub

Sousa, W. G. de, Melo, E. R. P. de, Bermejo, P. H. D. S., Farias, R. A. S., & Gomes, A. O. (2019). How and where is artificial intelligence in the public sector going? A literature review and research agenda. Government Information Quarterly, 36(4), 101392.

Zarzeczny, A., Babyn, P., Adams, S. J., & Longo, J. (2020). Artificial intelligence-based imaging analytics and lung cancer diagnostics: Considerations for health system leaders. Healthcare Management Forum, 840470420975062.

Justin Longo

Dr. Justin Longo (PhD) is an associate professor at the Johnson Shoyama Graduate School of Public Policy, University of Regina campus. His research focuses on the socio-political implications of advanced technology, and the public sector applications of information and communications technologies. In addition to his teaching and research, Justin also oversees the Digital Governance Lab. Previously, he served as a post-doctoral fellow in Open Governance at the Centre for Policy Informatics at Arizona State University, and as a visiting research fellow in The Governance Lab at New York University.

Amy Zarzeczny

Amy Zarzeczny (LLM) is an associate professor with the Johnson Shoyama Graduate School of Public Policy, University of Regina campus. After completing law school at the University of Alberta, she clerked for Alberta’s Court of Queen’s Bench and Court of Appeal and practiced law with the firm of Reynolds, Mirth, Richards & Farmer LLP in Edmonton, Alberta. Zarzeczny subsequently obtained her Master of Laws from the London School of Economics and Political Science, following which she held an Academic Trust appointment as a Research Associate with the University of Alberta’s Health Law Institute. Zarzeczny later served as Crown Counsel with the Ministry of Justice and Attorney General of Saskatchewan in the Policy, Planning and Evaluation Branch. Her research focuses on health law and health policy issues including, in particular, legal, bioethical, and policy challenges associated with emerging biotechnology, unproven and experimental medical interventions, and medical tourism.