Transcript
Transcript: Artificial Intelligence Is Here Series: Citizen Consent and the Use of AI in Government
[The Webcast Webdiffusion logo appears.
Erica Vezeau appears via webcam. As she speaks, a chyron appears reading "Erica Vezeau. Canada School of Public Service"]
Hello, everyone. Welcome to the Canada School of Public Service.
My name is Erica Vezeau and I am the Interim Director General of the Digital Academy here at the school. I will be the moderator for today's event. I'm really pleased to be with you here for the event today and want to welcome all of you who are connected. Before proceeding further, I'd like to acknowledge that, since I'm broadcasting from Ottawa-Gatineau, I'm in the traditional unceded territory of the Anishinaabe people. While participating in this virtual event, let us recognize that we all work in different places and that therefore we each work in a different traditional Indigenous territory. I invite you please to take a moment to reflect on this and acknowledge it.
Today's event is the second instalment of our series entitled Artificial Intelligence Is Here. The school is offering this event series in partnership with the Schwartz Reisman Institute for Technology and Society, which is a research and solutions hub based at the University of Toronto and is dedicated to ensuring that technologies like artificial intelligence are safe, responsible and harnessed for good. Our first AI Is Here series event, which took place on November 5th, provided participants with an overview of the AI landscape and a discussion of the various ways that AI and machine learning can potentially transform government decision-making and the jobs of public servants. Today, we will turn our attention to the topic of citizen consent and the use of AI in government. The format of today's event will be as follows. First, we will watch a pre-recorded lecture delivered by Peter Loewen, the Director of the University of Toronto's Munk School of Global Affairs and Public Policy, as well as the Associate Director of the Schwartz Reisman Institute. Following the lecture, I will rejoin you along with two esteemed panelists, both Peter Loewen himself and Wendy H. Wong, a professor in the Department of Political Science at the University of Toronto, and Canada Research Chair in Global Governance and Civil Society. At that time, I will lead Wendy and Peter in a moderated panel discussion where we will explore a number of ideas and topics raised during the lecture as they relate to the use of algorithms in government and citizen consent. Finally, we will open up the session for some audience Q&A, which will provide you, our audience members, with an opportunity to pose some of your own questions to our panelists. We have a great event planned for you and we want you to have the best possible experience. So before we begin the lecture presentation, here are a few housekeeping items to mention. Firstly, to optimize your viewing, we recommend that you disconnect from your VPN or use a personal device to watch the session if possible. If you are experiencing technical issues, we recommend that you relaunch the webcast link that was sent to your email. You can submit your questions for Peter and Wendy throughout the event using the collaborate video interface. Please go to the top right-hand corner of your screen, click the Raise Hand button and enter your question. If your question is specific to one of our speakers, please indicate so. The inbox will be monitored throughout the event and your questions will be shared with me to pose to our invited guests. Simultaneous translation is available for our participants joining us on the webcast. You can choose the official language of your choice using the video interface. And finally, please note that today's session will be recorded by the school as it may be rebroadcast at a future date or otherwise used to support our learning offerings. Now, without further delay, let's start the video on citizen consent and the use of AI in government.
[A purple title card with a starburst reads "Artificial Intelligence Is Here Series". Words appear. "Citizen consent and the use of AI in government". Peter Loewen stands in front of a blue background. Slides appear to the left of him. The first reads "Citizen consent and the use of AI in government" over an image of 3D puzzle pieces. A chyron reads "Peter Loewen. Associate Director, Schwartz Reisman Institute. Director, Munk School of Global Affairs and Public Policy, University of Toronto"]
Hello, colleagues. My name is Peter Loewen and it's my pleasure to talk to you today about four obstacles to algorithmic government. What I want to do over the next few minutes is to share with you research that I've done with my colleagues, Benjamin Alan Stevens, Dario Sidhu, and Bart Bonikowski, work funded by SSHRC, by the Public Policy Forum, and by the Schwartz Reisman Institute. What this work is interested in doing is understanding what the barriers are to citizens accepting government using algorithms to make decisions. I want to share with you four particular barriers that we think stand in the way of widespread citizen consent for the use of algorithms in decision-making by governments.
[Words read "Central questions: Why don't we automate more decision-making in government? What are the potential obstacles to algorithmic government?"]
What really motivates this set of questions and these investigations are the following central questions: Why don't we automate more decision-making in government? Why do we leave it to humans to make the same decisions over and over and over again, when those decisions could be made with much more rapidity, perhaps with more consistency and maybe with less bias if they were made by machines? The second central question is, what are the potential obstacles to algorithmic government? What I want to show you is that there are four particular barriers to the widespread adoption of algorithmic government from the perspective of citizens. These barriers are important, and they're barriers that will have to be overcome if we want to use automated decision-making in a way that is consented to by citizens on a much wider scale within government.
[The slide is titled "Road Map". As he speaks, his main points are listed on the slide.]
The road map for this talk is the following. I'm gonna make my own case for algorithmic government. That is, I'm gonna tell you some of the reasons that I think we should perhaps use more algorithms in government or more automated decision-making in government. To be fair, I'll, uh, discuss the other side of the ledger as well and articulate to you what I think are some of the limits to the use of algorithms in government. I then want to make a particular argument about what I call the centrality of citizen consent. The basic idea here is that if government is going to change on a large scale how it does something, it ought to do so with the consent of citizens. Not in a simple way, but in a way that deeply considers several dimensions on which citizens may or may not object to government doing something. Having set that table, I'll then talk about four challenges of citizen consent as obstacles to algorithmic government. In essence, what I'm going to do is to present to you four different reasons that citizens may be reluctant to consent to government using algorithms.
[The slide reads "What is the case for algorithmic government?"]
So what is the case then for the use of algorithms in government? Well, here are just some arguments for why we might want to automate a larger number of decisions that bureaucrats make.
[The slide reads "The administrative state is becoming more complex, not less."]
The first is that the administrative state is becoming more complex, not less complex. You know every year as we pass new legislation, bring in new programs, implement new policies, another layer is put on top of the administrative state. This is not an argument about government being inefficient or government not doing its job well. It's just a statement of reality that government is becoming more complex and not less. Organizations which are more complex should at some point strive for simplicity. One form of simplicity may be in trying to automate decisions rather than leaving them to humans to make them.
[The slide reads "We rely on humans to make a massive number of decisions."]
The second thing, which is related to this, is that we regularly rely on bureaucrats, that is to say humans like you and me, to make a massive number of decisions and implement policies. The challenge here is that there really are clear and demonstrable limits to how well we can make decisions. Humans have trouble making decisions consistently. They have trouble regularly and consistently rooting their decisions in the values or the objectives of a policy that should be motivating those decisions. Humans are biased. And more than that, humans are good at lots of things. We are particularly good at judgments. So if we're able to think about the whole schedule of decisions that you might make as a bureaucrat and break apart the ones that are more routine, automatable and rote from the ones that require judgment, why wouldn't we want to dedicate more energy to the ones that require judgment, or maybe even an element of morality, than the ones that are more rote and routine?
[The slide reads "Procedurally fair processes do not necessarily lead to better decisions." The background image is a long plank balance on a ball, with three small balls on one end and one large ball on the other end. The plank is level.]
The third consideration is that more procedurally fair processes do not necessarily, or even probably, lead to better decisions. You can understand this in the following way. There's a very good example from the United States, which is that there's a large number of veterans in the US. And there's a very good example given by our colleague Dan Ho, that the Veterans Administration in the United States has decided that when veterans have some objection or some appeal to the benefits that they're receiving, they should be heard by a human judge. Now, this is motivated by the idea that we think it's just and fair that if we have a problem with government, a human ought to hear our problem, but there's a scale issue. There are so many people appealing their benefit allocations that the amount of time that a judge can give them is just five minutes. So what seems procedurally fair, putting someone in front of a judge and letting a human hear another human's case, may, in fact, lead to worse processes because there's simply not enough time to do it well. But even when there is enough time to do it well, the data is relatively clear and the research is relatively clear that humans are not consistent in making their decisions. So we often bring humans into our decision-making processes because we want them to seem procedurally fair, but, in fact, they may be less fair than if we had a machine making a decision.
[The slide reads "The state is not designed to learn."]
The fourth case for algorithmic government is that the state is not designed to learn. If you are making decisions at a very rapid clip and then moving on to new ones and not building in time for reflection, the degree to which you can learn from the data you are producing and the decisions you are making, the degree to which you or any bureaucrat can learn whether your decisions led to good outcomes and then update your mental models and update how you're making decisions, is deceptively difficult even when time is set aside for it. But in a state that is increasingly complex, that has an increasingly large number of decisions to make, it becomes near impossible to actually learn. So when you take these four considerations together, what comes out of it is the possibility that some degree of algorithmic decision-making may be able to address these shortcomings of how we're currently making decisions in the administrative state.
[The slide reads "Limits to the use of algorithms" His main points appear on the slide as he speaks.]
Having told you about the potential benefits or reasons for using algorithms, let's acknowledge that there are some limits to the use of algorithms, and these are equally important. I want to give you just five and then I really want to focus in on the fifth one. The first is supervision: it's hard, if we automate a large number of decisions, to be able to effectively audit or supervise all the decisions that are being made, more difficult than if decisions are being made more slowly by humans. The second potential limit to the use of algorithms is the classic principal-agent problem. If I'm the principal in a relationship, I have something that I want you to achieve as an agent. I empower you to do that, but if I can't supervise you in doing it (the whole notion is that I delegate to you because you have more time than I do to make the decision), then there becomes a gap between what I want and what you do. This is a classic problem in any large organization, in fact, in any organization which involves delegation, but there's a potential that it's even worse with algorithms if we allow algorithms to learn. Because if we can't see how algorithms are learning and changing the way they're making decisions, then that widens the gap, potentially, between what the principal wants and what the agent does. Third, there is the problem of explainability: what is happening under the hood with algorithmic decision-making, especially if algorithms are learning in real time and are adapting how they're making decisions, is important to be able to explain to citizens. Citizens reasonably want justifications for the decisions that are made by public officials. And if decisions are left in the black box of an algorithm and we can't explain them, then we miss a fundamental value of public decision-making. Fourth, there is the challenge simply of implementation. Large national governments, large provincial governments, large municipal governments all have the well-known problem of actually disseminating technology and technological solutions throughout their workforces. This is equally true, maybe even more true, of algorithmic decision-making. We can imagine ways in which algorithms will make decisions. Getting them implemented into people's workflows is another matter. There's a fifth obstacle, which is the one I want to focus on for the balance of this talk. That's the idea of citizen consent: the idea that what government should do should only be what citizens would consent to government doing, and maybe what they would consent to government doing on reflection.
[The slide reads "Citizen consent". His main points appear on the slide as he speaks.]
So let me talk now a little bit about the centrality of citizen consent for government decision-making. The first claim I want to make is that, in the long term, citizen consent is fundamental for the effective functioning of government. The secret of democracy, the reason why it works better than other systems, is not because citizens are perfect supervisors of government, but that they are, over time, able to rein in governments who go too far and able to push governments who aren't going far enough. At the core of that is the idea that citizens are consenting to what governments are doing. Indeed, citizen consent is tied to a whole number of assumptions and norms about how government and how the public service should work. Third, opposition to government action is powerful, even if that opposition is only held by a minority. So imagine just for a moment that most citizens couldn't care less about whether government uses algorithms or not, but there's some minority, 10, 15, 20 percent, that care deeply and are very exercised by the proper use or by limiting the use of algorithms by government. How can that 10, 15, 20 percent act as a check on the greater use of algorithmic government? Well, they have first the potential for activating greater opposition: a well-motivated small group could be able to light up more public opposition. The second is this related idea of issue publics. The idea here is that we don't have one public. We have several, and those publics are defined by the issues that they really care about. And if you want to have continued government consent, you have to build up consent with the building blocks of those different issue publics. For the 10, 15, 20 percent of people who really care about how government makes decisions, for them, the use of algorithms may be fundamental to their consent. So even though they're a small group, those are the terms on which you have to seek their consent. And finally, government's success depends on attention. You know, in your departments, you've probably had examples or experiences of this, where when public opposition and the spotlight of public opposition focuses on [giggles] one department, it can really slow down what that department does for a long time. Sometimes it can stop whole programs, good programs, right in their tracks. So government success depends on its ability to focus on things and get them done. And when it faces opposition, even opposition from a minority over some issue, it draws away from the ability of government to get done what it wants.
[The slide reads "Four challenges for citizen consent to algorithmic government". ]
What I want to highlight for you now is that our research suggests that there are four challenges for citizen consent to algorithmic government. That is to say, there are four particular things, maybe there are more, but we've identified four particular things about the use of algorithms to make decisions which may not be fully consented to by citizens. That is to say, these represent fundamental obstacles between where we are now and the further use or the greater use of algorithms and artificial intelligence by government. I'll talk about them in more detail, but here they are in sum.
[The slide fills the screen. His main points appear on the slide as he speaks.]
The first is that citizens support a lot of different reasons for the use of algorithms, but they support no single coherent set of justifications strongly. The second is that when you ask citizens to evaluate any algorithmic innovation, they'll always evaluate it negatively versus the status quo. That is to say, if you describe to them the way decisions are made now and the way they could be made with algorithms, it seems they always prefer the status quo. Third, citizens' trust in algorithms develops independently of algorithmic performance. This is a fancy way of saying that citizens are very harsh judges of algorithms and they're just not likely to extend them their trust. I'll show you some experiments we've used to measure that. Finally, opposition to algorithmic government is higher among those who fear the broader effects of automation and AI. Government is not operating isolated from the larger, big, super forces that are changing the way our society is organizing itself. And to the degree that there is a broader opposition to automation and AI, that broader opposition underwrites people's opposition to government using automation and AI.
[The slide is titled "Four Challenges for citizen consent". The points below read: "Citizens support various justifications for the use of algorithms, but no set of justifications strongly." "Citizens evaluate (any) algorithmic innovation negatively versus the status quo." "Citizens' trust in algorithms develop independently of algorithmic performance." "Opposition to algorithmic government is higher among those who fear broader effects of automation and AI."]
So if we want to know what citizens think about algorithmic government, our view is that the best way to learn that is to ask them.
So what we want to do is present you with results from studies that we've conducted in several countries.
[The slide reads "Data from online surveys conducted in the 18 countries between 2018-20, using broadly-representative samples."]
Now, these studies rely on data collected through online surveys conducted on broadly representative population samples. And the results we're going to show you don't really depend on weighting or any kind of strong statistical corrections. It's a long way of saying I want you to trust me that these results are representative of the populations where we've taken them. The data we want to present to you come from a number of countries: Canada, the United States, Australia, Austria, Belgium (Flanders and Wallonia), Denmark, Finland, France, Germany, Greece, Italy, Ireland, the Netherlands, Norway, Portugal, Spain, Sweden, and the United Kingdom. The studies in Canada, the US, and Australia were conducted in 2018; our European data were collected in the spring of 2020. And what we're going to do is present you with results where we've conducted experiments and surveys with citizens to measure their views on algorithmic government.
[A title slide briefly fills the screen that reads "Obstacle 1: Weak Support"]
The first obstacle I want to talk about that we motivate by these data is the following, that there's weak support for algorithmic government. And the support is weak in the following way.
[His main points appear on the slide as he speaks.]
That citizens support various justifications for the use of algorithms, but they don't support any particular set of justifications strongly. Now, the reason why this is important is because, by my lights as a scholar of politics, politics is often about reason giving and reason accepting. Politics is not just about support for a particular policy or course of action, but support for the reasons that the government is doing something. And if people have reasons to oppose things in particular, these carry much more weight than positive reasons. So the more reasons that one can give to support a policy, the more reasons that are accepted, the more supported a policy will be. So what we did in this study, which we administered in all of these countries,
[A slide fills the screen titled "Reasons for supporting automation". The reasons listed are "To reduce the time required to make decisions. To make decisions which will make better use of government money. To make decisions which are not influenced by factors like a program recipient's gender, ethnicity, or wealth. To make sure decisions are not influenced by officials' biases. To reduce fraud against the government. To make decisions which are more consistent, less "random". To reduce the number of bureaucrats/government officials. To reduce the costs of government.]
is we presented citizens with reasons for why governments may use algorithms to make decisions. Having described algorithmic decision-making to them, we presented them with these eight reasons. And you can see there's a variety of reasons here. We might use algorithms to reduce the time required to make decisions, to reduce fraud against the government, to reduce the number of bureaucrats or government officials, or to make decisions which are more consistent and less random.
[The title changes to "Efficiency / Fairness dimensions". "Efficiency" is in pink, "Fairness" is in yellow. Some of the points are in pink, some are in yellow. The points in yellow are "To make decisions which are not influenced by factors like a program recipient's gender, ethnicity, or wealth. To make sure decisions are not influenced by officials' biases. To make decisions which are more consistent, less "random.""]
If you look across those eight reasons, you'll see that they break down on a couple of dimensions, or they organize on a couple of dimensions. There's an efficiency dimension, where using algorithms is about making decisions better and faster. And then there's a fairness dimension, which is about things like making decisions which are not influenced by officials' biases, making decisions which are not influenced by factors like a program recipient's gender, ethnicity, or wealth, or making decisions that are more consistent and less random.
[A slide fills the screen showing a bar graph. The title is "How acceptable are the following reasons for governments to use algorithms and AI to make decisions" The subtitle is "Percent selected "Acceptable reason to use an algorithm" The x-axis is all the points from the list. The y-axis is percentages.]
And what we find is the following (this is Canadian data now): when we present people with these various reasons for supporting algorithms, we find that all reasons are supported by a majority of citizens, but most citizens don't support all reasons.
[Another bar graph appears titled "Acceptance of algorithmic governance. Number of reasons found acceptable."]
In fact, when we break it down, we find something like a quarter of citizens support all reasons, but 10 percent support no reasons. And in total, something on the order of a third of citizens support less than half of the reasons. Why is it that citizens would support some of those reasons and not others? And the answer, as with many of these things, is politics.
[A two-by-two chart. The rows are labelled "Did not accept majority" and "Accepted majority". The columns are labelled "Did not accept majority" and "Accepted majority".]
Here's what the breakdown looks like. If you look at that efficiency dimension and that fairness dimension, remember, the efficiency one is about making decisions better and faster, and the fairness one is about making decisions that are less biased and more fair. We find that 60 percent of people accept the majority of the efficiency reasons and the majority of the fairness reasons, but only something like 10 percent support only the efficiency reasons or only the fairness reasons. And then there is a strong 20 percent which don't accept the majority of either set of reasons.
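As a rough illustration of the kind of tabulation behind this two-by-two breakdown, here is a minimal sketch in Python using pandas. The data and the column names (efficiency_majority, fairness_majority) are hypothetical placeholders, not the study's actual dataset or coding.

```python
# Minimal sketch (not the authors' code): cross-tabulating whether respondents accepted
# a majority of the efficiency reasons and a majority of the fairness reasons.
import pandas as pd

# Toy data: 1 = accepted a majority of that set of reasons, 0 = did not.
df = pd.DataFrame({
    "efficiency_majority": [1, 1, 0, 1, 0, 1, 1, 0],
    "fairness_majority":   [1, 0, 0, 1, 1, 1, 1, 0],
})

# Two-by-two table of percentages, mirroring the chart described above.
table = pd.crosstab(
    df["efficiency_majority"],
    df["fairness_majority"],
    normalize="all",
) * 100
print(table.round(1))
```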
[Another bar graph titled "How acceptable are the following reasons for governments to use algorithms and AI to make decisions?"]
When we look at the data in Europe, we see similar patterns across the questions, though there's actually less support there for any of the items than there is in Canada.
[Another bar graph titled "Acceptance of algorithmic governance. Number of reasons found acceptable."]
But again, we get this similar breakdown where some share of people support all the reasons, a sizable share support zero of the eight reasons, and then there's a breakdown in the middle where people are accepting some of the reasons but not others.
[A chart appears beside him listing various demographics. It's titled "Table 1: OLS regressions of support for algorithmic governance, combined EPIS and Canadian data".]
If citizens do vary in the sets of reasons that they support for using algorithmic government, why is it? Does it have to do with who they are, their demographic background? Or does it have something to do with their views of the world? And the answer is that it's actually a little bit of both. So if we look at the Canadian data, this table shows us the following.
[The slide fills the screen. As he speaks, rows on the chart are highlighted.]
Older people are more likely to support the use of algorithms. That might be counter-intuitive to you, but they might be less aware of some of the negative sides of algorithms. Women are less supportive of the use of algorithms for either fairness or efficiency reasons. Those who have lower income are less supportive, and I want you to think about how often people who have low income are interacting with government decisions on a regular basis. Education is positively related to support for the use, um, of algorithms on either dimension. So those are the demographic factors. But what about the political factors? And what I want to point out here is that the really important one for me is the following: the degree to which people hold populist views, that is to say views about whether government can or can't make good decisions and whether we should have effectively more authoritarian and less democratic ways of making decisions, is related to support for algorithmic government. Those who have higher underlying levels of populism are more likely to support efficiency reasons for government using algorithms, but no more likely to support fairness reasons. Your view of the world and your ideology, in fact, matter as well. Those who are more on the right side of the ideological dimension are more positively inclined to support efficiency reasons than those on the left, and they're more negatively inclined to support fairness reasons than those on the left. It's a complex welter of reasons that people bring to their support or their opposition to government using algorithms, but it's not simply a matter of, um, people knowing more and then being more supportive.
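To make the structure of Table 1 concrete, here is a minimal sketch of how an OLS regression of this kind is typically specified in Python with statsmodels. The variable names and the simulated data are assumptions for illustration only; they are not the study's actual variables or results.

```python
# Sketch of an OLS regression like the one summarized in Table 1 (illustrative only;
# the variable names, coding, and simulated data are assumed, not taken from the study).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "support_efficiency": rng.normal(size=n),  # support for efficiency-based reasons
    "age": rng.integers(18, 80, size=n),
    "female": rng.integers(0, 2, size=n),
    "income": rng.normal(size=n),
    "education": rng.normal(size=n),
    "populism": rng.normal(size=n),             # index of populist attitudes
    "ideology": rng.normal(size=n),             # left-right self-placement
})

model = smf.ols(
    "support_efficiency ~ age + female + income + education + populism + ideology",
    data=df,
).fit()
print(model.summary())
```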
[A title slide briefly fills the screen that reads "Obstacle 2: A status quo bias". As he speaks, his points are added to the slide.]
Our second obstacle to the use of algorithms in government is one that's deeply rooted in a particular feature of human psychology, which is that humans have a strong status quo bias. If you present citizens, or humans, with one of two policies, Policy A and Policy B, and you describe Policy A as the status quo, they'll be more likely to support it independent of its features or its merits. If you, instead, describe Policy B as the status quo, they'll be more likely to support that. It turns out that it seems to be the same with the use of algorithms. One thing we investigated in our Canadian data is that we presented respondents with a number of potential algorithmic innovations. We did this across three different domains: immigration decisions, small business loans, and tax filing, basically whether a filed tax return would be audited or not. We described to them the way these decisions are made now, that at the end of the process there's a human making a decision. And then we presented them with a number of potential algorithmic innovations. Importantly, none of these innovations took a human completely out of the loop. They were simply meant to enhance the way a human was making a decision. And what do we find? I'll give you just one example.
[A screenshot of an online survey briefly fills the screen.]
We find that when we describe to people the way immigration decisions are made, and then when we describe a potential innovation that could be brought in to improve the use of information in selecting immigrants, we find that citizens are less supportive of that innovation.
[A vertical change bar graph fills the screen, titled "Immigration". The y-axis is degree of support, and the x-axis reads "Algorithm Questionnaire", "Higher score", "Lottery" and "None".]
Indeed, we presented them with three different innovations. And in every case, they're less supportive of the innovation than of the status quo.
[The slide changes to a similar graph labelled "Small Business Loans" and then to one labelled "Taxation".]
It is the same if we talk about not only immigration, but small business loans, and if we talk about taxation. So the bottom line is that when we present people with potential algorithmic innovations to decision-making, enhancements of how public officials might make decisions, across three different domains, in every case we find that citizens are less supportive of those algorithmic innovations than they are of the status quo.
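As a sketch of how this status quo comparison can be summarized, the following Python snippet computes mean support for each proposed innovation relative to the status quo option. The option labels and the toy data are assumptions for illustration, not the study's instrument or results.

```python
# Sketch (assumed structure, not the study's code): mean support for each proposed
# algorithmic innovation compared with support for the status quo ("none").
import pandas as pd

# Toy long-format data: one row per respondent-option pair, support on a 0-10 scale.
df = pd.DataFrame({
    "domain":  ["immigration"] * 8,
    "option":  ["none", "algorithm", "questionnaire", "higher_score",
                "none", "algorithm", "questionnaire", "higher_score"],
    "support": [7, 5, 6, 5, 8, 6, 6, 5],
})

means = df.groupby(["domain", "option"])["support"].mean()
baseline = means.xs("none", level="option")
# Change in support relative to the status quo; negative values indicate a status quo bias.
diff = means.sub(baseline, level="domain").drop("none", level="option")
print(diff)
```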
[A title slide briefly fills the screen reading "Obstacle 3: Reputation building". As he speaks, additional text appears reading "Humans evaluate decisions based on the nature of the decision and our perceptions of the decision maker." and later "Humans are much less forgiving of algorithmic decisions."]
The third obstacle to the use of algorithms in decision-making has to do with reputation. Think about what government is doing: it's trying to maintain its reputation over time. And by that we mean that it's trying to maintain the trust that people put in it to make good decisions. Now, there's an interesting feature of human decision-making, which is that when someone makes a decision and we're evaluating it, we evaluate not only the nature of the decision made, but also our perceptions of the decision maker. Was the decision maker trying to do the right thing? Did they have the right intentions? And if we believe that a decision maker has the right intentions, we're much more likely to forgive a decision that goes in the wrong direction. After all, the people making the decisions are humans. But what we want to show you is that this is not the case with algorithms. When human beings see an algorithm that's made a decision with which they don't agree, or from which a bad outcome has resulted, they're much more punishing of the algorithm and much less likely to trust it in the future than they are with a human. Let us show you this with a vignette about healthcare decision-making that we've administered in these surveys to citizens.
[Black text on a white slide.]
We gave citizens a little vignette that read the following, and you'll see some bolded text on the screen. This is where we randomized people to be hearing about an administrator or an algorithm. Let me read you the vignette in the form of an algorithm. Imagine a hospital that has an algorithm that assigns surgeons to patients. This algorithm is a computer trained to make decisions automatically. One day a surgeon is working on a patient in critical condition when five more patients arrive at the hospital in need of care. If the algorithm decides to reassign the surgeon, the surgeon's current patient will die, but the surgeon will be able to save the five new patients. If it does not reassign the surgeon, then the current patient will live, but the five new patients will die. Having read that, respondents then read: the algorithm decides to reassign the surgeon, or the algorithm decides not to reassign the surgeon. When it reassigns the surgeon, we call that a utilitarian decision because it's interested in the outcome, saving five versus saving one. When it decides not to reassign the surgeon, we call that the deontological choice because it's about the damage that's done, effectively, by changing one's behaviour.
[The next slide is titled "Outcomes we are interested in". The numbered points appear on the slide as he talks about them.]
And there are three outcomes we're interested in. We're interested in knowing whether or not citizens agree with the decision that was made. And recall that they read about the decision either being made by the administrator or by an algorithm. We're interested in whether they would trust the decision maker in the future to make similar decisions. That's about reputation. And then we're interested, finally, in whether trust in the decision maker in the future is conditional upon the agreement they had with the decision that it made in that moment. And because we randomize the decision and the decision maker, we can separate out decision maker effects, that is, whether it was an algorithm or a human, from the decision that was made, or we can consider them jointly.
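Because both the decision maker and the decision are randomized, the two effects can be separated with a simple regression that includes their interaction. Here is a hedged sketch in Python; the variable names and simulated data are illustrative assumptions, not the study's.

```python
# Sketch of separating decision-maker effects from decision effects in the randomized
# vignette (variable names and simulated data are illustrative assumptions).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000
df = pd.DataFrame({
    "algorithm": rng.integers(0, 2, size=n),    # 1 = decision maker was an algorithm
    "utilitarian": rng.integers(0, 2, size=n),  # 1 = surgeon reassigned (save the five)
})
# Toy outcome: agreement with the decision on a 1-7 scale.
df["agreement"] = (
    4 + 0.8 * df["utilitarian"] - 0.5 * df["algorithm"] + rng.normal(scale=1.0, size=n)
)

# Because both factors are randomized, the coefficients can be read as the separate
# effects of who decided and what was decided, plus their interaction.
model = smf.ols("agreement ~ algorithm * utilitarian", data=df).fit()
print(model.params)
```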
[A dot plot graph titled "Anglo-American results: Agreement" appears.]
What do we find? We find that when the decision that gets made was the utilitarian decision, that is, to save the five versus the one, citizens are much more likely to agree with the decision that's been made. These are the results in Anglo-American countries. But we find that when the decision maker is an algorithm, independent of the decision that the algorithm made, citizens are less likely to agree with the decision that it made. Think about that for a second. Irrespective of the decision that was made, citizens are less likely to agree with a decision made by an algorithm.
[A similar graph, titled "Anglo-American results: Future trust" appears.]
When we ask whether citizens would trust the decision maker in the future, what we find is the following: the choice that was made, whether it was utilitarian or not in the moment, does not matter for whether they trust the algorithm in the future. What matters is whether the decision maker was an algorithm. And if the decision maker was an algorithm versus a human, they're much less likely to trust that decision maker in the future. Think about the implications of that for a second. When citizens look forward and think about whether they would trust a big decision made by an algorithm or a human, it doesn't matter what information they have about the decision that's been made by the algorithm or by the administrator in the moment. All they care about is who's making the decision. And if it's an algorithm, they're much less likely to say that they would trust the decision it would make in the future.
[A similar graph, titled "European results: Agreement" appears.]
And finally, these are results which come from Europe, and I'll show you that they're largely the same. Citizens, once again, in these different populations, are in more agreement with decisions when they're made in the utilitarian way, and they're less supportive of a decision when it's made by an algorithm, irrespective of the decision made.
[A similar graph, titled "European results: Future trust" appears.]
When it comes to future trust, utilitarianism matters a little bit for whether they'll trust the decision maker in the future. But again, it really comes down to whether that decision maker was an algorithm or not. And if it is an algorithm, they don't extend the trust in the future the way they do for a decision made by an administrator.
[A similar graph, titled "European results: Conditional future trust" appears.]
And finally, we asked the question of whether they're more likely to support a decision, or to forgive the decision maker, when they agreed with the decision the decider made in the first instance. But again, we find that even if you agreed with the decision made by the algorithm or the administrator in the first instance, you're less likely to trust the decision maker in the future if they're an algorithm. So even among those people who liked the decision made by the algorithm or the administrator in the moment, they punish the algorithm much more, or they extend it much less trust, in the future.
[A title slide briefly fills the screen. It reads "Obstacle 4: Broader opposition to AAI". As he speaks, his two major points appear on the slide.]
If our first three obstacles to algorithmic government are principally about psychology, the fourth obstacle is about politics. It's about populism. Now, we're going to argue that populism is this kind of flexible political ideology which searches for reasons to generate opposition to government. It's entrepreneurial. And importantly, what we find is that the correlates of populism, nativism (that is, a belief that you should prize people in your own country over others) and a fear of economic loss, are correlated with opposition to algorithmic government.
[A slide briefly fills the screen and shows a photo of Viktor Orban, a political cartoon, a line graph and cover of The Economist showing Donald Trump with the words "Playing with Fear".]
And the potential for political entrepreneurs to link up the use of algorithms with a belief that government shouldn't be trusted, and with politicians who effectively oppose modern government, has the potential, we think, to marshal a broader opposition to automation and AI or to algorithmic government.
[The slide beside him reads "A broader opposition". His words appear on the slide as he speaks.]
So I'll just say the following. The first is that there is a widespread apprehension of the effects of automation and AI on job security and prosperity. A large share of citizens worry about the effects of automation and AI on the broader economy and what it will do not only to themselves, but especially to people who they know, their family and their friends. There's a correlated belief that automation and AI will increase inequality and that it will limit social mobility.
All three of those beliefs, a fear of job loss, a belief in increased inequality, and a belief in limited social mobility,
[A grid of line graphs under a number of names of countries temporarily fills the screen. The slide is titled "Fear of job loss via AAI and support for algorithmic government".]
are correlated with greater opposition to algorithmic government. Indeed, if we look across all of our countries in our sample, what we find is that as fear of job loss due to automation and AI increases, support for algorithmic government declines. In every case where we look, we find that people who are more worried about the broader effects of automation and artificial intelligence are more opposed to the use of algorithms by government.
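A minimal sketch of that within-country relationship, assuming hypothetical variable names and simulated data rather than the study's actual survey files, might look like this in Python.

```python
# Sketch (hypothetical variable names, simulated data): within each country, correlate
# fear of job loss from automation and AI with support for algorithmic government.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
countries = ["Canada", "France", "Germany", "Sweden"]
df = pd.DataFrame({
    "country": np.repeat(countries, 200),
    "fear_job_loss": rng.normal(size=800),
})
# Toy data built so that support declines as fear rises, as the talk describes.
df["support_algo_gov"] = -0.4 * df["fear_job_loss"] + rng.normal(size=800)

for country, grp in df.groupby("country"):
    corr = grp["fear_job_loss"].corr(grp["support_algo_gov"])
    print(country, round(corr, 2))  # negative correlation expected in each country
```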
[His words appear on the slide as he speaks.]
The bottom line is the following. One, there's no single publicly acceptable justification for employing algorithms by governments. Governments have to deal with the fact that citizens have different sets of reasons for why they think it would be acceptable for government to automate decision-making. The second is that citizens have a strong status quo bias against the use of algorithms. The third is that citizens punish algorithms more than they do human decision makers. All the work we've done to build up trust that the government can basically do the right thing when the chips are down has to be built, and built in different ways, if we're going to use algorithms. And finally, opposition to algorithmic government can grow with that broader societal opposition to automation and to artificial intelligence.
There are a lot of challenges to using algorithms in government. There are challenges of supervision, principal-agent problems, implementation, explainability. But at the core of all of those, and more important, I think, than any other, is a concern about whether citizens will simply agree to government taking humans out of decision-making and putting in algorithms so that government might be more efficient, more fair, whatever justifications you wish to use. These are the barriers of citizen consent that stand between us today and much greater use of algorithms and artificial intelligence in government in the future. Thank you.
[He fades out. A purple title card with a starburst reads "Artificial Intelligence Is Here Series" Erica reappears, filling the screen. She's quickly joined by two other participants.]
Welcome back, everyone. I really hope you enjoyed that presentation. I know that I'm looking forward to unpacking some of the concepts and ideas discussed in the video with our expert panelists today. First, we have with us Peter Loewen, who you just saw in the video. Peter, thanks for being -- thanks so much for being with us again today and for that incredible presentation. I'll allow you to introduce yourself in person. Peter.
[Peter speaks, filling the frame. Shortly after, a chyron appears that reads "Peter Loewen. University of Toronto"]
Hi. I'm Peter Loewen, I'm the Director of the Munk School of Global Affairs and Public Policy. Uh, and I'm the, uh, one of the associate directors at the Schwartz Reisman Institute and just really pleased to be, uh, engaging with Canada's great public service on these issues.
Thanks, Peter. Our second panelist is another talented researcher from the University of Toronto, Wendy Wong. Wendy?
Hi, everyone. Thanks for having me here. As Erica probably introduced me earlier, I'm a Professor of Political Science and Canada Research Chair in Global Governance and Civil Society, and I'm also a research lead at the Schwartz Reisman Institute with a particular focus on international relations.
Thanks, Wendy, for that introduction and for joining us today in this conversation. I'm really looking forward to diving into our moderated chat for the next, uh, 20-30 minutes and then some questions thereafter. Before diving in, however, I just want to take the opportunity to remind our audience that we have time allotted for you later, uh, so please do continue to submit your questions, uh, throughout this discussion by clicking the "Raise Hand" button in the top corner of your screen. And if your question is for Peter or Wendy in particular, please also indicate that in the question so that we can make sure to address it appropriately. So Wendy, we've already heard from Peter over the last half-hour with that video, so I'd like to, uh, start with your overall impressions of Peter's lecture that we just observed. AI is a topic with many perspectives. Do you feel a sense of alignment with what Peter was saying, or do you have a different opinion that you'd like to add?
[Wendy speaks, filling the frame. Shortly after, a chyron appears that reads "Wendy H. Wong. University of Toronto and Canada Research Chair"]
Yeah, I thought Peter's discussion was really good at hitting a lot of the points that many of us in political science think about in terms of issues of governance. And so I think what I'd like to add is sort of from the international human rights perspective, and sort of thinking about global governance more broadly, um, which is to say that, you know, when we think about algorithmic government or AI-driven government, we tend to think that these might be solutions for human imperfections and human errors in decision-making. But I would just like to, you know, first point out that because humans are imperfect, our algorithms are also going to be imperfect. So, you know, automating government decisions is one thing, and I think it's important, but we have to also think about whether we're using AI technologies or non-AI technologies. And I think that's gonna affect the way that we assess the, uh, the importance and the relevance of these technologies going forward in government. So if it's a non-AI technology in terms of automation, I think a lot of the tools we have, such as insisting on transparency and accountability, work quite well, but if we're looking at AI or machine learning in government, I think we have to think about what kind of rules we want in a system where we have to kind of anticipate how the computer or how the algorithm is going to interpret our commands to it. So a lot of times with AI technologies, mistakes only come to light after the fact. And so with government decisions, the stakes could be quite high because small mistakes could lead to, uh, you know, implications for a lot of people. It's not the same as in the tech sector where we see, you know, mislabelled photos and that's sort of, uh, a funny error that AI has made. I think when we think about government decisions, we want to think about, well, what happens to people's lives? And I also wanna just add that transparency may not be as easy with AI given the nature of AI algorithms, um, and sometimes they aren't explainable precisely because the, uh, commands aren't as direct. Um, we want the machine to learn from the data, and so we don't always know how the machine is learning. Now, the other point I just wanna put here is, you know, we tend, I think, as humans, to have automation bias: if a machine tells us something, we tend to over-rely on or even overly defer to those types of answers. And so one thing is we have to think about, you know, whether machines are better at making decisions, um, than we are, or if we are gonna end up replicating some of the human errors at a much larger scale, um, when we think about, um, algorithmic government.
Thanks, Wendy. Um, Peter, I'm gonna turn to you, uh, and I think Wendy has, uh, segued into my next question really nicely. Uh, in the video, Peter, you distinguish at various points between the use of automation and artificial intelligence in a government setting versus non-government domains. Why is this distinction important? Like, what makes the use of AI in government so unique?
Yeah, I think it's a very important distinction, and I'll just say that, I mean, there are a lot of instances where those things are gonna overlap in ways that we just don't care about, right? So imagine that you wanted to automate the paying of invoices or something like that, right, some back-office, uh, function. And I mean, who cares if it's a government office or a private sector office, what you're looking for is efficiencies, right, or, you know, the text completion in your email client? That doesn't matter. But there are things where it does matter for government, and it matters for a couple of reasons. One is that there is, uh, I think sort of an obligation of explainability or justification at the core of government. So you see this reflected in a lot of things, but the basic idea is that intuition that citizens should know what government is doing and know what it's done and why it's done it. Um, we don't have the same expectations, actually, at least not with the same moral weight, when we're thinking about corporations. So, you know, why General Motors may recommend some car to me over another car is really their business as to why they do it. And I may not have a right to actually understand why they did that. If I go to see a government agency for assistance and the person I'm dealing with or the program I'm dealing with has some discretion in which, you know, support program they stream me into, I may feel like I should have the right to understand why I was put into that program. More broadly, you know, our accountability models are that we want to understand how decisions were deliberated over within the public service and then acted upon by public servants and/or by politicians. The more we automate decisions and, in particular, the more we automate them in some sense in a way where we don't see what's going on inside the deliberation, and that's one of the features that distinguishes AI from other systems of algorithmic decision-making, that often at its highest kind of performance, an AI will deliberate outside the rules you've set for it, right? It will learn from what it's seen before and then deliberate outside of that. Well, that may introduce a certain amount of opacity, just a certain amount of uncertainty, about what's happened in a way that kind of offends our notion that we should understand why things happened and where they happened. So that's one reason, just that general need for justification within, um, within government. And then, you know, there's the broader element of it, which is really the human element of it, and I think of it in the following way. We have certain moral intuitions, not only that we want to know why things were done, but we have a certain moral intuition about being dealt with by a human, that there's just something special about when you bring a case or bring something before a person and they consider it. We seem to think that there's more moral weight on a decision, or there's more legitimacy to a decision, when it's made by a human than [inaudible] simply made by a rule, right?
Now, that actually runs a little bit contrary to what Wendy said, which is that in some cases we might trust machines more, right, and that may well be the case, there are certain circumstances where we do. But I think particularly in those things where we expect to be heard by government, when you're appealing an EI case or you're applying for a loan from government or, you know, you're a business that's lobbied for a certain, uh, rule change or something like that, you have a reasonable expectation to be heard by a human. And AI can cut humans out of the loop. So I think that, you know, none of this is absolute, and this is what makes it difficult, there's a lot of gray areas, but there are certain expectations that humans bring to interactions with government that they may not bring to interactions with private sector organizations, for example, and figuring out where that non-overlap is is really key to figuring out kind of where and when you can use AI in government.
Absolutely. You raised some really good points there, Peter. So I have another question and I'm going to ask it to both of you; maybe Wendy, you can answer first and then Peter. Um, so given what both of you have just expressed, if we're deciding when it makes sense to replace a human decision-maker with a machine in government, who do you think should make that decision, and what do you think they should be considering, uh, when they're making those decisions? Who is responsible for setting up whether it's an automated process or an actual decision-making, uh, algorithm? What level and what conditions of responsibility do we put on that? Wendy, I'll start with you.
Okay, this is a tough one, right, because of what Peter just said, there's a lot of great answer --
A good answer.
So -- yes. Let me take a stab and then maybe Peter can, you know, figure out where I've left out important stuff. So, you know, in a democratic system, we're gonna have different expectations from a non-democratic system, so I should just point out that, you know, a lot of what I got out of Peter's talk was thinking about the convenience or expedience or efficiency of a lot of these algorithmic government possibilities, and I would point out that, you know, that's really important, but what we need to do is balance that against our concerns here in Canada, um, about fairness and equity and also our autonomy as individual citizens. And I think that also, we need to think about the accuracy of these technologies when we talk about using them for government purposes. So there's a lot of talk, for example, about facial recognition technologies being highly inaccurate, highly biased towards certain, uh, racialized persons and people who identify as sexual minorities. I would say that these are things that need to be fixed, they need to be addressed, and until we do that, we can't really for certain apply these types of technologies. But I would also say that this is not a simple set of decisions. I think it's a series of decisions, and part of what, you know, the difference between some of Peter's research, where he points out that people distrust automated decision-making, and what I pointed out earlier as automation bias, I think there's a tension there, right, and I think part of it is people just don't know what to make of this because throughout our history as a species, we've only interacted interpersonally. And so what happens when we defer important decisions about social and individual welfare to machines that we've created? I think that's an open question, and I do think it would be a series of discussions, both at the federal level but also at the provincial and local levels, to get people, uh, to get their heads around these technologies and to really understand what they can do and what they can't do.
Yeah, I think that's a really fair question. I really appreciate you asking me. I think I'd look at this at a couple of different levels. So let's just recognize, I mean, what Wendy said: essentially, when you're using AI for interaction with citizens, we really have to think about, um, all of these concerns we have about bias, and accuracy, and fairness, and justifiability. But let's take a different perspective, which is to say, let's think about using AI to enhance decision-making in government. So let's say you're making an allocative decision. You've got a limited amount of money and you're trying to allocate it across small businesses that are applying for loans. I always use this example because it's an area where we already use algorithms to some degree, right? And then you're trying to look over a bunch of cases and then make a recommendation up the chain. Well, one of the things that public services already do very, very well is they have very transparent and structured decision-making processes. When you're considering a decision, you know, there's some work that's done by an analyst. That gets sent up the chain, somebody considers it, they write a recommendation from there. Eventually it finds its way up to a deputy minister who presents, you guessed it, three options to the minister, who makes a decision over those things, right? Well, at each of those stages, you could employ some AI if it was applicable, and you can understand where it's fitting into that decision-making process, because the rest of the decision-making chain, if you will, is already so well understood or articulated. So I think that's an area where there's really an advantage here for governments in using algorithms, right? There are already all these practices of collective decision-making with accountability, with transparency, with traceability built into them in a way that may not exist in the wild west of a corporate environment. And, you know, the other thing you finally have, right, which is a really important kind of check, is that at some point in a government decision-making process, you always have a human in the loop. It might be the DM at the top, or it might be the minister above that. But you always have someone who's responsible, not only for the decision that was taken, but for the process by which the decision was arrived at. So a lot of this is a way of saying that governments have already put in place the architecture and the practices to put checks on AI in a way that corporations, which get built up and burned down from the ground up over and over again, don't necessarily, ah, have.
Peter, you're really speaking to my experience, having spent 15 years in program delivery in government. I can definitely attest to the fact that a lot of the decisions are so well documented that anybody at any level could make the same decision, and I agree with you that the point there is fairness and transparency. So I think it's a really interesting image you're conjuring up, because it makes me feel more comfortable. But there are still all sorts of elements of AI that make folks feel uncomfortable, and Wendy referred earlier, for example, to facial recognition software that introduces a fair amount of bias, significant in some cases. So Wendy, my next question, I'll turn back to you. If we are making decisions to use algorithms to help us with our decision-making, what steps should we be taking as a government to increase the transparency around those decisions and make sure that those methods, that math, are communicated to the public? And I'm going to put a little caveat on this: keep in mind that, in all likelihood, how we're making decisions right now, by humans, is probably not actually communicated to the public either.
It's funny. So, yes, of course, exactly right, and I think that's something we have to think about: in a lot of these debates we assume that with a person in the loop, things become more transparent. But in fact, we know that human beings often have very occluded decision-making processes, and they can't always tell you exactly why they're doing what they're doing. So for me, it's really important, with any policy or any decision about how governments are going to be regulating people, or using AI to regulate going forward, that we center things like the Canadian Charter of Rights, and that we think about the broader global human rights framework that Canada has committed to and largely tries, I think, to champion internationally. Any AI used in government should be evaluated not just for potential benefits, but for potential human rights harms. That's really tough for us to do if we don't center some of these concerns, and not just the ones we talk about a lot in the media, such as privacy or freedom of expression. Those are the very obvious ones. I think we need to start thinking about other rights that might also be infringed upon, or at least affected, through the use of AI, things like the right to equality before the law and the right of access to social welfare. I say that because there are already hard cases where AI was used in making decisions about access to welfare; the Netherlands had some recent cases of that, where mistakes were made at scale and thousands or tens of thousands of people were affected and penalized before the errors of the AI were corrected. Because we're talking about people's lives being impacted, we need to think about that in terms of regulation. On the flip side, one of the things I've really been emphasizing in my work is the lack of data literacy, and I say this for almost all of us: everyone who's a non-technologist, who isn't a software developer working on AI, is going to struggle with data literacy. What I mean by that is really just understanding the basics: what kind of data are being collected, how these data are being used, what implications it has for governments to have our data, or to have data about us, and what it means for us going forward. If people have the right to data literacy in the same way that we have the right to linguistic and numeric literacy, we can start having more informed debates. And I think this would give more people the opportunity to think about some of the issues we're discussing here, and exactly how to hold governments accountable, or under what conditions governments should be more transparent with their decision-making, if we are going to move forward with more automated, algorithmic decision processes.
I have a follow-up question for you, Wendy, and Peter, you can feel free to jump in afterwards.
But you've raised some really good points about the things we should be considering when implementing AI, recognizing that efforts to regulate AI are still in their infancy. Is there a conflict of interest, or a perception of conflict of interest, if governments start ramping up the use of algorithms before all of those regulation pieces are actually, you know, crystallized?
I don't know if it's a conflict of interest. I think it's that we risk creating problems that could maybe be avoided. And I know sometimes it's just hard to anticipate what might happen if we do X, right? Sometimes that happens in the process of governing; you have to make mistakes. On the other hand, I think right now we're at a point where we have a lot of technologies out there, and some of them might work, you know, complementarily with both government incentives and government needs. But we don't fully know how we want them to work. We have these technologies. So what are the places where it's appropriate for there to be a machine-learning algorithm processing the massive amounts of data we have on citizens to help governments make decisions, where is it not appropriate, and what safeguards and guardrails do we need in place? I think that's really, again, back to this: we need to center the rights we already have in deciding how to create policies around these technologies we have, are currently developing, and, you know, do not have yet.
Thanks, Wendy, Peter?
Yeah, I think, as Wendy saw right away, one of the challenges here is that citizens don't have a single view of how government should do things, nor do they have a single view of what the bargain with government is. Let me give you an example. As a person who lives in a neighbourhood with kids, I think there's way too much speeding in Canada; people drive too fast all the time, right? And the enforcement model we've come up with for speeding is, you know, rare capture, high fine. If you're the one in 5,000 cars going 140 on the 401 who gets pulled over, you might get a fine for $300, right? But everybody else gets off. Now, you could have a different approach, which is that everyone who speeds gets a small fine: high capture, low fine. And we're actually now in the kind of world where we could do that quite easily, right? Because we've got all sorts of recognition technologies for licence plates, and we know how fast people are going between points, we could actually have a real-time fining system. But that would offend people. Not their sense that the law doesn't apply to them generally, but their sense of how the law applies in this circumstance, that there's a certain bargain in how people are dealing with government in that instance, right? Now, people don't have the same sort of view about lawlessness with other government services. People's view isn't that everybody should be able to cheat on income support from the government and one person out of a thousand should be caught and sent to jail for it. People's view is that you shouldn't cheat on CERB. So you've got different bargains for different types of government policies, and who the heck knows how those evolve. Part of the challenge is that when you start getting into a real data-driven process, where you're looking at a high-frequency, repeated government policy or program where you could automate things, you have to figure out what citizens' views of that particular interaction with government are, what kind of justifications they expect, and what kind of rules they expect to actually be applied in practice, right? Now, that goes all the way down to citizens' data. No one would really object if, when you go to get your health card, the government draws your address information from the driver's licence database, right? That just seems like a reasonable thing in that circumstance. But people might object to government trying to get a different piece of data, for example, linking it to your health information, right? What makes it really tricky is that it's hard for us to talk about general rules that will apply in every circumstance when really what we're talking about are particularities. And what makes it triply difficult, for those of you listening who are in the business of actually implementing government practices, is that there's not always a great chance that a program is going to be called out or audited by the Auditor General, but there's always some small chance, right?
And who wants to be the person who stuck their neck out, really tried something, and then risked the public eye for a reason that didn't really have anything to do with the fundamentals of how the thing was being done, or whether it was within the law in practice, but with whether it kind of offended public sensibilities? So it's just a way of saying this stuff is really political, which makes it much more difficult to figure out how you could ever apply general rules to all these specificities.
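As a back-of-the-envelope illustration of the two enforcement bargains Peter contrasts, the sketch below compares the expected penalty per speeding incident under a "rare capture, high fine" model and a "high capture, low fine" model. The capture rates and dollar amounts are hypothetical, loosely echoing his one-in-5,000 and $300 figures, not actual enforcement statistics.

```python
def expected_fine(capture_probability: float, fine_amount: float) -> float:
    """Expected penalty per speeding incident under a given enforcement model."""
    return capture_probability * fine_amount

# "Rare capture, high fine": roughly one in 5,000 speeders is pulled over and fined $300.
rare_capture = expected_fine(capture_probability=1 / 5000, fine_amount=300.0)

# "High capture, low fine": automated plate recognition catches nearly every speeder,
# each paying a small fine (the amount here is chosen only for illustration).
high_capture = expected_fine(capture_probability=0.95, fine_amount=5.0)

print(f"rare capture, high fine : ${rare_capture:.2f} expected per incident")
print(f"high capture, low fine  : ${high_capture:.2f} expected per incident")
```

Even though both models target the same behaviour, the second one touches nearly every driver, which is exactly the shift in the implicit bargain with government that Peter suggests people would notice and might resist.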
Absolutely, and inside government we call that the Globe and Mail test. How would this look if it's--
Yes- Yes.
Yeah. It's such a reasonable test, right? Because it actually captures what you're interested in, which is not just, are we doing things within the law, but are we doing things within the law and within people's reasonable, but actually kind of complex or contingent, expectations of what we might do as government.
Absolutely. So, Peter, I think you've brought us right into the heart of the matter here, and I'm going to turn my next question to you, Wendy. What is citizen consent, and how do we measure it? Can it be implicit? Should it be explicit? I know I've given you a whole bunch of questions there, but can you give us a bit of a primer on citizen consent?
Um, I'm not sure I can cover the consent question fully, because it's not a small topic, as you know. I think when we think about consent in the context of algorithmic government, we're talking about two questions: first, what is consent, and second, how do we secure that consent once we figure out what it is? In my work, I think a lot about what consent means in the broader context of democracy in the digital age. As Peter talked about in his presentation, consent does a lot of work for us in democracy and in work on human rights, and his lecture was right to point out that consent is sort of the secret sauce in making democracies work. It gives them legitimacy and it gives them these sort of procedural accountability steps that can be taken. A lot of the time we think about this as implicit consent: you are a citizen of this country, you broadly accept the rules that govern this country. But in the digital context, because we are moving from an analog world to a digital, perhaps completely digital, world, I think the consent needs to be a lot more explicit. And I would say it needs to be meaningful, clear, and informed. So, thinking about what counts as consent, what makes it meaningful? If you think about models from the private sector, we now click "accept" all the time on long lists of terms and conditions, or we accept these cookies, and we don't think about it because we don't have the time or the necessary skills to understand the legal meaning of those terms. Is that consent meaningful? I certainly think for government purposes it's not meaningful, and I don't think it's appropriate. So that's one thing: we have to think about what it means for somebody to give consent. Going back to my point about data literacy, the other thing is that consent needs to be clear. People need to know what they're consenting to and what the benefits and drawbacks of that consent would entail. What are the benefits of having a government go algorithmic, at least in part, and what are the potential pitfalls? Consent also needs to be informed in some sense. This is the idea that people need to know what they're agreeing to and at least have a general understanding of how governments will be using these technologies. So that's the broad framework I would use to think about what consent is. Now then, how do you go about securing it? I think this is where Peter's research really speaks to the challenges of securing consent, because people have all sorts of ideas as to why they do or don't agree with potential algorithmic government. Within a democracy, a lot of the time a simple majority or supermajority might be appropriate, but in this context I think we really need to think about how automation affects all kinds of different populations in this country. We need to ask ourselves: if we automated some areas of government, what would happen to minority groups, groups that are sort of skeptical of that or, you know, resist that kind of activity? We can't just say, oh, it's just a minority of people who are resisting, because even 10 percent of the population is a fair number of people who might be skeptical or uncomfortable with AI in government.
And then, a lot of the potentially negative effects of automation are going to hit people who may already be at risk of harms through algorithmic government, or who are already at risk of having their rights infringed upon, and who may be least able to resist decisions made by government about how algorithms will be used. That's something we need to really seriously think about: who are the people who might be accepting of it, what position do they have in our society, and who are those who may not want this to happen? And then, what is the rationale? This is again where the work Peter is doing is important to understand. If there are multiple kinds of rationales, the government needs to address each of those separately. There's not a single [giggles] explanation that's going to work, right?
Peter, what are your thoughts?
I've got- I've got many. Let me just share three intuitions with you. One is, let's go back to what Wendy said, which I think is all right. What I would add to it is, ask yourself the question, why is there a Globe and Mail test at all? Doesn't government operate within the law, right? The test is there because, in some sense, how you do things matters as much as what you do, and a lot of this stuff is about process, about how government arrives at a decision. But this is what makes it really tricky: most of what you're thinking about with consent now, with how government is doing things, is really about what the government is doing, what the policy goals are, but the concerns around AI are about how you're doing it, right? How are you using my data to arrive at things? So it puts at the forefront a set of questions which have normally been secondary within government. That's the first one. The second one is that these applications are so rapidly evolving and are often so novel in how they get applied, and frankly entrepreneurial. There's a great report by Daniel Ho and David Engstrom about the use of AI in government in the US, and one thing they noticed is that a lot of the time when AI, or kind of machine learning, gets applied, it's because somebody sees a problem and, just within their unit, they innovate a solution. That solution may run afoul of really complicated privacy laws or laws about data storage or something, right, but it's actually a better solution. So part of the challenge here is that we can't actually anticipate, even in talking to citizens, all the kinds of tools that are going to evolve. So those are two of the challenges: this is really about how you do things, and it's hard to know where the frontier is. But the other is that our models for setting up consent really don't work all that well, right? Because what we rely on for now is kind of ethics among public servants: personal ethics, some guidelines for how things should be done, and then laws about how data should be used. I think there's a model where you could imagine the following. My own professional interests notwithstanding, it doesn't make sense to just poll people on things, because what you want is an informed public thinking about these things. One option is for governments to think about setting up, effectively, citizen juries, right? Regular citizens who are disinterested in the outcome of any particular policy, but who develop enough fluency around the use of data and AI that when governments want to use an AI or some machine-learning approach which they think is over some boundary, they can go to the citizen assembly, or the citizen consent panel, there are other ways you could do it, and make the case for why they want to use it. They would go through something like a research ethics board at a university, hopefully much better than that, much more efficient, where you explain why you're using something, why it's consistent with the general framework, and then you get permission for it.
That would be one way of doing it: set up a standby group of citizens who would say, yeah, if I were in that circumstance and I found out that my data were being used in this way, I would think that's appropriate, so the government can go ahead and use it. That would allow for some accountability and some process, but also maybe a little more speed and a little more flexibility than trying to do things within the confines of legislation, which can take a bit of time to change.
Thanks, Peter. So we're approaching the time for Q&A from our audience, and I'm seeing some good questions filter through. But before we move on to those, I'm going to ask you both one last question, and I'll ask you for a real bite-sized answer. So, the $10 million question: is the widespread adoption of algorithmic government inevitable? Wendy?
No, it's not. It's a choice. And I think what we need [giggles] to do is think about... I think the role of government isn't to convince citizens that algorithmic governance is best or more efficient. The role of government is actually to give people information so that they can voice what they think about this, and to build trust rather than erode it through the quick adoption of technology. So the short answer is: no, it's not inevitable, it is a choice, but government has a role. Just not as a cheerleader, necessarily, but as an evaluator and an educator, helping citizens think about these technologies.
And Wendy-- oh, sorry, Wendy has already answered. And Peter?
Yeah, nothing is inevitable. But I think the pressure on governments to reconcile themselves to a digital world, and to become a leader in conceptualizing how one can work and function as a citizen rather than as a consumer within a digital world, is an imperative for governments. And getting into this game and playing it, rather than being on the sidelines, is probably one way of modelling that. That's a long way of saying it's not inevitable, but, you know, better to get in and swim with the water than just stand on the shore.
Thank you both. So we're just entering our question and answer period. A number of questions are coming in; to the folks sending them, thank you, and please keep them coming. I might not hit all of them, as some may have been addressed in our conversation already. The first question I will ask: what if individuals or communities do not give consent because they have been targets of abuses by government, abuses that have been amplified by AI, especially since most of the algorithms are based on training data that is difficult, if not impossible, to de-bias? The use of predictive policing or facial recognition technology is one example. Are citizens allowed to draw a red line so that an algorithm cannot be deployed? Wendy, why don't we start with you?
Sure. Um, yeah, I mean, I think the question is how the technology is being used, rather than whether the technology is inherently bad, is what I would say.
So, part of the reason why our existing technologies like facial recognition don't work well is exactly because their training sets are biased. But if we had different training sets, I'm relatively positive we could right that from a technical perspective. Now, the question is, do you want that kind of technology? I think that's a different set of questions. The other question that has come up specifically with facial recognition, since it's a tangible example, is how police are using that technology. Is it being used in lieu of good police work? There have been anecdotal situations in the United States where police have actually fed composite drawings into facial recognition algorithms. Probably not best practice, and probably amplifying existing biases. So I would say that citizen groups are right to resist these types of wrongheaded policing, or wrongheaded activity by government agents, and they should speak up. But this is the point I made earlier about really taking the concerns of minorities, numerical minorities, seriously. Especially in situations where there are historical grievances and people have experienced historical abuses, we have to be cognizant of that and not amplify those abuses through the use of technologies whose processes may be much harder to track. And I think that's really important: in some cases we should actually stop and think about how the technologies can be used better, how they can be improved, before using them more broadly in government.
Thanks, Wendy. Um, Peter, I might move you to another question just to make sure we have time for a number of the remaining ones. So, Peter, do you believe that public perception will change as we approach the AI singularity, or will these stigmas continue even once we have reached that point?
Uh, well, if we ever get to the singularity, we're all done for, so I'm not going to talk too much about that point. But no, I think this thing is going to move along in fits and starts. And I think that right now we're really nibbling at the crust of the pie in terms of what we're going to be talking about over the next 50 years if this whole thing really takes off. I mean, even the question of algorithmic bias arises because we're able to see that there's bias across groups we can identify; we're looking for our keys under the searchlight, right? But who knows how else algorithms are making decisions in a biased fashion where we can't easily understand the bias, because we don't know who it's being unevenly applied to. You know, I've got some trait and Wendy doesn't have that trait, so she's getting a loan and I'm not, and it's not about how we look, it's about some underlying construct we can't see. Then that bias becomes even harder to see. So the stuff we're working on right now involves tough, vexing problems, but the next level is way more complex, right? Like, what happens when you start evaluating the efficacy of a government policy not along some cherry-picked metric that the person designing the program chooses, but along a list of a hundred metrics identified by some machine-learning algorithm, and then some averaging of those with some other algorithm, where it's way beyond your capacity to understand what's going on, but it turns out the policy is leading to better outcomes across all sorts of measures? Then we're really in a different world, where machines are actually telling us what outcomes we should want and should value. So I think we just have to recognize the consent conversation is going to keep laddering up and getting more and more complex as algorithmic government becomes more complex. The other thing, for what it's worth: I don't think the metaverse is going to be with us anytime soon, but it is the case that over the next 50 years I think we're going to think differently about who humans are and how they're interacting with governments, and about who else and what other things we should be thinking about when we're making decisions. We're going to be doing this in the face of massive climatic uncertainty, for example. So these questions about how government should be making decisions are only going to get more complex, whether AI grows or not. But as AI increases, it becomes even more complex, because you've got more autonomy in decision-making. So, the first conversation among many, I guess, is one way of putting it.
Absolutely. [giggles] Wendy, I'll ask you the next question from our audience. So, this question: it isn't always the algorithm people distrust, but the underlying data. Can you speak to what assurances exist regarding the safety and security of the data and how it's used?
Uh, great question. Yes, that's right. There is an interplay between the algorithm and the data, of course; they sort of mutually drive each other, because without the data you don't have the algorithms, but the algorithms also produce more data, so it's an iterative relationship. You know, we don't have a lot of good guarantees around the security of data. In fact, that is a huge problem and a huge concern. I think Peter should speak to this, because he's the one who's done the polling on people's perception of algorithms and algorithmic government, but I think people are just starting to become aware of how pervasive the collection of data about their activities, on a minute-by-minute, second-by-second basis, really is. Much of that is not government-driven; much of it is private-sector-driven. So a lot of the conversation is around why government isn't doing more to help us think about this and to regulate how private-sector actors are collecting data. Canada has made moves, the EU has done things, individual states in the United States are moving, and there are lots of countries that have data protection policies in place, but they need to be updated, I think, to account for how data about people and their activities is fundamentally different from collecting data like addresses, right? To what extent is my search history on Google, or the websites I've visited, or where I've been today with my iPhone, more or less personal than something like an address or a date of birth? And I think that's what the questioner was getting at: the level of data being collected and what protections we have in place. Right now, this is exactly the question: what constitutes personal data, and what can we do about that?
Peter, do you want to reply to this one as well?
Yeah, I do. I mean, I think Wendy's points are all really, really well made, right? And if you hear Wendy talk about it more, it becomes even more fascinating, because, you know, there's sort of yourself, and then there's sort of your digital self, right? There's this person who is constituted by all the things you do online; who has the rights to that person? It's a really interesting question. So in my more expansive moments, what I really hope is that we can restart some conversations about how we use data and about what government does, because what's happening now, I feel, is that we're talking about how government should be allowed to use algorithms and AI and machine learning in the context of all these kind of arbitrary restrictions on what kind of data government can access. For example, for me the best example of it is that we designed a COVID Alert app which is, you know, pretty much useless, and really didn't provide any kind of public health data about where the disease was being transmitted. That would have been really, really useful to know 18 months ago, right? Could it have made a big difference if we'd had data showing exactly where transmission was occurring? Yes, it could have, right? But we started the conversation with a certain view of how government should be able to use data, rather than saying, look, let's just restart the conversation: we're in the middle of a pandemic, here's all the data you spew out every day that gets captured by Google and Facebook and resold, and here's the tiny slice of it that we think government should be able to use; should government be able to use more of it for the following benefit? We couldn't even get to having that conversation. It wasn't for lack of talking, right? Everybody was at home; we were talking, listening to people on the news all the time. But we started the conversation with the assumption that that little slice was all we could use. And look, it's actually not hyperbolic to say that people probably died because of that, and we were locked up longer than we needed to be because of that. Now, maybe there wasn't an app-based solution, but boy, it would have been really nice to know much earlier than we did that most transmission was happening in certain types of places with certain types of ventilation. But we couldn't, because we didn't have the imagination, I think, to say, let's just restart this, let's white-sheet this, or whatever it is a high-priced consultant would tell you, take everything off and start from the ABCs of this.
Thanks, Peter. The next question is also for you, but I'm hoping it's a quicker one. Is your survey data on support for AI published or available in some form? And this audience member is also particularly interested in the relationship between support for AI and support for immigration.
So I'd say those are good questions. A couple of things about that. The first is that the data will be made public at some point; we're still readying things up and working them through. The issue of AI and immigration is one thing I'm really, deeply interested in, and I was giving talks on it just last week. One thing I've been thinking about a lot is the relationship between people's concerns about AI and automation and their potentially broad dislocating effects, and how that correlates with their political views. Another way of saying that is: if parties are in the market for votes, how do they leverage or use or exploit people's fears around AI? And frankly, it's a wide-open playing field politically. So I think the parties that are able to articulate comprehensive, useful government responses to the dislocations coming from AI and automation will be the parties that do better. And that's not limited to parties of the right or the left, or parties that are anti-immigration or pro-immigration, quite the contrary. We have a paper with PPF that came out a couple of years ago showing that people who are concerned about AI and automation and its wider societal effects just want government to do something. They're really inviting government to come up with policy responses, not only to the interesting but relatively narrow questions we've been talking about here, but in fact to the whole suite of areas where AI is going to touch us. They would like government to have responses in those areas, from labour markets all the way through government data.
So I think we have time for at least one, maybe two, more questions. This one is for both of you, but we'll start with Wendy. Is there any jurisdiction that is seeing very positive reactions from its citizens to automation through AI?
So, I don't know of, I guess, specific data around that. I would say we could point to some of the societies that have widely deployed, not AI, but just digital government, looking at Estonia and places like Singapore, where, in fact, almost all social services provided by government are online and digital. But in terms of whether people have been largely positive about AI in government specifically, I don't know. This is, I think, a better question for Peter, frankly. Part of it is knowing which parts of government are actually using AI, and I don't know if that's necessarily something that's very publicly known.
Thanks. Peter?
No, I would just add onto what Wendy said the following: if the Canadian government looks like any other government out there, then, between AI and machine learning, which are really different things, it's more likely that machine learning is being used than AI. But where and when these things are being used is not completely known, right? Because you've got innovative people trying out new things all the time, and there's surely somebody out there who's written an algorithm that helps them wade through applications for something faster, so they can get away faster on the weekend. That's an example of machine learning, if they've done it at a certain level, or at least of algorithmic decision-making. So it's happening in more places than we think, and we should find ways to regularly audit this, not in the sense of being a rule enforcer, but auditing in the sense of just understanding it and cataloguing it.
Perfect. All right, I'm going to try and squeeze in one last one for both of you. Are you aware of, and what do you think of, the Treasury Board Secretariat's Directive on Automated Decision-Making, or the algorithmic impact assessment tools also put out by Treasury Board Secretariat?
Yeah, I'm a big fan of them. I think they're good responses by a public service that's trying to figure out how to use these things. If I were responsible for those files, I'd be repeating over and over again that these are tools and guidelines that have been created for one moment in time, and they're part of a conversation that's really going to be ongoing. And just to the point of, for example, an algorithmic impact assessment tool: remember that when you can identify bias in something, bias in unfair allocations between two people, it appears as bias because you can identify differences between the people who have received different allocations. But if they differ in ways that you can't see, but that are still arbitrary, there is still bias there that you can't see, right? So don't be lulled into a sense of false certainty that tools can help you see the totality of the pernicious effects of AI. They can't.
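To illustrate Peter's caution about what a bias check of this kind can and cannot see, here is a minimal sketch that measures approval-rate gaps only across groups that happen to be recorded in the data. The records, group labels, and decisions are invented for illustration; any attribute that is not captured in the data simply cannot surface in an audit like this.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

# Each record: (recorded group label, whether the automated decision approved them).
# The group labels are invented placeholders for whatever attributes an audit records.
decisions: List[Tuple[str, bool]] = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def approval_rates(records: List[Tuple[str, bool]]) -> Dict[str, float]:
    """Approval rate per recorded group; bias along unrecorded traits is invisible here."""
    totals: Dict[str, int] = defaultdict(int)
    approvals: Dict[str, int] = defaultdict(int)
    for group, approved in records:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {group: approvals[group] / totals[group] for group in totals}

rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)                              # approval rate per recorded group
print(f"largest observed gap: {gap:.2f}")  # only gaps across recorded groups are visible
```

That is the false-certainty trap he describes: an audit like this only surfaces disparities along attributes someone thought to record in the first place.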
Yeah.
Wendy, do you want to comment on that question as well?
Yeah. I guess just to add to what Peter said, because I think he put it quite well. Just to emphasize the point: what we're calling AI, by which we're largely referring to machine-learning technologies, because of the way they work, and because sometimes things are passing through multiple layers of algorithms, it's hard to know exactly what's going on in terms of the outcome the machine spits out. So sometimes the outcomes can be very arbitrary. The one example that really sticks in my head, and this is why something like an impact assessment tool would help but also not necessarily catch some of the things we're concerned with: there's a fish that often gets fished for as a prize, you know, people catch it and take a photo with it, and an AI algorithm was really good at identifying it, not because it recognized the features of the fish, but because it recognized that there were human hands around the fish; it was frequently picking up on fingers. So every time there were fingers in a picture with a fish, it would say it was this kind of fish, because that's how the fish appears most of the time in its training dataset. So, just to put a point on what Peter was emphasizing: it's good to judge the outcomes, but because the processes can be very black-box, it's hard to use these kinds of assessment tools to really gauge what's going on inside an algorithm.
That's a great example, Wendy. Thanks for sharing it. So listen, this has been a really full session. I feel like we've covered a lot of ground and there's been a lot to reflect on, so I really hope our audience members enjoyed this conversation. Before we go, if you're both willing, I'd love to ask each of you to share one key message you'd like our learners to take away from this event. Wendy, can we start with you?
Gosh. Okay. One key message.
Just a short one.
I think we-- [laughs] that's the key. In our conversations around whether or not we use algorithms, and specifically machine learning, in government, I think we need to think about the potential harms, in addition to the benefits, of what happens to citizens when we use machines to make decisions, or to help us make decisions, in government.
Thanks, Wendy. And Peter?
Just to totally mix messages: I agree with Wendy entirely, and I also hope that people in government will think about how much potential there is, because of the way government already makes decisions, to harness the power of machine learning, artificial intelligence, and high-powered computing to do government better. That doesn't make it any easier; it's a real challenge. But there are a lot of tools there, and they can help us do it better, with all those important considerations on Wendy's list.
Great. Thank you again, both of you; this has really been a fabulous conversation. To all of you learners who attended today: this event was a partnership between the Transferable Skills and Digital Academy business lines here at the Canada School of Public Service. Together, we would like to sincerely thank Wendy and Peter, as well as our series partner, the Schwartz Reisman Institute for Technology and Society, for your participation today; it really was a great conversation. Talented people like you, sharing your time and your expertise, are what allow us at the school to produce excellent learning experiences for the benefit of public servants across the country. So again, thank you; it's been great. We're also grateful to you, our learners who registered for this event. Thank you to all of you who have tuned in. Your feedback is really important to us, and it's how we can continue to refine and improve events going forward. So I really do invite you to complete the electronic evaluation that you'll receive by e-mail in the coming days. Not yet automated through any sort of AI, but hopefully coming soon.
And finally, I'll just leave you with next steps at the school. We look forward to seeing you again next time and invite you to consult our website to discover the latest learning offerings coming your way. We've got courses, events, programs, and all sorts of other learning tools to help you in your careers. The next learning event in this series, the AI Is Here series, will take place on Monday, December 13th, on the topic of when and how to use AI in government. Registration details will be available on our website very soon. So thank you once again, Wendy and Peter, and everyone who attended today; this has been wonderful.
[A small chyron appears in the bottom-left that reads "canada.ca/school-ecole".]
Thanks so much.
Thanks so much.
Bye-bye.
[The chat fades out, replaced by the Webcast Webdiffusion logo. The Canada logo appears.]