Transcript: CSPS Data Demo Week: Human Resources Foresight with Blue J Legal
[The animated white Canada School of Public Service logo appears on a purple background. Its pages turn, opening it like a book. A maple leaf appears in the middle of the book that also resembles a flag with curvy lines beneath. Text beside it reads: Webcast | Webdiffusion.]
[It fades out, replaced by two title screens side by side in English and French. At the top, it shows three green maple leaves, each made of a different texture. Text beside them reads:]
CSPS Data Demo Week
Human Resources Foresight with Blue J Legal
GC data community
[It fades out, replaced by a Zoom video call. On the top left, Benjamin Alarie, a bald man in a blue suit sits in front of abstract art. On the top right, Wendy Bullion-Winters, a woman with long, caramel coloured hair and glasses sits in front of multiple paintings, one showing a winter landscape, and others showing geometric streets. On the bottom, Taki Sarantakis, a man with glasses, a neat goatee and a blue zipped-up sweater sits in a home library. He speaks.]
Taki Sarantakis: Good morning, good afternoon, good evening, depending on where you are across Canada and around the world. My name is Taki Sarantakis.
[Taki's image fills the screen]
Taki Sarantakis: I'm the president of the Canada School of Public Service and welcome to our kick-off of the CSPS data week.
[A purple text box in the bottom left corner identifies him: "Taki Sarantakis, Canada School of Public Service."]
Taki Sarantakis: As a public servant, you know that you need tools to do your job and you know that those tools change over time. And so, at the Canada School of Public Service, what we are doing this week is we are focussing on some tools related to data. And we are kicking the week off with artificial intelligence and data, and we are closing the week off with artificial intelligence and data in a second application. And in between, well, you'll just have to figure that out and tune in and watch. So, today's demonstration is about a particular application that is already in production. And not only that, it's actually on standing offers of the Government of Canada. So, it's actually kind of an approved Government of Canada tool that you can use today if you let your procurement officers know that you need it. And data is something that we all know you will need going forward—and not just going forward, you actually need it today. We can see this in a pandemic. We can see how the management of data helps us do our jobs. So, it is my great pleasure today to introduce Wendy Bullion-Winters, who is the director general of human resources at the Canada School of Public Service. Wendy will introduce the topic for today, the particular application and our special guest.
[Taki's video window is unpinned, and the three panelists' video windows reappear.]
Taki Sarantakis: After the introduction, our special guest will do a demonstration of the technology and then all three of us will come together for a discussion. Hope you can stay and join us for the discussion on artificial intelligence, data and the Government of Canada.
Wendy Bullion-Winters: Thanks, Taki, it's a real pleasure to be here today. I'm excited for everyone to get to see this demonstration.
[Wendy's window is pinned.]
Wendy Bullion-Winters: When we think about artificial intelligence and machine learning, we might not immediately make the connection to human resources.
[A purple text box in the bottom left corner identifies her: "Wendy Bullion-Winters, Canada School of Public Service."]
Wendy Bullion-Winters: But there are real use cases for it, and it can be a game changer in my domain. We've been using Blue J software on my labour relations team for more than a year and there are two immediate advantages. First, it's a much more user-friendly search engine for case law than our existing tools. Second, and most importantly, is the predictive analytics component. In just 10 minutes, an LR advisor can answer a short questionnaire, bring together all the specific details of an ongoing case, and the system will use the data, tabulate the relevant historical case law and pump out a report that shows jurisprudence and gives a prediction on the likelihood of an arbitrator agreeing to varying disciplinary outcomes. So this report is just the baseline for the human analysis, and that's where the magic happens. It's when we can get the right data quickly and efficiently into the hands of our subject matter experts. So without further ado, it's my pleasure to introduce Ben Alarie, co-founder and CEO of Blue J Legal.
[Wendy's video is unpinned, and all three windows move to the side of the screen as a browser shows a presentation slide. Text on it reads:
Blue J
Using AI in HR and labour relations
Prof. Benjamin Alarie
Osler Chair in Business Law, University of Toronto
CEO, Blue J Legal]
Benjamin Alarie: Thanks very much, Taki, thanks very much, Wendy. It really is my pleasure to be addressing you this morning.
[Benjamin's window is pinned.]
Benjamin Alarie: And I'm going to be talking about using AI in human resources and labour relations. I'm a professor here at the University of Toronto. I'm in my office here at the Law School today 'cause it has the best Internet and so I figured it'd be the most reliable place to be addressing you all. I'm also CEO of Blue J Legal. I am shortly going to jump into a demonstration of the Blue J platform and how you can use it in HR and labour relations. Before I get to that, what I'd like to do is provide a little bit of context, a little bit of background about where we find ourselves with artificial intelligence and machine learning in law generally. And then we'll narrow in on HR specifically and the Blue J platform.
[The slide in the browser changes, showing a black and white portrait of a man with a large, wispy mustache looking into the distance. It's titled: "Law as prediction: 1897." A quote from Oliver Wendell Holmes from Harvard Law Review in 1897 reads:
"For the rational study of the law black-letter man may be the man of the present, but the man of the future is the man of statistics and the master of economics."]
Benjamin Alarie: OK, so one starting point, and this is the starting point, I think, for a lot of modern thinking about law and ultimately about labour relations and human resources management is that law fundamentally is about prediction. And there's this gentleman, Oliver Wendell Holmes Jr., who's a very famous judge from the United States. He sat on the United States Supreme Court and he wrote in an article, in the Harvard Law Review in 1897, that for the rational study of the law, the black-letter man may be the man of the present, but the man of the future is the man of statistics and the master of economics.
It's very interesting—setting aside the gendered language there and that emphasis on man, man, man, man, man—but the interesting thing here is that Holmes is making the claim that law is all about statistical regularities and that there must be some way to look at the law and use statistical reasoning to unearth or identify the underlying patterns in the law. And of course, the whole idea of the rule of law requires that the law have these regularities to it. And so, it's interesting, Holmes saw this very clearly in 1897, and we've been playing catch-up with that in the legal system from 1897 to the present. It fundamentally is a prediction problem. Any time a lawyer advises a client, that's a prediction about what the law requires. Any time a labour relations advisor advises the government on a particular situation, that's making a prediction about what the law requires. Any time an HR professional advises his or her organization, that's making a prediction about what the law requires. And it probably adds in some additional colour around human nature and the situation, the context. But fundamentally, the legal component here is all about prediction.
[The slide changes. This one shows a bar graph. It's titled "Prediction in image recognition." The graph's vertical axis shows markings in increments of 5, going up to 30. Its label reads "ImageNet Top-5 Error." The horizontal axis shows years, stretching from 2010 to 2017. Two yellow bars at 2010 and 2011 sit between 25 and 30. The blue bars representing later years get smaller and smaller, the smallest being 2017, sitting under the 5 marker at 2.25. A single red bar in the middle of the chart has no date marker but sits at the 5 marker. Text in the chart reads "T-S: Trimps-Shoushen (Deep Ensemble Learning)".]
Benjamin Alarie: And what's interesting is because this is all about prediction, that's something that AI and machine learning are becoming exceptionally good at. So, this graphic depicts the error rates in something called the ImageNet competition. The ImageNet competition was held annually from 2010 through to 2017, a competition where computer algorithms, computer programs, were tasked with appropriately describing the contents of a huge number of photos. So, digital photos. And you can see in 2010, the top team, that top algorithm, had an error rate of close to 30%. So that means 70-something percent were correct and close to 30 percent were incorrect. The little red bar there, in the middle of the graphic, represents the human error rate. So, humans are not perfect at describing what we see in digital images. In fact, humans are right about 95 percent of the time, 95 percent agreement with the ground truth in this data set. What's interesting is that right between 2014 and 2015, these systems, these algorithms, surpassed human ability at appropriately describing the contents of digital photos, such that by 2017 the contest was no longer as interesting as it was back in 2010. And the algorithms are very, very accurate now. What this means is that these algorithms are now better than humans at predicting what the appropriate caption should be for digital photos. So, this is history. This has already happened. Humans have been surpassed in image recognition.
[The slide changes. This one shows a headline and another bar graph. The headline, dating from November 2020, reads "'It will change everything': DeepMind's AI makes gigantic leap in solving protein structures." The bar graph details the high performance of AlphaFold 2 at the CASP14 protein folding contest compared to past winners, including AlphaFold.]
Benjamin Alarie: More recently, DeepMind's AI team was working on something called AlphaFold. And this is a similar competition. You can see the bar chart here in the inserted graphic depicting the accuracy of contestants in a contest to predict how proteins would fold. This is a very interesting competition with really important scientific applications for human health: understanding how proteins fold based on the identity of the components of the protein tells us a lot about how to treat certain kinds of diseases, certain kinds of ailments, including COVID, of course. What's really astonishing about this is AlphaFold 2 had a major breakthrough in 2020 and has basically solved this problem using machine learning. This is a headline coming out of Nature, which is one of the world's leading scientific publications, and the headline here is "It will change everything." Google has basically solved this problem. AlphaFold 2 has solved protein folding. This is going to have huge ramifications for health care going forward and for medicine. So, this is very exciting.
[The slide changes. This one is titled "Law as prediction: 2021." It shows a cut off portion of Benjamin's paper, "The Path of the Law: Toward Legal Singularity," and an embedded YouTube video of Benjamin giving a TEDtalk.]
Benjamin Alarie: But it's not just figuring out the captions for images, it's not just about protein folding and other scientific applications. My view on this is very much in line with Holmes and very much in line with the increases that we're seeing in computing power, with algorithmic developments and with digital data. We have more and more information available to help us figure out what the law is on a particular question. And so, my prediction is that law is moving in the direction of what I would call a legal singularity, where legal uncertainty is going to be something that is largely relegated to the past. I think we will look back on the 20th century and marvel at how well we did with all of the legal uncertainty that we confronted in getting through, kind of muddling through, our lives. I think very soon—and we're already seeing this, so I'm going to show you this in a moment—we're seeing how technology can give us insight into the likely patterns of outcomes if something were to go to an adjudicator or to court and be resolved. This is actually extremely exciting. It has big implications for the work that folks like me do as legal educators, as legal academics: how we teach law, how we think about law in law schools, but also very much so in practice in government administration. How do you engage with the law? How do you think about the law? And there are tool sets being built up that help with this, allowing you to think about law as a prediction problem and to see law as a prediction problem.
[The slide changes. This one is titled "Blue J's government users leverage our AI to build better legal analyses." Below it, there are 4 bullet points. There are:
Speed: analyze legal positions on the merits with unrivaled speed
Confidence: the results are 90%+ accurate; system locates the right cases immediately
Collaboration: share results easily with others to optimize strategy and approach
Result: existing HR infrastructure is up to twice as productive, efficient, and reliable]
Benjamin Alarie: So, at the moment, Blue J's government users already are leveraging AI to build better legal analyses. What you're going to see in a moment is how you can take a legal position on the merits and get that report that Wendy mentioned in the introduction. And so you can analyze a particular situation, a particular set of facts, and the system will ask you a series of questions, hold your hand through the process of collecting those facts from you, and will tell you, based on the information you've provided to the system, here is the likely outcome if something were to go to adjudication. And the results are really quite accurate. So we're at the point where the system is as accurate as the very best humans at predicting what would likely happen if something were to go to adjudication.
In a whole bunch of settings, our results are 90 percent or more accurate. And the system, as Wendy suggested, can locate the correct precedent cases almost immediately. So as you're working with the system, you can see, "Oh, I should look at this case," "I should look at this case a little bit more closely." And then it becomes very easy, actually, to collaborate on the right way to approach a particular situation because you know what would happen if something were to go to adjudication. That can inform the decision that you're making in the present. We've been working at Blue J with the Centre for Labour and Employment Law at the Treasury Board Secretariat, at the Department of Justice, as well as with Wendy's team at the Canada School of Public Service. And what we found is that the existing HR infrastructure can be up to twice as productive, efficient and reliable using this kind of technology.
[The slide changes. This one is a screenshot of the Financial Post. A headline from February 2021 reads "Blue J and Osler, Hoskin & Harcourt LLP Introduce Innovative Transfer Pricing Search Tool."]
Benjamin Alarie: I mentioned Blue J has collaborated with the Centre for Labour and Employment Law, and we've also collaborated with other folks, Osler, Hoskin & Harcourt is one of our collaborators. We announced this collaboration on transfer pricing in the tax context a couple of months ago.
[The slide changes. This one is a screenshot of the CBC News Website. In politics, a headline reads "Litigation gone digital: Ottawa experiments with artificial intelligence in tax cases."]
Benjamin Alarie: The CBC is aware of the work that Blue J is doing with the federal government in taxation as well. This is the headline: "Litigation gone digital: Ottawa experiments with artificial intelligence in tax cases." This is actually from 2018. That pilot project has now turned into something more significant with the CRA and the Department of Justice. So that's very exciting in terms of improving the efficacy of our tax system for Canadians.
[The slide changes. This one is a screenshot of the ABA Journal website. A headline reads "Law firm teams up with Canadian legal tech company on AI-powered case prediction tool."]
Benjamin Alarie: And we're working with other folks south of the border, in the United States, on labour and employment law issues as well. So this was a recent headline in the American Bar Association journal on this technology. So this is really happening across multiple different areas of law: tax, employment law in Canada, in the United States. It's really taking the legal industry by storm. And this is all, I think, on the way to producing really strong and useful predictions about how situations would be decided if they were to go to adjudication.
[Benjamin switches tabs in his browser, pulling up the Blue J interface. A sidebar shows saved predictions, and boxes on screen show searches for different subjects, such as "Cause for Dismissal" and "Constructive Dismissal."]
Benjamin Alarie: So what I'd like to do now is jump into a demo of the Blue J platform. This is the screen that you see when you come into the platform. And the interaction with the system is essentially as follows. You would select an issue that you're curious about, that you want a prediction on. And so if I scroll down here,
[He scrolls past subject boxes and chooses one.]
Benjamin Alarie: you can see something here labelled progressive discipline. This is something, this is a tool that you can use to understand just what should be the consequence for a particular worker who has engaged in some kind of misconduct in an employment setting, based on the prevailing case law, based on the prevailing other precedents that are available with respect to that particular kind of misconduct, the repercussions for the employer, how other employees have been treated, and any disciplinary guidance that a particular employer or unit has provided. And so what I'm going to do is show you how you would run through a progressive discipline analysis, take you right through to that end stage where you produce a report. I'll show you quickly how you can change a couple of the different factual assumptions going into that prediction to really get a sense of the situation. And then we'll talk about how this has implications more broadly beyond progressive discipline for a bunch of different areas of human resources and labour relations. 'Cause as you can tell in the platform, there are many, many different issues that the system can help you analyze. So, it's not just about progressive discipline. We're gonna use this as the example today, though. So, I'm going to reload an analysis that I preloaded into the system from the start.
[He selects a saved prediction from the sidebar. A new page reading "Progressive Discipline" and filled with information on the subject loads. On the left-hand sidebar, options have numbers beside them.]
Benjamin Alarie: And this is the way that you interact with the system. So, what you will see here on the left-hand side are a number of questions relating to the background facts of this particular situation: 13 different questions around the nature of the misconduct that's involved, a handful of questions about the employer's response, a handful of questions around any previous discipline this particular worker would have confronted, and then some questions around the other circumstances. And the situation that you are going to see is an example of a worker who has engaged in some misconduct related to the preparation of a background check for a colleague, for a different worker in the federal government. So, there's misconduct here in conducting a background check. What we're gonna do is not belabour the particular facts, because the most important thing is actually just showing you how you would be running through this analysis. So, when I click continue, what you will see is that this is going to get into the questions almost immediately.
[Benjamin clicks through, reading questions that have appeared on the page. Each question field has selectable response options. He selects answers as he goes.]
Benjamin Alarie: "So which jurisdiction governs" this relationship? Well, it's federal. Obviously, the choices might be the other provinces, but here we have federal. "Is this employment subject to the Canada Labour Code?" We're going to say no. This is a unionized employee.
[He scrolls past question fields, reading pre-selected answers to questions.]
Benjamin Alarie: This worker has worked for the employer for five years in public administration. It's a clerical and administration position. Not a supervisory or management role. Yes, it's a safety-sensitive position in the sense that these background checks are meant to help safeguard public safety. This employee did have a heightened responsibility to demonstrate proper compliance with policies. And this is subject to the federal Public Sector Labour Relations Act. So, these are all questions that are just setting the table. We know that these kinds of questions are ones that can influence just what the appropriate consequence should be. And to foreshadow what we're going to find out: is the appropriate sanction for that employee a warning? Is it going to be a suspension, or is it going to be a termination? The system is going to tell us just how to calibrate the particular consequence for this particular employee in this setting.
So, when I click "continue," we're going to see a bunch of questions relating to the nature of the misconduct. What's the nature of the misconduct? What's a breach of fiduciary duty? There's a conflict of interest here. It's a breach of workplace policy, obviously, and there's some dishonesty or misrepresentation here. And it's incompetence and errors in work, substandard work performance.
But you can see that there is the opportunity to add a bunch of different information into the system just to really capture the complexity of the misconduct at issue. And if you're curious about it, you can click "more" and it will tell you exactly what each of these items is actually capturing.
[He clicks a "more" under question 2.1, and a text box pops up with extensive descriptions of each selectable misconduct at issue.]
Benjamin Alarie: And so, dishonesty and misrepresentation has a description. Each one of these kinds of examples of misconduct has a greater explanation that you can tap into if you're unsure about which one of these to select.
[He scrolls through more questions.]
Benjamin Alarie: So, we've done all that work already. And then there are questions around what's the nature of the potential harm as a result of that misconduct. Here we have "loss of public confidence and trust in the public institution" and "reputational damage to the employer's business." And again, there's a description of each of these things; if you click on "more," it'll take you through that. Then there's a question 2.2.1, did any of these harms actually manifest? So, did bad things happen as a result of this misconduct? Keep your eye on this one, 2.2.1. We're going to circle back and change that when we get to the report, and see what the difference is between the harms being only potential and the harms actually manifesting, and what influence that has on the overall result. We're going to continue on.
Was the misconduct intentional or accidental? It was intentional. It was planned. It was not really provoked, in the sense of provocation captured by this question. It involved multiple actions. So it was a pattern of behaviour and there wasn't a medical condition or disability that contributed to the misconduct. There was an expression of remorse. There was an apology. The employee was not honest and forthright about the details of the misconduct at issue originally. And there were none of these additional issues that applied.
I'm going to move a little bit more quickly now. We've got the misconduct on the table. We know what that is. We know those background issues. Here, we've got a series of questions about the employer's policies. So, does the employer have a written policy saying not to do this? Yes. Was there a published rule prohibiting this conduct? Yes. Was the employee offered any training specifically about this prior to the misconduct at issue? No. Did the employer conduct a proper investigation into the misconduct? Yes.
[A tinkling jingle plays. Benjamin reaches off screen, and the jingle stops.]
Benjamin Alarie: Did the employee have the opportunity to explain or provide their version of the events at issue? Yes. OK, does the employer have a published policy of incrementally more severe discipline? Is there a progressive discipline policy? Yes. What does it provide? It provides for a suspension as the next step of discipline based on the published policy. This is also going to be important. So we'll return to this question 4.2 as well. How long ago was the most recent incident of misconduct for this employee? Well, this employee has a clean disciplinary record. So there wasn't previous misconduct here. And has the employee engaged in a similar form of misconduct in the past for which there was no disciplinary warning? No, the answer's no. OK, we're almost there. Is there any issue of other employees engaging in similar misconduct with no disciplinary consequence? No. No other employees were also responsible for this particular misconduct. And finally, there are none of these special situations that apply, like being a sole caregiver or having to relocate to find other work and so on.
So, we've entered all the information into the system. What we know is that with this set of facts, in this situation, we're able to make a very good prediction about what an adjudicator would say is the appropriate sanction for this particular worker, based on all of the case law. So, this is not Blue J saying, "This is necessarily the correct outcome." What the platform is telling us on the next screen is, based on all of the case law, the best prediction of what that adjudicator would say, based on what adjudicators have said in all of the previous cases, based on what's happened in those previous cases. So, I'm going to click on "View Prediction Report" and we're going to see what the predicted outcome here is.
[Benjamin clicks a blue button at the bottom of the screen that reads "View Prediction Report." A prediction report replaces the question form, and the questions remain in the left sidebar. It reads "Predicted Outcome, Suspension: 2+ weeks." Beside the predicted outcome, a graphic titled "Confidence Level" shows a grey bar with a speck of blue marked "warning — 1%," a grey bar mostly filled in with blue marked "suspension — 72%," and a grey bar filled approximately a quarter with blue marked "termination — 27%." Below the predicted outcome and stats sits a list of cases with similar factors.]
Benjamin Alarie: OK, here we have a prediction that the appropriate outcome would be a long suspension, so 2+ weeks suspension. What you see here under confidence level is that it's very unlikely that an adjudicator would say that a warning is the maximum sanction here. What we see next is that there's a 72% chance that an adjudicator would say a suspension is the appropriate outcome and not termination, but we do still see a 27% chance that an adjudicator would agree, "OK, based on the gravity of this conduct, this misconduct, a termination would be appropriate here." This is really interesting. It seems like suspension is the best answer. The kind of central point of that distribution is saying, yes, a suspension, 2+ weeks is the most likely thing. It's very unlikely that a warning would be the outcome if this were to go to adjudication. But there's an outside chance, like a one in four chance, roughly, that an adjudicator would agree that termination is appropriate given this misconduct.
[He scrolls through the case list.]
Benjamin Alarie: So, as you scroll down, you can see that we've got the most similar cases immediately identified for us to take a look at. And so, if I click on "View Match Factors…"
[Benjamin clicks on a field labelled "View Match Factors" next to a case. Lists of similar and dissimilar factors expand.]
Benjamin Alarie: We can see the city of Bradford case is very similar based on all of these similar factors involved in that misconduct. Some dissimilar factors there, but it's the most similar case in the data set. And in that case, the result was termination. That's interesting. And I can scroll down, and it gives us an explanation about why this is the appropriate outcome. So this is a plain language explanation, so you as the user can get a sense of what the right outcome is. I'm going to do two things here. One, I'm going to change the question here on the left about whether any of the above harms actually manifested. I'm going to say yes: instead of just being potential, these harms actually were realized, and we're going to see what influence that has on the prediction.
[Benjamin pulls up the mentioned question and changes his answer to "Yes." The Prediction Report goes blank a moment, and then refreshes with the predicted outcome of "Termination." The Confidence Level graphic weights a suspension at 46% and termination at 53%.]
Benjamin Alarie: OK, so that makes a really big difference here, because what we see is that now the most likely outcome is that an adjudicator would agree that termination would be appropriate in this circumstance, because it's caused reputational damage to the employer and it's also resulted in a loss of public confidence and trust in a public institution. So, the right outcome changes from a long suspension to a termination. It's close. It looks like termination is 53% likely as the outcome, suspension, 46%. But on balance, the best prediction is that a termination would be the right outcome. I also mentioned another question here, which is around which of the following disciplines is available based on the published policy.
[Benjamin pulls up question 4.2 from the sidebar and selects "Termination."]
Benjamin Alarie: And if we say based on the published policy, if the appropriate outcome was termination and we keep all the other facts the same, what happens here to the prediction about what the adjudicator would do?
[The prediction report refreshes again, and the predicted outcome remains "Termination." The Confidence Levels shift to suspension at 8% and termination at 92%.]
Benjamin Alarie: OK, now it's very clear that if the published guidance on progressive discipline for this particular employee said that termination is the right outcome based on the nature of the misconduct, and these harms actually manifested, then termination would be amply justified in the minds of the adjudicators who are looking at resolving these cases. So, if I were the one running this analysis, this is going to be very helpful for me in figuring out just how I should be handling this in my department. How should I be approaching this particular misconduct? This informs my analysis based on all of the most similar cases, all of the, you know, the messiness of the misconduct at issue. I've analyzed it, and this is giving me a lot of confidence in making sure that I've rooted my analysis in the applicable case law, in that body of precedent that is relevant for this situation. So this is very helpful to me. One of the things I can do is now download this report. So if I click on "download report," it will produce a PDF of this analysis. It's downloading this report now.
[Benjamin clicks a button at the bottom of the report reading "Download Report." A download bar appears at the bottom of the browser, and an icon shows the progress of the report's download. Benjamin clicks on it, and the report opens in a new tab as a PDF. He scrolls through it.]
Benjamin Alarie: And if I open it, I have this as a PDF that I could share with colleagues. It outlines the analysis, gives the full explanation of the results, outlines the details of the misconduct, the prior discipline. It gives me links to all of the leading cases that are similar to the facts of my particular case, along with the outcome there. And so, you can see that of the ten most similar cases, in seven of them the adjudicator found in favour of termination of that worker. And it's also got all the specific questions and all the specific answers that we entered into the system in producing this. So, this is a machine learning model that has been trained on hundreds and hundreds of progressive discipline cases. This was built in conjunction with the Centre for Labour and Employment Law in the federal government. And this is a very powerful tool for analyzing an area of the law that can be really difficult to penetrate, to really get a feel for, and to calibrate your own intuitions and predictions about what would be merited in different circumstances. One last thing I should mention is that you can switch to a French analysis; the entire platform is bilingual.
[Benjamin switches back to the Blue J tab. In the top right corner of the interface, he clicks on a user icon, and selects "passer au français" from a dropdown menu. A small warning pops up and he dismisses it. The page reloads in French.]
Benjamin Alarie: So, I've been doing this en anglais, but you can switch into French at any point and then I would be able to simply reload this final screen. Everything is the same, except the entire analysis is in French. And so, I could do the same thing, change it. And if I downloaded the report, the report would also be in French. So, depending on the preferred language of analysis of the user, you can work with the system entirely in French or entirely in English, and you can also speak across both official languages in collaborating with colleagues who might be working on the same file with you, who may have more comfort in one language or the other language. And so, this is very, very helpful in the government when you're frequently going to be working with others who may prefer working in one or the other official language. And lots of people are equally comfortable in both. So, I think what I'll do is I'll pause here. I'm keen to have a discussion with Taki and Wendy, so I'm going to stop the share and, Taki, Wendy, let's talk about this.
[Benjamin's shared screen disappears. Taki and Wendy's video panels reappear, but Wendy's panel is black, and a muted mic icon sits by her name.]
Taki Sarantakis: Thank you, sir. You know what? Let's go back to the share, because we're going to show off a few things just so that people can really see the power of AI.
[Benjamin re-shares his screen and adjusts his Blue J page back into English. The three video panels sit at the right-hand side of the browser.]
Taki Sarantakis: Because that's what the 900 or so people that are with us today are actually here to see. So, as Ben walked through this, you've really seen AI in action. So, AI, for people who are of my age and Wendy's age, AI was kind of science fiction. AI was something in the future. But it's here. It's no longer science fiction. And you've been using AI for longer than you know. Every time you listen to Spotify, every time you order something from Amazon, every time you watch a movie on Netflix, AI has been operating in the background. But now you're slowly starting to see AI come into the workforce with different tools. So, I want to highlight a few things that Ben did, and then we'll play with this platform a little bit more.
[Wendy appears in her video panel.]
Taki Sarantakis: So, number one, for those of you that have been watching carefully, one of the things that's happened is Ben has taken a lot of manual labour and has turned it into a simple click. Wendy, talk to us just a little bit before we play a little bit with the platform, talk to us a little bit about how much labour is involved in something like this.
Wendy Bullion-Winters: So, from the user's experience, this is extremely intuitive. It really is like 10 to 15 minutes—
Taki Sarantakis: Oh, sorry, Wendy. Sorry, I don't mean from the perspective of using the platform. I mean, talk about without this tool.
Wendy Bullion-Winters: Oh, well, without this tool. Yes, it's very labour intensive. I mean, for those of you who are listening who are in this field, you'll recall that the existing search engine, which was the first advantage that I mentioned in my opening remarks, is not very easy. You can't search, for example, based on the type of incident. You cannot, with our existing tool, search for a particular type of disciplinary incident that occurred. For example, in this one, which was a breach of trust, or someone who was not carrying out their duties responsibly in the workplace. So, the way that we've had to scour historical case law for jurisprudence and precedents was very manual and labour intensive. Now, CLEL helps us a lot by sending us the most recent decisions that are coming out of the PSLRB. But even maintaining our situational awareness up until yesterday was quite onerous on the team. So, what I love about this is that you can answer these questions very quickly and get access. It's the computer itself that's going to do the work for you. It's going to scour all of that case law and pull up all the relevant cases, up until yesterday night. So it is that up to date. And then you can decide what you do with it. The human decides what we do with it. OK, this one's really applicable. That one's less applicable. I'm going to cite these five cases that are nearly identical to the situation that I'm dealing with at hand.
Taki Sarantakis: So, the first thing is just kind of clicks: instead of going through all of this thing, you just click. Now, the second thing that you will have noticed is data. Now, there's at least two kinds of data here. The first you saw, which is, basically, the system asked Ben a series of questions and they were all, more or less, from what I saw, yes or no. And it's the same kind of thing that you would be going through as an analyst in any kind of program. It's like if you're doing an ESDC program or an ISED program, you have the same kind of questions, like: Who is your recipient? Is it an eligible recipient? Who is the population that you're serving? Talk to me about the project. Which project category does it fall into? Talk to me about the merits of the project. Does it have this merit? Yes, no. Yes, no. Et cetera. So, what you're basically doing is you're converting text into data just through the simple "yes," "no." But there's a second aspect of data here that I'm going to ask Ben to talk about, which is that it's comparing your questions to something, right, Ben? Because it's making a prediction. Talk to us about the second pool of data.
Benjamin Alarie: Yeah. So that second pool of data that you're referencing, Taki, has been laboriously collected. And so, what we do at Blue J is we have an entire legal research team, supervised by lawyers, who have gone to the effort of collecting all of the past decisions, all of the past judgments. And then what we've done is create, really, a master data set, a really big data set of the full text of all of these judgments, and identified all of the different factors that adjudicators take into account in each one of these cases. So, the full data set for the system is rooted in, obviously, the full text of all of those decisions, but then also a very careful analysis identifying all of the different features of those cases, the things that adjudicators actually tend to care about in deciding what the right consequence is for an employee. And then we identify that consequence. And so, this second data set really involves a really elaborate representation of the facts of those cases and the outcome. And then it's pattern matching. Right? So, we've got hundreds and hundreds of these cases. For each one of them, we've got dozens of pieces of information about what actually happened in the circumstance, and we've got the outcome. And so, when you are being asked about that first set of data that you talked about, like what actually happened in your case, that's like pulling together that string of information about that particular situation. And then from there, the system says, "OK, based on this string of information in your particular case, there are some inferences that can be made based on this big stack of data that the system already knows about, has been trained on." And that's where that prediction is coming from, based on all of those pre-existing cases. This is not an obvious point, but I think it's an important one: when it's making that prediction about what should happen in your case, it's drawing on all of those cases in that second chunk of data, in that second data set. So, it's identifying those decisions with similar factors, yes, but it's not ignoring everything else. It's learning from everything in the data set in making this prediction. There is no way a human would be able to read a thousand progressive discipline decisions in coming up with an appreciation of what should happen in the next case. And thank goodness people don't have to anymore, because the system is basically doing that work for you. Now, of course, that's not the end of the story. Then you need to layer on your human judgment and your intuition and your knowledge of the players at the table. And you need to make a smart decision based on all of those human factors, too. But at least the legal component of this, what would likely happen if this were to go to adjudication, is solved for you.
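To make the two pools of data concrete, here is a minimal sketch of the pattern Ben describes, using scikit-learn. The feature names, the tiny training set and the choice of model are illustrative assumptions; the transcript describes only the general approach of coding past cases into factors with known outcomes and scoring a new questionnaire against them, not Blue J's actual schema or algorithm.

```python
# A minimal sketch, assuming a scikit-learn style classifier. Feature names,
# data and model choice are illustrative, not Blue J's actual pipeline.
from sklearn.ensemble import RandomForestClassifier

# Pool 2: past decisions, each hand-coded into the same yes/no (1/0) factors
# adjudicators tend to care about, paired with the outcome actually reached.
FEATURES = ["intentional", "harm_manifested", "clean_record",
            "remorse", "training_offered", "policy_published"]
past_cases = [
    ([1, 1, 0, 0, 1, 1], "termination"),
    ([1, 0, 1, 1, 0, 1], "suspension"),
    ([0, 0, 1, 1, 0, 0], "warning"),
    ([1, 1, 1, 0, 1, 1], "termination"),
    ([0, 0, 1, 1, 1, 1], "suspension"),
    ([1, 0, 1, 0, 0, 1], "suspension"),
    # ...in practice, hundreds and hundreds of coded cases.
]
X = [facts for facts, _ in past_cases]
y = [outcome for _, outcome in past_cases]
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Pool 1: a new situation, captured as the questionnaire's yes/no answers.
# Scoring it yields one probability per outcome class, which is what the
# "confidence level" bars in the demo display.
new_case = [1, 0, 1, 1, 0, 1]
assert len(new_case) == len(FEATURES)
for outcome, p in zip(model.classes_, model.predict_proba([new_case])[0]):
    print(f"{outcome}: {p:.0%}")
```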
Taki Sarantakis: So, thanks, Ben. So now I want people on the line to start thinking about their data in their work. Whether they're accountants, whether they're program managers, whether they're IX's, whether they're executives. What's the particularity of your data and what is the global data that you deal in? And it might be obvious to you, but it might not be obvious. So, the more you start thinking about this, the more that you'll see the applicability in your work. And Ben mentioned something really important there, which is pattern recognition. A lot of us get paid, including myself, because I have pattern recognition. I've seen things in the past and therefore, because I've seen things in the past, the transaction costs associated with me doing something are relatively low the more I've seen it. But what Ben has shown us now is that the machine is also working in pattern recognition. And the machine is actually able to do faster a lot of the things that I, over the course of my career, have brought in kind of a manual way to a problem. We'll talk a little bit more about what the consequences of those are. But first, think about the tool. The last thing that's really, really important here is the notion of prediction, because so many of us actually work in areas of prediction without knowing it. Every time a minister asks a deputy minister or an ADM, or a director general something, it's often a prediction. Every time your boss asks you something, he or she is actually asking for a prediction. What do you think would happen if? I wonder how the scenario would change if we do the second thing?
So, let's play a little bit with the platform. So here we have a scenario where the case law in the past has said, given this set of facts, the particularities against this data—case law, tribunal decisions, Charter of Rights cases, etc.—here's what would happen: roughly three times out of four, you'd get a suspension. Roughly one time out of four, you would get a termination. But let's play a little bit with it. So, on the left, you see kind of the questions. So, let's go to 2.3, which just happens to be on my screen.
[Benjamin pulls up question 2.3 in a pop-up dialogue box.]
Taki Sarantakis: So, did the nature of the potential harm or misconduct pose a risk to public health or safety? I would think that if you change the answer from "yes" to "no," we would get a different answer. But I actually don't know because I'm not an expert in this area. Click it then, please.
[Benjamin updates the answer to "no." The prediction report refreshes with a predicted outcome of a 2+ week suspension. The confidence levels show suspension at 68%, termination at 31% and warning at 1%.]
Taki Sarantakis: So, you see, it has had some impact, but not a lot, so it's not necessarily a consequential impact. So, let's go to 2.5.
[Benjamin pulls up question 2.5.]
Taki Sarantakis: Was it planned or premeditated? So right now, we've got it set at, I think, planned. Is that what the "yes" means?
Benjamin Alarie: Yeah, that's right.
Taki Sarantakis: So, let's change it to a "no," that it was kind of an accident as opposed to premeditated.
[Benjamin sets the answer to "no" and the report refreshes. The predicted outcome doesn't change, but the confidence level changes to suspension at 86%, termination at 13% and warning at 1%.]
Taki Sarantakis: Oh, look at this. So, this has had a big impact. I think you've lowered the likelihood of termination by about half, and I think you've increased the probability of suspension to a pretty good level: 86 times out of 100, adjudicators in the past or courts in the past with similar-type facts have ruled this way. So, you have pretty good confidence. Flip it back, Ben.
[Benjamin reverts the answer, and the report refreshes, showing the previous results.]
Taki Sarantakis: So now we're back to the roughly three out of four, one out of four scenario. Let's keep scrolling down… or, now we're at roughly two thirds, one third. So, let's scroll down and let's just see.
[Benjamin scrolls through questions on the sidebar.]
Taki Sarantakis: OK, so slow down because I'm old. The employee- let's see "remorse," 2.9. So here, the employee said, "I'm very sorry. I didn't mean to do this." Let's assume they didn't.
[Benjamin pulls up question 2.9 and selects "no." The report refreshes.]
Benjamin Alarie: So that boosted the probability of a termination, Taki. The numbers are small, but it says 39%, so it makes it much more likely that a termination would be appropriate.
Taki Sarantakis: Exactly, so now you can see that, as your lawyer or your HR professional or whoever is asking these questions, you can actually see why in the past they've been asking you these questions: because they're trying to apply the data that's in their head against the particularities of what you're doing, or the data that's in their library, or the data that's on a computer. And then they're kind of a little bit fumbling towards: what would happen here? What would happen here? Let's flip it back so that we're roughly, again, one third, two thirds.
[Benjamin reverts the answer to "yes" and the report refreshes.]
Taki Sarantakis: And then let's scroll down and find another question. Scroll down a little.
[Benjamin scrolls through the sidebar slowly.]
Taki Sarantakis: Uh. Let's see, 2.12.
[Benjamin pulls up question 2.12.]
Taki Sarantakis: So, this is something where an employee didn't understand the consequences of what they were doing, they expressed a willingness to engage in the same behaviour again, et cetera. Let's see what happens if you click that. So, this kind of speaks to the employee.
[Benjamin selects "yes" and the report refreshes. The confident levels show suspension at 60% and termination at 39%.]
Taki Sarantakis: Wow. So. Look at that, though, that did have an impact, so flip it back again.
[Benjamin reverts the answer and the report refreshes.]
Taki Sarantakis: Now, let's look at it a little bit from the perspective of the employer. So, the employer in this case is an organization in the Government of Canada. Let's see whether how the employer behaved or didn't behave, or published things ahead of time, impacted the outcome. Let's say, 3.2. "Did your organization offer any training on this?" So, let's say that it did.
[Benjamin pulls up question 3.2. He selects "yes" and the report refreshes, showing confidence levels of suspension at 54% and termination at 45%.]
Taki Sarantakis: Look at that, big jump, big, big jump. So, as an employer, if you've made it clear that this kind of behaviour is not kosher, it's not acceptable, and you've also offered training, you have now changed it from two-thirds suspension to roughly 50/50 termination. Now you can start to see as an employee, here's kind of the situation. As an employer now, you're actually doing something really interesting. You're not necessarily just dealing with this case. You're actually starting to pre-emptively eliminate cases because this is now telling you as an employer, if you do the following things, you've kind of done the right thing, and employees have kind of acted beyond the right thing. But you as an employer have an obligation to, or it's to your benefit as an employer to let employees know what the right thing is. So let's flip it back, Ben.
Benjamin Alarie: I think it was 3.2, so we'll change that back to "no."
[Benjamin selects question 3.2 and reverts the answer to "no." The report results revert. Benjamin scrolls through questions.]
Taki Sarantakis: And let's scroll a little bit more. Oh, so this speaks to kind of the history of the employee. So instead of just focussing on the act that happened, you're now taking the act plus the history of the employee. So, let's see… Let's do 4.4. In our current scenario, the employee- this is the first time the employee has done something like this. Let's see what happens if this is their second or third time, or more than their first time.
Benjamin Alarie: Yeah. So this question—just to be super clear—is asking whether this particular employee has done a similar thing in the past and it kind of got brushed under the rug earlier. There was no consequence for the employee beforehand. And so, it was kind of arguably condoned before. Let's see what happens when we change that.
[Benjamin pulls up question 4.4 and selects "yes."]
Taki Sarantakis: Thank you for that.
[The report refreshes. The confidence levels show suspension at 80% and termination at 19%.]
Benjamin Alarie: OK, so that really materially reduced the probability of a termination, it says at most you're going to be able to do a suspension here.
Taki Sarantakis: Look at that, like this is a huge, huge point. So, this is basically saying, because the employee did it in the past and you didn't take any action, that kind of increases the likelihood that the employee will get a suspension as opposed to a termination, because you, as kind of the boss, you were kind of sloppy or lazy or you didn't follow through. So now you can see how this is a very powerful tool to not only deal with these cases, but also to give you a kind of, a bit of a policy frame for "if this happens in my organization, it's important that I do this or it's not important that I do that," et cetera, et cetera. So now, Wendy, let's bring you in a little bit. So, you're the head of HR. These little tweaks that we just did here, how long would these have taken without AI?
Wendy Bullion-Winters: Well, this is the part that I think I can't emphasize enough. That's the game changer, because before this goes to arbitration, it goes through three levels of grievance hearings within the department itself. And across the table is management on one side and often representation of the employees on the other, sometimes unions, sometimes others. So, sometimes at these grievance hearings we end up arguing about a particular fact. For example, was the employee warned, or did the employee have training? And now we're able to use this tool to pump out two scenarios: if we can't agree on whether or not the employee was warned, we can pump out one scenario that shows he or she was warned, and the other that shows they were not warned. And sometimes it doesn't change the prediction. And so now we can stop wasting our time talking about this particular variable and get to the basis of the discussion. So that's one aspect that I find really, really useful in practice. Otherwise, this also gives us the baseline. It's about 30 static questions that don't change. Every single ongoing case is answered with the same 30 static questions, which provides for uniformity and consistency in the analysis between the differing labour relations practitioners. So, for me, as a head of HR, I'm sure that they are adequately, and consistently, and uniformly analyzing every case in a way that I can then have confidence in, even from our PE-01's all the way up. So, it's a great training tool as well. As we know, it's a shortage group in our HR field, and sometimes we can't recruit experienced labour relations practitioners. This tool, in my opinion, really advances the learning curve for our new young recruits who are coming into this complex field, which can be very intimidating and overwhelming at first. It can also be a place where the nomenclature can seem cryptic. And one thing we didn't show you is that there are definitions. You hold your mouse over particular points and it's going to give you a definition of what that word means or what the question really is asking about. For example, on the mitigating or aggravating circumstances that we just went through: a mitigating circumstance being that the employee showed remorse, an aggravating circumstance that there was reputational damage to the organization. So that's the real utility of these tools.
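The "two scenarios" step Wendy describes is essentially a sensitivity check: hold every answer fixed, flip the one disputed fact both ways, and see whether the predicted distribution actually moves. Here is a minimal, self-contained sketch of that check; the toggle_disputed_fact helper and the toy data are hypothetical, standing in for the real questionnaire and model.

```python
# A minimal sketch of the "run it both ways" check described above. The toy
# training data and the helper below are hypothetical; only the pattern of
# flipping one disputed answer and comparing predictions is from the demo.
from sklearn.linear_model import LogisticRegression

# Columns (assumed): [misconduct_intentional, employee_warned, showed_remorse]
X = [[1, 1, 0], [1, 0, 1], [0, 0, 1], [1, 1, 1], [0, 1, 1]]
y = ["termination", "suspension", "warning", "termination", "suspension"]
model = LogisticRegression(max_iter=1000).fit(X, y)

def toggle_disputed_fact(base_facts, index):
    """Score the case twice, with the disputed fact answered 'no' then 'yes'."""
    for value in (0, 1):
        facts = list(base_facts)
        facts[index] = value
        probs = dict(zip(model.classes_, model.predict_proba([facts])[0]))
        print(f"disputed fact = {value}:",
              {k: f"{v:.0%}" for k, v in probs.items()})

# The parties can't agree on whether the employee was warned (column 1), so
# run the prediction both ways; if it barely moves, stop arguing about it.
toggle_disputed_fact([1, 1, 0], index=1)
```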
Taki Sarantakis: So, you can see now that these tools actually help you as an employee, as an employer, as a lawyer, as a tribunal. And the way they help you is they start to eliminate the transaction costs associated with going into the past and figuring out maybe a hundred years of case law, or reading through mounds of documents, or repeatedly going back and forth in the government with our notebooks, saying, "I thought that the employee did show remorse. I remember, I recorded that. I wrote that down three months ago." And somebody says, "No, I wrote that they didn't have remorse." But here, in a very simple, logical way, you've converted words into data. And again, I keep encouraging all of you to start thinking about: in what part of your job do you use words that can be converted to data? And if you think about it a little bit, you'd be surprised. Because we're starting to do this at the Canada School with things like course descriptions, where we've taken simple words and we've converted them into data, and we'll maybe do a session on that at some point. But it's really, really important. Now, Ben. This must cost—don't tell me how much you're charging—but this must cost a gazillion dollars, because you are… Basically, you've taken like thousands of cases. You've gone through them manually. You've had a team of lawyers, eight or nine hundred lawyers, constantly looking at this and pre-programming what would happen. Obviously, I'm joking. Tell us what it really does.
Benjamin Alarie: So, [Benjamin chuckles] Taki, are you asking me how much it cost to build the system or how much it costs for people to use the system?
Taki Sarantakis: How much it would cost. Like, if I wanted to use this today, is it a prohibitive cost or is it relatively cheap?
Benjamin Alarie: Right. So, this is interesting. It goes back to something that you talked about at the beginning. This is available on the national master standing offer through the Government of Canada. And the cost per seat, per user, per year is roughly a thousand dollars. So this is a pre-negotiated price. You can call it up: per user, it's roughly a thousand dollars. Or, if you prefer thinking per month, it's about eighty dollars per month to have access to this system, which means I don't think it's at all prohibitive. It's a small investment in really improving the efficiency and effectiveness of a team here.
Taki Sarantakis: And that is another key characteristic of AI. Once it's built, the applications are, relatively speaking, pennies. Once something is done, all that work that was manual is now automated, because the algorithm—every time there's a new case, you don't have to go back and reprogram things and have lawyers, and a computer scientist, and others—the algorithm just learns. The algorithm just incorporates it. So, once you've set it up—and this is why a lot of companies like Facebook and Amazon and Google are kind of a little bit taking over the world, because they're not doing things in a manual way anymore. They don't have armies of people going through data and combing through "where does so-and-so live?", "where does Bob live?", "What are the types of things that Mary likes to buy?", "Did Bill get his vaccine shot yesterday?" These are all things that the algorithm learns. So, let's close off on that: how does an algorithm learn, Ben?
Benjamin Alarie: Well, I think it's quite fitting for the title of this week. It's data. Right? So, we've got data. All of these machine learning models are trained on data. So, as new cases get decided, they augment the existing data set, and that changes the algorithm's appreciation of what's likely to happen in the next case. And so you just keep layering on new information over time. And what's really nice is, as the tribunals learn over time, as social preferences change, as adjudicators encounter new and novel cases, this system is able to accommodate those changes because we've got new data. The system is going to be able to learn from those new cases, and it'll slightly change the predictions about the new cases that you would be analyzing in government. In HR, as an LR advisor, for example, you're going to be benefiting from new cases decided just last week. The predictions are going to be taking that into account because of data.
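In code terms, the learning Ben describes is usually nothing more exotic than refitting on an augmented training set; no one reprograms the model by hand. A minimal sketch, under the same illustrative assumptions as the earlier snippets rather than Blue J's actual pipeline:

```python
# A minimal sketch of learning from new decisions, assuming the same kind of
# coded-case data as above; everything here is illustrative.
from sklearn.linear_model import LogisticRegression

X = [[1, 1, 0], [1, 0, 1], [0, 0, 1]]         # coded facts of past cases
y = ["termination", "suspension", "warning"]  # adjudicated outcomes
model = LogisticRegression(max_iter=1000).fit(X, y)

# A new tribunal decision is published, coded into the same schema (a
# hypothetical example), and appended to the training data.
X.append([1, 1, 1])
y.append("suspension")

# Refitting on the larger set is the whole "update": subsequent predictions
# absorb the new precedent automatically.
model = LogisticRegression(max_iter=1000).fit(X, y)
print(model.predict_proba([[1, 0, 1]]))
```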
Taki Sarantakis: So, Ben, Wendy, thank you. That was the first episode of our CSPS Data Demonstration Week. All the companies that we're showcasing this week, including some things that we've done within the Canada School of Public Service ourselves for our own purposes, are Canadian. The technology is Canadian. In some cases, they are already kind of preapproved in the Government of Canada system; that is, they've gone through the PSPC hoops. But today's purpose wasn't so much for you to learn about labour relations, unless you work in the labour relations field. Today and every other day is about showing you the power of data for your job. And think about the tasks that involve data in your job. Think about the tasks that involve pattern recognition in your job, and think about the tasks that are repetitive. And those are some of the things that, in the future—and in the future might mean later this afternoon or tomorrow—but in the future, these are things that AI is coming for. And it's not coming to hurt you, it's coming to help you. Because those things are manual, time intensive, repetitive. And you, as a professional in the Government of Canada, you want to bring your professionalism to a job. You don't want to be bringing manual, time-intensive, inefficient things to a job. You want to be bringing what's human about you to a job, which is your judgment, your creativity and your curiosity. So, thank you for joining us for our first session of the CSPS demo week. We're going to close with AI on Friday. Every day we're having a new demonstration. Tomorrow, we are getting a demonstration on one of the big issues of our time, disinformation and trust. And you will see a little not-for-profit Canadian company run out of the University of Toronto that is working to help make the consumption of news in this era safer for all of us. Thank you for joining us. Be well.
[Wendy smiles, Taki sits back in his seat. The Zoom call fades out. The animated white Canada School of Public Service logo appears on a purple background. Its pages turn, closing it like a book. A maple leaf appears in the middle of the book that also resembles a flag with curvy lines beneath. The Government of Canada wordmark appears: the word "Canada" with a small Canadian flag waving over the final "a." The screen fades to black.]