
Building Artificial Intelligence-Ready Leadership in the Public Service (LPL1-V69)

Description

This event recording explores how artificial intelligence is transforming the work of the Government of Canada, the evolving role of public service leaders, and the skills needed to lead through technological change while maintaining a people-centred approach to decision-making and service delivery.

Duration: 00:59:07
Published: April 13, 2025
Type: Video



Transcript

Transcript: Building Artificial Intelligence-Ready Leadership in the Public Service

[00:00:00 Animated CSPS logo appears.]

[00:00:07 Erica Vezeau appears full screen. Text on screen: Erica Vezeau, Director General, Digital Academy, Canada School of Public Service.]

Erica Vezeau: Good afternoon and welcome to today's event, entitled Building Artificial Intelligence-Ready Leadership in the Public Service. My name is Erica Vezeau. I am the Director General of the Digital Academy here at the Canada School of Public Service, and I will be your moderator today.

Before I begin, I would like to acknowledge that I am speaking to you today from the traditional, unceded territory of the Algonquin Anishinaabe People. I would like to express my gratitude to past and present generations of the Algonquin people, as the first guardians of this territory I occupy. I also recognize that our participants come from different regions of the country and that you may therefore be working on a different Indigenous territory. I invite you to take a moment to reflect on the territory you occupy.

We have the pleasure today of learning from Rishi Behari, a professional coach, consultant, and educator, and the founder and CEO at Flowstate Coaching & Consulting. Rishi is a former Associate Director of the Master of Management in Artificial Intelligence program at the Smith School of Business at Queen's University, and works at the intersection of leadership, ethics, equity, and artificial intelligence.

Today's format will be a presentation by Rishi, which will take approximately 25 minutes.

[00:01:40 Split screen: Erica Vezeau, and Rishi Behari.]

Erica Vezeau: Then, we will move on to questions and answers for Rishi, from you, our audience.

[00:01:48 Erica Vezeau appears full screen.]

Erica Vezeau: Today, we will start with a presentation from Rishi for about 25 minutes, and then we will move into questions and answers, where we welcome questions from you, the audience.

We will be using Mentimeter during the presentation, which is an online polling platform. There will be a QR code that will be shown to you at the time.

[00:02:05 Split screen: Erica Vezeau, and Mentimeter QR code.]

Erica Vezeau: But in case you want to use your computer and get prepared, I encourage you to go to www.menti.com, and the code to access the room should be available on your screen right there. It's 4904 1085. So, just to reiterate, we will be using Mentimeter during our session today. The QR code will be presented to you now, but if you want to use your computer, please go to www.menti.com and the access code is 4904 1085.

[00:02:42 Erica Vezeau appears full screen.]

Erica Vezeau: Without any further delays – I know you're not here to speak with me, you're here to speak with Rishi –

[00:02:47 Split screen: Erica Vezeau, and Rishi Behari.]

Erica Vezeau: I'm, again, very thrilled to present Rishi Behari to you today, and you're going to have a great 25 minutes with Rishi going through the next presentation. So, completely over to you, Rishi.

Rishi Behari: Thank you, Erica. Hello, everyone. I'm happy to be here.

[00:02:42 Rishi Behari appears full screen. Text on screen: Rishi Behari, Founder and CEO, Flowstate Coaching & Consulting.]

Rishi Behari: As we launch into this session, I actually want to ask you to do a thought experiment with me. We're going to be talking about AI and leadership and our relationships with technology as leaders today. I want you to begin by actually thinking back five years ago.

So, that puts us at February 2021. And think about where you were and what you would have been doing around this time. Of course, we were in the midst of a global pandemic. How were you using technology at that time? How has your relationship with technology changed? Well, we have sessions like we're having today, which are remote. That is something new that came out of the pandemic. We are able to broadcast messages, have conversations, and talk about leadership in a way that we weren't able to before.

[00:03:53 Split screen: Rishi Behari and title slide. Text on slide: Building AI-ready Leadership in the Public Service.]

Rishi Behari: I want you to think now, also, not just about where you're at today with your relationship with technology and emerging technologies like AI, but five years into the future. So, put yourself in February 2031, and try and imagine how technology will change and how that may affect your life. We're going to get at some of what comes to mind for each of you when we ask ourselves this question, and that's really the central question that we're here to answer today: how do we prepare for modern times as well as the future, knowing that unexpected things like the pandemic can happen, and that we must navigate that uncertainty as leaders?

So, I'm really looking forward to diving into all of that with you today. And here is the agenda that we're going to follow.

[00:04:42 Split screen: Rishi Behari and slide, as described.]

Rishi Behari: I'm going to tell you a little bit more about myself and my approach to the ideas around AI and leadership. We're going to situate this in a long history of how technology has changed the way humans interact with each other, and how technology has been changing what leadership looks like for humanity since the start of history. We're going to talk about where this is all going, what the research tells us about the future of work, and how that applies specifically to your work.

We're going to test our AI IQ, so you're all going to participate in a live quiz today, so I hope you did your homework and you're ready. This is going to help us establish a baseline because there's a lot of confusion around what is true and what is not true when we start talking about emerging technology and AI. And we're going to help define that following the quiz.

What is AI? We'll give you a working understanding and definition, as well as strategic frameworks that you can use to leverage your knowledge of AI in your work. We're going to talk about language models. Obviously, that is a big part of this. And then we're going to bring it all together: we'll have some time for your questions, and we'll talk about how AI and leadership interact specifically. I look forward to the time to answer your questions.

[00:06:00 Split screen: Rishi Behari and slide, as described.]

Rishi Behari: So, a little bit more about myself. Erica told you some of my background. This journey with AI really started for me in 2018. I had specialized in creativity, entrepreneurship, and innovation when I did my MBA. And I found myself really lucky to be in the right place at the right time, in an ecosystem between Toronto and Kingston, where, at the Smith School of Business at Queen's University, we launched the world's first program to meld knowledge around AI and coding with business and strategy.

I was able to learn from some of the greatest minds of our time in AI, people like Geoffrey Hinton, if you're familiar, a Canadian who is considered the godfather of deep learning and neural networks. And I am not a technical person in terms of my background with AI. What that allowed, and sort of forced, me to do was to learn how machine learning worked and to learn the technical aspects so that I could explain them in layman's terms. And that's some of what I'm hoping to do for you today. I've been able to carve out a bit of a niche as a knowledge translator between technical and business-facing teams.

And along that journey, I've been lucky to work with some of the most well-known brands in the world. We partnered with the NFL in that program every year – kind of topical since we have the Super Bowl coming up this weekend – I was able to work with and visit Disney, who sat on our board. In my own continuing work, I've been able to work with companies like Amazon, as they navigate how to lead in technology and to focus on the human-centred piece. Companies, global brands, like Coca-Cola have been part of this journey for me as well. I am hoping to be able to share with you some of how these global brands think about AI and leadership and really be able to then tailor that to what that means for Canada and the public service.

And at the heart of this are discussions around ethics, sustainability, the environment. And so, I've been involved in these conversations for quite a long time now. That's something that I think about, teach and speak on regularly. And I know you will all bring diverse experiences and thoughts, which I'm happy to get into as we progress today.

[00:08:16 Split screen: Rishi Behari and slide, as described.]

Rishi Behari: I wanted to share a little bit about myself personally. In this time of technology, one of the themes of today is going to be how we approach the human side. And so, I just wanted to share a little bit about myself as a person.

A few images here on the left as you're facing the screen, you'll see me at a waterfall in Iceland. I love to travel. I love to go around the world. I love to learn different languages. And my travels eventually led me to Vancouver Island, Victoria, where I am today. And that is Tofino, my favourite place to go surfing, on the right. That's how I spend a lot of my time. And you might be able to see my surfboard behind me on the right in your screen. And at the centre, the image is of a Japanese art form called Kintsugi, and this is really central to how I approach leadership as a person.

And if you're not familiar with Kintsugi, what they do is they will take broken ceramics, like this bowl, and they put them back together. And instead of hiding the cracks, they actually highlight them in gold or a silver lacquer. And the idea is that the piece is more beautiful for having been broken and put back together, and we don't hide the scars, we celebrate them.

I've shared some of my accomplishments and experiences, but I've made many mistakes along the way and continue to learn from trial and error. And I believe that central to leadership is also the humility to understand that we won't always get it right and that we try to approach things with care and gentleness as we are the stewards of people, including ourselves. So, that's a little bit about me and the approach that I bring to this work.

[00:09:56 Split screen: Rishi Behari and slide, as described.]

Rishi Behari: What is unique about today? I believe we have over a thousand of you from across the country here today. And in an age of rapidly evolving technology, it's very tempting to want to speed up and play catch-up. I know we often feel, in our work, that we don't have enough time and that we need to get up to speed quickly.

Today is actually an opportunity to slow down and to take an hour together, a rare opportunity to examine some of the bigger questions that technology like AI brings up. We all know the famous story of the race between the Tortoise and the Hare. If you know how that story ends, you know that the faster animal actually does not win that race. I think this is a good metaphor for us as we learn about these technologies that are evolving so rapidly in this race. This is a marathon, not a sprint. I hope today we'll give you the tools to understand how slowing down can help us move more quickly in the long run.

And you'll see that I have put the quote, "The medium is the message". You might recognize this from a famous Canadian, the media theorist and futurist Marshall McLuhan. And what did he mean by this?

[00:11:10 Split screen: Rishi Behari and a slide with an image of Marshall McLuhan. Text on slide: "We become what we behold. We shape our tools, and thereafter, our tools shape us."]

Rishi Behari: Well, you can see his quote shared on the screen here, which tells us that we become what we behold. We shape our tools, like technology and AI, and thereafter, our tools shape us. So, part of understanding the leadership in AI is understanding how our use of these technologies is actually changing us as people and as a society. And in bringing us to a place of understanding around how this has happened throughout history, I want to show you a brief history of how technology has influenced leadership.

[00:11:50 Split screen: Rishi Behari and slide, as described.]

Rishi Behari: So, the first image that you'll see here on your screen is the printing press. In the 15th century, Gutenberg brought this invention to the world. And how did this change leadership? Well, it challenged formal authority. For the first time, as people learned to read, they were able to interpret messages for themselves. This had major ramifications for religion and organized society around the world. And as people developed the ability to think critically, it really changed the way they analyzed messages, and we saw a big shift away from formal authority.

As we move quite a bit forward in time, you'll see the radio. Now we're no longer dependent on a physical copy to carry a message; we're able to broadcast messages over long distances, and the influence that we can have as leaders changed tremendously as our reach changed.

Then I'll move towards television, early television, as audiences began to gather for big events like the Super Bowl. And what's interesting to point out here is, as I mentioned, that the medium is the message, as Marshall McLuhan said. There's a famous example with our neighbours to the south in the US, where, in a debate during the election campaign between Nixon and JFK,

[00:13:12 Split screen: Erica Vezeau, Rishi Behari, and slide, as described.]

Rishi Behari: people who listened to the debate on the radio actually thought a different person won than the people who watched it on television. So, that can tell you about how the technologies that we choose to transmit our message can change the way that message is received. That was a very powerful example.

And then we move into the era of the modern computer, and we all have smartphones now. What this really did, when you combine it with the advent of the Internet in the '90s, was give rise to data analytics. We are now able to track so much more of people's behaviour and where their attention goes.

And if you think about the analogy, or the example of the Super Bowl that I mentioned, we used to have a general idea of how many people were watching on their television sets. But now, with the rise of Internet, we know the demographics of those people. We know what they searched after. We know how effective a Super Bowl ad was. And so, this changed the way the power and influence worked as well.

[00:14:14 Rishi Behari appears full screen.]

Rishi Behari: And of course, as we move towards modern times, we have the rise of social media, where we're connected, sharing ideas with each other on platforms like Facebook, Instagram, YouTube, X, formerly Twitter, et cetera.

[00:14:28 Split screen: Rishi Behari and slide, as described.]

Rishi Behari: Now we all have cell phones in our hands or nearby, and we're what we call "always connected". We can receive a message at any time. You don't need an event like the Super Bowl to get a message across to people, because you have their attention. The first thing most of us do in the morning is check our cell phone, and it's a marketplace of ideas – whether consumer ideas or political ideas – at our fingertips 24/7. And that, of course, leads us to the advent of artificial intelligence and the modern language models that we're here to talk about today.

And I want to emphasize that if you look at this progression, you can see that technology is evolving more rapidly than ever. It has changed more in the last 20 years than in the 200 before it. And that is part of why we're here today.

[00:15:22 Split screen: Rishi Behari and slide, as described.]

Rishi Behari: I enjoy this illustration because it comically depicts the famous idea of the evolution of humankind: we go upright, and then we hunch back over and stare into our phones. There's a photographer who removed cell phones from pictures of people out in society, and it just looks like a bunch of people staring at their hands. This has become a normal part of our behaviour.

The truth is, as we follow global politics and news, we know that this is where we get our information. This is where our leaders often send messages. And when people receive these messages, it's often without the whole context. It might be a 10- or 15-second clip that gets shared and posted on social media. So, keeping communication in context is more challenging than ever, while at the same time it is easier and more efficient than ever to spread ideas and messages.

[00:16:16 Rishi Behari appears full screen.]

Rishi Behari: What does this mean for the future of work? Well, Harvard recently did a study of the top 10 most in-demand skills for the future of work.

[00:16:26 Split screen: Rishi Behari and slide. Text on slide:

  1. Digital adaptability
  2. Empathetic communication
  3. Emotional and social intelligence
  4. Conflict management
  5. Persuasion and influence
  6. Inclusive leadership
  7. Calculated risk-taking
  8. Strategic agility
  9. Engaging and inspiring leadership
  10. Leadership without formal authority.]

Rishi Behari: They looked across thousands and thousands of job postings for leaders, and we're going to talk about all of them today. But I really want to draw your attention to number one, which is digital adaptability. The keyword here is adaptability, because adaptability means not that we need to know specific software programs, but that we are able to learn new ones quickly. When we talk about the speed of progress, we can see how digital adaptability becomes so important – and, surprisingly for many, the most important skill we believe we need from the future of work and from our future leaders.

[00:17:09 Split screen: Rishi Behari and Mentimeter QR code slide.]

Rishi Behari: So, this is a good moment to give you some time to log in to Mentimeter. When we talk about digital adaptability, what are we talking about? Well, I want to talk about some common facts – and myths – about artificial intelligence in particular, so that we can work from a common understanding moving forward.

So, we'll give you a moment to either scan the QR code, or log in at menti.com, and I'll take you through a series of questions, and we're going to get a unique perspective of how this group of federal workers think about this topic. We'll give you a moment to sign in.

[00:18:04 Split screen: Erica Vezeau, Rishi Behari and Mentimeter QR code slide.]

Rishi Behari: Okay, as you're signing in, we will slowly move to the first slide.

[00:18:14 Split screen: Erica Vezeau, Rishi Behari, and slide, as described.]

Rishi Behari: Okay, you can begin answering. The first question is: what are the first words that come to mind when you think about AI? The way this Menti word cloud works is that the themes and ideas that appear most often will be the largest and towards the centre of your screen. So, this is a very cool opportunity to create a bit of a map of our collective minds,

[00:18:53 The Menti word cloud appears full screen.]

Rishi Behari: and the words that most commonly come up when we think about AI, as this particular group, and with such a great sample size here today of a thousand people, this is going to be very interesting and very telling to see.

So, we see many words, including future, innovation, a lot of positive words, but then you'll notice there's a lot of words around fear. Hopefully this is very interesting for you to be able to see how people are thinking about this. So, really, efficiency, future, change, tools, speed, unknown, robot, opportunity, as well as automation, data, ChatGPT, are appearing here. I'll give you another 30 seconds or so, so that we can get a full picture of this slide. Words like powerful, dangerous. Wonderful.

[00:20:15 Split screen: Rishi Behari and slide, as described.]

Rishi Behari: Okay. I'm going to move to the next Menti slide now. I just want you to answer as honestly as possible. Remember, all of these answers are anonymous, aggregate data. As a starting point, do you believe that AI is generally beneficial? Are you not sure and undecided – perhaps that's part of why you're here today? Or do you think it might be harmful overall?

It's always so fascinating to see the results coming in. In the race we have, currently, beneficial slightly in the lead, followed by undecided, followed by harmful. Again, I'll give you some time. Make sure you get your answers in here, so we get a clear picture of where this group stands.

[00:21:30 Split screen: Erica Vezeau, Rishi Behari, and slide, as described.]

Rishi Behari: It does look like the trend here is that most of you believe that it is beneficial, which is very interesting. The next largest group is undecided. And of course, there is a group that believes that this could be harmful. I'm going to speak to these as we go. I like to think of AI as an element like fire. It's like asking: is fire beneficial? Are we not sure? Or is it harmful?

Well, my take would be that it's all of the above. Fire can be used in tremendously beneficial ways for humanity and has been used.

[00:22:04 Rishi Behari appears full screen. Text on screen: Rishi Behari, Founder and CEO, Flowstate Coaching & Consulting.]

Rishi Behari: It's also been used to cause some of the greatest destruction we have seen. And if you're undecided, I think part of what today will help you do is ask critical questions as we delve deeper into the topic. And fire, like AI, can sometimes take on a mind of its own and run rampant without us. So, it's really great to see where you are at as a starting point. We're going to switch to the next slide now.

[00:22:42 Split screen: Rishi Behari and slide, as described.]

Rishi Behari: Okay, with this next question, we're going into a series of true-or-false questions. Is it true or false that AI has a larger carbon footprint than the entire global airline industry combined?

So far, it looks like about twice as many of you believe that this is actually true versus false. That appears to be the holding trend.

[00:23:26 Split screen: Erica Vezeau, Rishi Behari, and slide, as described.]

Rishi Behari: As you continue to answer, I'm going to tell you the answer to this. It might surprise many of you to know that this is actually true, that currently the demand for energy and the carbon footprint of AI

[00:23:54 Rishi Behari appears full screen.]

Rishi Behari: does exceed what we believe to be the entire global emissions of carbon from the entire airline industry.

[00:24:03 Split screen: Rishi Behari and slide, as described.]

Rishi Behari: And this piece around the environment – I know from some of the conversations I've had with some of your leadership – is something people are aware of. But surprisingly, as I do this work, it's one of the things that is least talked about, and it's certainly something that, as leaders, we need to address in our positions on the use of this technology.

Part of the environmental cost of AI can be measured in how much water it takes to cool the data centres, and the exponential increase in demand for energy to power our AI models is not currently sustainable. There's talk of building data centres underwater, in the ocean, which opens a whole other can of worms in terms of the impact on the environment. So, it's very interesting to see your answers, and I hope that sheds some light on a part of the conversation that is often not highlighted.

[00:25:01 Split screen: Rishi Behari and slide, as described.]

Rishi Behari: Okay, we'll move on to the next slide. So, this question is: does AGI exist, and is it being worked on by well-known organizations around the world? AGI – artificial general intelligence – would be human-level intelligence, or something that surpasses it. This is often what we see in the movies, in television, in popular fiction. Does this exist, and is it being worked on by well-known organizations?

This one, so far, is quite close. Very interesting split. And again, we'll give you some time to get your answers in. It looks like this is the closest one, so far, between true and false.

Okay, I'm going to share with you the answer to this one: it is false that we are anywhere close to human-level intelligence. In some narrow aspects, current AI does surpass human performance. But overall, we don't even have a really good understanding of how human intelligence works. Global experts cannot agree on a definition of intelligence, and we continue to learn more about intelligence in animals and nature. I think this is very important, because we see so much of this human-level, emotional intelligence in pop culture, and that is not what currently exists.

However, well-known organizations are working on it. And so, as for the second part of the question: organizations that we all know have said, "We're trying to create this. We're working on this actively." But the truth is, we're nowhere near the technology to produce humanlike awareness and cognition.

And what's really important to note here is that when you hear people in the news talking about this, or about a singularity, they are talking about an eventuality that's based on a speed-of-progress argument. When I was a kid, we had 8-bit video games, your Nintendo games, and now, in my lifetime, we've moved to photorealistic virtual reality. The argument is that this speed of progress over time means we will get there eventually. But that's more of a theoretical argument than a practical one. I think that's very important, because when we talk about leadership and the mix between human and machine intelligence, it's an important distinction to know that we are nowhere near the AGI you most often see in television and movies.

[00:28:20 Split screen: Rishi Behari and slide, as described.]

Rishi Behari: Okay, let's go to the next question. All right, this is a big one, and I'm sure that this has been part of some of your conversations already. Is it true or false that AI will replace more jobs than it will create? This, of course, is part of the public discourse around AI.

Okay, this is also quite close. As the answers are coming in, I can tell you that from the leading research around the world, experts do not believe that AI will take more jobs than it will create. And the reason for that is human oversight.

Because we're not dealing with AGI, and because of all the things that humans can do that AI cannot, AI is going to be something that you – especially the people in this meeting – will need to learn to use to your advantage, not something that will replace you. One of the takeaways from today is that AI will not take your job, but someone who knows how to use AI eventually will. That's a very important takeaway. We still need the human-centred skills to complement the technical skills. And as we revisit the Harvard example, you will see more of that in practice.

And it is important to note that a lot of the jobs that will be displaced, however, will affect the least educated portion of our society. And obviously, there are important conversations to have around societal impact and the financial impact and the ethical impact of that.

[00:30:29 Split screen: Rishi Behari and slide, as described.]

Rishi Behari: Okay, we'll go to the next question. Is it possible for AI to produce unbiased results? True or false?

Once again, it's interesting to see it quite close. It looks like a lot of people are split, but false is slightly in the lead. Again, we'll give you a little bit of time to answer. Okay, it looks like a slight majority believe that this is false.

If you said false, you are correct. It is impossible for us to create AI without bias. We can use AI to fight bias, but the key point here is that human unconscious bias will always make it into the artificial intelligence that we create.

A simple example of this could be the personal assistant on your phone. If you think about the default setting on your phone, that voice is likely female. And why is the voice of a personal assistant set to female by default? Well, it's because we have human biases that we bake into all the things that we create. The key, especially for the federal public service, is to prevent bias – which we know will always be there – from becoming discrimination, which has real impacts on the lived experiences of people.

[00:32:18 Split screen: Rishi Behari and slide, as described.]

Rishi Behari: Okay, let's go to the next slide. We're almost done with the quiz here. This calls for an honest answer about where you stand today. Do you believe that AI is integral to the future of organizations? Is that true or false, or is it just hype? We're here to have an honest conversation about the answer to this question today.

Okay, it looks like the majority do believe that this is true, that AI is integral to the future of organizations. I would tend to agree with you: the cat is out of the bag, so to speak, or we've opened Pandora's box, and ignoring AI will not help us understand its impacts on the world around us, the way people consume media and use technology. So, I would say that this is true, but that's part of what we're here to explore today.

[00:33:36 Split screen: Rishi Behari and slide, as described.]

Rishi Behari: And we have one more question for you, which is helpful for myself and your leadership. Take a moment, now that you've taken this quiz: what are some of the areas that you are most interested in learning more about moving forward?

This is a word cloud as well, and it will allow future sessions from your leadership to focus on the areas that you said are most important to you. I hope that this quiz has helped to illuminate just how many myths there are, and how divided we are in a basic understanding of artificial intelligence.

Okay: leadership, which is what we're here to discuss. Ethics and governance. As you continue to answer, I'm going to switch back to the slides from my presentation. Thank you so much; I hope that was a beneficial way to examine some of your own knowledge around AI. I'm going to resume sharing my slides.

[00:35:03 Split screen: Rishi Behari and slide, as described.]

Rishi Behari: Okay, I want to make sure we have time for questions, so I'm going to move through some of this quickly, but it's important. What is machine intelligence? We have biological intelligence, but what does it mean when we say machine intelligence?

Because when we are born – think of a baby or a child in your life – how do we learn? We have senses. We have sight, we have smell, touch, taste, et cetera. And through those senses, we interpret the world. A machine doesn't have any senses. I mentioned that experts don't even agree on what intelligence is. And so, what does it mean to say machine or artificial intelligence?

Well, I think what's most useful to understand from today's session is that when machines take in data, they are basically making predictions. And even when you use a large language model, what is it doing when you ask ChatGPT, or the platform of your choice, a question? It's predicting words in order, without any lived experience or context. And it's quite good at doing that, but that's a little bit scary when we start to realize that we're asking some big questions of a word prediction engine.

[00:36:14 Split screen: Rishi Behari and slide, as described.]

Rishi Behari: And in terms of large language models, here's some ideas around how you can think of them. One thing we know is that experts use language models more efficiently than non-experts. That's because experts know when an answer is incorrect or lacking information. Or perhaps, if you're an expert, you know that when someone asks you a simple question, there's usually a lot more nuance to the answer.

And so, this idea – you can see I have an image of a robotic parrot – there's a concept called "Stochastic Parrots". A parrot is able to imitate human language, but that doesn't necessarily mean that the parrot understands what it is saying. And so, you can think of a language model like a parrot: although a parrot is arguably more intelligent in the way that we commonly think of intelligence, you wouldn't trust a parrot just because it can repeat words.

And we need to use language models with caution because there is bias built in. Token prediction refers to those words: in technical language, those words are just tokens appearing in order. And machines and language models lack common sense and lived experience, two of the most essential things for our understanding of the world, which is why AI needs our oversight.

And small language models are a new type of model where you could, for example, take all of the things that I've written and spoken about in the media and train a model to answer as me. And so, you can think about how that might affect leadership. There are some famous people in the world now who have small language model versions of themselves that you can speak to. So, if you don't have access to the real person, you could have 24/7 access to a small language model trained on a niche. And these small language models help to get around some of the lack of context that we would otherwise address by engaging a subject matter expert.

[00:38:13 Split screen: Rishi Behari and slide, as described.]

Rishi Behari: And this is a powerful framework for your use of AI, which is really split into three stages. It's a health care example: when you go to a doctor, you tell them what your symptoms are. That is descriptive or diagnostic information.

So, the first thing we want to think about, whether we're thinking about broader AI beyond language models or simply how to use language models, is: what information or data do I need to provide? The more context we build into our prompts, through prompt engineering, the better the predictions we're going to get: based on past data, this is what we think will happen.

Take the weather as an example. Weather in February over the past recorded years has looked like this, so we can predict something similar. You want to start to think about how this predictive technology could help in your work. And then we prescribe, and we become strategic. So it's really descriptive or diagnostic information, which allows us to predict and make decisions with information, which allows us to then set strategy. And this simple framework can really help you leverage AI.

But what it doesn't allow you to do is it doesn't account for new things. Why did we not expect a pandemic to show up? It wasn't in the data set that we were looking at. If we looked far back enough in human history, we would have seen that, yes, absolutely, we're due. There's one coming. And so, human creativity and novel ideas still need to come from people. But you can see that when we make decisions with information, we are in a much stronger spot.

[00:39:52 Split screen: Rishi Behari and slide. Text on slide:

Digital adaptability;
Empathetic communication;
Emotional and social intelligence;
Conflict management;
Persuasion and influence;
Inclusive leadership;
Calculated risk-taking;
Strategic agility;
Engaging and inspiring leadership;
Leadership without formal authority.]

Rishi Behari: And I just want to return quickly to the modern leader and the skills. We talked about digital adaptability, but as you take a look at numbers 2 through 10, you'll notice number seven, calculated risk-taking, and number eight, strategic agility. AI gives us an advantage in these areas, but the others AI is not very good at: empathetic communication; emotional and social intelligence; conflict management; persuasion and influence; inclusive leadership; engaging and inspiring leadership; and leadership without formal authority. So, you can see how we're building this hybrid model of data-informed decision-making.

[00:40:30 Split screen: Rishi Behari and slide, as described.]

Rishi Behari: And a quick example from the world of business, if you're the same age as me, you've probably been in a Blockbuster Video at some point in your life, and they went from 9,000 locations down to just one. And how did that happen?

Well, that happened because they did not use information to make decisions. Netflix started as a mail service where you could order movies. They got rid of late fees, but they also collected data. And now when you turn on Netflix or another streaming platform, it says, Hi, Rishi, this is what I think you want to watch. It's making predictions. So, this is an example of how you can use data to make predictions and then set strategy.

And we want to make sure, as organizations and businesses, that we are making decisions with information; we don't want to be like the Blockbusters of the world that relied on experience and intuition alone to make major decisions. At one point, Netflix offered to sell itself to Blockbuster for $50 million, and Blockbuster laughed Netflix out of the room; now the market cap of Netflix is over $300 billion, I believe. So, you can see how these decisions and this leadership can really impact the way that we interact with each other and with technology, and they can really impact strategy.

[00:41:53 Split screen: Rishi Behari and slide, as described.]

Rishi Behari: This is an example of a job posting that I came across just the other week, as I was preparing for this session, from JD Power in Canada, a large consultancy. And you can see that they are actually hiring for a head of AI enablement. We're seeing new jobs like this pop up, and you can see in the highlighted text that they're looking for someone who has the technical skills to understand AI and its responsible use, but who is equally strong at designing programs that work for teams, bringing structure, and enabling teamwork. So, a mix of those technical and human-centred skills.

[00:42:30 Split screen: Rishi Behari and slide with an image of Peter Drucker, as described.]

Rishi Behari: And finally, in locating us within the human aspect of this as we finish the presentation, this is one of my favourite quotes by Peter Drucker. He said, "Culture eats strategy for breakfast". Again, taking this hybrid approach to AI, we can have the best strategy in the world, and it comes down to the people on whether the execution of it will work or not. And it's often the culture that we co-create with our peers that impacts whether a sound strategy will actually be effective or not.

[00:43:02 Split screen: Rishi Behari and slide, as described.]

Rishi Behari: And here are some questions that I want you to reflect on from today onwards. Why is predicting so important? What could you predict in your line of work? When I work with organizations, this is often one of the first questions: what could you predict that would help you in your work? How does access to data change leadership? I can tell you from my time at Queen's that a lot of the business analysts are not at the C-suite level, so a lot of executives were not sure how their people were making their recommendations, and they started coming back to school to understand how these decisions were being made.

So, how does AI change the role of the leader? Well, you can see that the expectations of a leader now are to be both human and strategic. AI is an extension of data analytics, and we also need to be experts in how we manage and work with language models. And two core questions at the heart of all of this are: what are the things that people can do that AI cannot, and what are the things that AI can do that people cannot? You want to think about these. You can think of AI as a teammate that is there and available; it has certain strengths, and it has weaknesses. And when we partner together, we're more effective.

[00:44:14 Split screen: Rishi Behari and slide, as described.]

Rishi Behari: So, here are some takeaways. Technology is transforming leadership in society more quickly than ever before. And when it comes to AI adoption, it's happening so quickly that nobody's coming to save us or tell us what to do. It is conversations like these that we need to have.

Remember, for the people on this call, that AI is not coming for your job, but someone who knows how to use it eventually may, and hopefully it's you who is learning how to use it through these sessions. Experts use language models most effectively, and as leaders, you are all experts in your field of work; we need to bring that expertise to the technology and build this hybrid skillset, which needs to be both technical and human.

AI really is an extension of data analytics and should be integrated into an informed strategy. It's not a separate piece on the side that needs its own AI strategy; data-informed decision-making is the strategy. And evolving technology, ironically, as we saw in that evolution-of-humanity graph, is actually making us worse at the human-centred skills that Harvard said they're looking for, because we have more superficial points of connection.

And so, more training, more often than ever, like today, is needed. And remember that the ethical and environmental risks are real, as we're debunking some of the myths, and leaders need to provide guidance.

And ultimately, you are the leader that we are waiting for. Each of you needs to think about these things carefully. And I wanted to bring you back to where we started, to thinking about what your relationship with technology will be like five years from now. How are you adapting? How are you taking responsibility to lead in a way that benefits yourself, your people, and our country?

And I hope that just this brief discussion has helped you to feel better equipped to continue to answer these questions for yourselves and for the federal public service. And I'm happy to shift over to questions.

[00:46:28 Split screen: Erica Vezeau, and Rishi Behari.]

Erica Vezeau: Great. Well, thank you, Rishi. Really a different way of thinking. I appreciate that you presented to us today a way of thinking about AI through a more human lens, along with some of the questions that we need to be asking ourselves as leaders as we plot out our leadership journeys and the influence that we want to have here in the GC.

I just love that line that you ended on: you are the leader that we are waiting for. That really resonates. We can't wait for someone else to tell us what to do or to show us the way. We are in a period of change, all of us together. It's really incumbent upon all of us, at every level, to do the learning ourselves, but also to teach others and bring those around us along. I really appreciate those messages, and I think they came through very loud and clear. I really appreciate that, Rishi.

I'm going to jump into a question. I know we've only got about 10 minutes left.

[00:47:25 Erica Vezeau appears full screen.]

Erica Vezeau: I'll start with a question in French. It's a question about human skills, such as empathy, resilience, or critical thinking. Which of these skills are becoming more important and not less important as AI tools are adapted in government workplaces?

[00:47:49 Rishi Behari appears full screen. Text on screen: Rishi Behari, Founder and CEO, Flowstate Coaching & Consulting.]

Rishi Behari: Thank you, Erica, that is a good question. As AI becomes more widespread in the federal public service, the most important human skills are creativity and empathy. These are abilities derived from lived experience, judgment, and human intelligence, not machine intelligence. AI is very good at pattern recognition, memory, prediction, and strategy support, as I said, but it does not, by itself, understand context, values, or real human impacts. That is why human judgment, supervision, and responsibility become even more essential. Research on skills for future leaders, particularly at Harvard, shows that creativity, empathy, critical thinking, and ethical judgment are becoming increasingly important. Lastly, AI can inform decisions, but humans remain responsible.

[00:48:54 Split screen: Erica Vezeau, and Rishi Behari.]

Erica Vezeau: That's very clear. Thanks again for placing the person at the centre of the conversation. Just to push you a little further, a second question on this point: What concrete steps can public servants take to maintain people-centred values when AI influences operational decisions?

[00:49:20 Rishi Behari appears full screen. Text on screen: Rishi Behari, Founder and CEO, Flowstate Coaching & Consulting.]

Rishi Behari: Maintaining people-centred values in an AI context requires interpersonal leadership and continuous dialogue. Firstly, discussions about AI, technology, and human values must be continuous, not one-off. Technology evolves rapidly; the risks, the opportunities, and the trade-offs evolve as well. The role of leadership is to create space to question, adjust practices, and learn continuously. Secondly, federal organizations need a common approach while also respecting the human realities specific to each department. A people-centred approach recognizes different mandates, different audiences, and the fact that AI can influence decisions.

This requires managers to clearly assume responsibility for the decisions they make. People-centred AI is not an end state; it is a continuous leadership practice.

[00:50:46 Split screen: Erica Vezeau, and Rishi Behari.]

Erica Vezeau: Really interesting, Rishi. Thank you for sharing. You're making me think about a question that actually I just received recently, and I was struggling a little bit to answer it, so I hope you can help. It is a leadership question, leadership at any level, but particularly those who lead teams and who are trying to set the example for their teams and using that empathetic person-based leadership style that you just explained.

How can we, as leaders, encourage responsible experimentation with AI within our teams, and within a government context, while still respecting risk management and ethics and public accountability? I feel like this is a push and pull that many of us are feeling not really sure, not really having enough clear guidance to let us know how far to go. So, how do we help respect boundaries and ethics in driving experimentation in our teams?

Rishi Behari: That's a very practical question, Erica. I think when you look back to the quiz that we just took, we need to establish a basis. There's so much misinformation out there, and I hope that today's session is just the tip of the iceberg in having conversations with your teams and with your people about what AI is, what it isn't, and to create dialogue,

[00:51:58 Rishi Behari appears full screen.]

Rishi Behari: as I mentioned previously, around these ideas, because we're not all going to agree, and we didn't all agree today on some of the big questions around AI.

I think it's important in understanding human nature to know that we find change scary, especially when it's happening quickly. And to slow down and bring people to a common ground, I think is what's important to say, Okay, when we're talking about AI, what are we talking about? Are people worried about AGI? Are they worried because they think it's going to take their job?

And so, I think we do need to slow down, as leaders, to create space and understand that this is scary for people. And I hope sessions like today help to remove some of the fear, to build a better understanding of what we're actually talking about, so that we can then get into the more complex discussions of, Okay, as a team, how are we going to use AI? Do we want to be writing letters and replacing the human side? Or do we want to keep that? What do we think about the environment? What is our approach?

And so, I think that we do need to slow down with our teams in order to speed up and to bring people along on the journey with us. But what's happening right now in a lot of organizations is that people are using AI at various comfort levels, or not at all, in their work. And we need to think about things like data privacy as well.

And so, people might be using AI in ways that they don't realize are potentially dangerous. They might be using it in ways that are really helping their work. But we're using it in different ways, and we don't tend to have a unified approach until we have moments like this to try to get on the same page.

[00:53:45 Split screen: Erica Vezeau, and Rishi Behari.]

Erica Vezeau: Yes, that's really interesting... I've heard you say it now several times: slow down to speed up. And it's an interesting time to be sharing that kind of advice, because we are essentially being encouraged simply to speed up. But I think that you're raising a really valid point: if we're not taking the time to consider what we're doing before we just start blindly doing, then we're going to hit problems later. I appreciate that advice.

As I think about how to practically bring that into the workplace, I think that the trying and the doing has to stay at speed. But maybe where we can reflect that slow down piece is having the open conversations, as you've just suggested that we do, where we're sitting, whether it's with our teams or our peers, to actually have discussions around our collective boundaries and our collective use cases where we think the appropriate lines are and aren't. And that's the way that we can set our own guidelines as we're trying to accelerate in the absence of more official direction coming from anyone above us. I hope I'm not putting too many words in your mouth, Rishi, but I'm trying to pull a practical takeaway from that.

Rishi Behari: Absolutely. I think increasing our digital literacy and adaptability as leaders is that first step.

[00:55:09 Rishi Behari appears full screen.]

Rishi Behari: The tone does need to be set by leadership. And because AI is changing so rapidly, if we are not engaging critically ourselves and we're waiting for someone to come tell us, it's going to be too late as the technology continues to change.

And so, I think a very practical first step, even ahead of creating dialogue with your team, would be for leaders to take on the responsibility themselves: I need to get up to speed and learn more about this. I need to be more curious, potentially, about the basics of understanding AI. There's quite a bit of technical, applied knowledge around how it is that machines learn, and there are various ways that machines learn. Just becoming more familiar with some of this, I feel, is the first step for leaders, because we need to be examples for the teams that we lead. We can't ask people to engage, or ask how they're using it, if we haven't gone on that journey with them. And that's where that piece around humility and vulnerability comes in as well.

And I know from my experience that people often do respect it when you say, as a leader, I don't have all the answers, but this is something that we need to tackle together, and I'm on this journey with you. And I think one of the first steps that we can take as leaders is to take responsibility for our own level of knowledge on the subject.

[00:56:28 Split screen: Erica Vezeau, and Rishi Behari.]

Erica Vezeau: Thanks for that. We're almost at time here, so I don't even have time for another full question, but I will just share with you that a lot of the questions coming in are around our current context in the federal government.

[00:56:39 Erica Vezeau appears full screen.]

Erica Vezeau: There's a lot of change afoot. Let me just put it that way to be light.

[00:56:47 Split screen: Erica Vezeau, and Rishi Behari.]

Erica Vezeau: There's a lot of questions coming in around fear of job loss and fear of the impacts of AI if you as a driver of efficiency that will create the savings that can lead to displacing humans. We only have 60 seconds left, Rishi. I wonder if there's anything you can leave us with that is giving us a positive hopeful note on the change that we're navigating right now and how to think about this technology going forward?

[00:57:14 Rishi Behari appears full screen.]

Rishi Behari: As technology gets better, the human side becomes more, not less, important. So if we return, at a very minimum, to emphasizing empathy, communication, and the human-centred skills that complement technical skills, I think one of the takeaways is that AI is not able to do a good enough job to replace people, including the people on this call. But with it, we can go further, faster, with human oversight, human judgment, and an emphasis on the interpersonal and human skills that really matter for navigating these uncertain times.

[00:57:51 Split screen: Erica Vezeau, and Rishi Behari.]

Erica Vezeau: Thank you so much and thank you for respecting my 60-second call on that question and the note of optimism. I really appreciate it.

Listen, Rishi, I really enjoyed your presentation today. Being in the digital training space, I tend to think a lot about the technology. This was a really welcome reminder about the people: our staff, ourselves, the leaders we want to be. It is on all of us to ensure that our workforce, and the way that we use these tools, is in line with our values and ethics, and with the way that we want to live and lead in Canada. I really appreciate you bringing that voice to us today.

[00:58:26 Erica Vezeau appears full screen.]

Erica Vezeau: To our audience, thank you for joining us. Always appreciate your questions and the time that you take to be with us.

A reminder, the School is here for you. If you are looking for other learning on this topic or adjacent topics, we have so much content on leadership, on change management, on AI, and on technological adaptation. So, I encourage you to check out the School's catalogue.

So, have a good afternoon everyone. Thank you for participating with us. Thank you.

Rishi Behari: Thank you.

[00:58:54 CSPS animated logo. Text on screen: canada.ca/school-ecole.]

[00:58:59 The Government of Canada wordmark appears.]
