Transcript: Trends in Technology-Driven Change: Ask Me Anything, Part 1
[00:00:00 Text appears onscreen that reads "Trends In Technology-Driven Change".]
[00:00:06 The screen fades to Chris Howard.]
Chris Howard: Hi, I'm Chris Howard. I'm the Global Chief of Research at Gartner. Thanks for taking some time to listen to the advice that we have around A.I. and related subjects. I hope you find it interesting.
[00:00:17 Text appears onscreen that reads "Ask Me Anything with Chris Howard, part 1".]
[00:00:22 Text appears onscreen that reads "How do we get beyond the hype around A.I.?".]
Hype is sometimes useful because it makes people pay attention. What the attention should bring us, though, is a deeper conversation about what A.I. really is, what it can do, what it can't do, a full-on conversation about its capabilities and limitations. Because it is so uncannily human-like, we pay a different kind of attention to it than we do to other technologies. And so, that's the thing that's different, but you started in the right place. It's actually going to take A.I. to solve climate issues, power issues, the power grid stuff, like the dynamic grid. That's all based on A.I. understanding load balancing across wires of different ratings. And so, ultimately, if we communicate clearly about the uses of the technology, people have a clear understanding of the problems it can be applied to. So, it's that.
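He doesn't elaborate on the grid example, but as a toy illustration of load balancing across wires of different ratings, here is a minimal Python sketch; the line names, ratings, and the simple proportional strategy are all invented for this example, not anything described in the talk.

```python
# Toy sketch: split a demand across transmission lines in proportion to
# their thermal ratings. A real dynamic grid would instead learn these
# allocations from live sensor data; everything below is illustrative.

def balance_load(demand_mw: float, ratings_mw: dict[str, float]) -> dict[str, float]:
    """Allocate demand to each line proportionally to its rating."""
    total = sum(ratings_mw.values())
    if demand_mw > total:
        raise ValueError("demand exceeds total line capacity")
    return {line: demand_mw * rating / total for line, rating in ratings_mw.items()}

if __name__ == "__main__":
    allocation = balance_load(120.0, {"line_a": 100.0, "line_b": 60.0, "line_c": 40.0})
    for line, mw in allocation.items():
        print(f"{line}: {mw:.1f} MW")
```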
And so, asking questions is part of that: what questions have we never been able to answer before because the problem was just too hard? In the case of A.I., the problem class is tons of data where patterns could tell us something, period. That could be differential diagnosis, could be weather patterns, could be anything where there are a lot of variables interacting. And then it's converting from the hype around A.I. to saying, I want to implement it to solve that problem. So, that's where this goes. But also, as I said, it's actually caused more attention to be paid to the climate issue because of the fact that it's damaging to it. And that's caused a whole lot of really interesting innovation, not just in the chipsets but in the math; the theoreticals of this are changing faster because of the desire to mitigate the climate impact.
[00:02:04 Text appears onscreen that reads "How do we justify investment around A.I. tools for employees?".]
Well, the first thing is you have to create a policy for use: what is acceptable use and what's not. It doesn't need to be super complicated, but it is, here are the guardrails around this, here's where you can go wrong, but also here's a sandbox for you to go play in that is a totally safe environment. And so, what a lot of companies have done is spin up a private instance of Microsoft Azure with the OpenAI capabilities in it. There's no way the data can get out, there's nothing they can do wrong with it, and they give access to everybody and say, go play with this and bring back options for how you think we could implement it.
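As a concrete sketch of that pattern, calling a company-private Azure OpenAI deployment with the openai Python SDK might look like the following; the endpoint, deployment name, and API version are placeholders, and the data-containment guarantee comes from the tenant configuration, not from this code.

```python
# Hypothetical sketch: an employee querying a private Azure OpenAI
# deployment. Endpoint, deployment name, and API version are placeholders.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://your-company.openai.azure.com",  # private tenant endpoint
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="internal-gpt-4o",  # name of your private deployment, not a public model
    messages=[{"role": "user", "content": "Suggest three safe ways our team could use this."}],
)
print(response.choices[0].message.content)
```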
So, the benefit of giving more exposure in that safe environment is that you will get better ideas faster, but that assumes you have a governance process in place to absorb those ideas, prioritize them, and then figure out which ones to follow, right? And so, what I'm seeing is the growth of centres of excellence, small core fusion teams drawn from across the organization, that are helping prioritize and think about value and so on, but also looking for duplication. So, it's really: set it up, put the policy around it, make it safe for people to use, and then invite them into it. That's what I see with most organizations that do it effectively.
[00:03:14 Text appears onscreen that reads "How deeply should organizations invest in data architecture to support A.I.?".]
The truth is that investment in data is good regardless of whether you're going to do A.I. Even the A.I.-adjacent techniques like knowledge graphs and vector databases are useful anyway, even if you never put A.I. on top of them, because rather than leaving things in a giant flat database, they capture the relationships amongst entities in that data. That's what knowledge graphs do, and that then becomes useful for search, for knowledge discovery and management, and so on, even if you never use A.I. So, there's definitely investment needed, but it's not needed for all of your data. I think that's the part people get hung up on: gosh, I have to do this with all the data. No, you actually need to classify where it's really important to invest versus where the data can maybe just sit on paper, because some of it is still there, right? So, don't ignore the data. That's the bottom line.
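As a small illustration of the knowledge-graph point, here is a sketch using the networkx Python library; the entities and relations are invented, but it shows how the relationships themselves, not just flat rows, become queryable.

```python
# Minimal knowledge-graph sketch: entities become nodes, relationships
# become labelled edges. Entities and relations are invented examples.
import networkx as nx

graph = nx.DiGraph()
graph.add_edge("Patient 123", "Hypertension", relation="diagnosed_with")
graph.add_edge("Hypertension", "Lisinopril", relation="treated_by")
graph.add_edge("Patient 123", "Clinic A", relation="seen_at")

# Relationship questions that a flat table would need joins to answer:
for _, entity, data in graph.edges("Patient 123", data=True):
    print(f"Patient 123 --{data['relation']}--> {entity}")
```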
[00:04:15 Text appears onscreen that reads "What governance must governments prioritize to support A.I. usage?".]
A way to think about this is that not everything here is new. There are some aspects that are new, but there's a lot where you've already started to develop on-ramps, like algorithmic decision-making. The Treasury Board had the whole AIA, the Algorithmic Impact Assessment. That stuff is still valid; it just needs to be extended to the things that are slightly different about the rise of generative A.I., which is around the creation of IP, the use of IP, and so on. And so, what I would do is ask, what policies do we have that can be extended to cover the delta that we experience right now? Which means having people responsible for attenuating the noise that's part of this conversation and focusing on what is already pretty stable and on what needs to be done that's new.
But it does have to do with privacy, because information is easier to share, especially in the data environment, right? The conversation I have with multiple agencies all the time, and you have it too, is: data is really siloed, even within individual agencies, and to accomplish these goals, we need to flatten that out and have access to everything. Well, that is going to require a reformulation of how that information is handled, because part of the reason it is siloed is that there are policies around how it's used. Those things can't necessarily go away. They just need to be mechanized in a different way, right?
Interviewer: Is there a way to maybe query data sets instead of sharing data that might…
Chris Howard: Well, see, one of the things I'm thinking about here is: is it data that we want to share or insights that we want to share? Do we integrate at the insight layer versus the data layer? There's a whole set of thinking going on right now called agentic workflows. Amazon Alexa is an agent, and Alexa will broker other conversations for you in the background to bring you the results that you want. The same thing can happen in these systems, right? So, the Microsoft agent could talk to an SAP agent to create the complex result and bring it back to you. You're not integrating at the individual data level; at that point, you're integrating at the generated insight layer and then finding a meta-insight over top of that. That gives you a different view into what data actually needs to be integrated or not, and where you can actually learn what you need from the insight layer.
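A rough sketch of that insight-layer idea, with everything invented for illustration: each agent below answers from its own silo, and only the derived insights, never the underlying records, cross the boundary to the broker.

```python
# Rough sketch of insight-layer integration: each agent owns a data silo
# and exposes only derived insights; the broker combines insights, never
# raw records. Agents, data, and insights are invented for illustration.
from dataclasses import dataclass

@dataclass
class Insight:
    source: str
    summary: str

class HealthAgent:
    _records = ["...siloed records that never leave this agent..."]

    def report(self) -> Insight:
        return Insight("health", "ER visits rising in region X")

class TransportAgent:
    _records = ["...siloed records that never leave this agent..."]

    def report(self) -> Insight:
        return Insight("transport", "transit delays growing in region X")

def broker(agents) -> str:
    """Integrate at the insight layer: combine summaries, not raw data."""
    insights = [agent.report() for agent in agents]
    return " | ".join(f"{i.source}: {i.summary}" for i in insights)

print(broker([HealthAgent(), TransportAgent()]))
```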
[00:06:35 Text appears onscreen that reads "Where does Canada have a strategic advantage on A.I.?".]
Well, I mean, I look at AIDA and is it Bill C-26? I think it's 26 or 27.
Interviewer: 27.
Chris Howard: It's 27, yeah. Those are further ahead than the U.S. is in terms of really being clear about how things are used. You're probably behind where Europe is and similar to where Australia is. So, you're probably more progressive here, and I trace some of that back to Navdeep and the work that he was doing in the Innovation Council, and others. They were thinking about these things ahead of time. Somebody also said to me in an elevator today, in one of the buildings across the way, that Canada's trying to mechanize a huge social experiment, right? And the fact is the openness of your society means that you think about data in a different way than the U.S. does, for example; maybe more like Norway, where I was recently. I think that gives you a sensitivity to the use of data that gives you a great advantage. Not that it's easy, right? Because as you go from province to province, there are data regulations that make data hard to (inaudible).
Interviewer: I was just about to say, well, what about the federations?
Chris Howard: I know. So, again, it's: how do you maintain that level of autonomy with an integrated insight over top of it? Norway was a funny example. Their health systems cover three different regions, and people who own a place in one region and a place in another have to deal with two completely separate systems depending on where they are, even though it's the same country. Similar kind of thing. But I think you are used to solving society-wide problems in a way that matches your ethos as a country, and that will make its way into the A.I. policy and will be a model for other countries to follow.
Thanks for watching. And again, I hope you found this useful and interesting for the work that you're doing in Canada.
[00:08:28 The CSPS logo appears onscreen.]
[00:08:35 The Government of Canada logo appears onscreen.]