Curiosity on Stage: Artificial Intelligence and Your Finances
This presentation is part of a series where we discuss new and emerging technologies and how they are affecting us as a people in Canada and worldwide.
My name is Michelle Mekarski and I am the science advisor at the Canada Science and Technology Museum. For those of you attending with visual impairments, I’m a woman with shoulder-length brown hair and brown eyes.
And I am joining you this evening from my home office in the city of Ottawa, which is built on Unceded Algonquin Anishinaabe territory. Before we begin, I would like to take a quick moment to thank the National Research Council of Canada for their support in making this series more accessible through translations, captioning, and transcriptions.
Curiosity on Stage
So, Curiosity on Stage. Our goal here is to inspire thought.
We rely on the insights of experts to get us thinking about topics in science and technology that have the potential to really shake things up and really fundamentally change our experience as humans. Certain technologies actually have the potential to revolutionize the very structure and nature of our society, to transform our industry, our culture, the economy, and even our philosophies.
If we take, for example, the agricultural revolution, it was driven by sciences and technologies like animal husbandry, irrigation, and the plow. The resulting food surpluses allowed our populations to grow into cities and then into states.
And the fact that not everybody needed to worry 100% of the time about what food they were going to eat allowed certain individuals to specialize in things like politics, handicrafts, or art, which created the basis for our modern economy.
If we fast forward to the Industrial Revolution, it was driven by machines like steam engines, which provided sources of power other than humans or animals. These new sources of power made industries more efficient and therefore their goods cheaper. Populations rose dramatically again, and those populations moved to cities, which urbanized our society.
Now, the information revolution is also known as the age of the Internet.
Here we have things like computers and TVs and mobile phones which demonstrate the advances in electronics, computing, and communications technology that define this revolution.
As these integrated systems of technology spread through society and take root, information, innovations, and ideas diffuse far and wide, fundamentally changing once again our culture, economy, politics, and our personal philosophies on life. Today, it seems like we’re in another technological revolution, an artificial intelligence revolution. During the industrial revolution, machines were able to replace much of the physical work being done by humans. Now we’re seeing, with A.I., this ability of computers to take on the cognitive work of humans, things that at least historically required human intelligence to do. So, as you’ll see later in this presentation, AI is an extremely powerful tool. And as a result, it’s spreading into every corner of industry, economy, and society.
The World’s Financial Records
Now, what makes A.I. so useful is it’s very good at finding patterns in very large sets of data. Think satellite images of the entire planet, your DNA, or the world’s financial records.
Now, financial professionals spend a lot of their valuable time on low-cognitive tasks, like sifting through a whole bunch of financial transactions. Wouldn’t it be great if there was an AI system able to rigorously audit financial data and pick out the key areas that human professionals should investigate further? Well, today I am delighted to welcome John Coltheart of MindBridge AI, a company developed to do just that. John has held a series of roles with increasing responsibility at MindBridge and currently serves as their senior vice president of Strategic Insights and Marketing. Before joining MindBridge, John held leadership positions at IBM in brand management, product experience, and design, and he was a member of the team that launched IBM Watson Analytics. Before IBM, John was VP of Sales Operations for Clarity Systems, which was later acquired by IBM. So, I know I can’t hear you, but I hope you’re all clapping with me as we welcome John today to Curiosity on Stage.

I really do think that it’s an interesting time to be in the world when we start thinking of where and how our finances and our whole ecosystem go as it relates to artificial intelligence. In fact, most of you probably already use artificial intelligence every single day. The idea of picking up a smartphone and asking where you want to go, where you want to eat, or how to get something: that’s all based on the same logic on which artificial intelligence was created. I’m really excited as well because I’m just down the road from the Science and Tech Museum. I get to go there with my kids fairly often here in Ottawa, or as often as we were able to before.
Now, obviously, with our annual membership, hopefully, we’ll be able to go again. And we live fairly close to the aviation museum, which is really exciting for them.
So, thank you very much to the team over at Ingenium for all their dedication to learning for kids of all ages. I still consider myself a kid. I really do want to get to a point where you can ask me lots of questions, but there are some things that we need to do to get there.
First, we need to talk about what artificial intelligence is, how it’s changing everything, and where we go from here. The world is very, very different today than when I first started in the industry back in the late nineties and early 2000s.
And it’s really accelerated at a clip that I don’t think anyone could possibly have seen or experienced. Right? I don’t think we had a full understanding of where things would go. And when you look, there are laws like Moore’s Law, which is all about the ability of the CPU, or central processing unit, to shrink in size but double in power every 18 months.
Moore’s Law
It was the leading indicator for computer scientists and geeks like myself in the late nineties trying to figure out how small these things would go.
Well, it’s gone so small, and I’ll show you a representation in a minute. It’s gone so small that we can now process more information than we ever thought humanly possible with computing.
I think we all may have assumed we’d get there, but we’re doing it now, and we’re doing it at speed and scale.
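To make that "doubling every 18 months" concrete, here is a minimal sketch in Python. The starting transistor count is an illustrative placeholder, not a real historical figure, and the 18-month period is simply the figure John quotes.

```python
# Minimal sketch of Moore's Law as described above: capacity doubling roughly
# every 18 months. The starting count is illustrative only, not historical data.

def doublings(years: float, period_months: float = 18.0) -> float:
    """Number of doubling periods that fit into the given span of years."""
    return years * 12.0 / period_months

def projected_capacity(start: float, years: float) -> float:
    """Capacity after `years`, assuming it doubles every `period_months`."""
    return start * 2 ** doublings(years)

if __name__ == "__main__":
    start_transistors = 1_000_000  # hypothetical chip in the late nineties
    for years in (3, 9, 18, 24):
        print(f"after {years:>2} years: "
              f"{projected_capacity(start_transistors, years):,.0f} transistors "
              f"({2 ** doublings(years):,.0f}x)")
```

Run as-is, the sketch shows the compounding John is pointing at: 24 years of 18-month doublings is roughly a 65,000-fold increase.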
When I start thinking about artificial intelligence, having been in the space almost a decade now, I really do liken the transformation to what the Industrial Revolution probably was. Artificial intelligence is that, but on an explosive scale that’s greater than anything we thought. And there’s Jensen Huang, who’s the co-founder of NVIDIA. I’ll throw in a couple of techie things for those that are really curious: NVIDIA is the company that creates most of the graphical processing units, or GPUs, for the graphics cards used all over the world and, these days, mostly in A.I.
He really agrees with the same statement, that it is going to be a national imperative. Canada is very, very lucky to have had significant involvement from the Canadian government and from the provincial governments. We’ve got three centres of excellence for artificial intelligence across the nation. And we have an ecosystem of startups that has expanded past, I think, 550 different startups across Canada delivering artificial intelligence. So, you may work in it, you may see it, you may have it, you may be part of that.
So, we’re going to do a bit of rapid fire for the next 20 minutes or so, keeping myself on pace to get us into a question-and-answer round about halfway through this webinar. And we’d love to have the dialogue: ask as many questions as you want. Feel free to start putting them in, and we will answer them. For those that may be looking at this as a replay, hopefully you’ll find my contact information, and you’ll have Michelle’s, I’m sure. And you can email us or call us. We’re happy to chat about this. But here’s the basic agenda.
So how can I help you understand further? So, if you think about it, we want to give you some of the basics.
We want to give you some of the things that we see, and that I personally believe are going to be contributing factors to your ability to embrace A.I. And then, how is it actually affecting you? So, the basics. What is artificial intelligence, and why are we talking about it? Well, a lot of people are quite surprised to know that artificial intelligence is actually almost a 70-year-old concept. John McCarthy first coined the term at a Dartmouth College symposium that brought together mathematicians, statisticians, mechanical engineers, and the like. They came together and said, we’ve got to really start thinking about how we can automate things further. I imagine this was on the backs of the Industrial Revolution: a bunch of think-tank members coming together, and they started talking about artificial intelligence. They didn’t really know what it was going to be, but they started to think about how to get there. And the first general-purpose mobile robot, Shakey, was actually developed and deployed. Not sure why it has a male connotation.
The U.S. Department of Defense
But that was 1969, almost 14 years later, and through that same time there was actually a lot going on in artificial intelligence. The U.S. Department of Defense put together a program to translate English to Russian and Russian to English. As you can imagine, this was during the Cold War, and the two superpowers, if you will, were having a challenge. One of the biggest challenges was how they could communicate and converse effectively without having a cast of thousands or hundreds of people being part of the information chain. They wanted to have truly bidirectional conversations with each other. And so, the Department of Defense put together a multimillion-dollar project, and they started translating English to Russian and Russian to English. This was an early concept of artificial intelligence, and it worked fairly well, except for the fact that it didn’t understand colloquialisms or the unique quirks of a given person’s dialect. So, it would take phrases like “out of sight, out of mind”, translate them into Russian, and translate them back as something like “blind idiot”. Concepts like that didn’t quite work as well, and so we had all these stops and starts.

In ’97, those of you who have been around as long as I have might remember this: IBM’s Deep Blue had been playing chess in the years preceding this, and a world chess champion was finally beaten by a computer. In 2002, we got our first robotic vacuum. I think most people who dislike house chores as much as I do will attest that robotic vacuums sound really great. But they were really basic. This is 20 years ago, folks, right? Very basic. Then we went through another couple of fits and starts.
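Going back to that translation example for a moment: the failure John describes comes from handling language word by word instead of idiomatically. The toy sketch below is purely illustrative; the "glossaries" are invented stand-ins, not real dictionaries, and this is not how the 1950s systems or any real machine-translation tool is built.

```python
# Toy illustration of why word-for-word translation mangles idioms.
# The glossaries below are invented stand-ins: each word maps to a
# literal-sense token in a pretend target language, and then back to English.

FORWARD = {
    "out": "",          # function words drop out of the literal gloss
    "of": "",
    "sight": "BLIND",   # "out of sight" read literally as "unable to see"
    "mind": "IDIOT",    # "out of mind" read literally as "having no mind"
}
BACKWARD = {"BLIND": "blind", "IDIOT": "idiot"}

def round_trip(sentence: str) -> str:
    """Translate forward then back, one word at a time, ignoring the idiom."""
    words = sentence.lower().replace(",", "").split()
    glosses = [FORWARD.get(w, w.upper()) for w in words]
    return " ".join(BACKWARD.get(g, g.lower()) for g in glosses if g)

print(round_trip("out of sight, out of mind"))  # -> "blind idiot"
```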
How A.I. Systems Fit Together
But when you really look at the advancements and you think about what happened, it’s really the last almost 15 years that have had the most impact on the world. And part of that is because of this: I mentioned computational power. The amount of data that we’re trying to process is quite significant when it comes to artificial intelligence, as Michelle said in her lead-in. Right? The ability for a financial professional to look at every single trade that’s going on in a business, and every type of transaction that’s going on in a business, and make sure that it’s accurate and correct, and that the right funds are going to the right people, to the right buyers, or coming from the right suppliers or customers, can be a nightmare.

And you can imagine: if you look at the very large machinery of the past, 250 megabits of storage weighed about 550 lbs. So that’s more than double me, okay. And it cost over $10,000 to deploy that much storage. You look at today and you can buy a 256 GB micro-SD card; it’s under two grams in weight and sometimes you get them for less than $30. That’s many orders of magnitude cheaper and lighter per byte. So, you can imagine this computational power is a big piece of how we got here. Right?

Now, on some of the basics, I don’t want to go too technical, because I think that it’s the concepts that will make sense, but there’s actually a microcosm, a set of envelopes, that actual AI systems fit into. An overarching artificial intelligence platform will have everything from natural language processing to statistical modeling, it will likely have machine learning algorithms, and it will likely have some basic rules and very much scripted things going into it. And then you get into this very specialized area.
This darker blue area is deep learning, and deep learning is really only the last ten years or thereabouts. And this is what really drove such a radical expansion of investment in AI, because we were able to do things that would have been unbelievable even ten years prior, based on this confluence of computational power, smarts, and engineering, and the ability for us to develop new languages to do this.
Now, what do I mean by this? And why is this interesting? So, around the time I’m talking about, there’s a gentleman named Demis Hassabis. Demis is a co-founder of DeepMind. DeepMind is an organization that also has some Canadian roots.
Dr. Geoffrey Hinton, who’s down at the University of Toronto, is one of their members, one of the founding members of the team as well. And they did something extraordinary. They took that little niche, the deep learning neural network area, and created something that had never been done before. It was called the Q-learner, what we would now call a deep Q-network. Now, let me explain this.
For those of you who are as aged as I am, and hopefully aging as well as I am, you may have had an Atari 2600 back in the eighties to play video games. They took a video game, they basically put it on an emulator, and they taught a piece of software how to play a variety of games. They then converted that into their own level of coding. Think about, and this is why that picture of the brain with all the synapses is there, think about all the decisions you make when you’re playing a game, whatever that game might be. This is, I believe, Space Invaders up there on the screen.
You know, you used to go across and you’d hit the red button to explode whatever was in front of you. They converted all of those mechanics that you would normally think of doing into computer code.
And they essentially got to the point where the system itself was able to take any game from the Atari 2600 and actually play it better and better than any human champion, in less and less time. It started out taking about 18 hours to successfully learn how to play the game without human intervention. It took 18 hours to win Space Invaders.
Then they went on and on, and it got to the point where about every two hours it could learn a new game and be extensively better than anything else that had happened.
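DeepMind's Atari system combined this kind of Q-learning with a deep neural network trained on raw screen pixels; a full deep Q-network is well beyond a slide, but here is a minimal tabular sketch of the underlying update rule on a tiny made-up grid game, just to show the "learn from reward, no human intervention" idea. The environment, rewards, and parameters are all illustrative.

```python
import random

# Minimal tabular Q-learning sketch: a 1-D toy "game" where the agent starts
# at cell 0 and is rewarded for reaching cell 4. A stand-in only: DeepMind's
# DQN replaced this lookup table with a deep network and learned from pixels.

N_STATES = 5                 # cells 0..4, reward at cell 4
ACTIONS = (-1, +1)           # move left or right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.3   # high exploration for this toy

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state: int, action: int) -> tuple[int, float, bool]:
    """Apply an action; return (next_state, reward, episode_done)."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def choose(state: int) -> int:
    """Epsilon-greedy action selection."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

for episode in range(200):
    state, done = 0, False
    while not done:
        action = choose(state)
        nxt, reward, done = step(state, action)
        # Core Q-learning update: nudge Q(s, a) toward the reward plus the
        # discounted value of the best action available in the next state.
        best_next = max(Q[(nxt, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
        state = nxt

print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES - 1)})
# Expected learned policy: move right (+1) from every non-terminal cell.
```

Nothing in the loop tells the agent how to play; it only sees the reward at the end, which is the point John is making about the Atari work.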
So why A.I.? Because we’re at a point where the computational processing power is available. We have elements like storage costs going down further and further and further. We’ve got cloud computing. We’ve got all these areas.
And we’ve got the Google cat detector, which has obviously broken a whole bunch of barriers in how we create something that will do good things and identify the things that we humans want out of this complex data. So, I’m going to transition into the human challenges with A.I., and sometimes people don’t like talking about this.
And the human challenges are that some are based on our own DNA and our makeup, and some are based on just the technology itself. So, I’m actually going to go in reverse order here. But we’ll start with the black box of A.I. One of the biggest human challenges that we have with A.I. is that everyone is nervous about what that decision process looks like.
And it doesn’t matter whether it’s in industries like mine, the one that I work in, which is with financial institutions, public accounting, and financial recordkeeping, or whether it’s in that business-to-consumer bot that’s helping you pick your next cell phone or cell phone plan, or whether it’s in self-driving cars. Everyone is very worried about what this box is actually doing and how to get a comfortable understanding of what it’s doing. And so, we do try to make sure that the human side, the people side, can get access, understand, and be able to trust it. That’s one of the biggest challenges that we have with A.I. It is not a human problem; it is a problem that the A.I. system vendors and the A.I. solution specialists need to continue to break apart. The second piece is bias.
And what’s really interesting, from the last 20 years of designing, developing, and deploying software for a variety of organizations around the world, is that you see human bias everywhere. Take the analytics tools that people have seen; just think of these as charts and graphs out of the data. In enterprises, it’s very common that you will insert your bias into the question you’re asking. I want to know how much our revenue grew, period over period. I’m going to go and find all the places where we grew. I may ignore all the places where we didn’t, because I’m already instituting a bias that I want us to grow; therefore, I want to validate the hypothesis that we’re growing. Okay, interesting. Humans, as individuals, have a significant amount of bias. And I don’t want to belabor this, and I’m not trying to make this in any way a statement of politics or policies or anything like that, but we have bias, and the bias doesn’t come from the A.I. itself. It comes from the people implementing the A.I. or designing the A.I. So, there’s a lot of time spent around how we do this, how we do this ethically, and how we work through this.
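To make that point about inserting bias into the question concrete, here is a small sketch with made-up regional revenue figures: filtering to only the regions that grew paints a very different picture than looking at everything. The numbers are invented for illustration.

```python
# Illustrative only: hypothetical revenue by region for two periods, in $k.
revenue = {
    "East":  {"last_year": 100, "this_year": 130},
    "West":  {"last_year": 120, "this_year": 125},
    "North": {"last_year": 110, "this_year":  80},
    "South": {"last_year":  90, "this_year":  60},
}

def growth(rows) -> float:
    """Total period-over-period growth, in percent, for the selected regions."""
    prev = sum(r["last_year"] for r in rows)
    curr = sum(r["this_year"] for r in rows)
    return (curr - prev) / prev * 100

everything = list(revenue.values())
only_winners = [r for r in revenue.values() if r["this_year"] > r["last_year"]]

print(f"all regions:            {growth(everything):+.1f}%")    # about -6.0%
print(f"only regions that grew: {growth(only_winners):+.1f}%")  # about +15.9%
```

Same data, two answers; the bias lives entirely in which rows the question asks about.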
Self-Driving Car in Ottawa
And I’ll give you an example of the challenges we have as it relates to our bias, and then how it blends in with the ethical conversation. Now, you can probably assume, based on this picture, what I’m about to talk about. We have a self-driving car in Ottawa. We’ve got this wonderful program out in the Kanata Research Park for self-driving vehicles. There are a few other programs across the nation that have been deployed; I think Ottawa might have been the first city that actually actively deployed something, and I think Toronto and a few others have followed suit. But we’ve got this issue of ethics with AI and with self-driving cars, or the neural network that’s working behind them. So, as you can imagine, we’ve got a car coming down the road, and we may not have enough room for that car to get past things. In this case, I’ll put people. Again, not trying to bias anyone in their thought process, but just giving you the sense that there’s something in the way and there’s an option A and an option B. When we look at the biases and the way that people interpret the ethics of doing something, there’s actually a fairly significant study that has been done on this case, where different parts of the world have different desires in terms of what they spare or what they protect. And so, when we start thinking about building A.I., when we start thinking about the ethics, are we thinking about all these types of environments that we have to work in? When we look at these biases and these ethics, they can be based on age, or they can be based on gender. Obviously, there are folks in the middle who are nonbinary and identify differently; it doesn’t matter, it’s the DNA-based structure that we’re talking about here.
But there are different opinions, as you can see on the screen. And finally, there are differences even in terms of our level of education. Who would you spare based on whether they’ve got one degree, two degrees, five degrees, no degrees? And again, the answers are very different and differently applied. So, when you bring all of these things together, you have this issue of, starting with the black box, what did it actually do? What level of bias is in there, and what type of ethical concerns have been driven into this? And you can imagine AI in things like medicine, right? We want to get some of those things right. And so, it comes to our last point about the human challenge, which is the privacy of all of this. It takes a significant amount of data to run amazing A.I. systems. So, what level of data are we willing to use to train our systems, create inferences, and ensure that there’s a lack of leakage in the overall system? How do we do this? Well, today, every time you sign up for a new service, you get a few terms and conditions. And I’m sure that many of us don’t read all of them all the way. And that’s okay.
But, you know, at the end of the day, we are selling a bit of ourselves into these ecosystems of data that we want to have. This personally identifiable information, or PII, then goes into these programs. And I’m not picking on a single vendor here; it’s just that it’s going into the programs that you’re using. And if those programs are driven ethically, built to exclude bias, and designed with explainability, you can get a very good sense of how that personal information and that privacy are affecting the outcome. But at the end of the day, we had an issue a few years ago with a company called Cambridge Analytica, which has been deemed to have influenced outcomes in certain political spheres because of the amount of personally identifiable information, or PII, that they were able to use and leverage and build into the bots that were communicating through the media system.
So obviously, having that information and that level of depth created an ability for them to be very targeted. And people feel that that is uncomfortable. I get it. So, let’s talk about it in your world. We set the stage; there’s a bit of a basis. There’s some really cool tech stuff that we’re doing based on really amazing advances in the actual technology. But how is it affecting you? And so, we’ll spend the last six or so minutes on that as we get ready for Q&A.
A.I. Is Everywhere
Again, if you have questions, throw them into the chat window down at the bottom or any of the other places that we have available to us. We’re going to talk about how it affects you today and every day. Well, it starts with AI being everywhere.
That was my very first slide, if you remember. And the thing is that we do, in fact, use it every day. I’m sure that not everyone has a smart TV, and I’m sure that not everyone has a smart thermostat, and I’m sure that not everyone has a smart car. But the reality is that since about 2016, maybe 2015, every single vehicle at a certain level of quote-unquote ‘trim’ has been equipped with a variety of safety-sensing componentry. These are all components that feed into an AI-based system. So, Toyota Safety Sense: they’ve got this camera at the front and a LIDAR system that’s actually pushing out and gauging how close you are to the next car. The vehicle I drive, which happens to be part of the GM family of cars, has a forward-collision countermeasure. You know, Tesla is always in the news talking about their full self-driving capabilities, which is really interesting because we don’t have the right legal framework to actually enable all of that, where you can take your hands off the wheel and, you know, sleep. That’s not there yet, but you’re even using it in some very basic things. We’re about to hit tax season for most of you; our RRSP deadline was the other day. You’re probably getting ready to do your taxes, whether it’s with TurboTax, NETFILE, or any of the dozens of these programs, and the people managing these programs.
If you’re going into an H&R Block or you’ve got an accountant, they are using A.I. They’re using it to, again, try to support you, help you cull through all this data, and help you make better decisions. And so, I am very fortunate that I embrace this technology. A lot of people don’t. I find myself fortunate, though, that I know enough to protect myself as much as possible when it comes to things like privacy, ethics, and bias. I’m hoping that out of this you will come away thinking, okay, I’m going to spend a little bit more time on those terms and conditions. Now, I want to give a huge shout-out to the government of Canada, and I’ll come back to why in a moment.
So, to me, all of that progress is kind of crazy and wild and wacky and insane. Now Google, a company most of us know, or rather its parent company, Alphabet, did something interesting with DeepMind. They bought it. They bought it for a ridiculous sum of money, and they turned it into Google’s cat detector. So, the Google cat detector: it’s a very funny story.
They took this idea of the neural net and they pointed it at one of their assets, which is YouTube, and all of their cache of searched websites. When you go to a search platform like Google, or maybe you’re using a streaming service like Netflix, when you start typing in information, when you start asking for something, the responsiveness and the quality of what you’re getting back are significantly high. So, Google bought DeepMind, and they created the Google cat detector as a proof of concept to show that they had really taken things to a new level. So, we are now in this new A.I. spring. It’s been almost a decade that we’ve been in this spring, or this resurgence. And I love the Forbes quote from when this all started.
But it is quite complex, and it is, you know, quite integral to getting to these levels of information. So, I’m going to fast forward a little bit and speed up a little bit on some of those concepts. So why A.I.? Because it performs complex and laborious tasks. It doesn’t need to sleep.
It doesn’t have, or traditionally won’t have, the same level of bias.
There are a whole bunch of reasons why we can pass huge amounts of data through it and get the agility to act on the other side of it. It basically takes all of this complex, voluminous data and processes it at speeds that you couldn’t match by putting enough human beings on it, and it comes out with very interesting insights extracted very, very quickly.

The Government of Canada really has been a forward-thinking leader around artificial intelligence, on how businesses can thrive, and on how we can move forward together. They’ve actually got something called the Algorithmic Impact Assessment program. It’s essentially a way for you to understand how much reliance you should put on a given type of artificial intelligence. That’s fantastic. Back in 2018, I believe it was, a series of businesses, MindBridge being the first tech business, signed the Montreal Declaration, which is all about the ethical design, development, and deployment of artificial intelligence. And, again, I think that’s a really good testament to keeping us safe.
But it really does now lead us into the final sprint. How does it affect you and your finances? That was the pull, right? The reality is it affects everything.
And we already mentioned the whole tax piece, doing your tax returns and your filings. But there are so many other areas of your financial ecosystem that we need to talk about. Back in 2017, Toronto-Dominion Bank, or TD, bought an organization called Layer 6. Layer 6 is an artificial intelligence team that was building amazing programs for the financial services community, and TD has pointed all of those team members at internally developing different tools and techniques. If you’re a TD user, you may have seen in their most current apps they’ve got this thing, what do they call it again, spend alerts; I think it’s called ‘MySpend’. And it actually shows you how far above or below last month you are, and what your trends are. There are elements of A.I. baked in there to try to help you figure out where you need to go. CIBC, I believe it is, has a program where they actually challenge you to save more: as part of their app, when you go to pay a bill, it asks you if you want to push some money to savings. Then you’ve got Clearbanc or Wealthsimple or some of these other great Canadian upstarts in the banking and wealth management space, where they’re using A.I. to find the best product for you or the best investment for you. Even organizations like the Caisse de dépôt et placement du Québec, or CDPQ: a lot of their investment thesis is now being driven by a variety of analytical programs that are steeped in A.I., as is the CPP, our Canada Pension Plan, obviously, or other private pension holders. So, it really does impact you all the time.

And what’s kind of interesting is the next stage of this, which is how A.I. and white-collar jobs transition. Now, I come from a space, in the last five years, of working with corporations and with public accounting firms that deliver on audits. Why is this important, and why am I mentioning audits in terms of how this impacts you? There are some major failures that have happened over the years.
Those who have been investors for maybe a couple of decades might recall that the Sarbanes-Oxley Act, enacted in the United States, was a direct result of big failures like Enron, WorldCom, and Tyco.
These were big malfeasances, with senior leadership in those businesses actively hiding money, moving money, and doing very strange accounting things. It made the companies look bigger than they were and more performant than they were. People kept investing, and then, lo and behold, big failures. In recent Canadian times, there was a big furor around the likes of Nortel Networks, and even more recently in a few other parts of the world: in Germany with Wirecard, or in the United Kingdom with Thomas Cook Travel, which actually affects all of us. I think all of us want to be back on a plane, or at least have the ability to go and travel. Thankfully, most restrictions in most provinces are moving on. But Thomas Cook Travel is a great example: it had a clean bill of health, an audit performed by a very large public accounting firm, and six months later it filed for receivership and bankruptcy. So, it affects you. It affects how you invest, how you bank, and how you transform your ability to have wealth.

And so, as it relates to my specific world, and this is not a plug for MindBridge being this great company, although I love it and I love being there, the ability to transition to having folks like auditors and financial professionals use artificial intelligence to spot those errors and those challenges as quickly as possible is going to be a requirement for a more performant financial ecosystem. As you can see here, Klaus Schwab articulated that by 2025, respondents expect that almost all corporate audits will have had some form of AI applied to them, which is fantastic to see. I’m sure it will actually take longer; everyone that puts a stake in the ground doesn’t think of all the other factors that go into this, but that’s kind of where we are. A.I. is all around you.
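As a rough illustration of the kind of error-spotting described above for auditors, here is a minimal sketch that flags journal entries whose amounts sit far outside the norm for their own account. Real audit platforms such as MindBridge use far richer models; the data, accounts, and threshold here are invented.

```python
from statistics import mean, stdev
from collections import defaultdict

# Hypothetical journal entries: (account, amount in dollars). Invented data.
entries = [
    ("janitorial", 180), ("janitorial", 210), ("janitorial", 195),
    ("janitorial", 205), ("janitorial", 190), ("janitorial", 200),
    ("janitorial", 185), ("janitorial", 215), ("janitorial", 4_800),  # odd one out
    ("marketing", 2_000), ("marketing", 2_400), ("marketing", 1_900),
    ("marketing", 2_150), ("marketing", 2_300),
]

def flag_outliers(rows, z_threshold: float = 2.0):
    """Flag entries more than `z_threshold` standard deviations away from the
    mean of their own account; a crude stand-in for real anomaly detection."""
    by_account = defaultdict(list)
    for account, amount in rows:
        by_account[account].append(amount)
    flagged = []
    for account, amount in rows:
        amounts = by_account[account]
        mu, sigma = mean(amounts), stdev(amounts)
        if sigma and abs(amount - mu) / sigma > z_threshold:
            flagged.append((account, amount))
    return flagged

print(flag_outliers(entries))  # expect the $4,800 janitorial entry to be flagged
```

The point of the sketch is the division of labour: the machine culls thousands of routine entries, and the human professional investigates the handful it surfaces.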
A.I. Augments Human Capacity
You’re definitely working with it and accepting it as part of your livelihood and your day-to-day life. And what I’m hopeful for is that people will start thinking about, okay, how can I use that? Or how can I find products that are going to use that to make sure that my financial stability is there in the future? So, the last thing I’m going to say before we drop into questions and answers is that A.I. augments human capacity; it doesn’t replace humans. There may be a time for the singularity; I’m not here to opine on that, but definitely part of the job of A.I. is to make it easier for humans to do more things, either individually, for their business, or, hopefully, for the world itself. So that was my little bit of opening up my curiosity. Michelle, maybe we can talk about where we go from here.

Great. Thank you so, so much, John, for sharing some of the promises that are coming out of this
new branch of technology, but also some of the challenges and some of the pitfalls and places where we could trip up. So, I’m going to invite our audience… there are already two questions that have come in, but I’m going to invite our audience to find the Q&A button at the bottom and type them in, and we’ll try to get through as many as possible. But, John, I figured we’d throw you an easy question first, because the questions coming in are kind of deep. So, yes or no, and then you can explain further: do you think we’ll ever get to the point where finances, bookkeeping, auditing, etc. are going to be fully automated by A.I.?

Yes. And I say that because we’re already starting to see some of this happen.
And there are three technologies; well, four. One of them is just getting everything digital in the first place, scanning documents with OCR, optical character recognition: imagine all of it actually being fully digital all the time. We’re not even there yet, right? But if you could have that, it’s a stepping stone to having it fully automated with an AI system. There are already tons of technologies out there from companies like UiPath, Microsoft, and Blue Prism that do something called robotic process automation in this space.
And so, what they do is they actually use robotic process automation, essentially taking those OCR elements and going through and posting entries in a business. Say I went and bought janitorial supplies, or I have an invoice coming in from my marketing agency. It literally comes in to, think of it like a big electronic file folder, and the RPA will look at it. It’ll say, oh, this one goes to janitorial expense, this one goes to marketing and advertising expense, and it posts it. Then a bot will pick it up and say, oh, it was net 30 on the janitorial and net 45 on the marketing, I will now pay it, and it will go and create the banking entries to submit those funds to those vendors. And then it will reconcile, at the end of the day, that whatever amount came in went out of my bank balance. Job done. So that’s a really interesting place for us to be.
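Here is a minimal sketch, in Python, of the kind of rule-based coding and posting an RPA bot might do with those digitized invoices. The vendors, accounts, and payment terms are invented for illustration; real RPA platforms like the ones named above are configured through their own tooling rather than hand-coded like this.

```python
from datetime import date, timedelta

# Hypothetical routing rules: vendor keyword -> (expense account, net payment days).
RULES = {
    "janitorial": ("Janitorial Expense", 30),
    "marketing":  ("Marketing and Advertising Expense", 45),
}

def post_invoice(vendor: str, amount: float, received: date) -> dict:
    """Classify an incoming invoice, 'post' it to an account, and schedule payment."""
    for keyword, (account, net_days) in RULES.items():
        if keyword in vendor.lower():
            return {
                "vendor": vendor,
                "account": account,
                "amount": amount,
                "pay_on": received + timedelta(days=net_days),
            }
    # Anything the rules don't recognize goes to a human for review.
    return {"vendor": vendor, "account": "NEEDS REVIEW", "amount": amount, "pay_on": None}

inbox = [
    ("Acme Janitorial Services", 480.00),
    ("Bright Ideas Marketing Agency", 2_150.00),
    ("Unknown Vendor Inc.", 99.00),
]
for vendor, amount in inbox:
    print(post_invoice(vendor, amount, received=date(2022, 3, 1)))
```

Note the fallback at the end: anything the rules cannot classify is handed back to a person, which is the "level of human that always assists" point made just below.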
Apologies, I hope you didn’t hear too much of that ringing; I didn’t even know where it was coming from. The joys of being at home. So that’s the second piece, RPA. The third piece is actually the ecosystem of full, end-to-end AI players. There’s Botkeeper, for example, which is actually a process where you can submit anything from customer orders to invoices, and it’ll plug it all in. The last piece of the puzzle for me, though, is actually blockchain. A totally different technology; we probably need to do a Curiosity on Stage on that at some stage. But blockchain is where you will have a level of transparency and a level of acceptance by all the parties that will allow us to use AI to do the full spectrum. The thing is, even if we get there, it’s not going to replace a level of oversight that we need, whether that’s in regulatory bodies or in human bodies at those individual enterprises and organizations. And for yourself, right, we already get direct deposit payments, we already get all of these things. I think there’s a level of human assistance that always remains. But how much can we actually push down? I think it’s the vast majority, in terms of bookkeeping and presenting financial statements.

So, you’re saying the AI is going to do a really good job of finding fraud in my credit card statement, but I’m still going to have to skim through and make sure that I bought all those things? Exactly.
And there’s a great Newfoundland-based company, Verafin, which is protecting all of us, or most of us at least; most of the people who are banking here in Canada will be banking with one of their customers. And they’re already doing some of that for us. But yes, you should always eyeball your bill, and maybe eventually you start looking at other types of alerts, and that will all be AI-based. This is what TD is trying to do with their, I think it’s called, MySpend report, where at every frequency that you set up, it’ll actually go and look at the types of spending you have in different categories and say, hey, there’s a blip over here, so that you’re sort of drawn to it. We’re trying to get you to the thing that matters, not the “yeah, every week I have a payment for this, and every month I get my mortgage payment”. It’s not that. It’s “oh, that’s a spike, that doesn’t make any sense”, or “wow, you’re spending way more on shopping than you ever had”. I mean, obviously, for credit card fraud, that’s the place to look: look at that retail. Unfortunately, oil and gas is a really big target for that. But retail, travel, and things like gas for your car, for sure, you should be looking at those every statement.

Great, thanks. So, I think it was funny that you just had a little technological issue here, because one of the questions in our chat is… I mean, there’s going to be a crash at some point. Something’s going to go down. I’m assuming that an AI crash at some point is kind of an inevitability. Could it fail? What would be the repercussions? Are there fail-safes?

You know, that’s… it’s hard to assume that everyone’s going to do the right things. So, in a perfect world, yes: when it fails, it fails gracefully. There’s a lot of redundancy in a lot of systems that exist today, although, you know, we see service outages all the time with products that we use. So, the question is, when you’re building that A.I. system, what is the level of real transparency in that element of failing gracefully? What did happen? What do I need to look for? So, when it happens, what I’m hopeful for is not like a…
I was going to use a TV reference, but that probably won’t translate for everyone who hasn’t seen it. But, you know, we don’t want to have this situation where the world goes dark, where all of a sudden everything’s turned off because the A.I. system failed. We have to work really hard to make sure that we don’t have that situation happen. And I think that’s one of the reasons why, as much as the technology exists today to go way further, and I’ll use the Tesla example that I mentioned: a Tesla today could literally drive in the city of Ottawa, with the person sleeping, from point to point, and with a degree of confidence in the nineties it would get there without incident, without any issue. We’re not ready for it as humans, though. And therefore, that give and take of how much we’re willing to adopt is going to slow down getting from the nineties to 95 to… In the computer industry, we look at the ‘nines’, 99 point and then either three nines or five nines, as the level of stability we can provide (three nines, 99.9 percent, works out to roughly nine hours of downtime a year; five nines is about five minutes). Most SaaS vendors, or subscription vendors, will be looking for that for their uptime. And that’s what we need AI to be, and we’re not there. That’s just the reality. Can we get there? Yes. But there’s got to be this almost two-way dialogue, well, three-way if you include governments and regulators governing whether it gets used. There has to be this interaction between the AI provider, whatever that looks like, and the consumer, to get to a point where we’re happy with the end state.

Yeah, definitely, great. Thank you. There are a couple of questions coming in here about ethics, so we’re going to parse this out a little bit. I’m a bit of a sci-fi nerd; I like my sci-fi movies. And we often see AI portrayed as the villain. Think Terminator, 2001: A Space Odyssey, Prometheus, Westworld, or Blade Runner; the plot is all very, very similar. So, as we’re building these A.I. programs, how do we go about building ethics into them? You talked about some of the ethical choices that need to be made, but how do you actually put ethics into an A.I. program?

So, the best way I can describe it, or the best way that I would think about it, is that we need to have resilient A.I. Usually, we talk about it in terms of resiliency. Resiliency in an A.I. context means that there are fail-safes that are constantly checking that the system is doing the things that we want it to do. I’m not going to disclose the party that this happened to, but there’s a very large technology firm that was using an AI bot to sift through resumes and decide who got selected for things like interviewing. And so, this is a very real ethical challenge.
Right. We are all striving for a level of diversity and equality, for sure. For most tech firms, this is one of the things they think about all the time. There are massive pushes into it, with programs dedicated to STEM, science, technology, engineering, and math, to increase the level of diversity. But there was a large firm that was using the historical data profiles of existing employees to then infer who they should give time to in an interview situation. And so, again, from a gender, DNA-based perspective, given the male-dominated environment in tech for the last 40 years, you can imagine that this bot did something that was not very good. Very biased. So, they had to scrap it. And so the way to build these things in is to obfuscate that PII, that sensitive personal information, things like gender, and look at the core elements, in this example. Strip away the name; strip away anything that could relate it back to gender or age, because ageism is a thing. There’s something in the news about one of my former employers talking about that.
I’ll just say ‘dinobabies’, and you can go and search what that looks like. We have to strip away elements of this. And so, when we’re building A.I., we need to be thinking of these things, and it needs to be resilient and not merely single-fault tolerant; it needs to be tolerant of multiple faults, and it needs to have dimensionality. Using a big monolithic system is not going to be good for pretty much anything. And I would urge anyone thinking of A.I. and building A.I. to go as wide as you can with, if you will, the dimensionality of what you’re looking for, in hopes that you will remove some of those ethical challenges, because it will look at the dimensions in their natural state, versus looking at them in terms of what could be ethically compromising.
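As a small illustration of the de-identification described above, here is a sketch that strips obvious PII fields from candidate records before they would be handed to any model. The field names and records are invented, and, as the comments note, this does not deal with proxy variables (postal codes, graduation years, and so on) that can leak the same information indirectly.

```python
# Hypothetical candidate records; the field names and values are invented.
candidates = [
    {"name": "A. Singh", "gender": "F", "age": 52, "birth_year": 1972,
     "skills": ["python", "audit analytics"], "years_experience": 14},
    {"name": "B. Tremblay", "gender": "M", "age": 29, "birth_year": 1995,
     "skills": ["sql", "financial reporting"], "years_experience": 5},
]

# Fields treated as direct identifiers or protected attributes.
PII_FIELDS = {"name", "gender", "age", "birth_year"}

def de_identify(record: dict) -> dict:
    """Drop direct PII so a model only sees job-relevant features.
    Caution: this does not remove proxy variables that can encode the same bias."""
    return {k: v for k, v in record.items() if k not in PII_FIELDS}

training_rows = [de_identify(c) for c in candidates]
print(training_rows)
# -> records containing only 'skills' and 'years_experience'
```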
But at the end of the day, A.I. today is a tool. Right. And so, the other problem we’ve got to solve is: do people using the tool subscribe to a level of ethics? So, it’s a bit of… there’s no easy answer to that one, really. I think going to a resilient, multifaceted A.I. approach is going to be way better than trying to build a single system that looks at every piece of information and treats every piece of information as a sort of… yeah, I think you kind of get
the point. So, following up from that, then: obviously the AI is using all this data, this personal data of ours. Are there regulations, either in existence or that you think should be put into existence by, say, government, about personal data gathering and how it’s used?

So, there are quite a few, everything from the Canadian anti-spam legislation, which starts to safeguard what information you collect, to areas within the technology itself, where companies have to identify the data they have on you, and if you request it, it can be deleted. There are some things that have already gone down the path of stronger regulation, stronger awareness, and transparency.
There’s more to do. I think what was amazing about the Canadian government’s foray into this is that they actually built a fairly rigorous program to think about how we should look at AI in businesses, and specifically for the work they do. This is the Algorithmic Impact Assessment, which is essentially a big piece of trying to get there. The other piece is the Montreal Declaration. It’s an idea that even without regulation, firms will sign up and do good. It’s a commitment, and right now it’s very much, what do you call it, sort of like an honor system. But I think it will have to progress further, and it will. The reality is that we will get more and more regulation, and companies will have to reduce the amount of personal data they capture without, you know, directly related consent. And you see this mostly, and I’m not picking on a specific industry, but you do see it in what’s happening with our smartphones and our smart devices. How much information is shared when you open up one app from another app? I don’t know if you’ve noticed this behavior, but if you’re shopping on Amazon and then you go to Facebook, you get some really interesting ads, typically directly related.
And that’s all around tracking you individually. There actually is, on an iPhone anyway, a setting called Limit Ad Tracking, and that will help separate it. But the reality is they’re going to find a different way to get to a similar programmatic answer, which is geofencing. Where is the device? Where does that device normally go? So, I live in Ottawa, you live in Ottawa, right: are we at Bayshore, or at the Rideau Centre, or at the St. Laurent shopping centre? Very different profiles of stores in each one of those. Did that device stop at X? And so now, when it sees that device and it has a particular home, they’re going to try to figure out: can I reach you there?
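Geofencing at its simplest is just a "was this device within some radius of a known place" check. Here is a minimal sketch using the haversine formula; the coordinates are rough, illustrative values for the Ottawa shopping centres mentioned above, not precise locations, and real ad platforms layer much more on top of this.

```python
from math import radians, sin, cos, asin, sqrt

# Rough, illustrative coordinates (lat, lon) for the places mentioned above.
PLACES = {
    "Bayshore Shopping Centre": (45.347, -75.808),
    "Rideau Centre":            (45.425, -75.692),
    "St. Laurent Centre":       (45.421, -75.637),
}

def haversine_km(a: tuple[float, float], b: tuple[float, float]) -> float:
    """Great-circle distance between two (lat, lon) points in kilometres."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def places_visited(device_point: tuple[float, float], radius_km: float = 0.3):
    """Return the named places whose geofence the device location falls inside."""
    return [name for name, centre in PLACES.items()
            if haversine_km(device_point, centre) <= radius_km]

# A hypothetical location ping near the Rideau Centre.
print(places_visited((45.4255, -75.6925)))  # -> ['Rideau Centre']
```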
There’s going to be no easy answer other than… well, I applaud what the federal government did, and I actually applaud what the provinces are doing right now as well. But I would say there’s more to come. And we need to be very laser-focused on investing in businesses that are willing to take the step of being open and transparent and building ethical and responsible AI, not defunding the other ones, but making it more of a reporting and regulation issue, where people are aware of what they’re buying and how they’re buying it, based on how that company performs.

So, a follow-up to this: do you think the onus should be on the companies and the government, or on consumers? I mean, you talked about these terms and conditions, which I’m guilty of definitely not spending the couple of hours required to sift through.
But people should kind of be aware of what they’re sharing and how it’s going to be used. Do you think there’s a way we can better help people understand what they’re sharing and how it’s going to be used?

So, I’ve got a few friends in the legal community, so I’m going to offend them right now, maybe. I apologize if anyone online is in the legal profession. I find that the most infuriating thing at the moment is how long it takes you to read those terms and conditions and how long it takes you to find the button or the key that says, no, thank you. And I get it, right? The reason I point to the legal profession is that they are writing it in a way which covers the bases and limits the liability. We don’t want to become a litigious society where anyone and everyone is suing every company for everything. So, I get why they’re so long; I get why they’re so involved. But I think simplifying the language would go a very long way toward people being more comfortable and confident in making the choice to say yes or no in an opt-in or opt-out. Then I’d follow that up with: we need a better way to get at the information. So that’s on the company. But definitely, the government has to play a role. So, it’s really not a single industry and it’s not a single group. It really has to come from educating consumers and individuals, having a layer that abstracts the legalese and the minute detail of liability protection and IP protection into a more simplified state that people can understand, companies signing up to do good things, and then obviously government supporting that and enforcing it. I mean, we’ve already got massive regulation around banking, insurance companies, anyone that’s touching your finances, as well as things like the CRTC and what’s happening in communications: Rogers and TELUS and Shaw and Bell have rules about what they’re allowed to keep, store, and capture based on your consumption of content, right?
But I think it’s all three parties, right? It’s the businesses, for sure: if we had transparency about whether you are being ethical, you’ve signed up for it, and you get an ethical, non-biased audit, whatever that might look like, that would be a step for the businesses. I think individuals need to get more in tune with it, and that means we actually have to force the businesses to simplify, so that it can be for everyone, versus just folks that have gone through and understood the legalese. And then the government has to have the right and appropriate influence, from a regulatory perspective or from a direct-consequence perspective, on whether these businesses should be able to do the things that they’re doing.

Right. We’re coming close to the end, so I want to do a little bit of a speed round with you on a couple of these questions. The rule is you have 30 seconds to a minute to answer the next couple of questions. All right. Okay. Question one: do you think AI tech will reduce financial inequality or exacerbate it?

Currently, it’s exacerbating it, with the desire to have it reduced. I think it should get to the point where it reduces that inequality. With Wealthsimple and a few others, getting better access to trading tools and information and investing smartly is huge for people from all walks of life. But right now, it is tilted the other way. It needs to become more equal.
Right. Number two: humans use IQ to measure intelligence, and we know there are issues associated with that. But is there such a scale or a rating system for A.I.?

Not yet. The Algorithmic Impact Assessment tries to get there in terms of how the Canadian government believes you should rate how much reliance you can have on an AI system. And so, I think they’ve done a good job of starting to create that level of awareness. There’s nothing concrete, and this is a bit of a dilemma. It needs to be a concerted effort from the firms and from the public to come together and get to something that they can agree on. So, there is no scoring system today, but the more diverse and resilient the thing someone has built, the more they will want to tell you about it, because that’s the way to go.

I do imagine it’ll be tricky to come up with a system when all the AI programs are programmed to do such different things and use different types of intelligence. Are there any AI technologies that you think are a little scary or that we should be wary of?

Um, I wasn’t expecting that one.
I don’t think anyone should jump into an area that they’re not willing to… It’s a risk-reward system when dealing with AI, I guess is what I’m trying to say. It’s not one-size-fits-all. So, my tolerance for risk on whether A.I. is good or bad, because I happen to be in the space, is probably a higher-risk profile; I’m willing to take more risks because I understand the elements of those consequences. But there’s no technology out there today that’s really being used that I think is doing something nefarious. Yes, you get the phishing scams with the prince of X-Y-Z country asking for money, or telling you you’ve won some lottery. With that type of A.I. trying to target you, yeah, we’ve got to stay protected. But I think for general mainstream use, in things that you’re probably touching today, I don’t think there’s anything that’s super scary yet. Yet.

All right. Last question: do you think we will achieve singularity? An A.I. that is self-aware, intelligent on all of the different levels that we consider intelligence, which passes for a living being.
An A.I. system is absolutely going to be better at driving a car than I will ever be. 100%. And it should be there, and it should… But is it sentient enough to know, while it’s driving: “Oh, hey, I forgot about this other thing that I was supposed to do, so I’m going to make a left here”? Could it happen? Yeah, for sure. Should it happen? Not sure yet. But I think, definitely based on our current definition, you know, Merriam-Webster, whichever we want to use, of what intelligence is, we will absolutely get to that point at some stage with A.I. systems, for sure. The question is whether we move the needle or not on what we feel it is to be intelligent.

Great. Well, thank you. So, we’re at 4:00, so it is, unfortunately, time for us to wrap this party up. I’d like to say a huge thank you, John, for speaking with us this afternoon. Thank you for your time and your passion, for giving us a bit of an insight into A.I. that we might not have had before, and for showing us some of the ways we might not have noticed it being used. I’d also like to thank our audience here for joining us and for participating and giving
us some questions. I know we didn’t get to all of them, so I’m going to put John on the spot and hopefully ask if maybe he can answer some of them in a written format.
Great, super. We will publish those to the Ingenium Channel afterward. We would like to hear your thoughts; we’re very interested in continuing to develop these presentations and make them better. So, if there is any feedback you’ve got for us, there is a survey link that should be appearing in the chat shortly, and there will also be one coming into your inbox in the next little bit as well. My final plug for the evening is that if you did enjoy what you heard tonight, I would encourage you to register for our next Curiosity on Stage, which is the final one in this series: Beyond Injections: 100 Years of Insulin and the Future of Diabetes. That’s going to be presented by Lisa Hepner on May 12th, and she’s going to be telling us about The Human Trial, which is the story of a biotech startup on the verge of a major medical breakthrough: a cure for Type 1 diabetes. So, check the museum’s website. Thank you so much for coming, and we hope to see you in the future. Bye now.