James Kotecki (00:00):
The boundary pushers, the futurists, the dreamers, the doers. What do they all have in common besides over-caffeination? They seek new possibilities, new innovations that can enhance the human experience. And you'll find them all at CES. So dive into the most powerful tech event in the world. CES 2025. It's January 7th through 10th. It is in Las Vegas and you should be there. So join us. Register today at ces.tech. This is CES Tech Talk. I'm James Kotecki, exploring the trends shaping the world's most powerful tech event, CES 2025 in Las Vegas, January 7th through 10th. Everybody loves a treasure map, but what if the real treasure is the map itself? Maps are woven into the fabric of our digital lives and today we explore the cutting edge of location technology that's powered by open source data and artificial intelligence. So when it comes to the technology that tells us where we are, literally, where are we metaphorically? Here to help us navigate this evolving map-scape is Michael Harrell, senior vice president of engineering at TomTom. Michael, welcome to CES Tech Talk.
Michael Harrell (01:24):
Thanks. Thanks for having me, James.
James Kotecki (01:27):
Let's start with a simple but possibly very complex question at the same time. Why do maps matter in 2024?
Michael Harrell (01:36):
Yeah, it's a great question. It's interesting because maps have been around for a very long time. The biggest transformation that we're seeing in maps, the thing that makes them really matter, is that maps are no longer just helping humans navigate, but they're now assisting computers and AI to navigate and understand the world. And so now that we have all these devices that have become mobile, your phone, your computer, your car, all of these devices, they need to understand their place in the world to be able to interact with each other and to interact with the humans that are navigating in the world. Understanding where they're located and what is around them ultimately allows them to perform at their best.
(02:21):
So obviously autonomous driving is a great example of one of these worlds that's continuing to advance in mapping, so that the vehicles know how to navigate through the world, but also location analytics. Many different companies are trying to better understand their customers and how they can continue to enhance their products or offerings to them. So you can imagine the world of maps mixed with census data, mixed with private customer data, ultimately coming together to be able to produce an even deeper and richer understanding of the world.
James Kotecki (02:57):
When you talk about maps not just being for humans anymore, how much of a fundamental shift is that in terms of what the maps are actually doing, what they are, what a map even is? I guess I'm just thinking about if you get a chance to drive in a self-driving car or a car with some kind of autonomy in it, there might be a map on a screen that shows the human in the car what the car "sees". But it always strikes me that that's just completely a convention for human eyeballs and brains to understand, and that what the actual computer inside the car is experiencing, as far as what it might think of as a map, is totally different from the human version of that.
Michael Harrell (03:36):
Well, certainly how a human interfaces with a map, and how a machine or AI ultimately wants to index and quickly access that information and make interpretations from it, is very different. At its most basic, raw level, maps are still the same thing. Maps have been around for thousands of years and they'll be around for thousands of years more. And they've actually not meaningfully changed in that they're attempting to represent the physical world in some easy-to-consume way, to be able to represent your position in that world and how to ultimately decipher what is occurring. And the biggest change that's happened and continues to happen in maps is the amount of understanding, the detail, the accuracy, how many features you put into that data.
(04:28):
And in the world that we're existing in today, machines require a much more significant amount of accuracy and detail to be able to operate safely, to be able to quickly and easily navigate and understand what's happening. Humans are good at looking at the map and then self-navigating, self-understanding the rest of the world that's around them, whereas computers are not yet at that point. Their sensors and their onboard equipment are not in a position to do that, nor have they been trained for years to figure out how to work within the world in a location-based way. And so these maps ultimately provide a tremendous amount of information and detail that allows computers to better work within the world.
James Kotecki (05:14):
So you're saying that for computers to use maps, the map almost has to provide more than it would to a human brain, just because the human brain, I'm putting the word "map" in quotation marks, but the human brain maps the world on its own. And you, the map maker, can assume the human brain will do some of the work, whereas if you're making a map for a computer, you can't assume that computer will do as much of that work. Is that-
Michael Harrell (05:35):
Well, let's take an example in autonomous driving. So we keep hearing that, will autonomous vehicles be able to drive themselves? And there are even parts of the industry that say, well, we have enough sensors onboard that maybe we won't even need a map in the future for these autonomous cars. Well, humans, by the way, first they need a map. They need a map to navigate. Now the question is, do you need a high definition map to figure out how to safely operate the vehicle? And the answer is absolutely, because of just simple situations like how much should you slow down for a speed bump? It turns out there's a big chunk of concrete gouged out of the asphalt at many of these speed bumps, because humans even have a hard time, the first time going over a speed bump, detecting the safe speed to drive over it that won't bottom out the vehicle.
(06:24):
But a map that has seen enough humans go over the speed bump can predict the right behavior, what a proper speed over it should be. You can put that in the map, and then the vehicle can safely use previous human behavior, baked into the map, to determine proper future computer behavior. And that's important. So another analogy I tell people is, do you want to sit in a taxi going from the airport into a major urban city center with a first-time driver, or do you want a taxi driver that's been doing it for years and knows the entire area? If you build in all that experience and knowledge of how to safely navigate the roads, from thousands of drivers driving over many days, you can build that into a map and really teach the computer how to behave more closely to safe and proper driving behavior, instead of having an onboard machine trying to figure it out for the first time every time, as if it's a new driver. And that's the difference of why you ultimately want a map to help the computer.
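The idea of baking observed human behavior into a map attribute can be sketched in a few lines. This is a minimal illustration, not TomTom's actual method: the probe speeds, the "slower half" heuristic, and the floor value are all hypothetical choices for the sake of the example.

```python
from statistics import median

def recommended_bump_speed(observed_kmh, floor_kmh=5.0):
    """Derive a map attribute for one speed bump from crowd-observed
    crossing speeds (km/h). Takes the median of the slower half of
    the traces as a conservative estimate of a comfortable speed,
    never dropping below a minimum floor."""
    if not observed_kmh:
        return None  # no probe data yet: leave the attribute unset
    slower_half = sorted(observed_kmh)[: max(1, len(observed_kmh) // 2)]
    return max(floor_kmh, median(slower_half))

# Hypothetical probe speeds from vehicles crossing one bump:
traces = [18.0, 22.0, 15.0, 30.0, 17.0, 19.0, 16.0]
print(recommended_bump_speed(traces))  # → 16.0
```

A vehicle reading this attribute from the map can slow to the crowd-derived speed before it ever "feels" the bump, which is the first-time-taxi-driver versus veteran distinction in miniature.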
James Kotecki (07:33):
Is this where we get into the concept of open source data for mapping, which I know is really important to TomTom. I mean, this concept of open source has been around for a while, but is it especially resonant when it comes to just needing a lot of data to train these AI systems?
Michael Harrell (07:47):
In fact, it is one of the biggest things that's finally, I think, taken us over the threshold in the mapping industry. So the mapping industry, just to take a step back a bit, has been working in silos. So there's a few companies that build an entire map, TomTom being one of them, and each of these companies has built a map on their own, in their own silo. And the challenge is that they-
James Kotecki (08:14):
When you say entire map, you mean of the world, of a city? What is-
Michael Harrell (08:17):
Of the entire world.
James Kotecki (08:19):
Okay.
Michael Harrell (08:19):
So the entire-
James Kotecki (08:20):
It's just a few companies that have really mapped the whole world like that.
Michael Harrell (08:22):
Correct. And when we say the entire world, it's in fact not the entire world, because there are regions that are just not well mapped, or not mapped at all. And it's because each of these companies has to prioritize. So we map North America and Europe much better than, for example, parts of Africa. And the reason is that it's consumer-driven: how many customers are utilizing the map causes you to prioritize how much detail and how far you go into mapping. But that's not the best way to ultimately work in the mapping world. And now that maps have become so demanding of yet even exponentially more data, the answer is that, in fact, no one company can meaningfully map the world with all the data that's required to do it well.
(09:11):
So TomTom, there's a great slogan TomTom came up with, which is: it takes the world to map the world. And that is very true. And that's where open really comes into play, in saying, hey, we all should collectively work together to map the base capabilities, the base understanding of the map, so we have a shared understanding of the world. So you can now imagine if we all have a shared base understanding, just the most simple stuff, like where the roads are, where the buildings are, where certain restaurants and tourist attractions and things like that are. And we all share that exact same understanding. It won't be perfect, because a map never is a perfect representation, but it'll be closer to perfect because we'll all collaborate and fix it and make it better. And more importantly, we'll all have that same shared understanding of what it is.
(09:59):
So now we can communicate; anything that's communicating in this virtual world is communicating with the same understanding. Before, we each had our own singular understanding of what the virtual world looked like and what its map was, and we were all working in our own silos. You couldn't talk to each other. So one device working in a location space couldn't talk to another device. The analogy is if your Android device, your iPhone and your PC couldn't communicate with each other. Imagine a world where these devices couldn't communicate with each other. That's where the location-based space has been for decades, and it's been significantly stifling to the advancement of autonomous driving. There haven't been advancements in a lot of location capabilities because we can't innovate and move forward when we're not all in the same singular space. And so this has been a huge breakthrough for the industry, to get us into this singular space and ultimately have a shared knowledge that we work against.
James Kotecki (11:01):
I don't know if you remember the movie The Truman Show, but as I was prepping for this interview, I remembered there's a scene in there where they flash back to a young Truman in school, and I think Truman says something like, "I want to be an explorer like Magellan," or something like that. And the teacher, to warn him off from ever trying to leave his town, of course it's a simulated reality show for those who haven't seen the film, the teacher says, "Oh no, the whole world's been mapped." And then she pulls out a whole map of the world. And I think that movie came out in what, the late '90s.
(11:28):
And I just wonder if there's this sentiment that maybe you sometimes run into when you tell people about what you're developing. Do you ever run into this kind of obviously false sentiment? Like, oh yeah, mapping, that's a solved problem. We know how to map things. It's just such a background functionality for a lot of people's lives that they don't think about it. And what you're saying, I think, articulates very effectively that the further we go down this technology road, the more sophisticated and the more challenging it actually is. It's not a solved problem at all.
Michael Harrell (11:57):
And not even close. I mean, the level of detail that's required isn't there. If you just look at the maps you look at today, do they represent reality closely enough that you could imagine a computer understanding how to basically maneuver itself through that world? And the answer is, in most cases, no. In fact, when you look at the 3D representation of it, it's very cartoonish. It's not representing reality exactly right. Where the road begins and ends isn't exactly right. And you can now imagine, for a computer, if it doesn't know where the road begins and ends, that's going to be a big problem. If it doesn't know where the stop line is, big problem. That's one side of it. The other side of it is that the world is constantly changing, everything's changing, and the speed at which you can adapt to that change is also critically important.
(12:45):
I'll give a couple of extreme examples. The earthquakes in Turkey, the recent flooding in Valencia: the minutes matter. When the world significantly changes at that level, you've got to know which roads are open and which roads are not open to get emergency vehicles into place. You need to know where the pop-up tents for medical supplies are. There's a bunch of stuff that has to happen within minutes on the map to be able to ultimately save people's lives. And we're still not there, honestly. We're just not. We take more time than necessary to get a new understanding of the world and have it mapped correctly to be able to operate well within it. So that's a more extreme example, but the world is constantly changing, and our ability to keep up with it is still not where it should be.
James Kotecki (13:34):
And you talked about yourself and other private companies engaging in this activity, and yet we're talking about emergency situations, which obviously call for a public response. I mean, people might think of maps, rightly or wrongly, as a public resource, or at least we have this shared public idea of where things are. So what's the balance, or what's your philosophical take on how much of this is a public good? Especially when we talk about open source data and the data coming in, what's the balance of public and proprietary when it comes to how your companies think about these things?
Michael Harrell (14:06):
Yeah, I mean, ultimately the biggest thing that I think separates the public versus the private is the cost. And so where it requires meaningful cost to acquire the information that's necessary to build quality into the map, ultimately you have to be able to offset that cost through selling to customers. So more often, the value add, the things that are being sold, are ultimately also the things that are costly to acquire. Now of course, in situations like Valencia and others, that data is donated, and you provide it as quickly as you can and help in any way you can. But in normal day-to-day practice, what are the things that have been commoditized, that are considered open and no longer as much of a value add? That's where Overture, which is that collaboration that was founded by TomTom, Microsoft, Meta and Amazon, comes together.
(15:04):
And we've basically decided as a group, and many industry companies have now joined as well, and said, look, this is the stuff we believe is now commoditized, should be open to the public, and should not be something you take money for. And so these are where the road network is, where the buildings are, where the admin areas are, and places, things like restaurants and tourist attractions. And it's just a base understanding of them. So it's not opening hours or detailed menu information; it's still very complicated to source all of that, and so that's still value add that's put on top. Reviews, Tripadvisor and Yelp and all these that have detailed reviews that they've worked with their customers on, that's still meaningful data that can make a difference for their business and ultimately make a difference for others. That's still value add on top.
James Kotecki (16:01):
Let's talk about AI from the perspective of how AI helps you maybe create these maps. We talked about AI and looking at a map and computers trying to use it and operate in the world, but if it takes the world to map the world, how much is AI involved in that map making process?
Michael Harrell (16:17):
So AI is involved in everything we do. Let me just take it at a really high level, if you'll give me a few minutes to just talk about AI, because it gets a little confusing.
James Kotecki (16:26):
Please do.
Michael Harrell (16:27):
There's been a history of AI for quite some time at TomTom and in the industry, to help set people's minds on AI. We had classical AI, which I would define as where humans wrote the rules and told the computer what the rules were. And then this advancement happened called machine learning. What machine learning did was, instead of humans writing what the rules were, humans labeled data, saying this is correct and this is incorrect, or categorizing the data. And then the machine looked over that categorized data and created the rules, and that was a huge advancement for the industry, to get into machine learning.
(17:07):
Then we had something called deep learning. With deep learning, humans still basically labeled the data, but machines looked over the data, found features within the data, and then built rules based on those features. So deep learning gave us another step in that direction. And then in 2017, there was a white paper called "Attention Is All You Need", which was about attention to the input, in particular, which is what I was talking about. And it created something called transformers. This is one of the two breakthroughs that happened in AI that we're all now talking about and that is creating a lot of buzz. What transformers did was say: humans still label the data, but machines are going to identify the features. And as part of identifying features, the machine is going to understand all the complex relationships in the input, like when you're writing a story and you say, the Queen of Spain did blah, blah, blah, blah.
(18:06):
And then a paragraph later, you say, she da da da. She relates to the Queen of Spain, and you know that it is that. And you can imagine all these incredibly complex relationships that are very deep across pages of input information. That's what transformers started to allow: the machines could understand those complex relationships in the input data. And it was a huge advancement for us. But the problem was that there are trillions, I mean trillions, of relationships that could exist when you get that complex. So another breakthrough had to happen. And the second breakthrough was called self-supervised learning. What self-supervised learning did was allow the machine to label the data. So now, if the machine can label the data, you can get a lot more data to these computers. And it did things like, if you had a sentence that said, I like ice cream, you could actually get rid of the cream part and say, I like ice. What's the most probable next word for the machine to learn?
(19:00):
And so what it did was just eliminate words out of sentences and then train the machine to learn what those words should be. And now, from its simplest sense on a sentence like that, imagine doing it to the data of the entire web. So between self-supervised learning and these transformers, we created this new concept of AI that we have today. Long story short on all of that: in fact, we use all of those. We use classical AI still today. There are some places where we use machine learning and deep learning, and certainly other places where we've advanced, where we're using transformers and self-supervised learning in a pretty significant way.
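The "I like ice cream" trick, where the data labels itself, can be shown concretely. This is a toy sketch of how self-supervised training pairs are generated from raw text; real systems tokenize differently and mask at scale, but the principle is the same: hide a word, make the hidden word the label.

```python
def masked_samples(sentence):
    """Turn one sentence into self-labeled training pairs by hiding
    each word in turn: the visible words become the model's input,
    and the hidden word is the label it must predict. No human
    labeling is needed, which is what makes it self-supervised."""
    words = sentence.split()
    samples = []
    for i, target in enumerate(words):
        context = words[:i] + ["[MASK]"] + words[i + 1:]
        samples.append((" ".join(context), target))
    return samples

for context, target in masked_samples("I like ice cream"):
    print(context, "->", target)
# Last pair printed: "I like ice [MASK] -> cream"
```

Run over the entire web instead of one sentence, and this is the labeling engine behind the models Michael is describing.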
(19:36):
In mapping, it's all about GPS, camera data, LIDAR data, other sensors. Is your ABS kicking in? If your windshield wipers are turned on, maybe you turned them on to wash a bug off. But if everybody's windshield wipers are turned on, it's likely that it's starting to rain, and the speed at which they're going tells us how much it's raining. So all this type of sensor data we can start using to inform ourselves of what's happening on the road. And AI is an important component of that, because of course there's so much complexity, when you're not on the ground to see it, in how you interpret this data and ultimately create the right answers.
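The windshield-wiper example can be sketched as a small aggregation rule. The encoding (0 = off up to 3 = fast) and the thresholds here are hypothetical, chosen only to illustrate the fleet-signal idea, not any real vehicle data format.

```python
def rain_estimate(wiper_reports):
    """Aggregate fleet wiper states on one road segment into a rough
    rain signal. wiper_reports holds one wiper speed per vehicle:
    0 = off, 1 = intermittent, 2 = continuous, 3 = fast. A lone
    active wiper is probably just washing a bug off; many active
    wipers, and their speed, suggest rain and its intensity."""
    if not wiper_reports:
        return "unknown"
    active = [w for w in wiper_reports if w > 0]
    share_on = len(active) / len(wiper_reports)
    if share_on < 0.3:
        return "dry"  # too few wipers on to infer weather
    avg_speed = sum(active) / len(active)
    if avg_speed >= 2.5:
        return "heavy rain"
    return "rain" if avg_speed >= 1.5 else "light rain"

print(rain_estimate([0, 0, 1, 0, 0]))        # one wiper on → "dry"
print(rain_estimate([2, 3, 2, 3, 2, 1]))     # most wipers fast → "rain"
```

A single vehicle's signal is ambiguous; it's the aggregation across the fleet that turns noisy sensor events into a map-worthy fact, which is where the heavier AI machinery comes in for messier signals than this.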
James Kotecki (20:13):
And then how does what the AI is doing connect with the human mapmaker? Is there a human mapmaker in the loop? Are they able to accept or reject what AI is suggesting that that person adds to their maps? How does that work?
Michael Harrell (20:29):
Yeah, that's a good question. So first, of course, humans still label data. They still quality control data. The biggest challenge with the latest advancements, basically this generative AI, is how well you can test it when it's trained on such a large set of data, and how you think about that differently from how we used training and learning data sets with machine learning. The answer is that humans are ultimately the ones that still train machines and will continue to be, no different than humans being trained by other humans. And that's the answer, which is just humans in the loop. Humans helping train the machines, humans helping maintain the quality of the machines, continuing to look over the data, humans helping correct where the machines make mistakes. So we still have a meaningful number of humans ultimately involved to continue to help move it along. And that world doesn't change. We will continue to see that as far out as we can see.
James Kotecki (21:27):
I want to get your perspective on some of the CES industries that mapping and location-based technology obviously support, and just get your perspective from where you are on how these things are going. We've talked about autonomous driving. Where do you see the cutting edge right now of what's possible? What are the kinds of challenges coming up down the literal or metaphorical road there? What's your take on the state of the industry? Are we all going to be riding around in autonomous vehicles next year, or is it basically never going to happen except in just a few spots? Where are we?
Michael Harrell (21:57):
Yeah, it's definitely happening. So there's no doubt about it. I mean, obviously there are already cars on the road driving themselves. It's just not scaled. It's not beyond very tightly controlled parts of cities and things of that nature. But here's where the automotive industry is moving. There was this excitement where we thought we could get to cars driving by themselves without a steering wheel within a couple of years, and in reality, that was not realistic. Where we're really advancing toward now, instead of trying to get from zero to end game overnight, is moving up the levels of autonomy. So most people have speed assist and lane guidance in their car. That's the first beginnings of autonomous driving, and you continue to see further and further capabilities for taking your hands off the wheel.
(22:48):
The next one, which we call level three, is taking your hands off the wheel as well as your attention off the road, so that you don't have to pay attention to the road any longer. That's the first meaningful breakthrough that changes how you interact with your vehicle significantly, because as soon as you no longer have to pay attention to the road as a driver and you can start to pay attention to other things, you're going to see the car start to meaningfully transform. Because now, all of a sudden, this whole world of the car opens up to you, and you go, well, now what do I do if I'm not operating the vehicle and paying attention to the road as much? And we'll see the car advance in ways that we haven't seen in our lifetimes as a result of that change.
James Kotecki (23:28):
From where you sit, widespread adoption of people not having to pay attention to their vehicles is coming in five years, 10 years. When is that realistic from your perspective?
Michael Harrell (23:37):
All OEMs are now building that capability. It's going to be within the next few years, depending on which technique they're using and things like that. But it's also going to be based on the road. So you'll see it adopted on the motorways or freeways more quickly, because that's a more controlled situation and a little easier for these vehicles to handle than the more complicated intersections in an urban setting. So we'll see it advance there, and then we'll start seeing it come down to lower road classes over time.
James Kotecki (24:11):
What's the state of navigation for humans in indoor spaces, urban spaces, human-made spaces, especially when it comes to overlays? We talk about augmented reality. So how are the technological innovations in the mapping and location space contributing there?
Michael Harrell (24:32):
So first, AR is also happening. There was massive investment in AR until AI blew up, and then we've seen some shift of investment more into AI than into AR, but AR is still very much an active space. It's actually very similar to autonomous driving. It's the same thing, where the device has to understand its place in the world, and it has to understand it with enough detail that it knows where objects are, so it can place other virtual objects within that world. And so AR to us is a very similar problem space, other than where it's located: in pedestrian areas versus on roads. And so you have to have a lot more sensor information to pull that data and recreate it. And of course, we don't have a lot of AR devices out there collecting a significant amount of sensor data, so there's still a little more to go as far as getting to the threshold of the amount of sensors-
James Kotecki (25:35):
Hit a critical mass.
Michael Harrell (25:36):
Yeah, a critical mass on that. So you'll see it adopted, again, just like self-driving, in smaller areas where it's been mapped out a little more thoroughly. It's also going to be the first place where we see meaningful indoor navigation. The problem with indoor navigation has been knowing where the device is, because it doesn't have the GPS signal indoors. There have been a lot of ways to get indoor location, but there hasn't been a singular one adopted yet to where, everywhere indoors, you have a known and crisp location of yourself.
(26:10):
Where AR helps is that, because it has a detailed understanding of the geometry of the world, it actually can place you in it and move you through it well. And so AR actually has a really good understanding of your placement in the world, but that's also because its devices are seeing the world while you're moving through it. A phone you've got in your pocket, or that you're holding away, can't know where it is because it can't see. And so AR starts forcing that visibility of where you are so it can place you better, so you'll start seeing indoors become a better experience as a result of AR as well.
James Kotecki (26:47):
So this could theoretically be entirely on the device itself, without any connectivity. If the AR has a good enough map of the indoor space and it can see the indoor space, just like a human brain would: oh, I'm seeing this corner and this wall and this door, and because I already have a mental model of this space, I know where I am in the space.
Michael Harrell (27:04):
You've got it. That's exactly right.
James Kotecki (27:07):
Let's talk about your presence at CES 2025. TomTom's going to be there. What are we going to expect when we show up and see you?
Michael Harrell (27:17):
So big advancements for us are, as we mentioned at the top, this kind of open map making: it takes the world to map the world, and how TomTom is working with all of these other top tech leaders in the industry on the new movement of the industry in relation to mapping. We'll talk about that. We'll talk about real-time maps. We'll talk about things like bring your own data and your private layers: how can you contribute to the map with the data that you have, without having to build the map yourself, and integrate it so you can get the deep insights and create the unique offerings for your customers that you can with that. We also have a new product that we're going to be really showcasing, what we're calling HD All Roads. Basically, with the amount of sensors and information that we now have on the road, we can build a level of HD capability on all roads that vehicles drive.
(28:10):
So there'll be 3D visualization and some level of either visualization or autonomous driving available on most roads. And so that's a big deal. Of course, on the highest class of roads there's going to be the highest level of autonomous driving, and it falls off as the sensor capabilities and the saturation of the sensors fall some. We have a bunch of developer tools to allow people to use that. Something we launched last year and are still going to be talking about more this year is in-vehicle assistance. So the ChatGPT interface, talking to your car. So this world, for those old enough to remember KITT and the old talking car, that is also-
James Kotecki (28:54):
A talking car from a TV show-
Michael Harrell (28:55):
The talking car is happening. You will have the talking car in the coming years, and you'll be much better able to interface with it, ask it questions, more complicated conversational stuff than what you're able to do with your current devices today. That's super exciting. And then we do a lot, of course, with traffic: richer traffic, hazard information, road works, a bunch of real-time stuff, how we're updating the map to keep it real time. And then the last thing is premium geocoding and last-mile routing, so some stuff related to that as well.
James Kotecki (29:27):
As we close here, I would love to get your blue sky thinking involved. Paint us a picture of some kind of science fiction-seeming thing that you firmly believe we will be able to do very soon. We've talked about some of them on this call, but what's the thing that you're working on that maybe people are most reluctant to believe, but that you really think is going to happen?
Michael Harrell (29:48):
Well, the thing that's absolutely going to happen, and is happening, has a term that I think has started to pick up a little bit. It's called the Third Living Room. There's a transformation happening in your vehicle over, say, the next five to 10 years, where vehicles will not look the same and you will not interact with them the same way you did before. And there are two reasons for that. One is autonomous driving; the other one is EVs. We talked about the autonomous driving. The EV side is because, with an EV, you no longer have to have this huge section of the car that's used for the combustion engine. You can have a flat, small battery, and that opens up the door for designing cars differently.
(30:28):
So now you can imagine, when you walk into your car, you're walking into a third living room that has really been adapted to be an entertainment experience, a way to relax and enjoy that entertainment while you're commuting between A and B. And the reason is that you no longer have to worry about being the operator of a vehicle and all the things that are required for that, and you have a new way to completely redesign the vehicle because you've got all this space from the EV design. And we're seeing some of those innovations already happening. There's never been a transformation in the automotive industry in our lifetime like what's happening now as a result of these advancements.
James Kotecki (31:07):
Well, obviously TomTom is going to be playing a major role in helping these autonomous cars get around and helping all of us get around no matter what kind of technology we're using. And we really appreciate you lending your thoughts to us today. Michael Harrell of TomTom, thanks so much for joining us.
Michael Harrell (31:22):
Yeah, thank you very much, James. Super exciting times.
James Kotecki (31:25):
Indeed. And that is our show for now, but there's always more tech to talk about. So if you're on YouTube, please subscribe and leave a comment. If you're listening on Spotify, Apple Podcasts, iHeart Media, wherever you get your podcasts, hit that follow button. Let's give those algorithms what they want. You can get even more CES and prepare for Vegas at ces.tech. That's C-E-S dot T-E-C-H. Our show today produced by Nicole Vidovich and Paige Morris, recorded by Andrew Linn and edited by Third Spoon. I'm James Kotecki, talking Tech on CES Tech Talk.