How Is AI Changing the Auto Industry?
January 9, 2024
Wharton professors dive into how AI is supercharging
driverless cars and other vehicles of the future.
Wharton professor John Paul MacDuffie joins Eric Bradlow, vice dean of Analytics at Wharton, to discuss AI’s impact on the auto industry, including the latest frictions between automotive and tech companies, potential for open source software, and more. This interview is part of a special 10-part series called “AI in Focus.”
Eric Bradlow: Welcome to the next episode of the Analytics at Wharton and AI at Wharton series on artificial intelligence. This episode is probably one of the topics that we should be spending a lot more time thinking about, which is AI and automotive. In a lot of ways, it’s really where it all began.
I’m honored to have my friend and colleague, John Paul MacDuffie, speaking to us today. John Paul is a professor in the Management Department. He is also the director of the Program on Vehicle and Mobility Innovation, part of the Mack Institute for Innovation Management here at the Wharton School. I’m going to want to ask him what that is. So John Paul, welcome to our podcast today.
John Paul MacDuffie: Thank you. I’m glad to be here.
Bradlow: Before we get into AI and automotive, which is the big topic of today, I’ve always thought of you as our innovation in automobiles guy. So what is the Program on Vehicle and Mobility Innovation, and what kinds of things are you guys doing?
MacDuffie: Probably the best shorthand for our research agenda is the so-called “CASE technologies” that are transforming mobility. CASE — C for Connected; A for Autonomous; S for Shared mobility business models; and E for Electric. And understanding each of those technologies and their implications, but also how they combine, because they certainly combine in some cases, and in other cases not. We, for example, think that most autonomous vehicles going forward will likely be electric, but obviously not all electric vehicles are fully autonomous. It’s a fascinating competitive space. We have not only the legacy incumbents; we have newcomers like Tesla. We have a lot of other newcomers you may not have heard of, and then we have big tech hovering on the edges — you know, Apple and Google/Waymo. Foxconn, Apple’s manufacturer, wants to get into the autonomous vehicle business and run an entirely different iPhone-type model for how this industry evolves. So there are just a lot of fascinating things going on.
Another thing to say about the history of this Program on Vehicle and Mobility Innovation: I got my start on all this at MIT, as a doctoral student, with something called the International Motor Vehicle Program. At the time, it was trying to understand the transformation from traditional mass production, which had dominated most of the 20th century starting with Henry Ford, towards a production system that our program named “lean production.” So why was lean production supplanting mass production? Why did it have competitive advantages in everything from manufacturing and product development to supply chain management and the like?
When I came to Wharton, I continued an affiliation with MIT for a long time. I was even co-directing that program from down here. At a certain point, that program was about to close down at MIT, and I actually asked if I could move it to Wharton. And they said, “Sure,” because at this point the program was really a network of automotive researchers all over the world that we kept loosely coordinated. Sometimes we would simply get together and share knowledge. Sometimes we would do joint global research projects together. So moving the network hub from MIT to Wharton was not such a big deal. The Program on Vehicle and Mobility Innovation, PVMI, reverses the order of the initials of IMVP. That’s a little in-joke for those of us in the program.
Bradlow: I love that one. Let me ask you: When I say “historically,” I don’t mean 20 years back. Over the last seven to 10 years, as I said when I opened our podcast, I really have thought of AI in automotive as the flagship, the application that was going to demonstrate what AI could do: self-driving cars. Or, as you put it in your notes, “Level 4.” Where do things stand today? Are we going to be having driverless cars all over the place soon? Is Level 4 autonomy even the goal anymore?
MacDuffie: Yes, these are great questions, and I think anybody who has followed this technology and has had an interest in it knows about the hype in this sort of 2016, ‘17 — maybe an even earlier phase — and how it has been kind of disappointing since then. And yet these days you hear about Cruise and Waymo and Zoox operating on-call robotaxis in San Francisco on a limited basis. You may also hear about some of the controversy with fire truck collisions and the regulation that is on the books in California [OVERTALK].
Bradlow: Let me ask you about that. One of the things I always think about is — I used to work for a large — I can say it. DuPont, a large international chemical company. And there, an error rate of one in a thousand, one in a hundred thousand, one in a million might be fine. What could possibly be an acceptable error rate for a product and service like this? Do you have to be ten standard deviations out on the safety scale? How do you even launch a product where a one-in-100,000 error rate, which is great for most products and services, would be totally unacceptable here?
MacDuffie: I think that the right answer or the candid answer is we really don’t know. It, of course, depends partly on public perception and what people feel is safe. It depends on how regulators think of this issue and of course on the progress of the technology. The proponents who say this is already a much safer technology would say 40,000 people a year die in car accidents in the U.S. Those are all cars driven by humans. After about 50 years of that rate going down, in the last four or five years, it has gone up, despite all the new safety technology. Probably distracted driving and phones is one of the big reasons.
There seem to be two things going on at the same time. When a user, a consumer has the first experience of a driverless car, they’re nervous for a little while, then they ask a lot of questions. Then pretty soon they get bored, and they start looking at their phone. In other words, people adapt to the experience very quickly, and once they decide it basically looks like it’s driving normally, they don’t worry about it.
But whenever there’s a big, visible accident — the biggest one was the 2018 death of a pedestrian wheeling a bike, struck by an Uber test car that actually had a human operator there to keep an eye on the autonomous vehicle. She was looking at her phone at the time. That had a dramatic impact on how the public felt. Then suddenly you had a spike in people who said, “I would never even get in and try one of those.” It put a chill on many other aspects.
So I think it’s still true, as with a lot of technologies: we accept less error from automated systems than we do from humans, because we feel we kind of understand the sources of human error. And particularly with AI behind the choices that are being made in automated driving, we don’t feel like we understand those choices.
Bradlow: Let me ask you about something that our listeners may be interested in. We all know today that generative AI is the next big thing. All these people are talking about it, ChatGPT, et cetera. If I think about it as a statistician, which I am, I think it’s just an extremely high-dimensional prediction problem. Isn’t AI in automotive just a very high-dimensional prediction problem? Is there any reason why the — I’ll call it “general advances in AI today,” which tend to be focused right now on prompt engineering and ChatGPT — won’t those advances eventually help AI in automotive, as well? Or do you see them as different problems?
I would say, as academics we see that while the jargon changes, one question is, “What does John Paul MacDuffie type into a chat box, and what comes back?” and the other is, “What does Waymo build into some automated, AI-driven vehicle?” To me, they’re both high-dimensional prediction problems. Or do you see it differently?
MacDuffie: No, I think the underlying technology affects both, and the progress in the underlying technology affects both. Where words are the primary coin of the realm, as with gen AI, versus driving decisions, there are some differences. But I think part of what happened with autonomous driving is that the progress was remarkably rapid at first. It started with Google competing successfully in a DARPA challenge, a government defense department competition.
Bradlow: I teach that DARPA challenge in my MBA class. It’s a great example.
MacDuffie: Yes, and everyone was astounded at how well the Google software, combined with about half a million dollars of hardware on the vehicle, did. And so that fueled the VC money and all the other funding and everyone getting into it. Let’s imagine progress that was rapid, up to handling 90% of driving situations — 95%, 97%. “Wow, this thing is going to be everywhere in no time at all.” Then you hit that last couple of percent — I can’t say exactly what it is — the so-called “corner cases,” very rare combinations of events that are really hard to teach with traditional programming: “Here’s what may happen, and here’s what you have to do.” They’re also hard to teach with the more inductive approach of machine learning, where you train on a lot of data. These very rare events, by definition, don’t happen very often, so how do you train for them?
You can write simulations, and that’s what these companies are doing. Let’s imagine a weird situation, or let’s find a freak accident that happened in the real world, and let’s create a simulation for that. Their confidence that they’ve figured out how to handle all the simulated situations is part of why they say, “Yes, we’re going to be able to conquer it all.” But for the public and for regulators and all of us who think about climbing into these vehicles, every time one of those weird situations is not handled well, it adds doubt to that side of the ledger.
I don’t know that anybody ever feels that AI is going to solve everything a hundred percent. We’re on warning that there’s hallucination, and there are all these problems with gen AI, which means we have to really be alert.
Bradlow: So let me ask you, do you think the future is that AI in automotive will be more widespread, but it might not be what you call “Level 4”? Is the next step to get to more mass adoption, I guess, Level 3, which would be, “Yes, self-driving, but there might also be a steering wheel, or there might be the opportunity for human intervention”? What do you see is going to take it from — let’s call it what it is now, which is really a niche market — to something that might be more widespread?
MacDuffie: Yes, and a brief background on those levels. They’re something the Society of Automotive Engineers came up with to describe different degrees of autonomy. There’s actually a fifth level, which is, “Can go anywhere at any time, in any circumstances, without being able to connect to the internet.” That’s even further out. But Level 4 is basically, in most operating conditions, “Can operate autonomously.” You’re right, Level 3 involves some handing back of control between a human driver and an automated system. Level 2 is stuff that we already see in a lot of modern vehicles.
Bradlow: Lane control, self-parking —
MacDuffie: Lane control, even automatic braking. Some of it has become pretty advanced. So one kind of strategic — I don’t know if it’s a full divide, but it’s something you can see — the Level 4 stuff got all the headlines, and that’s what Waymo is investing in, that’s what Cruise and some of the other prominent start-ups are pursuing, and that’s what Tesla, with Elon Musk, has been promising for years with its optimistically, or misleadingly, named “Autopilot” system.
The legacy automakers and all the suppliers who feed in the technologies have been slowly adding the advanced Level 2 stuff, and even experimenting a little with Level 3. They can say, “We’re making a safer car.” They can usually charge extra money for it, although increasingly, a little bit following Tesla’s lead, there is a tendency to bundle all this safety stuff together and just say, “This is the right thing to do. Buy it. You get it all at once.”
Bradlow: Basic prospect theory. You bundle prices together.
MacDuffie: And so it’s kind of a question of whether the slow move up from the lower levels ends up affecting more of people’s driving experience sooner than the promise of Level 4, which may stay a niche until it satisfies a lot of the questions we have about it.
Bradlow: So who is going to be — I’ve thought about this. As a matter of fact, I teach an automotive case, as I mentioned, in the MBA core, because I think it’s such a fascinating industry. Who is going to be the winner here? So besides being vice dean of Analytics, I am the chair of marketing — I’ll put on my Marketing Department hat here. Who would I trust more to get into an autonomous AI-driven vehicle? Would I trust Ford, or would I trust Google? And to me, I’m thinking I want a data company. I want a company that’s really good at AI and predictive analytics. But another way to frame it is: Do you ever see a day where the legacy automakers actually turn out to be the big winners in this? Or is it likely to be a tech company that just happens to also do work in automotive?
MacDuffie: Well, that’s a great and big question, so we may want to spend a little time on it.
Bradlow: That’s what we’re doing here on the AI series. We talk about big questions.
MacDuffie: Waymo, which is the Google subsidiary doing this, they’ve said pretty clearly, “We are not going to build a vehicle. We’re not going to have a vehicle. We’re making a software driver, which we’re going to sell to, license to, people who make the vehicles.” So you could have a Waymo driver in a Ford vehicle. Maybe that would be a sweet spot for you, if you like — or any other legacy automaker.
You have other companies that are taking a different approach, Tesla obviously being one. But Zoox is a company that is now owned by Amazon, and it is building the vehicle, the hardware and the software, and the business model. They have a robotaxi and also an automated delivery vehicle, almost like small-scale trucking. It makes sense that they’re owned by Amazon. So you have all these different combinations.
I’ve been increasingly feeling that even though the automotive companies and the tech companies don’t really like each other and don’t really want to work together, they may have to. There’s something about mastery of the physical realities of the vehicle that the digital giants really don’t have. And obviously the auto companies are not good at digital stuff. Anybody who has tried the company-provided [OVERTALK] interface in their car knows that for sure. And I think we instinctively assume that the data giants will be better at something that involves so much data, that involves connectivity and the like. But the auto companies have handled the control of physical systems, braking and steering and the basic stuff, really quite well.
But I think another aspect of this is that the computer industry, and IT generally, has been quite a modular industry. You have fairly clear interfaces. You have innovation that’s possible when people simply know the specs, the interface specs, the APIs of the people they’re working with. The automotive industry has remained a very integrated industry, and there are a lot of interdependencies in this complex, multi-technology vehicle that can’t be predefined away. It requires a lot of interaction to work it through, and this maybe is a clue to where autonomous vehicles, but also autonomous and electric and all the other stuff together, are headed — it will require combining that knowledge of the physical with the knowledge of the digital and will force some collaborations that maybe the parties wouldn’t choose otherwise.
Bradlow: I don’t want to use the word “preventing,” but let’s use that. What is preventing things from getting to even Level 5 autonomy? Is it that it would just cost too much to build? That’s one possibility. You could build it, but at that price, there would be such a small market. Another possibility is it’s our computing: something in a vehicle can’t process that much information at that quick a speed. It could be we don’t know enough math. It could be our algorithms aren’t good enough yet. What is the big stumbling block, given you said 95-plus percent was done in the first couple of years? What’s preventing that last few percent? I think I’m ready. I’m ready for Level 5. I’m saying I’d get into Level 5, sure I would.
MacDuffie: Yes, there are new chips coming along. Tesla is designing its own chip; Nvidia’s video-processing chips, it turns out, are very good for the so-called “sensor fusion.” Sensor fusion is what they call it when you take the camera data, the regular radar data, and the LiDAR — the laser radar — data, and you combine them to get that 360-degree picture of what’s really going on. You have to have both accurate information about distance, things that are away from you, and what they are, right? You need to know: Is this a car? Is this a bike? Is this a person? Is this a piece of construction equipment, or whatever?
Bradlow: When you say that, I have to admit, if this wasn’t Professor John Paul MacDuffie of the Wharton Management Department and this was just a general AI lecture, I’d say this could be a story about facial recognition, or this could be — I mean, if you even think about the language you just used, this is the problem AI engines face today, and it just happens that the application here, in automotive, is in my view really interesting and cool and possibly life-changing.
MacDuffie: Yes, and you were asking about the various constraints on it. They are making progress on the technical constraints, for sure. There is the regulatory issue of how much testing in real life to allow. So as I said, there’s a lot of simulated testing going on. Most of the testing is on a rather small scale so far. It’s rolling out state-by-state, city-by-city for the robotaxis. There are also people working on automated trucking, and trucks are —
Bradlow: [OVERTALK] Which is potentially — People don’t know this. I teach this also in the same lecture. Maybe you’ll correct me if I’m wrong. I’ve made the claim that the trucking industry is the biggest industry in the U.S. today.
MacDuffie: That’s interesting. You’re obviously combining everything that moves goods at any level.
Bradlow: Exactly.
MacDuffie: Yes, I don’t know if —
Bradlow: But what I’ve also commented on is: let’s imagine Level 5 automated truck driving happens. I also think about all the ancillary industries; truck drivers generate a lot of income for hotels and restaurants and everything else. So we have a long conversation about “related industries.” If AI in automotive takes off, there are so many other industries that would be impacted, some positively, some not.
MacDuffie: Yes, there’s a company named Aurora that took over the autonomous vehicle work that Uber was doing. It’s working closely with Toyota. They are focusing on trucks right now, and they’re focusing on long-haul distances. I think most people imagine that that is the part of trucking that’s the best application. The local stuff, the delivery work where you’re in cities and you’re having to stop and put things on people’s doorsteps or in some kind of box, is actually a lot more complex to do, and those are also the jobs that are more local. There are huge shortages of truck drivers. The long-haul truck driving life is not a very desirable occupation, in terms of being away from home, and health, and the like.
It reminds me of another distinction I wanted to make. We don’t know if the autonomous vehicle is going to be an individual ownership model, or it’s only going to be a fleet model.
Bradlow: Yes, that’s what I was going to ask you next.
MacDuffie: That gets to affordability, and that gets to how you manage the upkeep and maintenance. Uber, famously, under its founding CEO Travis Kalanick, said that being able to move to autonomous vehicles is “existential” for us, because we can’t afford to do the human driver model and achieve all the goals we want to achieve.
Bradlow: [OVERTALK] What’s your bet? What’s your forecast? Am I going to be owning an autonomous driving vehicle, or is there just going to be a fleet of vehicles out there that drive up to my home any time I want it?
MacDuffie: Yes, I think the full autonomy model probably works economically best as a fleet model. Remember, though, we talked about the automation creeping up from Level 2 through Level 3, so Level 3 is probably still a personally-owned vehicle, right? So then is there a niche for somebody who really wants their own vehicle that they can then not drive? Or not.
Bradlow: It’s funny when you put it that way.
MacDuffie: [OVERTALK] Or will it all be supplied? Now, there’s also the economics of density, and so if you live in a place where you can guarantee relatively rapid service from a robotaxi, you wouldn’t need to own one. But if you’re way out in the country, you’re probably not going to have that ready service available, so then maybe you are a candidate to be the one who owns it. Maybe you have to be really wealthy to afford it, because the scale is not in the personal ownership, it’s in the fleet.
So there are some other issues about fleets. The companies that are doing the software for these vehicles don’t know from running fleets and repairing vehicles and dealing with duty cycles and cleaning up after the last user, and stuff like this. So the minute they contract that out, that’s a chunk of their profits. But that’s another story.
Bradlow: So in the last two or three minutes we have, let me ask you. First, it’s a two-part question, but it’s really the same question. What are the open research questions from you as an academic in this area? And secondly, maybe we’ll make a date. Ten years from now, you and I are sitting here. What are we going to talk about has happened over the last ten years in the AI in automotive space?
MacDuffie: One of the big research questions I’m looking at is really about the organization and structure of the industry and of competition in this space. So it’s clearly expanding beyond automotive to be a mobility kind of space. It’s clearly beyond just firm competition to be ecosystems and what’s going on with ecosystem competition. There is a vision of where this mobility stuff goes, which is around very modular systems, open source software. We haven’t talked a lot about it because it hasn’t shown up so much yet. But for example, I mentioned Foxconn before. They’ve organized a consortium of firms, suppliers, and the like. They want to have open source software for autonomous vehicles. They want a contract manufacturing model for making the vehicles. They want to completely commoditize vehicle design, where you start with a basic skateboard, and then you put different kinds of —
And it’s right out of the digital playbook of what we’ve seen happen in IT. There are people who fervently believe in that, and part of what they say is, “It’s too damned expensive for every single company to come up with its own vehicle, its own software, for you to have a competition even between Waymo’s software and Apple’s software, for example. Just a lot of —
Bradlow: [OVERTALK] Yes, now there’s going to be a —
MacDuffie: Why not have one open-source version that can also maybe be vetted, and that we can have more oversight over than we would with a private big tech company? My basic skepticism around this big research question comes from the fundamental fact that the architecture of both the product and the industry has been, up until now, much more integrated, because of all these kinds of interdependencies, some of which are really based in physics and physical realities. I don’t expect that to change so much that the modular vision some of its IT or big tech proponents would like really has the potential to happen.
So that’s kind of one big research question I’m looking at. I guess in 10 years, we might be looking to see what the outcome is, because there are some people who think that even if what I said is true, the brains are going to be somebody’s software. Somebody’s software is going to be so good that it wins, and it’s either a monopoly, or maybe it’s an oligopoly. Maybe we end up with Android versus iOS. Maybe there are just two competitors in the world for autonomous driving, and electric, and everything else. And everyone else just has to bow down, and, “Oh yes, you know, I used to be BMW, but now I’ll make your car, O big tech overlord.” Who knows? That may be possible.
Remember that this is a super-competitive, super-global, and super-low margin industry, and these big tech companies are not used to that. I sort of question, when I think about Apple, having an Apple car, not whether they could do it and do a very good job at it, but whether they really want to be in that business. And even if they pull back to say, “We just want to do the software,” even that is a big challenge and will pull them away from a lot of the other things that they’re doing.
Waymo is committed to it. I think Waymo will stay in. You’ve got to be well financed, so we may have a couple of winners. But it wouldn’t surprise me, given what I’ve said before, that you’ve got a couple of — essentially alliances between legacy auto and big tech. Some of those alliances thrive and win; some of them fail and lose. So we have kind of a shake-out that leaves a few legacy automakers and big tech contenders on the floor, and a few winners that have figured out how to combine their complementary capabilities, let’s say.
Bradlow: Well, John Paul, I’d like to thank you for joining me today on the Analytics at Wharton, AI at Wharton, Sirius XM podcast series. Obviously we’ve been talking about AI in automotive. What’s amazing to me is that I’m sure there have been times in your career when you’ve thought, “I’m in this old legacy industry.” But now is not one of them. Let’s be honest, it’s got to be a great time to be you right now. Come on!
MacDuffie: It’s a great time to be me. I’ll say it. I’ll agree.
Bradlow: Well, thank you for joining us.