Does AI Ethics Really Matter?

It seems like the issue of AI ethics, especially as it relates to autonomous driving, is the great zombie topic of software today… it just won’t die. At least once a week, sometimes once a day, an article or blog pops up in my newsfeed about how important it is that we teach AI the best way to choose who to kill.

While this is a fun late-night philosophy topic, it’s not an important engineering topic, at least not now or for the foreseeable future. The more pressing question is: is my car trying to kill me?

Transcript:

My car is trying to kill me. My car is trying to kill me. Actually, two of my cars are trying to kill me. At some point, we decided to give our cars control over the gas pedal and cruise control, and then brakes, and now steering. What will it all come to? Let’s take a look at that.

Why is it that I say my cars are trying to kill me? Well, it turns out I have a couple of cars with modern features. They have adaptive cruise control based on radar or lidar, so they can keep the proper following distance, which is really cool when it works.

They have automated braking assist that makes sure the brakes go on if something’s right in front of you and you’re about to hit it, which is really great when it works.

But the really annoying thing is the lane-keeping assist. And again, intellectually, it’s a great thing for those long road trips. Like here from L.A., I’m driving out to Vegas; it’s a long drive, it’s boring, there’s nothing on the side of the road, and it’s really easy to get highway hypnosis. So in theory it’s really great that if you start to drift out of your lane, your car will pull you back in.

But in practice, both of my cars use this feature to try to kill me, and they do it in very different ways. I don’t want to pick on a particular brand, because as we’ll talk about in a moment, the problem is systemic and actually very important. But it’s funny, because the one car is very gentle about it, a boil-the-frog moment, if you will. You’re driving along, you’ve got your hands on the wheel, and it slowly tries to nudge you either toward the car on the right or toward the wall on the left. If the wall is far away, the car drives really well, and if there’s no actual car right next to you, the car steers very well. But the second there’s a big ol’ truck next to you, it gently, just gently pulls the wheel and says, “We can get closer, we can get closer. Just a little closer, just a little closer.” Slowly trying to bring me into a collision.

And the same happens if there’s a wall. Not one behind a median, but one right there, right off your left fender; then it’s “Let’s get a little closer. How close can we get to the wall?” Now the other car does a much better job with the adaptive cruise control. You almost never have to intervene. The automatic braking tends not to panic and brake when nothing is happening.

The steering seems to have a much better recognition of where on Earth the lanes are. And I get that these are all complicated problems. It turns out that humans are really good at seeing and computers aren’t.

But this car is funny, because it drives along and gives you that false sense of surety, of safety, because it’s tracking, it’s keeping in the lane. You’ve got your hands resting lightly on the wheel, and it’s taking curves and handling people coming in and out of the lane. And then all of a sudden it just goes, “Time to die!” and jerks the wheel left or right.

Now, I haven’t yet fully figured out when this happens, but there were a couple of times where I could make some kind of correlation: there was actually a mark in the road. It wasn’t really a big pothole, but I suspect the car is doing some sort of pothole detection, and when you’re doing 65 miles an hour down the road it doesn’t want you to fly into that pothole. But I’m not sure; hopefully later I’ll know more.

And that’s the point: the computer has a hard time knowing. It’s easy to go, “Hey, there’s a thing in front of you.” But it has a hard time knowing: is it slowing down, is it speeding up, is it maintaining distance, and where are your lanes? Sometimes the lines aren’t painted well; sometimes they’re non-existent. Sometimes the lanes have moved over time, so there are multiple sets, one pale, one less pale. They’re almost never well-painted, right? Sometimes there’s a groove in the road where they originally put the lanes before they moved them over, so you’ve got a line, and a line, and a line, and sometimes the computer makes these funny decisions.
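I have no idea what any particular car’s vision stack actually does, but here is a toy sketch (every number and name in it is invented) of why multiple sets of markings cause trouble: if the software naively trusts whichever candidate line scores best on each frame, an old groove or faded stripe that briefly scores well can yank the lane estimate sideways, which from the driver’s seat feels exactly like one of those funny decisions.

```python
# Toy illustration only, not any real vision stack: each frame yields
# candidate lane markings as (lateral position in meters, confidence).
candidates_per_frame = [
    [(-1.85, 0.90), (-2.40, 0.30)],  # fresh paint clearly wins
    [(-1.85, 0.55), (-2.40, 0.60)],  # old groove briefly scores higher...
    [(-1.85, 0.88), (-2.40, 0.25)],  # ...then the fresh paint wins again
]

# Naively taking the top-scoring candidate each frame makes the lane
# estimate jump half a meter for one frame: a sudden steering correction.
naive = [max(frame, key=lambda c: c[1])[0] for frame in candidates_per_frame]
print(naive)  # [-1.85, -2.4, -1.85]

# A simple exponential filter damps the blip instead of jerking the wheel.
alpha, estimate, smoothed = 0.3, naive[0], []
for frame in candidates_per_frame:
    best = max(frame, key=lambda c: c[1])[0]
    estimate = (1 - alpha) * estimate + alpha * best
    smoothed.append(estimate)
print(smoothed)  # the half-meter jump becomes a gentle drift
```

Real systems are vastly more sophisticated than this, of course; the point is just that ambiguous markings plus frame-by-frame decisions are a recipe for exactly the twitchiness I’m describing.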

And again, a discolored, circular spot on the road, which doesn’t fool me, fools the car. And this is the problem with AI, with computer vision: trying to get the computer to understand very quickly what something is and what is going on, including nasty things like depth perception. And then, on top of that, knowing what to do about it, and how hard to react.

And again, these auto-steering systems tend to spend a lot of time hugging one side of the lane or the other, not riding right down the middle of the lane where it’s safe. Especially if there’s a car next to me. When there’s a giant semi next to me and he’s hugging his lane line, I should be hugging the other side of my lane. These are normal human responses.
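Again, I don’t know how any manufacturer actually implements lane positioning, but the behavior I’d want is easy to sketch: aim for the center of the lane by default, and bias away from a big vehicle hugging the adjacent lane, never so far that you leave your own lane. Every function name and threshold below is hypothetical.

```python
def target_lateral_offset(lane_width_m, neighbor_side=None,
                          neighbor_width_m=0.0, max_bias_m=0.5):
    """Hypothetical sketch: where to aim within the lane, as a signed
    offset from the lane center in meters (positive = toward the right).

    Default is the center (0.0). If a wide vehicle is hugging the
    adjacent lane, bias away from it, capped at max_bias_m and capped
    again so we keep roughly a meter of margin to our own lane line.
    """
    if neighbor_side is None:
        return 0.0  # nothing next to us: ride the middle of the lane

    # Wider neighbors (semis) earn a larger bias than, say, motorcycles.
    bias = min(max_bias_m, 0.25 * neighbor_width_m)

    # Never bias so far that we crowd or cross our own lane line.
    bias = max(0.0, min(bias, lane_width_m / 2 - 1.0))

    # Steer away from the neighbor: semi on the left -> move right (+).
    return bias if neighbor_side == "left" else -bias


# Example: a 3.7 m lane with a 2.6 m-wide semi on the left ->
# aim about half a meter right of center instead of splitting the lane.
print(target_lateral_offset(3.7, neighbor_side="left", neighbor_width_m=2.6))
```

The hard part, of course, is everything upstream of a function like this: reliably knowing there is a semi next to you, how wide it is, and where your lane actually is.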

The reason I bring this up is that there has been this big movement over the last few months around AI and ethics. And that’s certainly a real issue, especially where AI is making decisions around, say, healthcare, loan approval, et cetera. It’s very, very easy to create an AI system, a machine learning system, based on data that has some inherent bias, or supports your bias, or simply didn’t take certain kinds of factors into account.

So AI ethics is important, but when it comes to autonomous driving, it’s probably the least important problem we have at the moment. We usually hear about this in the form of the trolley problem. It’s an old story with a lot of variants, but basically there’s a trolley driver, or a brakeman, or a switchman, and at some point the trolley is in trouble, but there’s something on the other track. Is it the trolley man’s kid, is it a child, is it the trolley man himself? The stories vary, but basically the person in control of the lever at that moment can choose to kill themselves, or that child, or whoever it is, versus everyone on the trolley.

It’s an ethics dilemma, an interesting puzzle to talk about, right? Who do we kill in a given scenario? Do we kill 20 people or one person? Well, what if that one person was going to grow up to be Jonas Salk? These are crazy, crazy dilemmas. That’s why they’re so great to talk about.

I love a great philosophical discussion like this. But when it comes to engineering, these discussions may be less appropriate. I think about the fact that I drive back and forth to work, a pretty good L.A. commute, every day. I’ve driven a lot of road trips to different places around the country. I’ve driven from the east coast to the west coast. I’ve driven all kinds of things.

I’ve calculated that, over my life, I’ve driven roughly a million miles. And when you think about that, in all of those miles I’ve come close a few times: I’ve had cars stop in front of me. I’ve had vehicles I had to swerve around. I’ve had people jump out in front of my car. I’ve had cars jump into my lane. One time, going to work, I saw a car come rolling across the median; it had gotten hit on the opposite side of the road. Luckily I was paying attention way ahead of me, so I saw it and was braking gently.

This is where these machine learning autonomous systems tend not to know what a car rolling across their lane is. So they wait until the last second, then go, “I don’t know what this is, but it’s big and it’s close and we’re gonna hit it,” and slam on the brakes. Which my father would really have yelled at me for. That’s bad driving.

So it’s funny, because in all of those scenarios, driving in cities, driving on open road, driving at night, driving in the daytime, driving in different states all around the country (I’ve driven in downtown New York, in L.A., in Chicago, in Wyoming, in Montana; all different), I’ve never yet had to make the choice of who I was going to kill.

And that’s the point: maybe at some point our cars will be so good that the only important choices left to them are who dies. But at the moment, the bigger choice is: what on Earth is that in front of me, and what do I do about it?

Earlier this week, the NTSB released its report on an autonomous driving collision, and they found that the cause was really bad software algorithms. So we have a long way to go before we have really good autonomous driving software, and believe me, I see bad software every day. That’s my world: bad software. That’s why I’m the Code Curmudgeon.

But as bad as software can be, and as good as it can be on the other hand, humans are terrible, terrible drivers, just awful. So even when the software is somewhat buggy, we’re probably going to reach a point pretty quickly where it does a better job than we humans do. And, you know, it’s especially the human-to-machine interface where we have problems.

If the cars were all autonomous and there were no pedestrians, no bicycles, et cetera, this problem would be easy, right? But the problem is hard, and that’s my point: worrying about AI ethics is really, really fun, but it’s not the hard part.

I think about my cars trying to steer into a truck next to me, and the obvious conclusion is that the car has decided an accident is inevitable. The truck is big and probably full of something valuable, and I’m one dude in a little car. So maybe this was an ethical choice; maybe the car decided that I was less valuable than the big truck next to me. Maybe it was programmed by somebody who owns shipping lines, and they recognized that it’s better not to hit big trucks, or maybe that a big truck would cause a bigger accident, or something. I don’t know.

But that’s my point: autonomous driving is exciting, and we are slowly getting there. AI ethics is important, but it is not the problem we need to worry about in autonomous driving right now. It’s much more important in the areas where AIs are making decisions about our finances, our health, et cetera. So let me know what you think in the comments, thumbs up if you liked the video, subscribe if you want to see more, and thanks for watching.
