One of the reasons I think AI is going to have a hard time taking over all our driving duties, our medical care, or even just our customer support interactions, is that being as good as a human isn’t good enough for a robot. They need to be computer good. That is, virtually perfect. That’s a tough bar to clear.
Let’s take the cars for a minute. Every single year, tens of thousands of people are killed in car accidents in the US. In 2022, it was 42,795 to be exact. The self-driving car argument is that if you could even improve that by 25%, you’d be saving 10,000 lives. That sounds incredible! But it’s also the kind of effective-altruism math that just doesn’t fly in The Real World.
Because let’s just say you’re Tesla. And suddenly half of everyone in America is being driven by one of your robo-cars. Your self-driving tech is highly advanced. 50% better than humans! That leaves you responsible for 10,000 deaths per year. Eeks! Okay, let’s say you’re another order of magnitude better, that’s still 1,000 deaths. Two orders? 100 deaths. Per year.
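To make that arithmetic explicit, here’s a minimal back-of-envelope sketch in Python. Only the 2022 fatality count comes from above; the fleet share and the safety factors are just the assumptions of this thought experiment, not real data.

```python
# Back-of-envelope version of the numbers above. The 2022 fatality count is
# the figure quoted in this post; the fleet share and safety factors are just
# the assumptions of the thought experiment.

US_ROAD_DEATHS_2022 = 42_795   # total US traffic fatalities in 2022
ROBO_FLEET_SHARE = 0.5         # suppose half of all driving is done by your robo-cars

def robo_deaths(safety_factor: float) -> float:
    """Deaths still on your ledger if your cars are `safety_factor` times
    safer than human drivers over the same share of driving."""
    human_baseline_for_share = US_ROAD_DEATHS_2022 * ROBO_FLEET_SHARE
    return human_baseline_for_share / safety_factor

# "50% better" = half the deaths (2x), then one and two more orders of magnitude.
for factor in (2, 20, 200):
    print(f"{factor:>3}x safer than humans -> ~{robo_deaths(factor):,.0f} deaths per year")
```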
The math is easy, the human element is hard, and the legal ramifications perhaps impossible.
The medical angle is even stickier. I have no doubt that AI will quickly be better at diagnosing most diseases than your average primary care physician. Maybe it already is. But that’s still going to mean a lot of misdiagnoses, because human doctors get it wrong all the time. Malpractice lawsuits are already one of the key contributors to healthcare costs, through both insurance premiums and settlements.
How many misdiagnosed patients could Healthcare AI handle before the malpractice lawsuits sink the business? A dozen? A hundred?
Which gets us to the lowest level of criticality out of these three examples: Customer service. We played around with a few systems in this space at 37signals, and it was kinda awesome to see the AI handle even hard cases with aplomb in many instances. But it also got a bunch of answers wrong. Sometimes really wrong.
What’s a tolerable error rate for having a robot tell your customers some nonsense about your product? Nonsense that might make them upset enough to tell another 10 people never to try your product again? I don’t know! But it’s probably not 5%. Maybe it’s not even 1%. Maybe the customer service robot actually has to get down to a 0.01% error rate, a hundred times better than a human who gets it wrong 1% of the time, before the psychology of the equation works.
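Here’s an equally crude sketch of why those thresholds feel so different. The support volume and the word-of-mouth fanout are made-up illustration numbers; only the error rates come from the paragraph above.

```python
# Toy model of the word-of-mouth damage at different error rates. The support
# volume and the fanout are made-up illustration numbers; only the error rates
# come from the paragraph above.

TICKETS_PER_YEAR = 100_000   # hypothetical support volume
UPSET_FANOUT = 10            # each badly-wronged customer warns ~10 others

def people_warned_off(error_rate: float) -> int:
    """Rough count of people told 'never try this product' per year."""
    return round(TICKETS_PER_YEAR * error_rate * UPSET_FANOUT)

for label, rate in [("5% errors", 0.05),
                    ("1% errors (human-level)", 0.01),
                    ("0.01% errors", 0.0001)]:
    print(f"{label:>24}: ~{people_warned_off(rate):,} people warned off per year")
```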
I find that fascinating. That we humans can look at a situation where answer A is clearly better than answer B on a litany of objective measures, and still go with B, because it’s psychologically compatible with our mental constitution.
Maybe this is just a phase. Maybe once AI is adopted widely enough, we’ll learn to love our robot helpers, and we’ll start showing them some semblance of the sympathy we’d show their human counterparts.
But also, maybe not. Maybe fallible humans have an inherent advantage over AI by being forgivable? We’ll see.