TODAY'S RAMBLINGS
<3 Minute Read
No, this is not one of my hectoring social justice rants. You are welcome in advance.
Instead, today is about technology, and how our natural aversion to risk is going to slow the future. I will use two examples, and see if you don't agree that some gee-whiz tech is about to bump up against the caution we share as a species.
Both examples are deeply connected to the freak-out du jour, artificial intelligence. I've written previously that I think the hype is justified this time, and I only feel more strongly now. Indeed, the more I study and also use AI products, the more I am convinced its impact will be very large and very unpredictable.
Some are already hysterical about AI-run-amok scenarios. While in truth the chance of that occurring in our lifetimes is vanishingly small, that's not my real concern. Actually, it's the opposite.
I believe our natural aversion to risk will unnecessarily slow (but of course, not stop) the progression of what we can do with AI - in fields like medicine and transportation.
Medicine
Human error. The term is used in many domains, and it certainly has its place in the medical arts. We've all heard the stories of the wrong limb being amputated, the wrong drug being prescribed, or the wrong tooth being pulled. It happens, and it happens because humans make mistakes.
I mention that because one of the most promising areas of artificial intelligence is its application to medicine. In one of many examples, AI is being used to identify and, more importantly, describe the 200 million+ proteins that make up what we know as life. This will lead to a raft of breakthroughs in our understanding of, and ability to treat, a variety of diseases. And it is estimated the process would have taken literally a billion years without AI.
That's all well and good, but most of us have no idea how drugs, or really anything else in medicine, work. Or care. Because until now, there's been no alternative to seeing a doctor - however virtual the experience may have become recently.
But what's going to happen when AI-driven medical advice is available? Telehealth is already widely accepted, a trend fueled by the pandemic, but also by common sense. Yet if AI engines can already diagnose human ailments better than humans themselves, why keep the middleman, a.k.a. "the doctor"?
And there you have it. AI engines - now, or at worst within a year - will be able to radically outperform a given doctor in most fields, at least in terms of diagnosis and ideas for treatment. What non-emotional value does a human doctor add in that world?
At least it will be a while before robots can perform surgery. Maybe.
But the first time it can be proven that someone followed the advice of a reputable AI medical engine and it went very wrong, there will be calls to shut down AI-powered healthcare. In fact, I guarantee it - despite human doctors misdiagnosing patients (and worse) everywhere as I type this.
And knowing the US as I do, the efforts will probably be successful and/or become another front in our mindless culture war. And make a lot of money for the American Medical Association's lobbyists regardless.
Self-Driving Cars
Over-promise, under-deliver much? Years ago, dear leader Musk told us drivers would mostly be a thing of the past by now. He was off by just a bit - like at least a decade.
For one, autonomous vehicles are extraordinarily complex, on many levels. But that's not why we don't see a Cruise or a Waymo or a Zoox everywhere.
We don't have autonomous vehicles everywhere because we demand that they be perfect, or nearly so. While we accept tremendous risk when we ourselves are driving (42,939 US traffic deaths in 2021), if an autonomous vehicle kills a single person, it's huge news, and the viability of the whole endeavor is called into question. But why?
I am not clear on what the threshold is. It should be obvious that a self-driving car will be superior to a human in nearly every circumstance - it doesn't get sleepy, distracted, or drunk. Yet I think it's safe to say that if 1,000 people in the USA were killed annually in autonomous vehicles, there would be an immediate cry for their ban.
But that still doesn't make sense: sticking with my hypothetical, wouldn't 41,939 fewer deaths be a great thing?
Yes, it would be - but that's not how it will play out: we may not be tolerant enough for the future.
Epilogue: there were approximately 47,000 gun deaths in 2021 in the USA. If you want to understand America's malaise, know that we'll ban self-driving cars before assault rifles.
FROM THE UNWASHED MASSES
We're getting ready for our departure Saturday. We've made the camping meal assignments for Salt Point, and the doyenne of Del Webb herself, Lauren Ryder, indicated her 60th birthday party is a go:
Looking forward to celebrating with you!
That Lauren - she's nothing if not demure.
Thank you to anyone who is reading this newsletter.
KLUF
Here is the song "Built for the Future", from the very underrated Fixx.
Fun Fact: I snapped the photo myself, on 8/17/2016 at The Independent here in SF. And yes, Hunter Deuce was there. We somehow got that spot, basically onstage - a vantage point I have not had before or since, at least not for an act of the stature of The Fixx.