Andrew Nelson

January 23, 2024

Merging with Artificial General Intelligences

Yesterday, Scott Alexander released a piece on human primacy: whether humanity could, and should, be replaced by a superior species such as artificial general intelligences. You may have heard this argument from "e/accs" (Effective Accelerationists) in the stop/pause/accelerate debates around artificial general intelligence research. I'm still figuring out where I stand on the stop/pause/accelerate side of things, so I won't comment there yet. But on the e/acc viewpoint, I absolutely disagree that we shouldn't even try to preserve humanity in the face of a “superior” species. Maybe that makes me a "speciesist". That's cool with me.

I enjoyed his arguments and questions, and found it an interesting take overall. One part of the post stood out to me:
"Will AIs and humans merge?"

and his "most confident" response
"Not by default... In millennia of invention, humans have never before merged with their tools. We haven’t merged with swords, guns, cars, or laptops. This isn’t just about lacking the technology to do so - surgeons could implant swords and guns in people’s arms if they wanted to. It’s just a terrible idea."

I strongly disagree with this premise. Even though most of us haven't physically merged with technology for non-medical purposes (with exceptions), that doesn't mean parts of our brains haven't merged with the external world in some way, including its tools.

If you go back to your hometown and taste the tap water, or experience the unique smell of your childhood home's carpets and basement, you will surely be able to access memories from childhood more clearly. Extending this to tools: I hope I'm not the only one who has been unsure where a cafe or museum was in relation to my current location, only for its relative location to click into place the moment I saw a map.

When I wear a suit, I feel more confident and a stronger sense of belonging in “business” environments. In The Art of Learning, Josh Waitzkin describes using trigger songs to get himself ready to fight.

Our brains and nervous systems adapt to the contextual clues of our environment. I would call this merging because if you remove the external stimuli, cue, or reference, the function which was using it is negatively impacted or completely unusable (including even awareness that it ever existed, in the case of long-buried memories or skills).

And who’s to say this isn’t already happening with machine learning tools? Personalisation algorithms already exploit an ML model of our preferences to shape what we see. Even as I write this, part of my brain is ready to ask ChatGPT to generate an image if I can’t think of a more relevant photo or cartoon that already exists.

So what makes merging with an AGI a different story? Is it consciousness? If so, then several theories in psychology would argue this already happens naturally (e.g. internalising the voices of people in our lives). Is it the agency of the consciousness or tool being merged? If so, then any persuasive material (e.g. news, film, books) should also be considered to have an impact on our goals.

It seems like merging with an AGI is a question of degree rather than of kind, given that we’re already allowing these other tools and environmental factors to influence our thinking. The question then becomes whether to let in more influence than we already do, and that’s a very personal choice.

With thanks to: Scott Alexander for sharing his POV (even though I don't actually know him)

Written from:
the gym, as ChatGPT interpreted it based on what I was listening to
[DALL·E image, 2024-01-23: a gym designed in a style inspired by the album art and music]


While listening to:
[screenshot: IMG_8231.PNG]