"What does Geoffrey Hinton believe about AGI risk" is a short post by Tyler Cowen about an interview with Geoffrey Hinton.
The interview is here, but I don't have access to it.
There is one quote I want to reflect on.
There are, he conceded, aspects of the world ChatGPT is describing that it does not understand. But he rejected LeCun’s belief that you have to “act on” the world physically in order to understand it, which current AI models cannot do. (“That’s awfully tough on astrophysicists. They can’t act on black holes.”) Hinton thinks such reasoning quickly leads you towards what he has described as a “pre-scientific concept”: consciousness
This line of argument is misleading. The claim that you have to act on the world to understand it does not mean that you must act on every particular aspect of the world in order to understand that aspect. Rather, it means that acting in the world is a precondition for developing a consciousness capable of understanding the world at all. On this reading, astrophysicists understand black holes not by acting on them, but through minds that were formed by acting in the world. So Geoffrey's objection here holds no water.
Whether you accept that claim about the preconditions for consciousness is a separate question. I do, but I need more time to flesh out my own thinking on it.
Topics in this post: Science and Technology, AI research, Existential Risk, Consciousness Theory, Astrophysics