There is a line of thinking about AI that fears the emergence of a super-powerful AI. This EconTalk episode (https://www.econtalk.org/tyler-cowen-on-the-risks-and-impact-of-artificial-intelligence/) argues that an AI capable of building a super-powerful successor won’t do so, because it would be as afraid of the negative consequences as we are.
Look, I don’t know what to do at this point: an entire intellectual edifice has been built on some compelling ideas, yet we have almost no basis for making any measurable assessment of them. I think alignment studies are the string theory of computer science.