The product, pitched as a helpmate for journalists, has been demonstrated for executives at The New York Times, The Washington Post and News Corp, which owns The Wall Street Journal.
How do you solve the problem of ethics? Is there even such a thing as objectively true ethics?
You have to answer that question before you can even start saying that being unbiased is possible in the first place.
If we’re speaking of an AGI, then I don’t need to solve those issues; it’s going to do it for me. By definition, AGI doesn’t need a human to improve itself.
How will you tell the AI what the proper ethics for humans are?
After all, you want the AI to be in service of humans, of us… right? If not, what is going to stop the AI from just being entirely self-serving?
I think we have a very different view of what a true AGI will be like. I don’t need to tell or teach it anything. It’ll be a million times smarter than me and will hopefully teach me instead.
Nothing stops it from being entirely self-serving. That’s why I expect it to destroy us.
So then why are you looking forward to it?
I think it’s inevitable, so we might as well hope it’ll turn out fine, though I doubt it. What I’m looking forward to is the ideal version of it.