"When we have robots that can do what people and animals do, they will be incredibly useful."
Some people "should absolutely *not* be allowed to develop digital superintelligence..."
In this case, he's talking about a universe other than our own.
Get ready for humanity 2.0.
What happens when we manage to expand the intelligence of our human-machine civilization a billionfold?
"AI will be the pivotal technology in achieving [human] progress. We have a moral imperative to realize this promise while controlling the peril."
We could put a chip in robots' brains to shut them off if they start to get murderous.
Ubiquitous as singularities are, making sense of them is often a challenge.
It's no longer a question of whether we could, but whether we should.
Can tech companies be trusted to self-regulate?