Geoffrey Hinton, one of the so-called “godfathers” of artificial intelligence, urged governments on Wednesday to step in and make sure that machines do not take control of society.
Hinton made headlines in May when he announced that he was quitting after a decade of work at Google to speak more freely on the dangers of AI, shortly after the release of ChatGPT captured the imagination of the world.
The highly respected AI scientist, who is based at the University of Toronto, was speaking to a packed audience at the Collision tech conference in the Canadian city.
The conference brought together more than 30,000 startup founders, investors, and tech sector workers, most of them looking to learn how to ride the AI wave rather than to hear a lesson on its dangers or a call for government meddling.
“Before AI is smarter than us, I think the people developing it should be encouraged to put a lot of work into understanding how it might try and take control away,” Hinton said.
“Right now there are 99 very smart people trying to make AI better and one very smart person trying to figure out how to stop it from taking over, and maybe you want to be more balanced,” he said.
Hinton warned that the risks of AI should be taken seriously.
“I think it’s important that people understand that this is not science fiction, this is not just fearmongering,” he insisted. “It is a real risk that we must think about, and we need to figure out in advance how to deal with it.”
Hinton also expressed concern that AI would deepen inequality, with the massive productivity gains from its deployment flowing to the rich rather than to workers.
“The wealth isn’t going to go to the people doing the work, it is going to go into making the rich richer and not the poorer and that’s a very bad society,” he added.
He also pointed to the danger of “fake news” created by ChatGPT-style bots and said he hoped that AI-generated content could be marked in much the same way that central banks watermark cash.
“It’s very important to try, for example, to mark everything that is fake as fake. Whether we can do that technically, I don’t know,” he said.