Google boss Sundar Pichai said last week that concerns over malicious applications of artificial intelligence are "very legitimate."
In an interview with the Washington Post, Pichai said AI tools require ethical guidelines and companies need to think carefully about how technology can be abused.
"I think technology needs to realize that it simply can not be built and then repaired," said Pichai, who was fresh from his testimony before the state legislature. "I think that does not work."
Tech giants need to ensure that artificial intelligence does not harm humanity, Pichai said.
The executive, whose company uses AI in many of its products, including its powerful search engine, said he is optimistic about the technology's long-term benefits, but his assessment of AI's potential downsides parallels that of critics who have warned about misuse and abuse.
Lawmakers and technologists have warned that AI could, among other things, embolden authoritarian regimes, strengthen mass surveillance, and spread misinformation.
SpaceX and Tesla founder Elon Musk once said that AI may prove to be "far more dangerous than nuclear weapons".
Google's work on Project Maven, a military AI program, sparked protest among its employees and led the tech giant to announce that the work would not continue after the contract expires in 2019.
"Sometimes I'm worried that people underestimate the scale of the changes that are possible in the medium to long term, and I think the questions are actually pretty nice complex," he told the Post. Other tech companies, such as Microsoft, have embraced the regulation of AI – both by the companies that develop the technology and by the governments that monitor their use.
But AI, if handled properly, could have "tremendous benefits," Pichai said, including helping physicians diagnose eye diseases and other ailments through automated health data scans.
"The beginnings are tough, but I think companies should regulate themselves," he told the newspaper. "That's why we've tried to articulate a set of AI principles. We may not have done everything right, but we thought it was important to start a conversation."
Pichai, who joined Google in 2004 and became chief executive eleven years later, in January called AI "one of the most important things humanity is working on" and said it could prove "more profound" for human society than "electricity or fire."
However, the race to build machines that can operate on their own has rekindled fears that Silicon Valley's culture of disruption could produce technologies that harm people and eliminate jobs.