Google, a company whose motto used to be "don't be evil," has recently had its ethics called into question over its insistence on developing AI for the Pentagon. If you're one of the many people who don't understand why the Mountain View company is risking reputational damage, you're not alone.
It's not the money. According to a report by Gizmodo, Google receives around $9 million for the work. Sure, for most of us that would be a lifetime's earnings, but let's not forget that Google is worth almost a trillion dollars. It can afford to skip a project that doesn't live up to its ethical standards.
And it's certainly not the prestige: you don't hear many experts calling on big tech companies to get deeper into the military.
The defense that the project is limited to "non-offensive" uses smacks of the useless "guns don't kill people" argument. Besides, the particular concern in this situation is that the military will develop an AI that doesn't need human guidance to kill people; the worry is that the AI itself will decide whom to kill. Whether it uses guns, bombs, lasers, or robotic kung-fu doesn't matter.
TensorFlow is an open-source platform. It strains the imagination to think the Pentagon has no staff who could execute a simple image-processing project without the help of Google's engineers. Actually, I'll come right out and say it: that claim doesn't hold up.
In the past, as an information systems technician with the US Navy, I worked with both military and civilian computer specialists at the Pentagon, and with others who were assigned to the Pentagon but stationed elsewhere. I believe the US military is more than qualified to handle its own TensorFlow problems.
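To make the point concrete: the kind of image-recognition task at the heart of Project Maven rests on models that any competent team can stand up with TensorFlow's public API. The following is a minimal sketch, not anything resembling the actual Maven system; it trains a tiny convolutional classifier on random stand-in data (a real project would use labeled imagery), just to show how little of this requires Google's engineers.

```python
import tensorflow as tf

# A tiny convolutional image classifier built entirely from
# TensorFlow's open-source Keras API.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Random stand-in images and labels -- purely illustrative.
x = tf.random.normal((32, 28, 28, 1))
y = tf.random.uniform((32,), maxval=10, dtype=tf.int32)
model.fit(x, y, epochs=1, verbose=0)

# One probability vector per input image.
preds = model.predict(x, verbose=0)
print(preds.shape)  # (32, 10)
```

Everything above is documented, public, and free; the hard part of any such project is the labeled data and the operational plumbing, not access to the framework.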
It feels like a situation where we do not have all the facts.
Government works with private companies all the time; this is nothing new. It's probably safe to say that most people, with the exception of some extremists, aren't angry that Google has the audacity to work with the military. Instead, the outrage is over the fact that both entities decided to proceed without addressing any of our fears.
Project Maven's purpose has been somewhat obscured in the wave of coverage it has received. It is not a project dedicated to searching drone footage; that was just its first mission. It was originally called the Algorithmic Warfare Cross-Functional Team. Nor is it a one-off deal relying on Google's help: the Mountain View company is part of early tests to determine how far the government can adapt private-sector AI for military purposes.
The extent of Google's help remains unknown. While it maintains it isn't working on weapons, thousands of its own employees are uncomfortable with even the level of involvement they do know about. They've signed a petition asking the company to terminate the contract, and at least a dozen have quit.
Building AI is not the same as building a knife. While both can be used for good or evil, you can't program a knife to kill certain kinds of people without it being wielded by a human. There should be ethical responsibility on the part of the US government and private companies to regulate the development and use of AI, especially when it comes to warfare.
But instead of pausing to deal with these issues, the government and Google are pressing on with a quasi-secret mission tied to a larger project that no one is really discussing.
I question the motives of a Google that can explain exactly how DeepMind's AI beat the world's greatest Go players, yet can't find the words to explain its part in a government project to build a neural network for image recognition.
And as a citizen and veteran, I question the motives of a federal government that won't reveal the nature of an unclassified program that, by its own account, spends millions of taxpayer dollars on an open-source AI platform.
We reached out to Google and to the Pentagon's public relations department; neither responded to requests for comment.