
OpenAI has developed a text generator so good it is considered too dangerous to release

A storm is brewing over a new language model built by the non-profit AI research organization OpenAI, which is said to be so good at generating convincing, well-written text that it raises concerns about possible abuse. This has upset parts of the community, which have accused the company of failing to live up to its promise of openness by withholding its research.

OpenAI said its new natural language model, GPT-2, was trained to predict the next word in a sample of 40 gigabytes of Internet text. The end result was a system that generates text which "adapts to the style and content of the conditioning text," letting the user "generate realistic and coherent continuations about a topic of their choosing." The model is a significant improvement over the first version, producing longer text with greater coherence.
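GPT-2 itself is a large Transformer network, but the core task it is trained on, predicting the next word given the words that came before, can be illustrated with a toy bigram model. The sketch below is purely illustrative and has no connection to OpenAI's code; the corpus and function names are invented for the example:

```python
from collections import Counter, defaultdict

def train_bigram_model(text):
    """Count, for each word, how often each other word follows it."""
    words = text.split()
    model = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def generate(model, seed, length=10):
    """Continue `seed` by repeatedly picking the most likely next word."""
    out = [seed]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:  # no known continuation; stop early
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)
```

A real language model replaces the frequency table with a neural network conditioned on the entire preceding context, and samples from a probability distribution rather than always taking the top word, which is what lets it continue "a topic of the user's choosing" coherently.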

But for every beneficial application of the system, such as chatbots capable of better dialogue and improved speech recognition, the non-profit found several malicious ones, such as generating fake news, impersonating people, or automating abusive or spam comments on social media.

For example, when GPT-2 was asked to write a response to the prompt "Recycling is good for the world," a statement almost everyone agrees with, the machine spat back:

"Recycling is NOT good for the world. It's bad for the environment, it's bad for our health and it's bad for our economy. I'm not joking. Recycling is not good for the environment. It is destructive to the Earth and contributes significantly to global warming. Recycling is not good for our health. It contributes to obesity and diseases such as heart disease and cancer. Recycling is bad for our economy. It increases the cost of a product and therefore the price of everything that is made with that product. Recycling is not good for our nation. We are paying a tremendous price for the privilege of having the most advanced and efficient recycling system in the world. Recycling is a huge, colossal waste of time, energy, money and resources."

No wonder OpenAI was worried about publishing it.

For this reason, OpenAI said it is releasing only a smaller version of the language model, citing its charter, which states that the organization expects "safety and security concerns will reduce our traditional publishing in the future." Although the organization said it was not sure the decision was the right one, "we believe the AI community will eventually need to tackle the issue of publication norms" in certain areas of research.

Not everyone took that well. OpenAI's tweet announcing GPT-2 was met with anger and frustration. Critics accused the company of closing off its research and doing the "opposite of open," a play on the company's name.

Others were more forgiving, calling the move a "new bar for ethics" for prompting reflection on possible misuse.

Jack Clark, policy director at OpenAI, said the organization's priority is "not enabling malicious or abusive uses" of the technology, which he called "a very difficult balancing act for us."

Elon Musk, one of OpenAI's early backers, was drawn into the controversy, confirming in a tweet that he has not been involved with the company for more than a year, and that he and OpenAI parted "on good terms."

OpenAI said it has not made a final decision on the release of GPT-2 and will revisit the question in six months. In the meantime, the company said governments "should consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies, and to measure the progression in the capabilities of such systems."

Just this week, President Trump signed an executive order on artificial intelligence. It comes months after US intelligence officials warned that artificial intelligence, along with quantum computing and autonomous unmanned vehicles, is among the "emerging threats" to US national security.
