It's time for another installment in our current outlook on the future, brought to you by the increasingly worrying possibilities of artificial intelligence. Everyone knows about the problem of fake news online, and now the non-profit OpenAI, backed by Elon Musk, has developed an AI system that can produce fake news content so convincing that the group is too wary to release it publicly. The researchers have shown a small part of what they built, so they aren't hiding it entirely – but the group's concern is clearly felt here.
"Our model, GPT-2, was simply trained to predict the next word in 40GB of Internet text," reads a new OpenAI blog post about the effort. "Due to our concerns about harmful applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for experimentation, along with a technical paper."
Basically, the GPT-2 system was "trained" by feeding it 8 million web pages until it reached the point where it could look at a given passage of text and predict the words that might come next. According to the OpenAI blog, the model is "chameleon-like", adapting to the style and content of whatever text it is given.
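To make the "predict the next word" objective concrete, here is a deliberately tiny sketch. GPT-2 itself is a large Transformer trained on 40GB of text; the snippet below is only a toy stand-in that counts which word follows which in a small hand-made corpus (the corpus and function names are illustrative assumptions, not anything from OpenAI's code), then greedily continues a prompt the same basic way a language model does.

```python
from collections import Counter, defaultdict

# Toy illustration of GPT-2's training objective: learn which word
# tends to follow each word in a corpus, then "continue" a prompt.
# (GPT-2 is a large Transformer; this bigram counter only sketches
# the next-word-prediction idea.)

corpus = (
    "the unicorns spoke perfect english . "
    "the unicorns lived in a remote valley . "
    "the scientists discovered the unicorns ."
).split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return follows[word].most_common(1)[0][0]

def continue_text(prompt_word, length=4):
    """Greedily extend a one-word prompt, one predicted word at a time."""
    words = [prompt_word]
    for _ in range(length):
        words.append(predict_next(words[-1]))
    return " ".join(words)

print(continue_text("the"))  # continues the prompt word by word
```

The real model replaces these raw bigram counts with a neural network that conditions on the entire preceding text, which is why its continuations can stay coherent over whole paragraphs instead of just word pairs.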
Here is an example. The AI system was given this human-written text prompt:
"In a shocking finding, scientists discovered a herd of unicorns living in a remote, previously unexplored valley in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English."
The AI system then continued the "story" (after 10 attempts), beginning with this AI-generated text: "The scientist named the population, after their distinctive horn, Ovid's Unicorn. These four-horned, silver-white unicorns were previously unknown to science. Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved." (You can read the rest of the unicorn story the AI system worked out on the OpenAI blog linked above.)
Imagine what such a system could do if, for example, it were let loose on the story of a presidential campaign. This is why, according to OpenAI, only a very small portion of the GPT-2 sampling code is being publicly released. No datasets, training code, or GPT-2 model weights are being published. Again, from the OpenAI blog: "We are aware that some researchers have the technical capacity to reproduce and open-source our results. We believe our release strategy limits the initial set of organizations who may choose to do this, and gives the AI community more time to have a discussion about the implications of such systems.
"We also think governments should consider expanding or commencing initiatives to more systematically monitor the societal impact and diffusion of AI technologies, and to measure the progression in the capabilities of such systems," the OpenAI blog post concludes.