Anonymous ID: 56688e Feb. 17, 2019, 6:05 p.m. No.5233639   >>3659 >>3690

https://bgr.com/2019/02/17/openai-fake-news-elon-musk-too-scary-release/


Here’s an example: The AI system was given this human-generated text prompt:

“In a shocking finding, scientist discovered a herd of unicorns living in a remote, previously unexplored valley, in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.”

From that, the AI system — after 10 tries — continued the “story,” beginning with this AI-generated text:

“The scientist named the population, after their distinctive horn, Ovid’s Unicorn. These four-horned, silver-white unicorns were previously unknown to science. Now, after almost two centuries, the mystery of what sparked this odd phenomenon is finally solved.” (You can check out the OpenAI blog at the link above to read the rest of the unicorn story that the AI system fleshed out.)
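For a sense of what "after 10 tries" means in practice, the sketch below draws ten sampled continuations of the same unicorn prompt and prints them so a human can pick the most coherent one. This is not OpenAI's own sampling code; it is a minimal sketch assuming the Hugging Face transformers library and the small "gpt2" checkpoint that was later made public, with top-k sampling settings chosen only for illustration.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Load the small publicly released GPT-2 checkpoint (assumption: "gpt2" via Hugging Face).
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = (
    "In a shocking finding, scientist discovered a herd of unicorns living in a "
    "remote, previously unexplored valley, in the Andes Mountains."
)
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        do_sample=True,           # sample tokens instead of greedy decoding
        top_k=40,                 # restrict each step to the 40 most likely tokens (illustrative value)
        max_length=200,           # total length in tokens, prompt included
        num_return_sequences=10,  # ten candidate continuations, mirroring the "10 tries"
        pad_token_id=tokenizer.eos_token_id,
    )

# Print all ten samples; the "best" continuation is then chosen by hand.
for i, seq in enumerate(outputs, start=1):
    print(f"--- sample {i} ---")
    print(tokenizer.decode(seq, skip_special_tokens=True))
```

The larger models that actually produced the unicorn story above were withheld at the time, so a sketch like this only approximates the quality of the published sample.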

Imagine what such a system could do if set loose on, say, a presidential campaign story. Those implications are why OpenAI says it’s publicly releasing only a much smaller version of GPT-2, along with sampling code. It’s not releasing any of the dataset, the training code, or the full “GPT-2 model weights.” Again, from the OpenAI blog announcing this: “We are aware that some researchers have the technical capacity to reproduce and open source our results. We believe our release strategy limits the initial set of organizations who may choose to do this, and gives the AI community more time to have a discussion about the implications of such systems.”