Anonymous ID: 48f896 Feb. 14, 2019, 4:30 p.m. No.5177721

Hillary, Soros, and deep state blaming AI?


In 2015, car-and-rocket man Elon Musk joined with influential startup backer Sam Altman to put artificial intelligence on a new, more open course. They cofounded a research institute called OpenAI to make new AI discoveries and give them away for the common good. Now, the institute’s researchers are sufficiently worried by something they built that they won’t release it to the public.

The AI system that gave its creators pause was designed to learn the patterns of language. It does that very well—scoring better on some reading-comprehension tests than any other automated system. But when OpenAI’s researchers configured the system to generate text, they began to think about their achievement differently.

“It looks pretty darn real,” says David Luan, vice president of engineering at OpenAI, of the text the system generates. He and his fellow researchers began to imagine how it might be used for unfriendly purposes. “It could be that someone who has malicious intent would be able to generate high-quality fake news,” Luan says.

That concern prompted OpenAI to publish a research paper on its results, but not release the full model or the 8 million web pages it used to train the system. Previously, the institute has often disseminated full code with its publications, including an earlier version of the language project from last summer.

OpenAI’s hesitation comes amid growing concern about the ethical implications of progress in AI, including from tech companies and lawmakers.

Google, too, has decided that it’s no longer appropriate to innocently publish new AI research findings and code. Last month, the search company disclosed in a policy paper on AI that it has put constraints on research software it has shared because of fears of misuse. The company recently joined Microsoft in adding language to its financial filings warning investors that its AI software could raise ethical concerns and harm the business.

OpenAI let WIRED play with its text generator, via a web page that lets you type in text the system uses like a writing prompt. The results could be garbled, but the way the system riffed on prompts such as song lyrics, poems, and phrases like “here is a recipe for yogurt pancakes” confirmed Luan’s view that the output can look pretty darn real.
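The generator WIRED describes conditions its output on a typed prompt and then extends it word by word. OpenAI's actual model is a large neural network that was not released, so the bare idea can only be sketched here with a toy stand-in: a word-level bigram Markov chain in Python. The corpus, function names, and the yogurt-pancakes prompt below are illustrative assumptions, not anything from OpenAI's system.

```python
import random

# Toy sketch (NOT OpenAI's model): a word-level bigram Markov chain
# showing prompt-conditioned generation. The real system described by
# WIRED is a neural network trained on millions of web pages; this
# only illustrates "learn which words follow which, then continue a prompt".

def train_bigrams(corpus):
    """Map each word to the list of words observed following it."""
    words = corpus.split()
    model = {}
    for current, nxt in zip(words, words[1:]):
        model.setdefault(current, []).append(nxt)
    return model

def generate(model, prompt, length=10, seed=0):
    """Continue the prompt by repeatedly sampling an observed next word."""
    rng = random.Random(seed)
    out = prompt.split()
    for _ in range(length):
        candidates = model.get(out[-1])
        if not candidates:
            break  # no observed continuation for this word
        out.append(rng.choice(candidates))
    return " ".join(out)

# Hypothetical tiny training corpus, echoing the article's example prompt.
corpus = ("here is a recipe for yogurt pancakes and here is a recipe "
          "for banana bread that anyone can bake at home")
model = train_bigrams(corpus)
print(generate(model, "here is a recipe", length=6))
```

With a corpus this small the continuations are near-verbatim copies of the training text; the article's point is that a model trained on 8 million web pages produces continuations fluent enough to pass for human writing, which is what worried the researchers.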

However, it didn’t take much to get the system to unspool paragraphs that resembled the work of disinformation artists. When WIRED typed in the phrase “Hillary Clinton and George Soros,” OpenAI’s system wrote the following:

“…are a perfect match, and their agenda appears to be to create a political movement where Soros and his political machine and Clinton are two of the only major players. This is the first time Soros and Clinton have been caught on tape directly colluding in promoting the same false narrative. One of the key revelations in the leaked audio was Clinton's admission to a Russian banker that she knew about the Uranium One deal before it was approved by Congress. Clinton was shown sharing the same talking points that were originally drafted by a Fusion GPS contractor hired by an anti-Trump Republican donor. The leaked audio is the clearest evidence yet that the Clinton campaign and the Hillary Foundation colluded with Fusion GPS to manufacture propaganda against President Trump.”


https://www.wired.com/story/ai-text-generator-too-dangerous-to-make-public/