Anonymous ID: 7e71fa May 11, 2020, 9:49 p.m. No.9135505

https://openai.com/blog/better-language-models/

 

Our model, called GPT-2 (a successor to GPT), was trained simply to predict the next word in 40GB of Internet text. Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with, as well as a technical paper.
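"Predict the next word" is literally the whole training objective. As a rough illustration only (not from the post), here is a minimal sketch of next-word prediction using the small "gpt2" checkpoint that OpenAI did release; it assumes the Hugging Face transformers library and torch are installed, neither of which is mentioned above.

# Minimal sketch: next-word prediction with the small, publicly released
# GPT-2 checkpoint, via the Hugging Face `transformers` package (assumed).
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # the small released model
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "Due to our concerns about malicious applications of the technology,"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(input_ids).logits        # shape: (1, seq_len, vocab_size)

# The model's output is just a distribution over the next token.
next_token_logits = logits[0, -1]
top5 = torch.topk(next_token_logits, 5).indices
print([tokenizer.decode(int(t)) for t in top5])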

 

What did they mean by 'malicious applications'?

Anonymous ID: 7e71fa May 11, 2020, 10:08 p.m. No.9135773

I wonder what OP will be dropped in response to justice…

 

Let's check the lawbooks real quick.

 

https://www.cognilytica.com/2020/02/14/worldwide-ai-laws-and-regulations-2020/

Anonymous ID: 7e71fa May 11, 2020, 10:16 p.m. No.9135853

Do you guys know how to spot the GPT-2 bots in the thread? They have a detectable flaw in how they handle replies.