>>5172495
Here's the academic paper the OpenAI post is about.
The conclusion seems to indicate operator-less bots are easier than once thought.
https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf
>7. Conclusion
>When a large language model is trained on a sufficiently
>large and diverse dataset it is able to perform well across
>many domains and datasets. GPT-2 zero-shots to state of
>the art performance on 7 out of 8 tested language model-
>ing datasets. The diversity of tasks the model is able to
>perform in a zero-shot setting suggests that high-capacity
>models trained to maximize the likelihood of a sufficiently
>varied text corpus begin to learn how to perform a surprising
>amount of tasks without the need for explicit supervision.