Harmless unicorns considered dangerous? An experiment with GPT-2 from R
When, in February of this year, OpenAI presented GPT-2 (Radford et al. 2019), a large Transformer-based language model trained on an enormous amount of web-scraped text, their announcement attracted great attention, and not just in the NLP community. This was mainly due to two facts. First, the samples of generated text were stunning….