That's fair: the statement isn't hyperbolic in its language. But remember that GPT-2 was barely coherent. I would argue that in making this statement, OpenAI was trying to impart a sense of awe and danger designed to attract exactly the kind of attention it got, and that the company has repeatedly invoked danger to lend its products a sense of momentousness. (And to further what is now a pretty transparent effort to monopolize the tech through regulatory intervention.)
> (And to further what is now a pretty transparent effort to monopolize the tech through regulatory intervention.)
I disagree here as well: the company has openly acknowledged that this is a risk to be avoided when it comes to safety-related legislation. What they've called for looks a lot more like "we don't want a prisoner's dilemma that drives everyone to go fast at the expense of safety" than "we're good, everyone else is bad".