> However, it must be emphasised that this does not include other dangers posed through the misuse of these models, such as the use of LLMs to generate fake news. Similarly, we do not contend that future AI systems could never pose an existential threat. Instead, we clarify that, contrary to prevailing narratives, the evidence from LLM abilities does not support this concern.
I find myself unconvinced that LLMs are "inherently controllable, predictable and safe."