I too expected more discussion of this. People play around with these things because they're interesting, then mostly hand-wave away concerns about the implications with "well, people will just have to learn to be skeptical of recordings." But what we're really doing is muddying a previously reliable avenue for gaining quality evidence about the world. I expect this opinion is unpopular on HN, but I think people shouldn't be developing these things, companies shouldn't be working on them, and they should be banned before they get to the point of causing real harm. I also believe that harm can be prevented by drying up funding and research, because bad actors have to rely on the body of existing work to make their bad actions practical.
As NN models get more advanced, speech synthesis will become progressively more convincing and less expensive to implement, even if the models aren't built for speech synthesis specifically. The same can be said for image generation/transformation. If we are to continue developing AI, then this is likely inevitable. There are benefits to these models, for example for people who are mute. Adversarial models can be built to detect fake audio samples. Regulation (e.g., requiring tells/signatures in commercial products) would also help. The government would have to ban most AI research, or it would only be prolonging the inevitable.
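To make the "tells/signatures" idea concrete, here's a minimal sketch of how a cooperating synthesizer could mark its output: embed a low-amplitude pseudorandom watermark derived from a shared key, then verify it later by correlation. Everything here (the key, the amplitude, the threshold, and the spread-spectrum-style scheme itself) is an illustrative assumption, not any real product's mechanism; only NumPy is used.

```python
import numpy as np

SAMPLE_RATE = 16_000
WATERMARK_KEY = 42          # hypothetical shared secret between embedder and verifier
WATERMARK_AMPLITUDE = 5e-3  # quiet relative to typical speech levels

def make_watermark(n_samples: int, key: int) -> np.ndarray:
    """Deterministic pseudorandom noise derived from the key."""
    rng = np.random.default_rng(key)
    return rng.standard_normal(n_samples)

def embed(audio: np.ndarray, key: int = WATERMARK_KEY) -> np.ndarray:
    """Add the watermark at low amplitude; the audio is barely changed."""
    return audio + WATERMARK_AMPLITUDE * make_watermark(len(audio), key)

def detect(audio: np.ndarray, key: int = WATERMARK_KEY, threshold: float = 5.0) -> bool:
    """Correlate the clip against the expected watermark.

    Without the watermark this statistic is roughly N(0, 1); with it,
    the score grows with the length of the clip.
    """
    wm = make_watermark(len(audio), key)
    score = float(audio @ wm) / (np.linalg.norm(audio) + 1e-12)
    return score > threshold

if __name__ == "__main__":
    # Stand-in for synthesized speech: a 3-second tone.
    t = np.arange(3 * SAMPLE_RATE) / SAMPLE_RATE
    speech = 0.1 * np.sin(2 * np.pi * 220 * t)
    print(detect(embed(speech)))  # True  -> carries the signature
    print(detect(speech))         # False -> no signature present
```

Of course, a watermark like this only covers cooperating vendors, and a motivated attacker can strip or drown it out, which is why it would complement rather than replace adversarial detection models.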