Usenet was a set of fiefdoms mostly administered by academics in CompSci departments, and it proved utterly unequal to its first real crisis. Distributed systems work great as long as they're new and everyone is participating in good faith most of the time. In adversarial situations, they're rarely able to adapt flexibly enough, partly because the networked structure imposes a severe decision-time penalty on consensus formation. A negligent or malicious attacker just has to overwhelm the nodes with high betweenness centrality and the whole network fails.
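To make the "high betweenness centrality" point concrete: betweenness measures how many shortest paths between other node pairs run through a given node, so a node that bridges otherwise-separate clusters scores highest and is the cheapest target for partitioning the network. Below is a minimal sketch of Brandes' algorithm for unweighted, undirected graphs; the toy topology (two triangles joined by a single bridge node `D`) and all node names are invented for illustration, not taken from any real network.

```python
from collections import deque

def betweenness(graph):
    """Brandes' algorithm: betweenness centrality for an
    unweighted, undirected graph given as {node: [neighbors]}."""
    bc = {v: 0.0 for v in graph}
    for s in graph:
        # BFS from s, counting shortest paths (sigma) and predecessors
        stack = []
        pred = {v: [] for v in graph}
        sigma = {v: 0 for v in graph}; sigma[s] = 1
        dist = {v: -1 for v in graph}; dist[s] = 0
        q = deque([s])
        while q:
            v = q.popleft()
            stack.append(v)
            for w in graph[v]:
                if dist[w] < 0:          # first visit
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:  # shortest path via v
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # accumulate dependencies in reverse BFS order
        delta = {v: 0.0 for v in graph}
        while stack:
            w = stack.pop()
            for v in pred[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # undirected graph: each pair was counted from both endpoints
    for v in bc:
        bc[v] /= 2
    return bc

# Two triangles (A,B,C) and (E,F,G) bridged only through D.
graph = {
    'A': ['B', 'C'], 'B': ['A', 'C'], 'C': ['A', 'B', 'D'],
    'D': ['C', 'E'],
    'E': ['D', 'F', 'G'], 'F': ['E', 'G'], 'G': ['E', 'F'],
}
bc = betweenness(graph)
print(max(bc, key=bc.get))  # → D: every cross-triangle path runs through it
```

Removing `D` (or drowning it in traffic) splits the graph into two components that can no longer reach each other, which is the failure mode described above: an attacker doesn't need to beat the whole network, just its few structural chokepoints.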
Immediately following a crisis everyone talks about making the network more resilient and so on, but it never fully recovers, because everyone intuitively knows that establishing consensus is slow and bumpy, and that major restructuring/retooling efforts are way easier to accomplish unilaterally. So people start drifting away, because unless there's a quick technical fix that can be deployed within a month or two, It's Over. Distributed systems always lose against coherent attackers with more than a threshold level of resources, because the latter have a much tighter OODA loop.
Exactly, and look what happened to Usenet. People abused the commons and we lost it to spam. Unmoderated networks always fall to bad actors.
I'm building a p2p social network and struggling hard with how to balance company needs, community needs, and individual freedom. A free-for-all leads to a tyranny of structurelessness, in which the loudest and pushiest form a de facto leadership that doesn't represent the will of the majority. On the flip side, overly restrictive rules stifle expression and cause resentment. These are hard questions and there is no one answer, except that unmoderated networks always suck eventually, so the question is one of line drawing and compromise.
This is what the early internet was like: Usenet, IRC, etc.