The commons were actually fairly well regulated by community norms that were well documented and established. The notion of the tragedy of the commons was quite possibly propaganda, created so that large landowners could consolidate and enclose the commons under the guise that they could manage them better, especially after traditions were disrupted.
> The commons were actually fairly well regulated by community norms that were well documented and established.
Not really. You are correctly citing the enclosure acts as a historic example, but that was neither the beginning nor the end of history. It was just a recent, location-specific moment when big English landlords won in the millennia-old power struggle between peasant and landlord.
Control of the commons - land and the resources buried in it - has been a point of contention and bloodshed for as long as recorded human history. It's a pendulum that has swung back and forth, but there have always been bad actors making personally profitable, socially impoverishing decisions.
For an alternative example of how things have gone in other places, look at the blood feuds between ranchers and farmers[1] in the American West, concerns over upstream and rainfall water rights in literally any part of the world that relies on irrigation, or the varied situations where existing landlords politically won the struggle... or lost it, in the 20th century.
As for well-established[2]...
---
[1] The enclosure acts echo this farmer/rancher dichotomy, actually. Feudal lord/serf relationships had the lords derive wealth from ever more serfs doing ever more labour-intensive agriculture on their land. The enclosure acts, however, were intended to drive the peasants from their land, because in the case of England the lords figured out that they could derive far more wealth by turning their land over to low-labour grazing land for sheep. And the way they could do that was to use the law as a cudgel to drive out their tenants at sword- and gunpoint.
[2] They were only well-established at particular points in history. Prior to William the Conqueror arriving in England and stealing all the land in it for himself and his mercenaries, there were also 'well-established' land use norms - ones that greatly limited the power and ownership of land granted to lords and petty kings. The Norman conquest turned all that over into a different 'well-established' equilibrium, which was then, again, turned over into yet another 'well-established' equilibrium after the passage of the enclosure acts.
Sadly there's a lot of truth there. Generally it's a bad idea to lend tools to people who don't know how to use them. I don't lend my tools to friends, although I make exceptions when I 100% trust the guy. This is based on experience.
But I'm happy to help, either by me going there or the friend bringing his stuff to my workshop.
This reminds me of the colorized black-and-white movies from the 90s, although I can now imagine AI being used to do that and to upscale the past, creating new hyper-real versions of it.
This looks like a good resource. There are some pretty powerful models that will run on an Nvidia 4090 with 24 GB of VRAM, such as Devstral and Qwen 3. Ollama makes it simple to run them on your own hardware, but the cost of the GPU is a significant investment. Then again, if you are paying $250 a month for a proprietary tool, it would pay for itself pretty quickly.
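The "pay for itself" claim is easy to sanity-check with back-of-the-envelope arithmetic. The $250/month figure comes from the comment above; the ~$2,000 GPU price is an assumption for illustration, not a quoted price:

```python
# Rough break-even for buying a GPU outright vs. a monthly subscription.
# gpu_cost is an assumed ballpark price for an RTX 4090, not a quote.
gpu_cost = 2000             # USD, assumption
monthly_subscription = 250  # USD/month, figure from the comment above

breakeven_months = gpu_cost / monthly_subscription
print(f"Break-even after ~{breakeven_months:.0f} months")  # ~8 months
```

Even doubling the hardware estimate for the rest of the machine still puts break-even under a year and a half, electricity aside.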
> There are some pretty powerful models that will run on an Nvidia 4090 with 24 GB of VRAM, such as Devstral and Qwen 3.
I'd caution against using Devstral on a 24 GB VRAM budget. Heavy quantisation (the only way to make it fit into 24 GB) will affect it a lot. There are lots of reports on LocalLLaMA about subpar results, especially from KV cache quantisation.
We've had good experiences running it at fp8 with the full cache, but going lower than that will hurt quality significantly.
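A rough weights-only footprint estimate shows why the 24 GB budget is so tight. This is a minimal sketch assuming a ~24B-parameter model (Devstral-class); it ignores the KV cache and activations, which need additional VRAM on top:

```python
# Weights-only memory estimate for a ~24B-parameter model (assumption)
# at different quantization levels. KV cache and activations are extra.
params_billion = 24  # assumed parameter count

for bits in (16, 8, 4):
    weight_gb = params_billion * bits / 8  # GB = params * bytes-per-param
    print(f"{bits}-bit weights: ~{weight_gb:.0f} GB")
# 16-bit: ~48 GB, 8-bit: ~24 GB, 4-bit: ~12 GB
```

Under these assumptions, fp8 weights alone already fill a 24 GB card before any context is cached, which is why fitting Devstral on a 4090 forces the heavy quantisation discussed above.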
I really enjoyed the story in chapter 14 (recursive self-improvement) about the guy who got so addicted to self-improvement that he ended up in his own meta-reality, unable to understand even himself because he was getting so much better at hacking his learning. It's a completely fabricated story with no basis in reality that I'm aware of, but man, there are a lot of bullet points to make it seem factual. What are we going to do about the worrying trend of 10x hackers self-improving so much that they can't exist in the real world?
Here's an excerpt:
"The Addiction to Acceleration
The fourth uncomfortable truth is how recursive improvement becomes compulsive. Kenji can’t stop because each day of not improving his improvement feels like stagnation. When you’re accelerating, constant velocity feels like moving backward.
This addiction manifests as:
• Inability to accept plateau phases
• Anxiety when not optimizing optimization
• Devaluing of steady-state excellence
• Compulsion to add meta-levels
• Fear of falling behind yourself
Recursive improvement can become its own trap."
I find that this criticism is far less applicable to, say, individuals, but perhaps it could be levied against the way companies are currently treating AI, which of course is where this comes from.
Honestly, there are some interesting concepts here, and broad overviews of them, but this is hardly a "book" - it's a verbose LLM document that briefly lists a lot of concepts without sufficiently or consistently fleshing them out into actual meaningful chapters. Not to say that this sort of thing isn't potentially useful, but it seems more like the starting point for an outline of a book than anything resembling a finished published work.
I have a horrible time editing my own work - decision paralysis and whatnot - but I did have the idea that a good way to practice would be editing the content of LLM-generated fictional narratives. I think the point many are making is that LLMs are useful as cognitive aids that augment thinking rather than as replacements for it. They can be used to train your mind by inspiring thoughts you wouldn't have come up with on your own.
Nice. I leverage the strengths of AI in a way that affirms the human element in the collaboration. AI as it exists in LLMs is a powerful source of potentially meaningful language but at this point LLMs don't have a consistent conscious mind that exists over time like humans do. So it's more like summoning a djinn to perform some task and then it disappears back into the ether. We of course can interweave these disparate tasks into a meaningful structure and it sounds like you have some good strategies for how to do this.
I have found that using an LLM to critique your writing is a helpful way of getting free feedback - generic, but specific to your text. I find this route more interesting than the copy-pasta AI-voiced stuff. Suggesting that the AI embody a specific type of character, such as a pirate, can make the answers more interesting than just finding the median answer - it adds some flavor to the white bread.
One of the things I found helpful for getting out of the specific/formulaic feedback was asking the LLM to ask me questions. At one point I asked a fresh LLM to read the book and then ask me questions. It showed me where there were narrative gaps and confusing elements that a reader would run into, but it didn't rely on a specific "answer" from the LLM itself.
I also had a bunch of personal stories interwoven in and it told me I was being "indulgent" which was harsh but ultimately accurate.
That's a great approach. I find LLMs work really well as Socratic sounding boards and can lead you, as the writer, to explore avenues you might otherwise not even have noticed.
In the end there are plenty of stories, but they're ones that are relevant. The story the LLM gave feedback on was about flipping a raft in the Grand Canyon; the LLM's advice was that it felt unrelated to the point I was trying to make. That made me realize I had included it more because I wanted to talk about rafting the Grand Canyon than because it was useful and entertaining to readers.
And now just think of all of the people who will be getting their knowledge from LLMs which are literally making up stuff through statistical linguistic inference on a grand scale from hearsay.
> LLMs are literally making up stuff through statistical linguistic inference on a grand scale from hearsay.
Hearsay being some personal "truth", it may be useful to know what the statistical average of that "truth" is. If we do it right, perhaps we can get the various personal errors to cancel out.
That was my read. They can now identify the species of very fragmentary bone remains via collagen protein matching. They didn't say what clues, if any, this would or could lead to.
One-party rule is almost never a good thing over the long run, as politicians tend to become more self-serving and corrupt the longer they don't have to worry about being held accountable to voters. Instead they worry more about being accountable to their party leaders and funders, who try to maintain the status quo. I'm not sure the duopoly we have in the USA is preferable to, say, a robust democracy with smaller parties forming coalitions.
Take 30-60 minutes and read up on ranked choice voting, if you aren't familiar. I talk to a lot of people about it and the idea that we can vote any other way seems foreign to most.
It won't instantly change the world, but it will allow people to vote their true conscience instead of voting strategically, and it gives smaller parties a chance at real power.