ghost creates one special private repo in your account, as a unified home to hold your config and runners for all your projects - it doesn't create a repo somewhere else and doesn't need any random secrets. If you do want to customize the config and add secrets, though, ghost does support that - put their names in the toml and it will wire them through for up.
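A minimal sketch of what that might look like - the table and key names here are illustrative guesses, not ghost's documented schema:

```toml
# Hypothetical ghost config sketch. Only the *names* of secrets are
# listed here; the values live in the private repo's secret store and
# get wired through when you run `up`.
[secrets]
names = ["DEPLOY_TOKEN", "REGISTRY_PASSWORD"]
```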
It’s an esoteric enthusiast product handmade in Germany to extreme mechanical precision. It’s a miracle they got it down to $4400… I bet they’re not making much money on this, and it’s more of a labor of love.
Exactly. I run my own Gitea server, but put my stuff on Github, because that's where the people are. Self-hosting an MP3 is not the same as being on Spotify.
I keep my open source work on github for similar reasons. I don't expect nor want to deal with contributors having to create accounts on a self-hosted forge for every individual project they work on.
Unfortunately CIE standards like the JTC22 (D8/D1) work here tend not to be released for free. Eventually the mathematical curves should be adopted by open source implementations. Hopefully this isn't encumbered by patents.
Kind of. I want to, yes, but that's not directly how this works or how it sounds. A large increase in poverty or loss of property is insufficient to stoke revolution on its own. Rising poverty that favors the rich devastates the economy for multiple reasons: opportunity contraction, less spending, loss of motivation and mobility, and more. Revolution happens when the economic loss becomes widespread enough, regardless of the bankruptcy, poverty, or homelessness rates.
The problem has to affect a majority of society. 12% sounds devastating (it is), but it is not a wide enough umbrella.
It took 25% of the nation being out of work to, not revolt, but popularly elect someone willing to spend a little government money on healthcare and welfare.
So it will get much worse before Americans finally read a book and figure out we should maybe do something different.
> So it will get much worse before Americans finally read a book and figure out we should maybe do something different.
You better forget about the books. Don't count on the media either; the abolishment of the fairness doctrine and financial incentives via corporate ownership can and will distort reality in a strata-optimized way. Social media is overrun by bots and influence ops as we speak. New threat: people will ask their LLM. Journalists will source their LLM. Next question: Who trains the LLM?¹
I read Grapes of Wrath recently on a recommendation from a friend and it’s one of the few great books I’ve read and felt was genuinely great. It feels incredibly relevant today with both inequality and automation. Would highly recommend it.
Bankruptcy won't even discharge the kind of debt many/most of the lower-middle class fall broke upon. Alimony, child support, student loans, "restitution."
This claim is simply false. The causes of bankruptcy in the U.S. have been extensively studied, and absolutely none of the debts you list comes even close to the number-one reason that people in the lower or middle class declare bankruptcy: medical bills.
No, it isn’t that well studied; and I’d be interested to see your source and confirm that it doesn’t trace back to a study that says something more like “A new study from academic researchers found that 66.5 percent of all bankruptcies were tied to medical issues —either because of high costs for care or time out of work”. (https://www.cnbc.com/2019/02/11/this-is-the-real-reason-most...)
What? You mean to tell me people file bankruptcy over the kinds of debt they can actually discharge and less so over the kinds of debt they can't?
That doesn't prove anything other than people filing bankruptcy aren't morons.
If the only thing you could discharge were gambling debts, there would be an equally specious claim that people aren't going broke over medical debt because 80% of bankruptcies cite gambling debts as the cause.
Bankruptcy won't even discharge the kind of debt many/most of the lower-middle class fall broke upon.
The whole point was that bankruptcy wasn't a remedy discharging these forms of going broke. It's unsurprising the bankruptcy data leans towards a 'cause' that will actually discharge their debt, otherwise the incentive for a broke person to file bankruptcy is lowered.
Average medical debt per person in 2020 was $430, per [0].
By comparison, in 2006 there was $2.55B in arrears in my state of Arizona, when it had ~5.5 million people, or an average of $463 per person. Not even adjusted for inflation. [1]
If you set the bar at medical debt, which you seem to have, child support alone seems to have passed it. And that is with a quite uncharitable handicap against me, since I'm comparing the 2006 child-support arrears numbers I found against 2020 dollars of medical debt.
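The comparison above is just per-capita arithmetic on the two cited figures:

```python
# Back-of-the-envelope check of the per-capita comparison above.
az_child_support_arrears_2006 = 2.55e9  # dollars in arrears, Arizona, 2006 [1]
az_population_2006 = 5.5e6              # roughly 5.5 million residents

per_person = az_child_support_arrears_2006 / az_population_2006
print(int(per_person))  # 463, vs. $430 average medical debt per person in 2020 [0]
```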
Bankruptcy at this point is just a way to signal to creditors not to lend more money to this individual. As you said, alimony, child support, student loans, and restitution are a must, so the filing is simply a formal notice that "every penny this person ever earns is already earmarked; heed this warning before lending."
It's quite convenient though that it actually discharges the kind of debts rich people and businesses are more likely to accrue, while not discharging the kind of debts the middle/lower classes are likely to accrue when they're unable to pay them.
I don’t have data on this. But I’m getting recommended YouTube videos that are 1-2 hours of AI-generated music, in the genre of background music (coffee listening and focus).
I listened to one. It was pretty good!! There’s no lyrical content, but the production was strong.
In that niche of “music you don’t really pay attention to” I predict AI generated music will only grow.
I love the original article [0]; it seems like Mark had fun building this app. It doesn't seem like he expected it to make him a billionaire. So what's the problem? If you post a recipe on a cooking blog, and someone immediately rips it for their own site, that's a bummer, but like… what are you going to do, patent it?
We software developers are so used to software being difficult, time-consuming and expensive, but that world is gone. We're now much closer to other creative arts like writing, music or photography (and sadly, we're about to be paid like artists too). In a creative field, when someone has a good idea or style, it gets copied. But that's just art.
I do really enjoy working on the site; it's great to have an outlet and playground for ideas and to do things just for fun. There never was (and never will be) any commercial angle for this. As I said in a footnote in the "Sloppy Copies" post, I have other motives for writing code, and I appreciate I am very fortunate to have the opportunity to do that.
There's always been a tendency amongst the "priesthood" of any in-group to hoard knowledge and use it to maintain their position. So, regarding the "democratizing" of creating software - I mostly agree with you, and also agree that it's probably a good thing. I think it's pretty neat that someone without any coding experience can create their own bespoke tooling to solve a problem. I have caveats and concerns, but that's a topic for another day.
I also agree with the "that's art" part of your comment. I learned to program by reading other people's code, learned to build infrastructure by watching what my peers were doing, and learned to play an instrument by listening to and copying musicians I admired. Heck, I play in a covers band!
The problem is that this isn't just someone being inspired to create their own thing and put their own spin on it, which could be cool.
Even "nice idea, I'm going to do that and see if I can charge for it" isn't really an issue, free market and all that. This is cloning and copying on an automated, industrial scale, apparently sometimes for malicious, criminal purposes too.
The article asserts that the quality of human knowledge work was easier to judge based on proxy measures such as typos and errors, and that the lack of such "tells" in AI poses a problem.
I don't know if I agree with either assertion… I've seen plenty of human-generated knowledge work that was factually correct, well-formatted, and extremely low quality on a conceptual level.
And AI signatures are now easy for people to recognize. In fact, these turns of phrase aren't just recognizable—they're unmistakable. <-- See what I did there?
Having worked with corporate clients for 10 years, I don't view the pre-LLM era as a golden age of high-quality knowledge work. There was a lot of junk that I would also classify as a "working simulacrum of knowledge work."
For me the issue is the lack of human explanation for mistakes. With a person, low quality comes from a source. Sometimes the source is lack of knowledge, sometimes time pressure, sometimes selfish goals.
Most importantly, those sources of errors tend to be consistent. I can trust a certain intern to be careful but ignorant, or my senior colleague with a newborn daughter to be a well of knowledge who sometimes misses obvious things due to lack of sleep.
With AI it's anyone's guess. They implement a paper in code flawlessly and make freshman-level mistakes in the same run. So you have to engage in the non-intuitive task of reviewing while assuming total incompetence, for a machine that shows extreme competence. Sometimes.
Absolutely. Our heuristics for judging human output are useless with LLMs. We can either trust it blindly, or tediously pick over every word (guess which one people do). I've watched this cause havoc over and over at my job (I work with many different teams, one at a time).
AI signatures don't mean low quality, they just mean AI. And humans do use them (I have always used the common AI signatures). And yes, humans produce good-looking garbage, but much more commonly they produce bad-looking garbage. This is all tangential to the point.
> Our heuristics for judging human output are useless with LLMs.
We used to call the negative side of these heuristics "code smells", but I see no one has used that term yet in these comments. Code smells are what the post is referring to, what LLMs get rid of.
It was and still is a negative filter, not a positive one. Meaning it is easy to reject work because there are typos and basic factual errors, but the absence of them is not a good measure of quality. Typically such checks are the first pass, not the only criterion.
It is valuable to have this, because if the work passes the first check then it is easier to identify the actual problems. Same reason we have code-quality and lint-style issues fixed before reasoning about the actual logic being written.
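The negative-filter idea can be sketched as a short-circuiting pipeline - the function names and checks here are illustrative stand-ins, not from any real review tool:

```python
# Sketch of a negative filter: cheap checks reject early; passing them
# says nothing positive, it only earns the work a deeper, expensive review.
def has_typos(text: str) -> bool:
    # Stand-in for a real spellcheck; flags an obvious doubled word.
    words = text.lower().split()
    return any(a == b for a, b in zip(words, words[1:]))

def cheap_checks(text: str) -> list[str]:
    problems = []
    if has_typos(text):
        problems.append("typos")
    if len(text.split()) < 5:
        problems.append("too short to evaluate")
    return problems

def review(text: str) -> str:
    problems = cheap_checks(text)
    if problems:
        return "rejected early: " + ", ".join(problems)
    # Only now do the expensive part: reasoning about the actual logic.
    return "passed first pass; needs human review of the substance"

print(review("the the quick draft"))
print(review("a clean, well-formed report with no surface errors"))
```

The point mirrors the comment: the second input "passes", but that only means the expensive evaluation still lies ahead.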
Perhaps having them also conveys a different type of meaning in this context.
Errors [1] in community discussion threads like this are positive signals that I am human, not a bot. A couple of decades ago I would have been unhappy with myself for them; today, accent and idiosyncratic writing are perhaps signals [3] that you are human.
[1] i.e. not proofreading for them, not introducing them deliberately.
[2] I can only see one typographical error (it->if) and many grammar errors, did I miss something ?
[3] Not definitive and not as a personal signature, as it can be easily faked/replicated, but that variation at scale is for now not seen in models. Today's model instances do not get unique personas, accents, or idiosyncrasies in writing that would make them unique.
> And AI signatures are now easy for people to recognize. In fact, these turns of phrase aren't just recognizable—they're unmistakable. <-- See what I did there?
You might spot these very obvious constructs and still miss the 99% of AI-generated text that has no tells. Yet you don’t know that 99% was generated, and since you spot 100% of the pattern you outlined, you think no AI-generated text makes it past you.
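The base-rate point can be made concrete with toy numbers - the 1%/99% split is the hypothetical from the comment above, not measured data:

```python
# Toy numbers: suppose only 1% of AI-generated texts contain the obvious
# "tells", and a reader catches every one of those while missing the rest.
ai_texts = 1000
with_tells = 0.01 * ai_texts    # 10 texts with recognizable patterns
caught = with_tells * 1.0       # reader spots all of them
missed = ai_texts - caught      # 990 slip through unnoticed

# From the reader's perspective the filter looks perfect: everything they
# flagged really was AI. Yet 99% of the AI text passed undetected.
print(f"caught {caught:.0f}, missed {missed:.0f} of {ai_texts}")
```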
I’m also not sure I agree with the assertion that LLMs will produce a high quality (looking) report with correct time frames, lack of typos, and good looking figures. I’m just as willing to disregard human or LLM reports with obvious tells. An LLM or a person can produce work that’s shoddy or error filled. It may be getting harder to differentiate between a good or bad report, but that helps to shift the burden more onto the evaluator.
This is especially true if we start to see more of a split in usage between LLMs based on cost. High quality frontier models might produce better work at a higher cost, but there is also economic cost pressure from the bottom. And just like with human consultants or employees, you’ll pay more for higher quality work.
I’m not quite sure what I’m trying to argue here. But the idea that an LLM won’t produce a low quality report just seemed silly to me.
You’ve missed the point of the original article about the proxy for quality disappearing. LLMs are trained adversarially, if that’s the word. They are trained not to have any “tells”.
Working in a team isn’t adversarial: if I’m reviewing my colleague’s PR, they are not trying to skirt around a feature or cheat on tests.
I can tell when a human PR needs more in depth reviewing because small things may be out of place, a mutex that may not be needed, etc. I can ask them about it and their response will tell me whether they know what they are on about, or whether they need help in this area.
I’ve had LLM PRs defended by their creator until proven to be a pile of bullshit; unfortunately only deep analysis gets you there.
Yes, I don't think this matters. Much of "knowledge work" was always a proxy for something else.
High quality in terms of typos and errors is mainly a signal of respect in a similar way to wearing ironed white shirts with neck-ties. "Walls of text" that no one is expected to read in depth. Basically a symbolic demonstration of sacrifice and subservience (or something). LLMs remove this mode of signalling.
If quality of content wasn't examined before, it was probably never particularly important.
> I don't know if I agree with either assertion… I've seen plenty of human-generated knowledge work that was factually correct, well-formatted, and extremely low quality on a conceptual level.
Putting a high level of polish on bad ideas is basically the grifter playbook. Throughout the business world you will find workers and entire businesses who get their success by dressing up poor ideas and bad products with all of the polish and trimmings associated with high quality work.