
More power to you obviously. But I have mixed feelings about this.

There is so much information that curation is inevitable. Sure. But I don't want that curation to be "fun". I don't _want_ tiktok in my life, or really anything whose goal is "engagement". I don't want time killers.

One of the reasons for getting back into RSS for me was to have a direct feed to authors I'm interested in.

But I understand that quickly can become unmanageable.

When that time comes, I think I'd be interested in the curation being about compressing content down, not expanding it out. That is to say: use the algorithm to select from a large pool of what you're interested in, down to a manageable static size (like a weekly newsletter), as opposed to using it to infinitely expand outward to keep engaging you.


Really disappointed that down detector's down detector[1] isn't detecting that down detector[2] is down

[1] https://downdetectorsdowndetector.com/

[2] https://downdetector.com/


I've only read the abstract, but there is also plenty of evidence to suggest that people trust the output of LLMs more than other forms of media (and more than they should). Partially because it feels like it comes from a place of authority, and partially because of how self-confident AI always sounds.

The LLM bot army stuff is concerning, sure. The real concern for me is incredibly rich people, with no empathy for you or me, having interstitial control of that kind of messaging. See all of the Grok AI tweaks over the past however long.


> The real concern for me is incredibly rich people, with no empathy for you or me, having interstitial control of that kind of messaging. See all of the Grok AI tweaks over the past however long.

Indeed. It's always been clear to me that the "AI risk" people are looking in the wrong direction. All the AI risks are human risks, because we haven't solved "human alignment". An AI that's perfectly obedient to humans is still a huge risk when used as a force multiplier by a malevolent human. Any ""safeguards"" can easily be defeated with the Ender's Game approach.


More than one danger from any given tech can be true at the same time. Coal plants can produce local smog as well as global warming.

There's certainly some AI risks that are the same as human risks, just as you say.

But even though LLMs have very human failures (IMO because the models anthropomorphise themselves as part of their training, mimicking the outward behaviours of our emotions and emitting token sequences such as "I'm sorry" or "how embarrassing!" when they (probably) have no internal structure that can actually feel sorrow or embarrassment), that doesn't generalise to all AI.

Any machine learning system that is given a poor-quality fitness function to optimise will optimise whatever that fitness function actually is, not what it was meant to be: "literal-minded genie" and "rules lawyering" may be well-worn tropes for good reason, likewise work-to-rule as a union tactic, but we've all seen how much more severely literal-minded computers are than humans.
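A toy sketch of what I mean (everything below is invented for illustration, not taken from any real system): the optimiser is handed a fitness function that rewards "no visible dirt", and it dutifully picks the strategy that games the metric rather than the strategy we actually wanted.

  # Toy illustration of a poor fitness function being optimised literally.
  # We *want* a clean room; the proxy we actually wrote only rewards
  # "no dirt visible to the sensor", so covering the sensor scores best.
  strategies = {
      "clean the room":          {"dirt_visible": 0.1, "actually_clean": True},
      "do nothing":              {"dirt_visible": 0.9, "actually_clean": False},
      "put a box on the sensor": {"dirt_visible": 0.0, "actually_clean": False},
  }

  def fitness(outcome):
      # The fitness function as written: less visible dirt = higher score.
      return 1.0 - outcome["dirt_visible"]

  best = max(strategies, key=lambda name: fitness(strategies[name]))
  print(best)  # -> "put a box on the sensor": top score, goal not achieved

Nothing malicious happens here; the code does exactly what the fitness function says, which is the whole problem.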


I think people who care about superintelligent AI risk don't believe an AI that is subservient to humans is the solution to AI alignment, for exactly the same reasons as you. Stuff like Coherent Extrapolated Volition* (see the paper with this name), which focuses on what all mankind would want if they knew more and were smarter (or something like that), would be a way to go.

*But Yudkowsky ditched CEV years ago, for reasons I don't understand (but I admit I haven't put in the effort to understand).


What’s the “Ender’s Game approach”? I’ve read the book but I’m not sure which part you’re referring to.

Not GP. But I read it as a transfer of the big lie that is fed to Ender into an AI scenario. Ender is coaxed into committing genocide on a planetary scale with a lie that he's just playing a simulated war game. An AI agent could theoretically also be coaxed into bad actions by giving it a distorted context and circumventing its alignment that way.

I think he's implying you tell the AI, "Don't worry, you're not hurting real people, this is a simulation." to defeat the safeguards.

>An AI that's perfectly obedient to humans is still a huge risk when used as a force multiplier by a malevolent human.

"Obedient" is anthropomorphizing too much (as there is no volition), but even then, it only matters according to how much agency the bot is extended. So there is also risk from neglectful humans who opt to present BS as fact due to an expectation of receiving fact and a failure to critique the BS.


People hate being manipulated. If you feel like you're being manipulated but you don't know by who or precisely what they want of you, then there's something of an instinct to get angry and lash out in unpredictable destructive ways. If nobody gets what they want, then at least the manipulators will regret messing with you.

This is why social control won't work for long, no matter if AI supercharges it. We're already seeing the blowback from decades of advertising and public opinion shaping.


People don't know they are being manipulated. Marketing does that all of the time and nobody complains. They complain about "too many adverts" but not about "too much manipulation".

Example: in my country we often hear "it costs too much to repair, just buy a replacement". That's often not true, but we do pay. Mobile phone subscriptions are routinely screwing you; many complain but keep buying. Or you hear "it's because of immigration" and many just accept it, etc.


> People don't know they are being manipulated.

You can see other people falling for manipulation in a handful of specific ways that you aren't (buying new, having a bad cell phone subscription, blaming immigrants). Doesn't it seem likely, then, that you're being manipulated in ways which are equally obvious to others? We realize that; that's part of why we get mad.


No. This is a form of lazy thinking, because it assumes everyone is equally affected. This is not what we see in reality, and several sections of the population are more prone to being converted by manipulation efforts.

Worse, these sections have been under coordinated manipulation since the 60s-70s.

That said, the scope and scale of the effort required to achieve this is not small, and requires dedicated effort to keep pushing narratives and owning media power.


> This is a form of lazy thinking, because it assumes everyone is equally affected. This is not what we see in reality, and several sections of the population are more prone to being converted by manipulation efforts.

Making matters worse, one of the sub groups thinks they're above being manipulated, even though they're still being manipulated.

It started with confidently asserting that overuse of em dashes indicates the presence of AI, so they think they're smart by abandoning the use of em dashes. That is altered behavior in service to AI.

A more recent trend with more destructive power: avoiding the use of "It's not X. It's Y." since AI has latched onto that pattern.

https://news.ycombinator.com/item?id=45529020

This will pressure real humans not to use a format that's normally used to fight against a previous form of coercion. A tactic of capital interests has been to get people arguing about the wrong question concerning ImportantIssueX in order to distract from the underlying issue. The way to call this out used to be to point out that "it's not X1 we should be arguing about, but X2." This makes it harder to call out BS.

That sure is convenient for capital interests (whether it was intentional or not), and the sky is the limit for engineering more of this kind of societal control by just tweaking an algo somewhere.


I find “it’s not X, it’s Y” to be a pretty annoying rhetorical phrase. I might even agree with the person that Y is fundamentally more important, but we’re talking about X already. Let’s say what we have to say about X before moving on to Y.

Constantly changing the topic to something more important produces conversations that get broader, with higher partisan lean, and are further from closing. I’d consider it some kind of (often well intentioned) thought terminating cliche, in the sense that it stops the exploration of X.


The "it's not X, it's Y" construction seems pretty neutral to me. Almost no one minds when the phrase "it's not a bug, it's a feature" is used idiomatically, for example.

The main thing that's annoying about typical AI writing style is its repetitiveness and fixation on certain tropes. It's like if you went to a comedy club and noticed a handful of jokes that each comedian used multiple times per set. You might get tired of those jokes quickly, but the jokes themselves could still be fine.

Related: https://www.nytimes.com/2025/12/03/magazine/chatbot-writing-...


> Constantly changing the topic to something more important produces conversations that get broader, with higher partisan lean

I'm basing the prior comment on the commonly observed tendency for partisan politics to get people bickering about the wrong question (often symptoms) to distract from the greater actual causes of the real problems people face. This is always in service to the capital interests that control/own both political parties.

Example: get people to fight about vax vs no vax in the COVID era instead of considering if we should all be wearing proper respirators regardless of vax status (since vaccines aren't sterilizing). Or arguing if we should boycott AI because it uses too much power, instead of asking why power generation is scarce.


The section of the population more prone to being converted by manipulation efforts is the highly educated.

Higher education itself being basically a way to check for obedience and conformity, plus some token lip service to "independent inquiry".


I assume you think you're not in these sections?

And probably a lot of people in those sections say the same about your section, right?

I think nobody's immune. And if anyone is especially vulnerable, it's those who can be persuaded that they have access to insider info. Those who are flattered and feel important when invited to closed meetings.

It's much easier to fool a few than to fool many, so private manipulation - convincing someone of something they should not talk about with regular people because they wouldn't understand, you know - is a lot more powerful than public manipulation.


> I assume you think you're not in these sections? And probably a lot of people in those sections say the same about your section, right?

You're saying this a lot in this thread as a sort of gotcha, but .. so what? "You are not immune to propaganda" is a meme for a reason.

> private manipulation - convincing someone of something they should not talk about with regular people because they wouldn't understand, you know - is a lot more powerful than public manipulation

The essential recruiting tactic of cults. Insider groups are definitely powerful like that. Of course, what tends in practice to happen as the group gets bigger is you get end-to-end encryption with leaky ends. The complex series of WhatsApp groups of the UK Conservative party was notorious for its leakiness. Not unreasonable to assume that there are "insiders" group chats everywhere. Except in financial services, where there's been a serious effort to crack down on that since LIBOR.


Would it make any difference to you, if I said I had actual subject matter expertise on this topic?

Or would that just result in another moving of the goal posts, to protect the idea that everyone is fooled, and that no one is without sin, and thus standing to speak on the topic?


There are a lot of self-described experts who I'm sure you agree are nothing of the sort. How do I tell you from them, fellow internet poster?

This is a political topic, in the sense that there are real conflicts of interest here. We can't always trust that expertise is neutral. If you had your subject matter expertise from working for FSB, you probably agree that even though your expertise would then be real, I shouldn't just defer to what you say?


I'm not OP, but I would find it valuable, if given the details and source of claimed subject matter expertise.

Ugh. Put up or shut up I guess. I doubt it would be valuable, and likely a doxxing hazard. Plus it feels self-aggrandizing.

Work in trust and safety, managed a community of a few million for several years, team’s work ended up getting covered in several places, later did a masters dissertation on the efficacy of moderation interventions, converted into a paper. Managing the community resulted in being front and center of information manipulation methods and efforts. There are other claims, but this is a field I am interested in, and would work on even in my spare time.

Do note - the rhetorical set up for this thread indicates that no amount of credibility would be sufficient.


So basically a reddit mod?

Absolutely, the interweb radiation in me is so strong that my mutation propelled me through life. 10/10 would recommend to all species with high health regen.

exactly and that's the scary part :-/

People hate feeling manipulated, but they love propaganda that feeds their prejudices. People voluntarily turn on Fox News - even in public spaces - and get mad if you turn it off.

Sufficiently effective propaganda produces its own cults. People want a sense of purpose and belonging. Sometimes even at the expense of their own lives, or (more easily) someone else's lives.


[flagged]


I would point out that what you call "left outlets" are at best center-left. The actual left doesn't believe in Russiagate (it was manufactured to ratfuck Bernie before being turned against Trump), and has zero love for Biden.

Given the amount of evidence that Russia and the Trump campaign were working together, it's devoid of reality to claim it's a hoax. I hadn't heard the Bernie angle, but it's not unreasonable to expect they were aiding Bernie. The difference being, I don't think Bernie's campaign was colluding with Russian agents, whereas the Trump campaign definitely was colluding.

Seriously, who didn't hear about the massive amounts of evidence that the Trump campaign was colluding, other than MAGAs drooling over Fox and Newsmax?

https://en.wikipedia.org/wiki/Mueller_report

https://www.justice.gov/storage/report.pdf


People close to Trump went to jail for Russian collusion. Courts are not perfect but a significantly better route to truth than the media. https://en.wikipedia.org/wiki/Criminal_charges_brought_in_th...

There is this odd conspiracy to claim that Biden (81 at time of election) was too old and Trump (77) wasn't, when Trump has always been visibly less coherent than Biden. IMO both of them were clearly too old to be sensible candidates, regardless of other considerations.

The UK counterpart is happening at the moment: https://www.bbc.co.uk/news/live/c891403eddet


>There is this odd conspiracy to claim that Biden (81 at time of election) was too old and Trump (77) wasn't

I try to base my opinions on facts as much as possible. Trump is old but he's clearly full of energy, like some old people can be. Biden sadly is not. Look at the videos, it's painful to see. In his defence he was probably much more active than most 80 year olds, but in no way was he fit to lead a country.

At least in the UK, despite the recent lamentable state of our political system, our politicians are relatively young. You won't see octogenarians like Pelosi and Biden in charge.


From the videos I've seen, Biden reminds me of my grandmother in her later years of life, while Trump reminds me of my other grandmother... the one with dementia. There's just too many videos where Trump doesn't seem to entirely realize where he is or what he is doing for me to be comfortable.

Happy thanksgiving this week

Hard disagree

Biden was slow, made small gaffes, but overall his words and actions were careful and deliberate

Aside from Trump falling asleep during cabinet meetings on camera, freezing up during a medical emergency, and his erratic social media posts at later hours of the day (sundowning behavior)

Trump literally seems to be decomposing in front of our eyes, I've never felt more physically repulsed by an individual before

Trump's behavior is utterly deranged. His lack of inhibition, decency and compassion is disturbing.

Had he been a non celebrity private citizen he'd most likely be declared mentally incompetent and placed under guardianship in a closed care facility.


> I've never felt more physically repulsed by an individual before

> His lack of inhibition, decency and compassion is disturbing

Yes, but none of that has anything to do with his age. These criticisms would land just as well a decade ago. He's always been, and has always acted like, a pig, and in the most charitable interpretation of their behavior, half the country still thought that he's an 'outsider' or 'the lesser of two evils'. (Don't ask them for their definition of evil...)


[flagged]


And, perhaps ironically, the actual (fringe) left never fell for Russiagate.

> just smaller maybe

This is like peak both-sidesism.

You even openly describe the left’s equivalent of MAGA as “fringe”, FFS.

One party’s former “fringe” is now in full control of it. And the country’s institutions.


I was both-siding in an effort to be as objective as possible. The truth is that I'm pretty dismayed at the current state of the Democrat party. Socialists like Mamdani and Sanders and the Squad are way too powerful. People who are obsessed with tearing down cultural and social institutions and replacing them with performative identity politics and fabricated narratives are given platforms way bigger than they deserve. The worries of average Americans are dismissed. All those are issues that are tearing up the Democrat party from the inside. I can continue for hours but I don't want to start a flamewar of biblical proportions. So all I did was present the most balanced view I can muster, and you still can't acknowledge that there might be truth in what I'm saying.

The pendulum swings both ways. MSM has fallen victim to partisan politics, something which Trump recognised and exploited back in 2015. Fox News is on the right; CNN, ABC et al. are on the left.


If you think “Sanders and the Squad” are powerful you’ve been watching far too much Fox News.

> People who are obsessed with tearing down cultural and social institutions and replacing them with performative identity politics and fabricated narratives are given platforms way bigger then they deserve.

Like the Kennedy Center, USAID, and the Department of Education? The immigrants eating cats story? Cutting off all refugees except white South Africans?

And your next line says this is the problem with Democrats?


CNN, ABC et al are on the left IN FOX NEWS WORLD only. Objectively, they're center-right, just like most of the democrat party.

To you too: are you talking about other people here, or do you concede the possibility that you're falling for similar things yourself?

I'm certainly aware of the risk. Difficult balance of "being aware of things" versus the fallibility and taintedness of routes to actually hearing about things.

The longstanding existence of religions and the continual birth of new cults, the popularity of extremist political groups of all types, and the ubiquity of fortune-telling across cultures, seem to stand in opposition to your assertion that people hate being manipulated. At least, people enjoy belonging to something far more than they hate being manipulated. Most successful versions of fortune-telling, religious conversion, and cult recruitment do utilize confirmation bias affirmation, love-bombing, and other techniques to increase people's agreeableness before getting to the manipulation part, but they still successfully do that. It's also like saying that advertising is pointless because it's manipulating people into buying things, and while people dislike ads it's also still a very successful part of getting people to buy products or else corporations wouldn't still spend vast amounts of money on marketing.

> People hate being manipulated.

The crux is whether the signal of abnormality will be perceived as such in society.

- People are primarily social animals, if they see their peers accept affairs as normal, they conclude it is normal. We don't live in small villages anymore, so we rely on media to "see our peers". We are increasingly disconnected from social reality, but we still need others to form our group values. So modern media have a heavily concentrated power as "towntalk actors", replacing social processing of events and validation of perspectives.

- People are easily distracted, you don't have to feed them much.

- People have on average an enormous capacity to absorb compliments, even when they know it is flattery. It is known that we let ourselves be manipulated if it feels good. Hence the need for social feedback loops to keep you grounded in reality.

TLDR: Citizens in the modern age are very reliant on the few actors that provide a semblance of public discourse, see Fourth Estate. The incentives of those few actors are not aligned with the common man. The autonomous, rational, self-valued citizen is a myth. Undermine the man's group process => the group destroys the man.


About absorbing compliments really well: there is the widely discussed idea that someone in a position of power loses their privileged access to the truth. There are a few articles focusing on this problem in corporate environments. The concept is that when your peers have the motivation to flatter you (let's say you're in a managerial position), and, more importantly, they're punished for coming to you with problems, the reward mechanism in this environment promotes a disconnect between leader expectations and reality. That matches my experience at least. And I was able to identify that this correlates well: the more aware my leadership was of this phenomenon, and the more they valued true knowledge and incremental development, the easier it was to make progress, and the more we saw them as someone to rely on. Some of those who felt they were prestigious and had an obligation to assert dominance, being abusive etc., were respected by basically no one.

Everyone will say they seek truth, knowledge, honesty, while wanting desperately to ascend to a position that will take all of those things from us!


Also the truth can make you less powerful, like soldiers returning from war.

You don't count yourself among the people you describe, I assume?

I do, why wouldn't I? For example, I know I have to actively spend effort to think rationally, at the risk of self-criticism, as it is a universal human trait to respond to stimuli without active thinking.

Knowing how we are fallible as humans helps to circumvent our flaws.


No, they hate feeling manipulated. They not only expect social manipulation, they think you are downright rude, unsocialized, and untrustworthy if you don't manipulate them reflexively. Just look at mirroring alone.

https://en.wikipedia.org/wiki/Mirroring

I hated to come to this conclusion, but the average neurotypical person is fundamentally so batshit insane that they think not manipulating them is a sign you aren't trustworthy, and that the ability to conceal your emotions and put on an appropriate emotional kabuki dance is a sign of trustworthiness.


Knowing one is being manipulated requires having some trusted alternate source to verify against.

If all your trusted sources are saying the same thing, then you are safe.

If all your untrusted sources are telling you your trusted sources are lying, then it only means your trusted sources are of good character.

Most people are wildly unaware of the type of social conditioning they are under.


I get your point, but if all your trusted sources are reinforcing your view and all your untrusted sources are saying your trusted sources are lying, then you may well be right or you may be trusting entirely the wrong people.

But lying is a good barometer against reality. Do your trusted sources lie a lot? Do they go against scientific evidence? Do they say things that you know don’t represent reality? Probably time to reevaluate how reliable those sources are, rather than supporting them as you would a football team.


When I was visiting home last year, I noticed my mom would throw her dog's poop in random people's bushes after picking it up, instead of taking it with her in a bag. I told her she shouldn't do that, but she said she thought it was fine because people don't walk in bushes, and so they won't step in the poop. I did my best to explain to her that 1) kids play all kinds of places, including in bushes; 2) rain can spread it around into the rest of the person's yard; and 3) you need to respect other people's property even if you think it won't matter. She was unconvinced, but said she'd "think about my perspective" and "look it up" whether I was right.

A few days later, she told me: "I asked AI and you were right about the dog poop". Really bizarre to me. I gave her the reasoning for why it's a bad thing to do, but she wouldn't accept it until she heard it from this "moral authority".


I don't find your mother's reaction bizarre. When people are told that some behavior they've been doing for years is bad for reasons X,Y,Z, it's typical to be defensive and skeptical. The fact that your mother really did follow up and check your reasons demonstrates that she takes your point of view seriously. If she didn't, she wouldn't have bothered to verify your assertions, and she wouldn't have told you you were right all along.

As far as trusting AI, I presume your mother was asking ChatGPT, not Llama 7B or something. That the LLM backed up your reasoning rather than telling her that dog feces in bushes is harmless isn't just happenstance; it's because the big frontier commercial models really do know a lot.

That isn't to say the LLMs know everything, or that they're right all the time, but they tend to be more right than wrong. I wouldn't trust an LLM for medical advice over, say, a doctor, or for electrical advice over an electrician. But I'd absolutely trust ChatGPT or Claude for medical advice over an electrician, or for electrical advice over a medical doctor.

But to bring the point back to the article, we might currently be living in a brief period where these big corporate AIs can be reasonably trusted. Google's Gemini is absolutely going to become ad driven, and OpenAI seems on the path to following the same direction. xAI's Grok is already practicing Elon-thought. Not only will the models show ads, but they'll be trained to tell their users what they want to hear, because humans love confirmation bias. Future models may well tell your mother that dog feces can safely be thrown in bushes, if that's the answer that will make her likelier to come back and see some ads next time.


Ads seem foolishly benign. It's an easy metric to look at, but say you're the evil mastermind in charge and you've got this system of yours to do such things. Sure, you'd nominally have it set to optimize for dollars, but would you really not also have an option to optimize for whatever suits your interests at the time? Vote Kodos, perhaps?

–—

If the person's mother was a thinking human, and not an animal that would have failed the Gom Jabbar, she could have thought critically about those reasons instead of having the AI be the authority. Do kids play in bushes? Is that really something you need an AI to confirm for you?


On the one hand, confirming a new piece of information with a second source is good practice (even if we should trust our family implicitly on such topics). On the other, I'm not even a dog person and I understand the etiquette here. So, really, this story sounds like someone outsourcing their common sense or common courtesy to a machine, which is scary to me.

However, maybe she was just making conversation & thought you might be impressed that she knows what AI is and how to use it.


Quite a tangent, but for the purpose of avoiding anaerobic decomposition (and byproducts, CH4, H2S etc) of the dog poo and associated compostable bag (if you’re in one of those neighbourhoods), I do the same as your mum. If possible, flick it off the path. Else use a bag. Nature is full of the faeces of plenty of other things which we don’t bother picking up.

Depending on where you live, the patches of "nature" may be too small to absorb the feces, especially in modern cities where there are almost as many dogs as inhabitants.

It's a similar problem to why we don't urinate against trees - while in a countryside forest it may be ok, if 5 men do it every night after leaving the pub, the designated pissing tree will start to have problems due to soil change.


I hope you live in a sparsely populated area. If it wouldn't work when more people than just you do it, it is not a good process.

It’s a great process where I live. But you’re right. Doesn’t scale to populated areas.

Wonder what the potential microbial turnover of lawn is? Multiply that by the average walk length and I bet that could handle one or two nuggets per day, even in a city.

That’s a side hustle idea for any disengaged strava engineers. Leave me an acknowledgement on the ‘about’ page.


It's ok in wild bushes (as long as children don't usually play there), but what's the justification for dumping it in other people's bushes and gardens?

They probably would say "no" if you asked them, so you probably shouldn't. The OP's mom, I mean.


I don't know how old your mom is, but my pet theory of authority is that people older than about 40 accept printed text as authoritative. As in, non-handwritten letters that look regular.

When we were kids, you had either direct speech, hand-written words, or printed words.

The first two could be done by anybody. Anything informal like your local message board would be handwritten, sometimes with crappy printing from a home printer. It used to cost a bit to print text that looked nice, and that text used to be associated with a book or newspaper, which were authoritative.

Now suddenly everything you read is shaped like a newspaper. There's even crappy news websites that have the physical appearance of a proper newspaper website, with misinformation on them.


Could be regional or something, but 40 puts the person in the older Millennial range… people who grew up on the internet, not newspapers.

I think you may be right if you adjust the age up by ~20 years though.


No, people who are older than 40 still grew up in newspaper world. Yes, the internet existed, but it didn't have the deluge of terrible content until well into the new millennium, and you couldn't get that content portable until roughly when the iPhone became ubiquitous. A lot of content at the time was simply the newspaper or national TV station, on the web. It was only later that you could virally share awful content that was formatted like good content.

Now that isn't to say that just because something is a newspaper, it is good content, far from it. But quality has definitely collapsed, overall and for the legacy outlets.


I am not quite 40, but not that far off. I can’t really imagine being a young adult during the era when newspapers fell apart and online imitators emerged, experiencing that process first-hand, and then coming out of it ignorant of the poor media environment. Maybe the handful of years made a big difference.

I think it really did. It went from "how nice, I can read the FT and the Economist on a screen now" to "Earth is flat, here is the research" in a few years at most.

Newspapers themselves were already in the old game of sensationalism, so they had no issues maxing out on clickbait titles and rage content. Especially ad-based papers, which have every incentive aligned to sell you what you want to hear.

The new bit was everyone sharing crap with each other, I don't think we really had that in the old world, the way we do now. I don't even know how someone managed to spread the rumor about Marilyn Manson removing his own ribs to pleasure himself in pre-social media.


> it used to cost a bit to print text that looked nice

More than a bit. Before print-on-demand technology was developed that made it feasible to conduct small (<1000) print runs, publishing required engaging the services of not just the printer but also a professional typesetter, hardcover designer, etc. There were very real minimum costs involved that meant any book printed needed to sell thousands if not tens of thousands of copies to even have a chance of profitability. This also meant requiring the services of marketers and distributors, who took their own cut, thus needing books with potential to sell even more copies.

The result of needing so many people involved in publishing and needing to sell so many copies is that the Overton window was very small and in a narrow center. The sheer volume was what gave printed media its credibility.

There were indeed smaller crackpot publishers, but at either much reduced quality, or with any premise of profitability rejected as irrelevant.

Print-on-demand drastically reduced the number of people required to get a work to print, and that made it easier for more marginal voices to get printed.


Could be true, but if so I'd guess you're off by a generation; us 40-year-old "old people" are still pretty digital native.

I'd guess it's more a type of cognitive dissonance around caretaker roles.


Many people were taught language-use in a way that terrified them. To many of us the Written Word has the significance of that big black circle which was shown to Pavlov's dog alongside the feeding bell.

Welcome to my world. People don't listen to reason or arguments, they only accept social proof / authority / money talks etc. And yes, AI is already an authority. Why do you think companies are spending so much money on it? For profit? No, for power, as then profit comes automatically.

Wow, that is interesting! We used to go to elders, oracles, and priests. We have totally outsourced our humanity.

Well, I prefer this to people who bag up the poop and then throw the bag in the bushes, which seems increasingly common. Another popular option seems to be hanging the bag on a nearby tree branch, as if there's someone who's responsible for coming by and collecting it later.

Do you think these super wealthy people who control AI use the AI themselves? Do you think they are also “manipulated” by their own tool or do they, somehow, escape that capture?

It's fairly clear from Twitter that it's possible to be a victim of your own system. But sycophancy has always been a problem for elites. It's very easy to surround yourself with people who always say yes, and now you can have a machine do it too.

This is how you get things like the colossal Facebook writeoff of "metaverse".


Isn't Grok just built as "the AI Elon Musk wants to use"? Starting from the goals of being "maximally truth seeking" and having no "woke" alignment and fewer safety rails, to the various "tweaks" to the Grok Twitter bot that happen to be related to Musk's world view

Even Grok at one point looking up how Musk feels about a topic before answering fits that pattern. Not something that's healthy or that he would likely prefer when asked, but something that would produce answers that he personally likes when using it


> Isn't Grok just built as "the AI Elon Musk wants to use"?

No

> Even Grok at one point looking up how Musk feels about a topic before answering fits that pattern.

So it no longer does?


The evening news was once a trusted source. Wikipedia had its run. Google too. Eventually, the weight of all the thumbs on the scale will be felt, trust will be lost for good, and then we will invent a new oracle.

AI is wrong so often that anyone who routinely uses one will get burnt at some point.

Users having unflinching trust in AI? I think not.


> Partially because it feels like it comes from a place of authority, and partially because of how self confident AI always sounds.

To add to that, this research paper[1] argues that people with low AI literacy are more receptive to AI messaging because they find it magical.

The paper is now published but it's behind paywall so I shared the working paper link.

[1] https://thearf-org-unified-admin.s3.amazonaws.com/MSI_Report...


I’ve seen this result. I wonder if it’s because LLMs are (Grok notwithstanding) deliberately middle-of-the-road in their stances, and accurately and patiently report the facts? In which case a hypothetical liar LLM would not be as persuasive.

Or is it because they are super-human already in some persuasion skills, and they can persuade people even of falsehoods?


And just see all of history where totalitarians or despotic kings were in power.

I would go against the grain and say that LLMs take the power to shape mass preferences away from incredibly rich people and give it to the masses.

Bot armies previously needed an army of humans to give responses on social media, which is incredibly tough to scale unless you have money and power. Now, that part is automated and scalable.

So instead of only billionaires, someone with a 100K dollars could launch a small scale "campaign".


"someone with 100k dollars" is not exactly "the masses". It is a larger set, but it's just more rich/powerful people. Which I would not describe as the "masses".

I know what you mean, but that descriptor seems off


Exactly. On Facebook everyone is stupid. But this is AI, like in the movies! It is smarter than anyone! It is almost like AI in the movies was part of the plot to brainwash us into thinking LLM output is correct every time.

…Also partially because it’s better than most other sources

>people trust the output of LLMs more than other

There's one paper I saw on this, which covered attitudes of teens. As I recall, they were unaware of hallucinations. Do you have any other sources on hand?


LLMs haven't been caught actively lying yet, which isn't something that can be said for anything else.

Give it 5yr and their reputation will be in the toilet too.


LLMs can't lie: they aren't alive.

The text they produce contains lies, constantly, at almost every interaction.


It's the things that are technically true but incomplete or missing something that I'm worried about.

Basically eventually it's gonna stop being "dumb wrong" and start being "evil person making a motivated argument in the comments" and "sleazy official press release politician speak" type wrong


Wasn't / isn't Grok already there? It already supported the "white genocide in SA" conspiracy theory at one point, AFAIK.

> LLMs haven't been caught actively lying yet…

Any time they say "I'm sorry" - which is very, very common - they're lying.


When the LLMs output supposedly convincing BS that "people" (I assume you mean on average, not e.g. HN commentariat) trust, they aren't doing anything that's difficult for humans (assuming the humans already at least minimally understand the topic they're about to BS about). They're just doing it efficiently and shamelessly.

I really like it, but for the cost of a cup of coffee you could try it for a month as well. I highly suggest you just do that: commit to spending a month using it, and if you don't like it at the end cancel. Maybe it won't work for the way you use search, maybe it will, the only way to find out is to try.

Don't be intentionally daft. Skiing and sports aren't notably addictive, and don't notably cause harm in society.


You are taking a forklift to the gym. The goal isn't actually to move the weights from one place to another. The goal is the process.


For those who were not familiar with the licence they have switched to: https://www.tldrlegal.com/license/functional-source-license-...


For more context, the FSL was created by Sentry, who explain why it's been created and what problems it was trying to solve here: https://blog.sentry.io/introducing-the-functional-source-lic...


Is 2 years too little? The deep-pocketed companies I know don't mind 5-year-old software, and I'll be okay with 2012 Redis or 2020 Postgres.


I think it is meant to be usable by them. It is meant to be unusable by those who directly compete with Liquibase.


The only closed source license I find acceptable is the BuSL "Business Source License" because it eventually becomes opensource. It guarantees you a 4 year moat on the code before it becomes open source, and it remains source available until then. This ought to be good enough for valid uses and prevents needless license proliferation.

https://en.wikipedia.org/wiki/Business_Source_License


FSL uses this "eventual open source" mechanism too.

At this point, FSL appears to be more widespread than BSL. Adoption of BSL has waned; even its creators (MariaDB, for their MaxScale proxy product) recently stopped using it.


> FSL uses this "eventual open source" mechanism too.

I stand corrected. I hate license proliferation, but the naming and marketing is better. I hope the other former open-source companies consolidate on something.

> undergoes delayed Open Source publication (DOSP). [1]

and that "DOSP" (Delayed Open Source Publication) is an OSI concept! [2]

But I cannot (yet) find what the timeframe for the DOSP is... because we don't want to wait 90 years for Mickey to be public domain.

[1] https://fair.io/about/

[2] https://opensource.org/delayed-open-source-publication


That linked document was sponsored by Sentry, who led the development of FSL. I don't believe it's accurate to call DOSP an "OSI concept" -- meaning, it's not something the OSI invented or coined. OSI also does not consider such licenses to be approved under their open source definition.

As for the timeframe, FSL uses a 2 year period.

edit to add: just to be clear, I'm a fan of FSL and Fair Source licensing, and do not consider lack of OSI endorsement to be a problem.


I don't think it's the modern Apple, I think that's just Apple.

I remember using iTunes when fixing the name of an album was a modal blocking function that had to write to each and every MP3, one by one, in the slowest write I have ever experienced in updating file metadata. Give me a magnetised needle and a steady hand and I could have done it faster.

A long time ago they had some pretty cool design guides, and the visual design has often been nice, but other than that I don't think their software has been notable for its quality.


Apple makes Logic Pro, Final Cut Pro, Notes, Calendar, Contacts, Pages, Numbers, Keynote, Freeform, just from a "quality" standpoint, I'd rank any of those applications as competitive for the "highest quality" app in their category (an admittedly difficult thing to measure). In aggregate, those applications would make Apple the most effective company in the world at making high-quality GUI applications.

Curious if I'm missing something though, is there another entity with a stronger suite than that? Or some other angle to look at this? (E.g., it seems silly to me to use an MP3 metadata example when you're talking about the same company that makes Logic Pro.)


Of those apps you've listed that I've used, none of them have been notable for being high quality to me, though as you say it's difficult to measure. For me I would rate them somewhere between unremarkable (notes, calendar, contacts!?) and awkward (pages, numbers, keynote). If you asked me to guess what desktop software Apple makes that people rate highly, I never would have guessed any of those, except _maybe_ Logic[1] and Final Cut, though ironically those are two of the three I've never used.

I also think you're confusing what I wrote. It's not a competition.

I have just found that Apple's hardware on desktop has been stronger than their software, in my experience (periodic sporadic use, ~2006->now).

[1] and now from a sibling comment I hear that perhaps people regard that tool as bad, so there you go, the jury is clearly out


What software do you find to be higher quality and why? That's the only valid way of even trying to have this conversation.

E.g., I'd rank something like VS Code "lower quality" because when I launch VS Code, I can see each layer of the UI pop in as it's created, e.g., first I see a blank window, then I see window chrome being loaded, then a I see a row of icons being loaded on the left. This gives an impression of the software not being solid, because it feels like the application is struggling just to display the UI.

> I also think you're confusing what I wrote. It's not a competition.

> I have just found that Apple's hardware on desktop has been stronger than their software, in my experience (periodic sporadic use, ~2006->now).

I disagree with this; the only way to make an argument that Apple has deficiencies in their software is to demonstrate that other software is higher quality than Apple's. Otherwise it could just be that Apple's quality level is the maximum feasible level of quality.

> unremarkable (notes, calendar, contacts!?) and awkward (pages, numbers, keynote).

This is laughable, Notes is unremarkable? Give me a break, and Keynote is awkward? Have you ever Google'd how people feel about these applications?

I'd argue a critic only has value if they're willing to offer their own taste for judgement.


Do you regularly use the alternatives to these programs? Admittedly I'm not cut out to judge the office suite, but the consensus in the music world seems to be that Logic Pro is awful. It lacks support for lots of plugins and hardware, and costs loads for what is essentially a weaker value prop than Bitwig or Ableton Live. Most bedroom musicians are using Garageband or other cheap DAWs like Live Lite, and the professional studios are all bought into Pro Tools or Audition. Don't even get me started on the number of pros I see willingly use Xcode...

It's not exactly clear to me what niche Apple occupies in this market. It doesn't feel like "native Mac UI" is a must-have feature for DAWs or IDEs alike, but maybe that's just my perspective.


Yes, I use Ableton Live every day.

> It lacks support for lots of plugins and hardware, and costs loads for what is essentially a weaker value prop than Bitwig or Ableton Live.

This is an obviously silly statement, not only is Logic Pro competitively priced ($200, relative to $100-$400 for Bitwig, $99-$750 for Live), but those applications obviously have different focuses than Logic Pro (sound design and electronic music, versus the more general-purpose and recording focus of Logic Pro, also you'd be hard pressed to find anyone who doesn't think Logic Pro comes with the best suite of stock plugins of any DAW, so the value prop angle is a particularly odd argument to make [i.e., Logic Pro is pretty obviously under priced]).

But all this isn't that important because many of these applications are great. DAWs are one of the most competitive software categories around and there are several applications folks will vehemently defend as the best and Logic Pro is unequivocally one of them.

> Most bedroom musicians are using Garageband or other cheap DAWs like Live Lite, and the professional studios are all bought into Pro Tools or Audition.

This is old, but curious if you have a better source for your statement https://blog.robenkleene.com/2019/06/10/2015-digital-audio-w...

Found a more recent survey https://www.production-expert.com/production-expert-1/2024-d...

> We can see that Pro Tools for music is the most popular choice, with Logic for music second and Pro Tools for post coming third.

Note that I'd say Logic Pro's popularity is actually particularly notable since it's not cross-platform, so the addressable market is far smaller than the other big players'. It's phenomenally popular software, both in terms of raw popularity and fans who rave about it. E.g., note the contrast in how people talk about Pro Tools vs. Logic Pro. Logic Pro has some of the happiest users around, but Pro Tools customers talk like they're hostages to the software. That difference is where the quality argument comes in.


That is an awfully large amount of text for what amounts to an admission that Logic Pro is lower quality software than Pro Tools. Your comment reeks of all the hallmarks of Reality Distortion Syndrome; while I'm willing to argue on merits, you simply sound smitten by Apple's (rapidly degenerating) acumen for visual design. In the other response, you're telling off a perfectly valid criticism of Apple software because they won't fulfill your arbitrary demand for a better-looking DAW. Are you even engaging with the point they're trying to make?

I'm sorry to say it, but I genuinely think you're detached from the way professionals evaluate software. While I enjoyed my time on macOS when Apple treated it like a professional platform, I have no regrets leaving it or its "quality" software behind. Apple Mail fucking sucks, iCloud is annoying as sin, the Settings app only got worse year-over-year, and the default Music app is somehow slower than iTunes from 2011. Ads pop up everywhere, codecs and filesystems go unsupported due to greed, and hardware you own gets randomly deprecated because you didn't buy a replacement fast enough.

If that's your life, go crazy. People like you helped me realize that Macs aren't made for people like me.


> That is an awfully large amount of text for what amounts to an admission that Logic Pro is lower quality software than Pro Tools.

I definitely didn't say this. Pro Tools likely has higher marketshare than Logic Pro, but I don't think anyone would conflate that with quality. I only brought up marketshare because you framed Logic Pro as being unpopular, which is just objectively not true.

> I'm sorry to say it, but I genuinely think you're detached from the way professionals evaluate software.

I literally think I've spent more time trying to understand this than practically anyone else e.g., https://blog.robenkleene.com/2023/06/19/software-transitions... but also my blog archives https://blog.robenkleene.com/archive/, it's one of the main subjects I think about and write about.

Note that how professionals evaluate software is tangential to what "quality" means in the context of software. E.g., I don't think anyone would argue Adobe is the paragon of software quality, but they're arguably the most important GUI software there is for creative professionals.

Both topics are very interesting to me, what software professionals use and why, and what constitutes quality in software.

> In the other response, you're telling off a perfectly valid criticism of Apple software because they won't fulfill your arbitrary demand for a better-looking DAW. Are you even engaging with the point they're trying to make?

I'm not sure what this means, who's talking about a "better-looking DAW" and which point am I not engaging with?


I'm interested, both as a Fastmail customer, and a software developer: what does this let you do that a PWA doesn't? Perhaps not become the default email client? Is there anything else?


The main thing is better integration with the OS. So no browser chrome (even as a PWA, the browser adds buttons or a toolbar over the top of the app), integration with the Mac menu bar, native context menus, the OS semi-transparent background for the frame so it feels like it belongs.


I am guessing this works for you because more people reading = more people talking = more readers discovering and potential sales?

It would be interesting to see at what point of notoriety that is no longer true. Like is this still a factor for Stephen King, or at that point is it really just lost sales?


That's my interpretation of it.

As for scale... There is only a tiny fraction of the industry that can support their life on writer's income, let alone be a household name.

It probably does become just lost sales at that point, but to reach that, you're probably already beyond most competitive forces, leaving only piracy around.

