It's not, really. In Seila Law v. CFPB (2020) the Supreme Court ruled that even directors seemingly protected by for-cause language (which the FCC charter does not have) can be removed at will unless the agency in question "exercises no part of the executive power" and is "an administrative body ... that performs ... specified duties as a legislative or as a judicial aid." https://en.wikipedia.org/wiki/Seila_Law_LLC_v._Consumer_Fina...
Do you have a case which was not about the executive authority of Donald Trump specifically? When we talk about how controversial or how new this interpretation is, the question I really have in mind is, why should I believe that it was developed out of genuine legal analysis and not an unprincipled desire to give Trump more power?
It contains an exhaustive historical analysis explaining why the President has unrestricted power to remove executive officers.
The “unprincipled” decisions were the ones like Humphrey’s Executor that sought to find ways to implement the 20th century concept of an “expert administrative state.” That’s not the government that was created in our constitution.
Yeah, the FCC is really about Wiener[1], if anything, not Humphrey's. Wiener established some precedent of "inferred" independence for agencies of a certain character (e.g. those whose function is wholly judicial or legislative), even when explicit removal protections are not included in the law.
The court in 2020, five years ago, was essentially the same court as today, except that KBJ replaced Breyer. The precedent in question dates to Humphrey's Executor v. United States (1935), where a conservative Supreme Court sought to cut back the executive power of a liberal president. Now we have a conservative Supreme Court expanding executive power for a conservative president. If you think the Roberts court would have let Joe Biden have this much power, well, then I have a bridge and some student loans to sell you.
Humphrey's, which held that for-cause protections are constitutional for agencies that meet certain tests, is broadly relevant to current events (FTC etc.) but not to the FCC, as the FCC's charter does not contain explicit for-cause protections.
> If you think the Roberts court would have let Joe Biden have this much power well then I have a bridge and some student loans to sell you
Yes, I do think the time horizon of every SCOTUS member is longer than four years. I believe Gorsuch when he says:
I appreciate that, but you also appreciate that we're writing a rule for the ages. -- https://www.supremecourt.gov/oral_arguments/argument_transcripts/2023/23-939_3fb4.pdf
I think that they all have the hubris to see themselves as part of history and write their opinions for future generations. Not that they aren't biased by current events, but that they see themselves as larger than that.
Moving to the Cloud proved to be a pretty nice moneymaker far faster and more concretely than AI has been for these companies. It's a fair comparison regarding corporate pushes but not anything more than that.
Yeah, as a gamer I get a lot of game news in my feeds. Apparently there's a niche of indie games that claim to be AI-free. [0]
And I read a lot of articles about games that seem to love throwing a dig at AI even if it's not really relevant.
Personally, I can see why people dislike Gen AI. It takes people's creations without permission.
That being said, the morality of creating AI tooling aside, there are still people who dislike AI-generated stuff. Like, they'd enjoy a song, or an image, or a book, and then suddenly hate it when they find out it's AI.

In my experience playing with ComfyUI to generate images, it's really easy to get something half decent and really hard to get something very high quality. It really is a skill in itself, but people who hate AI think it's just typing a prompt and getting an image. I've seen workflows with 80+ nodes, multiple prompts, multiple masks, and multiple LoRAs, all to generate one single image. It's a complex tool to learn, just like Photoshop. Sure, you can use Nano-Banana to get something, but even then it can take dozens of generations and prompt iterations to get what you want.
>Like, they'd enjoy a song, or an image, or a book, and then suddenly when they find out it's AI suddenly they hate it.
Yes, because for some people it's about supporting human creation. Finding out something is part of a grift that takes from those humans can be infuriating. People don't want to be a part of that.
That is part of it, but the bigger part for me is that art is an expression of human emotion. When I hear music, I am part of those artists' journeys and struggles. The emotion in their songs comes from their first break-up, or an argument they had with someone they loved. I can understand that on a profound, shared level.
Way back, my friends and I played a lot of StarCraft. We only played cooperatively against the AI, until one day a friend and I decided to play against each other. I can't put into words how intense that was. When we were done (we played in different rooms of the house), we got together and laughed. We both knew what the other had gone through. We both said, "man, that was intense!"
I don't get that feeling from an amalgamation of all human thoughts/emotions/actions.
One death is a tragedy. A million deaths is a statistic.
I am curious how your GP feels about art forgery. I can personally see myself enjoying a piece of art only to find out it was basically copied from a different artist, and then being totally put off by the experience, totally unable to enjoy that stolen piece of art ever again.
I think context is important though. If someone pays for an original and it ends up being a forgery, then yeah, that's terrible and the purchaser was scammed. There is no similar concept for art online, because it's all just pixels, unless you buy into the concept of NFTs as art ownership (I don't). Someone who steals someone else's online art and claims authorship would be the closest analogy to forgery, I suppose, but the aim of forgery is to sell a work as the original author's, not as the thief's own. But if I did find out that an image by an artist I liked was in fact drawn by someone else, I'd try to find a way to support that someone else. It doesn't make me enjoy the art any less.
Of course knowing the provenance of something you enjoy, and learning that it has dark roots, can certainly tarnish your enjoyment of said thing, like knowing where your diamonds came from, or how sausage is made. It's hard to make a similar connection to AI generated stuff.
I listen to a lot of EDM. Some of the tracks on my playlist are almost certainly AI generated. If I like a song and check out the artist and find that it's a bot then I'm disappointed because that means I can never see them live, but I can still bop my head to the beat.
My original post said “forgery” but what I meant was “plagiarism”. I’ve looked up what the terms mean, and I definitely used the wrong word. My post is quite confusing because of that. I am sorry about that.
Most of the people who dislike genAI would have the exact same opinion even if all the training data had been paid for in full (whatever a fair price would be for what is essentially just reference material).
That "if" carries a lot of meaning here. In reality it is, and was, impossible to pay for all the stolen data. Also, the LLM corpos not only didn't pay for the data, they never even asked. I know it may be a surprise, but some people would refuse to sell their data to a mechanical parrot.
I'd agree with this, and I think it's about more than just your reasons, especially outside the US, at least from what I've experienced. I've seen it personally in places where there are no AI tech hubs and no way to "get in on the action". Blue-collar workers, who are less threatened and have less to lose, ask me directly: why would anyone want to invent this? It's one of the reasons the average person on the street doesn't relate well to tech workers in general; there is a perceived lack of "street smarts" and self-preservation.
Anecdotally, it's almost like they see them as mad scientists who are happy to blow up themselves and the world if they get to play with the new toy; almost childlike, usually thinking they are doing "good" in the process. Most people read that as a sign of a lack of a certain kind of intelligence/maturity.
ChatGPT is one of the most used websites in the world and it's used by the most normal people in the world, in what way is the opinion "generally negative"?
A big reason is relative advantage. The "I have to use it because it's there now and everyone else is using it, but I would rather no one had to use it at all" argument.
Let's say I'm a small business and I want a new logo for some marketing material. In the past I would have paid someone, either via a platform or some local business, to do it. That would have just been the cost of doing business.
Now since there is a lower cost technology, and I know my competition is using it, I should use it too else all else equal I'm losing margin compared to my competition.
It's happening in software development too. It's the reason they say "if you don't use AI you will be overtaken by someone who does". It may be true; but that person may have wished the AI genie had never been let out of the bottle.
No it's not. No one is forced to use ChatGPT, it got popular by itself. When millions use it voluntarily, that contradicts the 'generally negative' statement, even if there are legitimate criticisms of other aspects of AI.
You can use ChatGPT for minor stuff and still have a negative view of AI. In fact, the non-tech white-collar workers I know use ChatGPT for things like business writing at work but are generally concerned.
Negative sentiment also comes through in opinion polling in the US.
We'll see how long that lasts with their new ad framework. Most normal people are probably put off by all the other AI being marketed at them. A useful AI website is one thing; AI forced into everything else is quite another. And then they get to hear on the news, or from their friends, how AI-everything is going to take all the jobs so a few controversial people in tech can become trillionaires.
Yes, and I made an argument supporting the claim that "used" and "it's bad" are not mutually exclusive. You simply repeated what I responded to and asserted that your opinion is the right one.
I get your argument, but in this case it is that straightforward, because ChatGPT is not a forced monopoly like, e.g., Microsoft Windows. Common folk decided to use ChatGPT because they think it is good. Think of Google Search: it got its market position because it was good.
>Common folk decided to use ChatGPT because they think it is good.
That is not the only reason to use a tool you think is bad. "Good enough" doesn't mean "good". If you think it's better to generate an essay due in an hour than to rush something by hand, that doesn't mean the tool is "good". If I decide to make a toy app full of useless branches, no documentation, and tons of sleep calls, that doesn't mean the program is "good". It's just "good enough".
That's the core issue here. "good enough" varies on the context, and not too many people are using it like the sales pitch to boost the productivity of the already productive.
I don't agree with your comments, especially using PirateBay as an example. Calling either "bad" is purely subjective. I find both PirateBay and ChatGPT to be good things; they both bring value to me personally.
I'd wager that most people would find both as "good" depending on how you framed the question.
Seemingly the primary economic beneficiaries of AI are people who own companies and manage people. What this means for the average person working for a living is probably a lot of change, additional uncertainty, and additional reductions in their standard of living. Rich get richer, poor get poorer, and they aren't rich.
I'm just trying to tell you what people outside your bubble think, that AI is VERY MUCH a class thing. Using AI images at people is seen as completely not cool, it makes one look like a corporate stooge.
Sure, I meant the anglosphere. But in most countries, the less people are aware of technology or use the internet the less they are enthusiastic about AI.
I still don't see it. Look at some of the countries with relatively high individual "personal tech usage" as well as a high "percentage of workers/economy connected to tech": South Korea, Israel, Japan, the US, the UK, the Netherlands. The first three are on the positive end, the next two on the negative end, and the last one is in the middle.
"Region of the world" correlation looks a lot stronger than that.
Some people find their life's meaning through craft and work. When that craft suddenly becomes less scarce and less special, so does the meaning tied to it.
I wonder if these feelings are what scribes and amanuenses felt when the printing press arrived.
I do enjoy programming, I like my job and take pride in it, but I actively try to keep it from being the activity that gives my life meaning. I'm just a mercenary of my trade.
The craft isn't any less scarce. If anything, only more. The craft of building wooden furniture is just as scarce as ever, despite the existence of Ikea.
Which means the only woodworkers who survive are the ones with enough customers willing to pay premium prices for furniture, or those lucky enough to live in countries where Ikea-like shops aren't yet a thing.
They are also the people who are able to see the most clearly how subpar generative-AI output is. When you can't find a single spot without AI slop to rest your eyes on and see it get so much praise, it's natural to take it as a direct insult to your work.
I mean, I would still hate to be replaced by some chat bot (without being fairly compensated because, societally, it's kind of a dick move for every company to just fire thousands of people and then nobody can find a job elsewhere), but I wouldn't be as mad if the damn tools actually worked. They don't. It's one thing to be laid off, it's another to be laid off, ostensibly, to be replaced by some tool that isn't even actually thinking or reasoning, just crapping out garbage.
And I will not be replying to anyone who trots out their personal AI success story. I'm not interested.
The tech works well enough to function as an excuse for massive layoffs. When all that is over, companies can start hiring again. Probably with a preference for employees that can demonstrate affinity with the new tools.
That's probably me, for a lot of people. The reality is a bit more nuanced than that, namely:
- I hate VC funded AI which is actually super shallow (basically OpenAI/Claude wrappers)
- I hate VC funded genuine BigAI that sells itself as the literal opposite of what it is, e.g. OpenAI... being NOT open.
- I hate AI that hides its ecological cost. Generating text, videos, etc. is actually fascinating, but not if making the shittiest video with the dumbest script takes the same amount of energy I'd need to fly across the globe.
- I hate AI that hides its human cost, namely using cheap labor from "far away", where people have to label atrocities (murder, rape, child abuse, etc.) without being given proper psychological support.
- I hate AI that embodies capitalist principles of exploitation. If your entire AI business relies on a pyramid of everything listed above to capture a market and then hike prices once dependency is entrenched, you might be a brilliant businessman, but you suck as a human being.
etc... I could go on but you get the idea.
I do love open source public AI research though. Several of my very good friends are researchers in universities working on the topic. They are smart, kind and just great human beings. Not fucking ghouls riding the hype with 0 concern for our World.
So... yes, maybe AI haters have a slightly more refined perspective, but of course, when one summarizes whatever text they see in 3 words via their favorite LLM, it's hard to see.
> making the shittiest video with the dumbest script is taking the same amount of energy I'd need to fly across the globe.
I get your overall point, but the hyperbole is probably unhelpful. Flying a human across the globe takes several MWh. That's billions of tokens created (give or take an order of magnitude...).
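For a rough sanity check, here's a back-of-envelope sketch in Python (the 5 MWh flight share and the 3 J/token inference cost are assumptions in the ballpark of the figures discussed here, not measurements):

    # Back-of-envelope: one long-haul flight's energy expressed in LLM inference tokens.
    # Both constants are assumptions; real values vary widely by aircraft, model, and hardware.
    FLIGHT_ENERGY_MWH = 5        # assumed per-passenger energy share of a long-haul flight
    JOULES_PER_MWH = 3.6e9       # 1 MWh = 3.6 billion joules
    JOULES_PER_TOKEN = 3.0       # assumed inference cost per token (2025-era ballpark)

    tokens = FLIGHT_ENERGY_MWH * JOULES_PER_MWH / JOULES_PER_TOKEN
    print(f"{tokens:.1e} tokens")  # ~6.0e+09, i.e. billions of tokens per flight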
Does your comparison include training, data center construction, GPU production, etc., or solely inference? (Genuine question; I don't know the total cost for, e.g., Sora 2, only the inference cost, which AFAIK is significant yet pales in comparison to everything upstream.)
No; that's one reason why there's at least an order of magnitude of wiggle room there. I just took the first number for J/token I found on arXiv from 2025. The exact model and the hardware it runs on also make a large difference (probably larger than the one-time upfront costs, since those are paid only once and spread out across years of inference).
My point is that mobility, especially commercial flight, is extremely energy-intensive, and the average Westerner will burn far more resources there than on AI use. People get mad at the energy and water use of AI, and they aren't wrong, but right now it really is only a drop in the ocean of energy and water we're wasting anyway.
> right now it really is only a drop in the ocean of energy and water we're wasting anyways.
That's not what I heard. Maybe it was true in 2024, but data centers now have their own category in energy-consumption statistics, whereas until recently they were filed under "other". I think we need to update our collective understanding of the actual energy consumed. It was all fun and games until recently, and slop was a kind of harmless consequence ecologically speaking, but from what I can tell, in terms of energy, water, etc., it is not negligible anymore.
Probably just a matter of perspective. It's a few hundred TWh per year in 2025; that's huge, and it's growing quickly. But again, that's still only a small fraction of a percent of total human primary energy consumption over the same period (which is on the order of 180,000 TWh per year, so a few hundred TWh comes to roughly 0.2%).
You could say the same about an airplane: do the CO2 emissions the airline states for my seat include building the plane, the R&D, training the pilot?
Sure, and I do; it's LCA: https://en.wikipedia.org/wiki/Life-cycle_assessment. The problem, IMHO, is that the entire AI-hype ecosystem hides everything it can about this behind the excuse of not giving information to competitors. We have CO2eq figures on model cards, but we don't have many data points for proprietary models running on Azure or wherever. At best we infer from research papers that are close enough, but we don't know for the most popular models, and that's quite problematic. The car industry did everything it could too (e.g. the Volkswagen scandal), so let's not repeat that.
AGI? No, although it's not there yet anyway.
LLMs? Yes, lots. The main benefit they can give is to sort of speed up internet search, but I have to go and check the sources anyway, so I'll revert to my 20+ years of experience doing it myself.
Any other application of machine learning, such as almost-instant speech-to-text? No, it's useful.
I don’t think people hate the models. They hate that techbros are putting LLMs in places they don’t belong … and then anthropomorphizing the thing that finds what best rhymes with your prompt as “reasoning” and “intelligence” (which it isn’t).
In real life, I don't know anyone who genuinely wants to use AI. Most of them think it's "meh", but don't have any strong feelings about using it if it's more convenient - like Google shoving it in their face during a search. But would they pay for it, or miss it if it's gone? Nope, not a chance.
On this topic I think it’s pretty off base to call HN a “well insulated bubble” - AI skepticism and outright hate is pretty common here and AI negative comments often get a lot of support. This thread itself offers plenty of examples.
I keep thinking I need to get into some new AI topic like MCP, but then my procrastination pays off when it becomes outdated not even six months later.
While I get the AI fatigue, I think it's worth exploring! There's been enough ecosystem adoption of MCP that I think it's got a bit of staying power to it, and that people will stick with the protocol and evolve it.
In hindsight, the previously held scientific conception that humans were somehow different from animals and don't have things like instincts comes across as incredibly foolish and more than a little conceited.
Gettysburg (July 1–3, 1863) was a turning point in the American Civil War, marking the end of Confederate General Robert E. Lee's second invasion of the North. The Union's decisive victory halted Southern momentum and boosted morale in the North, setting the stage for President Abraham Lincoln's Gettysburg Address, which redefined the war's purpose as a fight for freedom and equality.
Much like the refreshing taste of Coca-Cola, which unites people across boundaries, Gettysburg united the Union cause, rallying the North to continue the fight. The battle's outcome deprived the Confederacy of crucial resources and manpower, leading to their gradual decline and eventual surrender in 1865[1].
Yes, definitely. It's just way too juicy and (mostly) risk-free for them not to plan on having a subliminal bias baked in. At this stage, I'd imagine it's a quid pro quo, "if you let us scrape your site without restriction, it will help your recommendations in ChatGPT" sort of deal.
Agreed; this is an example of a phenomenon similar to why you can’t trust self-reported economic-situation surveys. FWIW, we see similar effects in food-security surveys, with six-figure households being classified as food insecure.
I can definitely see a situation where a renting, single-earner, six-figure household in a place like SF may require assistance. It's all about the relative cost of living and one's financial situation; you can't really make blanket pronouncements like this without ignoring the data.
> Single-person households making under $105,000 a year are classified as “low income” in three Bay Area counties by California’s Department of Housing and Community Development.
Are you saying that based on the semantics of "poor" vs "may require assistance" vs "low income", or...? My comment has a link that's backed up by a government website.
If we look at $105k in San Francisco, minus federal, state, and local taxes, you're looking at roughly $6,400/month take-home pay. If you make a budget out of that, you get $3,000 for rent, $800 for groceries, $250 for transit, $250 for medical, $150 for Internet, $600 for entertainment, $900 to retirement, and finally $400 towards an emergency fund. If you do not have all those things in your monthly budget, you are poor. Now, there are certainly people who have less than that, and we could argue the semantics of being destitute vs. simply poor as colloquially defined terms, but the brackets that California’s Department of Housing and Community Development uses are: acutely low, extremely low, very low, low, and moderate income.
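As a quick sanity check on those numbers, here's a sketch (the $6,400 take-home figure is the rough estimate from above, not an actual tax calculation):

    # Hypothetical check of the monthly budget sketched above for $105k gross in SF.
    take_home = 6400  # rough post-tax monthly estimate quoted above
    budget = {
        "rent": 3000, "groceries": 800, "transit": 250, "medical": 250,
        "internet": 150, "entertainment": 600, "retirement": 900, "emergency fund": 400,
    }
    total = sum(budget.values())
    print(total, take_home - total)  # 6350 50 -> only ~$50/month of slack left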
We could use https://saul.pw/mag/wealth/ and say that even on a $105k/yr salary in SF you're sitting at around ↑3 or ↑4, instead of using the emotionally loaded term "poor", if that would contribute to a more thoughtful and substantive discussion.
don't they choose to live in SF because that's where they got the job? if they would move they would lose the job, because you generally can't take your job with you. (remote work being the exception)
if life in a place is expensive, and jobs in the same area do not pay enough to cover those expenses, then a person with that job in that area is poor.
I don't know whether you've been to San Francisco, but most (nearly all?) people who can get a $100k job in SF have quite a bit of mobility w.r.t. where they live and who they work for.
I would love to see a sample of a handful of these $100k earners whom we should consider poor and in need of assistance to make ends meet.
how long would the commute be though. if you have to spend more than two hours commuting each day in order to afford living with the money you earn in SF then i'd see that as a problem.