1. You were a therapy session for her. Her negativity was about the layoffs.
2. FAANG companies dramatically overhired for years and are using AI as an excuse for layoffs.
3. The AI scene in Seattle is pretty good, but as with everywhere else it was/is a victim of the AI hype. I see estimates of the hype being dead in a year. AI won't be dead, but throwing money at whatever Uber-for-pets-AI-ly idea pops up won't happen.
4. I don't think people hate AI, they hate the hype.
Anyways, your app actually does sound interesting so I signed up for it.
Yeah, as a gamer I get a lot of game news in my feeds. Apparently there's a niche of indie games that claim to be AI-free. [0]
And I read a lot of articles about games that seem to love throwing a dig at AI even if it's not really relevant.
Personally, I can see why people dislike Gen AI. It takes people's creations without permission.
That being said, morality of the creation of AI tooling aside, there are still people who dislike AI-generated stuff. Like, they'd enjoy a song, or an image, or a book, and then when they find out it's AI they suddenly hate it. In my experience playing with ComfyUI to generate images, it's really easy to get something half decent and really hard to get something very high quality. It really is a skill in itself, but people who hate AI think it's just typing a prompt and getting an image. I've seen workflows with 80+ nodes, multiple prompts, multiple masks, and multiple LoRAs to generate one single image. It's a complex tool to learn, just like Photoshop. Sure, you can use Nano-Banana to get something, but even then it can take dozens of generations and prompt iterations to get what you want.
>Like, they'd enjoy a song, or an image, or a book, and then when they find out it's AI they suddenly hate it.
Yes, because for some people it's about supporting human creation. Finding out it's part of a grift to take from said humans can be infuriating. People don't want to be a part of that.
That is part of it, but the bigger part for me is that art is an expression of human emotion. When I hear music, I am part of that artist's journey and struggles. The emotion in their songs comes from their first break-up, or an argument they had with someone they loved. I can understand that on a profound, shared level.
Way back, me and my friends played a lot of StarCraft. We only played cooperatively against the AI. Until one day me and a friend decided to play against each other. I can't put into words how intense that was. When we were done (we played in different rooms of the house), we got together and laughed. We both knew what the other had gone through. We both said, "man, that was intense!".
I don't get that feeling from an amalgamation of all human thoughts/emotions/actions.
One death is a tragedy. A million deaths is a statistic.
I am curious how your GP feels about art forgery. I can personally see myself enjoying a piece of art only to find out it was basically copied from a different artist, and then being totally put off by the experience, totally unable to enjoy that stolen piece of art ever again.
I think context is important though. If someone pays for an original and it ends up being a forgery, then yeah, that's terrible and the purchaser was scammed. There is no similar concept for art online because it's all just pixels, unless you buy into the concept of NFTs as art ownership (I don't). Someone who steals someone else's online art and claims authorship would be the closest analogy to forgery, I suppose, but the aim of forgery is to sell the work as the original author's, not as the thief's. But if I did find out that an image by an artist I liked was indeed drawn by someone else, I'd try to find a way to support that someone else. It doesn't make me enjoy the art any less.
Of course knowing the provenance of something you enjoy, and learning that it has dark roots, can certainly tarnish your enjoyment of said thing, like knowing where your diamonds came from, or how sausage is made. It's hard to make a similar connection to AI generated stuff.
I listen to a lot of EDM. Some of the tracks on my playlist are almost certainly AI generated. If I like a song and check out the artist and find that it's a bot then I'm disappointed because that means I can never see them live, but I can still bop my head to the beat.
My original post said “forgery” but what I meant was “plagiarism”. I’ve looked up what the terms mean, and I definitely used the wrong word. My post is quite confusing because of that. I am sorry about that.
Most of the people that dislike genAI would have the exact same opinion if all the training data was paid for in full (whatever a fair price would be for what is essentially just reference material).
That "if" carries a lot of meaning here. In reality, it is and was impossible to pay for all the stolen data. Also, the LLM corpos not only didn't pay for the data, they never even asked. I know it may be a surprise, but some people would refuse to sell their data to a mechanical parrot.
I'd agree with this and think there's more to it than just your reasons, especially if you venture outside the US, at least from what I've experienced. I've seen it personally more so where there are no AI tech hubs around and no way to "get in on the action". I see blue-collar workers who are less threatened, with less to lose, ask me directly: why would anyone want to invent this? It's one of the reasons the average person on the street doesn't relate well to tech workers in general; there is a perceived lack of "street smarts" and self-preservation.
Anecdotally, it's almost like they see them as mad scientists who are happy blowing up themselves and the world if they get to play with the new toy; almost childlike, usually thinking they are doing "good" in the process. That is seen by most people as a sign of a lack of a certain type of intelligence/maturity.
ChatGPT is one of the most used websites in the world, and it's used by the most normal people in the world. In what way is the opinion "generally negative"?
No it's not. No one is forced to use ChatGPT, it got popular by itself. When millions use it voluntarily, that contradicts the 'generally negative' statement, even if there are legitimate criticisms of other aspects of AI.
We'll see how long that lasts with their new ad framework. Probably most normal people are put off by all the other AI being marketed at them. A useful AI website is one thing; AI forced into everything else is quite another. And then they get to hear on the news or from their friends how AI-everything is going to take all the jobs so a few controversial people in tech can become trillionaires.
A big reason is relative advantage. The "I have to use it because it's there now and everyone else is using it, but I would rather no one have to use it at all" argument.
Let's say I'm a small business and I want to produce a new logo for some marketing material. In the past I would have paid someone, either via a platform or some local business, to do it. That would have just been the cost of doing business.
Now since there is a lower cost technology, and I know my competition is using it, I should use it too else all else equal I'm losing margin compared to my competition.
It's happening in software development too. It's the reason they say "if you don't use AI you will be overtaken by someone who does". It may be true; but that person may have wished the AI genie was never let out of the bottle.
You can use ChatGPT for minor stuff and still have a negative view of AI. In fact, the non-tech white-collar workers I know use ChatGPT for stuff like business writing at work but are generally concerned.
Negative sentiment also comes through in opinion polling in the US.
Yes, and I made an argument supporting that "used" and "it's bad" are not mutually exclusive. You simply repeated what I responded to and asserted that yours is the right opinion.
I get your argument, but in this case it really is that straightforward, because it's not a forced monopoly like e.g. Microsoft Windows. Common folk decided to use ChatGPT because they think it is good. Think Google Search: it got its market position because it was good.
>Common folk decided to use ChatGPT because they think it is good.
That is not the only reason to use a tool you think is bad. "Good enough" doesn't mean "good". If you think it's better to generate an essay due in an hour than to rush something by hand, that doesn't mean it's "good". If I decide to make a toy app full of useless branches, no documentation, and tons of sleep calls, it doesn't mean the program is "good". It's just "good enough".
That's the core issue here. "Good enough" varies with context, and not many people are using it the way the sales pitch describes: to boost the productivity of the already productive.
I don't agree with your comments, especially using PirateBay as an example. Calling either "bad" is purely subjective. I find both PirateBay and ChatGPT to be good things. They both bring value to me personally.
I'd wager that most people would find both as "good" depending on how you framed the question.
Seemingly the primary economic beneficiaries of AI are people who own companies and manage people. What this means for the average person working for a living is probably a lot of change, additional uncertainty, and additional reductions in their standard of living. Rich get richer, poor get poorer, and they aren't rich.
I'm just trying to tell you what people outside your bubble think: that AI is VERY MUCH a class thing. Pushing AI images at people is seen as completely not cool; it makes one look like a corporate stooge.
Sure, I meant the anglosphere. But in most countries, the less people are aware of technology or use the internet the less they are enthusiastic about AI.
I still don't see it. Look at some of the countries with relatively high individual "personal tech usage" as well as a high "percentage of workers/economy connected to tech": South Korea, Israel, Japan, the US, the UK, the Netherlands. The first three are on the positive end, the next two on the negative end, and the last one in the middle.
"Region of the world" correlation looks a lot stronger than that.
Some people find their life's meaning through craft and work. When that craft suddenly becomes less scarce, less special, so does the meaning tied to it.
I wonder if these feelings are what scribes and amanuenses felt when the printing press arrived.
I do enjoy programming, I like my job and take pride in it, but I actively try not to make it the meaning-giving activity of my life. I'm just a mercenary of my trade.
The craft isn't any less scarce. If anything, only more. The craft of building wooden furniture is just as scarce as ever, despite the existence of Ikea.
Which means the only woodworkers that survive are the ones with enough customers willing to pay premium prices for furniture, or those lucky enough to live in countries where Ikea-like shops aren't yet a thing.
They are also the people best able to see how subpar generative-AI output is. When you can't find a single spot without AI slop to rest your eyes on, and you see it get so much praise, it's natural to take it as a direct insult to your work.
I mean, I would still hate to be replaced by some chat bot (without being fairly compensated because, societally, it's kind of a dick move for every company to just fire thousands of people and then nobody can find a job elsewhere), but I wouldn't be as mad if the damn tools actually worked. They don't. It's one thing to be laid off, it's another to be laid off, ostensibly, to be replaced by some tool that isn't even actually thinking or reasoning, just crapping out garbage.
And I will not be replying to anyone who trots out their personal AI success story. I'm not interested.
The tech works well enough to function as an excuse for massive layoffs. When all that is over, companies can start hiring again. Probably with a preference for employees that can demonstrate affinity with the new tools.
That's probably me for a lot of people. The reality is a bit more nuanced than that, namely:
- I hate VC funded AI which is actually super shallow (basically OpenAI/Claude wrappers)
- I hate VC funded genuine BigAI that sells itself as the literal opposite of what it is, e.g. OpenAI... being NOT open.
- I hate AI that hides its ecological cost. Generating text, videos, etc. is actually fascinating, but not if making the shittiest video with the dumbest script takes the same amount of energy I'd need to fly across the globe.
- I hate AI that hides its human cost, namely using cheap labor from "far away" where people have to label atrocities (murders, rape, child abuse, etc.) without being provided proper psychological support.
- I hate AI that embodies capitalist principles of exploitation. If your entire AI business relies on a pyramid of everything listed above to capture a market and then hike prices once dependency is entrenched, you might be a brilliant businessman, but you suck as a human being.
etc... I could go on but you get the idea.
I do love open source public AI research though. Several of my very good friends are researchers in universities working on the topic. They are smart, kind and just great human beings. Not fucking ghouls riding the hype with 0 concern for our World.
So... yes, maybe AI haters have a slightly more refined perspective, but of course, when one summarizes whatever text they see in 3 words via their favorite LLM, it's hard to see.
> making the shittiest video with the dumbest script takes the same amount of energy I'd need to fly across the globe.
I get your overall point, but the hyperbole is probably unhelpful. Flying a human across the globe takes several MWh. That's billions of tokens created (give or take an order of magnitude...).
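If you want to sanity-check that ratio, here's a minimal back-of-envelope sketch. Both inputs are assumptions, not measurements: ~4 MWh of fuel energy per passenger for a long-haul flight, and ~3 J per generated token for inference, each easily off by a factor of a few:

    # Back-of-envelope: generated tokens per long-haul flight (rough assumptions)
    flight_energy_j = 4e6 * 3600   # ~4 MWh per passenger, converted to joules
    joules_per_token = 3.0         # assumed per-token inference cost (varies ~10x)
    print(f"{flight_energy_j / joules_per_token:.1e} tokens")  # ~4.8e+09, i.e. billions

Move either input by an order of magnitude and you still land somewhere between hundreds of millions and tens of billions of tokens.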
Does your comparison include training, data center construction, GPU production, etc., or solely inference? (Genuine question: I don't know the total cost for e.g. Sora2, only inference, which AFAIK is significant yet pales in comparison to everything upstream.)
No, that's one reason why there's at least an order of magnitude of wiggle room there. I just took the first number for J/token I found on arXiv from 2025. The choice of exact model and the hardware it runs on also makes a large difference (probably larger than your one-time upfront costs, since those are incurred only once and spread out across years of inference).
My point is that mobility, especially commercial flight, is extremely energy-intensive, and the average westerner will burn much more resources there than on AI use. People get mad at the energy and water use of AI, and they aren't wrong, but right now it really is only a drop in the ocean of energy and water we're wasting anyways.
> right now it really is only a drop in the ocean of energy and water we're wasting anyways.
That's not what I heard. Maybe it was true in 2024, but now data centers have their own category in energy consumption statistics, whereas until recently they were lumped under "other". I think we need to update our collective understanding of the actual energy consumed. It was all fun & games until recently, and slop was a kind of harmless consequence ecologically speaking, but from what I can tell, in terms of energy, water, etc., it is not negligible anymore.
Probably just a matter of perspective. It's a few hundreds of TWh per year in 2025 - that's huge, and it's growing quickly. But again, that's still only a small fraction of a percent of total human primary energy consumption during the same time.
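For scale, a quick sketch with assumed round figures (~300 TWh/yr for data centers in 2025, against roughly 170,000 TWh/yr, i.e. ~620 EJ, of total human primary energy consumption):

    # Rough share of data centers in global primary energy (assumed round numbers)
    datacenter_twh = 300           # assumed 2025 data center consumption, TWh/yr
    primary_energy_twh = 170_000   # ~620 EJ of global primary energy per year
    print(f"{datacenter_twh / primary_energy_twh:.2%}")  # ~0.18%

Huge in absolute terms, still small as a share, which is all I'm claiming.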
You could say the same about the airplane: do the CO2 emissions that the airline states for my seat include building the plane, the R&D, training the pilot?
Sure, and I do; it's called LCA (https://en.wikipedia.org/wiki/Life-cycle_assessment). The problem IMHO is that the entire AI hype ecosystem hides everything it can about this behind the excuse of not giving information to competitors. We have CO2eq on model cards, but we don't have many data points on proprietary models running on Azure or wherever. At best we infer from research papers that are close enough, but we don't know for the most popular models, and that's quite problematic. The car industry did everything it could too, e.g. the Volkswagen scandal, so let's not repeat that.
AGI? No, although it's not there.
LLMs? Yes, lots. The main benefit they can give is to sort of speed up internet search, but I have to go and check the sources anyway, so I'll revert to 20+ years of experience of doing it myself.
Any other application of machine learning, such as almost-instant speech-to-text? No, it's useful.
I don’t think people hate models. They hate that techbros are putting LLMs in places they don’t belong … and then trying to anthropomorphize the thing that finds what best rhymes with your prompt as “reasoning” and “intelligence” (which it isn’t).
In real life, I don't know anyone who genuinely wants to use AI. Most of them think it's "meh", but don't have any strong feelings about using it if it's more convenient - like Google shoving it in their face during a search. But would they pay for it, or miss it if it's gone? Nope, not a chance.
On this topic I think it’s pretty off base to call HN a “well insulated bubble” - AI skepticism and outright hate is pretty common here and AI negative comments often get a lot of support. This thread itself offers plenty of examples.
I hate to be cagey here but I just really don’t want to make anyone’s life harder than it needs to be by revealing their identity. Microsoft is a really tough place to be an employee right now.
That's because there are at least 5 different definitions of AI.
- At its inception in 1955 it was "learning or any other feature of intelligence" simulated by a machine [1] (fun fact: both neural networks and computers using natural language were on the agenda back then)
- Following from that we have "all machine learning is AI", which was the prevalent definition about a decade ago
- Then there's the academic definition that is roughly "computers acting in real or simulated environments" and includes such mundane and algorithmic things as path finding
- Then there's obviously AGI, or the closely related Hollywood/SciFi definition of AI
- Then there's just "things that the general public doesn't expect computers to be able to do". Back when chess computers used to be called AI, this was probably the closest fitting definition. Clever salespeople also used to love calling prediction via simple linear regression AI
Notably, four out of five of them don't involve computers actually being intelligent. And just a couple of years ago we were still selling simple face detection as AI.
It's the opposite. It is doing the driving but you really have to provide lane assist, otherwise you hit the tree, or start driving in the opposite direction.
Many people claim it's doing great because they have driven hundreds of kilometers, but don't particularly care whether they arrived at the exact place, and are happy with the approximate destination.
Is the siren song of "AI effect" so strong in your mind that you look at a system that writes short stories, solves advanced math problems and writes working code, and then immediately pronounce it "not intelligent"?
It doesn’t actually solve those math problems though, does it? It replies with a solution if it has seen one often enough in the training data, or with something that looks like a solution but isn’t. In the end, the human still needs to verify it.
Same for short stories, it doesn’t actually write new stories, it rehashes stories it (probably illegally) ingested in training data.
LLMs are good at mimicking the content they were trained on, they don’t actually adopt or extend the intelligence required to create that content in the first place.
Oh, I remember those talks. People actually checking whether an LLM's response is something that was in the training data, something that was online that it replicated, or something new.
They weren't finding a lot of matches. That was odd.
That was in the days of GPT-2. That was when the first weak signs of "LLMs aren't just naively rephrasing the training data" emerged. That finding was controversial, at the time. GPT-2 couldn't even solve "17 + 29". ChatGPT didn't exist yet. Most didn't believe that it was possible to build something like it with LLM tech.
I wish I could say I was among the people who had the foresight, but I wasn't. Got a harsh wake-up call on that.
And yet, here we are, in year 20-fucking-25, where off-the-shelf, commercially available AIs burn through math competitions and one-shot coding tasks. And people still say "they just rehash the training data".
Because the alternative is: admitting that we found an algorithm that crams abstract thinking into arrays of matrix math. That it's no longer human exclusive. And that seems to be completely unpalatable to many.
You and I must be using very different versions of Claude. As an infra/systems guy (non-coder), the ability for me to develop some powerful tools simply by leveraging Claude has been nothing short of amazing. I started using Claude about 8 months ago and have since created about 20 tools ranging from simple USB detection scripts (for secure erasing SSDs) to complex tools like an Azure File Manager and a production-ready data migration tool (Azure to Snowflake). Yes, I know bash and some Python, but Claude has really helped me create tools that would have taken many weeks/months to build using the right technology stack. I am happy to pay for the Claude Max plan; it has returned huge dividends to my productivity.
And, maybe that is the difference. Non-coders can use AI to help build MVPs and tooling they otherwise could not (or that would take a long time to get done). On the other hand, professional coders see this as an intrusion into their domain, become very skeptical because it does not write code "their way" or introduces some bugs, and push back hard.
Yeah. You're not a coder, so you don't have the expertise to see the pitfalls and problems with the approach.
If you want to use concrete to anchor some poles in the ground, great. Build that gazebo. If it falls down, oh well.
If you want to use concrete to make a building that needs to be safe and maintained, it's critical that you use the right concrete mix, use rebar in the right way, and seal it properly.
Civil engineers aren't "threatened" by hobbyists building gazebos. Software engineers aren't "threatened" by AI. We're pointing out that the building's gonna fall over if you do it this way, which is what we're actually paid to do.
Sorry, but carefully read the comments on this thread and you will quickly realize "real" coders are very much threatened by this technology - especially junior coders. They are frightened that their jobs are at stake because of a new tool, and they take a very anti-AI view of the entire domain - probably more so for those who live in areas where wages are not high to begin with. People who come from a different perspective truly see the value of what these tools can help you do. To say all AI output is slop or garbage is just wrong.
The flip of this is to understand and appreciate what the new tooling can help you do and adopt. Sure, junior coders will face significant headwinds, but I guarantee you there are opportunities waiting to get uncovered. Just give it a couple of years...
Every HN thread about AI eventually has someone claiming the code it produces is “trash” or “non-working.” There are plenty of top-tier programmers here who dismiss anyone who actually finds LLM-generated code useful, even when it gets the job done.
I’m tempted to propose a new law—like Poe’s or Godwin’s—that goes something like: “Any discussion about AI will eventually lead to someone insisting it can’t match human programmers.”
Seeing an AI casually spit out an 800-line script that works first try is really fucking humbling to me, because I know I wouldn't be able to do that myself.
Sure, it's an area of AI advantage, and I still crush AI in complex codebases and embedded code. But AI is clearly not strictly worse than me. The fact that it already has this area of advantage should give you pause.
Humbling indeed. I am utterly amazed at Claude's breadth of knowledge and ability to understand the context of our conversations. Even if I misspell words, don't use the exact phrase, or call something a function instead of a thread, Claude understands what I want and helps make it happen. Not to mention the ability to read hundreds of lines of debug output and point out a tiny error that caused the bug.
I think these companies would benefit from honesty. If they're right and their new AI capabilities really are powerful, then poisoning their workforce against AI is the worst thing they could do right now. A generous severance approach and compassionate layoffs would go a long way.
I was an early employee at a unicorn and I saw this culture take hold once we started hiring from Big Tech talent pools and offering Big Tech comp packages, though before AI hype took off. There's a crazy lack of agency that kicks in for Big Tech folks that's really hard to explain. This feeling that each engineer is this mercenary trying really hard to not get screwed by the internal system.
Most of it is because there's little that ties actual output to organizational outcomes. AI mandates, after all, are just a way to bluntly force engineers to use AI, whereas at a startup or smaller company you would probably organically find out how much an LLM helps you, and where. It may not even help your actual work even if it helps your coworkers. That market feedback is sorely missing from the Big Techs, and so ham-fisted engineering mandates have to do in order to force engineers to become more efficient.
In these cases I always try to remind friends that you can always leave a Big Tech. The thing is, from what I can tell, a lot of these folks have developed lifestyle inflation from working in Big Tech and some of their anger comes from feeling trapped in their Big Tech role due to this. While I understand, I'm not particularly sympathetic to this viewpoint. At the end of the day your lifestyle is in your hands.
Thanks for signing up. I’m going to try really hard to open up some beta slots next week so more people can try it. There’s some embarrassingly bad bugs in prod right now…
It's a bit more expensive. It's not the end of the world. Production will likely increase if the demand is consistent.
> What about diverting funding from much more useful and needed things?
And who determines that? People put their money where they want to. People think AI will provide value to other people, and those people will, therefore, pay money for AI. So the funding that AI is receiving is directly proportional to how useful and needed people think AI is. I disagree, but I'm not a dictator.
> What about automation of scams, surveillance, etc?
Technology makes things easier, including bad things. This isn't the first time this has happened and it won't be the last. It also makes avoiding those things easier, though that usually lags a bit behind.
> I can keep going.
Please do because it seems like you're grasping at straws.
Not to diminish your overall point, but enshittification has been happening well before AI, AI just made it much easier and faster to enshittify everything.
It's the closing trash compactor of soullessness and hate of the human, described vividly as having affected Microsoft culture as thoroughly as intergranular corrosion can turn a solid block of aluminum to dust.
Fuck Microsoft for both hating me and hating their own people. Fuck. That. Shit.
> It's the closing trash compactor of soullessness and hate of the human, described vividly as having affected Microsoft culture as thoroughly as intergranular corrosion can turn a solid block of aluminum to dust.
That's a great way to describe it. There's a good article that points out AI is the new aesthetic of fascism. And, of course, in Miyazaki's words, "I strongly feel that this is an insult to life itself."