Hacker News | oorza's comments

I think the story makes a good point, but I'm not sure it's even the primary point the story was trying to make.

> “Writing a book is supposed to be hard,” he said.

> “Is it, though?” said the AI. The novelist wasn’t sure, but he thought he detected a touch of exasperation in the machine’s voice.

> “Perseverance is half the art,” he said. He hadn’t had much natural talent and had always known it, but he had staying power.

It's this right here. I don't think any LLM-based AI is going to be able to replace raw human creativity any time soon, but I do think it can dramatically reduce the effort it takes to express your creativity. And in that exchange, people whose success in life has been built on top of work ethic and perseverance rather than unique insight or intelligence are going to get left behind. If you accept that, you must also accept the flip side: people who have been left behind despite unique insights and intelligence because of a lack of work ethic will be propelled forward.

I think a lot of the Luddite-esque response to AI is actually a response to this realization happening at a subconscious level. From the gifted classes in middle school until I was done with schooling, I can always remember two types of students: those who didn't work very hard but succeeded on their talents, and those who were otherwise unexceptional beyond their organizational skills and work ethic. Both groups thought they were superior to the other, of course, and the latter group has gone on to have more external success in their lives (at least among the student peers I maintain contact with decades later). To wit, the smart, lazy people are high-ranking individual contributors, but the milquetoast hard workers are all management, whom the smart, lazy people who report to them bitch about. The inversion of that power dynamic in creative and STEM professions... it's not even worth describing the implications, they're so obvious.

Let's say, just for the sake of argument, that AI can eventually serve to level the playing field for everything. It outputs novels, paintings, screenplays - whatever you ask it for - of such high quality that they can't be discerned from the best human-created works. In this world, the only way an individual human matters in the equation is if they can encode some unique insight or perspective into how they orchestrate their AI; how does my prompt for an epic space opera vary meaningfully from yours? In other words, everything is reduced to an individual's unique perspective of things (and how they encode it into their communication to the AI) because the AI has normalized everything else away (access to vocabulary, access to media, time to create, everything). In that world, the only people who can hope to distinguish themselves are those with the type of specific intelligence and insight that is rarely seen; if you ask a teacher, they will recount the handful of students over their career who clear that bar. Most of us aren't across that bar, less than 1% of people can be by definition, so of course everyone emotionally rejects that reality. No one wants their significance erased.

We can hand-wring about whether that reality can ever exist, whether it exists now, whatever, but the truth is that's how AI is being sold, and I think that's the reality people are reacting to.


> And in that exchange, people whose success in life has been built on top of work ethic and perseverance rather than unique insight or intelligence are going to get left behind. If you accept that, you must also accept the flip side: people who have been left behind despite unique insights and intelligence because of a lack of work ethic will be propelled forward.

I think there's still a very high chance that someone willing to refine their AI-co-generated output 8-10+ hours a day, for days on end, will have much more success than someone who puts in 1 or 2 hours a day on it and largely takes one of the first things from one of the first prompt attempts.

The most successful people I know are in a category you leave out: the people who will put in long hours out of being super-intrinsically-motivated but are ALSO naturally gifted creatively/intelligently in some domain.


> I think there's still a very high chance that someone willing to refine their AI-co-generated output 8-10+ hours a day, for days on end, will have much more success than someone who puts in 1 or 2 hours a day on it and largely takes one of the first things from one of the first prompt attempts.

That's the truth right now, but it's merely a limitation of the technology, particularly if you imagine arbitrarily wide context windows such that the LLM can usefully begin to infer your specific preferences and implications over time.

> The most successful people I know are in a category you leave out: the people who will put in long hours out of being super-intrinsically-motivated but are ALSO naturally gifted creatively/intelligently in some domain.

Those are the people I mention at the end, those who clear the bar into being uniquely special. From what I hear from my friends who have been teaching for about twenty years now, you're lucky if you get more than one or two of those every ten years.


No previous force multipliers have lifted the "lazy but smart" over the "smart and NOT lazy". That's not how lazy works, or how taste/expectations work. The "smart and NOT lazy" will evolve their preferences, perspectives, and point of view over time much faster than the "smart and lazy" will, so even if they have these agents doing all their work for them, the people motivated to introspect much more on that work will be the ones driving the trends and leading the edge of creative production.

It's like conventions in art: you could make Casablanca much more easily today than in 1942. But if you made it today it would be seen as lazy, clichéd, and simplistic, because it's already been copied by so many other people. If you make something today, it needs to take into account that everyone has already seen Casablanca plus nearly 85 additional years of movies, and build on top of that to do something interesting that will surprise the viewer (or at least meet their modern expectations). "The best human-created works" changes over time; in your proposed world, it will change even faster, and so you'll have to pay even more attention to keep up.

So if you're content to let your AI buddy cruise along making shit for you while you just put in 1 hour a day of direction, and someone else with about equal natural spark is hacking on it for 10 hours a day—watching what everyone else is making, paying much more active attention to trends, digging in and researching obscure emerging stuff—then that second person is going to leave you in the dust.

> Those are the people I mention at the end, those who clear the bar into being uniquely special. From what I hear from my friends who have been teaching for about twenty years now, you're lucky if you get more than one or two of those every ten years.

Again, it's a false dichotomy. What you described was just "super super smart", not what I suggested as "smart + hard worker": "In that world, the only people who can hope to distinguish themselves are those with the type of specific intelligence and insight that is rarely seen; if you ask a teacher, they will recount the handful of students over their career who clear that bar. Most of us aren't across that bar, less than 1% of people can be by definition, so of course everyone emotionally rejects that reality. No one wants their significance erased." That's not hard work + smart, that's "generationally smart genius." And that set is much smaller than the set I'm talking about. It's very easy to coast on "gifted but lazy" to perpetually be a big-fish-in-a-small-pond school-wise. But there are ponds out there full of people who do both. Twenty or thirty years ago, this was the difference between a 1540 SAT score, A's and B's in high school, and going to a very good school, versus a 1540 SAT score, A's in high school with a shitload of AP courses, significant positions in extracurricular activities, and going to MIT. I don't know what it looks like for kids today - parents have cargo-culted all the extracurriculars so that it now reflects their drive more than the kids' - but those kids who left the pack behind to go to the elite institutions were grinders AND gifted.


Of course talent+effort are better than either alone, but it seems strange to argue that there will be zero effect on the value of having just one of them. AI may not raise the talented lazy person straightforwardly above the hard-working grinder but it seems likely that it will alter their relative position, in favor of talent.


What does it even mean to say "having just one of them"? I think the false dichotomy just torpedoes the ability to predict the effect of new tools at all. There's already a world of difference between the janitor who couldn't learn how to read but does his best to show up and clean the place as well as he can every day, and the middle-manager engineer with population-median math or engineering abilities but a 12-hour-day work ethic that has let him climb the ladder a bit. And the effect of the AI tools we're considering here is going to be MUCH larger on one than the other - it's gonna be worse for the smarter one, until the AIs are shoveling crap around with human-level dexterity. (Who knows, maybe that's next.)

Anyone you'd interact with in a job in an HN-adjacent field has already cleared several bars of "not actually that lazy in the big picture" to avoid flunking out of high school or college, or quitting their office job to bum around... and so at that point there's not that same black-and-white "it'll help you but hurt you" shortcut classification.

EDIT: here's a scenario where it'll already be harder to be lazy as a software engineer, not even in the "super AI" future: in the recent past, if you were quicker than your coworkers and lazy, you could fuck around for 3 hours, then knock something out in 1 hour and look just as productive as many of your coworkers, or more. If everyone knows - even your boss - that it should actually only take 45 minutes of prompting and then reviewing code from the model, and can trivially check that in the background themselves if they get suspicious, then you might be in trouble.


The "smart but lazy" person in an agentic AI workplace is the dude orchestrating a dozen models with a virtual scrum master. It's much more possible today to get a 40h work week's worth of work done in 4h than it ever has been before, because the gains that are possible with complex AI workflows are so massive, particularly if you craft workflows that match problems specifically. And because it's absolutely insane to do such a thing with modern tools and the lack of abstractions available to you, even insaner to expect people to do it, so you can't set proficiency targets on that rubric. You might have to actually work 40h at the onset, but I definitely work with someone who is considered a super hero for the amount of work they do, but I know they dick around and experiment all day every day, because all they do is churn Cursor credits into PRs through a series of insane agents. They're probably going to get a bonus for delivering an impossible project on time, as a matter of fact.

> Anyone you'd interact with in a job in an HN-adjacent field has already cleared several bars of "not actually that lazy in the big picture" to avoid flunking out of high school or college, or quitting their office job to bum around... and so at that point there's not that same black-and-white "it'll help you but hurt you" shortcut classification.

I'm clearly not talking about the _truly_ lazy people. I'm talking about classifications within the group of already successful creative/STEM professionals, who are the ones going to be maximally impacted by AI. Obviously you're not as lazy as you could be if you manage to have a 20-year software career, but that doesn't mean you aren't fundamentally lazy or don't have a terrible work ethic; it just means you have a certain minimum standard you manage to hold yourself to. That's the person I'm talking about - the person who works twelve hours a day isn't going to be able to meaningfully distinguish themselves anymore. The quantity of their work becomes immaterial, so what matters is the quality, and the smarter, lazier dude is going to have better AI output because he has smarter inputs.


> Let's say, just for the sake of argument, that AI can eventually serve to level the playing field for everything. It outputs novels, paintings, screenplays - whatever you ask it for - of such high quality that they can't be discerned from the best human-created works.

This requires the machine to understand a whole bunch of things. You're talking about AGI; at that point there will be blood in the streets, and screenplays will be the least of our problems.


I'm not sure you need AGI to clear that bar; I'm not sure you need more technology than currently exists beyond iterative improvements to things like how expensive it is to train a model.

But let's say it's free-ish to train a model, so you decide that that's how you're going to write the next Marvel movie. You train an LLM specifically on screenplay writing, teaching it to cross-reference literary techniques with audience reaction; you teach it the sum total of Marvel canon, the sum total of the American cinema canon, you train it on all the social media reaction to either, and so on. You teach it to specifically engineer screenplays for Marvel movies that Marvel audiences will do maximum Marvel fanboy shit about. Do you genuinely believe such a dedicated model couldn't output a Marvel movie that everyone would love as much as Endgame?

Obviously, in the economy of 2026, it is cheaper for Disney to hire flesh-and-blood writers instead of doing this madness. But one day it won't be - and this is hardly even the tip of the iceberg. The ability to finely hone models quickly and on-demand (potentially on a per-prompt basis) would unlock another tier of accuracy and performance from LLMs, and for some/most artistic tasks, I think that gets you to "indistinguishable from mass market media."


Marvel movies are the worst example; they're roller coaster rides, not movies. I agree any braindead idiot or machine could write one. But they couldn't know to write The Pianist, or how the subject could be approached, or why it's time to write it.

AI can make slop yes, but it can't make the kind of art people don't get tired of. It's the difference between wisdom and knowledge.


Very well said.


> Let's say, just for the sake of argument, that AI can eventually serve to level the playing field for everything. It outputs novels, paintings, screenplays - whatever you ask it for - of such high quality that they can't be discerned from the best human-created works. In this world, the only way an individual human matters in the equation is if they can encode some unique insight or perspective into how they orchestrate their AI

It's an insightful point, but I think there's more going on. It seems that quite a lot of the people consuming media and art do actually care how much it's the product of a human mind vs generated by a machine. They want connection with the artist. Maybe it's a bit like organic produce. If you give me a juicy white peach, I probably can't tell whether it's an organic one, lovingly raised and harvested by a farmer with a generations-in-the-family orchard, or one that's been fertilized, pesticide-sprayed, and genetically-engineered by a billion dollar corporation. But there's a very good chance I care about the difference. I'm increasingly getting the impression that a big swathe of consumers prefer human-made art. Probably bigger than the percentage that insist on organic produce. There will be a market for human-created works because that's something that consumers want. Yes, some authors will cheat. Some will get away with it. It'll start to look a lot like how we think of plagiarism.

Maybe the strength of that preference varies in different parts of the industry. Maybe consumers of porn or erotica or formulaic romance or guilty pleasure pop songs don't care as much about it being human-produced. Probably no one cares about the human authenticity of the author of a technical manual. But I suspect the voters at the Oscars and Grammys and Pulitzers will always care. The closer we are to calling something "art", the more it seems we care about the authenticity and intention of the person behind it.

The other thing I think is missing from the debate is the shift from mass-market works to personalized ones. Why would I buy someone else's ChatGPT-generated novel for twenty bucks when I could spend a few cents to have it generate one to my exact preferences? I'd point to the market for romance novels as one where you can already see the seeds of this. It's already common for them to be tagged by trope: "why choose", "enemies to lovers", "forced proximity", etc. Readers use those tags to find books that scratch their very specific itch. It's not a big jump from there to telling the AI to write you a book that even more closely matches your preferences. It might look even less like a traditional "book" and more like a companion or roleplay world that's created by the AI as you interact with it. You can see seeds of that out there too, in things like SillyTavern and AI companion apps.


I don't disagree, but I would argue that the reason people prefer human works over AI works is the dynamic I mentioned in the original comment. I play a lot of idle games, and it's not uncommon for one to start becoming popular, for it to be revealed that it's vibe coded, and for the community to turn against the developer; one game dev was bullied out of the community and deleted his entire online presence because he wasn't ashamed of using Claude. It isn't about anything but the mob mentality of "AI bad."

The point you make about "romance" novels is true: there are tools like SmutFinder which are effectively exactly what you describe. You can be as specific or generic as you like, you can lay out the specific plot points and chapters or let the AI do it for you, or you can build the entire story one paragraph at a time like a super-interactive choose-your-own-adventure novel. And it's all modeled on smut, specifically built for the AI to design the user's specific fantasy. Smut books are arguably the lowest and easiest bar to clear in terms of audience-acceptable quality, but this technology existing in this space today assuredly means it'll be available for science fiction novels of acceptable quality in short order, then science fiction films, and eventually all media.

But the presence of personalized works in an AI marketplace makes what I said more salient, because it serves as yet another bar to clear for someone else's artwork to become relevant to me. Why would I consume your space opera when I can make my own that's "better" according to my judgments? There are obviously reasons for me to buy an author's work even in an AI world, because I want to be surprised, but if there are dozens or hundreds of options from equally qualified creators, what makes yours special?


I mean, if I had Elon Musk money, I'd build some kind of giant carbon capture mechanism. Perhaps I'd buy the largest basalt quarry I could find and start sequestering carbon at a planetary scale. It would cost a ton of money, but I'd do it in secret. If it worked, eventually it would show up in the global measurements, and I'd emerge from the shadows. This particular method of carbon capture could potentially work at a planetary scale and be done in secret, at huge cost; the only blocking factor today is money.

https://eos.org/articles/basalts-turn-carbon-into-stone-for-...

This is the answer to carbon storage, by the way; people just do not know about it. There are more than enough reactive mineral sites on the planet. The process is basically just dissolving CO2 into water, heating it, and soaking basalt in it to allow crystals to form. The water becomes heavier than groundwater and can simply be poured into the Earth. The unsolved problems are optimization problems: direct air capture of CO2, using saltwater, that sort of thing.
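
For a sense of the mass balance, here's a back-of-envelope sketch; the molar masses are standard chemistry, but treating the reactive rock as pure CaO is an idealization - real basalt supplies a mix of calcium, magnesium, and iron:

    // Idealized mineralization: CO2 + CaO (weathered out of basalt) -> CaCO3.
    // Back-of-envelope only; real rock carbonates through a mix of Ca/Mg/Fe phases.
    public final class MineralizationSketch {
        public static void main(String[] args) {
            final double MM_CO2   = 44.01;   // g/mol
            final double MM_CAO   = 56.08;   // g/mol
            final double MM_CACO3 = 100.09;  // g/mol

            double tonnesCO2   = 1.0;
            double tonnesCaO   = tonnesCO2 * MM_CAO / MM_CO2;    // ~1.27 t consumed
            double tonnesCaCO3 = tonnesCO2 * MM_CACO3 / MM_CO2;  // ~2.27 t produced

            System.out.printf("1 t of CO2 consumes ~%.2f t of reactive CaO and becomes ~%.2f t of carbonate rock%n",
                    tonnesCaO, tonnesCaCO3);
        }
    }

Every tonne of CO2 locked up becomes more than two tonnes of stable carbonate, which is why the storage capacity of basalt provinces is so enormous.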

If the world's billionaire class decided to buy carbon sequestration, we could have global CO2 levels returned to 1900 levels within a decade or two. The technology exists; the economic willpower does not.

https://www.bbc.com/news/world-43789527

> Potentially, basalt could solve all the world's CO2 problems says Sandra: "The storage capacity is such that, in theory, basalts could permanently hold the entire bulk of CO2 emissions derived from burning all fossil fuel on Earth."

Having said all of that, this is likely the most dystopian option. It's the "tech bails us out, yet again" solution because we could deploy it thoroughly enough that we can solve climate change without addressing any of the existential issues that got us here. The right combination of corporate+government partnership commercializing this technology and making it mandatory is a very plausible way to arrive at "there's 4 corporations on Earth that run the show" a la Aliens.


It's very much the wrong time to scale carbon capture. Doing some pilot plants for research is a good idea, but if your goal is to see the effects on the global charts, you should be working on something else.

There's a sibling comment with the long-form reasoning. The problem is that we are pushing a lot of new carbon into the atmosphere; you just won't be able to scale anything enough, and there's a really big opportunity cost to trying to push the tide away.


Carbon capture is probably the only geoengineering thing you could do that isn’t going to be massively controversial. Probably not practical though.

The other options mentioned, like messing with the atmosphere to make it reflect more heat into space, will likely cause wars due to lack of global consensus.


> cause wars due to lack of global consensus

Who would attack whom? Let’s say we are putting calcium carbonate into the upper Arctic atmosphere to stop Greenland melt.

Who attacks?


These changes (very likely) cause major changes in precipitation. So whoever all of a sudden gets a drought and famine.


Aren’t we facing major changes in precipitation anyway (due to global warming), and by preventing global warming, wouldn’t we have fewer changes?

TBH, I don't understand the anti-geoengineering logic…


Sure, but those ‘just happen’. If you start intentionally changing things, that next famine (or set of floods) has your name explicitly on it.

At some point countries will just start doing it out of desperation, but folks are rightly nervous to be the first.


Maybe not attacks but... "Congratulations you've won 100M+ climate refugees!"


I think you don’t understand the true scale of the problem. Just the additional fossil carbon being put into the atmosphere by the US alone is trillions of kg/yr.

Not only is there no way to hide trying to do something about it at that scale, there is no single site (or even a handful of sites) that could handle that amount of sequestration - we’re talking hundreds.

And even Elon Musk could not afford it, even if he dumped everything he had into it.
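
To put rough numbers on that - a sketch where both inputs are assumptions: ~5 Gt/yr is roughly current US fossil CO2 emissions, and $25/tonne is an optimistic storage-only cost, with end-to-end costs (capture included) far higher:

    // Back-of-envelope on why a single fortune can't cover it.
    public final class ScaleSketch {
        public static void main(String[] args) {
            double usEmissionsKgPerYr = 5e12;  // ~5 Gt/yr, i.e. trillions of kg/yr
            double usdPerTonneStored  = 25.0;  // assumed, storage only, optimistic

            double tonnesPerYr = usEmissionsKgPerYr / 1000.0;
            double usdPerYr    = tonnesPerYr * usdPerTonneStored;

            System.out.printf("~%.1e t/yr at $%.0f/t is ~$%.0f billion per year, every year%n",
                    tonnesPerYr, usdPerTonneStored, usdPerYr / 1e9);
        }
    }

That works out to something like $125 billion a year for storage alone, before capture - an annual bill, not a one-time purchase.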


No, but you could do enough of it in secret with Elon Musk resources to prove that it's both planetarily viable and doesn't cause catastrophes by existing, and then lend your political weight to having it scaled up globally. By the time the public heard about it, it would already be a done deal.

I think you could prove it out at a scale that people could measure on planetary CO2 sensors for a couple dozen billion dollars, then take that data to a sitting POTUS you're friendly with and work out a multi-trillion-dollar commercialization plan, using the USA's global bullying power to immediately establish a global monopoly.

A particularly cynical view would be this CEO buying global laws that dictate carbon neutrality while simultaneously making it impossible to achieve without his CCS. Then merely canceling a sales contract topples a regime, and you've arrived at a global corporatocracy.


Mind doing some math and showing your work?


> > No, but you could do enough of it in secret with Elon Musk resources to prove that it's both planetarily viable and doesn't cause catastrophes by existing, and then lend your political weight to having it scaled up globally. By the time the public heard about it, it would already be a done deal.

> Mind doing some math and showing your work?

I don’t see how anyone could spend tens or hundreds of billions of dollars in secret, so I’m not sure how important it is to show their work. I found the premise a bit absurd.


Hell, I just want to see the math on how much they think it would cost.


Along those same lines, I just want to know what vendors they'd suggest using that could keep a billion-dollar secret.


> HTML purists who do things without JS. They are the real web developers.

I don't think using one set of technologies as compared to another can really be said to make one a "real" web developer. Real web developers are developers who put sites on the web; there is no benefit to anyone in claiming one choice is "real" and the other choices are therefore lesser-than.

Put it this way: whatever set of constraints you used to arrive at that decision does not apply to every situation, and when you frame things through a lens that implicitly disregards that oh-so-obvious truth, it's hard for anyone to interpret your analysis as anything but myopic in the best case and actively self-serving and destructive in the worst. It's nearly impossible to read someone speaking this way about a topic and believe their analysis is objective, comprehensive, or without obvious bias, even if it may actually be all of those things.


I remember back in the before times... when escape analysis debuted for the JVM - which allows it to scalar-replace or stack-allocate small, short-lived objects that never escape local scope, and therefore bypass GC altogether - our Spring servers spent something like 60% less time in garbage collection. Saying enterprise software allocates a ton of short-lived objects is quite an understatement.
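
For anyone who never saw why that mattered, here's a minimal sketch of the kind of allocation escape analysis can eliminate. Point is a made-up class, and whether the JIT actually scalar-replaces it depends on inlining and other heuristics, so treat it as illustrative rather than guaranteed:

    // With -XX:+DoEscapeAnalysis (on by default for many years), the JIT can
    // prove `p` never escapes distanceSquared() and scalar-replace it: its
    // fields become plain locals and the heap allocation disappears entirely.
    final class Point {
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }

    public final class EscapeDemo {
        static double distanceSquared(double x, double y) {
            Point p = new Point(x, y);     // candidate for scalar replacement
            return p.x * p.x + p.y * p.y;  // no reference ever leaves this frame
        }

        public static void main(String[] args) {
            double acc = 0;
            // Tens of millions of "allocations" that may never touch the heap.
            for (int i = 0; i < 50_000_000; i++) {
                acc += distanceSquared(i, i + 1);
            }
            System.out.println(acc);
        }
    }

Multiply that pattern across every iterator, boxed primitive, and builder in a Spring request path, and the GC-time numbers start to make sense.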


You don't even need an AppleTV. My Roku and Amazon TVs were both zero-configuration AirPlay targets. And generally speaking, there's not any issue with wifi streaming video - there's a noticeable input lag, but it doesn't desync audio, and videos tend not to be interactive.


Yup. I'm introducing my sister to the masterpiece that is Chrono Trigger by playing an emulated version on my Mac streamed to our Roku TV. Works great. Video is even easier.


I don't think it's necessarily any larger of a leap than any of the other big breakthroughs in the space. Does writing safe C++ with an LLM matter more than choosing Rust? Does writing a jQuery-style Gmail with an LLM matter more than choosing a declarative UI tool? Does adding an LLM to Java 6 matter more than letting the devs switch to Kotlin?

Individual developer productivity will be expected to rise. Timelines will shorten. I don't think we've reached Peak Software, where the limiting factor on software being written is demand for software; I think the bottlenecks are expense and time. AI tools can decrease both of those, which _should_ increase demand. You might be expected to spend a month outputting a project that would previously have taken four people that month, but I think we'll have more than enough demand increase to cover the difference. How many business models in the last twenty years that weren't viable would've been if the engineering department could have floated the company to Series B with only a half-dozen employees?

What IS larger than before, IMO, is the talent gap we're creating at the top of the industry funnel. Fewer juniors are getting hired than ever before, so as seniors leave the industry through standard attrition, there are going to be fewer candidates to replace them. If you're currently a software engineer with 10+ YoE, I don't think there's much to worry about - in fact, I'd be surprised if "was a successful software engineer before the AI revolution" doesn't become a key resume bullet point in the next several years. I also think that if you're in a position of leadership and have the creativity and drive to make it work, juniors and mid-level engineers are going to be incredibly cost-effective, because most middle managers won't have those things. And companies will absolutely succeed or fail on that in the coming years.


Tailwind might not be the most perfect fit, but it's "just" CSS.


And Tailwind v4 is notably better than v3 in terms of being "CSS first": https://tailwindcss.com/blog/tailwindcss-v4


Where was this line of thinking when it was Obama ordering the DEA to not enforce marijuana laws? Where is this line of thinking when it's a city that chooses not to enforce dog breed restrictions?

The enforcement of law being separate from the passage of law is a key plank in a functioning democracy, it's one of the safety valves against tyranny.


I doubt those events made it to HN, and the questions are obviously from people outside the US who thought that 'Supreme' means 'Supreme'.


Trump has a history of accepting bribes, and that history is very relevant here. Let me know if the Cleveland mayor is accepting bribes for pitbulls.


While I find it entirely plausible that Trump's character is such that he might accept bribes, I am aware of no credible evidence that he has ever done so.


Companies spending a lot of money at a Trump property and then being granted contracts or favorable legislation is a bribe in my eyes.


And for every video of quality on the platform, there's one that's blatant political propaganda, one that's blatant conspiratorial misinformation, one that's sexualizing children, etc.

It's a mixed bag. It has no more to offer than any other social network. Less, some might argue, because of how easy it is to crosspost to the other video networks.

The only way this is different from the loss of other social networks - Vine most closely - is that the government is shutting down the site and collapsing the ecosystem rather than private equity.


I think you'll find most people in leadership positions at most companies are not that forward-thinking, proactive, or frankly intelligent. I thought cost-benefit and risk were analyzed on most big-company decisions, until I sat in rooms at a Fortune 500 where those decisions were getting made. If you assume that everyone everywhere is doing just barely the minimum to not get fired, you'll be right more often than not.


Career risk is also a very real motivation. If you are an executive at a company whose competitors are jumping on the AI bandwagon, but you are not, you will have to justify that decision towards your superiors or the board and investors. They might decide that you are making a huge strategic blunder and need to be replaced. Being proven right years later doesn't do much for you when you no longer have a job. And if you were wrong, then things look even worse for you. On the other hand, if you do get on the bandwagon yourself, and things go sideways, you can always point to the fact that everyone else was making the same mistake.

