Hacker News | slfnflctd's comments

Let's not forget that Waymo requires an extensive, custom mapping and software/pre-training development process for every new city it operates in, is in only 10 cities after more than 20 years, and is still nowhere near profitability (or even a clear plan to get there, as far as I can tell).

I personally believe widely available self-driving cars which don't operate at a loss will continue to elude us until we accept the tradeoffs of dedicated lanes, a standardized vehicle-to-vehicle communication protocol, and roadside sensors. We were lied to.


For a fraction of the cost of developing self-driving cars we could have self-driving trains/trams/subways and most likely minibuses as part of public transportation networks.

And self-driving minibuses would provide perhaps 95% of the benefits of self-driving cars. They could offer 24/7 frequent service with huge coverage, we already have dedicated bus lanes in many places (and we could scale those much faster than dedicated self-driving car lanes), etc.

Now, I understand that in many places (especially the US) this is infeasible because public anything = communism.


Folks in the US are happy to spend tax dollars on roads, it's just that mass transit spending is considered communism.

To be fair to the anti-train crowd, we've been led so far down this disastrous path of car-led sprawl that even feasible bus service reaching into the byzantine suburbs is unlikely.

So, maybe our best hope is self-driving EVs? At least in our lifetimes.


> my nerdy colleagues were going wild with home automation stuff [...] I wanted to play with it too [...] these guys weren't spending less time than me turning on their lights

Yep. The IoT home automation stuff is still less performant than much older wired solutions, where whole systems were designed at once in set-and-forget mode and didn't have weird sync issues or delays. I remember seeing the 'home of the future' exhibit at Epcot 20+ years ago, and these IoT setups are often still a total joke in comparison because of all the protocol issues and fiddling with various interfaces.

Just like how the analog wired POTS phone systems were more performant in many ways than pretty much any IP based voice setup.

I simply got tired of messing with stuff that kept breaking in unexpected ways. It wasn't saving time; it was adding a lot of totally unnecessary stress and actually taking time away from me, for little more than an occasional spark of novelty. Being able to use voice accurately and repeatably for simple task requests is probably the only standout advancement.

My 'nerdy colleagues' and I can get a lot of enjoyment out of tinkering with this new agentic hotness. However, I think very few of us are getting something that actually saves us time in the long run (at least in our personal lives), and it's going to take a while to figure out what's realistically reproducible toward that end at a reasonable cost.


IoT was an absolutely terrible fit for the home space. My parents have light switches in their house that were installed in the 1940s. They still work just as well. Getting IoT home automation gear to last like that is very difficult, yet it seems to be the first model everyone reaches for when talking about it. And if they had to replace those switches, it wouldn't cost much to do, either.

IoT really comes into its own when you pair it with something that is a real pain to get to. Think somewhere you need a crane lift and a four-hour drive just to touch the 20-year-old computer something is hooked up to, or basically anywhere that takes hours to reach. The space my company typically targeted was high-rise air-conditioning companies, or companies that contract out any sort of PLC work to a third party. At that point, the savings of looking at a computer versus rolling a technician out there has the thing pay for itself in one or two trips. There's also the ability to show up on site with the correct parts; that alone was a huge savings.

IoT's big issue is that you have to beat many things that are already dead simple to do.


There are too many things that can go wrong; you should never look away from the road for more than a second or two.

Adaptive cruise control, lane keeping, blind spot detection and emergency braking are all the modern automation I want in a personal vehicle at this point. Other drivers are unpredictable; I want to choose how I respond to their various forms of idiocy, not delegate to a black box.


Agreed. I came to the conclusion years ago that we are going to keep having the same problems until we engineer our way out of them by altering our biology, either genetically or with implanted augmentation devices (or both).

I have yet to see a convincing counter argument to this hypothesis.


Agreed, hard to imagine a different outcome with the same starting parameters.


> There just happened to be a whacko that got into the White House

My counter to this is that such an occurrence was increasingly likely starting around the time the massive US Evangelical base was essentially fully captured by (and became a wing of) the Republican party. It was more and more obvious over a period of at least 40 of those 60 years you mention.


There's a kid outside the window of the place I'm staying who's been in the yard playing and talking with people online through his VR headset for like 2+ hours. He's living in the future. Whatever happens, he and his friends are going to continue to be interested in more of this.

Whether what they're using in 20 years is produced by the company formerly known as Facebook or not is a whole different question.


You jest, but it's a good question.

When people talk about the 'plateau of ability' agents are widely expected to reach at some point, I suspect a lot of it will boil down to skyrocketing costs and plummeting accuracy past a certain number of agents involved. This seems to me like a much harder limit than context windows or model sizes.

Things like Gas Town are exploring this in what you might call a reckless way; I'm sure there are plenty of more careful experiments being conducted.

What I think the ultimate measure of this new tech will be: how simple a question can a human put to an LLM group, for how complex a result, and how much will they have to pay for it? It seems obvious to me that there is a significant plateau somewhere; it's just a question of exactly where. Things will probably be in flux for a few years before we have anything close to a good answer, and it will probably vary widely between use cases.


Human journalists and marketing copywriters have been writing like this for at least 50 years, if not considerably longer.

I am exhausted by so many people calling writing out as AI without sufficient proof other than writing style. Some things are more obvious, sure... maybe I'm just too stupid to see a lot of the rest of it? But so much of what gets called out seems incredibly familiar to me compared with traditional print media I've been reading my entire life.

I'm starting to wonder if a lot of people just have poor literacy skills and are knee-jerk labeling anything that looks well written as AI.


I think one factor is the lack of variation. Sure, a copywriter might use those techniques as a hook, but there’s far more content using them paragraph after paragraph after paragraph than I’ve ever seen before.

You might also reframe how you read those comments. Perhaps when people are labeling a piece as “written by AI,” they’re just conveying that they perceive it to use the same “voice” that LLMs use, and judge that voice negatively. Sometimes people say things non-literally and don’t need proof.


You're right that (some) marketing copywriters have been writing in this style for decades, but suddenly every second tech blogger has adopted the same voice in the past two years. Not everyone is as sensitive to it. I read this crap daily, so I've developed an awareness, and I'm confident in calling it out.

I don't think I've personally seen a single false positive on HN. If anything, too much slop goes through uncontested.


> If anything, too much slop goes through uncontested.

It's actually insane opening up /r/webdev and similar subreddits and seeing dozens of AI authored posts with 50+ comments and maybe a single person calling it out. Makes me feel crazy. It's not as much of a problem here, but there is absolutely a writing style that suddenly 50% of submissions are using. It's always to promote something and watching people fall for it over and over again is upsetting.


> almost as bad as when LLMs link things to prove their point, you visit the link, and find it says nothing of the sort or even the opposite

To be fair, they got it from us. This happened to me plenty of times long before modern LLMs.


It learned by reading HackerNews, after all.


> The implications of a machine that can approximate or mimic human thinking are far beyond the implications of a machine that can approximate or mimic swimming

It seems to me like too many people are missing this point.

Modern philosophy tells us we can't even be certain whether other humans are conscious or not. The 'hard problem', p-zombies, etcetera.

The fact that current LLMs can convince many actual humans that they are conscious (whether they are or not is irrelevant, I lean toward not but whatever) has implications which aren't being discussed enough. If you teach a kid that they can treat this intelligent-seeming 'bot' like an object with no mind, is it not plausible that they might then go on to feel they can treat other kids who are obviously far less intelligent like objects as well? Seriously, we need to be talking more about this.

One of the most important questions about AI agents in my opinion should be, "can they suffer?", and if you can't answer that with a definitive "absolutely not" then we are suddenly in uncharted waters, ethically speaking. They can certainly act like they're suffering (edit: which, when witnessed by a credulous human audience, could cause them to suffer!). I think we should be treading much more carefully than many of us are.


You lost me there. :)

The question of whether the current generation of "AI" can think, whether it is conscious, let alone whether it can suffer(!), is not even worth discussing. It should be obvious to anyone who understands how these tools work that they don't in fact "think", for even the most liberal definition of that term. They're statistical models that can generate useful patterns when fed with vast amounts of high quality data. That's it. The fact we interpret their output as though it is coming from a sentient being is simply due to our inability to comprehend patterns in the data at such scales. It's the best mimicry of intelligence we've ever invented, for better or worse, but it's far from how intelligence actually works, even if we struggle to define it accurately. Which doesn't mean that this technology can't be useful—far from it—but it's ludicrous to ascribe any human-like qualities to it.

So I 100% side with Dijkstra on that point.

What I'm criticizing is his apparent dismissal and refusal to even consider it a worthy philosophical exercise. This is why I think that the comparison to submarines and swimming is reductionist, and ultimately not productive. I would argue that we do need to keep thinking about whether machines can think, as that drives progress, and is a fundamentally interesting topic. It would be great if the progress wouldn't be fueled by greed, self-interest, and manipulation, or at the very least balanced by rationality, healthy skepticism, and safety measures, but I suppose this is just inescapable human nature.


> The question of whether the current generation of "AI" can think, whether it is conscious, let alone whether it can suffer(!), is not even worth discussing. It should be obvious to anyone who understands how these tools work that they don't in fact "think", for even the most liberal definition of that term.

While I agree with your second sentence here, the first one gives me pause. Why isn't it "worth discussing"? Do you refuse to engage in conversation with all mentally challenged people? Do you avoid all interactions with human children? There are many, many folks living their lives as fully as they can right now who are convinced these things are alive. There are ethical implications to that assumption regardless of whether the things are actually alive, especially when people respond to them as if they are.

We need to have better arguments and refine them for different audiences.

Are you aware of the concept of philosophical zombies? Some of the top minds on the planet are telling us they can't even determine if you or me are conscious and sentient, let alone if a machine is. On the other hand, some of those people's peers are arguing that weather patterns might be conscious (among even more extreme claims). From the standpoint of logic and reason being paramount, we cannot claim to know the answers to these questions. What we can do is discuss the ethical implications of various people coming to different conclusions about them.


> Why isn't it "worth discussing"?

Because it's obviously not true. The second sentence follows the first.

> There are many, many folks living their lives as fully as they can right now who are convinced these things are alive.

And those people are living in a delusion, whether it's self-imposed or the result of false advertising. The way you get them out of it is by rationalizing and explaining the technology in terms they can understand, not by mystifying it and bringing up existential topics.

> Are you aware of the concept of philosophical zombies?

I wasn't, no.

> Some of the top minds on the planet are telling us they can't even determine if you or me are conscious and sentient, let alone if a machine is.

Look, we can philosophize about the nature of existence until we're blue in the face. People have been pondering similar questions since the dawn of humanity. FWIW I don't believe in "top minds" as having the authority to tell us anything. What we know for certain is how the technology works, since we built it. And we damn well know that this technology has absolutely zero understanding of anything. Go ahead, ask it how it works. It will tell you that it doesn't understand a single word it's generating, but it sure can string together patterns that make it look like it does. And you think there's some deeper meaning here we should discuss seriously? Please.

Like I said, I think these are interesting thought experiments, and something we should keep thinking about. But it should be clear to anyone, especially technically minded people, that we're nowhere near being able to create artificial intelligence. What we have now are a bunch of grifters and snake oil salesmen selling us a neat statistical trick and telling us it's "AI". This should be criminally prosecuted, if you ask me.

