We see this in our open-source community. We've had a community channel for over two decades, where community members help newcomers and each other solve problems and answer questions.
Increasingly we have people join who tell us they've been struggling with a problem "for days". Per routine, we ask for their configuration, and it turns out they've been asking ChatGPT, Claude or some other LLM for assistance and their configuration is a total mess.
Something about this feels really broken, when a channel full of domain experts is willing to lend a hand (within reason) for free. But instead, people increasingly turn to the machines, which are well known to hallucinate. They just don't think the machines will hallucinate for them.
In fact I see this pattern a lot. People use LLMs for stuff within their domain of expertise, or just ask them questions about washing cars, and they laugh at how incompetent and illogical they are. Then, hours later, they will happily query ChatGPT for mortgage advice, or whatever. If they don't have the knowledge to verify it themselves, they seem more willing to believe it is accurate, when in fact they should be even more careful.
> In fact I see this pattern a lot. People use LLMs for stuff within their domain of expertise, or just ask them questions about washing cars, and they laugh at how incompetent and illogical they are. Then, hours later, they will happily query ChatGPT for mortgage advice, or whatever. If they don't have the knowledge to verify it themselves, they seem more willing to believe it is accurate, when in fact they should be even more careful.
The AI companies have taken all the wrong lessons from social media and learned how to make their products addictive and sticky.
I’m a certified hater, but even I’ve fallen into the exact trap you’re describing. Late last year I was in the process of buying a house that had a few known issues, with a 30-day close. I had a couple of sleepless nights because I had asked ChatGPT or Claude about some peculiar situation, and the bots would tell me that I was completely screwed and advise me to get out of the contract or draft a letter to the seller begging for some concession or more time. Then the next day I’d get a call from the mortgage guy or the attorney or the insurance broker, and it turns out the people who actually knew what they were doing fixed my problem in 5 minutes.
This _is_ all true, but what's also true is that there's a historical pattern (in many communities) of "n00bs" not being, or at least not _feeling_, welcome. So, I can't say I blame people for spinning in circles with LLMs instead of starting with forums or mailing lists where they may be shamed or have their questions closed immediately as "duplicate" or "off-topic" (e.g. SO).
I think if we want newcomers to lead with human interactions, the onus is on us community leaders/elders/whatever to be a little warmer, more understanding, and more forgiving. (Of course, some communities and venues are already very good about all of this, and I'm generalizing to make the larger point.)
Personally, this type of behavior played a large part in why I left two OSS communities.
A lot of the passersby nowadays feel like trolls. They come in copy-pasting ChatGPT responses and spamming that they need help, instead of chit-chatting and asking questions. We fix their problems, and they don't trust us or understand at all. Or worse, we tell them their situation is unreasonably bad and they should start over, and they scream at us about how some unimaginably bad code passes tests and compiles just fine and how we are dumb.
They tell us we don't need to exist anymore, in one way or another. They show off terrible code, we try to offer real suggestions to improve it, and they don't care. Then they leave the community once their vibe/agentic coding moves on from that part of their code base. Complete waste of time: they learned nothing, contributed nothing, no fun was had, no ah-hahs, just grimy interactions.
I’m subscribed to a couple of mailing lists and follow the archives of a few others. I wonder if the friction associated with the medium is why I haven’t seen those shenanigans there?
People are losing their ability to reason without prompting an LLM first.
It's affecting their ability to collaborate. They retain the confidence of years of experience, but their brain isn't going through the appropriate process anymore to check their assumptions.
I've seen a similar thing happen to engineers who move into management, but this is now happening at such a large scale.
> if I’m asking the AI a question, that’s replacing a human interaction I would have had with a coworker.
Importantly, you're removing a signal: if I'm not asked things anymore, I don't know which aspects of our domain cause the most confusion and misunderstanding, and would therefore benefit most from having their boundaries simplified.
I am rereading the Asimov robot novels. A decrease in human-to-human interaction is a major side effect that he foresaw. Decreasing interaction and collaboration are some of the core themes.
At the end of the day chatgpt won't be there to hold our hands in the hospital, have a laugh over failing to pick up a date, get invited to a bbq, groan over the state of the code in utils.c, or recommend us for our next job/promotion. They say software is social for a different reason than most of these examples.
It's good to be efficient, whatever that means, but there are no metrics on the gains that get made by talking to people. In a lot of ways those gains are what life is about.
Yes. 100%. ChatGPT can't get drunk with you, share personal experiences, grill food for you, or network with humans for you. At some point certain people have to choose to live a life; otherwise, why have one anyway?
I think you are right, but it also makes sense. Human communication is inherently inefficient.
Points of view, miscommunication, interpretation... It's the obvious point to automate.
Not defending it, just my thoughts
I have a couple of colleagues that run all communication through an LLM. It really helps their writing, but it does nothing to help their understanding.
It also makes me hate communicating with them because they'll (somewhat obviously) prompt the LLM to make the conclusion they want. For example, "respond to this jira with why this isn't an issue"
It’s really really inconsistent. Sometimes select all is available, sometimes not. Sometimes the handles don’t work. Selecting text in a scrollable region is fiddly, etc.
I’ve seen an insane drop in the quality of swipe typing recently as well. To the point where I’ll often go back to regular typing. I’ve made maybe six or more corrections just to this paragraph alone.
I think swipe typing, when proposing words that might match the letter sequence you swiped, suggests words inconsistent with any higher-level language model, even simple word tuples.
and it drives me crazy too.
I've just had good luck it seems with text select.
Have you found any way to do a Find within a span of text on iOS? That would be very useful, but I haven't seen it.
Will drop this here in case you’re not aware of it (but I’m guessing you probably are), sorry if a bit off-topic.
I’m low-vision and made great use of Microsoft Soundscape until it got discontinued. I’d been waiting for an alternative for ages and didn’t realise one actually got released and is on the app store!
I absolutely LOVE Voice Vista! It is an amazing bit of software. I wasn't able to use Soundscape when it first came out because it was never made available in my region, but VV is, and I would never want to miss it anymore when traveling. I love it. A lot.
This is actually one thing I think will be great as AI coding agents get better. Companies whose main expertise is hardware might start producing better software.
There are so many little bugs in consumer-facing apps that hit the ‘sweet spot’ of being incredibly annoying little issues that just aren’t worth putting an engineer on for a week to fix, but which are totally worth having an engineer throw an agent at.
I find that the code AI likes to write checks for “errors” too often, even in cases where you wouldn’t want to. You don’t need to check every dictionary access and come up with a default value, for example.
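For example, a minimal sketch in Python (the config dict and keys are purely hypothetical, not from anything above):

    # Hypothetical config dict, purely for illustration.
    config = {"host": "localhost", "port": 8080}

    # The over-defensive style AI tends to produce: every access gets a
    # fallback, so a missing or misspelled key silently turns into a default.
    port = config.get("port", 0)
    host = config.get("host", "")

    # Often the plain access is what you actually want: if "port" is missing,
    # the KeyError points straight at the real bug instead of hiding it.
    port = config["port"]
    host = config["host"]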
This is actually one thing I think will be great as AI coding agents get better. Companies whose main expertise is code reviewing might start producing better software.
Curious as an outsider what you mean with US politics? Seems like Apple has a pretty strong stance when it comes to things like privacy that pushes back on some things (that could be smoke and mirrors though I guess).
The privacy is more of a market position thing than it is a political thing.
Apple has led the industry on hardware but is woefully behind on the software and services front. Focusing on device-level privacy controls turns what would be a gap into a moat, and it helps deprive Google and other services of the ability to monetize their customer base.
Not to say that it's not something the company is passionate about, but it's also good for their business. Especially when you compare it to things like human rights, transparency, and security research, where Apple could take a stronger stand but doesn't.
> The privacy is more of a market position thing than it is a political thing.
It is a market position, but companies do have some choice in which market positions they choose to take. And I wouldn't underestimate the effect of the personal views of the CEO in that.
If you’re referring to their AI services being ‘woefully behind’, that’s just a market sector that they’ve chosen not to focus too much effort on. That was a sensible gamble too, given how unpredictable that sector is five years after it was released.
I’m not sure what else they are behind on frankly, as their current offerings have been extremely stable from day dot.
How many products has Google released and killed in the past 20 years? Apple managed to land on a good thing with iTunes and iPhoto in the early aughts, and managed to transition those core services into Apple Music and iCloud with little to no disruption to users. iCloud is generally a pretty predictable service that delivers on a core set of user requirements very well.
Also, their productivity suite isn’t meant to completely replace Office, and for a free package, it meets many users’ needs perfectly fine.
> That was a sensible gamble too, given how unpredictable that sector is five years after it was released.
Define sensible. Apple's B2C margins are peanuts compared to what Nvidia's commanding right now, and they're both ARM retailers competing for the same cutting-edge fab space.
Are you referring to any security features in particular? There's a new zero-click exploit every 6 months for iOS, and NSO Group is showing no signs of slowing.
“Capitulating to the current regime on everything is in shareholders’ best interests” is neither a foregone conclusion nor a statement of fact. It’s economic myopia at best.
Let me be clear - I'm not happy about it. But ignoring such a reality reminds me of that quote comparing Jobs' best friend to a lawnmower.
That said, I'd love to be enlightened as to how it's myopic, or rather, what course(s) of action you would take, keeping in mind that Apple is a multi-trillion-dollar public company.
I’m telling you that thinking a->b is myopic. It could be that shareholder value would’ve been higher had Tim Cook told Trump (or Biden, or Trump, or Obama) to go fuck himself. Perhaps the people who spend money on iPhones, specifically, would’ve been more inclined to buy a new iProduct, than they are now that he’s bent the knee.
Myopia is thinking “well he did it so it must have been good”. There are myriad other things he could’ve done, that have a strong argument towards higher shareholder value.
Edit to add: Think TSLA, if you want a concrete example. If that stock was at all trading on fundamentals (and if they had a remotely capable or competent board) and not Magic Memes, Musk’s hard right pivot was inarguably bad for the brand and shareholder value, even if it made the President temporarily happy.
Given that Apple is doing well, the onus is on anyone claiming that Apple would have done better to make a strong argument for it.
Not "could" have done better, because things could obviously have gone better, worse, or anything else, given any substantive or random difference. Could means nothing.
(And I say this as someone very disappointed with how Cook handled that.)
Ah, "If you can't definitively and completely prove a negative then you're wrong (but also I'm like, totally not carrying water for those people)" is definitely not a weak opinion, though.
That said, maybe you should read the discussion a bit more carefully before jumping in with "OMG PROOOOOOF" or whatever the fuck this was supposed to be? The entire, plain English discussion, revolved around one thing not being the only possible "fact" just because it happened. None of the posts were particularly long, and none used challenging words.
My point isn’t that anyone’s view is wrong. I can’t make that claim either.
I hate what Cook did.
I would be happy and open to anyone who can point out how Apple was supposed to handle the actual threat of major tariffs in their components and systems better than he did.
But simply asserting a counterfactual, a plausible way it might have been better, isn’t that. What would Cook be expected to do with that?
But what?
Not dismissing that there was a better way. There must be. It’s very worthwhile figuring out, even as a counterfactual. That’s how we all learn.
Not judging anyone. My answer is just as weak, or even weaker! I have really thought about this too, and come up with nothing so far.
(I appreciate and take note that my comment didn’t communicate my point well enough. It’s important to recognize weak reasoning. But that wasn’t meant to discourage, or show a lack of respect for another person’s efforts. I want a better answer too.)
> Myopia is thinking “well he did it so it must have been good”.
You're writing words that I did not say or imply.
The point is that going against any (current) admin is almost always bad for a publicly traded company. Any public entity is going to need extremely good reasons to "fight back", and a case for how doing so is good for business. As the CEO of such an entity, you have to answer to many people who want a concrete plan and a belief in your strategy.
In the first rodeo, when all this was novel, it was believed such social signaling would pay off. Obviously Silicon Valley as a whole no longer feels this way.
TSLA is an outlier, being grounded more in some superior-man theory that Apple did have in the past with Jobs, who is no longer there. Religious-fervor stuff. It doesn't really apply. Rational moves here, please.
> There are myriad other things he could’ve done, that have a strong argument towards higher shareholder value
This is what I asked you to expound on. Please state a few.
Most shareholders may not care beyond the next quarter, but the CEO actions that led to those results were taken at least a couple of years ago, and current actions will do as much to determine not the next quarter but one slightly further in the future. Hence Jamie Dimon, for example, making a different decision in a similar matter. As Dimon explained: “[…] we have to be very careful about how anything is perceived, and also how the next DOJ is going to deal with it. So, we’re quite conscious of risks we bear by doing anything that looks like buying favors or anything like that”[1].
It's less than the other tech CEOs who seem to evade criticism on HN. Elon literally worked for Trump, accomplished nothing, and ended up just leaking everyone's social security data. Thiel and Palantir are profiting from war and building out the surveillance state. Bezos made a $75M documentary about Melania. Larry Ellison took over TikTok US to squelch any criticism of US and Zionist war atrocities.
Depending on who you talk to, this could go either way. Some people want big companies to champion their own political ideals on a larger stage and think Apple should do more. Others would say Apple should stay out of it, after things like their gift to Trump[0], for example.
For me at least I always remember it being referred to as 16-bit, in all the gaming and computer magazines etc. Part of the 16-bit home computers; I remember the Atari ST being referred to that way as well.
I don’t remember seeing references to 32-bit until the 386/486 days on the home computer side and Sega 32X on the console side.
It’s so strange. I think there’s a few different groups:
- Shills or people with a financial incentive
- Software devs that either never really liked the craft to begin with or who have become jaded over time and are kind of sick of it.
- New people that are actually experiencing real, maybe over-excitement about being able to build stuff for the first time.
Forgetting the first group as that one is obvious.
I’ve encountered a heap of group 2. They’re the ones sick of learning new things, for whatever reason. Software work has become a grind for them and vibe coding is actually a relief.
Group 3 I think are mostly the non-coders who are genuinely feeling that rush of being able to will their ideas into existence on a computer. I think AI-assisted coding could actually be a great on-ramp here and we should be careful not to shit on them for it.
You’re missing the group of high performers who love coding, who just want to bring more stuff into the world than their limited human brains have the energy or time to build.
I love coding. I taught myself from a book (no internet yet) when I was 10, and haven’t stopped for 30 years. Turned down becoming a manager several times. I loved it so much that I went through an existential crisis in February as I had to let go of that part of my identity. I seriously thought about quitting.
But for years, it has been so frustrating that the time it took me to imagine roughly how to build something (10-30 minutes depending on complexity) was always dwarfed by the amount of time it took to grind it out (days or sometimes weeks). That’s no longer true, and that’s incredibly freeing.
So the game now is to learn to use this stuff in a way that I enjoy, while going faster and maintaining quality where it matters. There are some gray beards out there who I trust who say it’s possible, so I’m gonna try.
Good point and I’m exactly at the same point as you with this. Working on letting go of the idea (and to be honest just the habit) that it’s somehow ‘cheating’ at the moment.
Not a troll. I’ve been doing a lot of self reflection on this topic lately. Some people seem to enjoy software for the act & craft, where the outcome / artifact is secondary or irrelevant. I don’t. Some people enjoy the artifacts it produces, for their utility or economic value. Not really me either. Often people frame it as this dichotomy, but I’ve realized my enjoyment and self-fulfillment comes from creating an artifact that is genuinely good and that I can be proud of creating. Too much AI robs me of this. I’ve created cool stuff with AI that leaves me feeling nothing because I didn’t really create it.
This is all valid. Your original comment came across as a troll because it implied that nobody could ever feel good about stuff they built with AI. Asserting that you know more about the emotional state of strangers on the internet than they know themselves is arrogant.
Well, it’s a genuine question. Like, if I have a machine in my house where I give it a recipe and it spits out the food, should I feel good about having “cooked” that food? Or what if someone prompts an AI for some art, should they feel proud of “creating” that art? I think not. And it’s the same with code. Depending on how much of the work you actually did should influence how you talk and feel about a creation. So many people lazily prompt an AI and then come here to post about something they “made” and I think that’s wrong.
I’m thinking there’s probably degrees to it. Like there is some stuff I absolutely want to hand craft, but then other stuff I don’t mind so much.
One of the interesting discussions at work (I’m in gamedev) has been about tooling and where AI fits in there.
Previously you’d spend sometimes significant time writing a tool, then polishing it up and giving it to the team (think things like editor extensions that make your workflow easier).
But AI can make this kind of bespoke tool dev so cheap now that it’s possible for every single dev to have their own tool that matches the way they work exactly. At that point, do you really need to spend the long 80% effort of polishing and getting it ready for mass consumption?
Stuff like that is interesting. I still can’t imagine never looking at the AI-generated code, but I’ve seen people take the approach of “I’m not interested in the code, only in what the thing does. If it’s wrong, I ask the agent to fix it”.
Yes I'm exactly like you as well. I've been coding for 30+ years, I still love coding and system building etc, but sometimes the level of frustration to find the information and then get something working is simply too high.
Over a weekend, I used ChatGPT to set up Prometheus and Grafana and added node exporters to everything I could think of. I even told ChatGPT to create NOC-style dashboards for me, given the metrics I gave it. This is something that would have painstakingly taken several weeks if not more to figure out, and it's something I've been wanting to do, but the cognitive load and anticipatory frustration were too high for me to start. I love how it enables me to just do things.
My next step is to integrate some programs that I wrote that I still use every day to collect data and then show it on the dashboards as well.
On a side note, I don't know why Grafana hasn't integrated more deeply with AI. Having to sift through all the ridiculous metrics that different node exporters advertise, with no hint of a naming convention, makes using Grafana so much harder. I cut and pasted all the metric names and dumped them into ChatGPT and told it to make the panels I wanted (e.g. "Give me a dashboard that shows the status of all my servers", and it's able to pick and choose the correct metrics across my Windows server, MacBooks and Mac Studio, my Linux machines, etc.), but Grafana should have this integrated directly into the product.
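In case it's useful to anyone doing the same, a rough sketch of one way to dump every metric name for pasting into a chat (assuming a Prometheus instance at localhost:9090 and using its standard label-values API):

    # Rough sketch: dump all metric names Prometheus currently knows about,
    # so the list can be pasted into an LLM to pick out the relevant ones.
    # Assumes Prometheus is reachable at http://localhost:9090.
    import requests

    resp = requests.get("http://localhost:9090/api/v1/label/__name__/values")
    resp.raise_for_status()
    for name in sorted(resp.json()["data"]):
        print(name)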
> I’ve encountered a heap of group 2. They’re the ones sick of learning new things, for whatever reason.
I think it's easy to dismiss that group, but the truth is there was a lot of flux in our industry in the last decade before AI, and I would say almost none of it was beneficial in any way whatsoever.
If I had more time I could write an essay arguing that the 2010s in software development were the rise of complexity for complexity's sake, which didn't make solving real-world problems any easier, often massively increased the cost of software development and, worse, the drudgery, with little actually achieved.
The thought leaders were big companies who faced problems almost no-one else did, but everyone copied them.
Which led to an unpleasant coding environment where you felt like a hamster spinning in a wheel, constantly having to learn the new hotness, or be branded a dinosaur, just to keep doing what you could already do.
Right now I can throw a wireframe at an AI and poof, it's done: React, Angular, or whatever who-gives-a-flying-sock next stupid JavaScript framework, it's there. Have you switched from webpack to vite to bun? Poof, AI couldn't care less; I can use whatever stupid-acronym command-line tool you've decided is flavour of the month. Need to write some Lovecraftian-inspired yaml document for whatever dumbass deploy hotness is trending this week? AI has done it, and I didn't have to spend 3 months trying to debug whatever stupid format some tit at Netflix or Amazon or Google or Meta came up with because they literally had nothing better to do with their life, or bang my head against the wall when it falls over every 3 weeks while management insist that k8s is the only way to deploy things.
That in itself feels like second-system syndrome but instead of playing out over a single software project it’s the large-scale version playing out over the entire industry.
> I’ve encountered a heap of group 2. They’re the ones sick of learning new things, for whatever reason.
I say this kindly, but are you sure that _you_ aren't the one in group 2, and _they_ aren't the ones learning new things?
A lot of the discourse around ai coding reminds me of when I went to work for a 90s tech company around 2010 and all the linux guys _absolutely refused_ to learn devops or cloud stuff. It sucks when a lifetime of learned skills becomes devalued over night.
That’s pretty fair, I’m currently in the “trying to get over the feeling that it’s cheating” phase and also just haven’t formed the habit yet of reaching for AI as a tool in my toolbox; particularly in things like pre-review AI-assisted code review, which I’ve found really useful but sometimes don’t think of doing when I could.
I don’t think that is true. I know several very high-performing engineers (some who could have retired a long time ago and are just in it for the love of the game) who use AI prolifically, without lowering any bars, and just deliver a lot more work.
EDIT: Sorry I realised you were asking more about categorisation and not downloading.
——
The closest thing I can think of is Tube Archivist, which seems made for archiving large YouTube collections, including things like comments on videos.
I’ve had mixed luck with it, and it’s a bit too heavy for my fairly limited needs. Youtube-dl hasn’t worked for me for the last month or so on it; oddly enough, I have a MeTube instance on the same physical machine (different VM), which is a lighter web UI for yt-dlp and which is still working fine. That’s YouTube’s fault, I assume, and not the fault of Tube Archivist.
The thought crossed my mind the other day — if I’m asking the AI a question, that’s replacing a human interaction I would have had with a coworker.
It’s not just in coding, it’s everything. With ChatGPT always available in your pocket, what social interactions is it replacing?
The thing that gets me is, we are meant to fundamentally be social creatures, yet we have come to streamline away socialisation any chance we get.
I’m guilty of this too — I much prefer Doordash to having to call up the restaurant like in the old days, for example.