hollowturtle's comments | Hacker News

> Unsurprisingly, participants in the No AI group encountered more errors. These included errors in syntax and in Trio concepts, the latter of which mapped directly to topics tested on the evaluation

I'm wondering if we could have the best of IDE/editor features like LSP and LLMs working together. With an LSP, syntax errors are a solved problem, and if the language is statically typed I often find myself just checking the type signatures of library methods, which is simpler to me than asking an LLM. But I would love to have LLMs fix your syntax and, with types available or not, give suggestions on how to best use the libraries given the current context.

Cursor tab does that to some extent but it's not foolproof and it still feels too "statistical".

I'd love to have something deeply integrated with LSPs and IDE features. For example, VSCode alone has the ability to suggest imports; Cursor tries to complete them statistically but it often suggests the wrong import path. I'd like to have the two working together.

Another example is renaming identifiers with F2: it is reliable and predictable, and I can't say the same when asking an agent to do that. On the other hand, if the pattern isn't predictable, e.g. a migration where a 1-to-1 rename isn't enough and a pattern has to be found, LLMs are just great. So I'd love to have an F2 feature augmented with LLM capabilities.


I've found the AI assisted auto-completion to be very valuable. It's definitely sped up my coding and reduced the number of errors I make.

It reduces the context switching between coding and referencing docs quite a bit.


Have you read my comment or are you a bot?

This is what we're paying skyrocketing RAM prices for

We are living in the stupid timeline, so it seems to me this is par for the course

Don't think so, and we should stop spreading damaging narratives like this. I'd say it's already able to imitate this kind of explainer (badly) thanks to its training data. All the subtle teaching nuances, effort, know-how and visual creativity that people like Bartosz Ciechanowski put into this kind of work are not reproducible except by statistically imitating them

And the usual corollary: not just thanks to its training data, but because training data of that kind and for this kind of topic - still - exists.

Exactly, and him not publishing any new posts in 2025 makes me wonder...

Then you have Anthropic stating on its own blog that engineers fully delegate to Claude Code only 0 to 20% of the time https://www.anthropic.com/research/how-ai-is-transforming-wo...

The fact that people keep pushing figures like 80% is total bs to me


It’s usually people doing side projects or non-programmers who can’t tell the code is slop. None of these vibe coding evangelists ever shares the code they’re so amazed by, even though by their own logic anyone should be able to generate the same code with AI.

> Coding workflow. Given the latest lift in LLM coding capability, like many others I rapidly went from about 80% manual+autocomplete coding and 20% agents in November to 80% agent coding and 20% edits+touchups in December

Anyone wondering what exactly is he actually building? What? Where?

> The mistakes have changed a lot - they are not simple syntax errors anymore, they are subtle conceptual errors that a slightly sloppy, hasty junior dev might do.

I would LOVE to have just syntax errors produced by LLMs. "Subtle conceptual errors that a slightly sloppy, hasty junior dev might do" are neither subtle nor slightly sloppy; they are actually serious and harmful, and junior devs don't have the experience to fix them.

> They will implement an inefficient, bloated, brittle construction over 1000 lines of code and it's up to you to be like "umm couldn't you just do this instead?"

Why not just hand-write 100 LOC with the help of an LLM for tests, documentation and some autocomplete, instead of making it write 1000 LOC and then cleaning it up? That's also very difficult to do; 1000 lines is a lot.

> Tenacity. It's so interesting to watch an agent relentlessly work at something. They never get tired, they never get demoralized, they just keep going and trying things where a person would have given up long ago to fight another day.

It's a computer program running in the cloud, what exactly did he expect?

> Speedups. It's not clear how to measure the "speedup" of LLM assistance.

See above

> 2) I can approach code that I couldn't work on before because of knowledge/skill issue. So certainly it's speedup, but it's possibly a lot more an expansion.

mmm not sure. If you don't have domain knowledge you could have an initial stab at the problem, but what about when you need to iterate on it? You can't if you don't have domain knowledge of your own

> Fun. I didn't anticipate that with agents programming feels more fun because a lot of the fill in the blanks drudgery is removed and what remains is the creative part.

No, it's not fun; e.g. LLMs produce uninteresting UIs, mostly bloated React/HTML

> Atrophy. I've already noticed that I am slowly starting to atrophy my ability to write code manually.

My bet is that sooner or later he will get back to coding by hand for periods of time to avoid that, like many others; the damage that overreliance on these tools brings is serious.

> Largely due to all the little mostly syntactic details involved in programming, you can review code just fine even if you struggle to write it.

No, programming is not "syntactic details"; the practice of programming is everything but "syntactic details". One should learn how to program, not language X or Y

> What happens to the "10X engineer" - the ratio of productivity between the mean and the max engineer? It's quite possible that this grows a lot.

Yet no measurable economic effects so far

> Armed with LLMs, do generalists increasingly outperform specialists? LLMs are a lot better at fill in the blanks (the micro) than grand strategy (the macro).

Did people with smartphones outperform photographers?


Lots of very scared, angry developers in these comment sections recently...

Neither angry nor scared; I value my hard skills a lot. I'm just wondering why people religiously believe everything AI-related. Maybe I'm a bit sick of the excessive hype

FOMO really

There's no fear (a bit of anger I must admit). I suspect nearly all of the reaction against this comes from a similar place to where mine does:

All of the real world code I have had to review created by AI is buggy slop (often with subtle, but weird bugs that don't show up for a while). But on HN I'm told "this is because your co-workers don't know how to AI right!!!!" Then when someone who supposedly must be an expert in getting things done with AI posts, it's always big claims with hand-wavy explanations/evidence.

Then the comments section is littered with no effort comments like this.

Yet oddly whenever anyone asks "show me the thing you built?" Either it looks like every other half-working vibe coded CRUD app... or it doesn't exist/can't be shown.

If you tell me you have discovered a miracle tool, just show me the results. Not taking increasingly ridiculous claims at face value is not "fear". What I don't understand is where comments like yours come from. What makes you need this to be more than it is?


Also note that I'm a heavy LLM user, not anti-AI for sure

This is extremely reductive and incredibly dismissive of everything they wrote above.

It's because they don't have a substantive response to it, so they resort to ad hominems.

I've worked extensively in the AI space, and believe that it is extremely useful, but these weird claims (even from people I respect a lot) that "something big and mysterious is happening, I just can't show you yet!" set off my alarms.

When sensible questions are met with ad hominems by supporters, it further sets off alarm bells.


I see way more hype that is boosted by the moderators. The scared ones are the nepo babies who founded a vaporware AI company that will be bought by daddy or friends through a VC.

They have to maintain the hype until a somewhat credible exit appears and therefore lash out with boomer memes, FOMO, and the usual insane talking points like "there are builders and coders".


society doesn't take kindly to the hyper-aware. tone it down.

i'm not sure what kind of conspiracy you are hallucinating. do you think people have to "maintain the hype"? it is doing quite well organically.

So well that they're losing billions and OpenAI may go bankrupt this year

what if it doesn't?

better for them! like heck I care about it

This is a low quality curmudgeonly comment

Now that you've contributed net zero to the discussion and learned a new word, you can go out and play with your toys! Good job

You learned a new adjective? If people moved beyond "nice", "mean" and "curmudgeonly" they might even read Shakespeare instead of having an LLM produce a summary.

cool.

>Anyone wondering what exactly is he actually building? What? Where?

this is trivially answerable. it seems like they did not do even the slightest bit of research before asking question after question to seem smart and detailed.


I asked many questions and you focused on only one. Btw yes, I did my research, and I know him because I followed almost every tutorial he has on YouTube, and he never mentions clearly what weekend project he worked on to make him conclude with such claims. I had very high respect for him, if not for the fact that at some point he started acting like the Jesus Christ of LLMs

it's not clear why you asked that question if you knew the answer to it?

Just in case you missed it and are interested in a go alternative https://wails.io/

To anybody with experience, how's Swift? Especially outside macOS/iOS programming. Let's say I want to use it standalone for doing some systems programming, how's the standard lib? I'd like to not rely on Apple-specific frameworks like UIKit

One of the biggest issues I ran into years ago was that debugging outside of macOS was a nightmare. Even now, debugging is a terrible experience on a non-trivial project. I am not really sure if it's the size of the projects I've worked on, interop with Obj-C, compiler configs, project configs, or what, but it has always been a bad experience. I used it on/off for a project on Linux and the debugger didn't work at all. That was so long ago I am sure it has changed, but at least so far in my experience, lldb will stop working at some point. I've worked on large Obj-C and C++ codebases and never ran into any of the problems I've run into with Swift in this area.

Apple really needs to decouple swift debug and profiling tools from Xcode. I've been using vscode for swift work but hate having to switch back to Xcode for the tools.

Swift is pretty good.

As a language, I really like it. It feels very much like a cousin to Rust with a few tradeoffs to be more ergonomic.

The standard library is pretty good but the extended ecosystem is not as strong outside of Apple platforms, though that is improving.

If the ecosystem improved, like this project here, it would probably be my go-to language. Failing that it's usually Rust, Python, C# and C++ for me.

UI libraries outside of Apple frameworks are about as weak as in all those other languages if you don't have Qt bindings. Qt does have Swift bindings officially in the works though, so that could change.


> It feels very much like a cousin to Rust with a few tradeoffs to be more ergonomic.

Rust can be just as ergonomic. It takes some minor boilerplate of course, since you're resorting to coding patterns that are somewhat unidiomatic - but not nearly as much as the likes of C# or Java.


I disagree that Rust can be as ergonomic. I've been writing Rust for longer than Swift, and there are a lot of niceties in Swift.

Default parameters, null shortcircuits, lazy static initializers, computed properties, ease of binding to C++, RC by default, defer.
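
For anyone who hasn't tried Swift, a rough sketch of a few of those niceties in one place (the names and values here are made up purely for illustration):

    struct Config {
        // `static let` is a lazy, thread-safe, one-time initializer.
        static let shared = Config(verbose: true)
        var verbose: Bool

        // Computed property: derived on access, nothing extra stored.
        var mode: String { verbose ? "verbose" : "quiet" }
    }

    // Default parameter values: callers can omit `retries`.
    func fetch(_ url: String, retries: Int = 3) -> String? {
        // defer: runs when the scope exits, whatever the exit path.
        defer { print("fetch(\(url)) finished") }
        guard retries > 0 else { return nil }
        return "payload"
    }

    // Optional chaining (`?.`) short-circuits to nil instead of crashing,
    // and `??` supplies a fallback value.
    let length = fetch("https://example.com/file")?.count ?? 0
    print(Config.shared.mode, length)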

Both languages are great, but I don’t think they’re on the same ergonomic level by any means.


> ease of binding to C++

I wouldn't really call this an "ergonomic" feature of a language. That's a whole research project.

Regardless, C++ interop in Swift isn't straightforward and there are a multitude of issues. One is that you need to compile your C++ codebase with Apple's fork of LLVM and in some cases add annotations in your C++ so that Swift plays nice (which basically isn't interop at that point).

You can see the Ladybird projects issue tracker[0] and issues on the Swift forum that LB maintainers have created[1][2] to get an idea. Swift adoption has stalled due to these.

0: https://github.com/LadybirdBrowser/ladybird/issues/933

1: https://forums.swift.org/t/ladybird-browser-and-swift-garbag...

2: https://forums.swift.org/t/ladybird-gc-and-imported-class-hi...


It’s not perfect but if that’s your standard, then that cuts a lot of stuff from both languages :-)

I'm not sure why annotations are a bad thing to you. They're not necessarily Swift-specific and could benefit other bindings too, and their existence doesn't mitigate that it's a binding. Or do you not consider Rust bindable to any non-C language, since you'd have to write C FFI bindings in between?


Could be better, think .NET Core 1.1 timeframe when Microsoft finally decided to make it cross-platform.

You get the bare bones standard library, some of it still WIP, and naturally most libraries were written expecting an Apple platform.

The Windows workgroup was announced yesterday, and Linux support is mostly for macOS/iOS devs deploying some server code, because naturally OS X Server is no more.


Not quite systems programming, but this might give you some insight: Swift is memory efficient and runs stable backend services. I've seen benchmarks showing that it's slightly more performant than TypeScript but twice as memory efficient (though not as efficient when it comes to memory management compared to Rust, C, and C++).

The other point I've seen is that its string library is slow but very accurate.

Besides that, the C-interop means you have quite a bit of flexibility in leveraging existing libraries.


> The other point I've seen is that its string library is slow but very accurate.

Swift strings default to operating on grapheme clusters, which is relatively slow. But you can always choose to work with the underlying UTF-8 representation or with unicode scalars, which is fast.
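
To make that concrete, a tiny example (the counts assume the standard library's default grapheme segmentation):

    let flags = "🇨🇦🇯🇵"  // two flag emoji

    // Character view (default): grapheme clusters, accurate but slower.
    print(flags.count)                 // 2

    // Unicode scalar view: each flag is two regional-indicator scalars.
    print(flags.unicodeScalars.count)  // 4

    // UTF-8 view: raw code units, the cheapest to iterate.
    print(flags.utf8.count)            // 16

    // Byte-level work when grapheme accuracy doesn't matter:
    let newlines = flags.utf8.filter { $0 == UInt8(ascii: "\n") }.count
    print(newlines)                    // 0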

The only situation where UTF-8 incurs overhead is if the String comes from some old Objective-C API that gives you a UTF-16 encoded String.


> Swift strings default to operating on grapheme clusters, which is relatively slow.

The unicode-segmentation crate implements this for Rust, in case it matters for accuracy.


Unicode scalars are not so fast. But yes, working directly with UInt8/Data bytes is efficient.

That's how I took over maintenance of SwiftSoup and made it over 10x faster according to my benchmarks, along with various other optimizations such as reducing copying, adding indexes, etc.


Being only slightly more performant than an interpreted, GC-only language is hard to believe (even though TypeScript is just a spec and you probably meant slightly more performant than V8).

That's right, I said TypeScript but yeah, it's v8 under the hood.

For web server benchmarks, it's far behind TypeScript. Async has further degraded its performance competitiveness.

It also has C++ interop btw


Swift "feels" like C#. A lot of systems programming is done in C#.

Depending on your goals, it's worth giving C# a test-drive given Swift's similarity to C#.


Traditionally I would say it feels more like Ada, Modula-2, Object Pascal.

And if making reference counting part of the picture, Cedar, Modula-2+,...

Finally catching up with what we already had in the 1990's and lost, in a couple of decades split between C, C++ and VM based languages.


> Traditionally I would say it feels more like Ada, Modula-2, Object Pascal.

Well, that's from the Objective C history; and Objective C borrows a lot from those languages.

The thing is, once you're doing systems programming, it's unlikely you're going to call any Objective C APIs, or APIs that have an Objective C history. You're more likely to call something in C.


NeXTSTEP systems programming was done in Objective-C, including writing drivers.

Also Objective-C has nothing to do with those languages, so I got lost in what history.

It picks from C and Smalltalk.


Here's what I wrote on this subject back in September. Nothing has changed so far as I can see.

https://news.ycombinator.com/item?id=45417366


Oh noes! Will someone ever figure out how to open a file on Windows? I don't think we have the technology.

About 5-6 years ago, I worked a fair bit on an iOS app, primarily in Swift (there were some Obj-C and C++ bits). Until then, 90% of what I had written was either C++ or Python on Linux, and I had never worked on a mobile app and had barely used macOS (or iOS for that matter, I've always had Android phones). From that experience I had an unexpectedly favorable impression of the Swift language. I thought the ergonomics of the type system and error handling compared quite favorably to C++, with better performance and safety compared to Python. I didn't really like the Apple frameworks though; it felt like they were always making new ones and the documentation was surprisingly poor. Nor did I really gel with Xcode (which is virtually a requisite for iOS development) or macOS itself. But I actually liked Swift enough that I gave it a try outside of iOS for a few test apps. Unfortunately, at the time Swift outside iOS wasn't really mature and there wasn't much of an ecosystem. Not sure how much that has changed, but these days I'd probably reach for Rust instead.

It's lovely, not “always has been” but since, say, 5.10.

// I'm originally a Pascal and assembly dev (learned most Internet dev langs along the way) who hated what people did with Java (until the last 5 years), failed to like Ruby, liked Clojure, disliked Go, did like Nim, but really found Swift to be fresh air for data shapes and flow. And the tooling experience from git repo to iCloud build to TestFlight is worth every penny of the annual dev fee.


I tried using it on Windows, but it failed to compile as soon as I used file IO. The error was non-descriptive and had no matches online. I couldn't figure it out so I tried it without file IO, but as others have said the compiler is odd, the errors are odd, and in general it doesn't feel like the tooling is nearly as good as for most other popular languages.

Quite enjoyable. Some compiler errors are a pain.

What an understatement :)

It’s a lovely language but the compiler has got to be the most unreliable I’ve ever seen.

It crashes semi-frequently. And it will sometimes try to run analyses that are way beyond O(n). So you can have perfectly valid code that it can’t compile, until you simplify or reduce the size of some function.
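
For anyone who hasn't hit this: the classic symptom is the "unable to type-check this expression in reasonable time" error on perfectly valid code. Below is a contrived sketch of the kind of expression that has historically triggered it (whether it still does depends on your compiler version), plus the usual workaround of splitting the expression and annotating intermediate types:

    // Mixed integer/floating literals and overloaded operators make the
    // constraint solver explore a combinatorial space of overloads; on some
    // compiler versions a one-liner like this fails to type-check in time.
    let slow: Double = -(1 + 2) + -(3 + 4) + -(5 + 6) + -(7 + 8) + -(9 + 10) + -(11 + 12)

    // Workaround: break the expression up and annotate intermediate types.
    let a: Double = -(1 + 2) + -(3 + 4) + -(5 + 6)
    let b: Double = -(7 + 8) + -(9 + 10) + -(11 + 12)
    let total = a + b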


>> You no longer need to review the code.

You also no longer need to work, earn money, have a life, read, study, know anything about the world. This is pure fantasy; my brain farts hard when I read sentences like that


> You also no longer need to work, earn money, have a life, read, study, know anything about the world. This is pure fantasy

This will be reality in 10-20 years


A traditional Marxist revolution is more likely than that.

It's already a reality if you want it; today and in 10-20 years the outcome will be the same: being homeless! And no, please, no UBI BS, thanks

99.9% of today’s jobs will be fully automated in 20 years. What do you think will happen to all the unemployed population?

I remember when they were saying that 20 years ago

20 years ago, Kurzweil predicted AGI will be achieved by 2029, and ASI by 2045. We are right on track.

hahahahaha. Please, can you advise on lottery numbers? I'd like to win a bunch of money before losing my job

Why should we throw away decades of development in deterministic algorithms? Why do tech people mention "geneticists"? I would never select an algorithm with a "good" flying trait to make an airplane work, that's nuts

But you have already selected an algorithm with a "good" flying trait for making airplanes, just via another avenue than pure random generation. The evolution of the bird came up with another algorithm, for example, where they use flapping wings instead of thrust from engines. Even in airplane development, a lot was learned by studying birds, which are the result of a random-walk algorithm.

No, there is no selection and no traits to pick; it's the culmination of research and human engineering. An airplane is a complex system that needs serious engineering. You can study birds, but only up to a certain point; if you like it, go bird watching, but it's everything except engineering

>it's the culmination of research and human engineering.

And how is this different than the process of natural selection? More fit ideas win out relative to less fit and are iterated upon.


First of all, natural selection doesn't happen per se, nor is it controlled by some inherent mechanism; it's the byproduct of many factors, external and internal. So the comparison is just wrong. Human engineering is an iterative process, not a selection. And if we want to call it selection, even though it is a stretch, we're controlling it, we're the master of puppets; natural selection is anything but a controlled process. We don't select a more resistant wing, we engineer the wing with a high bending tolerance. Again, it's an iterative process

We do select for a more resistant wing. How did we determine that this wing is more resistant? We modeled its bending tolerance and selected this particular design against other designs that had worse evaluated results for bending tolerance.

And that, my friend, is just engineering; like I said above, it's an iterative process. There is no "natural selection" from randomly shaped wings

First, how did we model the bending tolerance if everything is just randomness?

Second, there are other algorithms that constructively find a solution and don't work at all like genetic algorithms, such as mathematical solvers.

Third, sometimes, a design is also simply thought up by a human, based on their own professional skills and past experience.


Yes, and it was an intentional process.

Natural selection:

- is not an intentional process

- does not find "the strongest, the fittest, the fastest, etc."


By that logic, everything humans do is by definition the result of natural selection. Everything is a sphere if you zoom out far enough.

However, your starting definition was more limited. It was specifically about "creating candidates at random, then just picking the one that performs best" - and that's definitely not how airplanes are designed.

(It's not even how LLMs work, in fact)


Great rule of business: sell a solution that causes more problems, requiring the purchase of more solutions.

Customers are tired of getting piles of shit, look at the Windows situation

Or don't sell the solution. When you have monopolies, regulatory capture, and endless mountains of money, you can more or less do what you'd like.

That's a lie; people will eventually find a way out, it has always been like that, be it open source or by innovating, eventually leaving the tech giants that are unable to innovate to die. We have Linux, and this year will be the most exciting for the Linux desktop given how bad the Windows situation is

Only been hearing that for twenty years and these tech giants are bigger than they’ve ever been.

I remember when people said Open Office was going to be the default because it was open source, etc etc etc. It never happened. Got forked. Still irrelevant.


I said "be it open source or by innovating", e.g. Google innovated and killed many, and also contributed a lot to open source. Android is a Linux success, ChromeOS too. Now Google stinks and it is not innovating anymore, except for when other companies, like OpenAI, come for its lunch. Google was caught off guard but is eventually catching up. Sooner or later, big tech gets eaten by the next big tech. I agree that if we stop innovating that would never happen, but Open Office is the worst example you could have picked

> redistribute something to the society

with a proprietary black box tool you pay a subscription for? that's nonsense


In Greek mythology Prometheus took fire from the gods and gave it to humans, for the low subscription fee of a liver a day.

You can always run models locally? Local models will become cheaper and faster over time. Don't panic just yet

"Will" this, "will" that, and never considering "will not". As of now, going by observed evidence, it looks way more like the latter to me.

this argument is nonsense…I write code on a macbook running macos. it’s not a subscription, but some people also pay a subscription for a proprietary IDE. so any FOSS written with proprietary paid software doesn’t count to you? only if it’s a subscription model?

> I write code on a macbook running macos. it’s not a subscription

You already answered yourself, but let's pretend yours is a valid point: if you lose access to a JetBrains IDE you can still code in another free IDE/text editor and still give to society, without heavily relying on AI somewhere in the tech bros' cloud, which they don't want to give back to society; they want to be the gatekeepers of programming.


and you can switch AI providers, or use local LLMs. again, a nonsense point to raise about how FOSS is developed. coding “by hand” also doesn’t go away. if you lose your proprietary tools (laptop, OS, IDE, or coding agent) you can always work around it
