Big fan of this pushback, because there are a lot of projects that have that smell of over-engineering on the wrong base (especially with vibecoding now). Though there are use cases where some of us have lots of medium-sized data divided up. For compliance, I have a lot of reporting data split such that DuckDB instances running in separate processes work amazingly well for us, especially given the lower complexity compared to other compute engines in that environment. If I wanted to move everything into somewhere ClickHouse/Trino/Databricks/etc. would work well, the compliance complexity skyrockets: we'd need perfect configs and tons of extra time invested to get the same devex.
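A minimal sketch of the kind of setup I mean, in Python (the paths, the `transactions` table, and the report query are all made up for illustration): one DuckDB file per compliance domain, with each report running in its own process against its own file.

    # Sketch: per-domain DuckDB reporting, one embedded database per process.
    # All paths and the `transactions` table are hypothetical.
    from multiprocessing import Process

    import duckdb

    def run_report(db_path: str, out_path: str) -> None:
        # One connection per process; DuckDB is embedded, so there's no shared
        # server and each domain's data stays physically partitioned.
        con = duckdb.connect(db_path, read_only=True)
        con.execute(
            "COPY (SELECT region, sum(amount) AS total "
            "      FROM transactions GROUP BY region) "
            f"TO '{out_path}' (FORMAT PARQUET)"
        )
        con.close()

    if __name__ == "__main__":
        jobs = [
            ("reports/emea.duckdb", "out/emea_totals.parquet"),
            ("reports/apac.duckdb", "out/apac_totals.parquet"),
        ]
        procs = [Process(target=run_report, args=job) for job in jobs]
        for p in procs:
            p.start()
        for p in procs:
            p.join()

The point being there's no cluster to configure: the isolation boundary is just the OS process and the file.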
I think articles like this need the preamble/framing that accommodations given to neurodivergent individuals come with benefits that are not otherwise intuitive to those without that 'flavour' of neurodivergence.
There are a lot of 'what about women' arguments, which is fair, but I think it's entirely reasonable to view this as a harm to men's mental health, and that should be the focus.
Right now, we have a societal issue of isolated, vulnerable men. Men have the highest suicide rate, are statistically more likely to commit violent offenses, and have a high rate of domestic abuse. 'Incels', who are likely these companies' target market, have committed domestic terrorism. I think this is an issue we should 100% be looking at; imagine what happens when there's AI misalignment: they could become a risk to themselves and others. The last thing we need is an unreliable tool for someone in that situation to soothe the pains of social isolation.
> Men have the highest suicide rate, are statistically more likely to commit violent offenses, and have a high rate of domestic abuse. 'Incels', who are likely these companies' target market, have committed domestic terrorism.
Do everyone a favor and stop equating 'incel' with things like terrorism.
Largely agree, given your definitions and clarifications, but I see some of these as correlated issues, not direct causes of the death of that programming approach. The gap between programmers and end users, the scope of 'users' expanding to other programmers, and the increased complexity creating more abstract, soft-skill code-delivery/management roles are all co-existing issues. More a comorbidity situation: they didn't help, but they didn't cause the death directly. I'd say the primary cause is the cost and complexity of operations, forcing the perspective shift from 'help at least one actual human being' to 'help at least <MINIMUM VIABLE MARKET SHARE> of users/developers'. As an aside, I'd also argue that well-designed frameworks and tools directed at devs are still abstractly utilitarian, because if they didn't exist, a human would have to do the work of programming or do the task manually, so they directly help at least one human.
Looks like they want to build up and support middlemen to build the apps rather than doing it themselves, and position themselves more as a platform or operating system. Which makes sense, with giant corporations reporting 95% of AI projects failing while the core success cases are specialist companies tuning the platform to a specific problem. Then there are a ton of snake-oil AI apps over-promising and under-delivering, hurting the image of AI's usefulness.
This is probably purely a pivot in market strategy toward profitability: increasing token usage and consumer/public trust, more than farming ideas for internal projects.
I think that MIT study claiming 95% of internal AI projects fail has scared a lot of corporations off risking time on them. I think they also see that they're hitting a limit of profitable intelligence from their services (with the growth in intelligence over the past 6–8 months being more realistic, not unbelievable like in the past few years).
I think everyone is starting to see this as a middleman problem to solve. Look at ERP systems, for instance: when they popped up, the industry had some growing pains. (Or even early Windows/Microsoft, with its 'developers, developers, developers' target audience.)
I think OpenAI sees it will take a lot of third-party devs to take what OpenAI has and run with it. So they want to build a good developer and startup network, to make sure there's a good, solid ecosystem of AI options that corporations and people can use.
The MIT study as released also does not really provide any support for the 95% failure rate claim. Until we have more details, we really don't know where that number came from:
Yeah, from what I understand, 'chats' and AI coding are areas where they already dominate the market / are a leader, and are a good/okay product. It's the other use cases they haven't delivered on, in terms of other companies using them as a platform to deliver AI apps, which I'd imagine was a huge vertical in their pitches to investors and internal plans.
These third-party apps drive huge token usage with agentic patterns. So losing out on them, and being forced to make more internal products tuned to specific use cases, is not something they want to build out or explore.
AI coding is mid (okay), yes; my main point is that people use it and it's a good line of business for them right now. They expected bigger breakthroughs, like GPT-2 to 3 to 4, and that's not happening, so they have to lean on the other aspects of the business more.
The fact that it is mid is why they really need all the other lines of business to work, AKA selling tokens to AI apps that specialize in other mid products, and limiting the snake-oil AI products littering the market and ruining AI's image as the new catch-all solution.
I was a big user of IntelliSense and, even more heavily, IntelliJ for most of my career. It truly seemed like magic back then. I recall telling a colleague who preferred Emacs that it felt like having an editor that could read your mind, and I'd joke that my Tab key was getting worn out.
Then I discovered LLMs.
If you think IntelliSense is comparable to what LLMs can do, you really, really need to try giving an AI higher-level problems to solve. Throwaway example I gave in a similar thread a few weeks ago: https://news.ycombinator.com/item?id=44892576
I think a big part of simonw's shtick is trying to get people to give LLMs a proper try, and TBH that's what I end up doing a lot too, including right now! The problem is a "proper try" takes dedicated effort, because it's not obvious where the AI will excel or fail for your specific context, and people legitimately don't have enough time for that.
But once you figure it out, it feels like when you first discovered IntelliSense, except you already know IntelliSense, so it's like... IntelliSense raised to the power of IntelliSense.
The thing is that the languages that need IntelliSense that much are languages that made it too easy to construct complex systems. For Lisp and C, you can get autocompletion for free, and indexing to offer doc previews and signatures can be done quite easily as well. There's also an incentive to keep things short and small.
Then you have Java and C#, where you need a whole IDE if you're writing more than 10 lines, because using anything brings the whole jungle with it.
Hmm, I think all languages, regardless of verbosity, could be better with IntelliSense. I mean, if the IDE can reliably predict what you intend to type based on the context, regardless of the complexity of the application involved, why not have it?
Seems like languages like Java and C# that encourage more complexity just aim to provide richer context to mine. Simple example: given an incomplete line like "TypeA foo = bar.", the IDE can very easily figure out you want "bar.getBlah(baz)", because getBlah has a return type of TypeA and baz is the only variable available in the scope. But having all that context at that point requires a whole bunch of setup beforehand, like fine-grained types supported by a rich type system, function signatures, and so on, which incentivizes verbosity that usually scales with the complexity of the app.
So yes, that's a lot of verbosity, but also a lot of context. To your point, I feel like the philosophy of languages like Java and C# is deliberately based on providing enough context for sophisticated tooling like IntelliSense and IntelliJ.
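To make that mining concrete, here's a toy sketch in Python (purely illustrative; not how any real IDE engine works, and the class/method names are made up): enumerate an object's methods and keep only those whose annotated return type matches the declared target type.

    # Toy sketch of type-directed completion: keep only the methods of `obj`
    # whose annotated return type matches the declared target type.
    # Purely illustrative; real engines index far more context than this.
    import inspect
    import typing

    class TypeA: ...

    class Bar:
        def get_blah(self, baz: int) -> TypeA: ...
        def get_name(self) -> str: ...

    def completions(obj: object, target: type) -> list[str]:
        names = []
        for name, member in inspect.getmembers(obj, inspect.ismethod):
            if name.startswith("_"):
                continue
            hints = typing.get_type_hints(member)
            if hints.get("return") is target:
                names.append(name)
        return names

    bar = Bar()
    print(completions(bar, TypeA))  # ['get_blah'] -- the only method returning TypeA

The "baz is the only variable in scope" half would be a similar filter over the local scope; the point is just how much of the IDE's "mind reading" falls out of the declared types.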
Unfortunately, the languages came before such sophisticated tooling existed, and when good tools did exist, they were expensive; even with those tools now widely and freely available, many people still don't use them. (Plus, in retrospect, the language designs themselves genuinely turned out to be more complex than ideal in some aspects.)
So the current reputation of these languages for encouraging undue complexity is probably due to their philosophies being grounded in sound reasoning but based on predictions that didn't quite pan out as expected.
The thing is, we did have nice tooling before those languages came to be. If you look at Smalltalk, it has this kind of context in an even more powerful way. You can browse the whole library in a few clicks and view its code. It has a Playground element where you can try out and design stuff. And everything is inspectable.
Same with Lisp. If you take Emacs as an example, you have instant documentation on every function.
Another example is Python, where there's a help system embedded in the language.
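For example, straight from a REPL, with nothing but the standard library (the greet function here is just a stand-in):

    # Python's help/introspection facilities ship with the language itself.
    import inspect

    def greet(name: str, excited: bool = False) -> str:
        """Return a greeting for `name`, optionally with an exclamation mark."""
        return f"Hello, {name}{'!' if excited else '.'}"

    print(inspect.signature(greet))  # (name: str, excited: bool = False) -> str
    print(inspect.getdoc(greet))     # the docstring above
    help(greet)                      # the built-in interactive help system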
Java is basically unwritable without a full indexer and completion. But it has a lot of guardrails and its verbosity discourages deviation.
And today we have Swift and Kotlin, which are barely better. They do a lot of magic behind the scenes to reduce verbosity, but you're still reliant on the indexer, which is now coupled to a compiler for the magic stuff.
Better languages insist on documentation, contextual help, shorter programs, no magic unless created by the programmer, and visibility (inspection with a debugger, and traceability with the system source available if possible).
I never used Smalltalk, but from what I've heard about it, I feel like Java/C# etc. were a deliberate push towards that kind of environment via IDEs. I'm not sure why Smalltalk didn't catch on, but it may have something to do with resistance from the C++ programmers whom Guy Steele mentioned they had to drag towards Lisp via Java. It seems to me that the current crop of languages is the result of this forced evolution of a reluctant developer market, from relatively barebones languages like C/C++ towards a Smalltalk-like future.
Same. My personal theory is that it excels and overachieves where there's already a really fleshed-out, even oversaturated, developer ecosystem (and experienced developer pool) with a lot of organizations' legacy software built on it. I think it will gain momentum as the need for distributed LLM agents and tooling picks up. (Or when people need extreme cost savings on front-facing APIs/endpoints that run simple operations.)