I was pondering similar thoughts. Will LLM assistants ossify our current programming languages? My limited testing seems to show LLM assistants do well the more popular the language is (more data in their training), so is the hurdle for adoption of something new going to get even higher?
In an alternate universe, if LLM only had object oriented code to train on, would anyone push programming forward in other styles?
I recently picked up Hare, which is quite obscure, and Claude was helpful as a better, albeit hallucinogenic, Google. I think LLMs may not lead to as much ossification as I’d originally feared.
I had looked at it recently while checking out C-like languages. (Others included Odin and C3.) I read some of the Hare docs and examples, and had watched a video about it on Kris Jenkins' Developer Voices channel, which was where I got to know about it.
I like it much more than Zig, and while I like Odin’s syntax more, Hare is more focused on the types of tooling I want to build, so I find Hare’s stdlib preferable. Give it a spin. It’s a simple language.
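If you want a quick taste before working through the tutorial, a minimal Hare program looks roughly like this (a sketch based on the docs' hello-world, so double-check it against the current tutorial):

    use fmt;

    // Entry point. fmt::println can fail, so the trailing "!"
    // asserts that the write succeeded.
    export fn main() void = {
        fmt::println("Hello, Hare!")!;
    };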
I started reading the Hare tutorial again because of your comment.
Looks good so far.
Just one note for anyone else wanting to check it out:
There are a few sections in the tutorial which are empty and marked as TODO, e.g. "Using default values" and "Handling allocation failure" (those are the empty sections I've seen so far; there may be others further down).
>My limited testing seems to show LLM assistants do well the more popular the language is (more data in its training), so is the hurdle for adoption of something new going to get even higher
Not only that, they also tend to answer using the more popular languages or tools even when it is NOT necessary. And when you call them out on it, they respond with something like:
"you are absolutely right, this is not necessary and potentially confusing. Let me provide you with a cleaner, more appropriate setup...."
Why doesn't it just respond with that the first time? The code it provided works, but it's very convoluted. If it isn't checked carefully by an experienced dev who knows to ask the right follow-up question, you'd never get that second answer, and that vibe code will just end up in a git repo and get deployed all over the place.
I get the feeling some big corp may have just paid money to have their plugin/code show up in the first answer even when it is NOT necessary.
This could be very problematic. I'm sure people in advertising are licking their chops over how they can capitalize on it. If you think the current ad industry is bad, wait until that is infused into all the models.
We really need ways to:
1. Train our own models in the open, with the weights and the data they are trained on. Kinda like the reproducible build process Nix uses for packages.
2. Debug the model at inference time. The <think> tag is great, but I suspect not everything in that process is transparent.
Is there something equivalent of formal verification for model inference?
Of course it’s always been easier to find talent when working in more popular languages. That’s the big risk you take when you choose the road less traveled.
Easier to find CVs containing a keyword, sure. In reality, just about any programmer should be able and willing to pick up a language when needed. Unless you have hyper-specific needs around some massive framework, pattern-matching HR is not a good way to find programmers.
> is the hurdle for adoption of something new going to get even higher?
Yes.
But today the only two reasons to use niche languages are[0]: 1) you have existing codebases or libraries in that language, or 2) you're facing quite domain-specific problems where the domain experts all use that language.
In either case you won't just use Java because LLMs are good at Java.