This offering, and the other half-dozen like it this past week or so, is like giving a kid a flamethrower.
It's all fun and games until they burn down your house.
> ... I need to understand the intent, the whys behind the choices.
As do I.
And that is something ChatGPT-X (for any given X) cannot provide, regardless of whether or not what is produced is correct. Perhaps, with some form of backward chaining[0], a ChatGPT-X will someday be able to explain how it arrived at what it produced.
It's weird to see a forum for hackers, with hacker in the name, and with a line about encouraging curiosity in the charter, be so hostile to someone who hacked something together.
Sign of the times perhaps.
Though I guess it's not much different from the thread trashing Dropbox however many years back.
>so hostile to someone who hacked something together.
It's not hostility, but I'm a bit tired of all these projects sprouting up around AI.
If it were an open-source project full of bugs, I would understand; I would encourage the creator and offer solutions, maybe even file tickets or fix bugs myself.
But with AI, we are flooded with tons of closed-source frontends to a closed-source backend, and those projects are worse than buggy, since they confidently give bad solutions. It's not like a "DIY electric car project"; it's someone taping cardboard to a Tesla and pretending that makes it safer or faster.
I'm dumbfounded and don't know how I'm supposed to react to this. I would certainly not release something like that, since it's antithetical to what I do and to what I believe software should be.
Good point. I wish OpenAI released more of their work as open source. I wish people building on top of them did too. That said, I usually won't begrudge a small-time developer or entrepreneur from choosing whatever licensing model they think is going to make them the most money. An army of small-time entrepreneurs who build closed source can still have democratizing effects on a market that's been captured by a few large companies. I'm more frustrated when I see big, entrenched companies finding ways to capture value from the open source ecosystem and privatize it.
My view on v1s, prototypes, and PoCs, regardless of their licensing, is that by design they're going to be a mess and have errors; if they don't, you waited too long to ship. Maybe these folks should have been a little more honest in their marketing, but if we're going to compile a list of offenders on that front, I think they're way, way down on it.
Overall, in my view, LLMs are the most disruptive thing to come along since the Web itself. Business models like Google's are facing a direct challenge from this technology. Why do I want to look at Google's first page full of shitty search ads when I can use an LLM to get an answer immediately? As far as I'm concerned, at this stage I would love to see a billion projects from every corner of the world built on top of this technology. Whether they're great or they're crap, the avalanche is the first real opportunity in many years to disrupt some giants.
> It's weird to see a forum for hackers, with hacker in the name, and with a line about encouraging curiosity in the charter, be so hostile to someone who hacked something together.
My comment was in direct response to an overarching concern raised by the implications of incorporating "LLM-generated code." This is relevant here due to the "Show HN" description above, which reads thusly:
Regex.ai is an AI-powered tool that generates regular
expressions. It can accurately generate regular expressions
that match specific patterns in text with precision.
If you interpreted my characterization of "... like giving a kid a flamethrower" as hostile, then I extend my apologies to the OP; I was using the phrase as a literary device, elaborated in what followed. I thought the broadening of scope in "the other half-dozen like it this past week or so" made that sufficiently clear.
As to "encouraging curiosity," I point you to the feedback I provided to the OP in a sibling reply to this one.
Are you trying to say that every sort of criticism equals hostility? If I don't like your half-thought-out idea, I am hostile. If I praise it, I feel like an idiot. Not much choice remaining after all...
I’m not critical of the hack itself (unless it uses OAI’s closed commercial LLMs); I’m just not a fan of some implications of using it in real circumstances. It might work for a personal thing, but if you use it for anything important, you still need to know how regular expressions work.
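To make that concrete, here is a hypothetical illustration (not actual output from the tool in question): a generated regex can pass a casual spot-check while accepting garbage, in a way you'd only notice if you can read the pattern yourself.

```python
import re

# Hypothetical example: a pattern that "looks right" for version strings
# like "1.2.3", but was written with unescaped dots.
naive = re.compile(r"^\d+.\d+.\d+$")     # '.' matches ANY character here
strict = re.compile(r"^\d+\.\d+\.\d+$")  # dots escaped: literal '.' only

print(bool(naive.match("1.2.3")))   # True  - passes a casual spot-check
print(bool(naive.match("1a2b3")))   # True  - silently accepts garbage
print(bool(strict.match("1a2b3")))  # False - the escaped pattern rejects it
```

Both patterns agree on the happy-path input, so testing only "1.2.3" would never reveal the bug.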
> It's weird to see a forum for hackers, with hacker in the name, and with a line about encouraging curiosity in the charter, be so hostile to someone who hacked something together.
I guess people are getting tired of so many posts in one narrow space. I come to HN for variety, and it does get tiring when every single day I see yet another LLM-based solution attempting to solve a problem I don't think I even have.
Overdose of a certain topic is not good for a general tech forum like this. Everything should be in moderation and all that.
This forum is also against decentralization and Web3, and often shills for large centralized corporations. The ethos of hackers was always ANTI that stuff.
You can ask it to explain why. It might not be a true representation of why those decisions were made, but at least it's a plausible explanation of why something could work that way, which is better than nothing. I'm not sure why you think it can't do that already?
So if I look at most codebases, someone would be able to explain what all the code does and why it does it that way? I'm extremely sceptical of that, even for code I myself wrote three weeks ago.
A person should be able to explain the code they're adding to a repo at the time they are adding it. Whether or not they can explain it at some arbitrary point in the future is a different question/issue.
But "the why" is the domain of people.
0 - https://en.wikipedia.org/wiki/Backward_chaining