Hacker News

> Are we sure that an AI could not engage in enough back and forth conversation to firm up the spec?

This is the doomsday argument. What would I do if there's a nuclear apocalypse before lunch? I guess I'll die like everyone else.

An AI sufficiently advanced to do that is also sufficiently advanced to run the entire business in the first place, and also argue cases in court, do my taxes, run for president and so on.

You either believe that transformer models are "it", or you haven't actually removed the problem of specifying requirements formally. Which, you know, is actually much harder to do in English than it is to do in C++.



>You either believe that transformer models are "it", or you haven't actually removed the problem of specifying requirements formally. Which, you know, is actually much harder to do in English than it is to do in C++

This is actually something that makes me happy about the new AI revolution. When my professor said that, I thought he was an idiot, because no-code tools always make it harder to specify what you want when you have specific needs the tool's developers didn't think about.

We give kids books with pictures because pictures are easier, but when we want to teach about more complex topics we usually use language, formulas, and maybe a few illustrations.

I still think no-code was always doomed because every attempt at it lacked an interface capable of describing anything you want, the way language does.

AI is finally putting an end to the notion that no-code should mean clicky, high-maintenance GUIs. Instead it's doing for coding what Google did for search: rather than navigating rigid categories, we can use language to interact with the internet.

Now the language interaction is getting better. We haven't regressed to McDonald's menus for coding.


I’ve used no-code tools since the 90s and they all share a fatal flaw. For simple demo use cases they look simple and cool. Then you go to the real world, start getting pivots and edge cases you have to fix in the interface, and it becomes a 4D nightmare and essentially a very bad programming language.


I’ve spent a fair bit of time working on interactive chat systems that use a form of visual programming. It’s not good. Once you get past the toy stage (which is good and ergonomic), it’s just the same as programming except the tooling is far worse, you have to invent all your change management stuff from scratch, and it’s like going back 30 years.


What about coding in two languages, one textual and one visual?

Or a single language that has both visual and textual components

Or a single language where each component can be viewed in textual or visual form (and edited in the form that makes most sense)
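One way to read that last suggestion: keep a single canonical representation (say, an expression tree) and project it into either view on demand. A toy sketch, assuming a tiny arithmetic language, with the textual view as infix source and the visual view as Graphviz DOT:

```python
class Node:
    """A binary expression tree: the canonical form behind both views."""
    def __init__(self, op, left=None, right=None):
        self.op, self.left, self.right = op, left, right

    def as_text(self):
        """Textual view: ordinary infix source code."""
        if self.left is None:
            return str(self.op)
        return f"({self.left.as_text()} {self.op} {self.right.as_text()})"

    def as_dot(self):
        """Visual view: Graphviz DOT, renderable as a diagram."""
        lines, counter = [], [0]
        def walk(node):
            nid = counter[0]; counter[0] += 1
            lines.append(f'  n{nid} [label="{node.op}"];')
            for child in (node.left, node.right):
                if child is not None:
                    cid = walk(child)
                    lines.append(f"  n{nid} -> n{cid};")
            return nid
        walk(self)
        return "digraph expr {\n" + "\n".join(lines) + "\n}"

expr = Node("+", Node(1), Node("*", Node(2), Node(3)))
print(expr.as_text())  # (1 + (2 * 3))
print(expr.as_dot())
```

Editing in "the form that makes most sense" then reduces to mutating the tree and re-rendering whichever view you're looking at.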


Isn't the "Chat" part of ChatGPT already doing something close to this? I mean the clarification comes from the end-user, not from the AI, but with enough of this stuff to feed upon, perhaps AIs could "get there" at some point?

For example, this guy was able to do some amazing stuff with ChatGPT. He even managed to get a (mostly working) GPU-accelerated version of his little sample "race" problem.

See: https://youtu.be/pspsSn_nGzo


> Isn't the "Chat" part of ChatGPT already doing something close to this?

No, the amount of handholding you have to do to get it to work effectively presumes you already know how to solve the problem in the first place.

The best way to use it is the opposite of what everyone is busy selling: as a linter of sorts that puts blue squiggles under my code saying things like "hey stupid human, you're leaking memory here", or even "you're using snake case, the project uses camel case, fix that".

That would actually lower my cognitive load and be an effective copilot.
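The naming-convention half of that hypothetical linter doesn't even need a model; a deterministic sketch (all function names here are illustrative, not from any real tool) might look like:

```python
import re

SNAKE = re.compile(r"^[a-z]+(_[a-z0-9]+)+$")
CAMEL = re.compile(r"^[a-z]+([A-Z][a-z0-9]*)+$")

def style_of(name):
    """Classify an identifier as snake, camel, or other."""
    if SNAKE.match(name):
        return "snake"
    if CAMEL.match(name):
        return "camel"
    return "other"

def convention_warnings(identifiers, project_style="camel"):
    """Flag identifiers that deviate from the project's dominant style."""
    return [
        f"'{name}' uses {style_of(name)} case, the project uses {project_style} case"
        for name in identifiers
        if style_of(name) not in (project_style, "other")
    ]

print(convention_warnings(["fooBar", "do_thing"], "camel"))
```

The memory-leak squiggle is where a model would actually earn its keep, since that requires understanding, not pattern matching.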


Fair enough - assuming steady state, but the acceleration is the curve I'm most curious about.

The point I was alluding to above was that the prompts themselves will be recursively mined over time. Eventually, except for truly novel problems, the AI interpretation of the prompts will become more along the lines of "that's what I wanted".

Some things to think about: What happens when an entire company's Slack history is mined in this fashion? Or email history? Or Git commit history, with corresponding links to Jira tickets? Or the corporate wiki? There are, I'd guess, hundreds of thousands to millions of project charter documents to be mined, all locked behind an "intranet" - but at some point, businesses will be motivated to, at the least, explore the "what if" implications.

Given enough data to feed upon, and some additional code/logic/extensions to the current state of the art, I think every knowledge worker should consider the impact of this technology.

I'm not advocating for it (to be honest, it scares the hell out of me) - but this is where I see the overall trend heading.


This is the doomsday scenario again, though.

In a world where we have the technology to go from two lines of prompt in a textbox to a complete app, no questions asked, that same technology can run the entire company. It's hard to believe transformer models are capable of this, given we're already starting to see diminishing returns, but if you believe they are, then you believe they can effectively do anything. It's the old concept of AI-completeness.

If you need to formally specify behavior, at any point in the pipeline, then we're back to square one: you just invented a programming language, and a very bad one at that.

This remains true for any version of a language model, even a hypothetical future LLM that has "solved" natural language. Given the chance, I would still rather write a formal language than natural language.


> If you need to formally specify behavior, at any point in the pipeline, then we're back to square one: you just invented a programming language, and a very bad one at that.

But what if the "programming language" is not a general-purpose language, but a context/business domain specific language? One that is trained on the core business at hand? What if that "language" had access to all the same vocabulary, project history (both successful and unsuccessful), industry regulations, code bases from previous (perhaps similar) solutions, QC reports, etc.? What if the "business savvy" consumer of this AI can phrase things succinctly in a fashion that the AI can translate into working code?

I don't see it as a stretch "down the road." Is it possible today? Probably not. Is it possible in 5-10 years' time? I definitely think so.


I agree with your point about how to best use it today. We have seen that each new model generation both improves the prior tasks and unlocks new ones through emergent behavior. That’s the fascinating/scary part of this development. And yes, it’s “just” a language model. It’s “just” predicting next token given training + context. We don’t really understand why it’s working and it’s evolving non-linearly.

I asked GPT-4 to give me an SVG map of my town. I then asked it to put dots on some local landmarks. The map was toddler level, but the landmarks were relatively accurate in terms of their relationship to each other and the blob that it drew.

So this is a language model that has some emergent notion of space in its code generation abilities.
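A rough sketch of what the model was effectively emitting: SVG markup where the dots' positions encode the landmarks' spatial relationships (the coordinates and landmark names below are made up for illustration):

```python
def svg_map(landmarks, width=200, height=200):
    """Render (name, x, y) landmarks as titled dots on a bare SVG canvas."""
    dots = "".join(
        f'<circle cx="{x}" cy="{y}" r="3"><title>{name}</title></circle>'
        for name, x, y in landmarks
    )
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{width}" height="{height}">{dots}</svg>')

# Hypothetical town: the interesting part is the relative placement,
# not the absolute accuracy.
print(svg_map([("town hall", 100, 90), ("station", 60, 150)]))
```

Producing plausible (x, y) pairs from text alone is exactly the emergent spatial notion the parent describes.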


This is far from the doomsday argument, but maybe it's the "AI can do everything that has significant economic value today" argument.



