I've been developing a programming language that does this; the repo can be found at https://github.com/plangHQ

Here is a code example:

  ReadFileAndUpdate
  - read file.txt in %content%
  - set %content.updated% as %now%
  - write %content% to file.txt
I call this intent-based programming. There isn't a strict syntax (there are only a few rules): the developer creates a Goal (think of it as a function) and writes steps (each starting with -) to solve the goal.
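
For comparison, the three steps above amount to roughly the following in a conventional language (an illustrative Python sketch, assuming file.txt holds JSON; the actual generated code will differ):

  import json
  from datetime import datetime

  # - read file.txt in %content%
  with open("file.txt") as f:
      content = json.load(f)

  # - set %content.updated% as %now%
  content["updated"] = datetime.now().isoformat()

  # - write %content% to file.txt
  with open("file.txt", "w") as f:
      json.dump(content, f)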

I've been using it for clients with very good results. Over the 9 months I've been building code with it, the experience has shown that far less code needs to be written, and you see the project from a different perspective.



It looks like Python without parentheses, but instead of using the REPL you use a black box that costs money for every line.

I kind of miss when NFTs were all the rage; at least that was fun.

Also, please show us real code that is generated.



I like what you're doing there. It does seem like we might need some new kind of language to interface with LLMs: sort of a language of prompt engineering, one that's a bit more specific than raw English but more powerful than a pure templating system.


Yeah, I don't believe LLMs will be able to code fully; they are analog, trying to do something digital where everything needs to be 100% correct.

With plang being an analog language, I see the LLM is able to code so much more, and it never has syntax, library, or other build errors.


But we also have to admit that LLMs may become (or maybe OpenAI o1 already is) smart enough to not only write the code to solve some task, but understand the task well enough to write even better unit tests than humans ever could. Once AI starts writing unit tests (even internally) for everything it spits out, we can probably say humans will at that point be truly obsolete for writing apps. Even then, however, the LLM output will still need to be computer code, rather than having the LLMs "interpret" English all the time to "run" apps.


Ever heard of the halting problem [0]? Every time I hear these claims, it sounds like someone saying we can travel in time as soon as we invent a faster-than-light vessel, or better, Doctor Who's TARDIS. There's a whole set of theorems saying that a formal system (which computers are) can't be completely automated, as there are classes of problems it can't solve. Anything the LLMs do, you can write better-performing software for, except the task they are best suited to: translation between natural languages. And that only because it's a pain to write all the rules by hand.
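
The classic diagonalization argument fits in a few lines (a Python sketch; halts() is the hypothetical decider the theorem rules out):

  def halts(program, data):
      # hypothetical decider: returns True iff program(data) halts
      ...

  def diagonal(program):
      # do the opposite of whatever halts() predicts
      if halts(program, program):
          while True:
              pass  # predicted to halt, so loop forever
      else:
          return    # predicted to loop forever, so halt

  # halts(diagonal, diagonal) is wrong either way,
  # so no such decider can exist.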

[0]: https://en.wikipedia.org/wiki/Halting_problem


LLMs are doing genuine reasoning already (and no, I don't mean consciousness or qualia), and they have been since GPT-3.5.

They can already take descriptions of tasks and write computer programs to do those tasks, because they have a genuine understanding of the tasks (again no qualia implied).

I never said there are no limits to what LLMs can do, or no limits to what logic can prove, or even no limits to what humans can understand. Everything has limits.

EDIT: And before you accuse me of saying LLMs can understand all tasks, go back and re-read the post a second time, so you don't make that mistake again.


Hmm, couldn't that example be simplified to:

  SetUpdatedToNow
  - set %content.updated% as %now% in the file "file.txt"
The whole reading and writing feels like a leftover from conventional programming. Reading the file in, modifying it, and then writing it back assumes it fits in memory, and leaves out ideas around locking, filesystem issues, writing to a temp file and swapping, etc. Giving the actual intent lets the LLM decide the best way, which might well still be reading in, modifying, and then writing.
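
For example, here is the write-to-a-temp-and-swap variant the intent could resolve to (a Python sketch, assuming the file holds JSON; os.replace is an atomic rename):

  import json, os, tempfile
  from datetime import datetime

  with open("file.txt") as f:
      content = json.load(f)
  content["updated"] = datetime.now().isoformat()

  # write to a temp file in the same directory, then atomically swap it in
  fd, tmp = tempfile.mkstemp(dir=".")
  with os.fdopen(fd, "w") as f:
      json.dump(content, f)
  os.replace(tmp, "file.txt")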


When I designed the language, I had current languages in mind. Don't forget you are still programming; it's just more natural, and you need details.

But the main reason is that it's much more difficult to resolve the intent when mixing multiple actions into the same step. In theory it's possible, but the language isn't there yet.



