
Since code is generated by LLMs these days, how do specific syntactic constructs like a Symbol, which essentially carries the context and can be manipulated with Python operators, help compared to normal Python code generated by an LLM with all the checks and balances instructed by a human? For example, I can write in this syntax to convert all fruits to vegetables, or I can simply prompt an LLM to construct a program that takes a list of fruits and calls an LLM in the background to return the vegetable equivalents. I am trying to understand the difference.
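To make my question concrete, here is a rough sketch of the two approaches I have in mind. The Symbol class, the choice of the / operator, and call_llm are all hypothetical stand-ins for the kind of construct described, not any particular library's API:

    def call_llm(prompt):
        # stand-in for whatever client is actually used (hosted API, local model, etc.)
        return f"<LLM response to: {prompt}>"

    class Symbol:
        # hypothetical Symbol-style wrapper: the object carries its context
        # and overloads a Python operator, delegating the semantics to an LLM
        def __init__(self, value):
            self.value = value

        def __truediv__(self, instruction):
            # e.g. Symbol(["apple", "banana"]) / "convert each fruit to a vegetable"
            # sends the carried value plus the instruction to the LLM behind the scenes
            return Symbol(call_llm(f"{instruction}: {self.value}"))

    # the plain alternative: prompt an LLM directly (or have it generate this
    # function for you) and keep your own checks and balances around the call
    def fruits_to_vegetables(fruits):
        return call_llm(f"Return the vegetable equivalent of each of: {fruits}")

Either way an LLM call happens in the background; the question is whether the operator syntax buys anything over the plain function.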


Curbing hallucinations, I'd imagine. When you have an LLM work within a formal system, it can be verified far more easily than a general-purpose one.


Yes, that seems to be the case. While generating specific symbolic code may not save any time over generating general Python code, the real value could be that the library has an engine to enforce the contract on LLM responses, or even to make the calls to the LLM through a common piece of code, which makes it less error prone and brings consistency to the interactions with the LLM.
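By "enforce the contract" I mean something like the sketch below: one common code path that makes the call, validates the shape of the response, and retries instead of letting malformed output through. The names here (call_llm, ask_with_contract) are hypothetical, not the library's actual API:

    import json

    def call_llm(prompt):
        # stand-in for the real client call
        return '["carrot", "potato"]'

    def ask_with_contract(prompt, validate, retries=3):
        for _ in range(retries):
            raw = call_llm(prompt)
            try:
                parsed = json.loads(raw)
            except json.JSONDecodeError:
                continue  # not even valid JSON; ask again
            if validate(parsed):
                return parsed
        raise ValueError("LLM response never satisfied the contract")

    # contract: a JSON list of strings, one entry per input fruit
    fruits = ["apple", "banana"]
    vegetables = ask_with_contract(
        f"Return a JSON list of vegetable equivalents for: {fruits}",
        validate=lambda v: isinstance(v, list)
        and len(v) == len(fruits)
        and all(isinstance(x, str) for x in v),
    )

Having that one enforcement point in the library, rather than re-instructing it in every prompt, is where the consistency would come from.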


The problem with organizing is not the lack of tools or techniques. It is simply not possible to devise a system which works automatically with little effort. Everything requires varying levels of discipline, which is hard to keep up over time and across different situations.


> It is simply not possible to devise a system which works automatically with little effort.

I use one.

See https://news.ycombinator.com/item?id=43135927 in this thread.

Truly ZERO effort, yet it gives a backup way to find things without search.


I kind of agree with you! But still wanted to take a shot at it:

https://thoughtscape.app/


This is like Minecraft.


Don't find time for something. Find something that takes your time.


Will it give the same answer for a question, or change the answer every time you ask it? Just curious.


Humans seem to do a million different things. And all of them started out as kids who grew into adults, learning to do things along the way. Isn't that good enough to classify a machine as generally intelligent, if it did just that: be ready to learn when taught?


> Isn't that good enough ...

I think approximately nobody is studying AI with the end goal of making a machine that is (say) as smart as a dog. It is a good intermediate goal but the hope has always been to surpass humans. If we want more doglike intelligences we can just breed more dogs.

Being generally intelligent is only the start, and not a very interesting part. Animals can obviously learn things and they are "generally intelligent", but most are not all that bright. We are not all that clear on why, either. Elephants have larger brains than us, so clearly it's not raw size but rather the complexity of the brain that makes us so much smarter. Which parts of brain complexity matter and which are accidents of evolution is almost entirely unknown.


What if the unique content is not the content I want?


I've been a dev building things, and what you say resonates with me very much. A builder is like someone with a hammer in hand, to whom everything around looks like a nail. Especially if you are a builder who is constantly excited by your own ideas, it's hard to ask who would actually want them.


I should have just written this in the post.


There is no perfect answer for a query. There can only be contextually meaningful answers, ranging from highly relevant to far off. It is very hard to prove a search engine is bad in general. It may not work for you, but it may still work for many more people. Google has indexed the entire Web; that is as large a span as one can cover.


