Hacker News

Are you sure it would be hard?

Maybe it only requires asking the LLM to be creative when designing the algorithm. The parent poster spent some time thinking about it, obviously--he didn't generate it accurately "on the fly," either. But he's able to direct his own attention.

I don't see why the LLM couldn't come up with this logic, if prompted to think about a clever algorithm that was highly specific to this problem.



I suspect that it would be unlikely to come up with it because it requires execution of a fairly lengthy algorithm (or sophisticated mathematical reasoning) to find the smallest/largest valid numbers in the range. You can verify this for yourself with the following ChatGPT prompt: "What is the smallest number in the range (1, 100000) whose digits sum to 30? Do not execute separate code."
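For reference, the "clever algorithm" part of the challenge does have a compact greedy solution (my own sketch, not code from the thread): to minimize a number with a given digit sum, use as few digits as possible and push the largest digits into the least significant positions. The function name below is my own invention.

```python
def smallest_with_digit_sum(s: int) -> int:
    """Smallest positive integer whose digits sum to s (greedy construction)."""
    assert s >= 1
    d = -(-s // 9)                 # ceil(s / 9): minimum number of digits needed
    lead = s - 9 * (d - 1)         # smallest workable leading digit
    return lead * 10 ** (d - 1) + (10 ** (d - 1) - 1)  # remaining digits all 9

# Brute-force cross-check over the range from the prompt
brute = next(n for n in range(1, 100000) if sum(map(int, str(n))) == 30)
assert brute == smallest_with_digit_sum(30) == 3999
```

The point of the prompt is precisely that producing 3999 requires either this kind of constructive reasoning or a lengthy search, which is hard to do token-by-token without tools.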


o1 did find the optimization in a sibling comment (sibling to my GP).

So it's probably time to update your expectations.


Why limit its ability to write separate code?


Because otherwise we are talking about LLMs augmented with external tools (e.g., Python interpreters). My original comment was pointing to the limitations of LLMs in writing code by themselves.


You wouldn't ask a programmer to solve a problem and then forbid them from writing down the source or debugging the program as they write it, would you?

Are you asking it not to write down a general algorithm? These models are doing a pretty good job on mathematical proofs.

I still don't understand why you wouldn't let it use its full reasoning abilities by letting it write code, or even delegate to another agent. We should be testing toward the result, not the methods.


I'm simply pointing out the limitations of LLMs as code writers. Hybrid systems like ChatGPT-o1 that augment LLMs with tools like Python interpreters certainly have the potential to improve their performance. I am in full agreement!

It is worth noting that even ChatGPT-o1 doesn't seem capable of finding this code optimization, despite having access to a Python interpreter.


> y = sum([x for x in range(1,n)]) <= 30

> Write an efficient program that given a number, find the integer n that satisfies the above constraints

Goal: Find n where sum of integers from 1 to n-1 is ≤ 30

This is a triangular number problem: (n-1)(n)/2 ≤ 30

... code elided ...

> Ok, now make a find_n_for_sum(s=30)

def find_n_for_sum(s: int) -> int:
    # Largest n with (n-1)*n/2 <= s, via the quadratic formula
    return int((1 + (1 + 8 * s) ** 0.5) / 2)

# Tests
assert sum(range(1, find_n_for_sum(30))) <= 30
assert sum(range(1, find_n_for_sum(30) + 1)) > 30
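As a self-contained sanity check (my own sketch, not part of the quoted transcript), the closed form can be verified against a naive linear scan; the helper name `find_n_linear` is my invention.

```python
def find_n_for_sum(s: int) -> int:
    # Largest n with sum(range(1, n)) <= s, via the quadratic formula
    return int((1 + (1 + 8 * s) ** 0.5) / 2)

def find_n_linear(s: int) -> int:
    # Accumulate 1 + 2 + ... until adding the next term would exceed s
    n, total = 1, 0
    while total + n <= s:
        total += n
        n += 1
    return n  # sum(range(1, n)) <= s < sum(range(1, n + 1))

# The two agree across a range of targets
assert all(find_n_for_sum(s) == find_n_linear(s) for s in range(1, 10_000))
```

For s = 30 both return 8: 1 + 2 + ... + 7 = 28 ≤ 30, while adding 8 gives 36.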


But a programmer is essentially a reasoner augmented with the ability to run code. It seems odd to add that restriction when testing whether an LLM is "as good as" a programmer, because if the LLM knows what it would need to do with the external code, that's just as good.



