Hacker News | sunnynagam's comments

To travel and spend time with their family and just not work a job knowing that their stake/share is likely worth millions and maybe one day billions.


I was thinking along the same lines, and where I end up is realizing that searching through the possible space of token sequences isn't the way to do it, since the text output space is too far removed from the ground truth. Text is already a biased projection of reality by humans; now we're searching through an LLM's crude estimation of a biased projection of that?

I think a deeper architectural change involving searching internally in the latent space of a model is required. That way we could search in "thought space" instead of "text space" and maybe then only convert the output to text after searching.
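Just to make the idea concrete, here's a toy sketch of what searching in "thought space" could look like. Everything here is a made-up stand-in: step() would be a learned latent transition, score() a learned value model, and decode() the latent-to-text head that only runs once, after the search.

```javascript
// One latent "thought" is just a vector of numbers in this toy.
const step = (latent) =>
    // Hypothetical transition: branch into a few candidate next thoughts.
    [0.9, 1.0, 1.1].map(s => latent.map(x => x * s + 0.1));

const score = (latent) =>
    // Hypothetical value model: prefer latents near a target norm.
    -Math.abs(latent.reduce((a, x) => a + x * x, 0) - 1);

const decode = (latent) =>
    // Only the final winner is projected back to "text space".
    `thought(${latent.map(x => x.toFixed(2)).join(", ")})`;

function latentBeamSearch(start, depth, beamWidth) {
    let beam = [start];
    for (let d = 0; d < depth; d++) {
        // Expand every latent in the beam and keep the top-k by score.
        // No text is ever generated during the search itself.
        const candidates = beam.flatMap(step);
        candidates.sort((a, b) => score(b) - score(a));
        beam = candidates.slice(0, beamWidth);
    }
    return decode(beam[0]);
}

console.log(latentBeamSearch([0.5, 0.5], 4, 3));
```

The point is just the shape of it: the expensive branching happens over vectors, and decoding to text is a single final step rather than something done at every node of the search tree.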


Not op but what's JV?


From the context it seems to mean Joint Venture?


I do similar stuff. I'm just willing to learn a lot more at the cost of a small percent of my knowledge being incorrect from hallucinations, just a personal opinion. Sure, human-produced sources of info are gonna be more accurate (though still not 100%), and I'll default to those for important stuff.

But the difference is I actually want to and do use this interface more.


Also even if I learn completely factual information, I'm still probably going to misremember some facts myself.


llama3 on groq hits the sweet spot of being so fast that I now avoid going back to waiting on gpt4 unless I really need it, and being smart enough that for 95% of the cases I won't need to.


Optimized with binary search (can't figure out why ycomb formats my code all ugly)

var speed = 500;

// Reset all three sliders to the midpoint and clear scores.
[rin, bin, gin].forEach(col => {
    col.valueAsNumber = 7;
    col.dispatchEvent(new Event('change'));
    col.score = 0;
});

async function tryit(col, value) {
    col.valueAsNumber = value;
    col.dispatchEvent(new Event('change'));
    submit.click();
    await new Promise(resolve => setTimeout(resolve, speed));
    var res_text = result.innerText.split(/[ ()%]/)[4];
    if (res_text === "Splendid!") {
        throw new Error("Finished - Found correct combination");
    }
    col.score = parseInt(res_text);
    return col.score;
}

async function binarySearch(col) {
    let start = 0;
    let end = 15;
    let mid = 7;
    let startAccuracy = await tryit(col, start);
    let endAccuracy = await tryit(col, end);
    let midAccuracy = 0;

    while (true) {
        mid = Math.floor((start + end) / 2);
        midAccuracy = await tryit(col, mid);

        if ((end - start) <= 2) {
            // Math.max takes separate arguments, not an array.
            const max = Math.max(startAccuracy, midAccuracy, endAccuracy);
            if (startAccuracy == max) await tryit(col, start);
            else if (midAccuracy == max) await tryit(col, mid);
            else await tryit(col, end);
            return;
        }

        if (endAccuracy > startAccuracy) {
            start = mid;
            startAccuracy = midAccuracy;
        } else {
            end = mid;
            endAccuracy = midAccuracy;
        }
    }
}

async function findOptimalCombination() {
    for (const col of [rin, gin, bin]) {
        await binarySearch(col);
    }
    // Rounding pass: nudge each channel one step either way.
    for (const col of [rin, gin, bin]) {
        const mid = col.valueAsNumber;
        const score = await tryit(col, mid);
        const left = await tryit(col, mid - 1);
        if (score >= left) {
            const right = await tryit(col, mid + 1);
            if (score >= right) await tryit(col, mid);
        }
    }
    console.log("Optimization complete");
}

await findOptimalCombination();


This is great -- it has very consistent performance of ~20 steps. I notice that my naive attempt seems to be faster in some cases.

