
A hidden input box is something I have heard about before from some hacker-ish old colleagues - it seems to be a powerful and reliable approach to store state & enable communication between components!


Oops, I worded my comment poorly -- it's not a hidden input, but rather a "CSS-visibility-hidden textbox input". Hidden inputs are useful but something completely different.


Gotcha, thank you for the clarification!


Fun game! Starred it on GitHub for making the development process transparent, including sharing your prompts! :)


tbh, that seems pretty close to what I would call snapshot testing already. People usually use it more broadly than just API testing (for example, I currently use it to test snapshots of a TUI application I am developing) - i.e. you can use it whenever your test is of the form "I have something that I can print in some way, and it should look the same until I explicitly want it to look different in the future". There are a few more bells and whistles - for example, it is nice that one does not have to write the initial snapshots oneself. You write the test, run it, it creates the resulting file, then you review and commit that - handy little workflow!
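
To make the workflow concrete, here is a minimal sketch of what it looks like for me, assuming Rust and the insta crate; render_main_view() is just a made-up placeholder for whatever you actually render:

    // dev-dependency: insta = "1"

    // Hypothetical function under test: renders the TUI's main view to a String.
    fn render_main_view() -> String {
        "hello, snapshot".to_string()
    }

    #[test]
    fn main_view_looks_as_expected() {
        // On the first run this creates a .snap file next to the test.
        // You review it (e.g. with `cargo insta review`), commit it, and from
        // then on the test fails whenever the rendered output changes.
        insta::assert_snapshot!(render_main_view());
    }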


> Snapshots can also stub out parts of the response that are not deterministic.

TIL! The way I knew to do it was to have a mock implementation that behaved like the real thing, except for dates/times/uuids/..., where there was just a placeholder. Snapshot tests being able to "mask" those non-deterministic parts sounds cool!
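
From a quick look, I assume the masking works roughly like this in insta (the Rust snapshot library I use) - the struct and field names here are invented just for the example:

    // dev-dependencies:
    //   insta = { version = "1", features = ["json", "redactions"] }
    //   serde = { version = "1", features = ["derive"] }
    use serde::Serialize;

    // Invented response type with non-deterministic fields.
    #[derive(Serialize)]
    struct Response {
        id: String,         // e.g. a UUID, different on every run
        created_at: String, // e.g. a timestamp
        status: String,
    }

    #[test]
    fn response_snapshot() {
        let resp = Response {
            id: "3f6c1c2e-...".into(),
            created_at: "2024-01-01T00:00:00Z".into(),
            status: "ok".into(),
        };
        // The redactions replace the non-deterministic fields with fixed
        // placeholders before the value is compared against the snapshot.
        insta::assert_json_snapshot!(resp, {
            ".id" => "[uuid]",
            ".created_at" => "[timestamp]"
        });
    }

So instead of mocking the clock or the uuid generator, the stored snapshot simply contains "[uuid]" and "[timestamp]" in those places.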


Slightly adjacent, I like these two blog articles that show ways to think about non-linear dialogues:

https://philipphagenlocher.de/post/video-game-dialogues-and-...

(introduces an interesting and useful way to think about dialogues, in my opinion)

https://philipphagenlocher.de/post/data-aware-dialogues-for-...

(further expands on the ideas of the first blog post, showing how to automatically ensure some properties that might be desirable)


I am not an ML person, and I know there is a mathematical explanation for what I am about to write, but here comes my informal reasoning:

I fear this is not the case:

1) Either the LLM (or other forms of deep neural networks) can reproduce exactly what it saw, but nothing new - then it would only produce legal moves, if it was trained on only legal ones.

2) Or the LLM can produce moves that it did not see exactly, by outputting the "most probable"-looking move in a situation it has never seen before. In effect, this combines different situations and their outputs into a new output. As a result of this "mixing", it might output an illegal move (i.e. the output move is illegal in the new situation), despite having been trained on only legal moves.

In fact, I am not even sure whether the deep neural networks we use in practice can replicate their training data exactly at all - it seems to me that there is some kind of compression going on when knowledge is embedded into the network, which will come with a loss.

I am deeply convinced that LLMs will never be an exact technology (but LLMs + other technology like proof assistants or compilers might be).


Oh, I don't think there is any expectation for LLMs to reproduce any training data exactly. By design, an LLM is a lossy compression algorithm; the data can't be expected to come back as an exact reproduction.

The question I have is whether the LLM might be reproducing mostly legal moves only because it was trained on a set of data that itself only included legal moves. The training data would have only helped it predict legal moves, and any illegal moves it predicts may very well be because LLMs are designed with random variables as part of the prediction loop.


That link was new to me, thanks! However: I wrote a chess program myself (nothing big, hobby level) and I would not call it hard to implement - just harder than what someone might assume initially. In the end, it is one of the simpler simulations/algorithms I have done. It is just the state of the board, the state of the game (how many turns, castling rights, past positions for the repetition rule, ...) and picking one rule set if one really wants to be exact.

(thinking about which rule set is correct would not be meaningful in my opinion - chess is a social construct, with only parts of it being well defined. I would not bother about the rest, at least not when implementing it)
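
To illustrate what I mean by "state of the board" and "state of the game", here is a rough Rust sketch (names invented, nothing like my actual hobby code):

    #[derive(Clone, Copy, PartialEq, Eq)]
    enum Color { White, Black }

    #[derive(Clone, Copy, PartialEq, Eq)]
    enum Piece { Pawn, Knight, Bishop, Rook, Queen, King }

    #[derive(Clone, PartialEq, Eq)]
    struct Board {
        // One entry per square, empty or occupied by a colored piece.
        squares: [Option<(Color, Piece)>; 64],
    }

    #[derive(Clone)]
    struct GameState {
        board: Board,
        side_to_move: Color,
        castling_rights: [bool; 4],    // white/black x king-/queen-side
        en_passant_square: Option<u8>, // target square index, if any
        halfmove_clock: u32,           // for the fifty-move rule
        fullmove_number: u32,
        // Hashes of past positions, for the threefold-repetition rule.
        previous_positions: Vec<u64>,
    }

Legal-move generation and the actual rules sit on top of this, but the data itself is not much more than that.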

By the way: I read "Computationally it's trivial" as more along the lines of "it has been done before, it is efficient to compute, one just has to do it" versus "this is new territory, one needs to come up with how to wire up the LLM output with an SMT solver, and we do not even know if/how it will work."


> Not replacing a human.

Obviously not, but that is tangential to this discussion, I think. A hammer might be a useful tool in certain situations, and surely it does not replace a human (but it might make a human in those situations more productive, compared to a human without a hammer).

> generating new ideas

Is brainstorming not an instance of generating new ideas? I would strongly argue so. And whether the LLM does "understand" (or whatever ill-defined, ill-measurable concept one wants to use here) anything about the ideas it produces, and how they might be novel - that is not important either.

If we assume that Tao is adequately assessing the situation and truthfully reporting his findings, then LLMs can, in their current state, at least occasionally be useful in generating new ideas, at least in mathematics.


As someone with some experience in Haskell (although not an expert by any means): Haskell and some of its concepts are foreign to many people, but I think that it is actually easier to program in Haskell than in many other languages I know. At least for my ADHD brain ;)

This impression can be skewed somewhat by the fact that Haskell and its community have two faces: there is the friendly, "stuff-just-works" and "oh-nice-look-at-these-easy-to-understand-and-useful-abstractions" pragmatic Haskell that uses the vanilla language without many extensions, written by people solving some real-world problem by programming.

Then there is the hardcore academic crowd - in my experience very friendly, but heavily into mathematics, types and programming language theory. They make use of the fact that Haskell is also a research language with many extensions that are someone's PhD thesis - which might also be the only documentation for that particular extension if you are unlucky. However, you can always ask - the community leans toward oversharing information rather than the opposite.

Rust fills that gaping hole in my heart that Haskell opened a bit - not completely, but when it comes to $dayjob type of work, it feels somewhat similar (fight the compiler, but "when it compiles, it runs").


This is something I am currently thinking about. I am a software engineer who also happens to be an amateur musician. For a year I used to put in at least 2h of practice on my instrument, and not much less than that for many years after. A lot of that time I allocated to fundamentals and standard songs I did not want to lose - and even today, more than a decade after my peak and most active time, I have a feeling for where I am skill-wise when it comes to those things I practised.

But for software engineering? This seems a lot harder to me. What currently makes the most sense to me is really high-level stuff like "build up a local dev environment from scratch", "implement a minimal change that is visible in the frontend but results in a change to the data storage in the backend" and "write an integration test". Those seem to touch on many areas of skill and should be "trainable" in some sense, making them good targets for deliberate practice.

Thoughts or experiences anyone? :)


While learning to write compilers, I would memorize small but critical programs, like one that converts a char range given in string format, like [a-zA-Z_], into a table, or reports an error if the range is invalid. At my peak, I could implement the function that did this in about 60 lines of Lua in about 3 minutes.

I haven't done exercises like that recently, but I found it helpful at the time.
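
For anyone curious what the exercise looks like, here is a much-simplified sketch of its shape - in Rust rather than the Lua I actually used, so treat it as an illustration only:

    // Turn a character-class string like "[a-zA-Z_]" into a lookup table of
    // allowed bytes, or report an error if the input is malformed or a range
    // is reversed.
    fn char_class_to_table(spec: &str) -> Result<[bool; 256], String> {
        let bytes = spec.as_bytes();
        if bytes.len() < 2 || bytes[0] != b'[' || bytes[bytes.len() - 1] != b']' {
            return Err(format!("not a bracketed class: {spec}"));
        }
        let inner = &bytes[1..bytes.len() - 1];
        let mut table = [false; 256];
        let mut i = 0;
        while i < inner.len() {
            // A range like `a-z`; a `-` at the start or end is taken literally.
            if i + 2 < inner.len() && inner[i + 1] == b'-' {
                let (lo, hi) = (inner[i], inner[i + 2]);
                if lo > hi {
                    return Err(format!("invalid range {}-{}", lo as char, hi as char));
                }
                for b in lo..=hi {
                    table[b as usize] = true;
                }
                i += 3;
            } else {
                table[inner[i] as usize] = true;
                i += 1;
            }
        }
        Ok(table)
    }

Called with "[a-zA-Z_]" it marks exactly the ASCII letters and the underscore; called with "[z-a]" it reports the reversed range.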


20-30 years ago that was a role on a competitive programming team - the fast typist with knowledge of data structures, whose job was exactly that: writing them out very fast and bug-free during the competition.


I came to believe recently that memorization is a way too underrated skill. Most programmers, myself included, think: why memorize something if you can just look it up? But... I'm not so sure anymore.

Perhaps we rely too much on our ego, assuming we can come up with everything on the fly, when we should instead look into how other crafts did it in the past?


I've spent maybe 40-60 hours a week on average programming since I was 15 or so (42 now, but I don't program as much nowadays). I'm very logical in general, so I was naturally drawn to programming, but that's a ton of work to put into something. Software engineering comes easily to me now, from the very high level to the very low level.

I don't know tons of stuff about tons of stuff, but I do have a fairly good sense for how computers work at all levels of the stack, which helps.

