Hacker News | 1wheel's comments

It can be! Here's a circuit showing how the model processes "PCB tracing stands for" to output "printed":

https://www.neuronpedia.org/gemma-2-2b/graph?slug=pcb-tracin...


Browsers won't use HTTP/2 unless HTTPS is on, and without HTTP/2, Chrome only allows six concurrent requests to the same domain!
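For illustration, here's a minimal Node/TypeScript sketch (not anything from the comment above) of why the TLS part matters: Node's http2 module can serve plaintext HTTP/2 ("h2c"), but browsers only negotiate HTTP/2 over TLS, so a browser-visible server has to be created with certificates. The key/cert paths are placeholders.

    // Sketch only: browsers negotiate HTTP/2 via ALPN during the TLS handshake,
    // so a plaintext http2.createServer() ("h2c") is never used by them.
    import * as fs from "node:fs";
    import * as http2 from "node:http2";

    const server = http2.createSecureServer({
      key: fs.readFileSync("privkey.pem"),    // placeholder path
      cert: fs.readFileSync("fullchain.pem"), // placeholder path
    });

    server.on("stream", (stream) => {
      // Requests are multiplexed as streams over a single TLS connection,
      // instead of being capped at ~6 parallel HTTP/1.1 connections per host.
      stream.respond({ ":status": 200, "content-type": "text/plain" });
      stream.end("hello over HTTP/2 + TLS\n");
    });

    server.listen(443);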



This is super cool! Do you know how they achieve this effect?


There's also https://lmql.ai/


LMQL (and guidance, https://github.com/guidance-ai/guidance) are much less efficient: they loop over the entire vocabulary at each step, while we only do it once, at initialization.


Does looping over the vocabulary add much overhead to the tok/s? I imagine they're just checking if the input is in a set, and usually there are only ~30k tokens. That's somewhat intensive, but inference on the neural net feels like it'd take longer.


They’re checking regex partial matches for each possible completion, which is indeed intensive. You can look at Figure 2 in our paper (link in original post) for a simple comparison with MS guidance, which shows the difference.
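To make the difference concrete, here's a toy TypeScript sketch of the two strategies; the vocabulary, pattern, and hand-rolled DFA are invented for illustration and this is not either library's actual code:

    // Toy vocabulary and a hand-rolled DFA for the pattern [0-9]+ .
    type TokenId = number;
    const vocab: string[] = ["1", "2", "12", "99", "3a", "ab", " the", "7b"];

    // Returns the DFA state after consuming a token, or null if the token
    // would break the pattern. State 0 = start, state 1 = accepting.
    function step(state: number, token: string): number | null {
      for (const ch of token) {
        if (ch >= "0" && ch <= "9") state = 1;
        else return null;
      }
      return state;
    }

    // Strategy A (per-step scan): on every decoding step, test every token in
    // the vocabulary against the constraint. O(|vocab|) work per token generated.
    function allowedByScanning(state: number): TokenId[] {
      const allowed: TokenId[] = [];
      for (let id = 0; id < vocab.length; id++) {
        if (step(state, vocab[id]) !== null) allowed.push(id);
      }
      return allowed;
    }

    // Strategy B (index built once at initialization): precompute, for every
    // DFA state, which tokens are allowed and where they lead. Each decoding
    // step is then just a lookup into this map.
    const index = new Map<number, Map<TokenId, number>>();
    for (const state of [0, 1]) {
      const transitions = new Map<TokenId, number>();
      for (let id = 0; id < vocab.length; id++) {
        const next = step(state, vocab[id]);
        if (next !== null) transitions.set(id, next);
      }
      index.set(state, transitions);
    }

    // At generation time: constant-time lookup of the allowed-token mask.
    const maskAtStart = [...index.get(0)!.keys()]; // ids of the digit-only tokens

Both strategies produce the same set of allowed tokens; the difference is only whether the O(|vocab|) work happens once up front or on every decoding step.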


Basically just a bunch of d3 — could be cleaned up significantly, but that's hard to do while iterating and polishing the charts.

I also have a couple of little libraries for things like annotations, interleaving svg/canvas, and making d3 a bit less verbose; there's a short d3 sketch after the links below.

- https://github.com/PAIR-code/ai-explorables/tree/master/sour...

- https://1wheel.github.io/swoopy-drag/

- https://github.com/gka/d3-jetpack

- https://roadtolarissa.com/hot-reload/
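For anyone curious what "a bunch of d3" looks like in practice, here's a minimal data-join sketch (invented data and selectors, not code from the explorables):

    import * as d3 from "d3";

    // Made-up data: a tiny loss curve.
    const data = [
      { step: 0, loss: 2.3 },
      { step: 1, loss: 1.7 },
      { step: 2, loss: 1.2 },
    ];

    const width = 400;
    const height = 200;

    const x = d3.scaleLinear().domain([0, 2]).range([0, width]);
    const y = d3.scaleLinear().domain([0, 2.5]).range([height, 0]);

    const svg = d3
      .select("body")
      .append("svg")
      .attr("width", width)
      .attr("height", height);

    // The core d3 pattern: bind data to a selection and let the join create the elements.
    svg
      .selectAll("circle")
      .data(data)
      .join("circle")
      .attr("cx", (d) => x(d.step))
      .attr("cy", (d) => y(d.loss))
      .attr("r", 4);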


I was going to ask the same question. Those are some great visualizations.



Here's an example of that with a smaller BERT model: https://pair.withgoogle.com/explorables/fill-in-the-blank/



Mine is 60 lines of js; the markdown library does most of the work.

https://roadtolarissa.com/literate-blogging/
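For anyone who wants the flavor without reading the post, here's a rough TypeScript sketch of the same idea; this is not the actual 60 lines, and the paths, template, and the choice of the marked package are assumptions:

    import * as fs from "node:fs";
    import * as path from "node:path";
    import { marked } from "marked"; // the markdown library does most of the work

    const postsDir = "posts";
    const outDir = "public";
    fs.mkdirSync(outDir, { recursive: true });

    for (const file of fs.readdirSync(postsDir)) {
      if (!file.endsWith(".md")) continue;

      const markdown = fs.readFileSync(path.join(postsDir, file), "utf8");
      const body = marked.parse(markdown);

      // Wrap the converted post in a minimal HTML shell.
      const html = `<!DOCTYPE html>
    <html><head><link rel="stylesheet" href="style.css"></head>
    <body>${body}</body></html>`;

      fs.writeFileSync(path.join(outDir, file.replace(/\.md$/, ".html")), html);
    }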


The most minimal I can muster is 3 lines of Bash:

    for filename in ./posts/*.md; do
        pandoc -s -c style.css "$filename" -o "outputs/$(basename "$filename" md)html"
    done


This can get pretty pedantic. Where do you draw the line between the blog generator and the tools it requires when counting lines of code? One could easily argue that in this case you should be counting the lines of code in pandoc, not this bash script.

That said, I do think this is the way to go: using a popular, generic tool (notably, one you do not have to maintain) to accomplish a specific task. And, more importantly, composing utilities together in a succinct and efficient way.

Also, if you used semicolons, or xargs with a pipe, you could make this one line :) Newlines can be pretty arbitrary; I wonder if there's a better measure of simplicity, like branches or statements/expressions.


In that case, here it is in one line, producing byte-for-byte identical output to the snippet above:

    pangeadoc -c style.css ./ -O ./_site
(pangeadoc, of course, is a fork of pandoc that, when invoked as above, behaves exactly the same as those "3 lines of Bash".)


Damn, that's not a bad effort, I like it. But it does sort of feel like cheating ;)

Mine is about 140 lines of bash, and I don't /think/ I'm using anything that isn't part of coreutils.


The real story here is not the 60 lines, but the literate programming style used for it.

Aside from that, this approach is very similar to the generator by Marijn Haverbeke (the CodeMirror author), although your 60 lines do lean more heavily on third-party packages.

https://marijnhaverbeke.nl/blog/heckle.html


There's a dinosaur version of Anscombe's Quartet:

https://www.autodeskresearch.com/publications/samestats


Oh, very nice!

