I agree with the spirit of this piece, but I think Wordle is a weird choice to try to highlight the point. I enjoy Wordle, but I'm certainly not going to die without it. Not to downplay Wardle, but, as xeromal pointed out, it's pretty easy to crank out a clone. Lots of people already have. If NYT paywalls it, I'll either stop playing or find or create a clone.
I don't much like that differences in sizing are not taken into consideration (not all fonts are equally large at a given pixel size).
I have been using Fira Code extensively, including for writing prose. Both in English and in Greek, both spotting my mistakes and reading what I'd written improved within minutes of first trying it out. I may give JetBrains Mono a try as well, though I really am taken with Fira Code.
Unfortunately I have yet to find the perfect general-purpose font. The Ubuntu fonts come really close.
Just played the game with names off and ended up with Fira Code. Since I had the names off, I don't know what came in second, but I think it was JetBrains Mono based on the look of it.
I use Fira too, currently, but IBM just won for me, with JetBrains as the runner up. I'm actually surprised by that, and learned a lot from this game(?).
This site is pretty neat! It's a bit weird it doesn't have Menlo, Monaco, etc. to compare to some widespread fonts. I'm not sure what I'd pick if those were mixed in.
1) Looking at the font names I eventually picked my favorite/workhorse: IBM Plex.
2) With hidden names, I picked Courier Prime, which I disliked at first sight.
3) Again with hidden names, I picked Cousine, which looked alright in the simulator but had strokes that were too heavy and uniform on my higher-contrast color scheme. I prefer fonts with varying stroke widths, which the game doesn't really capture -- its comparisons mostly came down to readable versus unreadable.
My favorite based on the game is Inconsolata. However, after installing it I realized that the game doesn't show the different variants of a font (bold, italic, etc.). This can easily give a wrong impression of a font when you use a mix of variants in an editor.
Tangential thought: if you're using a passphrase you're not going to ever type manually, for example something you're going to generate once and stick in a secret management system, why not build the passphrase using all possible UTF-8 characters as your corpus? Seems like restricting yourself to ASCII characters is just giving an advantage to those attempting to brute force the passphrase.
> why not build the passphrase using all possible UTF-8 characters as your corpus? Seems like restricting yourself to ASCII characters is just giving an advantage to those attempting to brute force the passphrase.
Restricting yourself to ASCII means you don't need to worry about text encoding. Who knows where you'll end up needing to paste it, or when something decides to be helpful and mangles the encoding.
This doesn't make much sense to me. The point of a passphrase is to be readable/writeable by a human. If you don't need that, you just want a binary key (which can be base64 encoded/decoded to be read/written by a human).
Using all UTF-8 characters seems like it combines the downsides of both of these (not really human readable/writeable, but also not using the full key space).
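To make that concrete, here's a rough sketch of the binary-key approach using only Python's standard library (the 32-byte length is just an example):

```python
import base64
import secrets

# Generate a raw binary key, using the full 256-bit key space.
key = secrets.token_bytes(32)

# Base64 gives a human-copyable, ASCII-safe representation for pasting or storing.
encoded = base64.b64encode(key).decode("ascii")
print(encoded)

# Round-trip back to the raw key when it's needed programmatically.
assert base64.b64decode(encoded) == key
```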
Because all possible UTF-8 characters give you passphrases that are hard to write down, hard to transcribe from paper to keyboard or vice versa, hard to repeat aloud, and hard to recognize visually.
For a given target entropy, the tradeoff is longer passphrases (with a smaller character set) versus "less humane" passphrases (with a larger one).
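To put rough numbers on that, a quick back-of-the-envelope sketch (the alphabet sizes are illustrative approximations, not exact counts):

```python
import math

TARGET_BITS = 128  # example target; pick whatever your threat model calls for

# Approximate alphabet sizes, purely for illustration.
alphabets = {
    "lowercase a-z": 26,
    "printable ASCII": 95,
    "~140k Unicode code points": 140_000,
}

# Each character contributes log2(alphabet size) bits when chosen uniformly at random.
for name, size in alphabets.items():
    chars_needed = math.ceil(TARGET_BITS / math.log2(size))
    print(f"{name:>26}: {chars_needed} characters")
```

With those rough numbers you need about 28 lowercase letters, 20 printable-ASCII characters, or 8 characters drawn from the huge Unicode set to hit the same 128 bits, which is the whole tradeoff in a nutshell.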
For people like me who don't have time to watch the talk, what's the answer to the question posed on the blog post? "Why aren’t we using the Language Server Protocol (LSP) or Language Server Index Format (LSIF)?"
For LSP, the short version is that running separate sidecar services in production for every language that we want to support is a complete non-starter. That would completely eat up my team's time budget handling operational duties.
LSIF is a great technology that lets you run LSP servers in a “batch” mode. But we really need our analysis to be incremental, where we can reuse results for unchanged files when new commits come in. Language servers tend to do monolithic analyses, where every file needs to be reanalyzed whenever any new commit comes in. If you want to analyze your dependencies, as well, that exacerbates the problem. LSIF (the data format) has recently grown the ability to produce incremental data, but that requires language servers to work in an incremental mode as well. Very few (if any?) do, and because language servers tend to piggy-back on existing compiler technology (which is also not typically incremental), it will be a heavy lift to get incrementality into the LSP/LSIF world.
Whereas stack graphs have incrementality out of the box. (This was the primary thing that we added to “scope graphs”, the academic framework that stack graphs are built on.) It's the core algorithm (which is implemented once for all languages) where the incrementality happens. The only language-specific parts are figuring out which graph structures you need to create to mimic the name binding rules of your language.
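To illustrate just the file-level incrementality (a toy sketch, not how stack graphs are actually implemented): cache each file's analysis by its content hash, so a new commit only pays for the files that changed.

```python
import hashlib

# content hash -> cached per-file analysis result
cache: dict[str, dict] = {}

def analyze_file(path: str, content: bytes) -> dict:
    # Placeholder for the expensive, language-specific per-file analysis.
    return {"path": path, "symbols": []}

def analyze_commit(files: dict[str, bytes]) -> dict[str, dict]:
    results = {}
    for path, content in files.items():
        key = hashlib.sha1(content).hexdigest()
        if key not in cache:          # unchanged files are served from the cache
            cache[key] = analyze_file(path, content)
        results[path] = cache[key]
    return results
```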