1. Telemetry to dirac.run/v1/event — Sends machine ID, token usage, model info, events, errors (first 500 chars), and platform info. Hardcoded API key. Defaults to opt-in (setting is "unset", not "disabled").
2. Feature flags from dirac.run/v1/event/decide — Polls every 60 minutes with your machine ID. Always enabled, independent of telemetry opt-out. No way to disable without code changes.
3. Web tools route through api.dirac.run — Web search and web fetch tools proxy through Dirac's own API server, sending your request content plus system headers (platform, version, machine ID).
4. Model list fetches — Calls OpenRouter, HuggingFace, Groq, etc. for model listings even when using the Anthropic provider.
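The "defaults to opt-in" point in (1) boils down to how the unset state is interpreted. A toy illustration (names are hypothetical, not the project's actual code):

```python
def telemetry_enabled(setting: str) -> bool:
    # Treating "unset" as consent: only an explicit "disabled" opts out,
    # so a fresh install sends data until the user finds the setting.
    return setting != "disabled"

print(telemetry_enabled("unset"))     # True  -> data is sent by default
print(telemetry_enabled("disabled"))  # False
```

A privacy-respecting default would invert this: send only when the setting is explicitly "enabled".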
This needs to be deprecated entirely. The web fetch tool is no longer used and no longer works; there is nothing even listening at api.dirac.run. This was the result of me stretching my capacity too thin and bulk-renaming cline.bot to dirac.run.
UPDATE (+1h): both Web search and web fetch tools are now nuked.
I've been pretty obsessed with FSRS in general (tl;dr: https://github.com/open-spaced-repetition/awesome-fsrs/wiki/...). It's a fantastic new-ish scheduler for spaced repetition - basically a machine learning model which adapts to you and schedules flash cards (or anything, really; it's just an algorithm) according to how well you personally are performing, surfacing data like retention, stability, recall, etc. It's a massive jump over previous "learning algorithms" like SM-2.
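The core idea can be sketched from the FSRS forgetting curve (constants here are the FSRS-4.5 form; this is a simplification - the real model also fits parameters to your review history and updates stability and difficulty after every review):

```python
# Sketch of the FSRS-4.5 forgetting curve and interval calculation.
# "Stability" S is defined as the number of days after which recall
# probability (retrievability) has decayed to exactly 90%.

DECAY = -0.5
FACTOR = 19 / 81  # chosen so that retrievability(S, S) == 0.9 exactly

def retrievability(t: float, stability: float) -> float:
    """Probability of recalling a card t days after the last review."""
    return (1 + FACTOR * t / stability) ** DECAY

def next_interval(stability: float, desired_retention: float = 0.9) -> float:
    """Days until retrievability decays to the desired retention."""
    return (stability / FACTOR) * (desired_retention ** (1 / DECAY) - 1)

print(round(retrievability(10, 10), 2))   # 0.9  (by definition of stability)
print(round(next_interval(10, 0.9), 1))   # 10.0
```

Raising the desired retention shortens every interval (more reviews per day); lowering it stretches them out - that's the knob Anki exposes.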
For the past 60 days I've been using Anki (a flash card program) and its FSRS setting to learn my French deck (the 5000 most common French words), and I'm absolutely zooming. I can already follow a fair chunk of conversational French.
I've also been using the same system to learn Chess more deeply (endgames, tactics, openings) through Chessable and a few other websites that offer FSRS. It's levelled up my chess game a lot.
Basically - the thing that hooked me was the data. Being able to see how many cards I've reviewed, how many cards are at 90/80% retention, the stability of every piece of that knowledge, the decay rate, etc... It's really cool.
FSRS is really cool. I'm trying to use it and a modified flashcard system to learn more abstract computer science and higher math. I hadn't considered it as a way of learning Chess - that's really interesting. I'm thinking about expanding my system to cover ear-training, birdsong recognition, a few other things like that.
My listening comprehension for Piano has always been lacking. A deck of piano sounds that map to actual notes (or even chords) might do wonders for it...
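One way to bootstrap such a deck is to generate the card fronts programmatically from the equal-temperament pitch formula (a sketch only - the tone synthesis and Anki import are left out):

```python
# Map MIDI note numbers to names and frequencies, e.g. for generating
# an ear-training deck (card front: audio of the tone; back: note name).

NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def note_to_freq(midi_note: int, a4: float = 440.0) -> float:
    """Equal-temperament frequency; A4 is MIDI note 69 at 440 Hz."""
    return a4 * 2 ** ((midi_note - 69) / 12)

def note_name(midi_note: int) -> str:
    octave = midi_note // 12 - 1  # MIDI convention: note 60 is C4 (middle C)
    return f"{NAMES[midi_note % 12]}{octave}"

print(note_name(60), round(note_to_freq(60), 2))  # C4 261.63
print(note_name(69), round(note_to_freq(69), 2))  # A4 440.0
```

From there it's one loop over a note range to synthesize a sine tone per card and emit a CSV that Anki can import.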
My current decks are as follows - I spend about an hour in total reviewing/learning them all, daily:
Are you finding the French decks helpful? I'm also trying to learn French (not using spaced repetition _per se_ but Pimsleur [which does use spaced repetition, really], InnerFrench, and reading [currently reading _Le Trône de fer_]).
I really liked the article, but food for thought: is a transformer that offloads computation to Python really that different from Python code being read and then executed by an interpreter?
Both examples are of a system we created to abstract most of the hard work.
I think a more important concept here is that the term "AI" carries a lot of built-in assumptions, one of which is that it is (or will be) superintelligent, and so folks like the author here think (correctly) that it's important for the AI to actually be doing the work itself.
There's one difference: if a program is run as a tool call, its internal state and control flow are not visible to the LLM. You can imagine this being useful for "debugging" in a meta-sense - the same way humans use debuggers to figure out where something went awry, it might be useful for the LLM to "simulate" something and have access to the execution trace.
Of course you can also just simulate this by peppering your code with print statements, so maybe it's not that useful in the end after all.
What are you talking about? Air has literally always meant thin and light. Now they're treating it as a premium tier between normal and Pro instead (see the iPhone Air too).
Yeah they should never have tried to copy "Air" from MacBooks, precisely where it meant thinnest/lightest, to the iPad/iPhone line where the products are already thin and light. That has always seemed like a bizarre branding move to me.
If they need a mid-tier brand between entry-level and Pro, just call it Plus. The iPad Plus would make a lot more sense.
Apple News is a paid subscription. Facebook and Google are not. Apple is supposedly the premium brand that provides a curated experience (isn't that their reasoning behind the closed nature of the App Store?).
That could make sense as a criticism if Apple were some tiny struggling company. But they have the resources to do better. And a brand identity that definitely sets it apart from the rest of the internet.
Still a bit of a bummer that with Apple, you pay a premium to escape the ad-based ecosystem^W cesspool, both for the hardware and then here for Apple News itself, and then still not only get served ads, but tasteless scam ads.
I’m an Apple cultist but it is somewhat comical that Apple has their own content blocking format built into their own browser but somehow thinks I’d ever want to pay for a subscription to read ad-encumbered news in a separate webview app