I generally think codex is doing well until I come in with my Opus sweep to clean it up. Claude just codes closer to the way my brain works. codex is great at finding numerical stability issues though and increasingly I like that it waits for an explicit push to start working. But talking to Claude Code the way I learned to talk to codex seems to work also so I think a lot of it is just learning curve (for me).
The flip side is everything is being degraded by random mutation.
It's like holding a large ball in place on a hill that sees frequent tremors. If the ball is still halfway up the hill, it's being held in place; if it's being held in place, it's still halfway up the hill. It might be considered a tautology if you're only working with symbols and ignore all the mechanics.
Remember, all improvements are changes, but most changes aren't improvements. The trick that makes evolution work is this: out of lots of random changes, most of which are harmful, the harmful ones tend to be weeded out and the useful ones tend to spread.
As other comments say, it was a major story months ago. I started moving off around December. It's a long process to switch over all email accounts. I only recently got self-hosted Kubernetes set up for Immich as a Google Photos replacement and some other hosting needs, but for the most part I am off Google. I get probably 1-2 emails a week still going to Gmail, but when I do I just switch those accounts to my new email. It will be a while before the old Gmail is deleted entirely, unfortunately.
I didn't mention it in op but I also moved to GrapheneOS, which tbh feels much better than Android has recently.
Note that there was a major press cycle about this in October / November of last year - a quick Google showed stories in the Guardian, The Intercept, and the Cornell Sun, as well as commentary on Reddit. Not inconceivable that they found out about it last October and had six months to leave and de-Googlify.
> Note that there was a major press cycle about this in October / November of last year
Fair point. However, the parent's comment is also fair, because the article does a poor job of raising this material fact. You have to click through a sub-article.
It's almost like this article should be tagged (2025) because it's basically a replay of the author's account from 2025.[0]
Setting aside the fact that this is a new account and it's their only post, what about the timeline is difficult to understand?
The request came in April 2025, and the user was notified the following month. That's nearly a year for them to hear about it internally and then quit and set up self-hosting prior to today.
If they were motivated enough by this story to delete 20 years worth of history maybe they were motivated enough to create an account and talk about it?
I don't care. The UX means I can't give it any credibility.
For all I know this could be somebody's OpenClaw spouting bullshit. The default credibility of all throwaways is zero and that was even true before 2023.
If you let it influence your opinion in any way you're a fool.
The content of the message is the credibility. It doesn't matter where it came from or who posted it. This exact topic comes up every time Google reveals its true self and lots of us have a resurgence of our latent interest to de-Google (the massive inconvenience being the major barrier).
From busterarm's profile: "Most people are stupid and/or on drugs."
The account is from 2013 but given that profile, I can't give it any credibility. After all, it could be somebody's OpenClaw having been granted control of the account.
> After all, it could be somebody's OpenClaw having been granted control of the account.
Luckily for HN, I actually have a post history. You can use my post history, textual analysis, and statistics to make an informed decision about whether I'm a bot or not, and whether I'm being consistent or spouting random bs.
The account I was responding to doesn't have anything.
> The account is from 2013 but given that profile, I can't give it any credibility.
What's in my profile is a statistical fact. It's there as a reminder, to me, not to expect everyone to see the world the same way that I do. To be comfortable with strong disagreement.
Just a hair shy of half the population is below average intelligence. Roughly 1 in 4 people has a cognitive impairment. That holds across all ages but trends upward with age, reaching 2 in 3 by age 70. 1 in 4 Americans take psychiatric medication. 1 in 4 participates in illegal drug use. We haven't even touched on alcohol abuse.
My profile statement is just objective reality, whether you're comfortable with it being stated openly or not.
This is a violation of the guidelines: "Throwaway accounts are ok for sensitive information, but please don't create accounts routinely. HN is a community—users should have an identity that others can relate to."
> Throwaway accounts are ok for sensitive information, but please don't create accounts routinely. HN is a community—users should have an identity that others can relate to.
This just proves my point about discounting what you say. You're basically admitting to being a pest.
Oh ok, I'm fine with that, but that newbie account is following the rules and being respectful. The same can't even be said about some accounts with 9999 points.
More than that but they back up the things they say with something more than vapor.
You don't have to dox yourself, but people have to be able to at least call you out on consistency. There needs to be some indication that you're not _just_ a sockpuppet.
Otherwise I don't have any justification to engage with your expressions seriously.
One of their challenges is the pricing of Max 20x, which is effectively discounted 50% per unit vs Pro and Max 5x. The way Anthropic's pricing currently works, the $20 (1x) and $100 (5x) tiers pay $20 per unit of usage, while the heavy-user $200 (20x) tier pays $10 - half as much. That sort of non-linearity only makes sense if there is excess capacity. ChatGPT's new Plus/Pro pricing plan did not copy that aspect of Anthropic's pricing structure and kept "sane" linear pricing.
Generally if you give people unused cycles to burn, they'll feel entitled to finding ways to burn them. So someone who is hitting the wall at x5 goes x20 and now has an extra +x10 to burn. Again, that's good if hardware is sitting around idle and you're encouraging innovation and exploration. It can make less sense when resources are scarce.
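The per-unit arithmetic behind the discount claim can be spelled out directly; the dollar figures and multipliers below come from the comment above, while the tier names are paraphrased:

```python
# Implied price per 1x of usage at each subscription tier.
tiers = {
    "Pro (1x)": (20, 1),     # ($/month, usage multiplier)
    "Max 5x": (100, 5),
    "Max 20x": (200, 20),
}

for name, (price, multiplier) in tiers.items():
    print(f"{name}: ${price / multiplier:.0f} per 1x of usage")
```

The first two tiers work out to $20 per 1x, the top tier to $10, which is the 50% per-unit discount being described.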
They compute downtime as total minutes down as a fraction of total time. What this means is that being down, say, 55 minutes during peak use counts the same as being down 55 minutes when nobody is trying to use it. And conversely, being up when nobody is trying to use it counts the same as being up when everyone is trying to use it.
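A minimal sketch of the difference, with entirely made-up hourly numbers: an unweighted availability figure vs one weighted by how many users were actually trying to use the service.

```python
# 24 hourly buckets; downtime is the fraction of each hour spent down.
downtime = [0.0] * 24
downtime[9] = 55 / 60  # 55 minutes down during a peak-use hour

# Hypothetical load curve: quiet overnight, busy during the day.
users = [1 if h < 8 else 100 for h in range(24)]

unweighted = 1 - sum(downtime) / 24
weighted = 1 - sum(d * u for d, u in zip(downtime, users)) / sum(users)

print(f"unweighted availability:    {unweighted:.4f}")
print(f"usage-weighted availability: {weighted:.4f}")
```

With the outage landing at peak, the weighted number comes out noticeably worse than the unweighted one; move the same 55 minutes to 3am and the weighted figure barely moves while the unweighted one stays identical.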
There's https://github.com/badlogic/pi-share-hf by the creator of pi-coding-agent, to redact session data and publish on Huggingface. You can find others of the same idea for Claude Code/Codex on Github, though of varying redaction quality. Or have your LLM fork pi-share-hf to work for your preferred coding agent.
Clem Delangue (HF CEO) tweeted about this[1] and mentioned https://traces.com/ for exporting Claude sessions
Edit: It looks like HF now supports importing your agent's session directory directly[2] (I hope they're redacting PII?)
There is DataClaw https://github.com/peteromallet/dataclaw which uploads your Claude Code chats and more to HuggingFace in a single command. Nowadays there are many similar tools.
My understanding is that Apple keeps Safari fairly broken, doesn't care to implement the Googleverse, and leaves a lot of things E_WONTFIX. I have read speculation that a broken Safari encourages apps in the App Store.
hm yeah, but the History API is not new or exclusive to Google. Also, my understanding was that the discussion is about the annoyance "working" on iOS Safari, but not on other platforms. Anyway, too many variables here.
There's also CLAUDE_CODE_DISABLE_1M_CONTEXT and I'm really not clear on what the difference is and why to pick one over the other. But I guess one disables models that have 1m and the other keeps those models but sets the limit lower?
I don't hit limits either on $100. It's more that claude-code seems to be constantly broken, and they added some vague bullshit about not using claude-code before 2pm, so I just don't expect it to work anymore and tend to use codex-cli as my driver nowadays. I also never hit limits in codex, but... codex is $20/mo, not $100/mo, so it's making me consider reallocating the $100 I spend with Anthropic as play money for z.ai and other tools. I think claude-code has great training wheels (codex does not), but once the training wheels come off and claude-code becomes as unreliable as it has been, you start considering alternatives.
Is that true? What I saw was an official announcement linked on the Claude Code subreddit that said that if you use Claude Code during the high-demand times on a subscription account, then you will now burn through your usage faster than previously. They did have a promotion as a carrot, but the stick is the stick.
I feel the Claude subreddits are mostly full of speculation and dramatics, not much productive discussion - endless exaggerated complaining about downtime. Much the same as a significant chunk of reddit nowadays.
It does look pretty bad, especially not announcing it on a primary channel, but also they claim it's balanced out by efficiency gains and would affect 7% of users overall and 2% of 20x users.
> Your weekly limits remain unchanged. During peak hours (weekdays, 5am–11am PT / 1pm–7pm GMT), you'll move through your 5-hour session limits faster than before. Overall weekly limits stay the same, just how they're distributed across the week is changing.
I'm Eastern time, so peak usage works out as 8am-2pm (the bulk of my work day). It's nice that Europe gets to use it in the morning and Pacific gets to use it in the afternoon, but this is complete bullshit and infuriating. I would have no problem if it were 2x outside peak, but that's NOT what they're saying.
Yeah, I hadn't been aware of that change previously. I'm also ET, but perhaps I just don't use it enough to hit the limits. They could definitely stand to be more transparent: maybe instead of percentages, show a "credit" allocation such that the time-based variation in 5-hour windows is visible.
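A toy sketch of what such a "credit" view might look like. The peak window is taken from the quoted announcement (5am-11am PT); the 2x peak multiplier is a hypothetical number chosen for illustration, not anything Anthropic has published.

```python
# Hypothetical credit model: the same token spend drains a 5-hour session
# window faster when a peak multiplier applies.
PEAK_HOURS_PT = range(5, 11)  # 5am-11am PT, per the announcement quote

def credits_burned(tokens: int, hour_pt: int, peak_multiplier: float = 2.0) -> float:
    """Credits consumed by `tokens` of usage starting at `hour_pt` (PT)."""
    rate = peak_multiplier if hour_pt in PEAK_HOURS_PT else 1.0
    return tokens * rate

print(credits_burned(1000, hour_pt=8))   # peak hour: burns 2000.0 credits
print(credits_burned(1000, hour_pt=15))  # off-peak: burns 1000.0 credits
```

Publishing something this explicit would make the "you'll move through limits faster during peak" claim checkable, instead of leaving users to guess at percentages.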
I am also not a heavy user; it's just so frustrating because nothing is ever defined. Something is always under one promotion or another. And the vagueness about "weekly limits remain unchanged" is that they refuse to clarify whether that means tokens or the number of 5-hour blocks per week (which now get used up faster during peak).