> With that responsibility in mind, Instructure reached an agreement with the unauthorized actor involved in this incident. As part of that agreement:
> The data was returned to us. We received digital confirmation of data destruction (shred logs).
> We have been informed that no Instructure customers will be extorted as a result of this incident, publicly or otherwise.
> This agreement covers all impacted Instructure customers, and there is no need for individual customers to attempt to engage with the unauthorized actor.
I bet it depends on the institution and the IT team behind said institution, but at least for my university we apparently don't delete old course shells or anything.
I'm friends with a professor who complained to me a couple times about how sometimes he will need to scroll through pages and pages of courses he taught in the past. He also mentioned that profs aren't able to delete their own course shells either.
It wouldn't surprise me if most of it is still around. The amounts of data are probably fairly small, so unless intentionally deleted, it's probably still there (maybe unis in Europe are more likely to bother clicking the relevant buttons to comply with the GDPR?). I can't imagine storage becoming an issue unless you've got a huge uni or classes that deal with video (and even then, those probably end up on YouTube as private videos, or only as really small clips).
That's the takeaway - that people are supposed to be bored in line? I wasn't insulting the people, I was describing what to me is the awful human experience of shopping at Costco.
> there were hordes of people standing lifelessly in a huge line waiting to check themselves out
Where are the retail experiences where people waiting to check out are expressing an abundance of joy in life to you? Is the problem the “horde”? Sure, popular places tend to have a lot of people. I'm not sure why Costco customers seem so much less fun to you than customers anywhere else. This whole comment reads like a petulant “everyone is an NPC but me” screed.
So silly, right?! And of all the checkouts, Costco is bussin! I am always with my daughter; we open a package of 85 croissants and eat like 5 while also opening a 60lb bag of walnuts and munching together while taking bets on whether we picked the right line, based on complex algorithms of who is working at checkout, who is in line, how much stuff they bought, and a billion other parameters :)
Sorry, but this is rich. The vast, vast majority of times that GitHub goes down, the issue is resolved within the day, if not within the next couple of hours. Yet we'd all agree that "GitHub is down" posts are worth their time on HN, even though everyone knows how to access the status API, because it's not so much about being notified of the outage as about understanding why it happened.
What exactly is "clickbait" here? Is the disappearance "mysterious" or not? I'm not a banking tech engineer, so I don't have the slightest clue how a bank's app could completely glitch out for days about something as critical to people as their life savings. Were you even aware that this is something that could just happen? Do you have any reason to think this issue would have resolved itself without the aggressive petitioning by the account holder? Explain to us how you would go to the FDIC with a claim when the FDIC covers customers with provable losses, and the article reports that this person was so ghosted out of the system that Fidelity's customer support was telling her, “Are you sure you shouldn’t be calling Schwab”?
From the article:
> Ms. Gruntmane felt she had little choice, and was forced to cancel her 20 or so patients for the day. After a quick stop at home to retrieve her personal computer, identification and other records, she got back into her car and started driving. “It just felt out of, like, a psychological thriller,” she said.
> As she was driving through Vail, she called her mother, who suggested trying to reach Fidelity’s fraud department one more time. She pulled over, and finally reached a rep who was more helpful. He also couldn’t immediately find any evidence of her accounts, but she had found one account number to share with him. After a second hourlong call, he promised they would continue to investigate, but said it was most likely a systems-related issue.
Explain to me how this isn't of interest to people who touch online systems? Is there a status.fidelity.com that we have access to, that you could point out how systemic or non-systemic this kind of incident is?
Genuine question: is the cost to keep a persistent warmed cache for sessions idling for hours/days not significant when done for hundreds of thousands of users? Wouldn’t it pose a resource constraint on Anthropic at some point?
No, the cache is a few GB for most usual context sizes. It depends on model architecture, but if you take Gemma 4 31B at 256K context length, it takes 11.6GB of cache.

Note: I picked the values from a blog and they may be inaccurate, but in pretty much every model the KV cache is very large; it's probably even larger in Claude.
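As a rough sketch of where numbers like that come from: the KV cache stores two tensors (K and V) per layer per token. The architecture parameters below are illustrative guesses, not the real Gemma or Claude configs (which aren't all public), so treat the result as order-of-magnitude only.

```python
def kv_cache_bytes(num_layers, num_kv_heads, head_dim, seq_len, bytes_per_elem=2):
    """Size of the KV cache: 2 tensors (K and V) per layer, per token."""
    return 2 * num_layers * num_kv_heads * head_dim * seq_len * bytes_per_elem

# Hypothetical 32-layer model with 4 KV heads (GQA) of dim 128,
# fp16 cache, at 128K context:
size = kv_cache_bytes(num_layers=32, num_kv_heads=4, head_dim=128, seq_len=131072)
print(f"{size / 1e9:.1f} GB")  # ~8.6 GB for this made-up config
```

The per-token cost is fixed by the architecture, so the cache grows linearly with context length, which is why long contexts get expensive fast.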
To extend your point: it's not really the storage costs of the size of the cache that's the issue (server-side SSD storage of a few GB isn't expensive), it's the fact that all that data must be moved quickly onto a GPU in a system in which the main constraint is precisely GPU memory bandwidth. That is ultimately the main cost of the cache. If the only cost was keeping a few 10s of GB sitting around on their servers, Anthropic wouldn't need to charge nearly as much as they do for it.
The cost you're talking about doesn't change based on how long the session is idle. No matter what, they're storing that state and bringing it back at some point; the only difference is how long it's stored off-GPU between requests.
Are you sure about that? They charge $6.25 / MTok for 5m TTL cache writes and $10 / MTok for 1hr TTL writes for Opus. Unless you believe Anthropic is dramatically inflating the price of the 1hr TTL, that implies that there is some meaningful cost for longer caches and the numbers are such that it's not just the cost of SSD storage or something. Obviously the details are secret but if I was to guess, I'd say the 5m cache is stored closer to the GPU or even on a GPU, whereas the 1hr cache is further away and costs more to move onto the GPU. Or some other plausible story - you can invent your own!
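To put those prices in concrete terms (the $/MTok figures are the ones quoted above; check current pricing before relying on them), here is the cache-write cost for a large but plausible context:

```python
# Opus cache-write prices per million tokens, as quoted in the thread above.
PRICE_5M_PER_MTOK = 6.25
PRICE_1H_PER_MTOK = 10.00

context_tokens = 150_000  # a large coding-session context (illustrative)
cost_5m = context_tokens / 1e6 * PRICE_5M_PER_MTOK
cost_1h = context_tokens / 1e6 * PRICE_1H_PER_MTOK
print(f"5m TTL write: ${cost_5m:.2f}")   # $0.94
print(f"1h TTL write: ${cost_1h:.2f}")   # $1.50
print(f"1h/5m price ratio: {PRICE_1H_PER_MTOK / PRICE_5M_PER_MTOK:.2f}x")  # 1.60x
```

A 60% premium for a 12x longer TTL is consistent with the longer-lived cache being cheaper per minute to hold but costlier to move back into place.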
Storing on GPU would be the absolute dumbest thing they could do. Locking up the GPU memory for a full hour while waiting for someone else to make a request would result in essentially no GPU memory being available pretty rapidly. This type of caching is available from the cloud providers as well, and it isn't tied to a single session or GPU.
> Storing on GPU would be the absolute dumbest thing they could do
No, it's not dumb. There will be multiple cache tiers in use, with the fastest and most expensive being on-GPU VRAM, with cache-aware routing to specific GPUs and then progressive eviction to CPU RAM and perhaps SSD after that. That is how vLLM works, as you can see if you look it up, and you can find plenty of information on the multi-tier approach from inference providers, e.g. the new Inference Engineering book by Philip Kiely.
You are likely correct that the 1hr cached data probably mostly doesn’t live on GPU (although it will depend on capacity, they will keep it there as long as they can and then evict with an LRU policy). But I already said that in my last post.
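The tiering-plus-LRU idea can be sketched in a few lines. This is a toy model only: tier names, capacities, and whole-session granularity are all simplifications (real stacks like vLLM manage cache at the block level, not per session).

```python
from collections import OrderedDict

class TieredKVCache:
    """Toy two-tier cache: fixed-capacity fast tier with LRU eviction to a slow tier."""

    def __init__(self, vram_capacity):
        self.vram = OrderedDict()  # fast tier, limited slots, ordered by recency
        self.cpu = {}              # slow tier, treated as unlimited here
        self.vram_capacity = vram_capacity

    def put(self, session_id, kv_blob):
        self.vram[session_id] = kv_blob
        self.vram.move_to_end(session_id)          # mark most recently used
        while len(self.vram) > self.vram_capacity:
            evicted_id, evicted_blob = self.vram.popitem(last=False)  # evict LRU
            self.cpu[evicted_id] = evicted_blob

    def get(self, session_id):
        if session_id in self.vram:
            self.vram.move_to_end(session_id)      # refresh recency
            return self.vram[session_id]
        if session_id in self.cpu:                 # promote on hit (may evict another)
            self.put(session_id, self.cpu.pop(session_id))
            return self.vram[session_id]
        return None
```

With a capacity of 2, inserting sessions a, b, c evicts a to the slow tier; reading a promotes it back and pushes b down, which is the LRU behavior described above.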
Yesterday I was playing around with Gemma4 26B A4B at a 3-bit quant, sizing it for my 16GB 9070XT:
Total VRAM: 16GB
Model: ~12GB
128k context size: ~3.9GB
At least I'm pretty sure I landed on 128k... might have been 64k. Regardless, you can see the massive weight (ha) of the meager context size (at least compared to frontier models).
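The budget above can be checked with simple arithmetic (all values are the rough figures from the list, not measured numbers):

```python
# Rough VRAM budget for the local run described above.
total_vram_gb = 16.0
model_gb = 12.0    # ~3-bit quantized weights (approximate)
context_gb = 3.9   # reported KV cache at the chosen context length

headroom = total_vram_gb - model_gb - context_gb
print(f"headroom: {headroom:.1f} GB")  # ~0.1 GB: it barely fits
```

The point stands either way: the context cache alone eats roughly a quarter of the card, and that's at 128k, a fraction of frontier-model context lengths.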
It’s arguable that opening the doors for greedy soldiers to do a little insider trading and inadvertently expose the illegal covert violent raid that they’re party to might be one of the few positive outcomes in a society gamified by Polymarket
Has the availability of deepfake porn generation reduced the demand for deepfake porn featuring real people? When deepfake generators are capable of creating convincing imagery of flawless ideal fake humans, why do you suppose there’s so many real humans who report being non-consensual subjects of deepfake porn?
> Has the availability of deepfake porn generation reduced the demand for deepfake porn featuring real people?
yes
> When deepfake generators are capable of creating convincing imagery of flawless ideal fake humans, why do you suppose there’s so many real humans who report being non-consensual subjects of deepfake porn?