The limits on the Max subscriptions are more generous, and power users are generating a loss.
I'm rather certain, though I can't prove it, that buying the same tokens through the API would cost at least 10x more. Anecdotally, my Cursor team usage was getting to around $700/month. After switching to Claude Code Max, I have so far hit the 3-hour limit window only once on the $100 sub.
What I'm thinking is that Anthropic is taking a loss on users who use it a lot, but there are a lot of users who pay for Max and don't actually use it.
With the recent improvements and the increase in popularity of projects like OpenClaw, the number of loss-generating users has probably increased massively.
I've spent $17.64 on on-demand usage in Cursor, with an estimated API cost of $350, mostly using Claude Opus 4.5. Some of this is skewed since subagents use a cheaper model, but even accounting for subagents, the costs are 10x off the public API prices. Either the enterprise on-demand usage gets subsidized, the public API prices carry a 10x markup, or Cursor is billing only its ~10% surplus to cover its costs for indexing and such.
edit: My $40/month subscription used $662 worth of API credits.
Oh, I figured out the costs for the enterprise plan. It's $0.04 per request; I'm not charged per token at all. The billing is completely different for enterprise users than for regular users.
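Back-of-the-envelope, that flat rate diverges sharply from per-token pricing once requests carry big contexts. A quick sketch (the per-token rates are my assumptions for an Opus-class model, not an official price list):

```ts
// Back-of-the-envelope comparison of per-request vs. per-token billing.
// The per-token rates below are assumptions for illustration, not
// Anthropic's actual price list.

const PER_REQUEST_USD = 0.04;    // enterprise flat rate per request
const INPUT_USD_PER_MTOK = 15;   // assumed Opus-class input price per 1M tokens
const OUTPUT_USD_PER_MTOK = 75;  // assumed Opus-class output price per 1M tokens

// A typical agentic request with a large context window.
const inputTokens = 50_000;
const outputTokens = 2_000;

const perTokenCost =
  (inputTokens / 1e6) * INPUT_USD_PER_MTOK +
  (outputTokens / 1e6) * OUTPUT_USD_PER_MTOK;

console.log(`per-request billing: $${PER_REQUEST_USD.toFixed(2)}`);
console.log(`per-token billing:   $${perTokenCost.toFixed(2)}`); // ≈ $0.90
console.log(`ratio: ${(perTokenCost / PER_REQUEST_USD).toFixed(1)}x`);
```

At those assumed rates, a single large agentic request costs roughly $0.90 on per-token billing versus a flat $0.04, which is the kind of 10-20x gap people keep reporting.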
This exactly. I think this is why Anthropic simply doesn't want third-party businesses to max out the subscription plans by sharing them across their own clients.
A lot of those relevant writings became relevant because the horrible experiences the authors went through forged them into interesting writers. If we assume that we only know retrospectively whether writing is important, then the best course of action would be for people to write as a hobby and make choices that are likely (rather than unlikely) to lead to a comfortable life, particularly in this era, when we might suspect that writing and publishing a book is getting much easier thanks to technology.
> A lot of those relevant writings became relevant because the horrible experiences the authors went through forged them into interesting writers.
Sometimes artists suffer, but the suffering artist is mostly a legend at this point. Plenty of great artists have perfectly fine lives. Look at, like, any modern fantasy or sci-fi author.
Are you arguing that most good writers from history were poor? That is, after all, the only "horrible experience" a subsidy would alleviate. I don't think that's actually supported by the evidence; most great writers I can think of were comparatively sheltered (although they were often sensitive to the horrible experiences of others).
I think the argument is (a) most writers have to do a lot of writing before producing work that is consumable and appreciated enough to be considered successful, and (b) most great writers had to go through some shit in life and incorporate it into their writing to make it interesting enough to succeed.
This is not my experience any longer. With a properly set up feedback loop and framework documentation, it does not seem to matter much whether they are working with completely novel stuff or not. Of course, when those are not available they hallucinate, but who even works that way anymore? Anyone can see that LLMs are just glorified auto-complete machines, so you really have to put a lot of work into the environment they operate in and into quick feedback loops. (Just like with 90% of developers made of flesh...)
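To make "feedback loop" concrete, here is a minimal sketch of the driver I have in mind: run the project's own checks after every agent edit and feed any failure output straight back in. The check commands and runAgentTurn() are hypothetical placeholders, not any particular tool's API.

```ts
// Minimal sketch of an agent feedback loop: after each edit, run the
// project's own checks and hand any failure output straight back to the
// model. CHECKS and runAgentTurn() are hypothetical placeholders.
import { execSync } from "node:child_process";

const CHECKS = ["npm run lint", "npm run typecheck", "npm test"];

function runChecks(): string | null {
  for (const cmd of CHECKS) {
    try {
      execSync(cmd, { stdio: "pipe" });
    } catch (err: any) {
      // Return the failing command's output so the model can react to it.
      return `Command failed: ${cmd}\n${err.stdout}\n${err.stderr}`;
    }
  }
  return null; // all checks green
}

// Pseudo-driver: keep looping until the checks pass or we give up.
async function loop(prompt: string, runAgentTurn: (p: string) => Promise<void>) {
  for (let attempt = 0; attempt < 5; attempt++) {
    await runAgentTurn(prompt);
    const failure = runChecks();
    if (failure === null) return;
    prompt = `The checks failed. Fix this:\n${failure}`;
  }
  throw new Error("Checks still failing after 5 attempts");
}
```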
I've come across llms.txt files in a few services. I don't know how agents.md compares to llms.txt, but I guess they could have pretty much the same content. See also https://llmstxt.org/
Anyhow, I have made a few interesting observations that might be true for agents.md as well:
Agents have trouble with these large documents and seem to miss many relevant nuances. However, it's rather easy to point them in the right direction when all the relevant information is in one file.
Another thing is that I personally prefer this style of documentation. I can just Ctrl+F and find the relevant information, rather than using some built-in search and trying to read through documents. I feel that the UX of one large .txt file is better than documentation scattered across multiple pages of some pretty documentation engine.
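For reference, the llms.txt format proposed at llmstxt.org is just Markdown with a fixed shape: an H1 title, a blockquote summary, then sections of links. A made-up example (the project name and URLs are invented):

```
# ExampleService

> ExampleService is a REST API for scheduling jobs. This file links to
> LLM-friendly plain-text versions of the docs.

## Docs

- [Quick start](https://example.com/docs/quickstart.md): authentication and a first request
- [API reference](https://example.com/docs/api.md): all endpoints and parameters

## Optional

- [Changelog](https://example.com/changelog.md): release history
```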
Tokyo might not be the best example. In Shanghai, Beijing, and Moscow, in my experience, there is a risk of getting stuck in a car for 2+ hours. Even if driving is sometimes faster, there is a risk of getting completely stuck.
I feel that it is a common thing. You just have to "keep an eye on it". There are several failure modes with Claude. Maybe the most annoying is that it often writes a kind of defensive code, so it is harder to detect that there is a fatal mistake somewhere. It can hide those really well. And it loves to "fix" linter issues with the `any` type in TypeScript.
I'm using it regardless. I've just learned to deal with these and keep an eye on them. When it creates a duplicate interface, I roll back to an earlier prompt and am more explicit that the type already exists.
I try not to argue about whether something it did is wrong or right. There is no point. I simply roll back and try another prompt. Claude is not a human.
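To make the `any` failure mode concrete, here is a made-up but typical example of the pattern: the "fix" silences the type checker instead of fixing the shape mismatch.

```ts
// The kind of "fix" Claude tends to produce: the linter complaint goes
// away, but so does the type checking that would have caught the bug.
function getUserEmail(user: any): string {
  return user.emial; // typo now compiles silently
}

// The fix you actually want: keep the type and let the compiler work.
interface User {
  email: string;
}

function getUserEmailTyped(user: User): string {
  return user.email; // a typo here would be a compile error
}
```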
Thank you for the hard work in this space! I think it is really important that there is a proper open source solution available.
I found OptaPlanner, and subsequently Timefold, a few months ago while searching for a solution to the employee scheduling problem at my wife's veterinary clinic. The problem is not big enough for anyone to pay for a solution, but big enough to cause stress for whoever is doing the shifts manually.
It was interesting that there are a lot of online SaaS providers claiming to solve the problem, but they simply are not configurable for all the kinds of constraints a real workplace has.
Unfortunately, I partially feel the same about Timefold, because designing those constraints really requires changing how you think about many problems. While the engine is capable of doing whatever you need, there is a steep learning curve to get there.
So while the article mentions documentation, I would say that the documentation is far from sufficient for wide adoption.
Personally, I would really have needed documentation on a mental model for thinking about the problem, and then a ton of examples of how to solve real employee scheduling problems: the problem written in the format business people use, then translated into an elegant constraint rule, explained step by step. Something like the sketch below.
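For example, take the business rule "nobody can be booked on two overlapping shifts." The translation into a penalty could look like this. This is a plain TypeScript sketch of the constraint logic only, not Timefold's actual API, which expresses the same idea with constraint streams in Java:

```ts
// Business rule: "An employee can never be booked on two shifts at once."
// As a constraint: penalize every pair of overlapping shifts assigned to
// the same employee. Types here are illustrative, not Timefold's API.
interface Shift {
  employee: string;
  start: number; // epoch millis
  end: number;
}

function overlaps(a: Shift, b: Shift): boolean {
  return a.start < b.end && b.start < a.end;
}

// Hard-constraint penalty: count overlapping pairs per employee.
function overlappingShiftPenalty(shifts: Shift[]): number {
  let penalty = 0;
  for (let i = 0; i < shifts.length; i++) {
    for (let j = i + 1; j < shifts.length; j++) {
      if (
        shifts[i].employee === shifts[j].employee &&
        overlaps(shifts[i], shifts[j])
      ) {
        penalty++;
      }
    }
  }
  return penalty; // the solver tries to drive this to zero
}
```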
I had to invest more than 40 hours to get a working MVP that solves real problems, not just the ones already covered by the example code. Most people are not willing to do that.
What I'm trying to say is that to make planning software popular, it should also be usable for trivial projects. I understand that it's hard to focus on everything, but just providing more information about real use cases, how they were solved, and how to think about the design problems would make the market bigger and bring you a lot more customers in the long run.
I just wonder how I might contribute to improving the documentation. I probably don't have a deep enough understanding of the correct solutions, but I will look into it.
Hi Tappio, read you loud and clear. We are actively looking into making it easier for all people to solve their planning problems. Our goal is to "free the world from wasteful scheduling" and we more than realize we can't do that alone. ;)
I scheduled large one-day events where attendees were recorded performing voiceover several times during the day: as many as 350 individual recordings, each with an acting coach, a studio engineer, a studio room setup, custom sets of scripts, and a demographically optimized group of attendees that would go through the day together. Because each attendee's journey through the day was (somewhat) customizable by them, each new attendee would change the schedule of some other attendees. So we had to wait until ~80% of the tickets had been purchased to begin scheduling, each new attendee was progressively harder to schedule (making it hard to keep selling tickets), and we also had to be flexible with the support staff, engineers, and coaches.
I've never heard that claim. I've lived in multiple houses heated primarily with a heat pump through winters averaging minus 10 Celsius, ranging from 0 to -30.
My current garage has radiators installed but I've never used them as the heat pump is just fine.
I guess they must build the houses or heat pumps differently where you live!
We just launched an MVP for PDF data extraction: https://excelifier.com/. The service is not open source and relies on OpenAI, which is probably a bit problematic in your case.
However, we understand that privacy concerns are really important for many organizations. Making it self-hostable and having it depend on a locally running LLM is something we are looking into.