Hacker News | hadlock's comments

You need a non-electronic way to bill landowners for property taxes. That's it. Physical snail mail is the de facto way for the government to legally serve property tax bills and other notices to private citizens. Yes, we live in 2026 and everyone has email, but there's no legal requirement to give the government your email address, or even to have one. You are, however, legally required to provide a mailing address where your property tax bill can be sent.

Sure, by that standard we could probably reduce to weekly or even monthly mail service. It's been suggested since at least 2008 that we drop Tuesday mail service, since almost nobody sends mail on Saturdays and there's no mail service on Sundays.


I pay all of my property taxes online.

I'd be interested in seeing the source for this if you have a moment

Some kind of top-level metric like average tokens per task would be useful. E.g., yes, StepFun is 5% the price of Sonnet, but does it use 1x, 10x, or 1000x more tokens to accomplish similar tasks (median per task)? For example, I am willing to eat a 20% quality drop from Sonnet if the token use is less than 10% more than Sonnet's; if token use is 1000x, then that's something I want to know.
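The cost trade-off being asked for can be sketched with back-of-envelope arithmetic. The ~5% price ratio comes from the thread; the token multipliers (1x/10x/1000x) are hypothetical scenarios, not measured benchmarks:

```python
# Back-of-envelope: effective per-task cost of a cheap model vs. Sonnet.
# Per-token price alone is misleading if the model burns more tokens per task.

STEPFUN_PRICE_RATIO = 0.05  # StepFun's per-token price as a fraction of Sonnet's

def relative_cost(price_ratio: float, token_multiplier: float) -> float:
    """Per-task cost relative to Sonnet (Sonnet = 1.0)."""
    return price_ratio * token_multiplier

for mult in (1, 10, 1000):
    cost = relative_cost(STEPFUN_PRICE_RATIO, mult)
    print(f"{mult}x tokens -> {cost:g}x Sonnet's per-task cost")
```

Under these assumptions, even at 10x the tokens the cheaper model still costs half as much per task; it only breaks even around a 20x token multiplier, and at 1000x it costs 50x as much.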

Added https://app.uniclaw.ai/arena/model-stats

Also added per-battle stats on the battle detail page.


According to openrouter.ai, StepFun 3.5 Flash is the most popular model at 3.5T tokens, vs. GLM 5 Turbo at 2.5T tokens. Claude Sonnet is in fifth place with 1.05T tokens. Which isn't super surprising, as StepFun is about 5% the price of Sonnet.

https://openrouter.ai/apps?url=https%3A%2F%2Fopenclaw.ai%2F


> the most popular model

It was free for a long time. That usually skews the statistics. It was the same with grok-code-fast1.


Exactly. When I read the headline I thought: "Of course it is, it's free."

I should have clarified I didn't use the free version...

I used to use these various models for my claw-like, and they had a habit of taking way more agent rounds and way more tokens to produce something that Sonnet would produce with far fewer. My total cost ended up being the same to do useful things.

The really surprising part to me is that, despite being the cheapest model on the board, StepFun is often able to score highly on pure performance. Other models in the same price range (e.g. Kimi) fail to do that.

GLM also has their subscription, which I would assume heavy users use.

The fact that even they struggle with GitHub Actions is a real testament to the fact that nobody wants to host their own CD workers.

> The fact that even they struggle with GitHub Actions is a real testament to the fact that nobody wants to host their own CD workers.

What a weird takeaway


I suspect you (a small-to-medium business) will be able to buy a Claude 4.6-class rack-mount device for $6,000 by 2030 that does 100 t/s with a 1-million-token context, which, honestly, is probably adequate for an office (front office, back office, executive tier, etc.) of 10-300 people, unless you've got more than 4 engineers on staff. That kind of offline device is going to push everyone to provide that kind of cloud-enabled baseline service at very low cost. The Qwen 3.5 series is already showing you can almost (but not quite) squeeze that kind of performance out of consumer hardware. 256/512 GB consumer video cards will get us there, eventually, if capacity ever catches up with demand.

If my options are run Opus 4.6 in the cloud for $200/mo or run Opus 4.6 locally for $275, I am absolutely going to self-host 100% of the time. Sending all that data to the cloud presents tremendous legal risk for companies. There are currently no retention rules for privately hosted AI.

OpenAI has... I'm not sure, but let's say 500M free users, and it's not unreasonable to assume they eventually hit 1B. That is a lot of advertising revenue, which is what powers companies like Google and even smaller companies with only 300M users, like Twitter. If e-commerce isn't a major focus for OpenAI, then their board members are asleep at the wheel.

OpenClaw has persistent memory, stored to disk, and an efficient way of accessing it. ChatGPT and Claude both added a rudimentary "memory" feature in March, but it's nowhere near as extensible or vendor-neutral.


ChatGPT has had memory for a long time. Claude has also had it for quite some time for paying customers.


The good news is that Google search results have degraded so much that competitors like Kagi can compete directly. I moved off Google Search completely on all devices about a year ago and I don't miss it at all; most of the time I forget I have a Kagi subscription.

