Your Go example included zero information that Python wouldn't give you out-of-the-box. And FWIW, since this is "Go vs Rust vs Zig," both Rust and Zig allow for much more elegant handling than Go, while similarly forcing you to make sure your call succeeded before continuing.
This will probably be an unpopular reply, but "real median household income" — aka, inflation-adjusted median income — has steadily risen since the 90s and is currently at an all-time high in the United States. [1] Inflation includes the cost of housing (by measuring the cost of rent).
However, we are living through a housing supply crisis: while overall cost of living hasn't outpaced incomes, housing's share of it has grown massively. We would all be living much richer lives if we could bring down the cost of housing — or at least have it flatline and let inflation take care of the rest.
Education is interesting, since most people don't actually pay the list price. The list price has gone up a lot, but the percentage of people paying it has gone down a lot in tandem: from over 50% in the 90s for state schools to 26% today, thanks to a large increase in subsidy programs (student aid). While real education costs have still gone up somewhat, they've gone up much less than the list prices you're quoting would lead you to believe: those prices are essentially a tax on the rich who don't qualify for student aid. [2]
I think everyone has quibbles about the CPI. Ultimately though, it would take a lot of cherry-picking to make it seem like overall cost of living has gone up 3x while wages have gone up less. As a counterexample, an NES game in 1990 cost $50 new (in 1990 dollars! Not adjusted for inflation; that's roughly $120 in 2025 dollars). Battlefield 6 cost $70 new this year (in 2025 dollars), and there were widespread complaints about games getting "too expensive." In real terms games have become massively less expensive — especially considering that the budget for Battlefield 6 was $400MM, and the budget for Super Mario World in 1990 was less than $2MM.
There are a zillion examples like this. Housing has gone way up adjusted for inflation, but many other things have gone way, way down adjusted for inflation. I think it's hard to make a case that overall cost of living has gone up faster than median wages, and the federal reports indicate the opposite: median real income has been going up steadily for decades.
Housing cost is visible and (of course, since it's gone up so much) painful. But real median income is not underwater relative to the 90s. And there's always outrage when something costs more than it used to, even if it's actually cheaper adjusted for inflation: for example, the constant outrage about videogame prices, which have in fact massively declined in real terms despite requiring far more labor to make and sell.
Housing, vehicles, groceries, and health insurance are all up massively. Who gives a shit how much a game costs if you can't afford groceries and rent?
In 2010 I paid $3k for a 10-year-old truck with 100k miles. That same truck today easily costs $15k. Same story for rent. Same story for groceries. Same story for health insurance.
Who gives a shit how much trinkets cost if you can't afford groceries and rent?
This is the major reason China has been investing in open-source LLMs: because the U.S. publicly announced its plans to restrict AI access into tiers, and certain countries — of course including China — were at the lowest tier of access. [1]
If the U.S. doesn't control the weights, though, it can't restrict China from accessing the models...
Why wouldn't China just keep their own weights secret as well?
If this really is a geopolitical play (I'm not sure if it is or isn't), it could be along the lines of: 1) most AI development in the US is happening at private companies with balance sheets, shareholders, and profit motives, and 2) China may be lagging too far behind in compute to beat everyone to the punch in a naked race.
Therefore, releasing open weights may create a situation where AI companies can't sell their services as effectively, meaning they may curtail R&D at a certain point. China can then pour nearly infinite money into it, eventually get up to speed on compute, and win the race.
They are taking the gun out of the USA's hand and unloading it, figuratively speaking. With this strategy they don't have to compete at full capacity with the US, because everyone else will compete with cheaper models. If a cheaper model can do it, then why fork out for Opus?
I think it's just because China makes its money from other sources, not from AI, and from what I've read, the advantage for China of killing the US's AI advantage is tanking its stock market / disrupting it.
Seems like it may have a chance of working if you look at the companies highest valued on the S&P 500:
NVIDIA, Microsoft, Apple, Amazon, Meta Platforms, Broadcom, Alphabet (Class C),
The share of total revenue that Microsoft, Apple, Alphabet, Meta, and Amazon currently derive from the AI market is less than 10%.
Because they don't have the chips; but if people in countries with the chips provide hosting or refine their models, they benefit from those breakthroughs.
The CCP controlling the government doesn't mean they micromanage everything. Some Chinese AI companies release the weights of even their best models (DeepSeek, Moonshot AI); others release weights for small models but not their largest ones (Alibaba, Baidu); some keep almost everything closed (Bytedance and iFlytek, I think).
There is no CCP master plan for open models, any more than there is a Western master plan for ignoring Chinese models only available as an API.
Never suggested anything of the sort. Involvement doesn't mean direct control: it might be a passive 'let us know if there's progress' issued privately, or a passive 'we want to be #1 in AI in 2030' announced publicly. Neither requires any micromanagement whatsoever; the CCP's expectation is that companies figure out how to align with party directives themselves... or face consequences.
This isn't even whataboutism, because the comparison is just insane.
The difference is categorical. Under the CCP, "private" companies must actively pursue the party's strategic interests or cease to exist (and their executives/employees can be killed). In the US, neither of those things happens: the worst penalty for a company not following the government's direction (while continuing to follow the law, which should be an obvious caveat) is the occasional fine for failing to comply with regulation, or losing preference for government contracts.
Only those who are either totally ignorant or seeking to spread propaganda would even compare the two.
They don't have to micromanage companies. A company's activities must align with the goals of the CCP, or it will not continue to exist. This produces companies that will micromanage themselves in accordance with the CCP's strategic vision.
I think "investing in research and hardware" is fairly relevant to my claim of "China has been investing in open-source LLMs." China also has partial ownership of several major labs via "golden shares" [1] like Alibaba (Qwen) and Zai (GLM) [2], albeit not DeepSeek as far as I know.
As far as I can tell, AI is already playing a big part in China's fifteenth Five-Year Plan (2026-2030), which is their central top-down planning mechanism. That's about as big a move as they can make.
It's obviously true that DeepSeek models are biased about topics sensitive to the Chinese government, like Tiananmen Square: they refuse to answer questions related to Tiananmen. That didn't magically fall out of a "predict the next token" base model (there's plenty of training data from which to complete those tokens accurately); it came out of specific post-training to censor the topic.
It's also true that Anthropic and OpenAI have post-training that censors politically charged topics relevant to the United States. I'm just surprised you'd deny DeepSeek does the same for China when it's quite obvious that they do.
What data you include, or leave out, biases the model; and there's obviously also synthetic data injected into training to influence it on purpose. Everyone does it: DeepSeek is neither a saint nor a sinner.
All I'm saying is that if you want to hear your own propaganda, use your own state approved AI. Deepseek is obviously going to respond according to their own regulatory environment.
I really hate the way people like you talk about "narratives". I care about facts. Are you denying it was a massacre? How many people do you think were killed?
Depends on who you ask! That's what I mean by "narratives". There's plenty of corroborating evidence that there was a large demonstration and riots. After that it gets hazy, because different officials have claimed fatality and casualty figures as high as 10k and as low as 300, all with differing ratios of soldier to student casualties. Wouldn't the numbers and/or ratios be similar if they were looking at the same facts?
I'm saying there's a massive disagreement both among western sources and between western sources and Chinese sources. The disagreement among western sources is what makes their reporting look made up. I'm not saying I believe what China has reported.
I dunno, the US routinely just states plainly how many people they massacre and folks in the US seem okay with it.
I'd assume that when the Chinese do bad things, people in China feel the same way about it as folks in the US feel about the US doing evil stuff, which is to say "very little at all". Why would they need to lie, any more than the US needs to lie? Do average Chinese folks have more conscience than average US citizens?
"the US routinely just states plainly how many people they massacre and folks in the US seem okay with it."
What a nonsensical thing to say. The CCP ruthlessly censors all discussion of the massacre, and every LLM created in China censors it. So stop it with the BS whataboutism.
I recently learned about the (ancient?) Greek concept of amathia. It's a willful ignorance, often cultivated as a preference for identity and ego over learning. It's not about a lack of intelligence, but rather a willful pattern of subverting learning in favor of cult and ideology.
The satisfies keyword is quite different from "as const." What it does is:
1. Enforce that a value adheres to a specific type,
2. but without casting the value to that type.
For example, if you have a Rect type like:
type Rect = { w: number, h: number }
You might want to enforce that some value satisfies Rect properties... But also allow it to have others. For example:
const a = { x: 0, y: 0, w: 5, h: 5 };
If you wrote it as:
const a: Rect = // ...
TypeScript wouldn't allow you to also give it x and y properties. And if you did:
as Rect
at the end of the line, TypeScript would allow the x, y properties, but would immediately lose track of them and not allow you to use them later, because you cast it to the Rect type which lacks those properties. You could write an extra utility type:
type Location = { x: number, y: number };
const a: Location & Rect = // ...
But that can get quite verbose as you add more fields. And besides: in this example, all we're actually trying to enforce is that the object is a Rect — why should we also have to enforce other things at the same time? Usually TS infers types for fields, but here, as soon as you start enforcing one kind of shape, type inference suddenly breaks for every other field.
The satisfies keyword does what you want in this case: it enforces the object conforms to the type, without casting it to the type.
const a = { x: 0, y: 0, w: 5, h: 5 } satisfies Rect;
// a.x works
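Putting the three approaches side by side (a minimal self-contained sketch, reusing the Rect from above):
type Rect = { w: number, h: number };
// 1. Plain annotation: excess property checking rejects x and y outright.
// const a: Rect = { x: 0, y: 0, w: 5, h: 5 }; // error: 'x' does not exist in type 'Rect'
// 2. Cast: compiles, but x and y are erased from the static type.
const b = { x: 0, y: 0, w: 5, h: 5 } as Rect;
// console.log(b.x); // error: property 'x' does not exist on type 'Rect'
// 3. satisfies: checks conformance against Rect while keeping the inferred type.
const c = { x: 0, y: 0, w: 5, h: 5 } satisfies Rect;
console.log(c.x); // works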
This was a fantastic writeup, thanks. If you don't mind an additional question...
How does this work,
function coolPeopleOnly(person: Person & { isCool: true }) {
// only cool people can enter here
}
const person = {
name: "Jerred",
isCool: true,
} satisfies Person;
coolPeopleOnly(person);
Since
- person's properties aren't readonly (no as const), so person.isCool could be mutated
- coolPeopleOnly requires that its input be not just a Person, but one with isCool = true.
If you ignore the `satisfies` for a moment, the inferred type of `person` would be `{ name: string, isCool: boolean }`: TypeScript normally widens the literal properties of a mutable object literal (`"Jerred"` becomes `string`, `true` becomes `boolean`), so `coolPeopleOnly(person)` would actually fail, because `isCool: boolean` isn't assignable to `isCool: true`.
What `satisfies` is doing here is twofold. First, it raises an error on the literal itself if its type doesn't match `Person`, without changing the type to `Person`. Second (as far as I understand the heuristics), it gives each property a contextual type, which changes the widening behavior: since `boolean` is really the union `true | false`, TypeScript keeps `isCool` narrowed to the literal type `true` (a bit like `as const` would), so the inferred type is `{ name: string, isCool: true }` and the call compiles.
(You could still try to mutate it to `isCool: false` later, but then TypeScript would complain, because `false` isn't assignable to the literal type `true`.)
So the `coolPeopleOnly` check will pass because the `person` literal has all the right attributes, but also we'll get an error on the literal itself if we forget an attribute that's necessary for the `Person` type.
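To make the widening concrete, here's a minimal sketch; note the Person definition is an assumption on my part, since the original writeup's isn't shown here:
type Person = { name: string, isCool: boolean }; // assumed definition
function coolPeopleOnly(person: Person & { isCool: true }) {
  // only cool people can enter here
  console.log(`${person.name} is in`);
}
const widened = { name: "Jerred", isCool: true }; // inferred: { name: string, isCool: boolean }
// coolPeopleOnly(widened); // error: 'boolean' is not assignable to type 'true'
const kept = { name: "Jerred", isCool: true } satisfies Person; // inferred: { name: string, isCool: true }
coolPeopleOnly(kept); // ok: isCool keeps the literal type 'true'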
It does; satisfies lets you say "if this value doesn't conform to this type then I want that to be an immediate compile error, even if it would otherwise be okay" (and in this particular example it also keeps isCool narrowed to the literal true, which the call requires). It isn't needed all that often, since usually getting the type wrong would produce a compile error elsewhere, but occasionally it proves useful. When designing the feature they collected some use cases: https://github.com/microsoft/TypeScript/issues/47920
Yup. K8s is a bit of a pain to keep up with, but Chef and even Ansible are much more painful for other reasons once you have more than a handful of nodes to manage.
It's also basically a standard API that every cloud provider is forced to implement, meaning it's really easy to onboard new compute from almost anyone. Each K8s cloud provider has its own little quirks, but that's much simpler than the massive sea of differences between each cloud's unique VM-management API (and the pre-K8s tools that papered over those differences were generally very leaky abstractions).
But this is just part of how Singapore is different than America and Europe. China has even stricter controls in terms of limiting what individuals can do with their bank accounts (you can't transfer money to non-Chinese-citizens at all!).
Western countries put enormous value on personal liberty — America probably the most so, but even EU countries are extremely liberal in a liberty sense compared to historical norms, and even compared to some well-functioning economies today like China and Singapore. It's interesting, since I think the idea of personal liberty is so deeply engrained in many of our consciousnesses that we couldn't conceive of living like that. But... plenty of people do, and they're happy about it.
Plenty of people seem to be quite supportive of the idea that visa holders (ie not citizens), or simply brown people, should NOT be allowed to criticize the standing president, so I don't know that the idea of personal liberty is as strong as I believed it was growing up.
you can't transfer money to non-Chinese-citizens at all!
that's not true. you just have to document and explain the transfer, if it is a foreign bank account. if it is a local one then the citizenship of the account holder does not even matter.
Western countries put enormous value on personal liberty
in everyday life the limits on personal liberty in china are hardly noticeable. and they are offset by safety even when walking through dark neighborhoods at 3am.
in everyday life the limits on personal liberty in china are hardly noticeable. and they are offset by safety even when walking through dark neighborhoods at 3am.
The everyday life aspects aren't noticeable because everyone has adapted to not having the liberty to e.g. rally to protest the government, shut down major infrastructure in opposition to the government, do drugs in public and buy them off the Dark Web, etc. There was quite a famous rally in the late 80s where the "shut down major city sites" difference was proven... starkly. Contrast that with last year in America, where it was quite common for protests to shut down entire highways, and not even a single tank rolled over them.
I'm not arguing that America's system is necessarily better. But it's definitely different, and Americans find the restrictions of Chinese and Singaporean-style governance baffling, as per many comments in this thread.
I personally have used the "liberty to walk home alone at night" point in discussions to point out the benefits of China's system with friends and family too, so I'm not unsympathetic to the idea. It's just a different way of thinking than many Americans have, where the ability to oppose the government and do whatever you want with your money is considered sacrosanct, and giving up personal security is culturally viewed as so clearly worth it that alternatives aren't even considered.
A democratically elected government is demanding to see papers on the street, and this is celebrated by millions, so your claim about putting "enormous value on personal liberty" has been proven false.
FWIW, you're nearly doubling the actual prison incarceration rate. There were about 1.25 million people in prison as of the most recent federal data (which runs through 2023). 2MM is the number of people who were ever in a prison or jail at some point during 2023 (including e.g. holding cells for drunk drivers), but there was no point at which 2MM people were in prison simultaneously.
Nonetheless, America has >10x the number of murders per capita as China, so it's no surprise that it has nearly 10x the people in prison for murder per capita (in fact, it's surprising it's not >10x, to match the murder rate). Ditto for basically any crime rate you can think of.
The downside of America's system includes much higher crime rates, which result in higher incarceration rates. That doesn't discount that in America, much more is legal than in China: people in America commit a lot more crime, but also do a lot more things that are legal in America but illegal in China (e.g. mass rallies to protest the government). In the security/liberty tradeoff, America and China are pretty much at opposite ends of the spectrum. There are downsides and upsides to both; China is a much safer place than America.
I think it's actually conceptually pretty different. LLMs today are usually constrained to:
1. Outputting text (or, sometimes, images).
2. Having no long-term storage, except for the rare closed-source "memory" implementations that just paste stuff into context without much user or LLM control.
This is a really neat glimpse of a future where LLMs can have much richer output and storage. I don't think this is interesting because you can recreate existing apps without coding... but I think it's really interesting as a view of a future with much richer, app-like responses from LLMs, and richer interactions. For example, rather than needing to format everything as a question, the LLM could generate links that you click on to drill into more information on a subject, with the click itself becoming a new query against the LLM (see the sketch below). And similarly it can ad-hoc manage databases for memory + storage, etc.
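A hypothetical sketch of that link loop (every name here is made up for illustration, nothing from the article): the model embeds follow-up prompts in link hrefs, and clicking one just feeds a new query back into the model.
type LLM = (prompt: string) => Promise<string>; // stand-in for any chat-completion call
// The model's HTML output may embed follow-up links such as:
//   <a href="llm:tell%20me%20more%20about%20beta%20amyloid">beta amyloid</a>
function onLinkClick(href: string, llm: LLM, render: (html: string) => void) {
  if (!href.startsWith("llm:")) return; // ordinary link: let the browser handle it
  const followUp = decodeURIComponent(href.slice("llm:".length));
  llm(followUp).then(render); // the click itself becomes a new LLM query
}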
LLM is just one model used in A.I. It's not a panacea.
For generating deterministic output, probably a combination of Neural Networks and Genetic Programming will be better. And probably also much more efficient, energy-wise.
Multimodal LLMs already learn to generalize over text inside images. In my experience most multimodal LLMs are significantly better than traditional OCR, especially if there's any unusual formatting going on.
This thread is considering image input as an alternative to text input for text, not as an alternative to other types of OCR, so the accuracy bar is 100%.
I've had mixed results with LLMs for OCR: sometimes excellent (zero errors on a photo of my credit card bill), but poor if the source wasn't a printed page - sometimes "reusing" the same image section for multiple extracted words!
FWIW, I highly doubt that LLMs have just learnt to scan pages from (page image, page text) training pairs - more likely, text-heavy image input triggers special OCR handling.
As usual with Alzheimer's studies, this is another "breakthrough" in mice. Unfortunately, every other Alzheimer's breakthrough in mice has failed to replicate in humans, because... mice don't get Alzheimer's. We can create mice with dementia patterns that are superficially similar to Alzheimer's (beta amyloid plaques!), clear the plaques, and often even reverse the dementia. Unfortunately that doesn't help much of anything, because the diseases we create in mice are not Alzheimer's and appear not to be causally similar to it. We have many such mouse models, and all of them have failed to translate.
I generally wouldn't trust any Alzheimer's "breakthroughs" in mice models. The models are not accurate and have thus far had zero predictive power for actual Alzheimer's in humans.
The same is true for life extension in mice. We can massively extend the mouse life span but it doesn’t replicate in humans.
The reason there is pretty easy to grasp. Mice are a short-lived, more r-selected (lots of offspring, lower parental investment) species. They haven't been heavily selected for longevity, which means there's more low-hanging fruit: more opportunities to tweak something and make a mouse live longer.
Humans meanwhile are among the longest-lived large mammals and are extremely K-selected (few offspring, high parental investment). That means evolution has probably already tweaked all the easy life-extension knobs in humans. Going further will require going beyond the capacities of existing mammalian physiology, which is a lot harder. Probably possible, but it requires a much deeper understanding of what's happening, and more radical interventions.