
You essentially have to run in Google's cloud to use them, and that probably limits their ability to break out. Anthropic might be doing this deal as a way to shore up its supply chain and the cost of both inference and training by leveraging Google's hardware and chip manufacturing expertise.

Several customers, like Citadel, run TPUs in their own datacenters (closer to exchanges).

Every TPU that's been made is in use and sold at a high margin; demand is not the issue.

Google does have a sort of temporary moat. They have a much better hardware supply line story than anyone else and the revenue to maintain that edge indefinitely.

This is the thing: Google is a real company with a well-established business, money of its own, hardware, server farms, etc. ChatGPT and Anthropic have none of that in the same way Google does. They have an incentive to lie and 'fake it till you make it' so they can get out of the 'risk zone' of collapsing back in on themselves. Google can throw money at Gemini all day.

That may be true for OpenAI, less so for Anthropic, which has much better margins. Both companies' CEOs have said as much in public.

No doubt, as of now, Google has a better business. But the same argument could have been made about Instagram or WhatsApp before Facebook (now Meta) acquired them.


If AI is commoditising, who is Bahrain and who are the Saudis?

The company with the access to cheap and plentiful energy and the real estate to build data centers will be Saudi Arabia in your analogy.

This is why SpaceX could be a dark horse in this race. Putting compute in space is expensive but so is building a data center in the US.


> Putting compute in space is expensive but so is building a data center in the US.

You know what's also really hard in a vacuum? Dissipating heat.


> You know what's also really hard in a vacuum? Dissipating heat

Correct. The economics of space-based DCs come down to permitting delays versus radiator mass.

At ISS-class radiators (12 to 15 kg/kW), you need almost decade-long delays on the ground (or 10+ percent interest rates) to make lifting worthwhile. Get down to the current state of the art, in the 5 to 10 kg/kW range, however, and you only need permitting delays of 2 to 3 years.

If there is a game-changing start-up waiting to be built, it's in someone commercialising a better vacuum-rated radiator.
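
For a rough feel of the trade-off, here is a back-of-envelope sketch in Python. The launch cost and ground capex figures are illustrative assumptions of mine, not numbers from the comment:

```python
# Back-of-envelope: orbital radiator launch premium vs the cost of a
# ground permitting delay. All dollar figures are illustrative assumptions.

def launch_premium_per_kw(radiator_kg_per_kw: float, launch_usd_per_kg: float) -> float:
    """Extra launch cost to reject 1 kW of heat in orbit."""
    return radiator_kg_per_kw * launch_usd_per_kg

def delay_cost_per_kw(ground_capex_per_kw: float, rate: float, delay_years: float) -> float:
    """Opportunity cost of capital idled by a permitting delay."""
    return ground_capex_per_kw * ((1 + rate) ** delay_years - 1)

# ISS-class radiators (~12 kg/kW) at an assumed $1,500/kg launch cost:
print(launch_premium_per_kw(12, 1500))             # 18000  ($/kW, radiators alone)

# A 10-year delay at 10% on an assumed $10k/kW ground build:
print(round(delay_cost_per_kw(10_000, 0.10, 10)))  # 15937  ($/kW)
```

The crossover obviously swings heavily on the assumed launch price, which is why falling launch costs (and better radiator specific mass) change the comparison so quickly.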


Would you want more wattage per kg for a better radiator?

Yes! Thank you–fixed.

Putting it somewhere globally central makes a lot of sense, just like it does for connecting airports.

Saudi will host the biggest data centers in the world


What does that mean?

> What does that mean?

I really couldn't have been more obscure, could I? :P

In 1932, "the first oil field in the Persian Gulf outside of Iran" was discovered in Bahrain [1]. (The same year Saudi Arabia announced unification [2].)

In the end, Saudi Arabia had larger reserves and wound up geopolitically dominating its first-moving rival. In commodities, the game tends to be about scale, in part through land grabbing, less about who got there first.

To close the analogy, if AI does wind up commoditised, the layers at which that commodity is held are probably between power and compute [3]. So if AI commoditises (commodifies?), Google selling compute (and indirectly power) to Anthropic and OpenAI is the smarter play than trying to advantage Gemini. (If AI doesn't commoditise, the opposite may be true: Google is supercharging a competitor.)

[1] https://en.wikipedia.org/wiki/Bahrain_Petroleum_Company

[2] https://en.wikipedia.org/wiki/Proclamation_of_the_Kingdom_of...

[3] The alternate hypothesis is it's at distribution.


Plus the whole thing of first mover advantage being a myth, especially in the tech industry

> Plus the whole thing of first mover advantage being a myth, especially in the tech industry

Source? That would be surprising!


https://hbr.org/2005/04/the-half-truth-of-first-mover-advant...

https://static1.squarespace.com/static/5654eb6ee4b0e19716ec5...

Showing how old I am with that reference

A more recent article https://www.productplan.com/learn/first-mover-advantage-fast...

I should say it's "mostly" a myth; there are some fleeting competitive advantages to the first mover, but a lot of them don't apply well to tech companies, and there isn't strong historical evidence supporting it.


Why? Being a first mover only counts for something if it can yield durable exclusivity. You should know this, being a VC and all. Real options, hello?

If you want to benefit massively from being a first mover, you'd better do the work of figuring out how you are going to acquire exclusivity that lasts long enough to keep most firms out.


I believe they were drawing a parallel to oil commoditization, but that's as far as I got.

The app layer is Bahrain.

Running AI at a loss long enough to kill the competition would run afoul of antitrust laws. Even more so since they’re bundling their AI products with their search monopoly.

Although I doubt this will stop them if they think it’s advantageous…


Lower real operating costs aren't the same thing as below-cost pricing.

US law here is nuanced. Good quick primer https://www.ftc.gov/advice-guidance/competition-guidance/gui...


I thought these types of antitrust laws weren't enforced at all anymore in the tech industry, and that it's been that way for decades. I mean, the sheer existence of Google shows that, right? What about Maps, Mail, Books... basically everything apart from Search? Why would an AI Mode as one category of search results be any different? They're not actively promoting Gemini in those search results. They're simply augmenting them with this new tool that now exists.

Yes, antitrust is very much theatre nowadays.

As long as it furthers American interests globally, monopoly is fine. Other countries need to take notice and start picking winners nationally in order to compete with the large American big-tech firms.


Eh, I think this is actually not a specifically American thing. More of a neo-liberal mindset. Competition may be good in the long term. But a monopoly now may mean more money in your pocket now. The tech giants definitely give the US some geo-political power in some cases but in general the US would be better off with more competition.

ed: @er2d, can't reply to your comment for some reason, so doing it here: I don't agree. In theory a monopoly decreases the necessity for R&D. Of course this becomes more complex if the R&D is funded or steered by the state. But look at the current state of LLMs. There is fierce competition between 3 US companies. But geopolitically it's the same as if there would be one monopoly. The US being the clear technological leader in an industry is not dependent on that industry being a domestic monopoly.

And for the Europe comment: I also don't agree. Look at Boeing & Airbus. Both are companies where the US & EU have decided that they need to ensure the existence of a domestic airplane manufacturer. So in these cases they support these companies (often in violation of international trade laws). But it has nothing to do with monopolies. If a state decides to support a company to ensure its existence, a monopoly is the logical consequence, not the aim. Because if that industry were profitable, it wouldn't need to be supported in the first place.

But all these tech companies are not in industries that would move off-shore or stop existing because they're not profitable enough, so it's an entirely different setting.


Nope, the reason for a monopoly is incentives for R&D and innovation.

The US understands that and allows it to happen as the former yields a compounding effect of power.

European states certainly don't get this.


You're wrong, actually. I suggest you read a book on industrial organisation and why monopoly is a more efficient market structure in relation to incentives for R&D.

Why do people comment on stuff they barely have an understanding of? Comical. People like you create noise.


TSMC ?

Airbus ?


Are you claiming they are tech firms in the manner of an Apple, Google, etc.?

lol


> run afoul of antitrust laws

Now, that’s a name I haven’t heard in a long time.


> antitrust laws. Even more so since they’re bundling their AI products with their search monopoly.

Couldn't this just be framed/spun as using search data for training? I don't see it being bundled enough to run afoul of antitrust.


> Running AI at a loss long enough to kill the competition would run afoul of antitrust laws.

Running at a loss long enough to kill the competition is basically the name of the game these days.

When Uber started, they were basically setting VC money on fire by selling rides at a loss to destroy the taxi market.


Who's going to enforce antitrust laws in this environment, pray tell?

>would run afoul of antitrust laws

Buwahahahahahahahhahah

They drop a little cash on some shitcoin the president controls and those problems go away.


I don't like aspartame because it's sickeningly sweet. I couldn't care less whether it's healthy or not.

No, it isn't. Twitter was absolutely brilliant marketing. It perfectly encapsulated what the site was at the time.

X is just a letter the current owner likes. It has absolutely no relevance to what the site does or is for.


I worked at Google. k8s does not really look at all like what they used internally when I was there, aside from sharing some similar-looking building blocks.

Yeah, but is the internal tool simpler? I'd be surprised.

Simpler to use? Yes. Simpler under the hood? No.

If increasing spending had almost no impact over time why would cutting spending have an impact?

If filling a leaky bucket had almost no impact over time, why would stopping filling the bucket have an impact?

But filling a leaky bucket does have an impact. You just have to fill it faster than it empties. Which is probably your point.

My point is different. Study after study shows that above a specific floor, additional spending has almost no impact on educational outcomes. The correlation is such that you can determine both that there is likely no leak and that additional filling has no effect.

The stuff that does have an impact is much harder to move the needle on though so everyone just scapegoats funding instead. Stuff like building up the nuclear family in an area, increasing income mobility, and holding parents accountable for child outcomes do have a measurable effect but are politically intractable today.


Unfortunately there is much more to the story than a number on a line. Just because you increase spending doesn't mean that the spending isn't earmarked for items like digital projectors and virtual textbooks that have minimal impact on learning outcomes.

So, theoretically, if your spending went to hiring more and better teachers, better HVAC, and more/smaller classes, then spending would have an impact, as has been experimentally verified. Especially if you also paired it with getting rid of teachers who don't meet the bar.

But as a practical matter that is not what happens when a campaign to increase funding for a school happens. The problem is not insufficient money, the problem is not enough skill and political will in how you spend the money.


>If increasing spending had almost no impact over time why would cutting spending have an impact?

big if true. we should probably cut 100% of spending in that case.

edit: not sure if people are missing the /s, or if people legitimately believe that cutting spending has no impact.


I probably use a different interpretation of Postel's law. I try not to "break" on anything I might receive, where "break" means "crash, silently corrupt data, and so on". But that usually just means I return an error to the sender. Is this what Postel meant? I have no idea.

I don't think that interpretation makes that much sense. Isn't it a bit too... obvious that you shouldn't just crash and/or corrupt data on invalid input? If the law were essentially "Don't crash or corrupt data on invalid input", it would seem to me that an even better law would be: "Don't crash or corrupt data." Surely there aren't too many situations where we'd want to avoid crashing because of bad input, but we'd be totally fine crashing or corrupting data for some other (expected) reason.

So, I think not crashing because of invalid input is probably too obvious to be a "law" bearing someone's name. IMO, it must be asserting that we should try our best to do what the user/client means so that they aren't frustrated by having to be perfect.


I actually don't think it's that obvious at all (unless you are a senior engineer). It's like the classic joke:

A QA engineer walks into a bar and orders a beer. She orders 2 beers.

She orders 0 beers.

She orders -1 beers.

She orders a lizard.

She orders a NULLPTR.

She tries to leave without paying.

Satisfied, she declares the bar ready for business. The first customer comes in and orders a beer. They finish their drink, and then ask where the bathroom is.

The bar explodes.

It's usually not obvious when starting to write an API just how malformed the data could be. It's kind of a subconscious bias to sort of assume that the input is going to be well-formed, or at least malformed in predictable ways.

I think the cure for this is another "law"/maxim: "Parse, don't validate." The first step in handling external input is to try to squeeze it into as strict a structure with as many invariants as possible and, failing that, return an error.

It's not about perfection, but it is predictable.
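
A minimal Python sketch of "parse, don't validate" applied to the joke's bar (the `Order` type and `parse` names are made up for illustration):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Order:
    beers: int  # invariant: a non-negative integer, enforced at parse time

    @classmethod
    def parse(cls, raw: object) -> "Order":
        # Squeeze untrusted input into a strict structure up front, so
        # everything downstream can rely on the invariant instead of
        # re-checking (or forgetting to check) it.
        if not isinstance(raw, int) or isinstance(raw, bool) or raw < 0:
            raise ValueError(f"not a valid beer count: {raw!r}")
        return cls(beers=raw)

print(Order.parse(2).beers)  # 2
# Order.parse(-1), Order.parse("a lizard"), Order.parse(None) all raise
# ValueError at the boundary, before any business logic runs.
```

The point is that the lizard and the NULLPTR get rejected at the door, once, rather than in every function that later touches the order.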


Hmm. Fair point. It's entirely possible that it's not obvious and that the "law" is almost a "reminder" of sorts to not assume you're getting well-formed inputs.

I'm still skeptical that this is the case with Postel's Law, but I do see that it's possible to read it that way. I guess I could always go do some research to prove it one way or the other, but... nah.

And yes, "Parse, don't validate." is one of my absolute favorite maxims (good word choice, by the way; I would've struggled on choosing a word for it here).


Right, even for senior engineers this can be hard to get right in practice. "Parse, don't validate" is certainly one approach to the problem. Choosing languages that force you to get it right is another.

Yea, I interpret it as the same thing: On invalid input, don't crash or give the caller a root shell or whatever, but definitely don't swallow it silently. If the input is malformed, it should error and stop. NOT try to read the user's mind and conjure up some kind of "expected" output.

I think perhaps a better wording of the law would be: "Be prepared to be sent almost anything. But be specific about what you will send yourself".

I mean, if you are them and trying to detect when people are using your system incorrectly, the detection system is going to be a little bit flaky. How do they prove you aren't violating your ToS by using OAuth for a system they didn't approve that usage for?

The fault here is not with Anthropic. It lies with cowboy coders creating a system that violates a provider's terms of service and creating an adversarial relationship.


Why assume it is javascript? The article doesn't indicate the language anywhere that I can see.

Ok, let's say that it is not JS, but an untyped, closure-based programming language with a strikingly similar array and sort API to JS. Sadly, this comparator is still wrong for any sorting API that expects a general three-way comparison, because it does not handle equality as a separate case.

And to tie it down to the mathematics: if a sorting algorithm asks for a full comparison between a and b, and your function returns only a bool, a result of false conflates "b comes before a" with "a is the same as b". This fails to represent equality as a separate case, which is exactly the kind of imprecision the author should be trying to teach against.


> Sadly, this comparator is still wrong for any sorting API that expects a general three-way comparison, because it does not handle equality as a separate case.

Let's scroll up a little bit and read from the section you're finding fault with:

  the most straightforward type of order that you think of is linear order i.e. one in which every object has its place depending on every other object
Rather than the usual "harrumph! This writer knows NOTHING of mathematics and has no business writing about it," maybe a simple counter-example would do, i.e. present an ordering "in which every object has its place depending on every other object" and "leaves no room for ambiguity in terms of which element comes before which" but also satisfies your requirement of allowing 'equal' ordering.

Your reply would only work if the article were consistently talking about a strict order. However, it is not. It explicitly introduces linear order using reflexivity and antisymmetry, in other words a non-strict `<=`-style relation, in which equality IS a real case.

If the author wanted to describe a 'no ties' scenario where every object has its own unique place, they should have defined a strict total order.

They may know everything about mathematics for all I care. I am critiquing what I am reading, not the author's knowledge.

Edit: for anyone wanting a basic example, ["aa", "aa", "ab"] under the usual lexicographic <=. All elements are comparable, so "every object has its place depending on every other object." It also "leaves no room for ambiguity in terms of which element comes before which": aa = aa < ab. Linear order means everything is comparable, not that there are no ties. By claiming "no ties are permitted" while defining the order as a reflexive, antisymmetric relation, the author is mixing a strict-order intuition into a non-strict-order definition.


  Definition: An order is a set of elements, together with a binary relation between the elements of the set, which obeys certain laws.

  the relationship between elements in an order is commonly denoted as ≤ in formulas, but it can also be represented with an arrow from first object to the second.
All of the binary relations between the elements of your example are:

"aa" ≤ "aa"

"ab" ≤ "ab"

"aa" ≤ "ab"

> By claiming "no ties are permitted" while defining the order as a reflexive, antisymmetric relation, the author is mixing a strict-order intuition into a non-strict-order definition.

There aren't any ties to permit or reject.

  we can formulate it the opposite way too and say that each object should not have the relationship to itself, in which case we would have a relation than resembles bigger than, as opposed to bigger or equal to and a slightly different type of order, sometimes called a strict order.

It's obviously not a general 3-way comparison API, _because_ it's returning bool!

Extremely strange to see a sort comparator that returns bool, which is one of the two common comparator APIs, and assume it's a wrong implementation of the other common sort API.

I do see why you're assuming JS, but you shouldn't assume it's any extant programming language. It's explanatory pseudocode.
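
The two conventions being talked past each other here can be put side by side. A Python sketch (the names `less`, `three_way`, and `lift` are mine), using the stdlib adapter `functools.cmp_to_key`, which expects the three-way style:

```python
from functools import cmp_to_key

# Convention 1: a strict less-than predicate returning bool, as in
# Scheme's sort or a C++ Compare object.
def less(a, b):
    return a < b

# Convention 2: a three-way comparator returning negative/zero/positive.
def three_way(a, b):
    return (a > b) - (a < b)

# A bool predicate carries enough information to recover all three cases:
def lift(lt):
    def cmp(a, b):
        if lt(a, b):
            return -1
        if lt(b, a):
            return 1
        return 0  # neither precedes the other: treat as equivalent
    return cmp

print(sorted([3, 1, 2], key=cmp_to_key(three_way)))   # [1, 2, 3]
print(sorted([3, 1, 2], key=cmp_to_key(lift(less))))  # [1, 2, 3]
```

So a bool comparator isn't a broken three-way comparator; it's a different, equally standard interface, and each can be mechanically recovered from the other.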


It could be a typed programming language where the sort function accepts a strict ordering predicate, like for example in C++ (https://en.cppreference.com/cpp/named_req/Compare).

> an untyped closure-based programming language with a similar array and sort api to JS

Ah! You're talking about Racket or Scheme!

```

> (sort '(3 1 2) (lambda (a b) (< a b)))

'(1 2 3)

```

I suppose you ought to go and tell the r6rs standardisation team that an HN user vehemently disagrees with their api: https://www.r6rs.org/document/lib-html-5.96/r6rs-lib-Z-H-5.h...

To address your actual pedantry, clearly you have some implicit normative belief about how a book about category theory should be written. That's cool, but this book has clearly chosen another approach, and appears to be clear and well explained enough to give a light introduction to category theory.


The syntax in the article is not Scheme; you can clearly see it in the comment of mine you're responding to.

As for your 'light introduction' comment: even ignoring the code, these are not pedantic complaints but basic mathematical and factual errors.

For example, the statement of Birkhoff’s Representation Theorem is wrong. The article says:

> Each distributive lattice is isomorphic to an inclusion order of its join-irreducible elements.

That is simply not the theorem. The theorem says: "Any finite distributive lattice L is isomorphic to the lattice of lower sets of the partial order of the join-irreducible elements of L." You can read the definition on Wikipedia [0].

The article is plain wrong. The join-irreducibles themselves form a poset. The theorem is about the lattice of down-sets of that poset, ordered by inclusion. So the article is NOT simplifying, but misstating one of the central results it tries to explain. Call it a 'light introduction' as long as you want. This does not excuse the article from reversing the meaning of the theorem.

It's basically like saying 'E=m*c' is a simplification of 'E=m*c^2'.

[0] https://en.wikipedia.org/wiki/Birkhoff%27s_representation_th...
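
For anyone who wants to see the down-set version concretely, here is a small Python check on the divisor lattice of 12 (join = lcm, meet = gcd); the variable names and the choice of example are mine, not from the article:

```python
from itertools import chain, combinations
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

n = 12
L = [d for d in range(1, n + 1) if n % d == 0]  # divisor lattice: join = lcm, meet = gcd

def strictly_below(x):
    return [y for y in L if y != x and x % y == 0]

# x is join-irreducible iff it is not the bottom element and is not the
# join (lcm) of two elements strictly below it.
J = [x for x in L if x != 1
     and all(lcm(a, b) != x for a in strictly_below(x) for b in strictly_below(x))]
print(J)  # [2, 3, 4]

# Enumerate the lower sets (down-sets) of the poset (J, divides).
def is_lower_set(S):
    return all(j in S for s in S for j in J if s % j == 0)

subsets = chain.from_iterable(combinations(J, r) for r in range(len(J) + 1))
down_sets = [set(S) for S in subsets if is_lower_set(set(S))]

# Birkhoff: x |-> {j in J : j divides x} is an isomorphism from L onto
# the lattice of down-sets of J, ordered by inclusion.
image = [{j for j in J if x % j == 0} for x in L]
assert len(down_sets) == len(L) == 6
assert all(s in down_sets for s in image)
assert len({frozenset(s) for s in image}) == len(L)
```

Note the lattice has 6 elements but only 3 join-irreducibles; it's the 6 down-sets of those 3 elements (not the elements themselves) that match up with the lattice, which is exactly the distinction the theorem statement turns on.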


> That is simply not the theorem.

> The article is plain wrong.

> This does not excuse the article from reversing the meaning of the theorem.

What's with this hyperbole? Even the best math books have loads of errors (typographical, factual, missing conditions, insufficient reasoning, incorrect reasoning, ...). Just look at any errata list published by any university for their set books! Nobody does this kind of hyperbole for errors in math books. Only on HN do you see this kind of takedown, which is frankly very annoying. In universities, professors and students just publish errata and focus on understanding the material, not tearing it down with such dismissive tone. It's totally unnecessary.

I don't know if you've got an axe to grind here or if you're generally this dismissive, but calling it "simply not the theorem" or "plain wrong" is a very annoying kind of exaggeration that misses all nuance and human fallibility.

Yes, the precise statement of Birkhoff's representation theorem involves down-sets of the poset of join-irreducibles. Yes, the article omits that. I agree that it is imprecise.

But it's not "reversing the meaning". It still correctly points to reconstructing the lattice via an inclusion order built from join-irreducibles. What's missing is a condition. It is sloppy wording but not a fundamental error like you so want us to believe.

Feels like the productive move here is just to suggest the missing wording to the author. I'm sure they'll appreciate it. I don't really get the impulse to frame it as a takedown and be so dismissive when it's a small fix.


Frankly, everything I have seen about it says that the people using LLMs to develop it cannot be trusted with LLMs, so no, I am not using it. I'm not anti-LLM; I'm anti-stupid-LLM-usage.
