Hacker News | Centigonal's comments

You're right: the MapReduce pattern is very old, and it's well known that applying it to AI training to enable geographically distributed training runs would be very beneficial. We haven't done it yet because model-training workloads are harder to parallelize under high inter-node latency than many traditional workloads.

This paper proposes a work partitioning scheme that removes a constraint that makes parallelizing AI training inefficient. The idea of a work partitioning scheme isn't novel, but the scheme itself is.
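For readers who haven't run into the pattern: the canonical MapReduce example is a word count, sketched below in plain Python (no framework; the function names are just illustrative). The point is that the map phase is embarrassingly parallel across workers, which is exactly the property that gradient synchronization in training lacks.

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit (word, 1) pairs from each document, independently per doc."""
    for doc in documents:
        for word in doc.split():
            yield word, 1

def shuffle(pairs):
    """Shuffle: group all emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: combine each key's values, independently per key."""
    return {key: sum(values) for key, values in groups.items()}

docs = ["the cat sat", "the dog sat"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # {'the': 2, 'cat': 1, 'sat': 2, 'dog': 1}
```

Because each map task and each reduce task touches disjoint data, they can run on machines that rarely talk to each other; training steps, by contrast, need frequent all-to-all communication, which is where the latency constraint bites.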


Germany resisted Google Street View until 2023, which I thought was very impressive.

"UI" is a category that contains GUI as well as other UIs like TUIs and CLIs. "UX" encompasses a lot of design work that can be distilled into the UI, or into app design, or into documentation, or somewhere else.

> “UX" encompasses a lot of design work that can be distilled into the UI

like how git needs you to "commit" changes as if you're committing a change to a row in a database table? that's a design/experience issue to me, not an "it has commands" issue.


it's a last gasp... except when it isn't, like with Google, YouTube, Facebook, Reddit, etc.

You mean you're not excited to use Copilot Chat in the Microsoft 365 Copilot App??

(This is the real, official name for the AI button in Office)


Microsoft 365 Copilot For Business? (which isn't real - but yeah, the naming is...)

Microsoft spent a lot of effort developing a really powerful editing interface. If you can replace that interface with a text input box, their application moat becomes a lot shallower.

Just like fiber, just like power lines, just like... rail!?

The ghost of Cornelius Vanderbilt is rubbing his hands in glee.


I think LOC and "writing code" are largely irrelevant as metrics of productivity in a world with LLMs that love to churn out overly loquacious code.

I think the right way to explain the work done sounds something like, "I worked with Claude to create an app that does ______. I know it works because ______."


Is your definition of bullish "believes the technology will be widely adopted across society and accrue significant wealth to its owners?" - if so, I think it's very clear how someone could be bullish on AI and not blockchain. You don't have to like AI to see it as an inexorable transformer (ha!) of society and wealth.

Is your definition of bullish "believes the technology is a major net good for society?" - if so, you're comparing two technologies with significant social aspirations that come from very different philosophical backgrounds. While both are techno-optimist, blockchain is a fundamentally libertarian technology, while generative AI comes from a more utilitarian, capital-focused background. People who value individual freedom above all else will get excited about blockchain and feel mixed-to-negative about AI, while people who want to elevate the overall capability of the human race to the exclusion of anything else will get excited by AI and see blockchain as a parlor trick.


I'll add to this by saying that globalization works as well as it does because the average person would suffer dramatically from a major war and the resulting breakdown of global supply chains. People who are wealthy enough to move anywhere in the world (including to a military-grade bunker somewhere remote like New Zealand) if their current domicile is negatively affected don't have as strong of an incentive to maintain peace.


As a corollary: people who, because of geography, are unlikely to suffer any traditional or novel military consequences of a war in country <X> (e.g. Americans w.r.t. a war in the Middle East) will have only moral reasons for avoiding such a war, beyond the risk to members of their family and friends. This makes the risk from such countries significantly worse than from those that are militarily exposed should they choose to attack another.

Of course, none of that stops terroristic responses to war, but those by themselves affect relatively small numbers of people (or have done so far; obviously terroristic use of nuclear weapons would change that).

We can see all of this in the voices of the segment of the American population that is "all in" for the war in Iran, safe in their belief that they will suffer no military consequences from it.


> People who are wealthy enough to move anywhere in the world (including to a military-grade bunker somewhere remote like New Zealand) if their current domicile is negatively affected don't have as strong of an incentive to maintain peace.

Eh, if you’re a billionaire factory owner and landlord, the kind of war that would send you to a military grade bunker in New Zealand will be bad for your factories, properties, workers and tenants.

Also, a man can only go to the opera if the singers and orchestra aren’t busy scavenging for food or fighting mutant wolves. And the same is true of most other entertainment, fine dining, fashion and suchlike.

Sane wealthy people gain nothing from a world scale war, and in fact would face a big loss in quality of life.

