
yeah why have data when you can just use vibes

VZW has 146m lines. Each employee supports 14,600 customers; that seems like a reasonable number...

It's a very common approach. Telling low-performing white men that they should blame black people and women for their woes is a message that resonates well.

> There are literally dozens

Do you have his grave location by chance?

Scott Adams was an unrepentant racist.

I'm having steak and salad for dinner.

They have the largest free cash flow (over $100 billion a year). Meta and Amazon each generate less than half that per year, and Microsoft and Nvidia are between $60b and $70b per year. The statement reflects a poor understanding of their financials.

No, of course the training costs aren't that high. Apple's free cash flow over the next ten years will exceed a trillion dollars (they generate more than $100b per year). Obviously, training costs are a trivial amount compared to that figure.

What I'm wondering: their future cash flow may be massive compared to the cost of any conceivable rational project, but the market for servers and datacenters seems pretty saturated right now. Maybe, for all their available capital, they just can't get sufficient compute and storage on a reasonable schedule.

I have no idea what AI involves, but "training" sounds like a one-and-done thing - so how is the result "stored"? If you have trained up a Gemini, can you "clone" it, and if so, what is needed?

I was under the impression that all these GPUs and such were needed to run the AI, not just to ingest the data.


> but how is the result "stored"

Like this: https://huggingface.co/docs/safetensors/index
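A minimal sketch of what that looks like in practice (assuming PyTorch plus the safetensors library from that link; the layer names and shapes below are made up for illustration):

    import torch
    from safetensors.torch import save_file, load_file

    # Pretend these are the learned parameters of a tiny two-layer model.
    weights = {
        "layer1.weight": torch.randn(1024, 768),
        "layer1.bias": torch.zeros(1024),
        "layer2.weight": torch.randn(768, 1024),
    }

    # "Storing" the training result is just serializing the tensors to disk...
    save_file(weights, "model.safetensors")

    # ...and "cloning" is copying that file and loading it back anywhere.
    restored = load_file("model.safetensors")
    print({name: t.shape for name, t in restored.items()})

The expensive part is producing the numbers in that dictionary; once trained, the weights are just a (very large) file you can copy freely.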


Yes, serving requires infra too, but you can use infra optimized for serving; Nvidia GPUs are not the only game in town.

Theoretically it would be much less expensive to just continue to run the existing models, but ofc none of the current leaders are going to stop training new ones any time soon.

So are we on a hockey stick right now, where each new model is so much better than the previous one that you have to keep training?

Because almost every previous case of something like this eventually leveled out.


Hiring the right people should also be trivial with that amount of cash.

The cash pile is gone; they have been actively repurchasing shares.

They still generate roughly $100 billion in free cash per year, which is plowed into the buybacks.

They could spend more cash than any other competitor in the industry. It's ludicrous to say that they would have to burn 10 years of cash flow on a (relatively) trivial investment in model development and training. That statement reflects a poor understanding of Apple's cash flow.

