Hacker News | donavanm's comments

You probably don't go all in, but try to find real assets that are not _too_ strongly correlated and are likely to have durable long term value. The most compelling answers I've heard are things like desirable real estate, commodities, or futures/contracts like water rights. Even then it's not zero equities, just a much lower allocation to minimize downside while still capturing some of the exuberant growth.

Generation and structure are important, but IME IDs aren't complete without considering representation: encoding and opacity.

* User facing IDs must be opaque. If users can infer any structure or ordering from your ID they _will_ use it, and they _will_ create awkward dependencies on "your" implementation detail. My favorite example is the multi-year and many, many dev-years of effort that went into extending EC2 instance IDs. They were already assumed/intended to be opaque until clever users inferred the structure! The simplest answer, something like a block cipher, is so cheap as to be free (and can be accounted for as part of versioning). See the sketch after this list.

* Encoding should be tailored for the primary UX. E.g. the base32 variants are reasonably efficient and accommodating of text selection & input. Dictionary schemes (ala S/KEY RFC 2289 or BIP39) may be more appropriate for voice communication.

* Following ID structure -> opacity -> encoding, you should probably account for the block size and encoding efficiency to minimize padding or excess characters.
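A minimal sketch of the block-cipher-plus-base32 idea, assuming pycryptodome; the key, prefix, and function names are made up for illustration, not taken from any real system's scheme:

```python
# Hypothetical sketch: make a sequential internal ID opaque with a block
# cipher, then base32-encode it for copy/paste friendly display.
import base64
from Crypto.Cipher import AES  # pycryptodome

SECRET_KEY = b"0123456789abcdef"  # placeholder 16-byte key; manage real keys properly

def to_public_id(internal_id: int, prefix: str = "i-") -> str:
    cipher = AES.new(SECRET_KEY, AES.MODE_ECB)  # a single fixed-size block, so ECB is enough here
    block = internal_id.to_bytes(16, "big")     # fit the counter into one 128-bit block
    token = base64.b32encode(cipher.encrypt(block)).decode().rstrip("=")
    return prefix + token.lower()

def from_public_id(public_id: str, prefix: str = "i-") -> int:
    cipher = AES.new(SECRET_KEY, AES.MODE_ECB)
    raw = public_id[len(prefix):].upper()
    raw += "=" * (-len(raw) % 8)                # restore the base32 padding stripped above
    return int.from_bytes(cipher.decrypt(base64.b32decode(raw)), "big")

print(to_public_id(42))                  # opaque token, no visible ordering
print(from_public_id(to_public_id(42)))  # 42
```

On the last point: a 128-bit block base32-encodes to 26 characters (2 bits of slack), while a 64-bit block needs only 13 characters (1 bit of slack), so block size and encoding together set the ID length.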


I read the report when it came out. From memory, no. It never had any components or certification for human-occupied pressure vessels. IIRC there are no existing regs for carbon fiber and it would have cost something like $50M to do the design and test work. They did buy some things, like the viewport, from companies who do certified parts, but opted for the same design minus any test certs to save money. The craft was never certified or inspected by the USCG. It did have a registration for a while, but they had to play find-a-new-district-sign-off shell games, then… just stopped bothering.


Thanks for the detailed answer! It doesn't surprise me at all.


Bad news: 60/40 hasn't been a diversification strategy the last few years. 1) positive correlation between equities and fixed income, 2) US treasuries/dollars have not had a “flight to safety” bump when volatility or bad news happens recently, 3) Treasury moving to “all” short term debt, so short term rate cuts result in long term rate increases.
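A quick way to sanity-check point 1 (the tickers and rolling window below are my own assumptions, not from the comment), using SPY and TLT as rough proxies for the equity and long-bond legs:

```python
# Hypothetical check of the stock/bond correlation claim using yfinance.
# SPY ~ the 60 (equities), TLT ~ the 40 (long-duration treasuries).
import yfinance as yf

prices = yf.download(["SPY", "TLT"], period="5y", auto_adjust=True)["Close"]
returns = prices.pct_change().dropna()

# ~63 trading days is roughly one quarter; a positive value means bonds fell
# alongside stocks, i.e. the "diversifier" wasn't diversifying.
rolling_corr = returns["SPY"].rolling(63).corr(returns["TLT"])
print(rolling_corr.tail())
print("mean rolling correlation:", round(rolling_corr.mean(), 2))
```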


> datacenters depend on used hardware sales to recoup cost

Who? I've worked for a few big infra companies with millions to billions in DC assets. After depreciation and then some, say 3-5 years, they're effectively scrap. It's more cost effective to buy new, denser racks than to keep paying MRC on the stranded space and power. The reseller is the cheaper/easier path: they're effectively paid to scrap the parts as e-waste and recover what they can.


You answered your own question.

Datacenters hire companies to do this on their behalf, as they don't have the internal know-how to do this. Even the big cloud companies do this.

They're on 3-5 year cycles, yet the hardware has a good 8 years of life in it. The reseller sifts through what comes out, sees what is still alive, scraps what isn't, and resells the rest.

What you don't understand is that this isn't a one-way relationship. If you're a major cloud company, your hardware is probably already worthless. Meta (and other companies) use a non-standard case and rack "standard" called Open Compute (OCP)... there is no resale market for these, as normal datacenters use normal racks with normal cases. Meta has to pay a company to dispose of these and loses 100% of their investment.

But let's say we go to someone else that does use standard hardware, and the rest of the industry buys this stuff up (to maintain existing fleets that don't need upgrading yet; not everybody is on some ridiculous 3-5 year churn). Then the company you partnered with to deal with your e-waste is likely to either give you a much better rate or even pay you for your hardware (if it is in high demand).

These enterprise inference machines have zero resale value. Even the OCP stuff I mentioned above has a small market (there are a few smaller datacenters out there toying with OCP due to its better density, but they aren't willing to buy new to test it out), but there is no market for these inference SBCs.

See my sibling comment to this for more information on why the SBCs are uniquely weird.


Maybe I read too much into “depend on used hardware sales.” I've worked for 2 of the 5 and 3 of the 15 largest US companies doing cloud and infra stuff. Recovered costs from EOL hardware have just never, ever mattered. Not even a rounding error on the P&L, and the hardware/DC org has 1000 higher value priorities. I'll admit maybe the offset costs were squirreled away in finance but not visible to the business.

Even with zero resale value that's “fine.” Any time I've owned capacity planning it'd be more cost effective to pay someone a multiple of the rack MRC to get the hardware out and free up the space and whips. The impediment was almost always free hands and coordination functions that were being spent on new adds rather than replacement.


E-waste disposal is a huge cost, and it's entirely possible you're not seeing the cost, or you're not aware of what a badly negotiated contract looks like.

Also, a lot of the industry runs on incredibly poor margins. The only datacenter space in the world right now printing money is either owned by clouds or owned by the AI bubble (which are sometimes the same companies, or the cloud leasing space to the AI bubble).

Mostly, profits are eaten by power deals (this is why Facebook put their biggest important DCs up where the cheapest power in the US is) or property ownership (buying land, building the DC, paying property taxes, maintaining the building, etc; that shit ain't cheap), and then you get to buy hardware and hopefully get customers.

Amazon, Google, Facebook, et al all cheat their way through every loophole known to man to keep the costs down and the profit high; not a lot of it is from scale, even though they're still trying to chase that to the end, too.


They tried to do a LOT. An absolutely huge amount of work went into trying to abstract all of the existing Code* services, big chunks of other AWS services, and then corp (and non-corp!) identity. The last part, getting human identity into AWS, is such a fundamental gap. In the end it's unsurprising that they couldn't get to a competitive place against GitLab/GitHub/etc. I do hope there's more success with Identity Center picking up some of those IdP pieces.


FYI, someone made their own firmware that drives the motor at a slower speed, which significantly reduces the noise.


I remember seeing that, though iirc it was a lot more surgery than I wanted to do on my blinds (which already had a shaky wife-acceptance-factor due to spotty zigbee connections). Thanks for the reminder, though.


Financing huge deals for use of OCI: https://finance.yahoo.com/news/oracle-corp-orcl-q4-2025-0701.... See Q4 FY25.

> Total Cloud Revenue (SaaS + IaaS): $6.7 billion, up 27%. CapEx (Full Year): $21.2 billion. The company is facing supply constraints, unable to meet the high demand for its cloud services, leading to scheduling customers into the future.

Much lower name recognition among smaller customers, but there are some big big name "AI" & B2C companies who have _huge_ spend with OCI. This isn't "rent a couple of instances"; it's much more like "provide a couple GW of compute for X years."


I don't know if this comment is one of ignorance or juvenile "well actually", but it is tragically misinformed. From an Australian perspective, all of the big players, CSR, Johns Manville, & James Hardie, knew asbestos was a significant hazard by _at least_ the 40s. There were early epidemiological studies of cancers around asbestos work sites, and workers, in the 50s here in NSW. Unions and gov health departments started to push back on exposure and seek meaningful damages in the 60s and 70s. There were _public_ campaigns about the dangers in the 70s and 80s. It wasn't meaningfully restricted, _and continued to be commonly used_, through the 80s. A complete ban, primarily for workplaces IIRC, wasn't introduced until 2003. The Randian wank fantasy of "the informed consumer knows best" has been repudiated innumerable times.

And, as others have pointed out, this is not an individual choice. The families who got asbestosis from washing their fathers' work clothes didn't make a choice. The residents of suburbs for miles around James Hardie's Camellia plant, where cancers bloomed, didn't have a choice. There is no expiration date on the dangers of friable asbestos. It remains hidden in the common environment forever, until someone else stumbles on it.


Akamai talked about it in the early 2000s. Facebook content folks had a decent paper describing the latency collection and realtime routing around 2011ish, something like "pinpoint" I want to say. Though, as you say, it was industry practice before then.

