When I tried the Gemini/AI formula it didn't work very well; gpt-5 mini or nano are cheap and generally do what you want if you're asking something straightforward about a piece of content you give them. You can also supply a JSON schema to make the results more deterministic.
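For instance, a minimal sketch of the kind of schema you might pass via a provider's structured-output option (the field names and task here are made up for illustration):

```python
import json

# Hypothetical schema for a sentiment-classification task; something like
# this goes in the request's structured-output / response-format option.
sentiment_schema = {
    "type": "object",
    "properties": {
        "sentiment": {"type": "string", "enum": ["positive", "negative", "neutral"]},
        "confidence": {"type": "number", "minimum": 0, "maximum": 1},
    },
    "required": ["sentiment", "confidence"],
    "additionalProperties": False,
}

# A conforming reply then parses straight into a dict with known keys.
reply = '{"sentiment": "positive", "confidence": 0.92}'
parsed = json.loads(reply)
```

The content still isn't fully deterministic, but at least its shape is.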
The linked post is more a story of someone not understanding what they're deploying. If they had found a random blog post about spot instances, they likely would have made the same mistake.
In this case, the LLM suggested a potentially reasonable approach and the author screwed themselves by not looking into what they were trading off for lower costs.
It's also a story of AWS being so complicated in its various offerings and pricing that a person feels the need to ask an AI which option is best, because it isn't obvious at all without a lot of study.
No, they just ship their org chart, which has grown so much that even internally nobody knows what's being shipped three branches over on the tree.
It is because they have a different vesting schedule from most companies. Instead of 25% of your stock annually with a one-year cliff, they pay out 5%, 15%, 40%, 40%. So once your signing bonus runs out, your shares ramp up.
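To make the difference concrete, here's a rough comparison assuming a hypothetical $400k grant over four years (the grant size is made up; the percentages are the ones above):

```python
grant = 400_000  # assumed total grant value, for illustration only

standard = [0.25, 0.25, 0.25, 0.25]    # typical schedule with a 1-year cliff
backloaded = [0.05, 0.15, 0.40, 0.40]  # the schedule described above

for year, (s, b) in enumerate(zip(standard, backloaded), start=1):
    print(f"year {year}: standard ${grant * s:,.0f} vs backloaded ${grant * b:,.0f}")
```

So in year one you'd vest $20k instead of $100k, which is the gap the signing bonus is there to paper over.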
My personal favorite take on this is the Dark Forest concept (probably because I just finished reading the book).
It makes sense to me that, unless there were some way of ensuring mutually assured destruction, any advanced alien race would simply eliminate any sentient species it became aware of that could possibly become a threat in the future (whether that's hundreds, thousands, or millions of years away).
Dunno. Say a species comes to inhabit 10% of a galaxy. They aren't particularly susceptible to losing a single planet, and they're smart (and advanced) enough to mitigate problems with the pesky upstarts.
Without FTL communications it's likely to be more of an organization than a single government: think NATO for space, with agreements for unified defense in exchange for a small fraction of GNP or similar.
They might, for instance, require all ships to have transponders (like ships and planes do today), and deploy self-replicating robot sensors that maintain the ideal density of coverage for, say, 1,000 light years around their space (much like current submarine sensor networks and satellites monitor Earth and its oceans). Said sensors would track all ships within range, and any energy expenditures capable of accelerating mass to near the speed of light.
They would of course track their competition, but it would be far from easy for a small upstart to seriously annoy such a civilization. Even for two civilizations that each held 10% of the galaxy, it wouldn't make particular sense to go to war, much as today's MAD uses nukes to prevent (so far) WW3.
Not to mention that hitting a planet isn't that easy except on a pretty short timescale: any civilization that can launch ships at a decent fraction of light speed could likely change their planet's orbital speed by 0.01% if they thought an extinction-level event was heading their way in the next few decades.
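A back-of-the-envelope check with Earth-like numbers (all values assumed for illustration, and treating the drift as simple linear displacement along the orbit):

```python
orbital_speed = 30_000          # m/s, roughly Earth's orbital speed
delta_v = orbital_speed * 1e-4  # a 0.01% change is only ~3 m/s
seconds_per_year = 3.15e7
years = 30

drift = delta_v * seconds_per_year * years  # accumulated displacement, m
planet_diameter = 1.27e7                    # m, roughly Earth

print(drift / planet_diameter)  # hundreds of planet diameters
```

Even that tiny nudge moves the planet by a couple hundred of its own diameters over a few decades, far more than an incoming projectile launched long ago could account for.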
Given this, it's also possible that it would take such minimal effort for such an advanced civilization to destroy upstarts, that they would automate it and be done with it.
They could, but it's not a good fit for the dark forest model. Sure, occasionally a colony on the edge of your civilization is lost, but that happens. There's much more to be gained by not going to war, and the upstarts aren't lethal unstoppable machines.
You might be misremembering the numpy.random api, since you appear to only be generating one or two random numbers in your tests.
I get results about 100x slower than those of the article's CPU benchmark when using numpy on a 2019 MBP:
python -m timeit -s 'import numpy as np; x = np.zeros(1000000)' -u msec 'np.random.normal(x)'
10 loops, best of 5: 30.9 msec per loop
edit: I'm no python wizard myself, so I'm perfectly willing to believe that there's a better way to generate a random array in numpy than what I'm doing.
I don't think that is measured correctly: a CPU at 5GHz takes 0.2ns per cycle, so how is it supposed to generate 1 billion (!) numbers in only 100x that time? Even if you had 64c/128t Threadrippers in a dual-socket system and this workload distributed perfectly, you'd only get to within 100,000x the time of a single cycle. There's still a factor of 10,000 left.
You might want to try this one:
np.random.normal(size=1000000000)
For size=1000 I get on my machine about 36 ns per number (total 36 µsecs). You probably measured for size=1 but a normal of 1 billion, not 1 billion numbers at a normal of 1.
Ah yes, thanks everyone for pointing out my mistake: I wasn't using the `size` parameter, so the array I passed was being interpreted as the distribution's mean (`loc`) rather than as a sample count. My mistake!
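For anyone following along, the `size` parameter is what controls how many samples a single call returns (a minimal sketch):

```python
import numpy as np

# size controls how many samples one call draws; drawing them all in
# one call amortizes the per-call overhead.
samples = np.random.normal(size=1_000_000)
print(samples.shape)  # (1000000,)
```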
It's called Middow and it basically takes two locations and finds the midpoint between them. You can then search for pretty much anything around the midpoint and meet people in the middle or whatnot.
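For the curious, the standard great-circle midpoint formula looks something like this (not necessarily what the app uses internally):

```python
import math

def midpoint(lat1, lon1, lat2, lon2):
    """Great-circle midpoint of two points, degrees in and degrees out."""
    la1, lo1, la2, lo2 = map(math.radians, (lat1, lon1, lat2, lon2))
    bx = math.cos(la2) * math.cos(lo2 - lo1)
    by = math.cos(la2) * math.sin(lo2 - lo1)
    lat_m = math.atan2(
        math.sin(la1) + math.sin(la2),
        math.sqrt((math.cos(la1) + bx) ** 2 + by ** 2),
    )
    lon_m = lo1 + math.atan2(by, math.cos(la1) + bx)
    return math.degrees(lat_m), math.degrees(lon_m)

print(midpoint(0.0, 0.0, 0.0, 90.0))  # ≈ (0.0, 45.0)
```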
I have already pushed out the first update, which will save the addresses you search for so you don't have to type the same ones over and over. It should be approved by the App Store tomorrow or soon after.
Also, if you wanna be a sport and give me some free publicity, digg this story!