It was a long time ago so I might be misremembering, but I think the idea was that Safari would leak the target of redirects cross-domain, which allowed the attacker to capture some of the OAuth tokens.
So Safari was not following the web browser specs, in a way that compromised OAuth in a common mode of implementation.
It's also a fundamental problem of security research: lots of irrelevant, highly contextual "vulnerabilities" submitted to farm internet points (driven by a broken CVE system). AI only amplifies this.
Sure, but at that point you go from bog standard to "enterprise-grade redundancy for every single point of failure", which I can assure you is more heavily engineered than many enterprises (source: see current outage). It's just not worth the manpower and dollars for the vast majority of businesses.
OK, you pull it into your own repo. Now where do you store it? Do you also have fallback stores for that? What about the things which aren't vendorable, i.e. external services?
Well, some engineer somewhere made the recommendation to go with AWS, even though it is more expensive than the alternatives. That should raise some questions.
Looking at the state of the original Coral TPU (which was basically abandoned, just like so much other regular Google stuff) would make me very wary of using this in a long-term product.
The Coral TPU does what it does, and it's not bad at it. The documentation is good, and quite a few people use them practically. They're readily available.
What's upsetting about its state? The lack of continued development?
However, to your point: being Google-affiliated is a huge red flag for longevity.
They were completely sold out for 2-3 years (2020 onwards), and Google wiped the documentation (https://coral.ai/products/accelerator/ redirects to the main page, which has no reference to the original Coral). I can't tell whether there's an official place to buy this. I see some on Amazon, but those might be resold units.
> But according to the repo, this project also uses both slice and frame multi-threading (as does ffmpeg, with all the tradeoffs).
Oh, I missed that since it doesn't have a separate file. In that case they're likely very similar performance-wise. H.264 wasn't well designed for CPUs - the arithmetic coding could've been done better - but it's not that challenging these days.
> And SIMD usage is basically table-stakes, and libavcodec uses SIMD all over the place?
SIMD _intrinsics_. libavcodec doesn't write its DSP functions in assembly for historical reasons - it does so because assembly is just better! It's faster, just as maintainable, at least as easy to read and write, and not any less portable (since intrinsics code already isn't portable…). Intrinsics are basically a poor way to generate the code you want, they interfere with other optimizations like autovectorization, and at that point you might as well write your own code generator instead.
The downsides of assembly are that it's harder to debug and that analyzers like ASan don't work on it.
The obvious bias of the models, when it comes to Chinese politics and history, certainly does not help here.