Hacker News | fisf's comments

> I suspect that Chinese models are largely forced to open source as a trust building step because of general China-phobia in the west.

The obvious bias of the models, when it comes to Chinese politics and history, certainly does not help here.


TBF, it's obvious to us in the same way that many of our own biases are not obvious to us.


I do not understand. If auth is bypassable, this is not a browser issue, right?


It was a long time ago so I might be misremembering, but I think the idea was that Safari would leak the target of redirects cross-domain, which allowed the attacker to capture some of the OAuth tokens.

So Safari was not following the web browser specs, in a way that compromised OAuth in a common mode of implementation.


It's also a fundamental problem of security research. Lots of irrelevant, highly contextual "vulnerabilities" submitted to farm internet points (driven by a broken CVE system). AI only amplifies this.


Because you should not depend on a single payment provider, and you should not pull unvendored images, packages, etc. directly into your deployment.

There is no reason to have such brittle infra.


Sure, but at that point you go from bog-standard to "enterprise-grade redundancy for every single point of failure", which I can assure you is more heavily engineered than many enterprises (source: see the current outage). It's just not worth the manpower and dollars for the vast majority of businesses.


Pulling unvetted stuff from Docker Hub, npm, etc. is not a question of redundancy.


OK, you pull it to your own repo. Now where do you store it? Do you also have fallback stores for that? What about the things which aren't vendorable, i.e. external services?


Well, some engineer somewhere made the recommendation to go with AWS, even though it is more expensive than the alternatives. That should raise some questions.


An engineer? Maybe. An executive swindled by a sales team? Definitely.


If you are running k8s on-prem, the "easy" way is to use a mature operator that takes care of all of that.

https://github.com/percona/percona-xtradb-cluster-operator, https://github.com/mariadb-operator/mariadb-operator, or CNPG for Postgres needs. They all work reasonably well and cover all the basics (HA, replication, backups, recovery, etc.).


This is blatantly false. There are MLIR-based backends for other languages. This is explicitly mentioned on the landing page at https://developers.googleblog.com/en/introducing-coral-npu-a....

Please take the language trolling somewhere else.


Looking at the state of the original Coral TPU (which was basically abandoned, just like plenty of other Google stuff) would make me very wary of using this in a long-term product.


The Coral TPU does what it does, and it's not bad at it. The documentation is good, and quite a few people use them in practice. They're readily available.

What's upsetting about its state? The lack of continued development?

However, to your point: being Google-affiliated is a huge red flag for longevity.


They were completely sold out for 2-3 years (2020 onwards), and Google wiped the documentation (https://coral.ai/products/accelerator/ redirects to the main page, which has no reference to the original Coral). I can't tell whether there's an official place to buy this. I see some on Amazon, but that might be resold.


People like to think that they are anti-establishment and disruptive at a startup. They conveniently ignore who is writing the cheques.


Care to elaborate? Not shitting on libavcodec here; I would also guess it just beats a new project on raw performance.

But according to the repo, this project also uses both slice and frame multi-threading (as does ffmpeg, with all the tradeoffs).

And SIMD usage is basically table-stakes, and libavcodec uses SIMD all over the place?


> But according to the repo, this project also uses both slice and frame multi-threading (as does ffmpeg, with all the tradeoffs).

Oh, I missed that since it doesn't have a separate file. In that case they're likely very similar performance-wise. H.264 wasn't well-designed for CPUs because the arithmetic coding could've been done better, but it's not that challenging these days.

> And SIMD usage is basically table-stakes, and libavcodec uses SIMD all over the place?

SIMD _intrinsics_. libavcodec doesn't write its DSP functions in assembly for historical reasons - it does so because hand-written assembly is just better! It's faster, just as maintainable, at least as easy to read and write, and not any less portable (since that code already isn't portable…). Intrinsics are basically a poor way to generate the code you want, they interfere with other optimizations like autovectorization, and at that point you might as well write your own code generator instead.

The downsides are that it's harder to debug and that analyzers like ASan don't work.
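
To make the intrinsics side of that comparison concrete, here is a minimal sketch (my own illustration, not code from libavcodec or this project; the function name and signature are made up) of a typical DSP kernel written with SSE2 intrinsics: a 16x16 sum of absolute differences, the kind of block-matching primitive a codec's motion estimation relies on.

    /* Hypothetical example: 16x16 SAD with SSE2 intrinsics. */
    #include <emmintrin.h>  /* SSE2 */
    #include <stddef.h>
    #include <stdint.h>

    int sad_16x16_sse2(const uint8_t *src, ptrdiff_t src_stride,
                       const uint8_t *ref, ptrdiff_t ref_stride)
    {
        __m128i acc = _mm_setzero_si128();

        for (int y = 0; y < 16; y++) {
            __m128i s = _mm_loadu_si128((const __m128i *)(src + y * src_stride));
            __m128i r = _mm_loadu_si128((const __m128i *)(ref + y * ref_stride));
            /* psadbw: |s - r| summed per 8-byte lane, giving two 64-bit partial sums */
            acc = _mm_add_epi64(acc, _mm_sad_epu8(s, r));
        }

        /* Fold the two 64-bit halves into one scalar (max 16*16*255 fits in int). */
        acc = _mm_add_epi64(acc, _mm_srli_si128(acc, 8));
        return _mm_cvtsi128_si32(acc);
    }

The hand-written assembly approach argued for above would implement the same kernel directly in asm, trading this compiler-mediated form for full control over instruction selection, register allocation, and scheduling, which is where the claimed speed advantage comes from.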

