Rule #2 sounds dumb. If there can't be a single source of truth, say for permission checking, that multiple other services rely on, how would you solve that? Replicate it everywhere? Or do you let a new business requirement cause massive refactors just to create a new root in your fancy graph?
That implies that every service has a `user -> permissions` table, no? That seems to contradict the idea brought up elsewhere in the thread that microservices should all be the size of one table.
For RBAC or capability-based permissions, the gateway can enrich the request, or it can be carried in (e.g.) a JWT. Then each service only has to know how to map roles/capabilities to permissions, something like the sketch below.
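To make it concrete, here is a minimal sketch (TypeScript, all names made up) of the service-side part, assuming the gateway has already verified the token and the service only keeps its own role -> permission mapping:

```typescript
// Hypothetical role -> permission table owned by this one service.
const rolePermissions: Record<string, string[]> = {
  editor: ["article:read", "article:write"],
  viewer: ["article:read"],
};

// `claims` is assumed to be the already-verified JWT payload (or the
// roles the gateway injected into the request).
function canAccess(claims: { roles: string[] }, permission: string): boolean {
  return claims.roles.some(
    (role) => (rolePermissions[role] ?? []).includes(permission),
  );
}
```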
For ABAC it depends on lots of things, but you often evaluate access based on user attributes and context (which once again can be added to the request or go into the JWT) plus resource attributes (which are already in the microservice anyway).
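Rough sketch of that, with made-up attributes: user attributes/context come from the JWT, resource attributes from the service's own data:

```typescript
// All attribute names are invented for illustration.
interface UserContext {
  department: string;
  clearance: number;
}

interface Resource {
  ownerDepartment: string;
  sensitivity: number;
}

// One possible ABAC-style rule: same department and sufficient clearance.
function canRead(user: UserContext, resource: Resource): boolean {
  return (
    user.department === resource.ownerDepartment &&
    user.clearance >= resource.sensitivity
  );
}
```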
For ACLs you would indeed need a list of users...
Something like Google Zanzibar can theoretically live on the gateway and apply rules to different routes. Dunno how it would deal with lists, though.
After writing it down: sounds like an awful lot of work for a lot of cases.
Btw: the rule for microservices that I know of is that they must have their own database, not their own table.
Good points about RBAC and ABAC, although my concern now is that the gateway must know which capabilities are possible within each service. It seems like a lot of work, indeed.
> the rule for microservices that I know of is that they must have their own database, not their own table.
That's the rule for microservices that I'm familiar with too, which is why I found the assertion elsewhere that microservices should just be "one table" pretty odd.
The simplest path is often to offload auth onto an STS or something like that, with more complicated permission needs handled internally by the services when necessary (often it isn't).
Look at the screenshots -> the wallpaper window. The spacing between elements is all over the place and it simply looks like shit. Seeing this, I have doubts about whether the team that did this is competent at all.
I know that not everybody has spent 10 years fiddling with CSS, so I can understand why a project might have a skill gap with regard to aesthetics. I'm not trying to judge their overall competence; I just wanted to say that there are so many quick wins in the design that it hurts me a bit to see it. And due to the nature of open source projects, I was talking about "empowering" a designer to improve it, because oftentimes you submit a PR for aesthetic improvements and then notice that the project leaders don't care about these things, which is sad.
Can someone tell me how much more power-efficient ARM actually is? I mean under load, e.g. when gaming, not in a phone that sleeps most of the time. I've heard both claims: that it's still a huge difference, and that with the new AMD Zen it's basically the same.
I think that's still highly debatable. Intel and AMD claim the instruction set makes no difference... but of course they would. And if that's really the case, where are the power-efficient x86 chips?
Possibly the truth is that everyone is talking past each other. Certainly in the Moore's Law days, "marginal impact" would have meant maybe less than 20%, because differences smaller than that pretty much didn't matter. And there's no way the ISA makes a 20% difference.
But today I'd say "marginal impact" means less than 5%, which is way more debatable.
> And if that's really the case, where are the power-efficient x86 chips?
Where are the power-inefficient x86 chips? If you normalize for production process and put the chips under synthetic load, ARM and x86 usually end up in a similar ballpark of efficiency. ARM is typically less efficient for wide SIMD/vector workloads, but more efficient at idle.
AMD and Intel aren't smartphone manufacturers. Their cash cows aren't in manufacturing mobile chipsets, and neither of them has sweetheart deals on ARM IP with Softbank like Apple does. For the markets they address, it's not unlikely that ARM would be both unprofitable and more power-hungry.
Intel spent years trying to get manufacturers to use their x86 chips in phones, but manufacturers turned them down, because the power efficiency was never good enough.
Well, they were targeting Android, where apps with native ARM code had to be emulated on x86, and they were going up against a strong incumbent. Accounts of this failure on the web seem to bring up other failings as the main problems.
Well, apps tuned for performance and apps using native code have more than a little overlap. Even back then there were a lot of apps besides games that used native code for their hot code paths. Games are of course huge by themselves, and besides performance you need good power efficiency when running them.
You're basically reiterating exactly what I just said. Intel had no interest in licensing ARM's IP; they'd have made more money selling their fab space for Cortex designs at that point.
Yes, it cost Intel their smartphone contracts, but those weren't high-margin sales in the first place. Conversely, ARM's capricious licensing meant that we wouldn't see truly high-performance ARM cores until M1 and Neoverse hit the market.
> Intel had no interest in licensing ARM's IP; they'd have made more money selling their fab space for Cortex designs at that point.
Maybe, but the fact remains that they spent years trying to make an Atom that could hit the performance per watt that smartphone makers needed to be competitive, and they couldn't do it, which pretty strongly suggests it's fundamentally difficult. Even if they now try to sour-grapes it and say they just weren't really trying, I don't believe them.
I think we're talking past each other here. I already mentioned this in my original comment:
> ARM is typically [...] more efficient at idle.
From Intel's perspective, the decision to invest in x86 was purely fiscal. With the benefit of hindsight, it's also pretty obvious that licensing ARM would not have saved the company. Intel was still hamstrung by DUV fabs. It made no sense to abandon their high-margin datacenter market to chase low-margin SoCs.
It's workload-dependent. On paper, ARM is more power-efficient at idle and for simple ops, but slows down dramatically when trying to translate/compose SIMD instructions.
You seem to have conflated SIMD and emulation in the context of performance. ARM has its own SIMD instructions and doesn't take a performance hit when executing those. Translating x86 SIMD to ARM has an overhead that causes a performance hit, but that overhead is due to emulation.
Both incur a performance hit. ARM NEON isn't fully analogous to modern AVX or SSE, so even a 1:1 native port will compile down to more instructions than on x86. This issue is definitely exacerbated when translating, but it's inherent to any comparison of the two.