Perz1val's comments

Would they? A gaming PC from 2015 is still a decent machine today, just don't use laggy ahh win11

Rule #2 sounds dumb. If there can't be a single source of truth for, let's say, permission checking that multiple other services rely on, how would you solve that? Replicate it everywhere? Or do you allow a new business requirement to cause massive refactors just to create a new root in your fancy graph?


Services handle the permissions of their own features. Authentication is handled at the gateway.

Not sure if I agree it's really the best way to do things, but it can be done.


That implies that every service has a `user -> permissions` table, no? That seems to contradict the idea brought up elsewhere in the thread that microservices should all be the size of one table.


Well, depends on the permission model.

For RBAC or capability-based permissions, the gateway can enrich the request, or it can be in (e.g.) a JWT. Then each service only has to know how to map roles/capabilities to permissions (a rough sketch of that mapping is at the end of this comment).

For ABAC it depends on lots of things, but you often evaluate access based on user attributes and context (which once again can be added to the request or go into the JWT) plus resource attributes (which is already in the microservice anyway).

For ACL you would need a list of users indeed...

Something like Google Zanzibar can theoretically live on the gateway and apply rules to different routes. Dunno how it would deal with lists, though.

After writing it down: sounds like an awful lot of work for a lot of cases.

Btw: the rule for microservices that I know of is that they must have their own database, not their own table.
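
To make the RBAC variant above concrete, here's a minimal Python sketch of what a single service might do once the gateway has verified the user and forwarded a roles claim (ROLE_PERMISSIONS, handle_delete_report and the permission names are made up for illustration, not any particular framework):

  # Hypothetical sketch: the gateway has already authenticated the request
  # and forwarded the JWT claims; the service only maps roles onto the
  # permissions of its own features.
  ROLE_PERMISSIONS = {
      "viewer": {"report:read"},
      "editor": {"report:read", "report:write"},
      "admin":  {"report:read", "report:write", "report:delete"},
  }

  def permissions_for(claims: dict) -> set[str]:
      perms: set[str] = set()
      for role in claims.get("roles", []):
          perms |= ROLE_PERMISSIONS.get(role, set())
      return perms

  def handle_delete_report(claims: dict, report_id: str) -> None:
      if "report:delete" not in permissions_for(claims):
          raise PermissionError("missing report:delete")
      # ... the actual delete logic lives here ...

The point being that the service never needs its own user -> permissions table; it only knows its own feature-level permissions and how roles map onto them.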


Good points about RBAC and ABAC, although my concern now is that the gateway must know what capabilities are possible within the service. It seems like a lot of work, indeed.

> the rule for microservices that I know of, is that they must have their own database, not their own table.

That's the rule for microservices that I'm familiar with too, which is why I found the assertion elsewhere that microservices should just be "one table" pretty odd.

The simplest path is often auth offloaded onto an STS or something like that, with more complicated permission needs handled by the services internally, if necessary (often it's not needed).


Dealing with lists is complicated with ReBAC, but possible. See my other comment on this: https://news.ycombinator.com/item?id=45662850
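
For anyone who hasn't seen ReBAC before: the Zanzibar-style model stores relationship tuples of the form object#relation@subject and answers "can this user do X to this object?" by following those tuples. A toy Python sketch of the idea (the tuples and the one-level group indirection are made up; this is not the real Zanzibar API):

  # Toy ReBAC check: tuples are (object, relation, subject); a subject may
  # also be a "userset" like ("group:eng", "member") to express indirection.
  TUPLES = {
      ("doc:readme", "viewer", "user:alice"),
      ("doc:readme", "viewer", ("group:eng", "member")),
      ("group:eng", "member", "user:bob"),
  }

  def check(obj: str, relation: str, user: str) -> bool:
      for (o, r, subject) in TUPLES:
          if o != obj or r != relation:
              continue
          if subject == user:
              return True
          # Follow one level of indirection through a userset.
          if isinstance(subject, tuple) and check(subject[0], subject[1], user):
              return True
      return False

  assert check("doc:readme", "viewer", "user:bob")  # via group:eng#member

A single check like this is cheap; the hard part mentioned above is listing, say, every doc a user can view, because that means walking the tuple graph in reverse instead of answering one membership question.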


This is exactly the example I thought of and came here to post.

The rule is obviously wrong.

I think just having no cycles is good enough as a rule.


You have forgotten Cortana


So did Microsoft


Cortana... ah yes, that thing that I immediately disabled. I had forgotten its name.


Look at the screenshots -> wallpaper window. The spacing between elements is all over the place and it simply looks like shit. Seeing this, I have doubts about whether the team that did this is competent at all.


Exactly.

I know that not everybody spent 10 years fiddling with CSS, so I can understand why a project might have a skill gap with regards to aesthetics. I'm not trying to judge their overall competence, just wanted to say that there are so many quick wins in the design that it hurts me a bit to see it. And due to the nature of open source projects I was talking about "empowering" a designer to improve it, because oftentimes you submit a PR for aesthetic improvements and then notice that the project leaders don't care about these things, which is sad.


Yea, but those platforms were not 64bit


64-bit generally adds about 20% to the size of executables and programs, at least on x86, so it's not that big of a change.


Switch to an ILP32 ABI and you get a lot of that space back
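
Part of where that extra size comes from: under the usual LP64 model, pointers (and longs on Linux/macOS) go from 4 to 8 bytes, so pointer-heavy data roughly doubles, while an ILP32 ABI (32-bit pointers on a 64-bit ISA, like the old x32) keeps them at 4 while still using the wider registers. A small Python probe of the running interpreter's own ABI (the printed values assume a typical 64-bit LP64 build; a 32-bit or x32 build would print 4):

  import ctypes

  # Pointer and long width of the running interpreter's ABI.
  print("pointer bytes:", ctypes.sizeof(ctypes.c_void_p))  # 8 on LP64
  print("long bytes:   ", ctypes.sizeof(ctypes.c_long))    # 8 on LP64 (4 on Windows/LLP64)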


Maybe the chemicals they used to treat the wood were just that hardcore.


I know some people who need to apply this advice. The majority shouldn't; this is just a mitigation for a specific personality trait.


They must be AI then


Can someone tell me how much more power efficient ARM actually is? Like under load when gaming, not in a phone that sleeps most of the time. I've heard both claims: that it's still a huge difference, and that compared to new AMD Zen it's basically the same.


The instruction set has marginal impact. But many power efficient chips happen to be using the ARM instruction set today.


I think that's still highly debatable. Intel and AMD claim the instruction set makes no difference... but of course they would. And if that's really the case where are the power efficient x86 chips?

Possibly the truth is that everyone is talking past each other. Certainly in the Moore's Law days "marginal impact" would have meant maybe less than 20%, because differences smaller than that pretty much didn't matter. And there's no way the ISA makes a 20% difference.

But today I'd say "marginal impact" is less than 5% which is way more debatable.


> And if that's really the case where are the power efficient x86 chips?

Where are the power inefficient x86 chips? If you normalize for production process and put the chips under synthetic load, ARM and x86 usually end up in a similar ballpark of efficiency. ARM is typically less efficient for wide SIMD/vector workloads, but more efficient at idle.

AMD and Intel aren't smartphone manufacturers. Their cash cows aren't in manufacturing mobile chipsets, and neither of them have sweetheart deals on ARM IP with Softbank like Apple does. For the markets they address, it's not unlikely that ARM would be both unprofitable and more power-hungry.


Jim Keller goes into some detail about what difference the ISA makes in general in this clip https://youtu.be/yTMRGERZrQE?si=u-dEXwxp0MWPQumy

Spoiler, it's not much because most of the actual execution time is spent in a handful of basic OPs.

Branch prediction is where the magic happens today.


>Spoiler, it's not much because most of the actual execution time is spent in a handful of basic OPs.

Yet, on a CISC ISA, you still have to support everything else, which is essentially cruft.


Does that matter? I lean towards the yes-the-ISA-matters camp, but I'm also under the impression that most silicon is dark.


Jim Keller has to say that.


The stage is yours if you choose to refute him.


Intel spent years trying to get manufacturers to use their x86 chips in phones, but manufacturers turned them down, because the power efficiency was never good enough.


Well, they were targeting Android, and the apps were emulating ARM on x86, and they were going against a strong incumbent. Accounts on the web of this failure seem to bring up other failings as the main problems.


Eg this review of the AZ210 phone from 2012 seems to think the battery life was good: https://www.trustedreviews.com/reviews/orange-san-diego

"Battery life during our test period seemed to be pretty good and perhaps slightly better than many dual-core Android phone’s we’ve tested."


> the apps were emulating ARM on x86

They weren't (except some games maybe). Most apps were written in Java and JITed.


Well, apps tuned for performance and apps using native code have more than a little overlap. Even back then there were a lot of apps besides games that used native code for the hot code paths. But games of course are huge by themselves, and besides performance you need to have good power efficiency in running them.

Here's some more details: https://www.theregister.com/2014/05/02/arm_test_results_atta...

(note it's a two-part article; the "next page" link is in small print)


You're basically reiterating exactly what I just said. Intel had no interest in licensing ARM's IP, they'd have made more money selling their fab space for Cortex designs at that point.

Yes, it cost Intel their smartphone contracts, but those weren't high-margin sales in the first place. Conversely, ARM's capricious licensing meant that we wouldn't see truly high-performance ARM cores until M1 and Neoverse hit the market.


> Intel had no interest in licensing ARM's IP, they'd have made more money selling their fab space for Cortex designs at that point.

Maybe, but the fact remains that they spent years trying to make an Atom that could fit the performance/watt that smartphone makers needed to be competitive, and they couldn't do it, which pretty strongly suggests it's fundamentally difficult. Even if they now try to sour-grapes that they just weren't really trying, I don't believe them.


I think we're talking past each other here. I already mentioned this in my original comment:

  ARM is typically [...] more efficient at idle.
From Intel's perspective, the decision to invest in x86 was purely fiscal. With the benefit of hindsight, it's also pretty obvious that licensing ARM would not have saved the company. Intel was still hamstrung by DUV fabs. It made no sense to abandon their high-margin datacenter market to chase low-margin SOCs.


It's workload-dependent. On-paper, ARM is more power-efficient at idle and simple ops, but slows down dramatically when trying to translate/compose SIMD instructions.


You seem to have conflated SIMD and emulation in the context of performance. ARM has its own SIMD instructions and doesn't take a performance hit when executing those. Translating x86 SIMD to ARM has an overhead that causes a performance hit, which is due to emulation.


Both incur a performance hit. ARM NEON isn't fully analogous to modern AVX or SSE, so even a 1:1 native port will compile down to more instructions than on x86. This issue is definitely exacerbated when translating, but it's inherent to any comparison of the two.


No earlier than 2027; it's Valve you're talking about, they don't need to rush.

