I think the problem is that, as a group, people who care about software quality/craft don't actually produce higher-quality software. You'll get good-quality and garbage software out of the craftsman and pragmatist groups at about equal rates. And folks in the craftsman group tend to have more and stronger opinions, which isn't a good or bad thing, except that having too many of them on a team can lead to conflict.
At this time I'm not really sure anyone can say there's a 'point' to passkeys anymore. They're just exportable now, and both Google's and Apple's implementations are synced instead of device-bound, putting them at the level of Bitwarden / KeePassXC. Backups and multi-device have become critical features for users, which breaks attestation, so attestation is really just for those weirdos with Yubikeys.
I think we're verrry slowly inching toward shedding all the security-nerd self-indulgences and getting to what I think is the eventual endgame, where passkeys are just keys: ultimately a fairly user-friendly way of getting people to use a password manager without it feeling like one. All the other features seem like noise.
Slapping on OpenTelemetry actually will solve your problem.
Point #1 isn't true; auto-instrumentation exists and is really good. When I integrate OTel I add my own auto-instrumentors wherever possible to automatically add lots of context. Which gets into point #2.
Point #2 also isn't true. It can add business context in a hierarchical manner and ship wide events. You shouldn't have to tell every span all the information again, just where it appears naturally the first time.
Point #3 also also isn't true, because OTel libs make it really annoying to just write a log message and very strongly push you into a hierarchy of nested context managers.
Like, the author's ideal setup is basically using OTel with Honeycomb. You get the querying and everything. And unlike rawdogging wide events, all your traces are connected, can span multiple services, and do timing for you.
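To make the "hierarchical context" point concrete, here's a minimal sketch using the OTel Python SDK with Flask auto-instrumentation. The exporter setup (e.g. OTLP to Honeycomb) is assumed to be configured elsewhere, and the route, span names, and attribute keys are invented for illustration:

```python
# Minimal sketch: OTel Python SDK + Flask auto-instrumentation.
# Exporter config (e.g. OTLP to Honeycomb) is assumed to be set up elsewhere;
# the route, span names, and attribute keys here are made up for illustration.
from flask import Flask
from opentelemetry import trace
from opentelemetry.instrumentation.flask import FlaskInstrumentor

app = Flask(__name__)
FlaskInstrumentor().instrument_app(app)   # auto-creates a server span per request
tracer = trace.get_tracer(__name__)

@app.route("/checkout/<user_id>")
def checkout(user_id):
    # Business context goes on the span where it naturally appears...
    trace.get_current_span().set_attribute("app.user_id", user_id)
    # ...and nested spans are parented into the same trace automatically,
    # so the query side can correlate them without repeating the attribute.
    with tracer.start_as_current_span("charge_card") as span:
        span.set_attribute("app.payment_provider", "stripe")
        # ... actual work ...
    return "ok"
```

The point is that the nesting comes from the context managers, the timing comes from the spans, and the cross-service linking comes from the trace, none of which you get by hand-rolling wide events.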
Also, NAT66 exists and I use it on my home network, so you still have to have the machinery to do NAT traversal when needed. It's nice to use my public addresses like elastic IPs instead of delegating ports. IPv6 stans won't be able to bully their way into pretending that NAT doesn't exist on IPv6.
I am in no way saying that this is cheap but 300 TB will set you back a little less than $6k with tax. Very attainable for people other than OpenAI and Facebook. And it's not crazy at all to snag a server with enough bays to house all those.
For reference, considering you can purchase a 12-month Spotify Premium subscription via a $99 gift card at the moment, that same $6k could be used for 60 years of Spotify Premium.
For reference, considering the backup has 86 million music files, at an average of 3 minutes per file it would take you around 490 years to listen to all the tracks.
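The back-of-the-envelope arithmetic behind both figures, using the numbers quoted in the comments above (the $99/year gift-card price and the 3-minutes-per-track average are the commenters' estimates):

```python
# Sanity check of the two figures above, using the commenters' own estimates.
storage_cost = 6_000            # USD for ~300 TB of drives, per the comment above
spotify_year = 99               # USD for a 12-month Spotify Premium gift card
print(storage_cost / spotify_year)        # ~60 years of Spotify Premium

tracks = 86_000_000             # music files in the backup
total_minutes = tracks * 3      # assuming ~3 minutes per track
print(total_minutes / 60 / 24 / 365)      # ~490 years of continuous listening
```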
I mean these students now have the absolute funniest and most effective way to protest this bullshit. They could do more musical instruments, but I think bringing in comically fake depictions of outlandish weaponry would be legendary. A cereal box that just has the letters C4 written on it, an old timey cartoon bomb, a papier-mâché nuclear warhead.
Infra engineer here. The obvious reason for needing the data is debugging. I collect logs, metrics, traces, and errors from everywhere, including clients. All of these come with identifying information, including the associated user. From the perspective of this thread this is a huge amount of data, although it's pretty modest compared to the wider industry.
This data is the tool we have to identify and fix bugs. It is considered a failing on our end if a user has to report an issue to us. Mullvad is in an ideal situation to not need this data because their customers are technical, identical, and stateless.
It's not my department, but I think we would get laughed out of the room if we told our users that we couldn't do password resets or support SSO, let alone the whole "forgetting your credential means losing all your data" thing.
> Mullvad is in an ideal situation to not need this data because their customers are technical, identical, and stateless.
A lot of companies could be in similar situations, but choose not to be.
All of retail, for example. Target does significant amounts of data collection to track their customers. This is a choice. They could let users simply buy things, pay for them, and store nothing. This used to be the business model. For online orders, they could purge everything after the return window passed. The order data shouldn’t be needed after that. For brick and mortar, it should be a very straightforward business. However, I’m routinely asked for my zip code or phone number when I check out at stores. Loyalty cards are also a way to incentivize customers to give up this data (https://xkcd.com/2006/).
TVs are another big one. They are all “smart” now, and collect significant amounts of data. I don’t know anyone who would be upset with a simple screen that just let you change inputs and brightness settings, and let people plug stuff into it. Nothing needs to be collected or phone home.
A lot of the logs that are collected in the name of troubleshooting and bug fixing exist because the products are over-complicated or not thoroughly tested before release. The ability to update things later lowers the bar for release and gives a pass for adding all this complexity that users don’t really want. There is a lot of complexity in the smart TV that they might want logs for, but none of it improves the user experience; it’s all in support of the real business model that’s hidden from the user.
If you turn overcommit off, then when you fork you double the memory usage, at least as far as accounting goes. The pages are CoW, but for accounting purposes they count double, because a write could require allocating a new page, and that allocation isn't allowed to fail since it doesn't happen inside a malloc. So the kernel has to count the whole mapping as reserved in both processes.
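A rough way to observe that accounting on Linux (a sketch: the buffer size is arbitrary and Committed_AS moves for other reasons too, so treat the delta as approximate):

```python
import os
import time

def committed_as_kb():
    # Committed_AS = total memory the kernel has promised to all mappings (Linux only).
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("Committed_AS:"):
                return int(line.split()[1])

buf = bytearray(512 * 1024 * 1024)   # ~512 MiB of private, writable anonymous memory
before = committed_as_kb()

pid = os.fork()
if pid == 0:
    # Child never touches buf, so no pages are actually copied...
    time.sleep(5)
    os._exit(0)

time.sleep(1)                        # let the fork show up in the accounting
after = committed_as_kb()
# ...but the commitment is duplicated anyway, because every CoW page
# might need its own copy on the first write, and that copy can't fail.
print(f"Committed_AS grew by ~{(after - before) / 1024:.0f} MiB after fork")
os.waitpid(pid, 0)
```

With strict accounting (vm.overcommit_memory=2), the fork itself fails with ENOMEM if the commit limit can't cover that duplicated reservation.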