
It's probably not trivial to implement and there are already a bunch of problems that need solving (e.g., trusting keys etc.) but... I think that if we had some sort of lightweight code provenance (off the top of my head: commits are signed by known/trusted keys, releases are signed by known keys, installing signed packages requires verification), we could probably make it somewhat harder to introduce malicious changes.

Edit: It looks like there's already something similar using sigstore in npm https://docs.npmjs.com/generating-provenance-statements#abou.... My understanding is that its use is not widespread though and it's mostly used to verify the publisher.
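
To illustrate the "commits are signed by trusted keys" part, here's a rough sketch in Python; it assumes GPG-signed commits, a locally configured keyring, and a made-up allow-list of key fingerprints, so treat it as a sketch rather than a real policy:

    import subprocess

    # Hypothetical allow-list of trusted signing-key fingerprints.
    TRUSTED_KEYS = {"ABCD1234EF567890ABCD1234EF567890ABCD1234"}

    def commits_in_range(rev_range: str) -> list[str]:
        """List commit SHAs in a range, e.g. 'origin/main..HEAD'."""
        out = subprocess.run(["git", "rev-list", rev_range],
                             capture_output=True, text=True, check=True)
        return out.stdout.split()

    def signed_by_trusted_key(sha: str) -> bool:
        """True if the commit has a good signature from an allow-listed key."""
        # %G? = signature status (G = good), %GF = signing-key fingerprint.
        out = subprocess.run(["git", "log", "-1", "--format=%G? %GF", sha],
                             capture_output=True, text=True, check=True)
        status, _, fingerprint = out.stdout.strip().partition(" ")
        return status == "G" and fingerprint in TRUSTED_KEYS

    bad = [c for c in commits_in_range("origin/main..HEAD")
           if not signed_by_trusted_key(c)]
    if bad:
        raise SystemExit(f"unsigned or untrusted commits: {bad}")

A pre-receive hook or CI job could run something along these lines before accepting a branch, which is roughly the enforcement point I had in mind.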


I think that depends on... how are these malicious changes actually getting into these packages? It seems very mysterious to me. I wonder why npm isn't being more forthcoming about this.


The answer to your question is WebKit (because iOS), kernels (XNU, Linux, Windows) etc. In case you are not familiar with the domain, I'd start with user-space exploitation and relevant write-ups to get my feet wet. You'll find plenty of write-ups, blogs etc. so I'll skip those. Some of the books I generally found interesting are [1], [2], [3]. There's more to it, including fundamental concepts of CS (e.g., compilers and optimization in JITs, OS architecture etc.). I believe https://p.ost2.fyi/dashboard also has some relevant training.

[1] https://nostarch.com/zero-day

[2] https://nostarch.com/hacking2.htm

[3] https://ia801309.us.archive.org/26/items/Wiley.The.Shellcode...


Most probably, what Apple means is that since the codebase is shared, the vulnerability exists across devices. This does not mean that the vulnerability is being actively exploited on iOS, nor that it will not be actively exploited as part of some other campaign.


> Has this happened before? That iPhones had a security hole that could be exploited over the web?

Yes, there were vulnerabilities in the past that could be exploited remotely, including some that were used for jailbreaking.


I work as a security engineer and, yes, the CT logs are extremely useful, not only for identifying new targets the moment a certificate is issued but also for spotting patterns in how you name your infra (e.g., dev-* etc.).
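
As an illustration, something like the sketch below pulls hostnames out of CT logs via crt.sh's JSON endpoint and flags the naming patterns I mean; the domain and the regex are placeholders, and crt.sh rate limits aggressively, so take it as a sketch:

    import json
    import re
    import urllib.request

    def ct_hostnames(domain: str) -> set[str]:
        """Collect hostnames seen in CT logs for a domain (via crt.sh)."""
        url = f"https://crt.sh/?q=%25.{domain}&output=json"
        with urllib.request.urlopen(url) as resp:
            entries = json.load(resp)
        names = set()
        for entry in entries:
            # name_value can hold several newline-separated SANs.
            names.update(entry["name_value"].split("\n"))
        return names

    hosts = ct_hostnames("example.com")  # placeholder domain
    # Naming patterns like dev-*/staging-* often point at internal infra.
    print("\n".join(h for h in sorted(hosts)
                    if re.match(r"(dev|stag|test|uat)-", h)))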

A good starting point for hardening your servers is the CIS Hardening Guides (CIS Benchmarks) and the relevant scripts.


IIRC, [1] mentions a few examples of AI systems that exhibited the same biases that are currently present in the judicial system, banks etc.

[1] https://en.wikipedia.org/wiki/Weapons_of_Math_Destruction


This is honestly what scares me the most. Our biases are built into AI, but we pretend they're not. People will say "Well, it was the algorithm/AI, so we can't change it", which is just awful and should scare the shit out of everyone. There was a book [0] written almost fifty years ago that predicted this. I still haven't read it, but really need to. The author claims it made him a pariah among other AI researchers at the time.

[0] https://en.wikipedia.org/wiki/Computer_Power_and_Human_Reaso...


While not directly about AI, and supposedly satirical, https://en.wikipedia.org/wiki/Computers_Don%27t_Argue really captures how the system works.


I'm not a fan of Windows, but Stuxnet didn't happen because of Windows. Iran decided to spin up a nuclear program, and Israel and the US had concerns and wanted to stop it. They had the resources to develop something tailored for this unique situation, whose target environment included Windows, Siemens PLCs (IIRC), centrifuges etc., and they developed the malware around that target. Even if the target had used a different stack, they'd have found a way to achieve the same result.


It's all about price. Attacking Linux will be harder, thus more expensive.

You make it sound easy; if that were the case, they'd launch an attack every few months or so. This stuff is expensive, and making it 100x harder means 100x fewer attacks before the budget runs out.


I'll try my best to explain everything (trying to avoid too much security lingo, hopefully).

A password manager is a big database of passwords. There is a master password that decrypts the database and from there you can use your passwords. Note that hashes are one-way operations and thus not used in password managers (the manager needs to recover the plaintext, so the database is encrypted instead). The benefits of using a password manager are that users need to remember and handle only one password, that of the password manager itself, while the rest of their passwords can be unique and rotated quickly. Ideally, your password manager does a few more things, including taking precautions against leaving traces of passwords in memory etc.
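
To make the "master password decrypts the database" part concrete, here's a minimal sketch using PBKDF2 plus symmetric encryption; the file layout, iteration count and vault format are illustrative, and a real password manager does much more (KDF tuning, memory hygiene, integrity checks etc.):

    import base64
    import hashlib
    import json
    import os

    from cryptography.fernet import Fernet  # pip install cryptography

    def key_from_master(master: str, salt: bytes) -> bytes:
        """Derive an encryption key from the master password (no stored hash)."""
        raw = hashlib.pbkdf2_hmac("sha256", master.encode(), salt, 600_000)
        return base64.urlsafe_b64encode(raw)  # Fernet expects a base64 key

    def save_vault(path: str, master: str, passwords: dict) -> None:
        salt = os.urandom(16)
        token = Fernet(key_from_master(master, salt)).encrypt(
            json.dumps(passwords).encode())
        with open(path, "wb") as f:
            f.write(salt + token)

    def load_vault(path: str, master: str) -> dict:
        with open(path, "rb") as f:
            blob = f.read()
        salt, token = blob[:16], blob[16:]
        return json.loads(Fernet(key_from_master(master, salt)).decrypt(token))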

There's another side to commercial password managers, which is mostly convenience functionality: passwords are synced across devices, specific members can access specific passwords, etc.

Some people do use local password managers, depending on their threat model (i.e., who's after them) and their level of expertise/time on their hands. Setting up something locally requires taking additional precautions (such as permissions, screen locks etc.) that are typically handled by commercial password managers.

Regarding Okta: Okta is an identity provider. In theory, identity providers can provide strong guarantees regarding a user, i.e., "I authenticated him, so I gave him these tokens to pass around". Those guarantees can be backed by a number of things, including multi-factor authentication, VPN restrictions etc.
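
To illustrate what "passing tokens around" looks like on the receiving side, here's a minimal sketch of a service verifying such a token against the provider's published keys (using PyJWT); the issuer and audience values are made up:

    import jwt  # pip install PyJWT
    from jwt import PyJWKClient

    # Hypothetical issuer/audience; a real setup reads these from config.
    ISSUER = "https://example.okta.com/oauth2/default"
    AUDIENCE = "api://my-service"

    def verify_token(token: str) -> dict:
        """Check signature, issuer and audience before trusting the claims."""
        signing_key = PyJWKClient(f"{ISSUER}/v1/keys").get_signing_key_from_jwt(token)
        return jwt.decode(token, signing_key.key, algorithms=["RS256"],
                          audience=AUDIENCE, issuer=ISSUER)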

Funny story: during an internal red team engagement at a previous employer of mine, we took over the local password manager of a subset of the security org, twice. The first time, they had an unauthenticated VNC server with the password manager running and the file unlocked. The second time, a team conveniently used Git to sync their password manager file, with the password tracked in the repo as well.


Reminded me of a funny story. Maybe a decade ago, when moving to the cloud was all the rage, my then-employer decided to check whether the cloud was any good. Long story short, he asked me to conduct penetration tests against the major providers. At one of the providers I pivoted through some network and hit a webpage that looked like some sort of control plane panel (but it required authentication so...). I decided to Google part of the HTML and... a Stack Overflow thread popped up with the code and parts of the backend logic. So much win.


> he asked me to conduct penetration tests against the major providers

That sounds madly illegal?


Most providers had a semi-automated process that granted you permission to conduct your pentest (assuming you'd share any findings regarding their infra with them). In reality though, most of the findings didn't come from poking around but from tapping the wire. I'd spin up VMs and run tcpdump for hours, then look through the captures for odd packets, plaintext etc., which makes such shenanigans hard to detect.

Edit: We went through the process for everything, including having a provider ship us a back-up solution to pentest. My desk became everyone's favourite place in the building :P
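
For the curious, the "tcpdump for hours, then dig through the capture" part boils down to something like this sketch (the marker strings and file name are just examples):

    from scapy.all import TCP, Raw, rdpcap  # pip install scapy

    # Strings that tend to show up when credentials travel in plaintext.
    SUSPICIOUS = (b"password=", b"Authorization: Basic", b"USER ", b"PASS ")

    def flag_plaintext_credentials(pcap_path: str) -> None:
        """Scan a capture (e.g. from `tcpdump -w capture.pcap`) for plaintext secrets."""
        for pkt in rdpcap(pcap_path):
            if pkt.haslayer(TCP) and pkt.haslayer(Raw):
                if any(m in bytes(pkt[Raw].load) for m in SUSPICIOUS):
                    print(pkt.summary())

    flag_plaintext_credentials("capture.pcap")  # placeholder file name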


Knocking on someone’s front door and noticing it’s unlocked is perfectly legal. It’s actually walking in that’s illegal.


And at least in England, trespassing is not even a criminal offense afaik, just a civil one - and the owner will have a hard time winning that case too, without very explicit signage.

Unless one helps himself to the house contents, or does other Bad Things, walking through unlocked dwellings will get you at most a slap on the wrist.


Outside of the cybersecurity analogy, as an American, that's... very disturbing.

Much like someone open carrying a gun is seen as potentially a few seconds away from committing a Very Bad Crime, so is someone walking around your house uninvited.


England has some weird (to me) property privacy laws. IIRC, you cannot be charged for simply walking through someone's property as a shortcut. There's nothing they can do about it; you just can't linger on the property. I mean, it seems fine, I just haven't seen anything like it before.


It's the system throwing a bone to the general populace in order to maintain an extremely unequal order. Aristocratic landowners mostly do what they want, and there has been no land reform for centuries, so a few concessions were thrown in to allow peasants to make a living somehow.


Well, cutting across someone's yard != walking through their house. Growing up, my friends and I would sometimes cut through neighbors' backyards to go somewhere, and while we didn't have formal permission, no one cared because we knew each other.


I don't know the situation now, but in the UK you could break into an empty place, then change the locks, and from that point on they could not evict you without a long process involving going to court. There was (is?) a huge squatter community because of this.


Going by the GP's story, and extending your analogy, this is more like if they walked into the house, found the safe, noted it was locked, and then looked up the safe's schematics online.

Not exactly legal.

But even stepping back, I suspect walking around and jiggling random people's doorknobs to see if they're unlocked is probably illegal.


It’s funny how often this works; there’s a ton of copypasta code in production out there.

I do some bug bounty hunting for fun, and just yesterday I Googled a weird snippet of frontend code from a major corporation, found the matching backend code in a blog post, and saw a bug in it. Alas, not a bug that could be used for anything interesting this time.


IIRC, Intel announced plans about a year later to develop something similar. That being said, at the time they didn't have a specific timeline.

