We had a prod case where a server was being flooded with requests, and a downstream server kept falling over. We figured it was an attack of some sort and investigated, eventually tracing it back to a computer inside our own network (we're a big company, five floors of computers).
It had an open file share containing some Delphi books, which is also where we got the computer name. So we walked over to the Delphi team's side and kept yelling the computer name until some dude said "Hey, that's me!"
Turns out he was running a test case in an infinite loop until it worked (because that's how test cases worked), and he thought he was pointed at QA, but he somehow had it set up to target Prod.
Our job was done at that point; we left the rest to management (who made sure he didn't get fired, but also that he didn't do it again).
Doesn't sound like a management failure to me. It sounds like there should be separate VLANs for QA/test and Production to prevent this very thing (or potentially something more malicious, like the spread of ransomware).
I've done security reviews for a dozen companies. This sort of thing is startlingly common. Every single company I've reviewed was doing something that, in retrospect, was an obvious mistake.
I try to tell people: "You don't need AI security, you need a checklist."
Colonial Pipeline reused passwords, shared passwords, used the same password for all VPN users, and failed to rotate it when people left (that's four insanely basic violations of password security). ANY human who did a security review would have caught that. Even an intern who knew nothing and furiously googled "information security review" on the bus on the way in to kick off the review. (No disrespect to interns in over their heads; my point is they didn't prioritize security, so they didn't get security.)
Capital One used an admin-privileged instance profile attached to a publicly accessible admin interface for a security tool (which tool, by the way, had no need of admin credentials). They were hit by an SSRF vuln and leaked their admin credentials. They also failed to alert on unexpected use of those credentials (try it: admin credentials are used rarely enough that you won't get a lot of noise) and failed to alert on large outbound connections (this one is subtle, but worth doing if you can figure it out).
Equifax failed to apply security updates regularly (just turn on automatic security updates; people suck at chores), failed to deploy a SIEM, failed to conduct periodic security reviews, and failed to put capable security people in place.
The above are not my clients, just public reports to illustrate that everyone can benefit from a security review to catch the obvious errors.
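On the Capital One point about alerting on unexpected admin credential use: this doesn't need fancy tooling. Below is a minimal Python/boto3 sketch of the idea; the region, lookback window, and the bare print-as-alert are placeholder assumptions, not anyone's actual setup.

    # Rough sketch: scan recent CloudTrail events for root-credential use
    # and complain loudly. Region, lookback window, and the "alert" step
    # are placeholders -- wire it to your own pager/SNS topic.
    from datetime import datetime, timedelta, timezone
    import boto3

    cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")
    since = datetime.now(timezone.utc) - timedelta(hours=1)

    events = cloudtrail.lookup_events(
        LookupAttributes=[{"AttributeKey": "Username", "AttributeValue": "root"}],
        StartTime=since,
    )

    for event in events.get("Events", []):
        # Root activity should be rare enough that every hit is worth a look.
        print(f"ALERT: root credential used: {event['EventName']} at {event['EventTime']}")

Run something like that on a schedule and route the output to whatever pages you, and you've covered the "unexpected admin use" alert with almost no noise.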
One of the issues with Knight Capital was that they forgot about a server running an old bit of code; they shut down all the new ones, which just sent all the data to the old server that was causing all the problems. Not keeping track of that server was very expensive.
Did that with a printer that came up on an audit at a hospital once. IT director told me to go to X site to find it based on its IP in the schema and I just cranked out a job to it that said to call my extension. Two minutes later the phone rang...
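For anyone wondering how the "print a note to the mystery printer" trick works: most network printers accept raw jobs on TCP port 9100, so a few lines like the sketch below are usually enough. The IP, the extension, and the assumption that the printer will render plain text are all made up for illustration.

    # Minimal sketch: push a raw text job to a network printer on port 9100.
    # The IP and message are placeholders; many printers will print plain
    # text sent this way, though some expect PCL/PostScript instead.
    import socket

    PRINTER_IP = "10.20.30.40"   # hypothetical printer address
    MESSAGE = "If you can read this, please call extension 1234.\r\n\f"

    with socket.create_connection((PRINTER_IP, 9100), timeout=5) as s:
        s.sendall(MESSAGE.encode("ascii"))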
Ah “net send” - I remember getting a friend in trouble in high school for telling him how to use it.
He sent one to “*” saying something about the FBI or some such, and evidently it ended up reaching computers across the entire local school system (not just our public school).
He was called out of class days later after they looked up the IP and library computer access logs.
This should really only ever happen with wireless connections. You should always be able to tell what switchport a computer is connected to and work from there.
Back in the 90s, I recall something similar. Due to the cost of hardware, networking wasn't as hi-tech as it is now. It was common for medium-sized office buildings to have CAT3/5 cables trunked from everywhere in the building down to a central patch room, where there would be thousands of patch ports and patch cables stringing everywhere into discrete hubs with no onboard management. To trace a connection you'd have to start with the wall or floor port number that the end device was plugged into, hope it was mapped correctly to a patch panel/port number in the patch room, and then manually trace the patch cable from there onwards to the hub, etc.
The whole system falls apart when you have no idea where in the building the end device is, if you are lucky there may be a managed switch on the network route somewhere that may help you narrow down the location somewhat.
So yes, it did happen sometimes that the only way to find a box was to send a desktop alert and hope the admin of that box contacted you.
And then? The cable disappears into a wall together with hundreds of other cables (which most likely are not labeled, or not labeled correctly; otherwise you wouldn't have lost the machine in the first place).
It is completely irresponsible and inexcusable for any network operator/owner not to know what each and every cable connected to a switch or router does. If the owner refuses to determine this, they are responsible for any nefarious device on the network until they do. Wireless makes this much more complicated, so any responsible admin will ensure the wireless network is completely isolated from the physical network and is privileged to access only the internet or separate devices.
This is a silly take.
People and orgs have a million reasons why their cables might be unlabeled. Shame on you for binary thinking without considering real world confounding factors.
It’s the thinking of someone who has only worked as places that are three years old and the person who built out the network still works there.
If you’re hired because the old person didn’t follow basic maintenance procedures, you’re still ignorant until you rewire or trace the whole company’s network.
What I am hearing is that it is not practical to expect network admins to be in control of their networks, and consequently it is not practical to ensure no malicious devices are plugged into enterprise networks. Just because it's difficult to do doesn't mean it shouldn't be done.
What you should be hearing is that it’s not necessarily irresponsible for somebody not to know something when they are inheriting a system, and that it’s totally reasonable to expect to encounter poorly done systems in the real world that need someone to fix them.
It’s often the case that somebody slapped something together in an area that wasn’t their expertise, it’s been noticed that it’s a real problem, and someone has been hired to fix that problem. The “not knowing” is often the reason they’ve been hired. Trying to sort out a real world scenario (while also handling other needs of the org) is almost definitionally Taking Responsibility. So let’s not shit on people trying to cleanup a bad situation by calling them irresponsible for not knowing.
Suif, you have a lot to learn my friend.
First is speaking in such absolutes.
The more senior I get, the more I realize that when I see something broken, there are often a multitude of reasons things are the way they are, and many times those reasons are valid.
Taking a beat before pontificating and making a fool of yourself will save a ton of heartache in your career.
When you see something so broken, ask yourself why? Then ask somebody else.
Some highlights from my career:
1) Last guy got cancer in the middle of a build.
2) Last guy worked his way up from one man help desk to Linux guru over 15 years all on his own, but was so busy putting out fires, he never had the chance to improve things.
3) Project started out as a proof of concept and was intended to be torn down.
4) Due to government contracts, the system had to be maintained exactly as delivered: no labels even allowed, and obviously no IT staff (?!) to make spreadsheets. Everything was paper notes kept by operators.
5) Pure laziness and incompetence as you alluded to.
All this to say: more often than not there is a good reason something is fucked up, and finding out why may help you fix it (as in the case of politics, budget issues, firefighting, priorities, etc.).
Customer site, big insurance company. They started documenting cables and labeling them to get rid of old, faulty documentation. Halfway through, their security department forced them to stop. Why? If an attacker gained access to the documentation, he would have all the information he needed. So they had three types of cables: old ones with faulty labels, cables with correct labels, and unlabeled cables. And then there was me, in the server room at 3 a.m., tracing a cable by pulling up floor tiles because the cable was handmade and the RJ45 plug wouldn't fit into the new switch we installed that night.
> Halfway through, their security department forced them to stop. Why? If an attacker gained access to the documentation, he would have all the information he needed.
Some IT security departments have very confused ideas.
We moved into a building where the drop-ceiling had pretty much every generation of cable, going back to Twinax used by IBM 5250 terminals. Previous tenants had cut the connectors off and just shoved them up there when they moved out.
Network documentation in this case? No way. The only option is to pull it all out for recycling, and start over.
One of the many reasons I dislike the push towards wifi/wireless for everything. It makes my hair stand on end to see people using wireless keyboards (which people usually keep for at least 5 years). People seem so disgusted when you even suggest that these things are inherently bad ideas that will inevitably lead to consequences, and they immediately push you into the naysayer/anti-progressive category, verbally or silently.
I just recently learned that Logitech Unifying receivers were susceptible to "mousejacking"[1] for years before a firmware update fixed it in 2016. There are probably still many non-updated receivers out there.
To a mildly capable and somewhat determined attacker (who can get relatively close to you) this means your keyboard is probably readable from the radio signals.
A physical keystroke logger, if you want to think of it that way.
I have a few commercial-grade WAPs, but they are about four years old and do not do MIMO. I wonder if any of the current hardware records RTT to sufficient accuracy so the distance from the antenna to the client is recorded/available. I also wonder if the phased-array antenna processor records the vector to the client. Such information is available from the hardware, but can anyone tell me if ANY WAP vendors are providing it via their management interface?
Such features could alleviate some of the parent poster's concerns.
Have you considered that using a wireless keyboard and other tech is OK under their threat model? I use one at home and I honestly can not see any downside to it.
Some wireless keyboards don't bother with any kind of protection of the data stream between the keyboard and the wireless receiver. That's the most obvious case of a bad keyboard. However, these days most wireless keyboards do use some kind of encryption on the pairing between the keyboard and the receiver, so that is a bit of a moot point.
Even if the data stream itself is encrypted, there's still a little bit of data leakage. Your keyboard isn't constantly sending data; it really only chirps when there's an actual keypress event. So if you look at the actual physical RF, you'll notice patterns related to the user's typing. There is some research into guessing key presses based on typing cadence, although I'm not sure exactly how effective it really is.
I say all of this typing on a Logitech Unifying keyboard, and I routinely use Bluetooth keyboards. As others have mentioned, it really depends on your threat profile, and in the case of wireless keyboards you probably aren't near the level where this paranoia is justified. Are you typing state secrets that a foreign government body really wants, in a public place? Probably want to have a wired keyboard... or maybe just don't type such things in such places. Are you typing out a comment on Hacker News in a private space? Probably nothing to worry about with a wireless keyboard.
Switch port? Jump back a few decades and try combined kilometers of shared coax runs that effectively become embedded into a building over years of redecoration...
This was roughly 19 years ago and my department was not in any way involved with the networking.
Sure, in an ideal world that would be possible - but we didn't even have access to the switches. So either it's trying to hunt down the other department in another building who /might/ solve that riddle in an unspecified amount of time... or just do it :)
I've had something similar happen to me. I was freaking out that there was something I did not know on my network, as I was going through some router configurations. Searched my office, Bride's office, asked my kid - nothing. I had a Pi connected to the back of a TV, drawing power and connected to my network. It bothered me for months that _something_ was there, in my house, that I had completely forgotten was mine. Christmas time rolls around and we try to plug the kid's new console into the wall-mounted TV... and there it is, taped to the back of the monitor.
I work for a company with around 50k machines globally... one time we discovered a machine that was supposed to have been decommissioned five years prior still sitting on the network, just waiting to do its job. We ended up scanning our entire IP space and finding 10-20 other machines in the same state.
We now have a process that routinely scans our entire IP space for machines that somehow get lost from our inventory system.
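The sweep-and-diff doesn't have to be fancy, either. A minimal sketch of the idea, assuming a made-up subnet, a one-IP-per-line inventory file, and Linux ping flags:

    # Rough sketch: ping-sweep a subnet and report hosts that respond but
    # aren't in the inventory list. Subnet and inventory path are placeholders.
    import ipaddress
    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    SUBNET = "10.0.0.0/24"                 # hypothetical range
    INVENTORY_FILE = "inventory_ips.txt"   # one known IP per line

    def alive(ip: str) -> bool:
        # -c 1 -W 1: one probe, one-second timeout (Linux ping flags)
        return subprocess.run(
            ["ping", "-c", "1", "-W", "1", ip],
            stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
        ).returncode == 0

    known = set(open(INVENTORY_FILE).read().split())
    hosts = [str(ip) for ip in ipaddress.ip_network(SUBNET).hosts()]

    with ThreadPoolExecutor(max_workers=64) as pool:
        for ip, up in zip(hosts, pool.map(alive, hosts)):
            if up and ip not in known:
                print(f"Unknown live host: {ip}")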
When I first read that back in the day I thought how absurd and improbable it sounded because of how big computers were at the time. Now that raspberry pis and arduinos with wifi are a thing it seems almost inevitable.
I was looking at my network today and I realized I didn't know what one of the devices on my network was. I knew its IP, but it had no hostname and a randomized MAC. And for the life of me I couldn't remember what it was, even though I knew which room it was in! (by the AP/signal strength)
I had to use my firewall to monitor the network traffic of the IP to determine what the device was. It turned out to be a long-forgotten smartwatch collecting dust on a charger tucked away somewhere.
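If you don't have a convenient firewall to watch from, the same kind of identification can be done from any machine on the same segment. A small scapy sketch like this (the IP is a made-up placeholder, and it needs root) prints a one-line summary of whatever the mystery device is talking to, which usually gives it away:

    # Quick sketch: sniff traffic to/from a mystery device and print a
    # one-line summary per packet. Needs root and scapy; IP is a placeholder.
    from scapy.all import sniff

    MYSTERY_IP = "192.168.1.57"   # hypothetical device address

    sniff(filter=f"host {MYSTERY_IP}",
          prn=lambda pkt: print(pkt.summary()),
          store=False)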
I think the modern version of this is forgetting where a script, cron, lambda or whatever is running from.
I have something that sends me an occasional email. I haven't needed it in years, but it's not in any of the AWS regions I remember ever using. Nor is it in the obvious places I might have put it while playing around with Azure or Google Cloud or whatever. I'm sure I could find it if I really tried, but it only emails me once or twice a year, so I just let it be.
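If it does turn out to live in AWS, a brute-force sweep over every enabled region with boto3 (a rough sketch below, with nothing in it specific to any real account) would at least tell you whether it's a forgotten Lambda:

    # Rough sketch: list every Lambda function in every enabled region,
    # which is usually enough to find a forgotten email-sender.
    import boto3

    regions = [r["RegionName"] for r in
               boto3.client("ec2", region_name="us-east-1").describe_regions()["Regions"]]

    for region in regions:
        lam = boto3.client("lambda", region_name=region)
        paginator = lam.get_paginator("list_functions")
        for page in paginator.paginate():
            for fn in page["Functions"]:
                print(region, fn["FunctionName"])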
<erno> hm. I've lost a machine.. literally _lost_. it responds to ping, it works completely, I just can't figure out where in my apartment it is.
[1]: http://bash.org/?5273