An FBI agent from the Cyber Crimes division gave a talk while I was in college (>10 years ago). He was interested in brute force attacks against SSH daemons and formed a couple of hypotheses about the number of login attempts and common passwords. To test this, he set up two honeypots to record all of the username/password attempts. The first one listened on the standard SSH port 22; the other listened on a random high-numbered port. He left both of these running for ~6 months.
Results:
The honey pot listening on standard port 22 received 1,000s of login attempts (sorry, don't remember the exact number). The honey pot listening on the random high-numbered port received exactly 0.
I know this is just an anecdote and it might not necessarily be true today, but this experiment always sticks in my head. At least the guy used the scientific method: created a hypothesis, conducted the experiment, analyzed his results.
What I've found these days monitoring my own network is that there are now two waves -- a port scan and then the attack.
If I move any service to another random port, I won't get any login attempts for a few days, but eventually I start getting hit again. I can repeat this over and over. I imagine what is happening is that the bad actors scan for open ports and periodically feed the results to another process that attempts logins.
The second wave is likely when public port scanning services such as Shodan re-scan your host. (I wonder how hard it would be to fingerprint and subsequently blackhole Shodan et al.'s scanning traffic.)
Anecdotally, I have a machine with SSH exposed on a high-numbered port. I get brute force attempts against it on a regular basis, just way fewer than when I ran it on the standard port. Security by obscurity is just one of the steps I take with that machine. Using a high port number is dead simple and easily handled client-side too, so I just do it.
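For anyone wondering how little effort the client-side part takes, here's a minimal sketch; the port 30022 and the host alias "mybox" are made-up examples. Remember to allow the new port through any firewall and reload sshd before closing your existing session.

    # /etc/ssh/sshd_config (server side) -- 30022 is just an example port
    Port 30022

    # ~/.ssh/config (client side) -- "mybox" and the hostname are placeholders
    Host mybox
        HostName mybox.example.com
        Port 30022

With the client entry in place, a plain "ssh mybox" works and you never have to remember the -p flag.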
> It’s where you keep the mechanism secret, not the key.
I think this can be, as you write, defense in depth if the secret of the mechanism is not the only defense.
As an example, the block cipher of the Common Scrambling Algorithm https://en.wikipedia.org/wiki/Common_Scrambling_Algorithm was kept secret.
That seems to have delayed analysis of the system by about 8 years, but it did not weaken the scheme itself.
Technically defense in depth refers to multiple effective security measures (like cryptographic login), so security by obscurity isn't actually part of it.
(Moving SSH port plus something like fail2ban could be considered defense-in-depth against the incidental DDOS-like issues, though.)
Also anecdotally, I've been running SSH on the same 30xxx port since ~2004, including a cluster, which ran a public-facing service that was a popular target for various forms of abuse.
I recently tried changing the SSH port to cut down on log noise. It certainly helped a little, but bots quickly found the new port and started brute-forcing it, so in the end it only reduced the noise rather than eliminating it. And since I don't see much difference between 100,000 attempts and 1,000 attempts, I moved it back. I don't care about brute force anyway; my passwords are not "root:root".
Let the server and client share a secret. Use that secret to encrypt the UTC date (2020-09-21), and sample some decimals from the first few bits (adding 100 or so, to avoid low-ports).
You could use that mechanism to rotate ports every 24 hours. This way, the bots wouldn't be able to learn the ssh port for more than 24 hours, without the shared secret.
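A minimal sketch of that idea in Python, assuming both ends hold the same pre-shared secret. Here the HMAC of the UTC date is folded into the unprivileged port range instead of literally adding 100:

    # Derive today's SSH port from HMAC-SHA256(secret, UTC date).
    # SECRET is a placeholder; share the real one out of band.
    import hmac, hashlib
    from datetime import datetime, timezone

    SECRET = b"change-me"

    def port_of_the_day(secret: bytes = SECRET) -> int:
        today = datetime.now(timezone.utc).strftime("%Y-%m-%d")   # e.g. "2020-09-21"
        digest = hmac.new(secret, today.encode(), hashlib.sha256).digest()
        # Fold 16 bits of the digest into the unprivileged range 1024-65535.
        return 1024 + int.from_bytes(digest[:2], "big") % (65536 - 1024)

    print(port_of_the_day())   # server and client compute the same value each day

The server would rewrite its Port directive and reload sshd once a day; the client runs the same function to pick its -p argument.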
Sounds like fun, or an easy way to lock yourself out of a box by mistake, depending on your perspective. :)
Or use a TOTP with a long period (10 minutes?) and use that value mod, say, 10k with a base of something like 9000. Easy to calculate the port in your head, impossible to guess without knowing the TOTP secret (I think), and it can be extended with other fun rules like "but subtract 10× the first digit" or "add the first and second digits multiplied".
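A rough sketch of that calculation with a hand-rolled RFC 6238 TOTP (standard library only). The base32 secret here is a placeholder, and the 10-minute period, the mod 10k, and the base of 9000 are just the example values from above:

    import base64, hashlib, hmac, struct, time

    SECRET_B32 = "JBSWY3DPEHPK3PXP"   # example secret; use your own

    def totp(secret_b32: str, period: int = 600, digits: int = 6) -> int:
        # Standard TOTP: HMAC-SHA1 over the time-step counter, then dynamic truncation.
        key = base64.b32decode(secret_b32, casefold=True)
        counter = int(time.time()) // period
        mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
        offset = mac[-1] & 0x0F
        code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
        return code % (10 ** digits)

    def current_port(secret_b32: str = SECRET_B32) -> int:
        return 9000 + totp(secret_b32) % 10000   # lands somewhere in 9000-18999

    print(current_port())

The extra "fun rules" would just be more arithmetic on top of the same value, applied identically on both ends.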
That's a fair point - e.g. Authy is currently broken on iOS 14, which means I'd be locked out if I were doing this and using Authy (although I have a VPN to one server which can then get to my others).
When you have a large base of installations in a big organization, this can make a difference in practice because your incident responders have to sift through less data. This makes much less of a difference when you have great log management and SIEM systems in place. Many places don't, and some hygiene can make a difference at times.
When I see this in practice, the first thing I check is how auth is being done and the overall security of the host. Then I look at how they are doing SIEM, because cleaner logs are a common justification and they'd usually be better off with a more proactive log management approach.
This matches my observations as well. For 20+ years I have run SSH on a high port, with the exception of my SFTP server. The SFTP server is hit every day, all day. I have received zero hits on the SSH port of all my other servers. Even if attackers hit that port, they would not see anything, as I use a poor man's port knocking scheme built on iptables string matching (a sketch of the idea follows below), but I would still see the attempts in the iptables counters, and those are always 0.
FWIW, when I chose my port, I looked at port scanning statistics back in the day, looking for the least scanned ports. It appears those stats have held true for a couple decades at least.
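The "poor man's port knocking with string matching" mentioned above can be approximated with something like the following sketch. The knock port 1234, the magic string, and the 60-second window are arbitrary placeholders, and this is an illustration of the idea rather than a hardened setup:

    # A UDP packet to port 1234 containing the magic string marks the sender as "knocked".
    iptables -A INPUT -p udp --dport 1234 -m string --string "open-sesame" --algo bm \
             -m recent --name sshknock --set -j DROP

    # Only recently "knocked" sources may reach sshd; everyone else sees nothing.
    iptables -A INPUT -p tcp --dport 22 -m recent --name sshknock --rcheck --seconds 60 -j ACCEPT
    iptables -A INPUT -p tcp --dport 22 -j DROP

The per-rule counters the commenter watches are the ones shown by iptables -L INPUT -v -n.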
If the attackers are using botnets to distribute the load across IPs, then perhaps we need to distribute the detection across IPs: https://www.abuseipdb.com/fail2ban.html
I used to manage VoIP systems and VoIPBL[0] was amazing.
"VoIPBL is a distributed VoIP blacklist that is aimed to protects against VoIP Fraud and minimizing abuse for network that have publicly accessible PBX's"
It's very similar to what you linked but is targeted to catching VoIP abuse.
I think the concern with a botnet of n IPs is that fail2ban tracks individual IPs, so if you have any kind of grace period before bannination, they get a linear speedup of n, and if there's an expiration period, they get to try n times harder than a single bored script kiddie.
Worse, from an economic perspective, there are enough hosts listening on port 22 that a bot can try those instead while it waits out your timeouts, so you're not really imposing a cost on them. If you view running a botnet as a form of multi-armed bandit problem, the best you can really do is limit the economic value by slowing them down a tad versus their many, many other options.
But as soon as you find out that "PermitRootLogin" can be set to "no", all those brute force attempts become useless, since they can't match a valid user/password combination.
fail2ban has other uses: it prevents non-root user error (oops, one of your contractors reused a password…), it significantly reduces log noise, and it protects against any future exploit which doesn’t always work on the first 3 tries.
But for my usage it increases memory usage. I'm running it on an OrangePi Zero with 256MB of RAM.
Port 22 is open to the world, so anyone can connect. The device has two users: root and jacob. I made one change and disabled root login from the WAN; root can now only log in from the LAN.
Since no one knows that "jacob" exists, I'm saved.
Not necessarily, plenty of people have common / guessable user accounts. For example every one of my servers in the cloud has an account called "user". (All my servers are also key-only authentication, obviously.)
Ditto, a combination of forbidden root login, an obscure username, fail2ban and disabled password authentication has worked well for me for the last 10 years. It's also quite simple to set up. The important part is to double and triple-check each step so that you don't lock yourself out (which has happened to me multiple times in the past, of course).
The entire point of everyone who rants against non-standard SSH ports is that it adds more noise to the signal of "You should disable password login entirely and only use public/private key authentication."
And reading many of the responses in this thread, it makes sense to me. So many people here talk about how many fewer failed password login attempts they get when they change the port, which indicates they allow password-based login in the first place.
If you expect to be hit by that kind of attack (simple combination of username/password), then you should protect yourself from that kind of attack. It's never been easier nowadays to do this.
You may answer that you could still miss a few of these simple passwords and that your solution would be more effective. Sure, but then you are using security by obscurity to protect yourself.
By the way, security by obscurity does work; it's not bad per se, and as that FBI agent showed, it does have an effect. If it didn't, there wouldn't be so many cases where it's used. The issue with security by obscurity is when you rely on it to protect against vulnerabilities and then ignore them. It only lowers the likelihood of getting attacked; it doesn't make attacks less effective, and it doesn't protect against any vulnerabilities.
Sadly, too many times, we just ignore that, hide everything, and hope to avoid a targeted attack, which would foil the obscurity pretty quickly. This is when it gets bad.
I have the same experience with non-standard port usage, and I think it's a very reasonable thing to do, while also caring for the security of the service behind that socket. SecOps will thank you for not having to wade through log spam in the endeavor of preventing attacks.
> I know this is just an anecdote and it might not necessarily be true today, but this experiment always sticks in my head. At least the guy used the scientific method: created a hypothesis, conducted the experiment, analyzed his results.
I don't need research to see the difference in how much logging journalctl generates immediately after I stop listening on port 22.
This is an interesting point. Imagine if you put a fake SSH server on 22 that responds just like SSH but never allows a login. Would it make it even less likely that someone would bother trying another port?
Depends what we mean by sandbox. I wouldn't make a chroot the honeypot, but I don't see an issue with a program that just simulates a shell but doesn't allow exec or real fs access, for instance.
I don't think you would ever let them touch OS-level resources. There are plenty of third-party SSH server libraries where you just get a Reader and a Writer to the remote end. When they connect, you write "root@cool-computer# ". When they send bytes, you discard them, then print "root@cool-computer# " again (a sketch of this follows below).
While obviously accepting a TCP connection and allocating resources on your computer is more risky than just ignoring the connection, presumably it would be fun to do this, which is a good reason for doing something. You can set a memory limit, file descriptor limit, etc. and just crash if they're exceeded. You can run your little fake ssh daemon in gvisor and protect against attacks nobody even knows about yet. All in all, it would be pretty low risk, and also pretty interesting.
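A rough sketch of that fake shell using the paramiko library. The port, the prompt, and the "accept any password" policy are illustrative choices, and this is a toy rather than a hardened honeypot:

    # Fake SSH "shell": accept any password, print a prompt, discard whatever is typed.
    import socket, threading, paramiko

    HOST_KEY = paramiko.RSAKey.generate(2048)   # throwaway host key
    PROMPT = b"root@cool-computer# "

    class FakeSSH(paramiko.ServerInterface):
        def check_auth_password(self, username, password):
            return paramiko.AUTH_SUCCESSFUL     # everyone "logs in"
        def get_allowed_auths(self, username):
            return "password"
        def check_channel_request(self, kind, chanid):
            return paramiko.OPEN_SUCCEEDED
        def check_channel_pty_request(self, chan, term, w, h, pw, ph, modes):
            return True
        def check_channel_shell_request(self, channel):
            return True

    def handle(conn):
        transport = paramiko.Transport(conn)
        transport.add_server_key(HOST_KEY)
        try:
            transport.start_server(server=FakeSSH())
            chan = transport.accept(20)
            if chan is None:
                return
            while True:
                chan.send(PROMPT)               # "shell" output
                if not chan.recv(1024):         # read and discard their input
                    break
        except Exception:
            pass
        finally:
            transport.close()

    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", 2222))            # 2222 to avoid needing root; 22 in the scenario above
    listener.listen(100)
    while True:
        client, _ = listener.accept()
        threading.Thread(target=handle, args=(client,), daemon=True).start()

Wrapping that in gvisor plus memory and file-descriptor limits, as suggested above, covers the "allocating resources on your computer" concern.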
Moving SSH to a non-standard port is helpful only for reducing the log noise from untargeted attacks. If an attacker is looking at your system, your attack surface is the same no matter where your SSH daemon is bound. I don't think it's worth any extra effort to "distract" attackers like this.
> It’s an useless anecdote because SSH bruteforce attempts are not a threat and cost you nothing.
I can say from personal experience that this anecdote is both accurate (20 years ago and up till today) and meaningful.
No idea where you get this notion that brute force attacks are no threat or without cost.
They certainly do pose a security risk (it takes only one insufficiently trained employee/intern for a potential breach) and they certainly come at a cost (way beyond just dirty logs).
Botnets and fast networking stacks like DPDK have made port scanning the entire Internet a much more viable proposition than 20 years ago. Depending on your sshd settings you can be effectively locked out of your machine by a brute-force attack. Running on IPv6 and/or having a secondary sshd instance that only accepts connections from whitelisted IPs is cheap insurance.
That doesn't invalidate the observation (which I share) that these attempts are almost 0 when using a different port. It reduces logspam and if I start getting lots of brute force attempts on my non-standard port, this is useful and meaningful information (someone cares enough to do this).
> Botnets and fast networking stacks like DPDK have made port scanning the entire Internet a much more viable proposition than 20 years ago
True indeed, yet even today I have seen little evidence of scanning beyond standard ports (pretty much the same as in the past). Criminals are opportunistic by default and tend to go for low-hanging fruit (standard ports, with standard server config). I certainly did see an increase on standard ports. Even while full-range scanning has become more feasible, I have not seen much evidence of its use.
> takes only one insufficiently trained employee/intern for a potential breach
How? If they've leaked their key, why do you assume the port hasn't leaked too? On the other hand if they haven't leaked their key how would they get in?
Or are you allowing password authentication like it's 1999?
> Or are you allowing password authentication like it's 1999?
That is assuming you have such authority or technical means. If you're maintaining systems for a company, there's a good chance that the product vendor simply won't allow fucking around with their system like that (ergo: yes, in practice you are indeed stuck with your 1999 authentication).
I'm not saying that it is good security (that's why layered security is often paramount), but it is a situation I've encountered more than a few times.
Great for you, if you are GOD on all the systems you work with. Even then, your client/employer might simply tell you to stuff your objections and accept the bad authentication policy, because to them the risks are simply not worth the business disruption. I totally agree that is a flawed argument. But decisions aren't always (if ever) made on valid arguments.
Good for you, if you are in a position where you never had to deal with such real life situations.
Heck, some cloud providers have password logins enabled by default. Since the instances are easy to set up, I'd imagine many companies operating in a no-ops situation are vulnerable and don't even know it.
And then there are side projects. I remember being educated enough to know better, but doing it anyway as the server was a $5 digital ocean droplet, used to run a tiny minecraft server for some friends. Got brute forced and spent the next two weeks red-faced, trying to get DO to allow network access again so I could at least grab a backup before nerfing the droplet.
Now I use a basic ansible setup to automate changes to sshd so I don't have any excuse to be stupid again.
Not sure what you're arguing here. You either have control over sshd or you don't. Or are you really suggesting you can change the port of sshd but aren't allowed to disable password auth?
I'm a software engineer, so if my company gets hacked via ssh that's really not my problem. Worrying about such things would make me a busybody. But if you're a system admin and can't properly do your job, then I would seriously start looking for a new place to work. They will get hacked and you will be the guy that gets blamed.
> Not sure what you're arguing here. You either have control over sshd or you don't. Or are you really suggesting you can change the port of sshd but aren't allowed to disable password auth?
First, you'll have to separate two things here. One is the technical ability to control sshd; the second is whether a company will allow you to tinker with the auth policy (whether that is password login, password login with only strong passwords, or rsa/ecdsa key access only).
The latter has nothing to do with control and only with what the decision makers allow you to do (sometimes that is a large product vendor, not allowing anything beyond what they ship). If you work in a place where you have full control over the systems you work on, great for you. I can assure you that it is not the norm (unless we're talking about hobby projects or projects with exclusive personal ownership).
As for the technical aspect, keep in mind that changing the public-facing SSH port might not even be done on the host itself, but e.g. in the port forwarding table of a router/firewall. And that isn't always because it's technically impossible to do on the box itself.
I'm pretty certain that tinkering with a box is regularly discouraged (especially if it is managed by some orchestration or vendor-specific control/update tool), while effectively the same can be done by changing a router/firewall. There's a lot more to be said about that, but please take it from me that hacking around in systems you have not built yourself isn't always a bright idea (and it happens to be a very common situation).
> But if you're a system admin and can't properly do your job, then I would seriously start looking for a new place to work.
That's an interesting theory, but frankly not how I think the real world (usually) works. As a system admin you are there to solve problems for a client or employer. You can (and should) of course always warn about potential dangers, but refusing work or quitting a job/assignment because you're not getting full control over a system .. good luck with that. It is simply not an acceptable position in many situations. You must be in really high demand if you want to pull stunts like those and still have any work after a while.
Maybe it works different in software engineering land, but I highly doubt it. When was the last time you quit a job, because you preferred a different library or framework over the one your superiors/client dictated?
Please don't get me wrong. On a personal level I'm very principled about what I choose to work on or with (and what I refuse to take part in). But at the end of the day we are professionals, here to solve problems. If we can, and a client/employer is willing to accept the risks of an imperfect solution that fits their requirements, it ultimately is their call and responsibility. All within reason, of course.
> tinker with the auth policy (whether that is password login, password login with only strong passwords, or rsa/ecdsa key access only).
Or the port ssh works on
> that sometimes is a large product vendor, not allowing anything beyond what they ship
Surely you'd limit access to that on an IP level and bounce via a bastion (which you do control)
> tinkering with a box is regularly discouraged (especially if it is managed by some orchestration or vendor specific control/update tool), while effectively the same can be done by changing a router/firewall
"Tinkering" with a router/firewall sounds far more dangerous than a box -- you can knock out 2000 machines in one go.
> That's an interesting theory, but frankly not how I think the real world (usually) works
If the shit hits the fan, are you confident your management (which apparently refuses to allow you to implement basic security policies) will have your back, or will they pile the entire blame on you to save their skin?
Better to look for a company that respects your skills before you get pink slipped.
Calling that part of the auth policy (in the context I was responding to) is a bit of a stretch, but okay.
> Surely you'd limit access to that on an IP level and bounce via a bastion (which you do control).
What percentage of organizations have you seen do it that way? In my experience it's more often directly behind an internet facing NAT router, through a port forward. I'm not saying that's a good thing, I'm saying it's reality.
> "Tinkering" with a router/firewall sounds far more dangerous than a box -- you can knock out 2000 machines in one go.
You again appear to be missing the point I tried to make. It's not so much about danger, but more about control. A box is regularly far more of a black box (especially if it's a vendor appliance or legacy system) than a company's router/firewall is. Sure, not without dangers, but that's why you're a professional who (hopefully) knows what he/she is doing. How often did you work on a router/firewall that controlled 2000 machines? In my case, I can count those on one or maybe two hands.
> If the shit hits the fan, are you confident your management (which apparently refuses to allow you to implement basic security policies) will have your back, or will they pile the entire blame on you to save their skin?
It works a bit differently if you're contracted or working for clients, but either way: that's why you document things and make clear to those who make the decisions that the risks are theirs and not yours.
--
But seriously though .. I'm not sure if you genuinely missed the point(s) I tried to make, if you might be pedantic on purpose (just for the sake of it), if you might be just another armchair general, or maybe have only worked in very privileged positions where you had full control and authority over the systems you had to deal with. The latter is certainly not the reality I've experienced for over two decades.
Maybe you are experienced, just in a very different reality/industry than mine. Still, I find these kinds of arguments about companies "not allowing you to do basic security" or "not respecting your skills" rather childish and out of touch with reality. I have not seen many gigs/companies where sysadmins (or even system architects) have this kind of god-like status. When I did see such situations, it often meant a company would have serious (potential) issues if/when their "guru" pissed off (leaving a collection of equipment in "status unknown", i.e. the next guy would not be allowed to touch anything, ergo my point about tinkering with boxes being discouraged).
How long have you been doing this (professionally)? That's not a rhetorical question. I'm genuinely curious.
Network administration (and system administration of about 150 linux machines) for about 10 years. I did a full port sweep of my network a few days ago, 1,555 IPs with port 80 open (although to be fair several of those are multi-IP connected). Before that 7 years of system administration and development.
My (my team's) network policy is that those web ports are not exposed on the internet - we provide proxies with 2 factor authentication up front. We find we get far happier users when we use carrots.
We operate a high-wall policy, and while we do push towards a secure-everywhere system, we are more flexible than other corporate networks, and tend not to have exacting requirements. Your black box wants to use SNMP v2? Of course it does, that's fine. No, you're not probing it from the internet though; we'll work with you to increase security.
If a team wants a device that claims to run a proprietary protocol and needs TCP ports open to the internet, that's fine, we do it. We discovered recently that one of these devices was actually running a standard webserver on one of those ports after a firmware update. The device's user didn't even know.
Ultimately we provide a network, you can take it or leave it, there's competition (go for one of the two non-shadow IT networks, or build your own).
I know from personal experience what happens when things go bad against my team's advice; all those emails saying "this will go bad" are worth jack squat. Fortunately we had air support for that specific event (front page national news); nobody cares about "I told you so".
From the context of your description, your earlier arguments certainly make more sense, more than they would in the situations I'm personally more familiar with.
My main conclusion here is that we are both referring to very different (work) environments.
For what it's worth, given the context you've described, I can well understand and support your arguments.
That said, I'm not so sure your context is representative of the industry as a whole. To be fair, I really don't know what would be. Only that we apparently have worked in very different environments with very different conditions and requirements.
Just to be clear: when I argued about warning about the risks of particular choices, I was never referring to internal company communications. Those are indeed worth little to nothing (if shit happens). I was referring to communication between separate legal entities. Within B2B contract work, such communications do quickly become crucial (if shit happens), even from a legal perspective.
I hope you can see that while your arguments do hold well within the world you know, they might not be so applicable or useful in other (certainly existing) situations. The world certainly is more diverse than the context you've given.
Kind regards, and thank you for answering my question.
> Within B2B contract work, such communications do quickly become crucial (if shit happens), even from a legal perspective.
Hah. Depends on how large the companies are, but again, in my experience all that B2B stuff is meaningless. Maybe it's only my company that's terrible at writing, measuring and enforcing service levels, and certainly awful at extracting any penalties (I guess because downtime doesn't have a direct monetary loss, just a reputation loss which eventually leads to monetary loss).
That said, my entire industry (broadcast) relies massively on IT - far more than in the past - and has absolutely no clue about security. In 20 minutes I found 130 of 200 devices on the internet with default credentials open on port 80. Case in point: using Shodan, I can find a server and see within seconds that a Polish broadcaster is currently streaming some people playing violins from a studio - not sure if this is a live TV broadcast or being taped for later; it might be going out on "Program 2" on Polskie Radio, but I'm not an expert in the Polish broadcast landscape.
I'm just amazed that anyone could be in a position to have the knowledge and authority to change the port SSH is listening on (thus breaking people's workflows), but not to move away from using passwords, even if a bastion and/or IP whitelisting isn't allowed.
They cause some real work because of the log noise they create. It's easier to see targeted SSH attacks if all the undirected attacks are filtered away.
This is absolutely true. I use fail2ban and I often find that it's using rather more CPU than I'd like. Sounds like moving my SSH port might solve that!
If you get 10,000 attempts on port 22, you're probably connected to the internet. If you get 10,000 attempts on port 63290, someone has taken a specific interest in you.
Personally? I'd decide the utility of having it public-facing is no longer worth the risk, and firewall it down to a much narrower set of source networks. I'd probably take a moment to brush up on my key hygiene too.
The fact that someone bothered to scan the entire range (or find your port at random) might indicate that they're specifically targeting you, and just being aware of that is an upside.
It shouldn't, but it does. Many smaller companies driven by business people, where tech is maybe just seen as a necessity on the side, need a narrative like "people are trying to get in, and if they do it's going to be a disaster" to take security seriously. That, or the point where the disaster actually strikes.
I'm not really sure why this point was voted down below either; just because you work for someone who takes security seriously (at least to the point where it's insurance-satisfyingly safe) does not mean everyone does.
Years ago I worked at a small agency and every bit of time I spent had to be justified and produce tangible/visible results. "But is anyone really going to try to hack this local business" was a question I actually had to answer, since most other employees were creatives.
There's a lot of value. For example, if you see failed logins against random user names like "dbadmin" or "root" it's likely just random scanning, but what if suddenly lots and lots of valid user names appear?
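As a rough illustration of that kind of check, here's a small sketch that counts failed SSH logins against accounts that actually exist on the box. The log path is the Debian-style default and an assumption; the location and format vary by distro and syslog setup:

    # Count sshd "Failed password" lines that target real local accounts.
    import pwd, re

    VALID_USERS = {p.pw_name for p in pwd.getpwall()}
    FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

    hits = {}
    with open("/var/log/auth.log") as log:          # assumed path
        for line in log:
            m = FAILED.search(line)
            if m and m.group(1) in VALID_USERS:
                hits[m.group(1)] = hits.get(m.group(1), 0) + 1

    for user, count in sorted(hits.items(), key=lambda kv: -kv[1]):
        print(f"{count:6d} failed attempts against existing account {user!r}")

A sudden spike against accounts that really exist is exactly the kind of signal worth investigating, while random "dbadmin"-style guesses are background noise.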
That's a great point, but I get back to the root question: who's actually looking at this? If people are examining logs it's usually for a particular trigger or a problem and filtering that signal from the noise is hard.
It's more typical of the servers-as-pets than servers-as-cattle scenario, but sometimes one is simply curious [or extra cautious]. SSH honeypots exist at least in part for this reason.
Well that would highly depend on what I'm seeing. If it's a single user there might be an attack on the way against that user. If it's multiple users, there might have been a compromise of some credentials.
It's definitely something you need to investigate.
Did you ever have the "pleasure" of a server grinding to a halt because the logs filled up all the space? To the point where you had to mount the disk on another system and clean it up before it would boot again. Can be a bitch if it's a machine at a remote location. Not everything is cloud (yet) these days.
Granted, there usually is a lot more at fault when you run into such problems, but I find people not looking at logs a rather weak argument for letting them get spammed full with garbage. Certainly terrible hygiene, at least.
>Did you ever have the "pleasure" of a server grinding to a halt because the logs filled up all the space?
I've never seen this issue on any systems I manage, mostly because they all have log rotation.
>but I find people not looking at logs a rather weak argument for letting them get spammed full with garbage. Certainly terrible hygiene, at least.
Why is it a weak argument? If it's something that doesn't materially impact you, why should you expend effort remediating it? Hygiene is only important for things we interact with on a regular basis. We as a society don't care about the hygiene of the sewer system, for instance.
>I've never seen this issue on any systems I manage, mostly because they all have log rotation.
Ah, yes - the age-old claim that log rotation will magically stop a belligerent from dumping 100s of gigs of log files before `logrotate` has time to run ... filling up your disk
And even if logrotate did try to run, you have no space for the compressing file to live while it's being made
> Everyone should be using logrotate, and if they actually read the things, shipping logs to ELK or Splunk or Greylog or whatever.
Certainly they should. That is, if they have that much control over the server and if it's not some legacy system built by some defunct organization or John Doe. I do not disagree with your theory; on the contrary. But then there is reality, where this theory isn't always feasible.
> Keeping the log file on the boot partition was the first mistake
Wrong assumption. With logs on a full system (but not boot) disk, your system can still grind to a halt during boot. Sure, if you do have access to the bootloader, you can do an emergency/recovery boot. But you do not always have that on systems built by others (especially product vendors).
I would not be making this point if I had not run into situations where this was an actual problem. I can assure you it was never the result of my personal bad architecture or maintenance and almost exclusively while dealing with third party products.
It would be valid to argue they should get their shit together, but the reality is that at the end of the day, companies buy systems like these and you still will have to deal with them.