Obscurity is the only security
There’s a common belief in the security world that obscurity shouldn’t be used as a layer of protection.
This line of thinking is based on Kerckhoffs's Principle, which states that the security of a cryptographic system should depend on its key, not on the secrecy of its design. When analyzing cryptographic primitives or performing any sort of system audit, letting auditors in on the details makes complete sense: skilled reviewers should spend their time hunting for novel weaknesses, not on layers that are merely intended to slow an attacker or to alert someone to an attack.
That said, there is much to be gained through properly applied obfuscation in deployed systems.
If there’s one thing that the history of cryptography has taught us, it’s that each system has a lifespan. Some of this is expected. Over time, RSA key sizes have grown as machines have increased in speed and power.
Yet experience shows that, given enough expert research and appropriate funding, some flaw will eventually be discovered that compromises a system. If not found in the design, flaws will be discovered in the implementation or one of its dependencies (think BIND or OpenSSL).
The implementation is always more susceptible to attack than the algorithms and protocols. Any system in frequent use by more than a dozen people could be breached by a patient nation-state that’s willing to expend serious resources to gain access.
We have seen this time and time again. Hushmail, one of the first email providers to have security as its main feature, eventually backdoored its client under subpoena. Google and Microsoft were discovered to be providing (sometimes without their own knowledge) various levels of access to the NSA through programs like PRISM.
We have also frequently seen that what is currently impossible or merely theoretical may soon become the favored tool of script kiddies. So when it comes to an audit, the trick is having a useful model for evaluating each layer, so that there is confidence in how long a system can be expected to delay certain classes of attackers. Security developers can then expend resources in line with those expectations, keeping in mind that complexity yields vulnerabilities.
Looking at a few trivial measures will give us a model. For instance, what does moving your SSH daemon to a different port give you? Someone wanting to scan the internet for all SSH daemons on port 22 would have to send four billion packets, minus known private IP space, RFC1918 addresses, bogons, and so forth; the BGP Report puts the advertised (and therefore routable) space at a little shy of 3 billion IPs. For each host that moved its SSH daemon to a randomly chosen port, an attacker would have to send 2^15 extra packets on average. Forcing the attacker to do tens of thousands of times more work per host is a big win.
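To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch. The host count, the uniform-random port choice, and the expected-probe formula are simplifying assumptions for illustration, not measurements.

```python
# Rough model of the scanning costs described above.
ROUTABLE_HOSTS = 2_700_000_000  # assumed routable IPv4 space, per the BGP Report figure
PORTS = 65_535                  # candidate TCP ports for a relocated daemon

# One SYN per host finds every daemon still listening on port 22.
baseline = ROUTABLE_HOSTS
print(f"port 22 sweep: {baseline:,} packets")

# If a host picks its port uniformly at random, an attacker probing ports
# in random order needs (PORTS + 1) / 2, about 2**15, probes on average.
expected_probes = (PORTS + 1) // 2
print(f"extra probes per relocated host: {expected_probes:,}")

# If every host relocated its daemon, total work grows by that same factor.
print(f"all hosts relocated: {ROUTABLE_HOSTS * expected_probes:,} packets")
```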
Yet not every host is going to do this. If only one in forty hosts randomly chooses a port, the attacker's work has barely gone up: the single-packet sweep of port 22 still finds 97.5 percent of all daemons, and the roughly 30,000 extra probes are only worth spending on the 2.5 percent of hosts that went silent on 22, which an opportunistic scanner will usually just skip. The total stays close to the original 2.7 billion packets.
This model really only helps against the least skilled attackers. For targeted attacks, it increases the attacker's work by only the number of servers you have times 30,000 packets. When the entire internet can be scanned in 10 minutes per port, that's inconsequential. An attacker can scan an entire class B network, all ports, in those same 10 minutes, and an attacker with a small botnet can cover every port on every machine in the same amount of time (as was seen a few years ago with the internet census that utilized compromised routers). The multiple does add up, but the search space is too small for it to make enough of a difference.
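The class B equivalence is plain arithmetic: the probe counts are identical, as this illustrative check shows (the fleet size below is a hypothetical number, not taken from the text).

```python
# A /16 across all ports costs exactly as many probes as the whole
# IPv4 space on a single port, so the scan time is the same.
internet_one_port = 2**32          # every IPv4 address, one port each
class_b_all_ports = 2**16 * 2**16  # 65,536 hosts x 65,536 ports
assert internet_one_port == class_b_all_ports

# For a targeted attacker, the defender's multiplier is tiny by comparison.
servers = 100                      # hypothetical fleet size
extra_probes = servers * 30_000    # per the model above
print(f"{extra_probes:,} extra probes vs {internet_one_port:,} per sweep")
```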
Another way to examine this is through lockpicking. Think of each pin in a lock as a layer of security. A lockpicker using a simple raking technique is effectively brute-forcing pin configurations until the correct one opens the lock. Advanced lockpickers instead manipulate each pin separately, using the lock's own feedback, until it opens. It's like a video game with checkpoints: you don't restart from the beginning. Attackers may want to pick your digital locks, but they should at least be forced to manipulate every pin.
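The gap in effort is easy to quantify. As a hedged illustration, assume five pins and ten possible depths per pin; both numbers are invented for the example, not real lock specifications.

```python
# Work to open a five-pin lock under two attack models.
pins, depths = 5, 10           # illustrative values only

# No feedback: in the worst case every combination must be tried.
brute_force = depths ** pins   # 100,000 attempts

# Per-pin feedback (the checkpoint effect): pins are solved independently.
with_feedback = pins * depths  # 50 attempts

print(f"all-at-once: {brute_force:,} attempts vs pin-by-pin: {with_feedback}")
```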
The layers do serve a purpose. Some of these mechanisms are like a Nightingale Floor, used centuries ago to alert temples and palaces to an intruder. Others are like barbed wire or a floor covered in tacks: easily handled by a prepared attacker, but effective against more opportunistic or time-constrained intruders. The best security measures are invisible until tripped, relying on the equivalent of tribal knowledge shared among your team. This could be something that causes a system to fire an alert and drop all further traffic if anyone runs the ‘w’ or ‘ps’ command in a shell.
The first time an attacker does that, they’ll know not to do it again, but you’ve already been alerted and are watching out (and you’ve still got other mechanisms like that for the next time).
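As a hedged illustration of such a tripwire, a wrapper like the following could stand in for the real `w` binary. Everything specific here is an assumption made for the sketch: the relocated path of the real binary, syslog as the alert channel, and iptables (run as root) for the traffic drop.

```python
#!/usr/bin/env python3
"""Hypothetical tripwire installed in place of /usr/bin/w. A sketch only,
not a hardened or portable implementation."""

import os
import subprocess
import sys
import syslog

REAL_BINARY = "/usr/bin/.w.real"  # assumed location of the relocated real `w`

def alert_and_drop() -> None:
    # SSH_CONNECTION holds "client_ip client_port server_ip server_port".
    source = os.environ.get("SSH_CONNECTION", "local").split()[0]
    syslog.syslog(syslog.LOG_ALERT, f"tripwire: `w` executed, source={source}")
    if source != "local":
        # Drop all further traffic from the intruder (assumes iptables and root).
        subprocess.run(["iptables", "-I", "INPUT", "-s", source, "-j", "DROP"],
                       check=False)

if __name__ == "__main__":
    alert_and_drop()
    # Run the real command so the output looks normal until the drop lands.
    os.execv(REAL_BINARY, [REAL_BINARY] + sys.argv[1:])
```

Dropping the connection mid-command is itself a signal, of course; the point is that the attacker only learns the rule exists by tripping it.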
All security mechanisms are, at some point, ultimately based in obscurity. There are multiple kinds worth considering, and it is up to each of us to determine whether we want the ultimate in obscurity or the metaphorical equivalent of a floor covered in tacks.
Jonathan Wilkins is a 22-year veteran of the information security industry and an expert in both offensive and defensive techniques. Over the past two decades, he’s helped Microsoft, MySpace, Zynga, Yelp, and dozens of other Fortune 500 companies secure their systems. He currently serves as Chief Security Officer for Blockstream, Inc.