[This was originally written for Dave Farber's IP list.]

I take some of the blame for helping to spread "no security through obscurity," first with some talks on COPS (developed with Dan Farmer) in 1990, and then in the first edition of "Practical Unix Security" (with Simson Garfinkel) in 1991. None of us originated the term, but I know we helped popularize it with those items.

The origin of the phrase is arguably from one of Kerckhoffs's principles for strong cryptography: there should be no need for the cryptographic algorithm to be secret, and it can be safely disclosed to your enemy. The point there is that the strength of a cryptographic mechanism that depends on the secrecy of the algorithm is poor; to use Schneier's term, it is "brittle": once the algorithm is discovered, there is no (or minimal) protection left, and once broken it cannot be repaired. Worse, if an attacker manages to discover the algorithm without disclosing that discovery, then she can exploit it over time before it can be fixed. (A toy sketch contrasting the two designs appears at the end of this post.)

The mapping to OS vulnerabilities is somewhat analogous: if your security depends only (or primarily) on keeping a vulnerability secret, then that security is brittle; once the vulnerability is disclosed, the system becomes more vulnerable. And, analogously, if an attacker knows the vulnerability and hides that discovery, he can exploit it when desired.

However, the usual intent behind the current use of the phrase "security through obscurity" is not correct. One goal of securing a system is to increase the work factor for the opponent, with a secondary goal of increasing the likelihood of detecting when an attack is undertaken. By that definition, obscurity and secrecy do provide some security because they increase the work factor an opponent must expend to successfully attack your system. The obscurity may also help expose an attacker, because it will require some probing to penetrate the obscurity, thus allowing some instrumentation and advance warning.

In point of fact, most of our current systems have "security through obscurity," and it works! Every potential vulnerability in the codebase that has yet to be discovered by (or revealed to) someone who might exploit it is not yet a realized vulnerability. The problem occurs when a flaw is discovered and the owners/operators attempt to maintain (indefinitely) the sanctity of the system by stopping disclosure of the flaw. In those cases, there is little or no danger to the general public UNTIL some yahoo publishes the vulnerability and an exploit far and wide.
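To make the contrast between a secret algorithm and a secret key concrete, here is a minimal sketch. The toy ciphers below are hypothetical illustrations of my own devising (they are not from the original post, and the keyed construction is not production cryptography): a Caesar-style cipher whose only secret is the algorithm itself is broken forever by 26 guesses once the design leaks, while a published algorithm with a replaceable key survives full disclosure of how it works.

```python
import hashlib

SECRET_SHIFT = 13  # the "algorithm" itself is the secret; there is no key


def secret_algorithm_encrypt(plaintext: str) -> str:
    """Caesar shift: 'secure' only while the design stays hidden."""
    return "".join(
        chr((ord(c) - 97 + SECRET_SHIFT) % 26 + 97) if c.islower() else c
        for c in plaintext
    )


def break_secret_algorithm(ciphertext: str):
    """Once the algorithm family is known, 26 guesses break every message
    ever sent, and the scheme cannot be repaired, only replaced."""
    for shift in range(26):
        yield "".join(
            chr((ord(c) - 97 - shift) % 26 + 97) if c.islower() else c
            for c in ciphertext
        )


def keyed_encrypt(data: bytes, key: bytes) -> bytes:
    """Public algorithm, secret key: XOR with a SHA-256-derived keystream.
    Disclosing this code does not help an attacker who lacks the key, and
    a leaked key can be rotated without redesigning the system."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(b ^ s for b, s in zip(data, stream))  # XOR is its own inverse


msg = "attack at dawn"
ct = secret_algorithm_encrypt(msg)
print(any(guess == msg for guess in break_secret_algorithm(ct)))  # True: brittle

key = b"rotatable secret key"
ct2 = keyed_encrypt(msg.encode(), key)
print(keyed_encrypt(ct2, key).decode())  # round-trips: the key carries the secrecy
```

The design point is Kerckhoffs's: the keyed scheme loses nothing when its source is published, and recovering from a compromised key is a key rotation rather than a redesign.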