In cryptography and computer security, security through obscurity (sometimes security by obscurity) is a controversial principle in security engineering that attempts to provide security through secrecy of design, implementation, and similar details. A system relying on security through obscurity may have theoretical or actual security vulnerabilities, but its owners or designers believe that the flaws are not known and that attackers are unlikely to find them. The technique stands in contrast with security by design, although many real-world projects include elements of both strategies.
There is scant formal literature on the issue of security through obscurity. Books on security engineering tend to cite Kerckhoffs' doctrine from 1883, if they cite anything at all. For example, in a discussion about secrecy and openness in nuclear command and control:
In the field of legal academia, Peter Swire has written about the trade-off between the notion that "security through obscurity is an illusion" and the military notion that "loose lips sink ships", as well as how competition affects the incentives to disclose.
The principle of security through obscurity was more generally accepted in cryptographic work in the days when essentially all well-informed cryptographers were employed by national intelligence agencies such as the NSA. Now that cryptographers often work at universities (where researchers publish many, perhaps even nearly all, of their results and publicly test others' designs) or in private industry (where results are more often controlled by patents and copyrights than by secrecy), the argument has lost some of its former popularity. An early example was PGP, released as source code and generally regarded (when properly used) as a military-grade cryptosystem. The wide availability of high-quality cryptography was disturbing to the US government, which seems to have used a security-through-obscurity analysis to support its opposition to such work. Indeed, such reasoning is very often used by lawyers and administrators to justify policies designed to restrict high-quality cryptography to those authorized to use it.
As mentioned above, in cryptography the argument against security by obscurity dates back at least to Kerckhoffs' principle, put forth in 1883 by Auguste Kerckhoffs. The principle states that the design of a cryptographic system should not require secrecy and should not cause inconvenience if it falls into the hands of the enemy. This principle has been paraphrased in several ways:
If it is true that any secret piece of information constitutes a point of potential compromise, then fewer secrets make for a more secure system. Systems that rely on secret design or operational details, apart from the cryptographic key, are therefore inherently less secure: vulnerabilities resident in any such secret details render the choice of key (e.g., short and simple vs. long and complex) largely irrelevant.
The related full disclosure philosophy suggests that security flaws should be disclosed as soon as possible, because once a flaw is discovered the protection provided by keeping the cryptographic key secret has become weaker. In effect there is now more than one key that provides access: the old cryptographic key and a second "key" composed of the newly discovered flaws.
For example, if somebody stores a spare key under the doormat in case they are locked out of the house, they are relying on security through obscurity. The theoretical vulnerability is that anybody could break into the house by unlocking the door with that spare key. Since burglars often know the likely hiding places, hiding the key this way actually increases the owner's risk of burglary. The owner has in effect added another key to the system, namely the knowledge that the entry key is stored under the doormat, and a very easy one to guess. The secret is no longer simply possession of the physical key that opens the door; it now also includes knowledge of the key's location.
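The doormat example can be put in rough quantitative terms. The sketch below is a toy model only; the lock size and the list of hiding spots are illustrative assumptions, not real statistics.

```python
# Toy model of the doormat example: hiding a spare key replaces a large
# "keyspace" (all possible physical keys) with a tiny one (the short list
# of hiding spots burglars already know to check).

PHYSICAL_KEYSPACE = 5 ** 6  # e.g. a 6-pin lock with 5 possible pin heights

COMMON_HIDING_SPOTS = [
    "under the doormat",
    "in a fake rock",
    "above the door frame",
    "in the mailbox",
    "under a flowerpot",
]

def attacker_work(spare_key_hidden: bool) -> int:
    """Worst-case number of guesses an attacker needs to open the door."""
    if spare_key_hidden:
        # The attacker need only check the known hiding spots, whichever
        # search is cheaper.
        return min(PHYSICAL_KEYSPACE, len(COMMON_HIDING_SPOTS))
    return PHYSICAL_KEYSPACE

print(attacker_work(spare_key_hidden=False))  # 15625 guesses
print(attacker_work(spare_key_hidden=True))   # 5 guesses
```

The "security" of the whole system collapses to the weaker of the two secrets, which is why the hidden spare key makes the strong lock largely irrelevant.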
In the past, several algorithms or software systems with secret internal details have seen those details become public. Accidental disclosure has happened several times, for instance in the notable case in which confidential GSM cipher documentation was contributed to the University of Bradford. Furthermore, vulnerabilities have been discovered and exploited in software even when the internal details remained secret. Taken together, these examples suggest that it is difficult or ineffective to keep the details of systems and algorithms secret.
Linus's law, that many eyes make all bugs shallow, also suggests improved security for algorithms and protocols whose details are published. More people can review the details of such algorithms, identify flaws, and fix them sooner. Proponents thus expect, and Linus Torvalds claims it is true in practice, that security compromises will be less frequent and less severe for open software than for proprietary or secret software.
Perfect or "unbroken" solutions provide security, but absolutes may be difficult to obtain. Although relying solely on security through obscurity is a very poor design decision, keeping secret some of the details of an otherwise well-engineered system may be a reasonable tactic as part of a defense in depth strategy. For example, security through obscurity may (though it cannot be guaranteed to) act as a temporary "speed bump" for attackers while a fix for a known security issue is implemented. Here the goal is simply to reduce the short-run risk of exploitation of a vulnerability in the main components of the system.
Security through obscurity can also be used to create a risk that detects or deters potential attackers. Consider, for example, a computer network that appears to exhibit a known vulnerability. Lacking knowledge of the target's security layout, the attacker must decide whether to attempt the exploit. If the system is set up to detect attempts against this apparent vulnerability, it will recognize that it is under attack and can respond, either by locking itself down until administrators have a chance to react, by monitoring the attack and tracing the assailant, or by disconnecting the attacker. The essence of this principle is that, by raising the time or risk involved, the defender denies the attacker the information required to make a solid risk-reward decision about whether to attack in the first place.
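A minimal sketch of such a tripwire follows. It is an illustration only: the banner text is invented, the port is chosen by the OS for the demonstration (a real decoy would sit on a convincing port), and the "alert" is just a callback.

```python
import socket
import threading

# Tripwire sketch: a fake service that exists only to detect and log probes
# against an apparent vulnerability. Any connection at all is suspicious,
# because no legitimate client has a reason to talk to it.

TRAP_BANNER = b"admin-console v1.0\r\n"   # invented bait banner

def tripwire(server: socket.socket, alert) -> None:
    conn, addr = server.accept()          # a probe has arrived
    conn.sendall(TRAP_BANNER)             # keep the attacker engaged briefly
    alert(addr)                           # e.g. notify an administrator
    conn.close()

# Set up the listener, then simulate an attacker's probe against it.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 0))             # port 0: OS picks a free port
trap_port = server.getsockname()[1]
server.listen(1)

hits = []                                 # record of detected probes
watcher = threading.Thread(target=tripwire, args=(server, hits.append))
watcher.start()

probe = socket.create_connection(("127.0.0.1", trap_port))
banner = probe.recv(64)                   # attacker sees the bait banner
probe.close()
watcher.join()
server.close()

print(banner.decode().strip())            # what the probe was shown
print(hits[0][0])                         # source address that was logged
```

The point of the sketch is that the defender gains information (the probe's source and timing) at the exact moment the attacker commits to acting on incomplete information.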
A variant of this defense is to have two layers of detection of the exploit, both kept secret but one allowed to be "leaked". The idea is to give the attacker a false sense of confidence that the obscurity has been uncovered and defeated. An example of where this would be used is as part of a honeypot. In neither of these cases is there any actual reliance on obscurity for security; these are perhaps better termed obscurity bait in an active security defense.
However, it can be argued that a sufficiently well-implemented system based on security through obscurity simply becomes another variant of a key-based scheme, with the obscure details of the system acting as the secret key value.
There is a general consensus, even among those who argue in favor of security through obscurity, that it should never be used as a primary security measure. It is at best a secondary measure, and disclosure of the obscurity should not result in a compromise.
Software which is deliberately released as open source can never be said, in theory or in practice, to rely on security through obscurity (the design being publicly available), but it can nevertheless experience security debacles; for example, the Morris worm of 1988 spread through vulnerabilities that were obscure, if widely visible to those who bothered to look. An argument sometimes used against open-source security is that developers tend to be less enthusiastic about performing deep reviews than about contributing new code. Such work is sometimes seen as less interesting and less appreciated by peers, especially if an analysis, however diligent and time-consuming, does not turn up much of interest. Combined with the fact that open source is dominated by a culture of volunteering, security sometimes receives less thorough treatment than it might in an environment in which security reviews are part of someone's job description.
One version of security through obscurity is to use a product which is not widely adopted, in order to lower the profile presented to random attacks. There is currently no single defining term for this practice: "minority" is the most common, but "rarity", "unpopularity", "scarcity", and "lack of interest" are also used.
This concept is most commonly encountered in explanations of why the number of known vulnerability exploits for products with the largest market share tends to be higher than a linear relationship to market share would predict, but it is also a factor in product choice for large organisations.
Security through minority suits organisations that will not be subject to targeted attacks, suggesting the use of a product in the long tail. However, finding a new vulnerability in a market-leading product is harder, because the low-hanging-fruit vulnerabilities are more likely to have already been found, which suggests such products are better for organisations that expect to receive many targeted attacks. The issue is further confused by the fact that new vulnerabilities in minority products cause all known users of those products to become targets; with market-leading products, the likelihood of being randomly targeted with a new vulnerability may be lower.
This is closely linked with, and depends upon, the better-documented concept of security through diversity: the wide range of "long tail" minority products is clearly more diverse than a single monolithic market leader, so a random attack is less likely to succeed.
There are conflicting stories about the origin of this term. Fans of MIT's Incompatible Timesharing System (ITS) say it was coined in opposition to Multics users down the hall, for whom security was far more of an issue than on ITS. Within the ITS culture the term referred, self-mockingly, to the poor coverage of the documentation and the obscurity of many commands, and to the attitude that by the time a tourist figured out how to make trouble he would generally have got over the urge to make it, because he felt part of the community.
One instance of deliberate security through obscurity on ITS has been noted: the command to allow patching the running ITS system (altmode altmode control-R) echoed as ##^D. Typing alt alt ^D set a flag that would prevent patching the system even if the user later got it right.