Existing Citations

  • acceptable risk : A risk that is understood and tolerated by a system’s user, operator, owner, or accreditor, usually because the cost or difficulty of implementing an effective countermeasure for the associated vulnerability exceeds the expectation of loss. (See: adequate security, risk, "second law" under "Courtney’s laws".) (†1365)
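The decision rule embedded in this definition (tolerate a risk when countering it costs more than the expected loss it would prevent) can be sketched as follows; the function name and dollar figures are invented for illustration:

```python
def is_acceptable(countermeasure_cost: float, expected_loss: float) -> bool:
    """Per the definition above, a risk is 'acceptable' when the cost or
    difficulty of an effective countermeasure exceeds the expectation of
    loss from the associated vulnerability."""
    return countermeasure_cost > expected_loss

# Hypothetical case: a $50k control guarding against a $20k expected loss.
print(is_acceptable(50_000, 20_000))  # True: tolerate the risk
print(is_acceptable(5_000, 20_000))   # False: the control pays for itself
```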
  • adequate security : "Security commensurate with the risk and magnitude of harm resulting from the loss, misuse, or unauthorized access to or modification of information." (See: acceptable risk, residual risk.) [US DoD] (†1366)
  • anonymous (s.v. "anonymity"): The condition of an identity being unknown or concealed. (See: alias, anonymizer, anonymous credential, anonymous login, identity, onion routing, persona certificate. Compare: privacy.) (†1367)
  • anonymous (s.v. "anonymity"): Tutorial: An application may require security services that maintain anonymity of users or other system entities, perhaps to preserve their privacy or hide them from attack. To hide an entity’s real name, an alias may be used; for example, a financial institution may assign account numbers. Parties to transactions can thus remain relatively anonymous, while the transactions themselves can still be accepted as legitimate. Real names of the parties cannot be easily determined by observers of the transactions, but an authorized third party may be able to map an alias to a real name, such as by presenting the institution with a court order. In other applications, anonymous entities may be completely untraceable. (†1368)
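The alias scheme this tutorial describes (opaque account numbers that only the issuing institution can map back to real names, and only for an authorized party) can be sketched as a minimal data structure. The class, names, and court-order flag below are hypothetical, not part of the glossary:

```python
import secrets

class AliasRegistry:
    """Maps opaque aliases (e.g., account numbers) to real identities.
    Only the institution holds this table; observers of transactions
    see nothing but the aliases."""

    def __init__(self) -> None:
        self._alias_to_name: dict[str, str] = {}

    def register(self, real_name: str) -> str:
        alias = secrets.token_hex(8)  # opaque 16-hex-char account number
        self._alias_to_name[alias] = real_name
        return alias

    def resolve(self, alias: str, court_order: bool) -> str:
        # Only an authorized third party (modeled here as one holding a
        # court order) may map an alias back to a real name.
        if not court_order:
            raise PermissionError("not authorized to de-anonymize")
        return self._alias_to_name[alias]

registry = AliasRegistry()
acct = registry.register("Alice Example")
# Transactions reference only `acct`; the real name stays concealed.
print(registry.resolve(acct, court_order=True))  # prints Alice Example
```

A fully untraceable scheme, as in the tutorial's last sentence, would simply omit the mapping table altogether.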
  • archives (s.v. "archive"): n. ~ 1a. A collection of data that is stored for a relatively long period of time for historical and other purposes, such as to support audit service, availability service, or system integrity service. (Compare: backup, repository.) – v. ~ 1b. To store data in such a way as to create an archive. (Compare: back up.) (†1369)
  • risk (s.v. "risk"): 1. (I) An expectation of loss expressed as the probability that a particular threat will exploit a particular vulnerability with a particular harmful result. (See: residual risk.) – 2. (O) /SET/ "The possibility of loss because of one or more threats to information (not to be confused with financial or business risk)." [SET2] (†1353)
  • risk (s.v. "risk"): Tutorial: There are four basic ways to deal with a risk [SP30]: · "Risk avoidance": Eliminate the risk by either countering the threat or removing the vulnerability. (Compare: "avoidance" under "security".) · "Risk transference": Shift the risk to another system or entity; e.g., buy insurance to compensate for potential loss. · "Risk limitation": Limit the risk by implementing controls that minimize resulting loss. · "Risk assumption": Accept the potential for loss and continue operating the system. (†1354)
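Definition 1 frames risk as an expectation of loss: the probability that a particular threat exploits a particular vulnerability, multiplied by the resulting harm. A minimal sketch of that product, with invented figures:

```python
def expected_loss(p_exploit: float, harm: float) -> float:
    """Risk per definition 1: the probability that a threat exploits a
    vulnerability, times the harmful result (modeled here as a dollar
    cost). The four treatments in the tutorial (avoidance, transference,
    limitation, assumption) all act on this quantity: avoidance drives
    p_exploit to zero, transference and limitation reduce harm, and
    assumption accepts the product as-is."""
    return p_exploit * harm

# Hypothetical: a 5% annual chance of a breach costing $200,000,
# i.e., roughly $10,000 of expected annual loss.
print(expected_loss(0.05, 200_000))
```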
  • risk analysis (s.v. "risk analysis"): (I) An assessment process that systematically (a) identifies valuable system resources and threats to those resources, (b) quantifies loss exposures (i.e., loss potential) based on estimated frequencies and costs of occurrence, and (c) (optionally) recommends how to allocate available resources to countermeasures so as to minimize total exposure. (See: risk management, business-case analysis. Compare: threat analysis.) (†1355)
  • risk analysis (s.v. "risk analysis"): Tutorial: Usually, it is financially and technically infeasible to avoid or transfer all risks (see: "first corollary" of "second law" under "Courtney’s laws"), and some residual risks will remain, even after all available countermeasures have been deployed (see: "second corollary" of "second law" under "Courtney’s laws"). Thus, a risk analysis typically lists risks in order of cost and criticality, thereby determining where countermeasures should be applied first. [FP031, R2196] ¶ In some contexts, it is infeasible or inadvisable to attempt a complete or quantitative risk analysis because needed data, time, and expertise are not available. Instead, basic answers to questions about threats and risks may be already built into institutional security policies. For example, U.S. DoD policies for data confidentiality "do not explicitly itemize the range of expected threats" but instead "reflect an operational approach ... by stating the particular management controls that must be used to achieve [confidentiality] ... Thus, they avoid listing threats, which would represent a severe risk in itself, and avoid the risk of poor security design implicit in taking a fresh approach to each new problem". [NRC91] (†1356)
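Steps (b) and (c) of the definition, quantifying each loss exposure from estimated frequency and cost of occurrence and then listing risks in order so countermeasures can be applied first where exposure is greatest, can be sketched as a small ranking computation. The resources and figures below are invented for illustration:

```python
# Each risk: (resource, estimated occurrences per year, cost per occurrence).
risks = [
    ("customer database", 0.1, 500_000),
    ("public web server", 2.0, 10_000),
    ("office printer",    5.0, 200),
]

# Step (b): quantify loss exposure as frequency x cost of occurrence.
exposures = [(name, freq * cost) for name, freq, cost in risks]

# Step (c): rank by exposure to show where countermeasures go first.
exposures.sort(key=lambda r: r[1], reverse=True)

for name, exposure in exposures:
    print(f"{name}: expected annual loss ${exposure:,.0f}")
```

Running this ranks the low-frequency, high-cost database risk above the frequent but cheap printer incidents, which is exactly the ordering a purely frequency-based view would miss.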
  • security (s.v. "security"): 1a. (I) A system condition that results from the establishment and maintenance of measures to protect the system. ¶ 1b. (I) A system condition in which system resources are free from unauthorized access and from unauthorized or accidental change, destruction, or loss. (Compare: safety.) – 2. (I) Measures taken to protect a system. (†1347)
  • security (p. 264): Parker [Park] suggests that providing a condition of system security may involve the following six basic functions, which overlap to some extent: · "Deterrence": Reducing an intelligent threat by discouraging action, such as by fear or doubt. (See: attack, threat action.) · "Avoidance": Reducing a risk by either reducing the value of the potential loss or reducing the probability that the loss will occur. (See: risk analysis. Compare: "risk avoidance" under "risk".) · "Prevention": Impeding or thwarting a potential security violation by deploying a countermeasure. · "Detection": Determining that a security violation is impending, is in progress, or has recently occurred, and thus making it possible to reduce the potential loss. (See: intrusion detection.) · "Recovery": Restoring a normal state of system operation by compensating for a security violation, possibly by eliminating or repairing its effects. (See: contingency plan, main entry for "recovery".) · "Correction": Changing a security architecture to eliminate or reduce the risk of reoccurrence of a security violation or threat consequence, such as by eliminating a vulnerability. (†1348)
  • threat (s.v. "threat"): 1a. (I) A potential for violation of security, which exists when there is an entity, circumstance, capability, action, or event that could cause harm. (See: dangling threat, INFOCON level, threat action, threat agent, threat consequence. Compare: attack, vulnerability.) – 1b. (N) Any circumstance or event with the potential to adversely affect a system through unauthorized access, destruction, disclosure, or modification of data, or denial of service. [C4009] (See: sensitive information.) – 2. (O) The technical and operational ability of a hostile entity to detect, exploit, or subvert a friendly system and the demonstrated, presumed, or inferred intent of that entity to conduct such activity. (†1349)
  • threat (s.v. "threat"): Tutorial: A threat is a possible danger that might exploit a vulnerability. Thus, a threat may be intentional or not: · "Intentional threat": A possibility of an attack by an intelligent entity (e.g., an individual cracker or a criminal organization). · "Accidental threat": A possibility of human error or omission, unintended equipment malfunction, or natural disaster (e.g., fire, flood, earthquake, windstorm, and other causes listed in [FP031]). (†1350)
  • vulnerability (s.v. "vulnerability"): (I) A flaw or weakness in a system’s design, implementation, or operation and management that could be exploited to violate the system’s security policy. (†1351)
  • vulnerability (s.v. "vulnerability"): Tutorial: A system can have three types of vulnerabilities: (a) vulnerabilities in design or specification; (b) vulnerabilities in implementation; and (c) vulnerabilities in operation and management. Most systems have one or more vulnerabilities, but this does not mean that the systems are too flawed to use. Not every threat results in an attack, and not every attack succeeds. Success depends on the degree of vulnerability, the strength of attacks, and the effectiveness of any countermeasures in use. If the attacks needed to exploit a vulnerability are very difficult to carry out, then the vulnerability may be tolerable. If the perceived benefit to an attacker is small, then even an easily exploited vulnerability may be tolerable. However, if the attacks are well understood and easily made, and if the vulnerable system is employed by a wide range of users, then it is likely that there will be enough motivation for someone to launch an attack. (†1352)
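The tutorial's reasoning about when a vulnerability is tolerable (attacks very difficult to carry out, or little perceived benefit to an attacker, versus easy and well-understood attacks on a widely deployed system) can be sketched as a rough heuristic. The 0-to-1 scores and thresholds below are invented purely for illustration:

```python
def is_tolerable(attack_difficulty: float, attacker_benefit: float) -> bool:
    """Rough heuristic from the tutorial, with invented thresholds.
    attack_difficulty: 0.0 (trivial to exploit) .. 1.0 (nearly impossible).
    attacker_benefit:  0.0 (nothing to gain)    .. 1.0 (highly rewarding).
    A vulnerability may be tolerable if exploiting it is very hard, or
    if an attacker stands to gain little; otherwise, a well-understood,
    easy attack on a widely used system will likely attract someone
    motivated enough to launch it."""
    return attack_difficulty >= 0.9 or attacker_benefit <= 0.1

print(is_tolerable(0.95, 0.8))   # hard to exploit: tolerable
print(is_tolerable(0.2, 0.05))   # little to gain: tolerable
print(is_tolerable(0.2, 0.9))    # easy and rewarding: not tolerable
```

Any real assessment would of course feed these judgments into the risk-analysis process defined above rather than a two-input rule.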