Sunday, January 27, 2013
Metrics and Measurement Problems in Quantitative Risk Management
The very first activity for successful risk assessment is data collection. These data
should include new threats, identified vulnerabilities, exposure times and the available
safeguards. This collection and dissemination of data should happen in real time to ensure a proactive approach to risk management and to enable self-adapting security systems. To accomplish this, the acquisition and distribution processes have to be automated.
It needs to be emphasized that, although security in IS has been an important issue
for a few decades, there is a lack of appropriate metrics and measurement methods.
During measurement, numerals are assigned to the measured attributes under different rules, and these rules lead to different kinds of scales.
Qualitative scale types are nowadays used predominantly for information security
measurements. Under Stevens' taxonomy [23], they are classified as nominal (categorical) and ordinal. Ordinal scales support only greater-than or less-than comparisons between two measurements. The difference operation between two measurements is not allowed and has no meaning, because successive intervals on the scale are generally unequal in size. Nevertheless, statistical operations such as mean and standard deviation are frequently performed on rank-ordered data, but conclusions drawn from these operations can be misleading.
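A tiny sketch illustrates the problem (the ratings and both codings below are hypothetical; for an ordinal scale any order-preserving assignment of numerals is equally legitimate, yet the mean changes with the coding):

# Ordinal risk ratings: only the order low < medium < high is meaningful.
ratings = ["low", "high", "medium", "low", "high"]

# Two equally valid order-preserving codings.
coding_a = {"low": 1, "medium": 2, "high": 3}
coding_b = {"low": 1, "medium": 2, "high": 10}

mean_a = sum(coding_a[r] for r in ratings) / len(ratings)  # 2.0
mean_b = sum(coding_b[r] for r in ratings) / len(ratings)  # 4.8

# The two means differ although the underlying measurements are identical,
# so the "average risk" is an artifact of the coding, not of the data.
print(mean_a, mean_b)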
Instead, quantitative scale types, such as interval and ratio scales, should be used to overcome the shortcomings described above and, consequently, to provide more accurate feedback. Some important advances towards quantitative vulnerability metrics have been achieved recently. The first such advances are constituted by two databases, the MITRE Corporation Common Vulnerabilities and Exposures list [18] and the U.S. National Vulnerability Database [19]. These are closely related efforts in
which online acquisition and distribution of related data have been enabled by the
Security Content Automation Protocol (SCAP) [21]. The main procedure with the first of
the databases is as follows.
• The basis is the vulnerability ID, an 11-character identifier in which the first three characters form the candidate prefix (CAN), the next four digits denote the year of assignment, and the last four digits denote the serial number of the vulnerability or exposure in that year.
• Once the vulnerability is identified in this way, the CAN value is converted to a common vulnerability and exposure (CVE) entry, as sketched below.
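A short sketch of this identifier scheme (the sample entry is invented for illustration; real entries live in the databases cited above):

import re

# CAN/CVE identifier as described above: three-letter status prefix,
# four-digit year of assignment, four-digit serial number.
ID_PATTERN = re.compile(r"^(CAN|CVE)-(\d{4})-(\d{4})$")

def parse_id(identifier: str) -> dict:
    match = ID_PATTERN.match(identifier)
    if match is None:
        raise ValueError(f"not a CAN/CVE identifier: {identifier}")
    status, year, serial = match.groups()
    return {"status": status, "year": int(year), "serial": int(serial)}

def promote(identifier: str) -> str:
    # A confirmed candidate keeps its numbers; only the prefix changes.
    return "CVE" + identifier[3:] if identifier.startswith("CAN") else identifier

print(parse_id("CAN-2013-0001"))  # {'status': 'CAN', 'year': 2013, 'serial': 1}
print(promote("CAN-2013-0001"))   # CVE-2013-0001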
The data contained in this database are in one of two states: in the first state are weaknesses with no available patch, and in the second are those vulnerabilities for which a publicly available patch exists.
This is the basis for the metric called daily vulnerability exposure (DVE) [14]. DVE is a conditional summation formula that calculates how many of an asset's vulnerabilities were public at a given date with no corresponding patch, thus possibly leaving a calculated number of assets exposed to threats. DVE values are obtained as follows.
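The summation itself did not survive the transfer to this page; a reconstruction consistent with the verbal description above (the notation is mine, not the authors') is

DVE(t) = \sum_{v \in V} \mathbf{1}\bigl[\, t_{pub}(v) \le t < t_{patch}(v) \,\bigr]

where V is the set of vulnerabilities affecting the asset, t_pub(v) is the public disclosure date of vulnerability v, t_patch(v) is its patch release date (taken as infinite while no patch exists), and 1[.] is the indicator function.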
DVE is useful to show whether an asset is vulnerable and how many vulnerabilities contribute to the asset's exposure. In addition, the derived DVE trend metric is useful in the risk
management process for adjusting security resources to keep up with the rate of disclosed vulnerabilities. Additional filtering can also be applied to DVE, for example with the Common Vulnerability Scoring System (CVSS) [17], to focus on the more severe vulnerabilities and exposures. But extra care should be taken when interpreting filtered results, because the CVSS filter takes qualitative inputs for quantitative impact evaluation and suffers from the same deficiencies as similar qualitative risk assessment approaches.
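A minimal sketch of both ideas, DVE counting and CVSS-based filtering, follows. The record layout and the severity cutoff of 7.0 are assumptions made for illustration, not part of the cited specifications:

from datetime import date

# One record per vulnerability: (published, patched_on_or_None, cvss_base_score).
vulns = [
    (date(2012, 11, 2), date(2012, 12, 1), 9.3),
    (date(2012, 12, 15), None, 7.5),   # still unpatched
    (date(2013, 1, 5), None, 4.3),     # still unpatched, low severity
]

def dve(records, day, min_cvss=0.0):
    """Count vulnerabilities public on `day` with no patch yet available."""
    return sum(
        1
        for published, patched, score in records
        if published <= day
        and (patched is None or day < patched)
        and score >= min_cvss
    )

today = date(2013, 1, 27)
print(dve(vulns, today))                # 2: all unpatched exposures
print(dve(vulns, today, min_cvss=7.0))  # 1: only the severe one

Evaluating dve over a range of dates gives the DVE trend mentioned above.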
Another useful metric for our purpose has been proposed by Hariri et al.: the vulnerability index (VI) [5]. This index is based on categorical assessments of the state of a system: normal, uncertain, or vulnerable. Each network node has an agent that measures the impact factors in real time and sends its reports to a vulnerability analysis engine (VAE). The VAE statistically correlates the received data and computes component or system vulnerability and impact metrics.
Impact metrics can be used in conjunction with risk evaluation criteria to assess and
prioritize risks.
A more precise description of the VI calculation will be demonstrated for the following fault scenario FSk. During normal network operation, a node's transfer rate is TRnorm. The transfer rate may deviate around this value but should not fall below TRmin. For each node, the component impact factor CIF is calculated from the measured transfer rate TRmeas.
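The CIF formula was likewise lost in the transfer; a plausible reconstruction from the quantities named above, normalizing the observed degradation of the transfer rate, would be

CIF = \frac{TR_{norm} - TR_{meas}}{TR_{norm} - TR_{min}}

which yields CIF = 0 during normal operation and CIF = 1 when the measured rate has fallen to TRmin; the exact definition is given in [5].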
Having all CIF values, the component operating state (COS) can be computed. For each fault scenario FSk, an operational threshold d(FSk) is set according to the organization's risk acceptance criteria. Next, the CIF value is compared to the operational threshold d(FSk). The resulting COS value equals 1 when the component operates in an abnormal state and 0 when it operates in a normal state.
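In symbols:

COS(FS_k) = \begin{cases} 1 & \text{if } CIF > d(FS_k) \\ 0 & \text{otherwise} \end{cases}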
Finally, a system impact factor (SIF) can be computed that identifies how a fault affects the whole network: it shows the percentage of components operating in abnormal states, i.e., where CIF exceeds the operational threshold d(FSk), in relation to the total number of components. The obtained SIF value can be used in conjunction with risk evaluation criteria to assess and prioritize risks.
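With N components in total and COS_i the operating state of component i, this reads

SIF(FS_k) = \frac{100\%}{N} \sum_{i=1}^{N} COS_i(FS_k)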
To conclude this sub-section, graph-based methods need to be mentioned, as they are often used in IS risk assessment and management. One well-established technique in this field was suggested by Schneier [22] and is called attack trees. Attack trees model the security of a system and its subsystems. They support decision making about how to improve security and about evaluating the impacts of new attacks. Root nodes are the principal goals of an attacker, and leaf nodes at the bottom represent different attack options.
Intermediate nodes are placed to further refine attacks towards the root node. Nodes can be classified into two categories: when an attack can be mounted in several alternative ways, an "OR" node is used; when an attack requires all of its preconditions to be achieved together, an "AND" node is used. After the tree is constructed, various metrics can be applied, e.g., cost in time and resources to attack or defend, likelihood of attack and probability of success, or statements about attacks such as "cheapest attack with the highest probability of success".
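As a small illustration of the last kind of metric, the sketch below evaluates the cheapest attack over such a tree (node names and costs are invented; likelihood-weighted variants follow the same recursion with different combination rules):

# An attack-tree node: a leaf carries the cost of one concrete attack option;
# an inner node combines children with AND (all subgoals needed) or OR (any one).
def cheapest_attack(node):
    """Minimum attacker cost to achieve the goal at `node`."""
    if "cost" in node:
        return node["cost"]
    child_costs = [cheapest_attack(child) for child in node["children"]]
    return sum(child_costs) if node["op"] == "AND" else min(child_costs)

# Hypothetical tree for the root goal "read the secret document".
tree = {
    "op": "OR",
    "children": [
        {"cost": 80},                    # bribe an insider
        {"op": "AND", "children": [      # break in AND crack the safe
            {"cost": 20},
            {"cost": 50},
        ]},
    ],
}

print(cheapest_attack(tree))  # 70: break-in plus safe-cracking beats bribery

Evaluating a defense then amounts to re-running the metric after pruning or re-costing the affected leaves.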