The Analysis of Risk in Business and Industry

Edward Melnick, Department of Information Operations and Statistics, New York University Stern School of Business


Risk is ubiquitous in business and industry, yet it has no single meaning in the realm of statistics. One definition found on the internet is “the probability of a negative impact to an asset or some characteristic of value that may arise from some present process or future event”. Another definition of risk is the expected loss given that a hazard has occurred.

Risk analysis is a cross-cutting topic that arises in almost all areas of human endeavor, including such diverse fields as Engineering (quality and reliability, effect of human factors), Security Management (disasters, homeland security), Healthcare (environmental hazards, clinical risk), Finance (insurance, financial securities), and Business and Management (competition, hostile takeovers). Interestingly, although concerns about risk are everywhere, the study of risk has never evolved into a discipline with its own language and methodologies. In this blog, I will give an overview of the study of risk as it relates to business.

Not surprisingly, the early formal study of risk was in finance (Markowitz’s portfolio theory, 1952). The overall goal was to maximize the expected return of a portfolio for a given level of risk. In this setting, the riskiness of a portfolio was identified with its variability, measured by the portfolio’s variance. In 1999, Artzner et al. published four axioms of coherence that a measure of risk should satisfy. One of these axioms, subadditivity, formalizes Markowitz’s observation that risk (variance) is reduced in a diversified portfolio.
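
To make the diversification point concrete, the standard two-asset identity (a textbook calculation, not specific to any particular data) shows why standard deviation is subadditive:

```latex
\sigma_p^2 \;=\; w_1^2\sigma_1^2 + w_2^2\sigma_2^2 + 2\,w_1 w_2\,\rho\,\sigma_1\sigma_2
\;\le\; \left(w_1\sigma_1 + w_2\sigma_2\right)^2 \qquad \text{whenever } \rho \le 1,
```

so the risk (standard deviation) of the combined portfolio never exceeds the weighted sum of the individual risks, with strict inequality whenever the correlation is below one.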

In statistics, as in science, there is a continuous search for better and more accurate tools. Starting from Markowitz’s implied definition that risk is equivalent to uncertainty, statistics were developed to address the limitations of variance as a measure of risk. Objections to using variance include: variance is inconsistent with investor behavior (variance is symmetric, but investors tend to be more risk averse when making decisions that might improve their wealth); variance is computed from historical data, whereas concerns about risk are directed at future conditions; and variability arises from both positive and negative outcomes, whereas financial risk is usually focused on negative events (losses). Although many alternative measures of risk have been proposed, the most popular statistic is the Value at Risk (VaR): the loss level that, under normal market conditions, will not be exceeded with a given probability over a fixed time interval.
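
As a concrete illustration, here is a minimal sketch (in Python, with made-up return data) of the two simplest ways VaR is usually estimated: from the empirical quantile of historical returns, and from a normal (variance-covariance) approximation.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Hypothetical daily portfolio returns; in practice these come from market data.
returns = rng.normal(loc=0.0005, scale=0.01, size=1000)

portfolio_value = 1_000_000   # current portfolio value in dollars (illustrative)
alpha = 0.95                  # confidence level

# Historical VaR: the loss at the (1 - alpha) quantile of the return distribution.
hist_var = -np.quantile(returns, 1 - alpha) * portfolio_value

# Parametric (variance-covariance) VaR: assumes returns are normally distributed.
mu, sigma = returns.mean(), returns.std(ddof=1)
param_var = -(mu + sigma * norm.ppf(1 - alpha)) * portfolio_value

print(f"1-day 95% VaR (historical): ${hist_var:,.0f}")
print(f"1-day 95% VaR (parametric): ${param_var:,.0f}")
```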

VaR was introduced in the 1990s as part of JP Morgan’s RiskMetrics and quickly became the gold standard for measuring portfolio risk. At about the same time, firms outside the financial services sector were seeking strategies for modeling uncertainty about projects in which they wished to invest. In particular, they sought to determine a project’s exposure to adverse market movements over a specified time period, to be confident that the company had the financial resources to cover the losses without putting the firm into financial distress. A measure of risk for these firms needed to incorporate market risk factors whose probability distributions change over time. To this end, a modified VaR is often used because of its simplicity and intuitive appeal, even though it does not capture risk categories such as political risk, liquidity risk, or regulatory risk.

VaR was originally developed to reflect risk exposure in normal financial markets over a specified time window: it has been used to determine the capital reserves banks must hold, to set premiums for insurance companies, and to determine maximum portfolio losses together with accompanying ruin probabilities. Over time, its applications have been extended to non-financial firms, which must be confident that they have the capital and cash reserves needed to cover potential losses when considering large investment projects.

Many computational strategies have been proposed for computing VaR. In the most general setting, VaR is computed with Monte Carlo simulation: a model for future outcomes is built from historical data and/or subjective information (forecasts), and multiple trials are run through the model. VaR is estimated by recording the frequency with which losses exceed a specified value. Obviously, the accuracy of the results depends on the quality of the data; for example, extreme outliers or sudden changes in the process will contaminate the estimate of VaR. Simple computations are possible when the risk process is stable and a large, informative historical data set exists. The reliability of VaR deteriorates when the assumed probability distributions are inaccurate, as well as for non-linear positions such as portfolios containing option contracts.
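
A minimal Monte Carlo sketch of this idea follows; the lognormal price model and all parameter values are illustrative assumptions, not part of the original discussion.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative assumptions: a single asset following geometric Brownian motion.
s0 = 100.0         # current price
mu = 0.07          # annual drift
sigma = 0.25       # annual volatility
horizon = 10 / 252 # 10 trading days, expressed in years
n_trials = 100_000
alpha = 0.99

# Simulate terminal prices and the resulting profit/loss per share.
z = rng.standard_normal(n_trials)
s_t = s0 * np.exp((mu - 0.5 * sigma**2) * horizon + sigma * np.sqrt(horizon) * z)
loss = s0 - s_t    # positive values are losses

# Monte Carlo VaR: the loss level exceeded in only (1 - alpha) of the trials.
mc_var = np.quantile(loss, alpha)
print(f"10-day {alpha:.0%} VaR per share: {mc_var:.2f}")
print("Frequency of losses exceeding VaR:", np.mean(loss > mc_var))
```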

Popular non-finance uses of VaR include assessing the intrinsic business risk associated with entering into a new project. Managers understand the inherent risks of their projects and need to estimate a safety margin for them in order to maximize returns. In this setting, VaR is applied to the project’s cash flows (the possible losses from the project), with the risk factors being capital expenditures, operating expenses, other costs, and taxes. Also built into the model are the possible outcomes of the project and the timing of when the profits become available.
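
A sketch of this kind of cash-flow-at-risk calculation is below; every distribution and number is a hypothetical placeholder, and a real analysis would use the firm’s own forecasts for each risk factor.

```python
import numpy as np

rng = np.random.default_rng(7)
n_trials = 50_000

# Hypothetical distributions for the project's risk factors (all figures in $M).
revenue = rng.normal(120, 25, n_trials)   # uncertain project revenue
capex = rng.normal(40, 5, n_trials)       # capital expenditures
opex = rng.normal(50, 8, n_trials)        # operating expenses
tax_rate = 0.25                           # assumed flat tax rate

pre_tax = revenue - capex - opex
cash_flow = np.where(pre_tax > 0, pre_tax * (1 - tax_rate), pre_tax)

# Cash-flow-at-risk: the shortfall not exceeded in 95% of the trials.
alpha = 0.95
cfar = -np.quantile(cash_flow, 1 - alpha)
print(f"95% cash-flow-at-risk: {cfar:.1f} $M")
print(f"Probability of a negative cash flow: {np.mean(cash_flow < 0):.1%}")
```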

The concept (but not the name) of VaR was implied by Markowitz in his development of portfolio theory, which assumed that portfolio returns followed a normal distribution with the mean being the expected return and the variance being the measure of risk. He also demonstrated that a portfolio formed by combining two portfolios with small positive (or negative) correlation has lower risk than the sum of the risks of the individual portfolios; this diversification benefit is the subadditivity property. Subadditivity is not necessarily satisfied by VaR when the underlying probability distribution is fat-tailed, very skewed, or contains outliers. Further, although VaR can be computed over any time window (it loses accuracy as the time span widens), it was intended for short time spans because securities can easily be converted back to cash. Non-financial firms, on the other hand, must consider risk exposure over much longer time spans because they are required to invest in physical assets, including plant and equipment, which take much longer to liquidate.
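
A small numerical illustration of VaR failing subadditivity (a standard textbook-style construction, not from the original post): each of two independent loans defaults with probability 4%, so each loan’s 95% VaR is zero, yet the 95% VaR of the combined position is positive.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000

# Two independent loans, each losing 100 with probability 0.04 and 0 otherwise.
loss_a = np.where(rng.random(n) < 0.04, 100.0, 0.0)
loss_b = np.where(rng.random(n) < 0.04, 100.0, 0.0)

def var(loss, alpha=0.95):
    return np.quantile(loss, alpha)   # VaR as a quantile of the loss distribution

print("VaR(A)     =", var(loss_a))            # 0: default probability is below 5%
print("VaR(B)     =", var(loss_b))            # 0
print("VaR(A + B) =", var(loss_a + loss_b))   # 100: subadditivity fails
```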

Motivated by the fact that VaR neither indicates the amount of wealth at risk beyond the VaR threshold nor satisfies the coherency axiom of subadditivity, the Conditional Value at Risk (CVaR) was introduced. CVaR is the expected loss (sometimes called the expected shortfall) given that the loss exceeds VaR. Computing CVaR is difficult because it lies deep in the tail of the loss distribution, where there is little data. This focuses the choice of underlying probability models on families of right-skewed distributions, such as the limiting distributions of extreme order statistics (Fisher and Tippett, 1928), which are classified by a shape parameter governing right-tail behavior. Another commonly used extreme-value approach is the Peaks over Threshold (POT) method, which models the amounts by which a set of iid random variables exceed a predetermined threshold, u. Further, if the limit exists as u increases, the distribution of these exceedances converges to a generalized Pareto distribution.
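
A minimal sketch of estimating CVaR (expected shortfall) empirically; the heavy-tailed Student-t loss model here is purely an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)
alpha = 0.99

# Illustrative heavy-tailed losses (Student's t with 3 degrees of freedom).
losses = rng.standard_t(df=3, size=200_000)

var = np.quantile(losses, alpha)      # 99% VaR: a quantile of the losses
cvar = losses[losses > var].mean()    # CVaR: mean loss given the loss exceeds VaR

print(f"99% VaR : {var:.2f}")
print(f"99% CVaR: {cvar:.2f}")        # always at least as large as VaR
```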

An example of modifying how risk is specified arises in the insurance industry. The problem is to model the aggregate claims during a fixed policy period for a group of independent insureds, where each insured may generate multiple claims. For this problem, Panjer (1981) defined classes of compound distributions for the number and size of losses. His models have been used for experimentation to see the effect of changing parameter values, as well as to estimate the tail distributions of claims by finding the models that best fit the data.
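
As a sketch of this compound-distribution idea, the recursion below (often called Panjer’s recursion) computes the aggregate-claim distribution when the claim count is Poisson and the individual claim sizes live on a discrete grid; the particular severity distribution used here is a made-up example.

```python
import numpy as np

def compound_poisson_pmf(lam, severity_pmf, max_total):
    """Panjer-style recursion for the aggregate claim S = X_1 + ... + X_N,
    where N ~ Poisson(lam) and the X_i are iid with pmf severity_pmf
    on {0, 1, ..., len(severity_pmf) - 1} (monetary units on a discrete grid)."""
    f = np.zeros(max_total + 1)
    f[:len(severity_pmf)] = severity_pmf
    g = np.zeros(max_total + 1)
    g[0] = np.exp(-lam * (1.0 - f[0]))      # P(S = 0)
    for s in range(1, max_total + 1):
        j = np.arange(1, s + 1)
        g[s] = (lam / s) * np.sum(j * f[j] * g[s - j])
    return g

# Illustrative inputs: on average 2 claims per period, each claim of size 1, 2 or 3.
severity = np.array([0.0, 0.5, 0.3, 0.2])   # P(X = 0), ..., P(X = 3)
agg = compound_poisson_pmf(lam=2.0, severity_pmf=severity, max_total=30)

print("P(aggregate claims = 0):", round(agg[0], 4))
print("P(aggregate claims > 10):", round(1 - agg[:11].sum(), 4))
```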

The analysis of risk in industry is an important developing area, complicated by the fact that assessments concern future outcomes while only scant historical data are available. The lack of data is most severe when one is concerned about low-probability events with severe consequences. In this note I focused on an important measure in the study of risk, VaR, which was introduced in finance and has been generalized to the study of risk in industry. One extension to the study of industrial risk is to consider a series of market risk factors. In many such studies, a common assumption is that the risk factors are normally distributed, so that their relationships are represented by their covariances. In practice, the normality assumption is often not supported by the data. A more general way to capture the dependency structure of the risk factors is to develop copulas that bind the risk factors’ marginal distributions together. One area of current interest is the development of time-dependent copulas for a set of risk factors.
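
A minimal sketch of the copula idea follows: a Gaussian copula binding two non-normal marginals, where the correlation and the choice of marginals are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
n = 100_000
rho = 0.6                                  # assumed dependence between the risk factors

# Step 1: draw correlated standard normals (the Gaussian copula).
cov = np.array([[1.0, rho], [rho, 1.0]])
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=n)

# Step 2: map to uniforms with the normal CDF; the dependence structure is preserved.
u = stats.norm.cdf(z)

# Step 3: apply inverse CDFs of the desired (non-normal) marginal distributions.
factor_market = stats.t.ppf(u[:, 0], df=4)          # heavy-tailed market factor
factor_claims = stats.lognorm.ppf(u[:, 1], s=0.9)   # skewed operational/claims factor

# The marginals are now t and lognormal, but the two factors remain dependent.
print("Rank correlation of the two risk factors:",
      round(stats.spearmanr(factor_market, factor_claims).correlation, 3))
```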
