Numeraire DCF Valuator


Simulation

Deterministic models use ordinary variables that are assigned particular values. Stochastic models include one or more random variables, each characterized by a probability distribution from which values are drawn. A deterministic model that makes a point estimate can be converted to a stochastic, or probabilistic, model that estimates a range of values reflecting uncertainty about the true value. This helps to characterize the range of potential values in a stock valuation and to assess the probability of reaching specific target values. Results can be presented as either frequency or cumulative distributions, which communicate clearly the range and likelihood of possible values. Two techniques for estimating such ranges of value are statistical regression and statistical simulation. Statistical regression is sometimes used in stock pricing models but not in stock valuation models.

A deterministic model is evaluated once to produce a single point estimate. Converting a deterministic model to a stochastic model using Monte Carlo simulation involves three steps: first, replace uncertain input values with pseudo-random number generators; second, run a simulation that evaluates the model many times, saving the outcome of each evaluation as one observation; and third, analyze the results of all observations.
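As a minimal sketch of these three steps, consider the hypothetical single-stage dividend discount model below. The model, the parameter values, and the Normal and triangular input distributions are all illustrative assumptions, not prescriptions from the text:

    import random
    import statistics

    def intrinsic_value(dividend, discount_rate, growth_rate):
        # Deterministic single-stage model: one point estimate.
        return dividend / (discount_rate - growth_rate)

    # Step 1: replace uncertain inputs with pseudo-random number generators.
    def draw_inputs(rng):
        growth = rng.gauss(0.04, 0.01)           # Normal(mean, sigma)
        discount = rng.triangular(0.08, 0.12)    # triangular(low, high)
        return discount, growth

    # Step 2: evaluate the model many times, saving each outcome.
    rng = random.Random(42)
    observations = []
    for _ in range(10_000):
        discount, growth = draw_inputs(rng)
        if discount > growth:                    # keep the model well defined
            observations.append(intrinsic_value(2.00, discount, growth))

    # Step 3: analyze the observations as a distribution, not a single number.
    print("mean value:   ", round(statistics.mean(observations), 2))
    print("std deviation:", round(statistics.stdev(observations), 2))
    print("P(value > 40):", sum(v > 40 for v in observations) / len(observations))

The point estimate becomes a frequency distribution of values, from which target-value probabilities such as P(value > 40) can be read directly.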

Random numbers are generated and then used to simulate a draw from a probability distribution. A realistic probability distribution is needed to fit each uncertain variable; this need conflicts with the need for mathematical tractability in the distribution selected. Commonly used are the uniform, triangular, normal, and lognormal probability distributions. If none of these is a sufficiently close approximation to the desired distribution shape, then a new distribution "type" may be created with a formula that converts a random number to a probability value. For highly speculative common stocks, the sensitivity of calculated intrinsic value to changes in the choice of probability distribution types applied to input variables may be small. In other words, the uncertainty of the random variables in the valuation of such highly speculative stocks may outweigh the choice of any reasonable distribution type.
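One standard way to create such a new distribution "type" is inverse-transform sampling: the inverse of the cumulative distribution function converts a uniform random number on [0, 1] into a draw from the desired distribution. A minimal sketch, using a triangular distribution whose bounds and mode are assumed purely for illustration:

    import math
    import random

    def triangular_inverse_cdf(u, low, mode, high):
        # Convert a uniform random number u in [0, 1] into a draw from
        # a triangular(low, mode, high) distribution via the inverse CDF.
        cut = (mode - low) / (high - low)   # CDF value at the mode
        if u < cut:
            return low + math.sqrt(u * (high - low) * (mode - low))
        return high - math.sqrt((1 - u) * (high - low) * (high - mode))

    rng = random.Random(7)
    draws = [triangular_inverse_cdf(rng.random(), 0.02, 0.05, 0.10)
             for _ in range(5)]
    print([round(d, 4) for d in draws])

The same pattern works for any shape whose cumulative distribution function can be inverted, exactly or numerically.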

Probability

Probability theory is used for statistical hypothesis testing and stochastic modeling. The mathematics of applied probability is not as difficult as the concepts. The crucial difference between ex post and ex ante, or between a posteriori and a priori, must be respected to avoid the logical circularity that leads to invalid findings. A brilliant teacher illustrated this difference, as reported in Six Easy Pieces (Richard P. Feynman, 1995, Reading, MA: Perseus Books, Special Preface from Lectures on Physics, pages xx and xxi):

What came to Feynman by "common sense" were often brilliant twists that perfectly captured the essence of his point. Once, during a public lecture, he was trying to explain why one must not verify an idea using the same data that suggested the idea in the first place. Seeming to wander off the subject, Feynman began talking about license plates. "You know, the most amazing thing happened to me tonight. I was coming here, on the way to the lecture, and I came in through the parking lot. And you won't believe what happened. I saw a car with the license plate ARW 357. Can you imagine? Of all the millions of license plates in the state, what was the chance that I would see that particular one tonight? Amazing!" A point that even many scientists fail to grasp was made clear through Feynman's remarkable "common sense."

The same phenomenon can be observed in the finding of a marketable security or fund that experienced a very high annual return over the past one, three, or five years. In any screening or filtering of historical data, some security must have achieved the highest performance by whatever criterion is used for ranking. Such a finding is often merely a transient flash in the pan. Just about any stock selection system can be shown to have had its day in the sun -- after the fact, with 20/20 hindsight.
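A small simulation makes the point concrete. In the sketch below, every "fund" earns returns by pure chance from the same assumed process, yet ranking them guarantees that the top performer looks impressive after the fact:

    import random
    import statistics

    rng = random.Random(1)
    n_funds, n_years = 1000, 5

    # Every fund draws annual returns from the SAME chance process:
    # Normal with 7% mean and 15% standard deviation (assumed values).
    def five_year_return(rng):
        total = 1.0
        for _ in range(n_years):
            total *= 1 + rng.gauss(0.07, 0.15)
        return total - 1

    results = [five_year_return(rng) for _ in range(n_funds)]

    print("median 5-year return:", round(statistics.median(results), 3))
    print("best 5-year return:  ", round(max(results), 3))
    # The "winner" beat the median by luck alone; selecting it after
    # the fact says nothing about its future performance.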

Probability Distributions

The normal distribution is often used because it is the most mathematically tractable distribution, and it is the only tractable distribution for multivariate regression models with two or more random variables. An ideal symmetrical normal distribution is defined by two parameters, its mean and its standard deviation. A standard-deviation unit of measurement is referred to as a sigma. Common notation is Normal(Mean, Standard Deviation), as in Normal(1, 0.333). With the normal distribution, a randomly drawn value falls within one sigma of the mean 68.27 percent of the time, within two sigmas 95.45 percent of the time, and within three sigmas 99.73 percent of the time; these percentages are the corresponding areas under the normal curve. A draw that lands at least k sigmas from the mean is called a k-sigma event. A randomly drawn value of 2 from Normal(1, 0.333) is a three-sigma event because it is greater than or equal to 1.999, the mean plus three sigmas (1 + 3 * 0.333).
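A quick empirical check of those coverage figures, as a sketch (the seed and sample size are arbitrary assumptions):

    import random

    rng = random.Random(0)
    mean, sigma = 1.0, 0.333
    draws = [rng.gauss(mean, sigma) for _ in range(100_000)]

    for k in (1, 2, 3):
        inside = sum(abs(x - mean) <= k * sigma for x in draws) / len(draws)
        print(f"within {k} sigma: {inside:.4f}")   # ~0.6827, 0.9545, 0.9973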

John Burr Williams (The Theory of Investment Value, 1998 reprint, pages 67-68) writes in the section headed Uncertainty and the Premium for Risk: "If the investor is uncertain about the future, he cannot tell for sure just what is the present worth of the dividends or of the interest and principal he will receive. He can only say that under one set of possible circumstances it will have one value and under another, another. Each of these possible values will have a different probability, however, and so the investor may draw a probability curve to express the likelihood that any given value, V, will prove to be the true value."

[Probability Distribution Curves]

The above chart is adapted from Williams (Diagram 6 on page 68) and includes both a probability density function and the associated cumulative distribution function. As he writes, "The various possible values, V, of the bond, from zero to par, are shown by the abscissa of the curve, while the likelihood, f(V), that any given value will prove to be the true value, is shown by the ordinates. A uni-modal curve, of the form usual for probability curves, could not be used in this case, because it would fail to show the relatively high chances of receiving all or none of the interest and principal. Whenever the value of a security is uncertain and has to be expressed in terms of probability, the correct value to choose is the mean value...

"The customary way to find the value of a risk security has always been to add a "premium for risk" to the pure interest rate, and then use the sum as the interest rate for discounting future receipts....

"Strictly speaking, however, there is no risk in buying the bond in question if its price is right. Given adequate diversification, gains on such purchases will offset losses, and a return at the pure interest rate will be obtained. Thus the net risk turns out to be nil. To say that a "premium for risk" is needed is really an elliptical way of saying that payment of the full face value of interest and principal is not to be expected on the average. This leads to the mathematical definition of the "premium for risk" as the value of x that will satisfy the following two equations:...

"If the mean value [of V] is known, [the second] equation can be solved for i, the proper yield. Or, if i is known, the same equation can be solved for [the mean value of V]. The problem can be approached in either way. Most people are used to going about it in the latter way, however, and find it easier to think in terms of interest and principal at face value heavily discounted than in terms of interest and principal at reduced value lightly discounted. They think they can make a better estimate of the proper rate of discount in any given situation than of the various possibilities of partial or complete default."

Benoit Mandelbrot ("The Variation of Certain Speculative Prices", Journal of Business, October 1963, pages 394-419) shows that stock prices and related phenomena follow a so-called stable law distribution. The normal or Gaussian and the Cauchy distributions are special cases of the stable family. Unfortunately, little usable statistical theory is built on the stable law probability distribution, due partly to its standard deviation being infinite or undefined except in the Gaussian special case.
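The practical consequence of an undefined standard deviation can be sketched directly. Below, standard Cauchy draws are generated by inverse-transform from a uniform variate (an assumed illustration); unlike the normal sample, the Cauchy sample never lets its standard deviation settle down as the sample grows:

    import math
    import random
    import statistics

    rng = random.Random(3)

    def cauchy(rng):
        # Inverse CDF of the standard Cauchy distribution.
        return math.tan(math.pi * (rng.random() - 0.5))

    for n in (1_000, 10_000, 100_000):
        normal_sd = statistics.stdev(rng.gauss(0, 1) for _ in range(n))
        cauchy_sd = statistics.stdev(cauchy(rng) for _ in range(n))
        print(f"n={n:>6}  normal sd={normal_sd:6.2f}  cauchy sd={cauchy_sd:12.2f}")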

Although the prices of all securities may follow a stable law distribution, estimates of the intrinsic value of a single company are more likely to follow a normal distribution because they conform more closely to the criteria of a lower standard deviation, a narrower range, and symmetry. Estimates of growth are described by either the normal or the lognormal distribution.

The first four moments of an empirical distribution will suffice to describe it: first, the mean; second, the standard deviation; third, skewness; and fourth, kurtosis. Skewness is a measure of nonsymmetry: lop-sidedness. Kurtosis is the thickness of the left and right tails relative to the rest of the density function; it is not modal peakedness relative to the rest of the density function, as often thought. A bimodal distribution, with two modes, is a deviation from the central tendency of the standard distribution types, which are unimodal.
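As a sketch, the four moments can be computed directly from a sample; the data here are arbitrary assumed draws from a standard normal, for which skewness is near 0 and kurtosis near 3:

    import random
    import statistics

    rng = random.Random(5)
    sample = [rng.gauss(0, 1) for _ in range(50_000)]

    mean = statistics.fmean(sample)
    sd = statistics.stdev(sample)
    n = len(sample)
    skewness = sum(((x - mean) / sd) ** 3 for x in sample) / n
    kurtosis = sum(((x - mean) / sd) ** 4 for x in sample) / n

    print(f"mean={mean:.3f}  sd={sd:.3f}  skew={skewness:.3f}  kurtosis={kurtosis:.3f}")

Heavy-tailed data would push the kurtosis figure well above 3 even if the density's peak looked unremarkable, which is why kurtosis is a tail measure rather than a peakedness measure.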

Note on Terminology (from An Introduction to Probability Theory and Its Applications, Volume I, William Feller, 1950, New York: John Wiley & Sons, page 179). "The term distribution function is used in the mathematical literature for never-decreasing functions of x which tend to 0 as x [approaches negative infinity], and to 1 as x [approaches positive infinity]. Statisticians currently prefer the term cumulative distribution function, but the adjective "cumulative" is redundant. A density function is a non-negative function f(x) whose integral, extended over the entire x-axis, is unity. The integral from negative infinity to x of any density function is a distribution function. The older term frequency function is a synonym for density function." The familiar bell-shaped curve is a density function.

Lognormal Distribution

The following excerpts give some background on the logarithmic function, which is used to transform the normal distribution into the lognormal distribution: a variable is lognormally distributed when its logarithm is normally distributed.

Fundamental Methods of Mathematical Economics, 3rd edition, 1984, Alpha C. Chiang, New York: McGraw-Hill, Chapter 10, "Exponential and Logarithmic Functions", pages 268-292:

"Exponential functions, as well as the closely related logarithmic functions, have important applications in economics, especially in connection with growth problems, and in economic dynamics in general. ... A function whose independent variable appears in the role of an exponent is called an exponential function. ... In its simple version, the exponential function may be represented in the form y = f(t) = bt, (b > 1), where y and t are the dependent and independent variables, respectively, and b denotes a fixed base of the exponent. ... That is, the dependent variable y is invariably positive, regardless of the sign of the independent variable t. ... The monotonicity of the exponential function entails at least two interesting and significant implications. First, we may infer that the exponential function must have an inverse function, which is itself monotonic. This inverse function, we shall find, turns out to be a logarithmic function. Second, ... it is possible to express any positive number y as a power of any base b > 1. ... the exponential function y = bt can now be generalized to the form y = abct where a and c are "compressing" or "extending" agents. ... Some bases are more convenient than others as far as mathematical manipulations are concerned. ... thus e may be defined as the limit as m approaches infinity: e = lim f(m) = (1 + (1/m))m....

"Mathematically, the number e is the limit expression ... But does it also possess some economic meaning? The answer is that it can be interpreted as the result of a special process of interest compounding. ... Suppose that, starting out with a principal (or capital) of $1, we find a hypothetical banker to offer us the unusual interest rate of 100 percent per annum ($1 per year). If interest is to be compounded once a year, the value of our asset at the end of the year will be $2; we shall denote this value by V(1), where the number in parenthesis indicates the frequency of compounding within 1 year ... By analogous reasoning, ... in general, V(m) = (1 + (1/m))m where m represents the frequency of compounding in 1 year. In the limiting case, when interest is compounded continuously during the year, i.e., when m becomes infinite, the value of the asset will grow in a "snowballing" fashion, becoming at the end of 1 year lim V(m) = e (dollars). Thus, the number e = 2.71828 can be interpreted as the year-end value to which a principal of $1 will grow if interest at the rate of 100 percent per annum is compounded continuously. Note that the interest rate of 100 percent is only a nominal interest rate, for if $1 becomes $e after 1 year, the effective interest rate is in this case approximately 172 percent per annum. ... The continuous-compounding process just discussed can be generalized in three directions, to allow for: (1) more years of compounding, (2) a principal other than $1, and (3) a nominal interest rate other than 100 percent. Consequently, we find the asset value in the generalized continuous-compounding process to be V = lim V(m) = Aert. [m is compounding periods in a year, is A is principal, r is nominal interest rate, and t is years of continuous compounding] Note that t is a discrete (as against a continuous) variable: it can only take values that are integral multiples of 1/m. ... When m approaches infinity, however, 1/m will become infinitesimal, and accordingly the variable t will become continuous. ... The problem of discounting is the opposite one of finding the present value A of a given sum V which is to be available t years from now.

"Exponential functions are closely related to logarithmic functions (log functions, for short.). ... It should be clear ... that the logarithm is nothing but the power to which the base must be raised to attain a particular number. ... consequently, a negative number or zero cannot possess a logarithm."


Copyright 1997-2003 Numeraire.com. All rights reserved.