
This tutorial written and reproduced with permission from Peter Ponzo

In a neat article in the Journal of Financial Planning, Gobind Daryanani has a very interesting application of sensitivity analysis to problems in finance. We consider a variable z which depends upon two random variables x and y, like so:

(1) z = f(x,y)

We want to determine the statistical properties of z, given the statistical properties of x and y. In particular, we suppose:

The Mean of the random variable x is M(x) = 0
The Mean of the random variable y is M(y) = 0
S(x), the Standard Deviation of random variable x, is known
S(y), the Standard Deviation of random variable y, is known

What are M(z) and S(z), the Mean and Standard Deviation of z ?

I assume that M(this) is the Mean of this … and S(that) is the Standard Deviation of that?

Yes. We now suppose that the variables x and y vary little from their Mean, which is 0. Hence, if and when terms of third or higher order occur, such as x^3 or x^2 y or x^2 y^2 … we’ll ignore such terms.

Can you do that?

Read the title again. It’s all about approximations. Okay, so we expand the function f(x,y) in a Taylor series, about the Mean values: x = y = 0, like so:

z = f(0,0) + f_x(0,0) x + f_y(0,0) y + (1/2) f_xx(0,0) x^2
+ f_xy(0,0) x y + (1/2) f_yy(0,0) y^2 + …

… or (ignoring those higher order terms), simply

(2) z = f(0,0) + A x + B y + C x^2 + D x y + E y^2

where the constants A, B, etc. are partial derivatives of the function f, evaluated at x = y = 0, and the higher order terms are ignored. Equation (2) is a Quadratic approximation to f(x,y). A Linear approximation would be z = f(0,0) + A x + B y. If we let the random variables x and y vary (according to their statistical properties) and average the resultant z-values, we get the Mean of z, which we’re calling M(z), by taking the Mean of equation (2). However, we first note that:
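Those constants can also be estimated numerically. Here’s a little sketch (mine, not from the article) that recovers A, B, C, D and E by central finite differences, using the illustrative choice f(x,y) = (1+x)(1+y); the step size h is arbitrary:

```python
# A sketch (not from the article): estimating the Taylor coefficients
# A = f_x, B = f_y, C = (1/2) f_xx, D = f_xy, E = (1/2) f_yy at (0,0)
# by central finite differences, for the illustrative f(x,y) = (1+x)(1+y).

def f(x, y):
    return (1 + x) * (1 + y)

h = 1e-4  # step size for the difference quotients

f00 = f(0, 0)
A = (f(h, 0) - f(-h, 0)) / (2 * h)                    # f_x(0,0)
B = (f(0, h) - f(0, -h)) / (2 * h)                    # f_y(0,0)
C = (f(h, 0) - 2 * f00 + f(-h, 0)) / (2 * h * h)      # (1/2) f_xx(0,0)
D = (f(h, h) - f(h, -h) - f(-h, h) + f(-h, -h)) / (4 * h * h)  # f_xy(0,0)
E = (f(0, h) - 2 * f00 + f(0, -h)) / (2 * h * h)      # (1/2) f_yy(0,0)

print(f00, A, B, C, D, E)
```

For this bilinear f the expansion terminates: C = E = 0 and D = 1, so the Quadratic approximation is exact.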

1. The Mean of a sum is the sum of the Means.
2. The Mean of a constant is the value of that constant.
3. S(u), the Standard Deviation of a random variable u, is given by:

S^2(u) = Mean(u^2) – [Mean(u)]^2
(the average square) – (the square of the average)

If Mean(u) = 0, then S^2(u) = Mean(u^2) and we’re calling this M(u^2)
If the Means of x and y are both 0, then the Mean of xy is the Covariance of x and y:

M(xy) = COV(x,y)

r, the Pearson Correlation Coefficient (between random variables x and y), is given by

r = COV(x,y) / {S(x) S(y)}

See Pearson Correlation
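These identities are easy to check numerically. Here’s a small sketch (with made-up zero-mean samples, not data from the article) verifying S^2(u) = M(u^2) – M^2(u), M(xy) = COV(x,y) and r = COV(x,y) / {S(x) S(y)}:

```python
# A quick check (illustrative data, not from the article) of the identities
# S^2(u) = M(u^2) - M^2(u), M(xy) = COV(x,y), r = COV(x,y) / (S(x) S(y)).

import math

x = [1.0, -1.0, 2.0, -2.0]   # a made-up zero-mean sample
y = [2.0, -2.0, 1.0, -1.0]   # another made-up zero-mean sample

def mean(u):
    return sum(u) / len(u)

def sd(u):
    # population standard deviation: sqrt of (average square) - (square of the average)
    return math.sqrt(mean([v * v for v in u]) - mean(u) ** 2)

cov = mean([a * b for a, b in zip(x, y)])   # M(xy), since M(x) = M(y) = 0
r = cov / (sd(x) * sd(y))

print(cov, r)   # 2.0 and 0.8 for this sample
```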

Okay, we’re now ready to calculate the Mean of z … as given by the approximation in (2):

M(z) = M(f(0,0)) + A M(x) + B M(y) + C M(x^2) + D M(xy) + E M(y^2)
using 1, above

= f(0,0) + 0 + 0 + C S^2(x) + D S(x) S(y) r + E S^2(y)
using a bunch of stuff, above

Finally, then, we have: If x, y are random variables with Means M(x) = M(y) = 0 and Standard Deviations S(x), S(y), and r is the (x,y) Pearson Correlation Coefficient, and z = f(0,0) + A x + B y + C x^2 + D x y + E y^2, then M(z) = f(0,0) + C S^2(x) + r D S(x) S(y) + E S^2(y)
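Here’s a quick Monte Carlo sanity check of that formula (my own sketch, not from the article). It uses the illustrative function f(x,y) = (1+x)(1+y) = 1 + x + y + x y, so f(0,0) = 1, A = B = D = 1 and C = E = 0; the Standard Deviations and the correlation are assumed values:

```python
# Monte Carlo check (a sketch) of M(z) = f(0,0) + C S^2(x) + r D S(x) S(y) + E S^2(y)
# for the illustrative f(x,y) = (1+x)(1+y) = 1 + x + y + xy,
# so f(0,0) = 1, A = B = D = 1, C = E = 0. S(x), S(y), r are assumed values.

import math, random

rng = random.Random(1)
sx, sy, r = 0.10, 0.15, 0.5     # assumed standard deviations and correlation
n = 200_000

total = 0.0
for _ in range(n):
    u = rng.gauss(0.0, 1.0)
    v = rng.gauss(0.0, 1.0)
    x = sx * u                                        # Mean 0, SD sx
    y = sy * (r * u + math.sqrt(1 - r * r) * v)       # Mean 0, SD sy, correlation r with x
    total += (1 + x) * (1 + y)

mc_mean = total / n
formula = 1 + r * sx * sy   # the boxed formula (exact here, since f is quadratic)
print(mc_mean, formula)
```

The sample mean of z lands very close to 1 + r S(x) S(y), as the formula predicts.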

Patience.

And what about pictures? Don’t you have any …?

Patience. We’re getting there. We now have to determine a formula for S(z), the Standard Deviation of z, using the approximation in (2), above. Now S2(z) is the Average Squared Deviation of z from its Mean M(z), and we already have this Mean, above, so we need to square z – M(z) and find the Average of that and then we’ll …

So, just do it.

Okay.

[z – M(z)]^2 = [{f(0,0) + A x + B y + C x^2 + D x y + E y^2} –
{f(0,0) + C S^2(x) + D S(x) S(y) r + E S^2(y)}]^2

= [A x + B y + C {x^2 – S^2(x)} + D {x y – r S(x) S(y)} + E {y^2 – S^2(y)}]^2
= A^2 x^2 + B^2 y^2 + 2 A B x y     ignoring higher order terms

Now we take the Mean and get:

S^2(z) = A^2 M(x^2) + B^2 M(y^2) + 2 A B M(x y)
= A^2 S^2(x) + B^2 S^2(y) + 2 A B S(x) S(y) r

We give this a position of importance:

If x, y are random variables with Means M(x) = M(y) = 0 and Standard Deviations S(x), S(y), and r is the (x,y) Pearson Correlation Coefficient, and z = f(0,0) + A x + B y + C x^2 + D x y + E y^2, then S^2(z) = A^2 S^2(x) + B^2 S^2(y) + 2 r A B S(x) S(y) approximately.

Notice that, in these approximations, the Mean of z depends upon C, D and E, the coefficients of the second order terms whereas the Standard Deviation of z depends upon A and B, the coefficients of the first order terms.
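The same Monte Carlo setup can check the Standard Deviation formula (again my own sketch, with assumed values for S(x), S(y) and r, and the illustrative f(x,y) = (1+x)(1+y), so A = B = 1):

```python
# Companion Monte Carlo check (a sketch, assumed S(x), S(y), r) of
# S^2(z) ~ A^2 S^2(x) + B^2 S^2(y) + 2 r A B S(x) S(y)
# for f(x,y) = (1+x)(1+y), where A = B = 1.

import math, random

rng = random.Random(2)
sx, sy, r = 0.10, 0.15, 0.5     # assumed values
n = 200_000

zs = []
for _ in range(n):
    u = rng.gauss(0.0, 1.0)
    v = rng.gauss(0.0, 1.0)
    x = sx * u
    y = sy * (r * u + math.sqrt(1 - r * r) * v)
    zs.append((1 + x) * (1 + y))

m = sum(zs) / n
mc_sd = math.sqrt(sum((z - m) ** 2 for z in zs) / n)        # sample SD of z
approx_sd = math.sqrt(sx**2 + sy**2 + 2 * r * sx * sy)      # the formula, A = B = 1
print(mc_sd, approx_sd)
```

The small gap between the two numbers is exactly the contribution of the higher order terms we threw away.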

And what about that portfolio stuff?

If z is the value of a portfolio, and it depends upon annual returns x and y, each with some statistical distribution characterized by their Mean and Standard Deviation, then we can determine the statistical properties of this portfolio value … in terms of the statistical properties of the returns x and y. The point here is that, whereas it’d be difficult to obtain the Mean and Standard Deviation of z = f(x,y) directly, it’s pretty easy to get the Mean and SD of the quadratic expansion.

Approximately! And the portfolio value depends upon just two returns, x and y? Are you kidding?

Well … no. But we walk before we run.

Then start running!

Okay, but in the meantime look at some pictures:

Figure 1

We look at f(x,y) = (1+x)(1+y) where x and y are annual returns, selected at random from some statistical distribution. Figure 1 shows a typical distribution of (x,y) together with an indication of the correlation between f(x,y) and the Linear Approximation f(0.08,0.12) + df/dx (x-0.08) + df/dy (y-0.12) where the derivatives df/dx and df/dy are evaluated at the Means: M(x) = 0.08, M(y) = 0.12.

The point where x=M(x), y=M(y) is shown in red, in the left chart … and the light gray line in the right chart illustrates an exact match. (For this example, a Quadratic approximation would be exact!) Note that (x-0.08) and (y-0.12) replace the x and y we were considering earlier; these have Means which are 0. Note, too, that the values of df/dx and df/dy give some indication of the sensitivity of f(x,y) to changes in x and y.
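For the curious, here’s Figure 1’s comparison in code: the Linear and Quadratic approximations to f(x,y) = (1+x)(1+y) about the Means (0.08, 0.12), evaluated at one illustrative sample point of my own choosing:

```python
# The Linear and Quadratic approximations from Figure 1, in code, for
# f(x,y) = (1+x)(1+y) expanded about the Means M(x) = 0.08, M(y) = 0.12.
# The sample point (x, y) = (0.10, 0.15) is just for illustration.

def f(x, y):
    return (1 + x) * (1 + y)

mx, my = 0.08, 0.12
fx = 1 + my   # df/dx = (1+y), evaluated at the Means
fy = 1 + mx   # df/dy = (1+x), evaluated at the Means
fxy = 1.0     # d2f/dxdy, the only nonzero second derivative

x, y = 0.10, 0.15
L = f(mx, my) + fx * (x - mx) + fy * (y - my)   # Linear approximation
Q = L + fxy * (x - mx) * (y - my)               # Quadratic approximation

print(f(x, y), L, Q)
```

As the text says, the Quadratic approximation matches f(x,y) exactly for this f; the Linear one misses by the small cross term.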

Uh … not very.

Yes, but …

How about a real life example, say 25% bonds and 75% stocks and …?

Okay. Suppose an initial $1.00 portfolio is rebalanced yearly to maintain a 25% + 75% allocation, and we consider the portfolio growth over 5 years, where the growth factor would be f(x,y) = (1 + 0.25 x + 0.75 y)^5 where x and y denote bond and stock returns and their distributions are characterized by their Means and Standard Deviations with M(x) = 0.06, meaning 6%, and M(y) = …
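Before the picture, here’s a sketch of the calculation: the Taylor coefficients of this f about the Means M(x) = 0.06 and M(y) = 0.10 (the values used in the example), then the approximate Mean and Standard Deviation of the 5-year growth factor. The article doesn’t fix S(x), S(y) or r, so the values below are assumptions, just for illustration:

```python
# A sketch of the 5-year example: f(x,y) = (1 + 0.25 x + 0.75 y)^5, expanded
# about the Means M(x) = 0.06, M(y) = 0.10. The Standard Deviations and the
# correlation below are ASSUMED for illustration; the article does not fix them.

import math

wb, ws = 0.25, 0.75            # bond and stock weights
mx, my = 0.06, 0.10            # Mean returns, from the example
sx, sy, r = 0.05, 0.15, 0.2    # assumed SDs and correlation
n = 5                          # years

p = 1 + wb * mx + ws * my      # 1.09: the Mean one-year growth factor

# Taylor coefficients of (p + wb*dx + ws*dy)^n about dx = dy = 0:
f00 = p ** n
A = n * wb * p ** (n - 1)
B = n * ws * p ** (n - 1)
C = 0.5 * n * (n - 1) * wb**2 * p ** (n - 2)
D = n * (n - 1) * wb * ws * p ** (n - 2)
E = 0.5 * n * (n - 1) * ws**2 * p ** (n - 2)

# The approximations derived above:
mean_z = f00 + C * sx**2 + r * D * sx * sy + E * sy**2
sd_z = math.sqrt(A**2 * sx**2 + B**2 * sy**2 + 2 * r * A * B * sx * sy)
print(mean_z, sd_z)
```

Note that mean_z comes out a bit larger than the deterministic 1.09^5, because the second-order coefficients C, D and E all contribute positively here.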

A picture is worth a thousand words!

Okay. Here’s a picture showing an (x,y) distribution and the resultant Linear and Quadratic approximations, L(x,y) and Q(x,y), respectively. The red dots show the deterministic result, when x and y are constant, equal to their Mean values: x = M(x) = 0.06 and y = M(y) = 0.10.

Figure 2