FE501 – 1207 - NLP Week 2

Portfolio Optimization Problem.

The problem was first introduced by Markowitz in 1952 (in Modern Portfolio Theory).

More information about this topic can be found here.

 

Generic types of portfolio optimization problems

 

  1. Risk minimization
     $$\min \ \mathrm{Var}[\text{investment}] \ (\text{namely, risk}) \quad \text{s.t.} \quad E[\text{return}] \ge \alpha$$
  2. Profit maximization
     $$\max \ E[\text{return}] \quad \text{s.t.} \quad \mathrm{Var}[\text{investment}] \le R$$

 

Statistical Supplementary Knowledge about the Problem

 

  1. Expectation

    CleanShot 2022-12-20 at 14.46.16

  2. Variance

    image-20221220144736390

  3. Covariance

    image-20221220144955051

Note: Also check the supplementary PDF file from the instructor here.
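For reference, the screenshots above presumably show the standard definitions; for a discrete random variable $X$ taking values $x_i$ with probabilities $p_i$, and a second random variable $Y$:

$$E[X] = \sum_i p_i x_i, \qquad \mathrm{Var}(X) = E\big[(X - E[X])^2\big] = E[X^2] - E[X]^2, \qquad \mathrm{Cov}(X, Y) = E\big[(X - E[X])(Y - E[Y])\big]$$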

Calculations

| Year | Stocks (%) | Gold (%) | T-Bills (%) |
|------|------------|----------|-------------|
| 1968 | 11 | 11 | 5 |
| 1969 | -9 | 8 | 7 |
| 1970 | 4 | 14 | 7 |
| 1971 | 14 | 14 | 4 |
| 1972 | 19 | 44 | 4 |
| 1973 | -15 | 66 | 7 |
| 1974 | -27 | 64 | 8 |
| 1975 | 37 | 0 | 6 |
| 1976 | 24 | -22 | 5 |
| 1977 | -7 | 18 | 5 |
| 1978 | 7 | 31 | 7 |
| 1979 | 19 | 59 | 10 |
| 1980 | 33 | 99 | 11 |
| 1981 | -5 | -25 | 15 |
| 1982 | 22 | 4 | 11 |
| 1983 | 23 | 11 | 9 |
| 1984 | 6 | 15 | 10 |
| 1985 | 32 | 12 | 8 |
| 1986 | 19 | 16 | 6 |
| 1987 | 5 | 22 | 5 |
| 1988 | 17 | 2 | 6 |

CleanShot 2022-12-20 at 15.28.19
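As a cross-check of the spreadsheet calculations, here is a minimal Python sketch that computes the mean return and covariance matrix of the three assets (the numbers are transcribed from the table above and should be treated as illustrative):

```python
import numpy as np

# Yearly returns (%) for 1968-1988, columns: stocks, gold, T-bills
# (transcribed from the table above; treat the exact values as illustrative)
returns = np.array([
    [ 11,  11,  5], [ -9,   8,  7], [  4,  14,  7], [ 14,  14,  4],
    [ 19,  44,  4], [-15,  66,  7], [-27,  64,  8], [ 37,   0,  6],
    [ 24, -22,  5], [ -7,  18,  5], [  7,  31,  7], [ 19,  59, 10],
    [ 33,  99, 11], [ -5, -25, 15], [ 22,   4, 11], [ 23,  11,  9],
    [  6,  15, 10], [ 32,  12,  8], [ 19,  16,  6], [  5,  22,  5],
    [ 17,   2,  6],
])

mean_return = returns.mean(axis=0)          # expected return of each asset
cov_matrix = np.cov(returns, rowvar=False)  # 3 x 3 covariance matrix
# note: np.cov divides by N-1; a population covariance (as in Excel's COVAR) divides by N

print("Mean returns (stocks, gold, T-bills):", mean_return)
print("Covariance matrix:\n", cov_matrix)
```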

 

Solving Portfolio Optimization Problems with Excel Solver

NOTE:

For random variables $X_1, X_2, \dots, X_n$ and constants $c_1, c_2, \dots, c_n$:

$$\mathrm{Var}(c_1X_1 + c_2X_2 + \dots + c_nX_n) = [c_1, c_2, \dots, c_n]\,(\text{covariance matrix})\,[c_1, c_2, \dots, c_n]^T$$

The formula for the covariance matrix is:

$$\begin{bmatrix} \mathrm{Var}(X_1) & \cdots & \mathrm{Cov}(X_1, X_n) \\ \vdots & \ddots & \vdots \\ \mathrm{Cov}(X_n, X_1) & \cdots & \mathrm{Var}(X_n) \end{bmatrix} \tag{1}$$

So, for the original problem, we have:

CleanShot 2022-12-20 at 15.56.39
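A quick numeric sanity check of this identity; a minimal Python sketch with simulated data (the covariance values, weights, and sample size below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate three correlated random variables X1, X2, X3 (arbitrary covariance)
X = rng.multivariate_normal(mean=[0, 0, 0],
                            cov=[[4, 1, 0], [1, 3, 1], [0, 1, 2]],
                            size=100_000)
c = np.array([0.5, 0.3, 0.2])                       # arbitrary weights c1, c2, c3

direct = np.var(X @ c, ddof=1)                      # Var(c1*X1 + c2*X2 + c3*X3) directly
quadratic_form = c @ np.cov(X, rowvar=False) @ c    # c * (covariance matrix) * c^T

print(direct, quadratic_form)                       # the two numbers agree
```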

 

Notice: My Excel file did not work on Mac, so I’ve attached the original Excel file from the instructor.

Excel: here
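Since the Excel file may not open everywhere, here is a rough Python equivalent of the Solver model (a sketch only: the mean returns, covariance matrix, and minimum-return target below are assumed illustrative values, not the ones from the actual data):

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative inputs -- in practice these come from the historical data above
mu = np.array([12.0, 22.0, 7.0])          # assumed mean returns (%) of stocks, gold, T-bills
Sigma = np.array([[230.0,  60.0,  -2.0],  # assumed covariance matrix (illustrative values)
                  [ 60.0, 920.0,  -5.0],
                  [ -2.0,  -5.0,   8.0]])
target_return = 12.0                       # assumed minimum acceptable expected return (%)

def portfolio_variance(w):
    return w @ Sigma @ w                   # objective: variance (risk) of the portfolio

constraints = [
    {"type": "eq",   "fun": lambda w: w.sum() - 1.0},           # invest the whole budget
    {"type": "ineq", "fun": lambda w: mu @ w - target_return},  # expected return >= target
]
bounds = [(0.0, 1.0)] * 3                  # no short selling

result = minimize(portfolio_variance, x0=np.ones(3) / 3,
                  bounds=bounds, constraints=constraints)
print("Optimal weights (stocks, gold, T-bills):", result.x)
print("Minimum variance:", result.fun)
```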

 

Notice: Variance is always non-negative (because of the square in the formula), while covariance can be negative.

Sensitivity Report

CleanShot 2022-12-20 at 16.03.54

 

For an NLP we can also obtain a sensitivity report, but there are several changes.

CleanShot 2022-12-20 at 16.07.03

For an NLP we have the Lagrange multiplier, which is similar to the shadow price in LP. It is:

an estimate of the change in the objective value for a small change in the RHS (not for exactly 1 unit, because the function is not linear; it is a curve).

 

When we change the RHS of the constraint from 1000 to 1001, the objective value will become smaller, but not by exactly the “180.95” from the report; it will only be close to that, because of the nonlinearity.

 

Generalization of Nonlinear Programming

$$\begin{aligned} \min \ & f(x) = f(x_1, x_2, \dots, x_n) \\ \text{s.t. } & g(x) \le 0 \\ & h(x) = 0 \end{aligned} \tag{2}$$

where $f(x)$ is a function of $n$ decision variables, $g(x)$ represents the inequality constraints, and $h(x)$ the equality constraints.
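As an illustration of how the general form (2) maps to code, here is a minimal scipy sketch (the particular $f$, $g$ and $h$ below are made up for illustration; note that scipy expects inequality constraints written as `fun(x) >= 0`, so $g(x) \le 0$ is passed as $-g$):

```python
import numpy as np
from scipy.optimize import minimize

# A small instance of the general form (2):
#   min  f(x) = (x1 - 1)^2 + (x2 - 2)^2
#   s.t. g(x) = x1^2 + x2^2 - 4 <= 0      (inequality constraint)
#        h(x) = x1 - x2        = 0        (equality constraint)
f = lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2
g = lambda x: x[0] ** 2 + x[1] ** 2 - 4
h = lambda x: x[0] - x[1]

constraints = [
    {"type": "ineq", "fun": lambda x: -g(x)},  # scipy wants fun(x) >= 0, so pass -g
    {"type": "eq",   "fun": h},
]

result = minimize(f, x0=np.array([0.0, 0.0]), constraints=constraints)
print(result.x, result.fun)
```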

For an NLP (minimization), if:

  1. the objective function is convex, and
  2. the feasible region is convex,

then any local optimal solution is also the global optimal point.

 

For an NLP (maximization), if:

  1. the objective function is concave, and
  2. the feasible region is convex,

then any local optimal solution is also the global optimal point.

 

The feasible region can be convex or non-convex.

An objective function can be:

  • Convex
  • Concave
  • Neither convex nor concave

 

Convex and Concave Functions

 

A function $f(x_1, x_2, \dots, x_n)$ is a convex function on a convex set $S$ ($S$ is the domain of the function) if for any $x' \in S$ and $x'' \in S$,

$$f[\lambda x' + (1-\lambda)x''] \le \lambda f(x') + (1-\lambda)f(x'') \tag{3}$$

holds for $0 \le \lambda \le 1$.

 

 

A function $f(x_1, x_2, \dots, x_n)$ is a concave function on a convex set $S$ ($S$ is the domain of the function) if for any $x' \in S$ and $x'' \in S$,

$$f[\lambda x' + (1-\lambda)x''] \ge \lambda f(x') + (1-\lambda)f(x'') \tag{4}$$

holds for $0 \le \lambda \le 1$.
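Definition (3) can also be checked numerically at sample points. Below is a rough Python sketch (the function name and tolerance are my own; random sampling only gives evidence of convexity, not a proof):

```python
import numpy as np

def looks_convex(f, low, high, trials=10_000, seed=0):
    """Sample the inequality f(l*x1 + (1-l)*x2) <= l*f(x1) + (1-l)*f(x2)."""
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        x1, x2 = rng.uniform(low, high, size=2)
        lam = rng.uniform(0.0, 1.0)
        lhs = f(lam * x1 + (1 - lam) * x2)
        rhs = lam * f(x1) + (1 - lam) * f(x2)
        if lhs > rhs + 1e-9:          # tolerance for floating-point error
            return False               # found a point violating the definition
    return True

print(looks_convex(lambda x: x ** 2, -10, 10))   # True  -> consistent with convexity
print(looks_convex(np.sin, 0, np.pi))            # False -> sin is concave on [0, pi]
```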

 

Explanation:

For any $0 \le \lambda \le 1$, the new point $x = \lambda x_1 + (1-\lambda)x_2$ is a point between $x_1$ and $x_2$ (when $\lambda = 0$, $x = x_2$; when $\lambda = 1$, $x = x_1$).

CleanShot 2022-12-20 at 18.58.22

This means that if we take any two points from the domain of the function, the value of the function at every point between them always lies below (for (3)) or above (for (4)) the line segment that connects the two corresponding points on the graph.

CleanShot 2022-12-20 at 19.00.51

This is a convex function.

 

CleanShot 2022-12-20 at 19.03.46

This is a concave function.

 

CleanShot 2022-12-20 at 19.07.14

This function is neither a convex function nor a concave function.

 

Question: Determine the type (convex/concave) of the function $f(x) = \ln x$.

Answer:

$f(x) = \ln x$ is a concave function (its second derivative $f''(x) = -1/x^2$ is negative for all $x > 0$).

CleanShot 2022-12-20 at 19.11.14

 


 

Also: $f(x) = x^{1/2}$ is a concave function.

Properties

image-20221220192610212

Determining convexity (concavity) of a function using the slope

 

$f(x)$ is a convex function if $f(x_2) \ge f(x_1) + f'(x_1)(x_2 - x_1)$ for all $x_1, x_2$, i.e., the function always lies on or above its tangent lines.

Tangent_function_animation

CleanShot 2022-12-20 at 19.23.20

Determining convexity (concavity) of a single-variable function with the second-derivative check

 

Recall that if $f(x)$ is a convex function of a single variable, the line joining any two points on $y = f(x)$ is never below the curve $y = f(x)$.

 

Theorem: Suppose $f''(x)$ exists for all $x$ in a convex set $S$. Then $f(x)$ is a convex function on $S$ if and only if $f''(x) \ge 0$ for all $x$ in $S$.

Theorem: Suppose $f''(x)$ exists for all $x$ in a convex set $S$. Then $f(x)$ is a concave function on $S$ if and only if $f''(x) \le 0$ for all $x$ in $S$.

 

Single Variable Examples: Determine if A function is Convex or Concave

 

  1. $f(x) = x^2$

     $$f'(x) = 2x, \quad f''(x) = 2 \ge 0 \;\Rightarrow\; \text{convex} \tag{5}$$
  2. $f(x) = e^x$

     $$f'(x) = e^x, \quad f''(x) = e^x \ge 0 \;\Rightarrow\; \text{convex} \tag{6}$$

     image-20221220204410890

  3. $f(x) = x^{1/2}$

     $$f'(x) = \tfrac{1}{2}x^{-1/2}, \quad f''(x) = -\tfrac{1}{4}x^{-3/2} \le 0 \ \big(x \in (0,\infty)\big) \;\Rightarrow\; \text{concave} \tag{7}$$
  4. $f(x) = a$ (a constant)

$$f'(x) = 0, \quad f''(x) = 0 \;\Rightarrow\; f(x) \text{ is both a convex and a concave function} \tag{8}$$
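The same second-derivative checks can be reproduced symbolically; a short sketch using sympy (the printed annotations restate the conclusions above):

```python
import sympy as sp

x, a = sp.symbols("x a", positive=True)   # positive=True keeps sqrt(x) real

for f in (x**2, sp.exp(x), sp.sqrt(x), a):
    second = sp.diff(f, x, 2)             # f''(x)
    print(f, "->", second)
# x**2    -> 2                 (>= 0, convex)
# exp(x)  -> exp(x)            (>= 0, convex)
# sqrt(x) -> -1/(4*x**(3/2))   (<= 0 for x > 0, concave)
# a       -> 0                 (both convex and concave)
```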

 

Convexity of the Feasible Set

A set $S$ of points in $R^n$ is a convex set if the line segment joining any two points of $S$ also belongs entirely to $S$.

In other words: take any two points from the set and connect them with a segment; if every point of the segment lies inside the set, then the set is convex.

440px-Convex_polygon_illustration1.svg

440px-Convex_polygon_illustration2.svg

Example: Check the convexity of the set defined by $y^2 \le x$ $(x \ge 0)$.

$$y^2 \le x \;\Rightarrow\; y \le \sqrt{x} \tag{9}$$

CleanShot 2022-12-20 at 21.07.47

This is a convex set.

Convex Optimization Problem

 

A problem is a convex optimization problem when the objective function is convex and the feasible region is a convex set (for a minimization problem),

OR,

when the objective function is concave and the feasible region is a convex set (for a maximization problem).

 

Taylor Series, Taylor Expansion and the Hessian Matrix

 

Higher-order derivatives

CleanShot 2022-12-20 at 22.13.08

Source: https://www.youtube.com/watch?v=BLkz5LGWihw

 

Taylor Series

The Taylor series is a polynomial approximation of a given, possibly complicated, function.

CleanShot 2022-12-20 at 21.45.10

image-20221220214530291

CleanShot 2022-12-20 at 21.47.59
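For reference (the screenshots above presumably show this), the Taylor series of $f$ around a base point $a$ is:

$$f(x) = \sum_{n=0}^{\infty} \frac{f^{(n)}(a)}{n!}(x - a)^n = f(a) + f'(a)(x - a) + \frac{f''(a)}{2!}(x - a)^2 + \dots$$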

 

Hessian Matrix

The Hessian matrix is a way of packing (organizing) all the second partial derivative information of a multivariable function.

$$H_f = \begin{bmatrix} \dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1\,\partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_1\,\partial x_n} \\ \dfrac{\partial^2 f}{\partial x_2\,\partial x_1} & \dfrac{\partial^2 f}{\partial x_2^2} & \cdots & \dfrac{\partial^2 f}{\partial x_2\,\partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \dfrac{\partial^2 f}{\partial x_n\,\partial x_1} & \dfrac{\partial^2 f}{\partial x_n\,\partial x_2} & \cdots & \dfrac{\partial^2 f}{\partial x_n^2} \end{bmatrix} \tag{10}$$

OR:

$$(H_f)_{i,j} = \frac{\partial^2 f}{\partial x_i \, \partial x_j} \tag{11}$$

Quadratic Approximation

The basic formula for quadratic approximation with base point $x_0 = 0$ is:

$$f(x) \approx f(x_0) + f'(x_0)(x - x_0) + \frac{f''(x_0)}{2}(x - x_0)^2 \tag{12}$$

This works for values of x near 0; it is just the formula for linear approximation with one new term.

Where did that extra term come from, and what is the $\frac{1}{2}$ doing in front of the $x^2$? We'll explain this in terms of what happens if the graph of our function is a parabola. If the graph of a function is a parabola, that function is a quadratic function.

Ideally, the quadratic approximation of a quadratic function should be identical to the original function.

Is there a reason why $a$ is the constant term? I'd be inclined to rewrite this as $ax^2 + bx + c$.

For instance, consider:

$$f(x) = a + bx + cx^2; \quad f'(x) = b + 2cx; \quad f''(x) = 2c \tag{13}$$

Set the base point $x_0 = 0$. Now we try to recover the values of $a$, $b$ and $c$ from our information about the derivatives of $f(x)$.

CleanShot 2022-12-21 at 08.43.12
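The screenshot above presumably carries out this matching; evaluating $f$ and its derivatives at the base point $x_0 = 0$ gives:

$$f(0) = a, \qquad f'(0) = b, \qquad f''(0) = 2c \;\Rightarrow\; a = f(0), \quad b = f'(0), \quad c = \frac{f''(0)}{2}$$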

This tells us what the coefficients of the quadratic approximation formula must be in order for the quadratic approximation of a quadratic function to equal that function.

Determining convexity of a multivariable function using the Hessian matrix

 

Calculating Hessian

 

The Hessian of $f(x_1, x_2, \dots, x_n)$ is the $n \times n$ matrix whose $ij$th entry is:

$$\frac{\partial^2 f}{\partial x_i \, \partial x_j} \tag{14}$$

We let $H(x_1, x_2, \dots, x_n)$ denote the value of the Hessian at $(x_1, x_2, \dots, x_n)$. For example, if

$$f(x_1, x_2) = x_1^3 + 2x_1x_2 + x_2^2 \tag{15}$$

then:

$$H(x_1, x_2) = \begin{bmatrix} \dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1 \partial x_2} \\ \dfrac{\partial^2 f}{\partial x_2 \partial x_1} & \dfrac{\partial^2 f}{\partial x_2^2} \end{bmatrix} \tag{16}$$

And we have:

$$\frac{\partial f}{\partial x_1} = 3x_1^2 + 2x_2, \qquad \frac{\partial f}{\partial x_2} = 2x_1 + 2x_2 \tag{17}$$
$$H(x_1, x_2) = \begin{bmatrix} \dfrac{\partial^2 f}{\partial x_1^2} & \dfrac{\partial^2 f}{\partial x_1 \partial x_2} \\ \dfrac{\partial^2 f}{\partial x_2 \partial x_1} & \dfrac{\partial^2 f}{\partial x_2^2} \end{bmatrix} = \begin{bmatrix} 6x_1 & 2 \\ 2 & 2 \end{bmatrix} \tag{18}$$
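The same Hessian can be obtained symbolically; a short sketch using sympy's `hessian` helper:

```python
import sympy as sp

x1, x2 = sp.symbols("x1 x2")
f = x1**3 + 2*x1*x2 + x2**2

H = sp.hessian(f, (x1, x2))
print(H)   # Matrix([[6*x1, 2], [2, 2]]) -- matches (18)
```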

 

Principal Minor Calculation

 

An $i$th principal minor of an $n \times n$ matrix is the determinant of any $i \times i$ matrix obtained by deleting $n - i$ rows and the corresponding $n - i$ columns of the matrix.

 

Thus, for the matrix

$$\begin{bmatrix} -2 & -1 \\ -1 & -4 \end{bmatrix} \tag{19}$$

CleanShot 2022-12-21 at 08.49.36

CleanShot 2022-12-21 at 08.49.59

The first principal minors are $-2$ and $-4$,

and the second principal minor $PM_2$ is:

$$(-2) \times (-4) - (-1) \times (-1) = 7 \tag{20}$$

For any matrix, the first principal minors are just the diagonal entries of the matrix.

 

kth leading principal minor

 

The $k$th leading principal minor of an $n \times n$ matrix is the determinant of the $k \times k$ matrix obtained by deleting the last $n - k$ rows and columns of the matrix.

 

We let $H_k(x_1, x_2, \dots, x_n)$ be the $k$th leading principal minor of the Hessian matrix evaluated at the point $(x_1, x_2, \dots, x_n)$. Thus, if $f(x_1, x_2) = x_1^3 + 2x_1x_2 + x_2^2$, then $H_1(x_1, x_2) = 6x_1$ and $H_2(x_1, x_2) = 6x_1(2) - 2(2) = 12x_1 - 4$.
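Principal minors and leading principal minors can also be computed mechanically. A minimal numpy sketch (the helper names `principal_minors` and `leading_principal_minors` are made up for illustration), applied to the matrix in (19):

```python
import numpy as np
from itertools import combinations

def principal_minors(A, i):
    """All i-th principal minors: determinants of the i x i submatrices obtained
    by keeping the same i rows and columns."""
    n = A.shape[0]
    return [np.linalg.det(A[np.ix_(idx, idx)])
            for idx in combinations(range(n), i)]

def leading_principal_minors(A):
    """The k-th leading principal minor keeps the first k rows and columns."""
    n = A.shape[0]
    return [np.linalg.det(A[:k, :k]) for k in range(1, n + 1)]

A = np.array([[-2.0, -1.0],
              [-1.0, -4.0]])
print(principal_minors(A, 1))        # [-2.0, -4.0]  (the diagonal entries)
print(principal_minors(A, 2))        # [7.0]         (det of the whole matrix)
print(leading_principal_minors(A))   # [-2.0, 7.0]
```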

 

Using principal minors to check convexity and concavity

 

Suppose $f(x_1, x_2, \dots, x_n)$ has continuous second-order partial derivatives at each point $x = (x_1, x_2, \dots, x_n) \in S$. Then,

 

$f(x_1, x_2, \dots, x_n)$ is a convex function on $S$ iff (if and only if) for each $x \in S$, all principal minors of $H$ are non-negative ($\ge 0$).

 

$f(x_1, x_2, \dots, x_n)$ is a concave function on $S$ iff for each $x \in S$ and $k = 1, 2, \dots, n$, all nonzero principal minors have the same sign as $(-1)^k$

(or: $PM_k \cdot (-1)^k > 0$ for every nonzero $k$th principal minor $PM_k$).

 

Expl. $f(x_1, x_2) = x_1^2 + 2x_1x_2 + x_2^2$ is a convex function on $S = R^2$.

$$H(x_1, x_2) = \begin{bmatrix} 2 & 2 \\ 2 & 2 \end{bmatrix} \tag{21}$$

The first principal minors are $2 \ge 0$ and $2 \ge 0$, and the second principal minor is $4 - 4 = 0 \ge 0$.

So this is a convex function.

 

Expl. $f(x_1, x_2) = -x_1^2 - x_1x_2 - 2x_2^2$.

$$H(x_1, x_2) = \begin{bmatrix} -2 & -1 \\ -1 & -4 \end{bmatrix} \tag{22}$$

The first-order principal minors ($-2$ and $-4$) are negative, i.e., they have the same sign as $(-1)^1$.

The second-order principal minor ($7$) is positive, i.e., the same sign as $(-1)^2$.

 

So, the function is a concave function.

 

Expl. $f(x_1, x_2) = x_1^2 - 3x_1x_2 + 2x_2^2$

$$H(x_1, x_2) = \begin{bmatrix} 2 & -3 \\ -3 & 4 \end{bmatrix} \tag{23}$$

The first-order principal minors are positive ($2$ and $4$).

The second principal minor is $8 - 9 = -1 < 0$, while $(-1)^2 > 0$, so the concavity condition fails as well.

So this function is neither convex nor concave.

 

Practice 1: Check convexity or concavity for

$$S = R^3, \qquad f(x_1, x_2, x_3) = x_1^2 + x_2^2 + 2x_3^2 - x_1x_2 - x_2x_3 - x_1x_3 \tag{24}$$

(63420)

 

 

Practice 2:

$$f(x, y, z) = x^2 + y^2 + z^2 \tag{25}$$

(Nota5200)

 

Inflection Point, Saddle Point

 

For a single-variable function, $x$ is an inflection point if the function's second derivative $f''(x)$ changes sign at $x$.

Such a point is called a saddle point in multi-dimensional functions.

 

 

[End of 1207]