
Comments (3)

microprediction commented on June 14, 2024

from precise.

Marzio-USI commented on June 14, 2024

My mistake; in case someone is having the same doubts, this might be helpful:

$$ \Sigma = \begin{bmatrix} A & B\\ C & D \end{bmatrix} $$

Where $\Sigma \in \mathbb{R}^{n \times n}$ and:

  1. $A \in \mathbb{R}^{p \times p}$
  2. $B \in \mathbb{R}^{p \times q}$
  3. $C \in \mathbb{R}^{q \times p}$
  4. $D \in \mathbb{R}^{q \times q}$
  5. $p + q = n$

The minimum variance portfolio:

$$ w \propto \Sigma^{-1} \vec{1} $$

We can view any expression of the form $\Sigma^{-1}\vec{1}$ in terms of the minimum variance portfolio $w(\Sigma)$ and its portfolio variance $\nu(\Sigma)$, viz:

$$ \Sigma^{-1}\vec{1} = \vec{1}^T \Sigma^{-1} \vec{1} \cdot w(\Sigma) = \dfrac{1}{\nu(\Sigma)}w(\Sigma) $$
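This identity is easy to check numerically. Below is a minimal numpy sketch; the helper names `min_var_weights` and `port_var` are mine, not from the `precise` package:

```python
import numpy as np

def min_var_weights(cov):
    """Minimum variance weights w(Sigma) = Sigma^-1 1 / (1^T Sigma^-1 1)."""
    x = np.linalg.solve(cov, np.ones(cov.shape[0]))
    return x / x.sum()

def port_var(cov):
    """Variance nu(Sigma) = w^T Sigma w of the minimum variance portfolio."""
    w = min_var_weights(cov)
    return float(w @ cov @ w)

# Verify Sigma^-1 1 = (1/nu(Sigma)) w(Sigma) on a random SPD covariance
rng = np.random.default_rng(0)
G = rng.standard_normal((4, 4))
cov = G @ G.T + 4 * np.eye(4)
lhs = np.linalg.solve(cov, np.ones(4))
rhs = min_var_weights(cov) / port_var(cov)
assert np.allclose(lhs, rhs)
```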

In particular, if $B = 0$ (and hence $C = B^T = 0$, since a covariance matrix is symmetric) the global minimum variance allocation is proportional to:

$$ w \propto \Sigma^{-1}\vec{1} = \begin{bmatrix} A & 0\\ 0 & D\\ \end{bmatrix}^{-1} \vec{1} = \begin{bmatrix} A^{-1} \vec{1}\\ D^{-1} \vec{1} \end{bmatrix} = \begin{bmatrix} \dfrac{1}{\nu(A)}w(A)\\ \dfrac{1}{\nu(D)}w(D)\\ \end{bmatrix} $$
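The block-diagonal case can also be checked directly: the unnormalized allocation for the full matrix stacks the two sub-problem allocations. A sketch (helper names are mine):

```python
import numpy as np

def min_var_weights(cov):
    x = np.linalg.solve(cov, np.ones(cov.shape[0]))
    return x / x.sum()

def port_var(cov):
    w = min_var_weights(cov)
    return float(w @ cov @ w)

rng = np.random.default_rng(1)

def spd(n):
    """Random symmetric positive definite matrix."""
    G = rng.standard_normal((n, n))
    return G @ G.T + n * np.eye(n)

A, D = spd(3), spd(2)
Sigma = np.block([[A, np.zeros((3, 2))], [np.zeros((2, 3)), D]])

# Sigma^-1 1 stacks (1/nu(A)) w(A) on top of (1/nu(D)) w(D)
top = min_var_weights(A) / port_var(A)
bottom = min_var_weights(D) / port_var(D)
assert np.allclose(np.linalg.solve(Sigma, np.ones(5)),
                   np.concatenate([top, bottom]))
```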

Schur Allocation

With Schur allocation, by contrast, we use the block factorization:

$$ \Sigma^{-1} = \begin{bmatrix} A & B\\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} (A - BD^{-1}C)^{-1} & 0 \\ 0 & (D - CA^{-1}B)^{-1} \end{bmatrix} \begin{bmatrix} I_p & -BD^{-1} \\ -CA^{-1} & I_q \end{bmatrix} $$

Where $I_p$, $I_q$ are identity matrices $\in \mathbb{R}^{p \times p}$ and $\in \mathbb{R}^{q \times q}$ respectively.
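This factorization can be verified numerically against a direct inverse. A self-contained numpy sketch:

```python
import numpy as np

# Random SPD covariance: its blocks and Schur complements are then invertible
rng = np.random.default_rng(2)
p, q = 3, 2
n = p + q
G = rng.standard_normal((n, n))
Sigma = G @ G.T + n * np.eye(n)
A, B = Sigma[:p, :p], Sigma[:p, p:]
C, D = Sigma[p:, :p], Sigma[p:, p:]

left = np.block([
    [np.linalg.inv(A - B @ np.linalg.solve(D, C)), np.zeros((p, q))],
    [np.zeros((q, p)), np.linalg.inv(D - C @ np.linalg.solve(A, B))],
])
right = np.block([
    [np.eye(p), -B @ np.linalg.inv(D)],
    [-C @ np.linalg.inv(A), np.eye(q)],
])
assert np.allclose(left @ right, np.linalg.inv(Sigma))
```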

Thus, the global minimum variance allocation is proportional to:

$$ \begin{aligned} w \propto \Sigma^{-1}\vec{1} &= \begin{bmatrix} A & B\\ C & D \end{bmatrix}^{-1} \vec{1}\\ &= \begin{bmatrix} (A - BD^{-1}C)^{-1} & 0 \\ 0 & (D - CA^{-1}B)^{-1} \end{bmatrix} \begin{bmatrix} I_p & -BD^{-1} \\ -CA^{-1} & I_q \end{bmatrix} \cdot \vec{1} \\ \end{aligned} $$

Writing $A^c = A - BD^{-1}C$ and $D^c = D - CA^{-1}B$ for the two Schur complements, then:

$$ \begin{aligned} w \propto \Sigma^{-1}\vec{1} &= \begin{bmatrix} (A^c)^{-1} & 0 \\ 0 & (D^c)^{-1} \end{bmatrix} \begin{bmatrix} I_p & -BD^{-1} \\ -CA^{-1} & I_q \end{bmatrix} \cdot \vec{1} \\ &\\ &= \begin{bmatrix} (A^c)^{-1} & -(A^c)^{-1}BD^{-1} \\ -(D^{c})^{-1}CA^{-1} & (D^c)^{-1} \end{bmatrix} \cdot \vec{1} \end{aligned} $$
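The multiplied-out block inverse can likewise be checked against `np.linalg.inv`. A self-contained sketch:

```python
import numpy as np

# Random SPD covariance so every block and Schur complement is invertible
rng = np.random.default_rng(0)
p, q = 3, 2
n = p + q
G = rng.standard_normal((n, n))
Sigma = G @ G.T + n * np.eye(n)
A, B = Sigma[:p, :p], Sigma[:p, p:]
C, D = Sigma[p:, :p], Sigma[p:, p:]

Ac = A - B @ np.linalg.solve(D, C)  # A^c, Schur complement of D
Dc = D - C @ np.linalg.solve(A, B)  # D^c, Schur complement of A

# Expanded block inverse from the derivation above
Ac_inv, Dc_inv = np.linalg.inv(Ac), np.linalg.inv(Dc)
expanded = np.block([
    [Ac_inv, -Ac_inv @ B @ np.linalg.inv(D)],
    [-Dc_inv @ C @ np.linalg.inv(A), Dc_inv],
])
assert np.allclose(expanded, np.linalg.inv(Sigma))
```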

Which can be transformed as follows:

$$ \begin{aligned} \begin{bmatrix} (A^c)^{-1} & -(A^c)^{-1}BD^{-1} \\ -(D^{c})^{-1}CA^{-1} & (D^c)^{-1} \end{bmatrix} \cdot \vec{1} &= \begin{bmatrix} (A^c)^{-1} & -(A^c)^{-1}BD^{-1} \\ -(D^{c})^{-1}CA^{-1} & (D^c)^{-1} \end{bmatrix} \begin{bmatrix} \mathbf{\vec{1}_p}\\ \mathbf{\vec{1}_q}\\ \end{bmatrix} \\ &\\ &= \begin{bmatrix} (A^c)^{-1} \mathbf{\vec{1}_p} - \big((A^c)^{-1}BD^{-1}\big) \mathbf{\vec{1}_q}\\ (D^c)^{-1} \mathbf{\vec{1}_q} - \big((D^c)^{-1}CA^{-1}\big) \mathbf{\vec{1}_p}\\ \end{bmatrix} \end{aligned} $$

We can then define "step-up" matrices $M^{(1)} \in \mathbb{R}^{q \times p}$ and $M^{(2)} \in \mathbb{R}^{p \times q}$ such that:

$$ M^{(1)} \mathbf{\vec{1}_p} = \mathbf{\vec{1}_q}, \qquad M^{(2)} \mathbf{\vec{1}_q} = \mathbf{\vec{1}_p} $$

which simplifies the expression to:

$$ \begin{aligned} \begin{bmatrix} (A^c)^{-1} & -(A^c)^{-1}BD^{-1} \\ -(D^{c})^{-1}CA^{-1} & (D^c)^{-1} \end{bmatrix} \cdot \vec{1} &= \begin{bmatrix} (A^c)^{-1} & -(A^c)^{-1}BD^{-1} \\ -(D^{c})^{-1}CA^{-1} & (D^c)^{-1} \end{bmatrix} \begin{bmatrix} \mathbf{\vec{1}_p}\\ \mathbf{\vec{1}_q}\\ \end{bmatrix} \\ &\\ &= \begin{bmatrix} (A^c)^{-1} \mathbf{\vec{1}_p} - \big((A^c)^{-1}BD^{-1}\big) \mathbf{\vec{1}_q}\\ (D^c)^{-1} \mathbf{\vec{1}_q} - \big((D^c)^{-1}CA^{-1}\big) \mathbf{\vec{1}_p}\\ \end{bmatrix} \\ &\\ &= \begin{bmatrix} (A^c)^{-1} \mathbf{\vec{1}_p} - \big((A^c)^{-1}BD^{-1}\big) M^{(1)}\mathbf{\vec{1}_p} \\ (D^c)^{-1} \mathbf{\vec{1}_q} - \big((D^c)^{-1}CA^{-1}\big) M^{(2)}\mathbf{\vec{1}_q}\\ \end{bmatrix} \\ \end{aligned} $$

so that we can factor out the ones vectors:

$$ \begin{aligned} \begin{bmatrix} (A^c)^{-1} & -(A^c)^{-1}BD^{-1} \\ -(D^{c})^{-1}CA^{-1} & (D^c)^{-1} \end{bmatrix} \cdot \vec{1} &= \begin{bmatrix} (A^c)^{-1} \mathbf{\vec{1}_p} - \big((A^c)^{-1}BD^{-1}\big) M^{(1)}\mathbf{\vec{1}_p} \\ (D^c)^{-1} \mathbf{\vec{1}_q} - \big((D^c)^{-1}CA^{-1}\big) M^{(2)}\mathbf{\vec{1}_q}\\ \end{bmatrix} \\ &\\ &= \begin{bmatrix} \bigg((A^c)^{-1} - (A^c)^{-1}BD^{-1}M^{(1)}\bigg)\mathbf{\vec{1}_p} \\ \bigg((D^c)^{-1} - (D^c)^{-1}CA^{-1}M^{(2)}\bigg)\mathbf{\vec{1}_q}\\ \end{bmatrix} \\ &\\ &= \begin{bmatrix} \bigg((A^c)^{-1}\Big( I_p - BD^{-1}M^{(1)} \Big)\bigg)\mathbf{\vec{1}_p} \\ \bigg((D^c)^{-1}\Big( I_q - CA^{-1}M^{(2)} \Big)\bigg)\mathbf{\vec{1}_q}\\ \end{bmatrix} \end{aligned} $$

Then, applying the inverse-of-a-product identity:

$$ (A B)^{-1} = B^{-1}A^{-1} $$

we obtain:

$$ \begin{aligned} w \propto \Sigma^{-1}\vec{1} &= \begin{bmatrix} A & B\\ C & D \end{bmatrix}^{-1} \vec{1}\\ &\\ &= \begin{bmatrix} \bigg((A^c)^{-1}\Big( I_p - BD^{-1}M^{(1)} \Big)\bigg)\mathbf{\vec{1}_p} \\ \bigg((D^c)^{-1}\Big( I_q - CA^{-1}M^{(2)} \Big)\bigg)\mathbf{\vec{1}_q}\\ \end{bmatrix} \\ &\\ &= \begin{bmatrix} \dfrac{1}{\nu(Ag)}w(Ag)\\ \dfrac{1}{\nu(Dg)}w(Dg)\\ \end{bmatrix} \end{aligned} $$

where

$$ Ag = \bigg( (A^c)^{-1}\Big( I_p - BD^{-1}M^{(1)} \Big) \bigg)^{-1} = \Big( I_p - BD^{-1}M^{(1)} \Big)^{-1} \Big((A^c)^{-1}\Big)^{-1} = \Big( I_p - BD^{-1}M^{(1)} \Big)^{-1} A^c $$

and

$$ Dg = \bigg( (D^c)^{-1}\Big( I_q - CA^{-1}M^{(2)} \Big) \bigg)^{-1} = \Big( I_q - CA^{-1}M^{(2)} \Big)^{-1} \Big((D^c)^{-1}\Big)^{-1} = \Big( I_q - CA^{-1}M^{(2)} \Big)^{-1} D^c $$

In conclusion:

$$ \begin{aligned} w \propto \Sigma^{-1}\vec{1} &= \begin{bmatrix} A & B\\ C & D \end{bmatrix}^{-1} \vec{1}\\ &\\ &= \begin{bmatrix} \dfrac{1}{\nu(Ag)}w(Ag)\\ \dfrac{1}{\nu(Dg)}w(Dg)\\ \end{bmatrix} \\ &\\ &= \begin{bmatrix} \dfrac{1}{\nu\Bigg(\Big( I_p - BD^{-1}M^{(1)} \Big)^{-1} A^c\Bigg)}w\Bigg(\Big( I_p - BD^{-1}M^{(1)} \Big)^{-1} A^c\Bigg)\\ \dfrac{1}{\nu\Bigg(\Big( I_q - CA^{-1}M^{(2)} \Big)^{-1} D^c\Bigg)}w\Bigg(\Big( I_q - CA^{-1}M^{(2)} \Big)^{-1} D^c\Bigg)\\ \end{bmatrix} \end{aligned} $$
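The whole chain can be verified end to end: with the step-up choices below, $\Sigma^{-1}\vec{1}$ should equal $Ag^{-1}\mathbf{\vec{1}_p}$ stacked on $Dg^{-1}\mathbf{\vec{1}_q}$. A self-contained numpy sketch of this check (variable names are mine, not from `precise`):

```python
import numpy as np

rng = np.random.default_rng(3)
p, q = 3, 2
n = p + q
G = rng.standard_normal((n, n))
Sigma = G @ G.T + n * np.eye(n)  # random SPD covariance
A, B = Sigma[:p, :p], Sigma[:p, p:]
C, D = Sigma[p:, :p], Sigma[p:, p:]

M1 = np.ones((q, p)) / p  # step-up matrix: M1 @ 1_p = 1_q
M2 = np.ones((p, q)) / q  # step-up matrix: M2 @ 1_q = 1_p

Ac = A - B @ np.linalg.solve(D, C)  # Schur complements
Dc = D - C @ np.linalg.solve(A, B)
Ag = np.linalg.solve(np.eye(p) - B @ np.linalg.solve(D, M1), Ac)
Dg = np.linalg.solve(np.eye(q) - C @ np.linalg.solve(A, M2), Dc)

# Stacked sub-problem allocations reproduce Sigma^-1 1 exactly
top = np.linalg.solve(Ag, np.ones(p))
bottom = np.linalg.solve(Dg, np.ones(q))
assert np.allclose(np.linalg.solve(Sigma, np.ones(n)),
                   np.concatenate([top, bottom]))
```

Note that $Ag$ and $Dg$ need not be symmetric, so the check above verifies the unnormalized identity $\Sigma^{-1}\vec{1} = [Ag^{-1}\mathbf{\vec{1}_p};\, Dg^{-1}\mathbf{\vec{1}_q}]$ rather than the $\nu$/$w$ normalization.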

One question that still remains is how to choose $M$.

From my perspective, $M$ can be defined as follows:

$$ M^{(1)} = \begin{cases} I_p & \text{if } p = q \\ \dfrac{1}{p} \mathbf{\vec{1}_q}\mathbf{\vec{1}_p^T} & \text{otherwise} \end{cases} $$

$$ M^{(2)} = \begin{cases} I_q & \text{if } p = q \\ \dfrac{1}{q} \mathbf{\vec{1}_p}\mathbf{\vec{1}_q^T} & \text{otherwise} \end{cases} $$
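Both choices satisfy the defining property. A small sketch of a step-up constructor (the name `step_up` is mine):

```python
import numpy as np

def step_up(p, q):
    """Matrix M in R^{q x p} with M @ 1_p = 1_q; identity when p == q."""
    if p == q:
        return np.eye(p)
    return np.ones((q, p)) / p

M1 = step_up(3, 2)  # M^{(1)}: maps 1_p to 1_q
M2 = step_up(2, 3)  # M^{(2)}: maps 1_q to 1_p
assert np.allclose(M1 @ np.ones(3), np.ones(2))
assert np.allclose(M2 @ np.ones(2), np.ones(3))
```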

Let me know if something is wrong or missing, and whether there is a particular motivation for defining $M$ as the 'step-up' matrix.


microprediction commented on June 14, 2024

That's exactly right. The choice of step-up matrix is a bit arbitrary here.

