
precise's Introduction


Contents:

  1. A collection of online (incremental) covariance forecasting and portfolio construction functions. See docs.

  2. "Schur Complementary" portfolio construction, a new approach that leans on the connection between top-down (hierarchical) and bottom-up (optimization) portfolio construction revealed by block matrix inversion. See my posts on the methodology and its role in the hijacking of the M6 contest.

  3. A small compendium of portfolio theory papers tilted towards my interests. See literature.

One observes that tools for portfolio construction might also be useful in optimizing a portfolio of models.

NEW: Some slides for the CQF talk.



Usage

See the docs, but briefly:

Covariance estimation

Here y is a vector:

from precise.skaters.covariance.ewapm import ewa_pm_emp_scov_r005_n100 as f 
s = {}
for y in ys:
    x, x_cov, s = f(s=s, y=y)

This package contains lots of different "f"s. There is a LISTING_OF_COV_SKATERS with links to the code. See the covariance documentation.
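The state-passing pattern above can be illustrated with a self-contained toy. The sketch below is not precise's implementation; it just mimics the (s, y) -> (x, x_cov, s) skater convention with a plain exponentially weighted mean and covariance in numpy:

```python
import numpy as np

def ewa_scov_skater(s, y, r=0.05):
    """Toy online covariance 'skater' with the (s, y) -> (x, x_cov, s)
    convention. NOT the package's ewa_pm_emp_scov_r005_n100; just an
    exponentially weighted mean and covariance for illustration."""
    y = np.asarray(y, dtype=float)
    if not s:
        # First observation: initialize the state dictionary
        s = {'mean': y.copy(), 'cov': np.eye(len(y)), 'n': 1}
    else:
        r_eff = max(r, 1.0 / (s['n'] + 1))  # behave like an empirical mean early on
        delta = y - s['mean']
        s['mean'] = s['mean'] + r_eff * delta
        s['cov'] = (1 - r_eff) * s['cov'] + r_eff * np.outer(delta, delta)
        s['n'] += 1
    return s['mean'], s['cov'], s

# Usage mirrors the snippet above:
s = {}
for y in np.random.default_rng(0).normal(size=(200, 3)):
    x, x_cov, s = ewa_scov_skater(s=s, y=y)
```

The empty dict on the first call lets the skater initialize itself, which is what makes these functions usable in a streaming loop with no setup.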

Portfolio weights

Here y is a vector:

    from precise.skaters.managers.schurmanagers import schur_weak_pm_t0_d0_r025_n50_g100_long_manager as mgr
    s = {}
    for y in ys:
        w, s = mgr(s=s, y=y)

This package contains lots of "mgr"s. There is a LISTING_OF_MANAGERS with links to the respective code. See the manager documentation.
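A manager follows the same state-passing convention but returns weights. Here is a numpy-only sketch of the idea (a toy inverse-variance manager of my own devising, not one of the package's managers):

```python
import numpy as np

def toy_inverse_variance_manager(s, y, r=0.05):
    """Toy 'manager' with the (s, y) -> (w, s) convention used in precise.
    NOT one of the package's managers: it tracks an exponentially weighted
    covariance and returns long-only inverse-variance weights."""
    y = np.asarray(y, dtype=float)
    if not s:
        s = {'mean': y.copy(), 'cov': np.eye(len(y))}
    else:
        delta = y - s['mean']
        s['mean'] = s['mean'] + r * delta
        s['cov'] = (1 - r) * s['cov'] + r * np.outer(delta, delta)
    iv = 1.0 / np.maximum(np.diag(s['cov']), 1e-12)  # inverse variances
    w = iv / iv.sum()                                # normalize to sum to one
    return w, s

s = {}
for y in np.random.default_rng(1).normal(size=(100, 4)):
    w, s = toy_inverse_variance_manager(s=s, y=y)
```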

Install

pip install precise 

or for latest:

pip install git+https://github.com/microprediction/precise.git

Trouble? It probably isn't with precise per se.

pip install --upgrade pip
pip install --upgrade setuptools 
pip install --upgrade wheel
pip install --upgrade ecos   # <--- Try conda install ecos if this fails
pip install --upgrade osqp   # <-- Can be tricky on some systems see https://github.com/cvxpy/cvxpy/issues/1190#issuecomment-994613793
pip install --upgrade pyportfolioopt # <--- Skip if you don't plan to use it
pip install --upgrade riskparityportfolio
pip install --upgrade scipy
pip install --upgrade precise 

Miscellaneous

  • Here is some related, and potentially related, literature.
  • This is a piece of the microprediction project, aimed at creating millions of autonomous critters to distribute AI at low cost, should you ever care to cite it. Uses include mixture-of-experts models for time-series analysis, buried somewhere in timemachines.
  • If you just want univariate calculations and don't want numpy as a dependency, there is momentum. However, if you want univariate forecasts of the variance of something, as distinct from mere online calculations of the same, you might be better served by the timemachines package. I would suggest checking the time-series Elo ratings, especially the "special" category, since various kinds of empirical moment time-series (volatility etc.) are used to determine those ratings.
  • The name of this package refers to precision matrices, not numerical precision. This isn't a source of high precision covariance calculations per se. The intent is more in forecasting future realized covariance, conscious of the noise in the empirical distribution. Perhaps I'll include some more numerically stable methods from this survey to make the name more fitting. Pull requests are welcome!
  • The intent is that methods are parameter free. However some not-quite autonomous methods admit a few parameters (the factories).
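To make the "precision matrices" remark above concrete: a precision matrix is simply the inverse of a covariance matrix, and the textbook unconstrained minimum-variance portfolio is proportional to the precision matrix applied to a vector of ones. A small numpy illustration with made-up numbers:

```python
import numpy as np

# Made-up 3x3 covariance; the precision matrix is its inverse
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.02],
                [0.00, 0.02, 0.16]])
pre = np.linalg.inv(cov)                 # precision matrix

# Textbook unconstrained minimum-variance weights: proportional to pre @ 1
ones = np.ones(3)
w = pre @ ones / (ones @ pre @ ones)     # normalized to sum to one
```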

Disclaimer

Not investment advice. Not M6 entry advice. Just a bunch of code subject to the MIT License disclaimers.

precise's People

Contributors

marcogorelli, microprediction, peterdcotton


precise's Issues

cvxpy

    >>> import precise
    (CVXPY) May 23 01:56:46 PM: Encountered unexpected exception importing solver GLOP:
    RuntimeError('Version of ortools (7.8.7959) is too old. Expected >= 9.3.0.')
    (CVXPY) May 23 01:56:46 PM: Encountered unexpected exception importing solver PDLP:
    RuntimeError('Version of ortools (7.8.7959) is too old. Expected >= 9.3.0.')

[BUG] Default port in schur complementary portfolio

In hierarchical_schur_complementary_portfolio_with_defaults, the default port is diagonal_portfolio_factory

It is called here with the covariance as a positional argument.
However, diagonal_portfolio_factory takes pre as its first positional argument (here), meaning that the allocation is not an inverse variance.

I think it should be port(cov=cov) as opposed to port(cov) (with that change, the allocation will be homogeneous to HRP when gamma=0)
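For illustration only, here is a toy reconstruction of the reported pitfall. The function below is a hypothetical stand-in, not the package's actual diagonal_portfolio_factory; it shows how passing a covariance positionally into a `pre` slot flips the intended tilt:

```python
import numpy as np

def diagonal_portfolio(pre=None, cov=None):
    """Hypothetical stand-in for a diagonal portfolio factory. Accepts a
    precision matrix (pre) or a covariance (cov) and returns normalized
    inverse-variance weights. Names and signature are illustrative only."""
    if cov is not None:
        iv = 1.0 / np.diag(cov)   # inverse variance from a covariance
    else:
        iv = np.diag(pre)         # diagonal of a precision matrix
    return iv / iv.sum()

cov = np.diag([0.01, 0.04])       # asset 0 has a quarter of the variance of asset 1

w_right = diagonal_portfolio(cov=cov)  # keyword: overweights the low-variance asset
w_wrong = diagonal_portfolio(cov)      # positional: cov lands in `pre`, flipping the tilt
```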

fix windows compat

Get rid of the runthis dependency, as it fails on Windows and the example files prevent installation of precise.

Unittests for precision

I would not just rely on a blog post that is based on Wikipedia...
Instead, devise unit tests to check the precision and performance of the methods.

A discussion of different approaches, their performance, and their accuracy can be found in the publication:

Schubert, Erich; Gertz, Michael (9 July 2018). Numerically stable parallel computation of (co-)variance. ACM. p. 10. doi:10.1145/3221269.3223036. ISBN 9781450365055. S2CID 49665540.

It also discusses AVX parallelization to further improve performance, but that is mostly relevant for the univariate case.

Either way, it discusses the main numerical issue with the naive approach, and hence how to test accuracy against this problem.

(And, no, 1e-3 is not considered precise. If you really want high precision, you will likely want to use Kahan summation or the Shewchuk algorithm, trading some run-time performance.)
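The cancellation this issue refers to is easy to demonstrate. A minimal sketch, independent of precise's internals, comparing the naive E[x^2] - E[x]^2 formula against Welford's numerically stable single-pass update:

```python
import numpy as np

def naive_var(xs):
    # E[x^2] - E[x]^2: cancels catastrophically when the mean dwarfs the std
    n = len(xs)
    return sum(x * x for x in xs) / n - (sum(xs) / n) ** 2

def welford_var(xs):
    # Welford's single-pass update: numerically stable
    mean, m2 = 0.0, 0.0
    for k, x in enumerate(xs, 1):
        delta = x - mean
        mean += delta / k
        m2 += delta * (x - mean)
    return m2 / len(xs)

# Unit-variance data shifted by 1e9: the true variance is still ~1
xs = (1e9 + np.random.default_rng(0).normal(size=10_000)).tolist()
```

On this data welford_var stays near 1, while naive_var is typically dominated by rounding error and can even come out negative.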

schur complementary portfolio - difference between theoretical formulation and code

When inspecting the Schur complementary portfolio, I encountered some issues.

In the formulation of your presentation (screenshots omitted here) the generalized Schur complement is defined as:

$$A^{c}(\gamma)= A - \gamma B D^{-1}C$$
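The generalized Schur complement in this formula is straightforward to check numerically: at gamma=0 it reduces to A, and at gamma=1 to the classical Schur complement. A sketch with a random positive-definite matrix (the helper name is my own, not the package's):

```python
import numpy as np

def generalized_schur_complement(A, B, C, D, gamma):
    # A^c(gamma) = A - gamma * B @ inv(D) @ C, per the formula above
    return A - gamma * B @ np.linalg.solve(D, C)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 5))
S = X @ X.T + 5 * np.eye(5)   # random positive-definite "covariance"
A, B = S[:2, :2], S[:2, 2:]
C, D = S[2:, :2], S[2:, 2:]
```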
By following the third step of the algorithm (Augment and allocate) starting from

precise/precise/skaters/portfoliostatic/schurportfactory.py line 122

  # 3. Augment and allocate
  Ag, Dg, info = schur_augmentation(A=A, B=B, C=C, D=D, gamma=gamma) <--------HERE

precise/precise/skaters/portfoliostatic/schurportutil.py , line 13

 def schur_augmentation(A,B,C,D, gamma):
    """
       Mess with A, D to try to incorporate some off-diag info
    """
    if gamma>0.0:
        max_gamma = _maximal_gamma(A=A, B=B, C=C, D=D)
        augA, bA = pseudo_schur_complement(A=A, B=B, C=C, D=D, gamma=gamma * max_gamma) # <-------------HERE
        augD, bD = pseudo_schur_complement(A=D, B=C, C=B, D=A, gamma=gamma * max_gamma) # <---------HERE TOO

        augmentation_fail = False
        if not is_positive_def(augA):
            try:
                Ag = nearest_pos_def(augA)
            except np.linalg.LinAlgError:
                augmentation_fail=True
        else:
            Ag = augA
        if not is_positive_def(augD):
            try:
                Dg = nearest_pos_def(augD)
            except np.linalg.LinAlgError:
                augmentation_fail=True
        else:
            Dg = augD

        if augmentation_fail:
            print('Warning: augmentation failed')
            reductionA = 1.0
            reductionD = 1.0
            reductionRatioA = 1.0
            Ag = A
            Dg = D
        else:
            reductionD = np.linalg.norm(Dg)/np.linalg.norm(D)
            reductionA = np.linalg.norm(Ag)/np.linalg.norm(A)
            reductionRatioA = reductionA/reductionD
    else:
        reductionRatioA = 1.0
        reductionA = 1.0
        reductionD = 1.0
        Ag = A
        Dg = D

    info = {'reductionA': reductionA,
                'reductionD': reductionD,
                'reductionRatioA': reductionRatioA}
    return Ag, Dg, info

We arrive at this pseudo_schur_complement function where $A^{c}(\gamma)= A - \gamma B D^{-1}C$ is computed:

precise/precise/skaters/portfoliostatic/schurportutil.py , line 57

def  pseudo_schur_complement(A, B, C, D, gamma, lmbda=None, warn=False):
    """
       Augmented cov matrix for "A" inspired by the Schur complement
    """
    if lmbda is None:
        lmbda=gamma
    try:
        Ac_raw = schur_complement(A=A, B=B, C=C, D=D, gamma=gamma)  
        nA = np.shape(A)[0]
        nD = np.shape(D)[0]
        Ac = to_symmetric(Ac_raw)
        M = symmetric_step_up_matrix(n1=nA, n2=nD)
        Mt = np.transpose(M)
        BDinv = multiply_by_inverse(B, D, throw=False)
        BDinvMt = np.dot(BDinv, Mt)
        Ra = np.eye(nA) - lmbda * BDinvMt
        Ag = inverse_multiply(Ra, Ac, throw=False, warn=False)
    except np.linalg.LinAlgError:
        if warn:
            print('Pseudo-schur failed, falling back to A')
        Ag = A
    n = np.shape(A)[0]
    b = np.ones(shape=(n,1))
    return Ag, b

However after that the following operations are performed:

$$ \begin{aligned} Ag &= Ra^{-1} \cdot Ac \\ &= (I - \lambda B D^{-1}M^T)^{-1} \cdot Ac \\ &= (I - \lambda B D^{-1}M^T)^{-1} \cdot \mathrm{tosym}(Ac_{\mathrm{raw}}) \\ &= (I - \lambda B D^{-1}M^T)^{-1} \cdot \mathrm{tosym}(A^{c}(\gamma)) \\ &= (I - \lambda B D^{-1}M^T)^{-1} \cdot \mathrm{tosym}(A - \gamma B D^{-1}C) \end{aligned} $$

So if we assume that $Ac$ is already symmetric (to simplify the process):

$$ \begin{aligned} Ag &= (I - \lambda B D^{-1}M^T)^{-1} \cdot (A - \gamma B D^{-1}C) \\ \end{aligned} $$

Which does not match what I expect from the presentation:

  1. "Before performing inter-group allocation we make a different modification. We multiply the precision of $A^c$ by $b_Ab_A^T$ element-wise (and similarly, multiply the precision of $D^c$ by $b_Db_D^T$)".

Meaning:

$$ A' = (A - \gamma B D^{-1}C)^{*b_A} $$

Specifically, I don't understand where $M$, the symmetric step-up matrix, comes from, and why $b_A$ is never computed.

And again when we perform the sub allocation:
precise/precise/skaters/portfoliostatic/schurportfactory.py lines 132 and 138

# Sub-allocate
wA = hierarchical_schur_complementary_portfolio(cov=Ag, port=port, port_kwargs=port_kwargs,
                                               alloc=alloc, alloc_kwargs=alloc_kwargs,
                                               splitter=splitter, splitter_kwargs=splitter_kwargs,
                                               seriator=seriator, seriator_kwargs=seriator_kwargs,
                                               seriation_depth = seriation_depth-1,
                                               delta=delta, gamma=gamma)
wD = hierarchical_schur_complementary_portfolio(cov=Dg, port=port, port_kwargs=port_kwargs,
                                                alloc=alloc, alloc_kwargs=alloc_kwargs,
                                                splitter=splitter, splitter_kwargs=splitter_kwargs,
                                                seriator=seriator, seriator_kwargs=seriator_kwargs,
                                                seriation_depth=seriation_depth - 1,
                                                delta=delta, gamma=gamma)

the same augmented matrix $Ag$ used to allocate in:

precise/precise/skaters/portfoliostatic/schurportfactory.py line 122

# 3. Augment and allocate
Ag, Dg, info = schur_augmentation(A=A, B=B, C=C, D=D, gamma=gamma)
aA, aD = alloc(covs=[Ag, Dg]) #<----------HERE

is also passed to the next "iteration", while on the slides I found:

  1. The intra-group allocation pertaining to block $A$ is determined by covariance matrix $A_{/ b_A(\lambda)}^c$. In this notation the vector $b_A(\lambda)=\overrightarrow{1}-\lambda B D^{-1} \overrightarrow{1}$. The generalized Schur complement is $A^c(\gamma)=A-\gamma B D^{-1} C$. The notation $A_{/ b}^c$ denotes $A^c /\left(b b^T\right)$ with division performed element-wise.

$$ A'' = A^c(\gamma)/b_Ab_A^T $$
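Taking the quoted slide at face value, $b_A(\lambda)$ and the element-wise division can be sketched numerically as below. This is my reading of the slide, not the package code, and the $\lambda$, $\gamma$ values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(6, 6))
S = X @ X.T + 6 * np.eye(6)   # random positive-definite covariance
A, B = S[:3, :3], S[:3, 3:]
C, D = S[3:, :3], S[3:, 3:]
gamma, lam = 0.25, 0.25       # arbitrary small values for illustration

ones = np.ones(3)
Ac = A - gamma * B @ np.linalg.solve(D, C)       # A^c(gamma), generalized Schur complement
bA = ones - lam * B @ np.linalg.solve(D, ones)   # b_A(lambda) = 1 - lambda B D^{-1} 1
A_div = Ac / np.outer(bA, bA)                    # A^c / (b_A b_A^T), element-wise
```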

vine copulas

Include some copula methods (see code examples from here perhaps)

gerber

Introduce some Gerber statistic based methods (paper)

Understanding m6 examples

Hi,

I'm trying to figure out how the m6 examples work internally. As I understand from reading the code the following steps are performed in the simplest examples:

  1. Download data and preprocess, generate difference log returns
  2. Estimate covariance matrix with the chosen skater
  3. Generate n draws from a 100-d-normal distribution with mean all zero and the estimated covariance matrix.
  4. Calculate the 5 quantiles, convert all draws into their respective quantile values (1-5)
  5. For every rank and every asset, average over the draws to generate probabilities
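Steps 3 through 5 above can be sketched in numpy. This is my reconstruction of the described pipeline with made-up inputs, not the repo's example code:

```python
import numpy as np

rng = np.random.default_rng(3)
n_assets, n_draws = 10, 5000
cov = np.diag(rng.uniform(0.01, 0.04, size=n_assets))  # stand-in for the estimated covariance

# Step 3: zero-mean multivariate-normal return scenarios
draws = rng.multivariate_normal(np.zeros(n_assets), cov, size=n_draws)

# Step 4: within each draw, rank assets and bucket ranks into quintiles 1..5
ranks = draws.argsort(axis=1).argsort(axis=1)  # 0 = lowest return in the draw
quintile = ranks * 5 // n_assets + 1

# Step 5: average over draws -> probability of each asset landing in each quintile
probs = np.stack([(quintile == q).mean(axis=0) for q in range(1, 6)], axis=1)
```

Each asset falls in exactly one quintile per draw, so every row of probs sums to one by construction.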

As I'm not from a finance background, I have some questions. I would also appreciate links for deeper understanding:

  1. Why is the mean estimate coming from the estimator not used? Shouldn't there be some kind of "trend" in some cases, so that the mean is not zero?
  2. Why assume a multivariate normal distribution for returns? Couldn't it be any other distribution? Or is this just the simple way to do it without more knowledge?
  3. Looking through the code, I seem to be missing traditional forecasting models. Before reading this package my approach would have been to pick a forecasting model with uncertainty, e.g. some Bayesian regression/Gaussian process/prophet etc., generate a forecast from that, and define class boundaries based on the uncertainty estimate (although I struggle to define how to set the boundaries). Any comments on that?

Thanks for any insights :)

Issue with get_prices function in M6

Hi

While using the precise package for M6 portfolio optimization, I'm facing an issue with the get_prices() method. To estimate the covariance it needs data from Yahoo Finance via get_prices(), but the API is backing off requests. Please resolve this.

Thanks in advance.

Porting this to River

Hello there! I hope you're doing well.

I recently saw this repo pop up and I like it very much. There are some things that we don't yet have in River. In particular, I'm thinking of the OnlineEmpiricalCovariance class.

Would it be ok if we ported some of this stuff into River? I'd rather ask a gentleman than savagely copy the code.

Kind regards.
