robertmartin8 / pyportfolioopt

4.1K stars · 127 watchers · 915 forks · 9.47 MB

Financial portfolio optimisation in python, including classical efficient frontier, Black-Litterman, Hierarchical Risk Parity

Home Page: https://pyportfolioopt.readthedocs.io/

License: MIT License

Languages: Python 27.73%, Dockerfile 0.05%, Jupyter Notebook 72.22%
Topics: finance, portfolio-optimization, portfolio-management, quantitative-finance, algorithmic-trading, investing, efficient-frontier, covariance, python, investment

pyportfolioopt's People

Contributors

88d52bdba0366127fffca9dfa93895, anchitshrivastava, andyherfer, arcaputo3, armbruer, bhutraaditya, bvonboyen, dependabot[bot], dpapakyriak, duranvrnubank, fdabrandao, gliptak, gpfins, gumblex, lbrummer, mkeds, phschiele, pmn4, robertmartin8, ryanrussell, samatix, schneiderfelipe, seapea1, stevediamond, thenordine, tommybark, tschm, tuantp7, wilm0r, yosukesan


pyportfolioopt's Issues

Another optimizer for the CVaR function

Hi @robertmartin8

I was thinking of trying the optimiser package below on the CVaR problem.

https://github.com/uqfoundation/mystic/blob/master/examples/example08.py

Would I just need to create another function in this class -
https://github.com/robertmartin8/PyPortfolioOpt/blob/master/pypfopt/value_at_risk.py

and replace this call:

    result = noisyopt.minimizeSPSA(
        objective_functions.negative_cvar,
        args=args,
        bounds=self.bounds,
        x0=self.initial_guess,
        niter=1000,
        paired=False,
    )

with:

# use DE to solve 8th-order Chebyshev coefficients
npop = 10 * ndim
solver = DifferentialEvolutionSolver2(ndim, npop)
solver.SetRandomInitialPoints(min=[-100]*ndim, max=[100]*ndim)
solver.SetGenerationMonitor(stepmon)
solver.enable_signal_handler()
solver.Solve(chebyshev8cost, termination=VTR(0.01), strategy=Best1Exp,
             CrossProbability=1.0, ScalingFactor=0.9,
             sigint_callback=plot_solution)
solution = solver.Solution()

I know it's not exact, but am I on the right track?
Best,
Andrew
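
For concreteness, here is a rough sketch of what that swap might look like inside the CVaR method, adapted to the CVaR objective rather than the Chebyshev example. Everything here is an assumption: it presumes mystic's DifferentialEvolutionSolver2 API as used in example08, that self.bounds is a list of (min, max) pairs, and that objective_functions.negative_cvar takes the weight vector as its first argument, as the SPSA call implies.

    # Hypothetical drop-in replacement for the noisyopt call, using mystic's
    # differential evolution solver; `self` and `args` come from the
    # surrounding method, as in the SPSA snippet above.
    from mystic.solvers import DifferentialEvolutionSolver2
    from mystic.termination import VTR
    from mystic.strategy import Best1Exp

    ndim = len(self.initial_guess)   # one dimension per asset weight
    npop = 10 * ndim                 # population size, per the mystic examples

    solver = DifferentialEvolutionSolver2(ndim, npop)
    solver.SetRandomInitialPoints(min=[b[0] for b in self.bounds],
                                  max=[b[1] for b in self.bounds])
    solver.Solve(lambda w: objective_functions.negative_cvar(w, *args),
                 termination=VTR(1e-6), strategy=Best1Exp,
                 CrossProbability=0.9, ScalingFactor=0.8)
    result = solver.Solution()       # optimised weight vector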

Functions only accept prices data not returns data

This was originally intended behaviour, but a user has pointed out that some data sources (e.g. the Fama-French 30) only provide returns. If this affects you, a quick workaround is to construct a series of pseudo-prices: prepend a row of 1s, then take the cumulative product of (1 + returns) with cumprod(). This works because most methods first take the percentage change to recover a returns series, and the percentage change does not depend on the starting value.

Would love to hear any opinions on whether this is an issue and, if so, what the cleanest way would be to optionally accept returns instead of prices. I guess I could do it with a boolean parameter price_data=True; if False, the data passed in would be interpreted as returns. However, I don't want to clutter the API without good reason.
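
For anyone hitting this, a minimal sketch of the workaround described above (returns_df is hypothetical stand-in data):

    import numpy as np
    import pandas as pd

    # Hypothetical returns data standing in for e.g. a Fama-French series
    returns_df = pd.DataFrame(np.random.normal(0.001, 0.02, size=(100, 3)),
                              columns=["A", "B", "C"])

    # Build pseudo-prices: any starting level works, because downstream
    # functions call pct_change() first, which recovers the returns
    # regardless of the initial value.
    pseudo_prices = (1 + returns_df).cumprod()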

CVaR bugs?

Just a q.

Really like the lib, thank you.

I notice this comment in the examples "# CVaR optimisation - very buggy"

What are the known issues? I'm keen to use it, so maybe I can try to fix some of them along the way.

Cheers

Is there a limit on the number of assets?

I tried to apply the Markowitz model with 61 assets from CSV files.

These functions execute and return the expected data:

mu = expected_returns.mean_historical_return(df)
S = CovarianceShrinkage(df).ledoit_wolf()
ef = EfficientFrontier(mu, S)

but when it comes to

raw_weights = ef.max_sharpe()

it just returns NaN for all positions.

I have run the code with fewer assets and it works just fine.

Is there a limit on the number of assets?

Does it make sense to set benchmark=risk_free_rate in semicovariance?

It seems that semicovariance, as implemented in PyPortfolioOpt, penalises assets that fall below a certain threshold. Even though it's natural to demand that this threshold be non-negative, wouldn't it be more reasonable to give it a more meaningful default, e.g. the risk-free rate, instead of zero?

Sorry if I'm missing something here, is there a reason for this being zero by default?
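
For illustration, a hedged sketch of what that might look like, assuming the semicovariance signature accepts a benchmark keyword and that the benchmark is expressed per period, matching the frequency of the returns:

    from pypfopt import risk_models

    # df is a daily price DataFrame, as elsewhere in these issues; an
    # annual risk-free rate of 2% is scaled down to a daily threshold.
    daily_rf = 0.02 / 252
    S_semi = risk_models.semicovariance(df, benchmark=daily_rf)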

Undocumented shrinkage estimators

Hi, #20 offered two variants of the Ledoit-Wolf shrinkage estimator that are undocumented. I did some backtests and I believe that the single-factor target may be better than the default constant-variance target.

I have two questions regarding this:

  1. Is there a reason why the single-factor target isn't documented?
  2. Is there a theoretical reason why the single-factor target might be better?
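
For reference, a hedged sketch of calling both targets, assuming they are exposed as shrinkage targets on CovarianceShrinkage (the exact keyword is an assumption; later releases use shrinkage_target):

    from pypfopt.risk_models import CovarianceShrinkage

    cs = CovarianceShrinkage(df)   # df: a price DataFrame
    S_cv = cs.ledoit_wolf()        # default: constant-variance target
    S_sf = cs.ledoit_wolf(shrinkage_target="single_factor")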

ImportError: No module named setuptools

After cloning it, I am trying to install this module on a Linux Mint computer. When executing the command python setup.py install inside the cloned directory, I get the error:

enri@enri-Presario-CQ57-Notebook-PC:/media/enri/TRABAJO/PyPortfolioOpt-master$ python setup.py install
Traceback (most recent call last):
  File "setup.py", line 1, in <module>
    from setuptools import setup
ImportError: No module named setuptools

What can be the cause?


pandas version for "@" operator between DataFrame and Series

I just tested the latest repo version of PyPortfolioOpt and got some errors related to the usage of @: TypeError: unsupported operand type(s) for @: 'DataFrame' and 'Series'.

I installed PyPortfolioOpt with pip3 install -e ., which means all dependencies were satisfied.

The pandas stable docs (0.25.3) say that @ can be used for the dot product, but since I'm using pandas 0.25.1, it might be a version issue.

Since PyPortfolioOpt currently only requires pandas>=0.21.0, should we change requirements.txt?


Full test output:

➜  PyPortfolioOpt git:(master) ✗ pytest-3
=============================================================================================== test session starts ===============================================================================================
platform linux -- Python 3.6.9, pytest-3.3.2, py-1.5.2, pluggy-0.6.0
rootdir: /home/schneider/Dropbox/PyPortfolioOpt, inifile:
collected 140 items                                                                                                                                                                                               

tests/test_base_optimizer.py ............                                                                                                                                                                   [  8%]
tests/test_black_litterman.py ..........FF                                                                                                                                                                  [ 17%]
tests/test_cla.py ..........                                                                                                                                                                                [ 24%]
tests/test_custom_objectives.py ......                                                                                                                                                                      [ 28%]
tests/test_discrete_allocation.py ..............                                                                                                                                                            [ 38%]
tests/test_efficient_frontier.py .........................................                                                                                                                                  [ 67%]
tests/test_expected_returns.py .........                                                                                                                                                                    [ 74%]
tests/test_hrp.py .....                                                                                                                                                                                     [ 77%]
tests/test_objective_functions.py .......                                                                                                                                                                   [ 82%]
tests/test_risk_models.py ...................                                                                                                                                                               [ 96%]
tests/test_value_at_risk.py .....                                                                                                                                                                           [100%]

==================================================================================================== FAILURES =====================================================================================================
____________________________________________________________________________________________ test_market_implied_prior ____________________________________________________________________________________________

    def test_market_implied_prior():
        df = get_data()
        S = risk_models.sample_cov(df)
    
        prices = pd.read_csv(
            "tests/spy_prices.csv", parse_dates=True, index_col=0, squeeze=True
        )
        delta = black_litterman.market_implied_risk_aversion(prices)
    
        mcaps = {
            "GOOG": 927e9,
            "AAPL": 1.19e12,
            "FB": 574e9,
            "BABA": 533e9,
            "AMZN": 867e9,
            "GE": 96e9,
            "AMD": 43e9,
            "WMT": 339e9,
            "BAC": 301e9,
            "GM": 51e9,
            "T": 61e9,
            "UAA": 78e9,
            "SHLD": 0,
            "XOM": 295e9,
            "RRC": 1e9,
            "BBY": 22e9,
            "MA": 288e9,
            "PFE": 212e9,
            "JPM": 422e9,
            "SBUX": 102e9,
        }
>       pi = black_litterman.market_implied_prior_returns(mcaps, delta, S)

tests/test_black_litterman.py:281: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

market_caps = {'AAPL': 1190000000000.0, 'AMD': 43000000000.0, 'AMZN': 867000000000.0, 'BABA': 533000000000.0, ...}, risk_aversion = 2.6854910662283147
cov_matrix =           GOOG      AAPL        FB      BABA      AMZN        GE       AMD  \
GOOG  0.093211  0.046202  0.030801  0.02...  0.056298  0.070269  0.034757  0.146893  0.049530  
SBUX  0.028284  0.050217  0.046886  0.024195  0.049530  0.152589  
risk_free_rate = 0.02

    def market_implied_prior_returns(
        market_caps, risk_aversion, cov_matrix, risk_free_rate=0.02
    ):
        r"""
        Compute the prior estimate of returns implied by the market weights.
        In other words, given each asset's contribution to the risk of the market
        portfolio, how much are we expecting to be compensated?
    
        .. math::
    
            \Pi = \delta \Sigma w_{mkt}
    
        :param market_caps: market capitalisations of all assets
        :type market_caps: {ticker: cap} dict or pd.Series
        :param risk_aversion: risk aversion parameter
        :type risk_aversion: positive float
        :param cov_matrix: covariance matrix of asset returns
        :type cov_matrix: pd.DataFrame or np.ndarray
        :param risk_free_rate: risk-free rate of borrowing/lending, defaults to 0.02.
                               You should use the appropriate time period, corresponding
                               to the covariance matrix.
        :type risk_free_rate: float, optional
        :return: prior estimate of returns as implied by the market caps
        :rtype: pd.Series
        """
        mcaps = pd.Series(market_caps)
        mkt_weights = mcaps / mcaps.sum()
        # Pi is excess returns so must add risk_free_rate to get return.
>       return risk_aversion * cov_matrix @ mkt_weights + risk_free_rate
E       TypeError: unsupported operand type(s) for @: 'DataFrame' and 'Series'

pypfopt/black_litterman.py:44: TypeError
________________________________________________________________________________________ test_black_litterman_market_prior ________________________________________________________________________________________

    def test_black_litterman_market_prior():
        df = get_data()
        S = risk_models.sample_cov(df)
    
        prices = pd.read_csv(
            "tests/spy_prices.csv", parse_dates=True, index_col=0, squeeze=True
        )
        delta = black_litterman.market_implied_risk_aversion(prices)
    
        mcaps = {
            "GOOG": 927e9,
            "AAPL": 1.19e12,
            "FB": 574e9,
            "BABA": 533e9,
            "AMZN": 867e9,
            "GE": 96e9,
            "AMD": 43e9,
            "WMT": 339e9,
            "BAC": 301e9,
            "GM": 51e9,
            "T": 61e9,
            "UAA": 78e9,
            "SHLD": 0,
            "XOM": 295e9,
            "RRC": 1e9,
            "BBY": 22e9,
            "MA": 288e9,
            "PFE": 212e9,
            "JPM": 422e9,
            "SBUX": 102e9,
        }
>       prior = black_litterman.market_implied_prior_returns(mcaps, delta, S)

tests/test_black_litterman.py:351: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

market_caps = {'AAPL': 1190000000000.0, 'AMD': 43000000000.0, 'AMZN': 867000000000.0, 'BABA': 533000000000.0, ...}, risk_aversion = 2.6854910662283147
cov_matrix =           GOOG      AAPL        FB      BABA      AMZN        GE       AMD  \
GOOG  0.093211  0.046202  0.030801  0.02...  0.056298  0.070269  0.034757  0.146893  0.049530  
SBUX  0.028284  0.050217  0.046886  0.024195  0.049530  0.152589  
risk_free_rate = 0.02

    def market_implied_prior_returns(
        market_caps, risk_aversion, cov_matrix, risk_free_rate=0.02
    ):
        r"""
        Compute the prior estimate of returns implied by the market weights.
        In other words, given each asset's contribution to the risk of the market
        portfolio, how much are we expecting to be compensated?
    
        .. math::
    
            \Pi = \delta \Sigma w_{mkt}
    
        :param market_caps: market capitalisations of all assets
        :type market_caps: {ticker: cap} dict or pd.Series
        :param risk_aversion: risk aversion parameter
        :type risk_aversion: positive float
        :param cov_matrix: covariance matrix of asset returns
        :type cov_matrix: pd.DataFrame or np.ndarray
        :param risk_free_rate: risk-free rate of borrowing/lending, defaults to 0.02.
                               You should use the appropriate time period, corresponding
                               to the covariance matrix.
        :type risk_free_rate: float, optional
        :return: prior estimate of returns as implied by the market caps
        :rtype: pd.Series
        """
        mcaps = pd.Series(market_caps)
        mkt_weights = mcaps / mcaps.sum()
        # Pi is excess returns so must add risk_free_rate to get return.
>       return risk_aversion * cov_matrix @ mkt_weights + risk_free_rate
E       TypeError: unsupported operand type(s) for @: 'DataFrame' and 'Series'

pypfopt/black_litterman.py:44: TypeError
================================================================================================ warnings summary =================================================================================================
tests/test_black_litterman.py::test_input_errors
  /home/schneider/Dropbox/PyPortfolioOpt/pypfopt/black_litterman.py:173: UserWarning: Running Black-Litterman with no prior.
    warnings.warn("Running Black-Litterman with no prior.")

tests/test_black_litterman.py::test_parse_views
  /home/schneider/Dropbox/PyPortfolioOpt/pypfopt/black_litterman.py:173: UserWarning: Running Black-Litterman with no prior.
    warnings.warn("Running Black-Litterman with no prior.")

tests/test_black_litterman.py::test_dataframe_input
  /home/schneider/Dropbox/PyPortfolioOpt/pypfopt/black_litterman.py:173: UserWarning: Running Black-Litterman with no prior.
    warnings.warn("Running Black-Litterman with no prior.")

tests/test_black_litterman.py::test_default_omega
  /home/schneider/Dropbox/PyPortfolioOpt/pypfopt/black_litterman.py:173: UserWarning: Running Black-Litterman with no prior.
    warnings.warn("Running Black-Litterman with no prior.")

tests/test_black_litterman.py::test_bl_returns_no_prior
  /home/schneider/Dropbox/PyPortfolioOpt/pypfopt/black_litterman.py:173: UserWarning: Running Black-Litterman with no prior.
    warnings.warn("Running Black-Litterman with no prior.")

tests/test_black_litterman.py::test_bl_relative_views
  /home/schneider/Dropbox/PyPortfolioOpt/pypfopt/black_litterman.py:173: UserWarning: Running Black-Litterman with no prior.
    warnings.warn("Running Black-Litterman with no prior.")

tests/test_black_litterman.py::test_bl_cov_default
  /home/schneider/Dropbox/PyPortfolioOpt/pypfopt/black_litterman.py:173: UserWarning: Running Black-Litterman with no prior.
    warnings.warn("Running Black-Litterman with no prior.")

tests/test_black_litterman.py::test_bl_weights
  /home/schneider/Dropbox/PyPortfolioOpt/pypfopt/black_litterman.py:173: UserWarning: Running Black-Litterman with no prior.
    warnings.warn("Running Black-Litterman with no prior.")

-- Docs: http://doc.pytest.org/en/latest/warnings.html
================================================================================ 2 failed, 138 passed, 8 warnings in 19.60 seconds ================================================================================
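
Until the requirement is sorted out, a hedged workaround is to use pandas' .dot() method, which pre-dates the @ operator and computes the same product:

    # Equivalent to the failing line in market_implied_prior_returns, but
    # compatible with older pandas versions.
    pi = risk_aversion * cov_matrix.dot(mkt_weights) + risk_free_rate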

pipenv support

Currently, running pipenv install PyPortfolioOpt results in a 'Not Found' error.

Does your library have a 'long only' constraint?

I am curious to know whether your library has a 'long only' constraint when optimising a portfolio. It seems like stock weights are always positive, but the documentation does not specify whether this is the case.

Thank you for creating such a nice portfolio optimization library.
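
For the record, a hedged illustration: EfficientFrontier's weight_bounds default to (0, 1), which is exactly a long-only constraint, while a negative lower bound permits shorting (mu and S as computed elsewhere in these issues):

    from pypfopt.efficient_frontier import EfficientFrontier

    ef_long_only = EfficientFrontier(mu, S)                          # default (0, 1): long only
    ef_long_short = EfficientFrontier(mu, S, weight_bounds=(-1, 1))  # allows shorts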

The error persists: No module named 'pulp'

I have re-created the environment and installed pulp with:

pip install pulp==1.6.10

When executing

from pypfopt import discrete_allocation

it keeps returning the error:

ModuleNotFoundError                       Traceback (most recent call last)
<ipython-input> in <module>
----> 1 from pypfopt import discrete_allocation

C:\Anaconda3\lib\site-packages\pyportfolioopt-0.5.1-py3.7.egg\pypfopt\discrete_allocation.py in <module>
      6 import numpy as np
      7 import pandas as pd
----> 8 import pulp

ModuleNotFoundError: No module named 'pulp'

Rolling window

How would I implement a rolling window with the risk models and EfficientFrontier?
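
One possible approach, as a hedged sketch: slice the price DataFrame into windows and re-run the estimators and the optimiser on each slice. The library has no built-in rolling support, so df, the window length and the rebalance cadence below are all assumptions:

    from pypfopt import expected_returns, risk_models
    from pypfopt.efficient_frontier import EfficientFrontier

    window = 252                                     # one year of daily prices
    weights_over_time = {}
    for end in range(window, len(df), 21):           # re-optimise roughly monthly
        window_prices = df.iloc[end - window:end]
        mu = expected_returns.mean_historical_return(window_prices)
        S = risk_models.sample_cov(window_prices)
        ef = EfficientFrontier(mu, S)
        weights_over_time[df.index[end - 1]] = ef.max_sharpe()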

ModuleNotFoundError: No module named 'pulp'

Dear Robert
As you can see in the attached screenshot, importing the modules returns the error mentioned in the title.

[attached screenshot: GestionValores]

Perhaps modifications to the API have affected some package names. What should the new names in the API be for DiscreteAllocation or get_latest_prices?
I would appreciate your help. Best regards

Tests for Hierarchical Risk Parity

Right now, I've only got one test for HRP, and it doesn't meaningfully target the inner workings:

def test_hrp_portfolio():
    df = get_data()
    returns = df.pct_change().dropna(how="all")
    w = hrp_portfolio(returns)
    assert isinstance(w, dict)
    assert set(w.keys()) == set(df.columns)
    np.testing.assert_almost_equal(sum(w.values()), 1)

I would appreciate help in testing some of the components, like the clustering and linkage steps.
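
As a possible starting point, a hedged sketch of a test that targets the clustering step directly. It assumes HRP's usual correlation-distance metric and relies on the fact that scipy's linkage output for n assets has shape (n - 1, 4):

    import numpy as np
    import scipy.cluster.hierarchy as sch

    def test_hrp_linkage_shape():
        df = get_data()
        returns = df.pct_change().dropna(how="all")
        corr = returns.corr()
        dist = np.sqrt((1 - corr.values) / 2)   # correlation-distance metric
        link = sch.linkage(dist, "single")
        assert link.shape == (len(df.columns) - 1, 4)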

Calculate correlation matrix by using sample covariance.

I want to draw a correlation graph, so I need to calculate the correlation matrix from the sample covariance. Using the standard formula, the correlation matrix is obtained by dividing the covariance matrix by the outer product of the standard deviations:

import warnings
import numpy as np
import pandas as pd

def cov2cor(cov_in):
    """Convert a covariance matrix to a correlation matrix."""
    if not isinstance(cov_in, pd.DataFrame):
        warnings.warn("cov is not in a dataframe", RuntimeWarning)
        cov_in = pd.DataFrame(cov_in)
    cov = cov_in.values
    sd = np.sqrt(np.diag(cov))           # per-asset standard deviations
    cor = cov / np.outer(sd, sd)         # rho_ij = cov_ij / (sd_i * sd_j)
    return pd.DataFrame(cor, index=cov_in.index, columns=cov_in.columns)

Hopefully this will work for others as well.

Conditional value-at-risk bug

Currently, the CVaR optimisation using NoisyOpt is a little buggy. Because the weights aren't normalised by default, we must post-process them. However, this post-processing also means that the final weights don't respect the initial bounds. I'd appreciate any suggestions for a fix.

Weird behaviour with max_sharpe and efficient_risk methods

I have been loading two datasets:

tickers = ['VBMFX', 'VTSMX']
select_val = 'Adj Close'
# tickers = ['AGG', 'VTSMX']

df_complete = web.DataReader(tickers, data_source='yahoo', start='2000-01-01', end='2020-01-16')[select_val]

I then calculate the inputs required by EfficientFrontier:

mu = expected_returns.ema_historical_return(df_complete)
S = CovarianceShrinkage(df_complete).ledoit_wolf()
ef = EfficientFrontier(mu, S,
                       #gamma=1,
                       )

# weights = ef.max_sharpe(risk_free_rate=0.005)
eff_risk = ef.efficient_risk(target_risk=0.1, risk_free_rate=0.005)


ef.portfolio_performance(verbose=True)

As a result I get:

Expected annual return: 7.6%
Annual volatility: 3.8%
Sharpe Ratio: 1.46
(0.0759353668927616, 0.03834329016364384, 1.4588045693011014)

Whatever target risk I specify, the values given above do not change. I have no such issue with efficient_target, though. Could you take a look at why it seems to get stuck without adjusting to the target volatility? Manually I can tune it to 10%, but the algorithm should be able to do so as well.

Market neutral weights should be normalised

The result of market-neutral optimisation is essentially a long portfolio plus a short portfolio. The sum of the long weights and the absolute sum of the short weights should probably each add up to one, so that the output is easier for the user to work with.

Question: How to set the total number of stocks

Thanks for this program! It is awesome!
Could you help me figure out how to set a limit on the total number of stocks?
For example, the data contains about 2000 stocks, and I want to get a portfolio that holds only 100.

Output portfolio weights to text

One user has raised the point that we should be able to output weights to a text file. I think this is a good idea.

Something simple like

ef.save_to_text()
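
A minimal sketch of how such a method could be implemented on the optimiser, writing one ticker,weight pair per line (save_to_text is only the name proposed above, not an existing API; clean_weights() is the existing rounding helper):

    def save_to_text(self, filename="weights.txt"):
        # Write the cleaned portfolio weights to a text file
        clean = self.clean_weights()
        with open(filename, "w") as f:
            for ticker, weight in clean.items():
                f.write("{},{}\n".format(ticker, weight))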

Is there any stock limit for the efficient frontier?

Can I use 500 stocks for portfolio optimisation to calculate weights? (For example, what gamma value is recommended? The docs say a gamma value of 1 for 20 stocks; what if we use 500 stocks?)

We tried a gamma value of 100 for 500 stocks, but 80% of the tickers were assigned zero weight.

Could you please help us with this?

Only for shorts

Hi there!

Can we optimise only for shorts? I tried setting the weight_bounds to (-1, 0), but the results are NaNs.

Thanks!

backtest integration

You already put it on your roadmap, but I would like to integrate this into a backtrader backtest. Where do you think I should start?

The covariance matrix calculation problem

I assume you are very much aware of the famous paper "Optimal Versus Naive Diversification: How Inefficient Is the 1/N Portfolio Strategy?", which pointed out the issues with estimating the covariance matrix for portfolio optimisation. The documentation states that:

Includes both classical methods (Markowitz 1952), suggested best practices (e.g covariance shrinkage), along with many recent developments and novel features, like L2 regularisation, shrunk covariance, hierarchical risk parity.

Given these recent advances in the field that you mention, would you consider the problem reasonably solved now? Could you kindly point out some papers on those advances in portfolio optimisation? Thank you.

Question: Is there a way to set a minimum allocation?

Hi Robert... This is awesome, thanks so much for your work on this. I have two related questions:

  1. I've found that the library is telling me to allocate 0.1% to something, and that's just not realistic for me, partly because my investment platform has a minimum purchase of $250. That's a cool quarter mil... :) I tried weight_bounds=(0.1, 1), but that didn't do what I thought it would, which makes sense now that I think about it. Is there a way to set minimum allocations?

  2. I also tried setting gamma, but that basically told me to buy a roughly equal weight of every possible asset. That may well be optimal, but it's a pain for similar reasons. Is there a way to limit the number of assets chosen?

Thanks!

Initial weights Guess

Hi,

How can I provide an initial weights guess x0 for the optimisation algorithm, for example in max_sharpe?

Allow the use of center of mass, half-life or alpha as alternatives to span

Right now, expected_returns.ema_historical_return and risk_models.exp_cov support only span, but they could be changed to allow direct use of the other equivalent optional inputs of pandas.DataFrame.ewm (namely com, span, halflife, alpha). (risk_models._pair_exp_cov would need to be modified as well.)

This is simple to do. If there's interest, I could prepare a PR myself.
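
As a hedged sketch of the idea (purely illustrative; pandas itself enforces that exactly one of com, span, halflife or alpha is given per .ewm() call):

    def ema_historical_return(prices, frequency=252, **ewm_kwargs):
        # ewm_kwargs: exactly one of com, span, halflife or alpha, passed
        # straight through to pandas.DataFrame.ewm
        returns = prices.pct_change().dropna(how="all")
        return returns.ewm(**ewm_kwargs).mean().iloc[-1] * frequency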

Include grouped industry constraints

Hi,
Is there a way to run a portfolio optimisation with grouped-by-industry constraints?
for example:

  • Between 0 and 10% of my portfolio weights have to be in industry "Technology"
  • Between 15% and 30% of my portfolio weights have to be in industry "Energy"

A very good example is shown here, which uses SciPy as well.
Thanks
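
For what it's worth, a hedged sketch using the sector-constraint interface that later PyPortfolioOpt releases expose; treat add_sector_constraints and its signature as an assumption here, and the tickers as purely illustrative:

    from pypfopt.efficient_frontier import EfficientFrontier

    sector_mapper = {"AAPL": "Technology", "MSFT": "Technology",
                     "XOM": "Energy", "CVX": "Energy"}
    sector_lower = {"Energy": 0.15}                   # at least 15% in Energy
    sector_upper = {"Technology": 0.10, "Energy": 0.30}

    ef = EfficientFrontier(mu, S)                     # mu, S as usual
    ef.add_sector_constraints(sector_mapper, sector_lower, sector_upper)
    weights = ef.max_sharpe()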

Additional Optimisers

Hi,
Thanks for putting out this amazing library. Have you considered (or would you consider) adding these two optimisation methods in the near future?

  1. Risk Parity
  2. Diversification Ratio

In case you are not planning to add these: how difficult do you think they would be to implement via your custom objective function, for a Python novice?

Thanks a lot.

Add new shrinkage estimators

Hi @robertmartin8,

I'm trying to implement two of the Ledoit-Wolf estimators in Python: Sharpe's single-factor (single-index) model and the constant-correlation model. I think they would be suitable contributions to your repo. Is that okay, or do you have any suggestions for me?

Updated function - Portfolio opt

Hi @robertmartin8

I am getting an error with the code below, and I am not sure what the updated function is.

# imports a tool to convert capital into shares
from pypfopt import discrete_allocation

# returns the number of shares to buy given the asset weights, prices, and capital to invest
alloc = discrete_allocation.portfolio(
    weights,
    df_buy_in['Buy In: 2014-12-31'],
    total_portfolio_value=capital
)

# returns same as above but for the MIS
mis_alloc = discrete_allocation.portfolio(
    mis_weights,
    df_mis_buy_in['Buy In: 2014-12-31'],
    total_portfolio_value=capital
)
---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
<ipython-input-63-7ff1e26da9af> in <module>
      3 
      4 # returns the number of shares to buy given the asset weights, prices, and capital to invest
----> 5 alloc = discrete_allocation.portfolio(
      6     weights,
      7     df_buy_in['Buy In: 2014-12-31'],

AttributeError: module 'pypfopt.discrete_allocation' has no attribute 
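
The old discrete_allocation.portfolio() function was replaced by a class-based API in later releases. A hedged sketch of the equivalent call with the newer interface (df_buy_in stands in for your price DataFrame):

    from pypfopt.discrete_allocation import DiscreteAllocation, get_latest_prices

    latest_prices = get_latest_prices(df_buy_in)   # last row of the price DataFrame
    da = DiscreteAllocation(weights, latest_prices, total_portfolio_value=capital)
    allocation, leftover = da.lp_portfolio()       # or da.greedy_portfolio()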

scipy.stats.kde: LinAlgError: singular matrix

Thanks a lot for coding and sharing this awesome library!
When I use min_cvar() in value_at_risk.py, a LinAlgError is raised:

> LinAlgError                               Traceback (most recent call last)
<ipython-input-36-b8e9d23c399e> in <module>()
----> 1 a.opt_min_cvar()

<ipython-input-29-cc78a27bceb6> in opt_min_cvar(self, s, beta, random_state)
    159             x0=self.initial_weights,
    160             niter=1000,
--> 161             paired=False,
    162         )
    163         return result

C:\ProgramData\Anaconda3\lib\site-packages\noisyopt\main.py in minimizeSPSA(func, x0, args, bounds, niter, paired, a, c, disp, callback)
    323             xplus = project(x + ck*delta)
    324             xminus = project(x - ck*delta)
--> 325             grad = (funcf(xplus, **fkwargs) - funcf(xminus, **fkwargs)) / (xplus-xminus)
    326         x = project(x - ak*grad)
    327         # print 100 status updates if disp=True

C:\ProgramData\Anaconda3\lib\site-packages\noisyopt\main.py in funcf(x, **kwargs)
    306         # freeze function arguments
    307         def funcf(x, **kwargs):
--> 308             return func(x, *args, **kwargs)
    309 
    310     N = len(x0)

<ipython-input-29-cc78a27bceb6> in _obj_cvar(self, wgts, ret_mat, s, beta, random_state)
    129         # Sample from the historical distribution
    130         print(pf_rets)
--> 131         dist = scipy.stats.gaussian_kde(pf_rets)
    132         sample = dist.resample(s)
    133         # Calculate the value at risk

C:\ProgramData\Anaconda3\lib\site-packages\scipy\stats\kde.py in __init__(self, dataset, bw_method)
    170 
    171         self.d, self.n = self.dataset.shape
--> 172         self.set_bandwidth(bw_method=bw_method)
    173 
    174     def evaluate(self, points):

C:\ProgramData\Anaconda3\lib\site-packages\scipy\stats\kde.py in set_bandwidth(self, bw_method)
    497             raise ValueError(msg)
    498 
--> 499         self._compute_covariance()
    500 
    501     def _compute_covariance(self):

C:\ProgramData\Anaconda3\lib\site-packages\scipy\stats\kde.py in _compute_covariance(self)
    508             self._data_covariance = atleast_2d(np.cov(self.dataset, rowvar=1,
    509                                                bias=False))
--> 510             self._data_inv_cov = linalg.inv(self._data_covariance)
    511 
    512         self.covariance = self._data_covariance * self.factor**2

C:\ProgramData\Anaconda3\lib\site-packages\scipy\linalg\basic.py in inv(a, overwrite_a, check_finite)
    973         inv_a, info = getri(lu, piv, lwork=lwork, overwrite_lu=1)
    974     if info > 0:
--> 975         raise LinAlgError("singular matrix")
    976     if info < 0:
    977         raise ValueError('illegal value in %d-th argument of internal '

LinAlgError: singular matrix

The input data is the monthly simple returns of three stocks (Apple, Microsoft and Google) from Jan 2005 to Dec 2018:

Date       | AAPL.O    | MSFT.O    | GOOGL.O
2005-01-31 |  0.194099 | -0.016467 |  0.014679
2005-02-28 | -0.416645 | -0.042618 | -0.039004
2005-03-31 | -0.071110 | -0.039348 | -0.039789
2005-04-29 | -0.134629 |  0.046752 |  0.218769
2005-05-31 |  0.102579 |  0.019763 |  0.260318
2005-06-30 | -0.074172 | -0.037209 |  0.060879
2005-07-29 |  0.158653 |  0.030998 | -0.021724
...
2018-06-29 | -0.009418 | -0.002327 |  0.026536
2018-07-31 |  0.027983 |  0.075753 |  0.086814
2018-08-31 |  0.196227 |  0.058918 |  0.003732
2018-09-28 | -0.008303 |  0.018161 | -0.020068
2018-10-31 | -0.030478 | -0.066101 | -0.096514
2018-11-30 | -0.184045 |  0.038199 |  0.017486
2018-12-31 | -0.116698 | -0.084047 | -0.058298

It seems that one of the iterates produced by noisyopt.minimizeSPSA is an all-zero weight vector; scipy.stats.kde then raises LinAlgError: singular matrix.

I would appreciate help in solving this problem.
Thanks!

Refactor risk and return models

Currently, there is a lot of repeated code within risk_models.py and expected_returns.py.

Almost all of the functions therein take prices as input, first processing them into returns, with the following couple of lines repeated many times.

    if not isinstance(prices, pd.DataFrame):
        warnings.warn("prices are not in a dataframe", RuntimeWarning)
        prices = pd.DataFrame(prices)
    daily_returns = prices.pct_change().dropna(how="all")

In the spirit of DRY, I'd like to refactor this without complicating the API. I haven't decided on the best way to proceed. I suppose I could put these lines into a function, but that would probably need to go in a separate file (not very elegant, IMO).
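
One possibility, as a sketch: factor the repeated lines into a small shared helper that both modules import (the name and location are illustrative):

    import warnings
    import pandas as pd

    def returns_from_prices(prices):
        # Coerce price input to a DataFrame and compute daily returns
        if not isinstance(prices, pd.DataFrame):
            warnings.warn("prices are not in a dataframe", RuntimeWarning)
            prices = pd.DataFrame(prices)
        return prices.pct_change().dropna(how="all")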

EfficientFrontier - error

Is this some python-version issue?
It worked fine until this point.

ef = EfficientFrontier(mu, S)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "pypfopt/efficient_frontier.py", line 84, in __init__
    super().__init__(len(tickers), tickers, weight_bounds)
TypeError: super() takes at least 1 argument (0 given)

Update examples.py

examples.py hasn't been updated since v0.2.0, and as such may not be working since the refactor. I should probably also add examples of the new functionality.

Divide by zero when passing in omega with 0 on the diagonal

I'm slowly but surely trying to recreate the results in the Idzorek paper using PyPortfolioOpt.

On p. 21 he mentions "Setting all of the diagonal elements of omega equal to zero is equivalent to specifying 100% confidence in all of the K views".

But when I pass in a zero-diagonal matrix as omega, I get a divide-by-zero error when I call bl_returns(). The same problem occurs with bl_cov().

During the Coursera course https://www.coursera.org/learn/advanced-portfolio-construction-python, the instructor notes that inverting omega is not always possible, so he references two alternative ways to calculate the expected returns and covariance matrix:

[attached screenshot of the two alternative formulas]

Maybe that can serve as inspiration?

The instructor specifically refers to https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1314585, where Walters has rewritten the two expressions into versions that avoid inverting omega. When I compare the formulas in the screenshot with the Walters paper, it seems that the first plus should be a minus, but given that it is quite a long time since I last dived deep into this kind of math, I might be wrong.

BTW, I can approximate a solution by filling the diagonal with np.fill_diagonal(omega_100, 1e-8), but that seems pretty hackish. The offending line is:

omega_inv = np.diag(1 / np.diag(self.omega))

Generalize bounds to be specific for each stock

Hi,

I have a portfolio in which I want to keep certain stocks at a given level and optimise the rest, but that's not possible with the current implementation.

Could the bounds input be either a single tuple or an array of per-asset tuples? The change is simple enough and I have something working already; I'd be happy to push it up.
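
A hedged illustration of the proposed interface, with one (min, max) pair per asset in place of a single global tuple (later releases accept this form, but treat that as an assumption in the context of this issue):

    from pypfopt.efficient_frontier import EfficientFrontier

    # Pin the second asset at exactly 10% and let the others float
    bounds = [(0, 0.3), (0.1, 0.1), (0, 0.6)]   # one (min, max) pair per asset
    ef = EfficientFrontier(mu, S, weight_bounds=bounds)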

cannot import name 'hrp_portfolio' from 'pypfopt.hierarchical_risk_parity'

Dear All

Within the directory containing the Jupyter notebook I want to work with (which uses this library), I cloned PyPortfolioOpt and ran the setup for installation. When executing the script I get the error:

ImportError: cannot import name 'hrp_portfolio' from 'pypfopt.hierarchical_risk_parity' (C:\Anaconda3\lib\site-packages\pyportfolioopt-0.5.1-py3.7.egg\pypfopt\hierarchical_risk_parity.py)

I would appreciate help overcoming this problem.

Best regards
