
OG-Core

[Badges: PSL cataloged, OS License CC0-1.0, Jupyter Book documentation, Python 3.9 / 3.10 / 3.11, PyPI latest release, PyPI downloads, code style black, CI testing, Codecov]

OG-Core comprises the core theory, logic, and solution-method algorithms of an overlapping-generations (OG) model that allows for dynamic general equilibrium analysis of fiscal policy. OG-Core provides a general framework and is a dependency of several country-specific OG models, such as OG-USA and OG-UK. The model output includes changes in macroeconomic aggregates (GDP, investment, consumption), wages, interest rates, and the stream of tax revenues over time. Regularly updated documentation of the model theory, output, and solution method, as well as of the Python API, is available here.

Disclaimer

The model is constantly under development, and model components could change significantly. The package will have released versions, which will be checked against existing code prior to release. Stay tuned for an upcoming release!

Using/contributing to OG-Core

There are two primary methods for installing and running OG-Core on your computer locally. The first and simplest method is to download the most recent ogcore Python package from the Python Package Index (PyPI.org). A second option is to fork and clone the most recent version of OG-Core from its GitHub repository and create the conda environment for the ogcore package. We detail both of these methods below.

Installing and Running OG-Core from Python Package Index (PyPI.org)

  • Open your terminal (or Conda command prompt), and make sure you have the most recent version of pip (the Python package installer) by typing python3 -m pip install --upgrade pip on a Unix/macOS machine or py -m pip install --upgrade pip on a Windows machine.
  • Install the ogcore package from the Python Package Index by typing pip install ogcore.
  • Navigate to a folder ./YourFolderName/ where you want to save scripts to run OG-Core and output from the simulations in those scripts.
  • Save the Python script run_ogcore_example.py from the OG-Core GitHub repository in the folder where you are working on your local machine, ./YourFolderName/run_ogcore_example.py.
  • Run the model with an example reform from the terminal/command prompt by typing python run_ogcore_example.py.
  • You can adjust the run_ogcore_example.py script by modifying model parameters specified in the og_spec dictionary.
  • Model outputs will be saved in the following files:
    • ./run_example_plots
      • This folder will contain a number of plots generated from OG-Core to help you visualize the output from your run
    • ./ogcore_example_output.csv
      • This is a summary of the percentage changes in macro variables over the first ten years and in the steady state.
    • ./OUTPUT_BASELINE/model_params.pkl
      • Model parameters used in the baseline run
      • See execute.py in the OG-Core repository for items in the dictionary object in this pickle file
    • ./OUTPUT_BASELINE/SS/SS_vars.pkl
      • Outputs from the model steady-state solution under the baseline policy
      • See SS.py in the OG-Core repository for what is in the dictionary object in this pickle file (a minimal loading sketch follows this list)
    • ./OUTPUT_BASELINE/TPI/TPI_vars.pkl
      • Outputs from the model time path solution under the baseline policy
      • See TPI.py in the OG-Core repository for what is in the dictionary object in this pickle file
    • An analogous set of files in the ./OUTPUT_REFORM directory, which represent objects from the simulation of the reform policy
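
Since several of these outputs are Python pickle files, here is a minimal sketch for inspecting one of them. It assumes the pickled object is a plain dictionary, as described in the list above; the exact keys depend on your OG-Core version.

```python
import pickle

# Inspect the steady-state output from the baseline run. The path matches
# the list above; the dictionary's contents are documented in SS.py in
# the OG-Core repository.
with open("./OUTPUT_BASELINE/SS/SS_vars.pkl", "rb") as f:
    ss_vars = pickle.load(f)

# List the names of the steady-state objects saved by the run.
print(sorted(ss_vars.keys()))
```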

Note that, depending on your machine, a full model run (solving for the full time path equilibrium for the baseline and reform policies) can take more than two hours of compute time.

If you run into errors running the example script, please open a new issue in the OG-Core repo with a description of the issue and any relevant tracebacks you receive.

The CSV output file ./ogcore_example_output.csv can be compared to the ./run_examples/expected_ogcore_example_output.csv file in the OG-Core repository to confirm that you are generating the expected output. The easiest way to do this is to copy the example-diffs and example-diffs.bat files from the OG-Core repository and use the sh example-diffs command (or example-diffs on Windows) from the run_examples directory.
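
If you would rather compare the two files in Python than with the shell scripts, here is a minimal sketch using pandas (assuming pandas is installed; paths assume both files are in your working directory, and no particular column names are assumed):

```python
import pandas as pd

# Compare your simulation summary against the expected output from the
# OG-Core repository. Small numerical differences can arise across
# platforms and package versions, so compare with a tolerance.
actual = pd.read_csv("ogcore_example_output.csv")
expected = pd.read_csv("expected_ogcore_example_output.csv")

pd.testing.assert_frame_equal(actual, expected, rtol=1e-4)
print("Output matches the expected results within tolerance.")
```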

Installing and Running OG-Core from GitHub repository

  • Install the Anaconda distribution of Python
  • Clone this repository to a directory on your computer
  • From the terminal (or Conda command prompt), navigate to the directory to which you cloned this repository and run conda env create -f environment.yml
  • Then, conda activate ogcore-dev
  • Then install the package in development mode with pip install -e .
  • Navigate to ./run_examples
  • Run the model with an example reform from terminal/command prompt by typing python run_ogcore_example.py
  • You can adjust the ./run_examples/run_ogcore_example.py script by modifying model parameters specified in the og_spec dictionary (see the sketch after this list).
  • Model outputs will be saved in the following files:
    • ./run_examples/run_example_plots
      • This folder will contain a number of plots generated from OG-Core to help you visualize the output from your run
    • ./run_examples/ogcore_example_output.csv
      • This is a summary of the percentage changes in macro variables over the first ten years and in the steady state.
    • ./run_examples/OUTPUT_BASELINE/model_params.pkl
      • Model parameters used in the baseline run
      • See execute.py for items in the dictionary object in this pickle file
    • ./run_examples/OUTPUT_BASELINE/SS/SS_vars.pkl
      • Outputs from the model steady state solution under the baseline policy
      • See SS.py for what is in the dictionary object in this pickle file
    • ./run_examples/OUTPUT_BASELINE/TPI/TPI_vars.pkl
      • Outputs from the model time path solution under the baseline policy
      • See TPI.py for what is in the dictionary object in this pickle file
    • An analogous set of files in the ./run_examples/OUTPUT_REFORM directory, which represent objects from the simulation of the reform policy
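
As an illustration of the og_spec adjustment mentioned in the list above, here is a minimal sketch. The parameter names shown are examples only; the full set of adjustable model parameters is documented in the OG-Core Python API.

```python
# Inside run_ogcore_example.py: og_spec collects the reform's parameter
# changes as a plain dictionary. The names below are illustrative.
og_spec = {
    "frisch": 0.41,      # Frisch elasticity of labor supply
    "start_year": 2024,  # first year of the simulation time path
}
```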

Note that, depending on your machine, a full model run (solving for the full time path equilibrium for the baseline and reform policies) can take more than two hours of compute time.

If you run into errors running the example script, please open a new issue in the OG-Core repo with a description of the issue and any relevant tracebacks you receive.

The CSV output file ./run_examples/ogcore_example_output.csv can be compared to the ./run_examples/expected_ogcore_example_output.csv file that is checked into the repository to confirm that you are generating the expected output. The easiest way to do this is to use the sh example-diffs command (or example-diffs on Windows) from the run_examples directory.

Core Maintainers

The core maintainers of the OG-Core repository are:

  • Jason DeBacker (GitHub handle: jdebacker), Associate Professor, Department of Economics, Darla Moore School of Business, University of South Carolina; President, PSL Foundation; Vice President of Research and Co-founder, Open Research Group, Inc.
  • Richard W. Evans (GitHub handle: rickecon), Advisory Board Visiting Fellow, Center for Public Finance, Baker Institute for Public Policy at Rice University; President, Open Research Group, Inc.; Director, Open Source Economics Laboratory

Citing OG-Core

OG-Core (Version #.#.#) [Source code], https://github.com/PSLmodels/OG-Core


OG-Core Issues

Matching CBO forecasts

Our model will not match CBO forecasts exactly because it is an OLG model and the CBO model is not. We need a method of ensuring that our model generates tax revenues as close as possible to those the CBO model generates, in the same categories, when the CBO's assumptions are run through the MicroSim model. A document, "HittingCBOTargets", details how to ensure our model does this over a chosen time horizon.

International Price Response Functions

We need to calibrate the responsiveness of world interest rates to changes in US net supply/demand of savings for each period in the model. This responsiveness is potentially time-varying, since foreign supply/demand is likely to grow faster than US net supply/demand.

We also need calibrations of the responsiveness of world prices to changes in the US net supply/demand for tradable goods.
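
One simple functional form for the interest-rate responsiveness, purely as an illustrative sketch (this notation is not from the model documentation), lets the rate faced by the US deviate from an exogenous world rate in proportion to US net demand for world savings:

```latex
% Illustrative sketch: r_t is the rate faced by the US, \bar{r}_t the
% exogenous world rate, ND_t is US net demand for world savings, and
% \epsilon_t is a (possibly time-varying) calibrated sensitivity.
r_t = \bar{r}_t + \epsilon_t \, ND_t
```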

How do we handle consumption goods imports/exports vs production goods imports/exports?

Do we want to deal with labor migration?

generalize wealth_data.py

Generalize the function in wealth_data.py to accept any vector of bin weights. Currently, the function is not used; instead, wealth_data.py automatically outputs the 7 data vectors needed by the wealth tax paper.

Minimizer Problems

The minimizer is not changing the values after iterating with method 2. I think this is either a problem with xtol and mindist in the SS.py file, or the scaling and tol in the minimizer.

Separate pickling of variables and parameters

Make a pickle just for the exogenous parameters, and another for just endogenous variables. There is currently one for parameters, and another for parameters and endogenous variables. Remove the overlap.
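
A minimal sketch of the proposed split (file and variable names below are illustrative placeholders, not the repo's actual objects):

```python
import pickle

# Keep exogenous parameters and endogenous variables in separate pickle
# files so that neither file duplicates the other's contents.
exog_params = {"S": 80, "J": 7}        # placeholder exogenous parameters
endog_vars = {"Kss": 0.0, "Lss": 0.0}  # placeholder endogenous variables

with open("parameters.pkl", "wb") as f:
    pickle.dump(exog_params, f)
with open("variables.pkl", "wb") as f:
    pickle.dump(endog_vars, f)
```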

How to Add Effects on Individual States

Treat states as small open economies which take the aggregate macro effects as given. With detailed demographic information on each state, we can see how their set of households evolve over time and the federal tax burdens each household will pay. Similarly, we should be able to find out the mix of industries in each state and approximate taxes from each of these.

Note that we will need estimates of migration into and out of each state, which is primarily migration between states within the US.

Add back in consumption tax

Back in October, using the old SS solution method, we had to take out the consumption tax because of a fixed-point problem in the solution method. In order to know consumption, we had to know the lump-sum tax, but we could not know the lump-sum tax unless we knew consumption, since consumption times the consumption tax rate was needed to compute the lump-sum tax. To avoid a nested fsolve, we removed the consumption tax from the paper. However, Isaac noted that now that we are guessing the lump-sum tax (T_H) in an outer loop, this problem no longer exists in SS or TPI. If @rickecon or @kerkphil can re-type the Euler equations with the consumption tax (they should be saved somewhere, but we should make sure they're right), I think it would be really easy to put that back in on the household side.
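
For reference, a sketch of how the consumption tax enters the household budget constraint (the notation here is illustrative, not the paper's exact equations): with a consumption tax rate \tau^c, the fixed point arises because the lump-sum transfer T_{H,t} depends on \tau^c c_{s,t}, while consumption depends on T_{H,t}.

```latex
% Illustrative household budget constraint with a consumption tax \tau^c:
(1 + \tau^c)\, c_{s,t} + b_{s+1,t+1}
    = (1 + r_t)\, b_{s,t} + w_t e_s n_{s,t} + T_{H,t}
```

Guessing T_{H,t} in the outer loop breaks this circularity, which is why the problem no longer exists in SS or TPI.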

Generalize code so it works for any S and J

Many parts of the code, especially those dealing with the calibration (such as func_to_min in SS.py), only work for J=7 or S=80. We need to go through the code and get rid of these problems eventually.

income.py vs. Income_nopoly.py

income.py has a better fit, using polynomials which Jason computed. However, we only have the polynomials for the 7 percentile groups used in the wealth tax paper. Unless we can get polynomial fits for all percentile groups, we should stop using income.py and move to income_nopoly.py, which just uses the raw data.

TPI Euler equations as functions

In TPI, the Euler equations are not written as functions. We might want to change this so that we only have to change the Euler equations in one place.

What is the best way to create inputs that would run the code to completion quickly?

It's typically quite helpful to have a 'toy' dataset around that one can use to execute the code quickly from start to finish. This serves as a reasonable 'smoke test' that lets one know that some code refactoring didn't do major damage. After such a 'smoke test', one would then typically run the real test (which may take tens of minutes or more). This dataset only has to achieve two goals:

  • it must be acceptable input for the code
  • it should execute fairly quickly

It doesn't have to make mathematical sense as input, nor produce reasonable output. Issue #42 seems to indicate that we could now choose a very small J and S - is that correct? If so, is this likely to make the fsolves faster? For example, we could set J=3 and S=8. Is it easy to describe how one would create such a 'mini-input'? If so, could someone sketch out the necessary steps? I could dig in here, I just wonder how much expertise one would need to create such an input dataset.
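
A hypothetical sketch of such a 'mini-input', under the assumption that the dimensions can simply be shrunk as issue #42 suggests (the dictionary below is purely illustrative; the actual parameter plumbing depends on how SS.py and TPI.py read their parameters):

```python
# Hypothetical smoke-test dimensions: shrink the state space so that
# the fsolve calls are cheap. J = ability types, S = periods of
# economic life, T = length of the time path.
smoke_params = {
    "J": 3,   # instead of 7 ability types
    "S": 8,   # instead of 80 periods of life
    "T": 32,  # a short time path, e.g. 4 * S
}
```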

Speed up minimizer by not pickling

Find a different way to update the guesses for b and n in the minimizer, without calling pickle twice per iteration, as this slows it down.

Rename and condense output folders

First, the nothing folder should be renamed (perhaps to pickle_storage), and then any reference to that folder in the code should be changed.

Also, there are currently 3 output folders: OUTPUT, OUTPUT_incometax, and OUTPUT_wealthtax. These should be changed/renamed to OUTPUT_tax1 and OUTPUT_tax2 (we only need 2, not 3).

Consolidation of the Model

The individual pieces of the extended model need to be folded into a single document that lays out the model in its entirety.

Renaming functions and variables

Rename b1, b2, b3, b1_2, and b2_2, as well as Kssmat2, 3, etc, to more descriptive names.

Also, rename euler1 and other functions with more descriptive names.

Fsolve constraint issues

Add comments explaining why c and b are constrained to be positive (according to the Euler equations, they can go negative, but theoretically they shouldn't). Test to see whether the n constraints bind; if not, delete those constraints.

derivation of p_{s,t}

Check the derivation of the price of the composite consumption good for age-s households in period t, p_{s,t}, derived in Equation 1.18 in Section 1.2.1 of AEIBYUDyn.pdf. Without the solution of FR1993, the problem becomes difficult and maybe intractable.

Financial Intermediary

We need a formal write up of the financial intermediary sector which accepts deposits from households and diversifies investment over the various firms in the model.

Separate the calibration from SS.py

Make calibration.py its own function. Also, in SS.py there is code that calculates the percent differences between the data and the model moments. Move this to calibration.py as well.

Put all python functions in one file

Make a .py file that contains all the python functions used by SS and TPI to prevent overlap and condense the code. The functions will most likely need to be altered to take the parameters as inputs, as they might not work if the parameters are simply globals.
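
As a sketch of the kind of change this implies (the function name and signature below are hypothetical), each shared function would take its parameters explicitly instead of reading module-level globals:

```python
import numpy as np

# Hypothetical refactor: pass parameters explicitly rather than relying
# on globals set by an earlier unpickling step.
def get_K(b_mat, omega, lambdas):
    """Aggregate capital from the savings distribution b_mat (S x J),
    weighted by the population distribution omega (length S) and the
    ability weights lambdas (length J)."""
    return np.sum(omega[:, np.newaxis] * lambdas[np.newaxis, :] * b_mat)
```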

Error in calculating replacement rates for payroll tax

In the replacement_rate_vals() function in the tax.py file there is a bug of sorts. When we get the replacement rates, all but the lowest 25% of individuals hit the maximum payment. It is unlikely that the bottom 50% and above are making more than $30,000 a month, so there must be either a problem in the code itself or in the data/method we used to calculate the bins of the PIA.

Reduce number of globals

In SS.py and TPI.py, variables and parameters are unpickled and put in globals namespace. We need to figure out another way to do this that is more...elegant.

Wage vs Capital Income Taxes

Write up the model with these two taxes being imposed separately. The tax rate on wage income will differ from that on capital income, but both will be functions of total income.

We already have calibrated functions for these based on 2012 data in the Excel file, CalibratingTwoIncomeTaxes.xlsx.
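
A sketch of the setup being described (notation illustrative): total income combines labor and capital income, and each component is taxed at its own rate, with both rates functions of total income.

```latex
% Illustrative: y_{s,t} = w_t e_s n_{s,t} + r_t b_{s,t} is total income;
% wage and capital income are taxed at separate rates \tau^w and \tau^k,
% each a function of total income.
T_{s,t} = \tau^{w}(y_{s,t})\, w_t e_s n_{s,t}
        + \tau^{k}(y_{s,t})\, r_t b_{s,t}
```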

Solving dynamic model when tax schedules are not always progressive

My understanding is that in the current version of the dynamic model, if tax schedules are not uniformly progressive, then a model solution cannot be found because there are multiple roots. It seems to me that the size of the OASDI payroll tax and its sharp discontinuity at the maximum taxable earnings level (with the MTE wage indexed) implies a need to spend the extra computation time to handle this situation correctly. It would seem that the first priority is to get this logically correct, then worry about how long it takes the dynamic model to execute. Remember that for many people (a majority?) their payroll tax liability is greater than their income tax liability, and that the nonlinearity is substantial: a marginal tax rate of 12.4 percent below the MTE and a marginal tax rate of zero percent above the MTE. Do my concerns make sense? Or, is there a way to work around this without changing the root-finding algorithm?
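
The nonlinearity in question, as a minimal sketch (the rates come from the issue text; the function and the MTE value below are illustrative, not model code):

```python
def oasdi_mtr(earnings, mte, rate=0.124):
    """Marginal OASDI payroll tax rate: 12.4 percent below the maximum
    taxable earnings (MTE) and zero above it. The discontinuity at the
    MTE is what can give the household problem multiple roots."""
    return rate if earnings < mte else 0.0

print(oasdi_mtr(100_000, mte=150_000))  # 0.124 (below an illustrative MTE)
print(oasdi_mtr(200_000, mte=150_000))  # 0.0   (above it)
```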

Is `e_final` normalized properly?

In the comments for get_e, we see that the returned matrix e of shape SxJ is "normalized so the mean is one." However, neither the rows nor the columns of e have mean 1, which can be seen easily just by running the latest master and setting a breakpoint before return e_final. This is likely a trivial issue, but I just wanted to point it out. Perhaps I'm missing something else?
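
A minimal sketch of the check being described (a random placeholder stands in for the matrix returned by get_e; note the intended normalization may instead be weighted by the population distribution, which is an assumption here):

```python
import numpy as np

# Placeholder for the S x J ability matrix returned by get_e.
S, J = 80, 7
e = np.random.lognormal(size=(S, J))

print(np.isclose(e.mean(), 1.0))  # overall mean approximately 1?
print(e.mean(axis=0))             # per-ability-type (column) means
print(e.mean(axis=1))             # per-age (row) means
```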

make the code base into a Python package

It's best to work with Python code as a package. It's easier to call certain pieces of the code and share it with others. I'll have a go at making a package. I intend to start with the code in Python/TESTED_DONT_TOUCH, since the goal is to create a Python package of the current "gold standard" version of the model. Let me know if I should start somewhere else @evan-magnusson @rickecon @isaacswift @jdebacker

Income.py get_e function add description

Add a description to the get_e() function in income_polynomial.py explaining what is going on. Also, pull the parameters to the top of the file.

Parallelize SS.py and TPI.py

There are multiple ways we can parallelize SS.py and TPI.py.

SS.py:
In the SS_solver() function, the for loop {for j in xrange(J)} on line 157 can be parallelized, as each j index of the array does not rely on anything that happens before it. J is 7 right now, so the speedup wouldn't be huge. However, this would allow us to easily set J=100 without massive slowdowns. Also, if there is a way to parallelize the minimizer, huge speedups would be possible. The minimizer tests each element of the gradient (there are 87 elements to test), jumps, tests each element, jumps, etc. However, testing each of the elements does not require any info from testing the previous element. So it could test each element in parallel, jump, test each element in parallel, jump, etc. This would be much faster. So we either need to build a minimizer or find one that is parallelized.

TPI.py:
In the while loop that starts on line 275, there are a couple of for loops that can be parallelized. The easiest would be the for loop on line 287 {for j in xrange(J)}. As in SS.py, this can be parallelized. Within this for loop (for each j), there are two for loops (not nested, separate) on lines 288 and 299. These can happen at the same time, and these loops themselves can be parallelized for each s and t. So, we can parallelize the J for loop, and then each processor can spawn 2 processes, for the S-2 loop and the T loop. Then, those can also spawn S-2 or T processes, respectively. We might not want to parallelize this much, as this would require hundreds of processors. We might want to just parallelize the T for loop, as T is the largest. But we have lots of options here.
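
A minimal sketch of the first suggestion, parallelizing over the J ability types with the standard library (the function body below is a placeholder for the loop body in SS_solver(), not the actual model code):

```python
from multiprocessing import Pool

def solve_type_j(j):
    """Placeholder for one iteration of the 'for j in xrange(J)' loop in
    SS_solver(): each ability type j is independent of the others, so the
    iterations can run in separate processes."""
    # ... solve the household problem for ability type j ...
    return j

if __name__ == "__main__":
    J = 7
    with Pool() as pool:
        results = pool.map(solve_type_j, range(J))
```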

Future issues with demographics.py

There are warnings being issued when demographics.py is run. It would appear that the way we are slicing arrays will not be valid in the future, so we should fix this soon.

Firms issuing equities

Need to work on Extended Firms Problem.tex to include the firm's choice of equity shares outstanding.

Profit-shifting elasticity

We need to calibrate a response of profit-shifting to tax changes for each industry. The easiest way is with an elasticity assumption. However, a mean-variance risk aversion model on the part of the firms could generate this response function endogenously as a function of home and foreign rates of return, home and foreign taxes, home and foreign variances on returns, and a risk aversion parameter. A quick write-up of such a model and a calibrating spreadsheet have been added under the Model Writeup folder with the titles ProfitSharingModel.

Distribution of Bequests

Need to look into estimating the distribution of bequests from an agent of type j and age s who dies in period t to agents in the next period, over all types j and all ages s.
