
switch's Introduction

This repository contains version 2 of the Switch electricity planning model.
This optimization model is modular and can be used with varying levels
of complexity. Look in the examples directory for demonstrations of
using Switch for investment planning or production cost simulation. The
examples enable varying levels of model complexity by choosing which
modules to include.

See INSTALL for installation instructions.

To generate documentation, go to the doc folder and run ./make_doc.sh.
This will build HTML documentation files from Python docstrings, including
descriptions of each module, its intentions, the model components it
defines, and the input files it expects.

TESTING
To test the entire codebase, run this command from the root directory:
	python run_tests.py

EXAMPLES
To run an example, navigate to an example directory and run the command:
	switch solve --verbose --log-run

CONFIGURING YOUR OWN MODELS

At a minimum, each model requires a list of Switch modules to define the model
and a set of input files to provide the data. The Switch framework and
individual modules also accept command-line arguments to change their behavior.

Each Switch model or collection of models is defined in a specific directory
(e.g., examples/3zone_toy). This directory contains one or more subdirectories
that hold input data and results (e.g., "inputs" and "outputs"). The models in
the examples directory show the type of text files used to provide inputs for a
model. You can change any of the model's input data by editing the *.csv files
in the input directory.

Switch contains a number of different modules, which can be selected and
combined to create models with different capabilities and amounts of detail.
You can look through the *.py files within switch_mod and its subdirectories to
see the standard modules that are available and the columns that each one will
read from the input files. You can also add modules of your own by creating
Python files in the main model directory and adding their name (without the
".py") to the module list, discussed below. These should define the same
functions as the standard modules (e.g., define_components()).

Each model has a text file which lists the modules that will be used for that
model. Normally this file is called "modules.txt" and is stored in the main
model directory or in an inputs subdirectory. Switch will automatically look in
those locations for this list; alternatively, you can specify a different file
with the "--module-list" argument.

Use "switch --help", "switch solve --help" or "switch solve-scenarios --help"
to see a list of command-line arguments that are available.

You can specify these arguments on the command line when you solve the model
(e.g., "switch solve --solver cplex"). You can also place frequently used
arguments in a file called "options.txt" in the main model directory. These can
all be on one line, or they can be placed on multiple lines for easier
readability (be sure to include the "--" part in all the argument names in
options.txt). The "switch solve" command first reads all the arguments from
options.txt, and then applies any arguments you specified on the command line.
If the same argument is specified multiple times, the last one takes priority.
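For example, an options.txt might contain the following (the argument values
here are illustrative):

	--verbose
	--solver cplex
	--outputs-dir outputs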

You can also define scenarios, which are sets of command-line arguments to
define different models. These additional arguments can be placed in a scenario
list file, usually called "scenarios.txt" in the main model directory (or you
can use a different file specified by "--scenario-list"). Each scenario should
be defined on a single line, which includes a "--scenario-name" argument and
any other arguments needed to define the scenario. "switch solve-scenarios"
will solve all the scenarios listed in this file. For each scenario, it will
first apply all arguments from options.txt, then arguments from the relevant
line of scenarios.txt, then any arguments specified on the command line.
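A two-scenario scenarios.txt might look like this (the arguments after each
scenario name are illustrative):

	--scenario-name base
	--scenario-name high_demand --inputs-dir inputs_high_demand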

After the model runs, results will be written in comma-separated text files
(with extension .csv) in the "outputs" directory (or a different directory
specified via the "--outputs-dir" argument).

switch's People

Contributors

anamileva, bmaluenda, desmondzhong, hecerinc, josiahjohnston, mateoatr, mfripp, mseaborn, rodrigomha


switch's Issues

Upgrading codebase for compatibility with Python 3

Given recent issues with package installation (#111) and the fact that Python 2 will cease to be maintained on January 1st, 2020, I propose we start planning for an upgrade to Python 3.

I haven't had experience with upgrading projects to be compatible with new versions of programming languages, so I'm not sure what the best approach for this work is:

-Should we tackle this by creating a new branch and upgrading one module at a time until everything is ready and then merge?
-Should we create one branch per module and merge one at a time (with backwards compatibility to avoid breaking Python 2 modules)?
-Should we divide tasks between us beforehand? Or just commit-as-much-as-you-can?
-Should we look for external help (maybe Mark or someone from the ERG)?

Happy to hear your thoughts on this. I can personally commit some hours per week to support this project (if this is something we want to embark upon).

Legacy renewable projects: failing to assign capacity factors when periods > gen_max_age

When building a long-term scenario (out to 2050), I included renewable energy plants installed in 2015; however, I kept getting the following error:

RuntimeError: Failed to set value for param=gen_max_capacity_factor, index=('renewPlant', 338), value=0.248987. source error message="Error setting parameter value: Index '('renewPlant', 338)' is not valid for array Param 'gen_max_capacity_factor'"

It seems that SWITCH can't assign capacity factors for legacy plants (and hence can't construct the model to be run) if their maximum age is reached before an investment period. In this particular case, no capacity factors could be assigned after 2040 (2015 commissioning + 25-year lifetime = 2040) because the plants weren't meant to exist by then.

A quick workaround to construct and run the model was to extend these projects' lifetimes.

Customizable tabular reporting system

It would be nice to have a standardized system for creating tabular output files, where users can call a helper function to register any expressions that they want reported, along with the indexes and (optional) filename to use for them. Then a standard reporting module would create a single output file for each specified filename and/or indexing set, and would automatically create a column for each expression that's been assigned to that file (there would probably also be some standard expressions registered automatically). Tables like this are very handy for reviewing results in Excel or Pandas.

Tables like this are already created by switch_mod.hawaii.util.write_table(), but it would be nice to generalize this approach as described above. This would reduce the amount of nesting that is often needed in calls to write_table() and also make it so that all the code related to one output file doesn't have to be in one place.
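A rough sketch of the registration idea (all names here are hypothetical, not existing Switch APIs):

import csv
import os
from collections import defaultdict

# registry mapping output filename -> list of (column_name, expression_fn)
_REGISTERED_COLUMNS = defaultdict(list)

def register_output(filename, column_name, expression_fn):
    """Ask the reporting module to add expression_fn as a column of filename."""
    _REGISTERED_COLUMNS[filename].append((column_name, expression_fn))

def write_registered_tables(instance, index_set, outdir):
    """Write one CSV per registered filename, one row per element of index_set."""
    for filename, columns in _REGISTERED_COLUMNS.items():
        with open(os.path.join(outdir, filename), 'w') as f:
            writer = csv.writer(f)
            writer.writerow(['index'] + [name for name, _ in columns])
            for idx in index_set:
                writer.writerow([idx] + [fn(instance, idx) for _, fn in columns])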

Sunk costs

I just noticed that transmission sunk costs (the capital and fixed O&M costs of existing capacity) are not considered in the objective function. This is due to the TX_BUILDS_IN_PERIOD set not including builds indexed by 'Legacy'.
This is not a problem by itself, but the model does include generation sunk costs (the capital and fixed O&M costs of predetermined capacity). If the model allowed early decommissioning or mothballing, these costs would not be sunk, but we haven't enabled those features yet.

So, for consistency, either Tx sunk costs should be included in the objective function or Gen sunk costs should be excluded. Either way, both total costs and (total - sunk) costs should be reported on model exit.

Minimum downtime constraint is confusing and appears buggy

This is an update of the stale pull request #97 . Either this is a bug that needs to be fixed, or a confusing modeling formulation that needs clarification in documentation and/or formulation.

The minimum downtime constraint is currently:
CommitGen[t] <= max(CommitUpperLimit[t_prior] for t_prior in time_window + 1) - sum(m.ShutdownGenCapacity[t_prior] for t_prior in time_window)

I think it needs to be:
CommitGen[t] <= CommitUpperLimit[t] - sum(m.ShutdownGenCapacity[t_prior] for t_prior in time_window)

Matthias has stated that using max() is necessary to track a band of capacity that needs to stay down for maintenance. To me, it looks like it will overestimate available capacity if more capacity was available in prior timepoints.

Matthias's comments, from prior documentation and a May 19, 2017 post on pull request #96: The max(...) term finds the largest fraction of capacity that could have been committed in the last x hours, including the current hour. We assume that everything above this band must remain turned off (e.g., on maintenance outage). Note: this band extends one step prior to the first relevant shutdown, since that capacity could have been online in the prior step.
... This implements a band of capacity that does not participate in the minimum downtime constraint. Without that term, the model can turn off some capacity, and then get around the min-downtime rule by turning on other capacity which is actually forced off by gen_max_commit_fraction, e.g., due to a maintenance outage.

My response: Any forced maintenance outage encoded by gen_max_commit_fraction will be directly reflected into ShutdownGenCapacity and subject to minimum downtime, if that capacity was online when the maintenance event started. If sufficient capacity was offline prior to maintenance, then min downtime would not need to be tracked separately. I don't see a separate band of capacity that needs to be implicitly tracked. The way I read it, the max(...) term would overestimate available capacity if prior timepoints in the window had more capacity available. I think the max(...) term needs to be replaced by m.CommitUpperLimit[g, t].
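For concreteness, here is a rough Pyomo-style sketch of the proposed formulation, written with the (generator, timepoint) indexing used in the actual code (DOWNTIME_WINDOW_TPS is a hypothetical indexed set holding the lookback window; the other names follow the discussion above):

from pyomo.environ import Constraint

def define_components(m):
    # committed capacity can be at most the current upper limit, minus any
    # capacity shut down within the minimum-downtime window ending at t
    m.Enforce_Min_Down_Time = Constraint(
        m.GEN_TPS,
        rule=lambda m, g, t: m.CommitGen[g, t] <= m.CommitUpperLimit[g, t] - sum(
            m.ShutdownGenCapacity[g, t_prior]
            for t_prior in m.DOWNTIME_WINDOW_TPS[g, t]  # hypothetical window set
        ),
    )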

Transmission lines vs. transfer corridors

In switch_model.transmission.transport, we generally refer to entities called "transmission lines". These are actually transfer corridors: aggregated transfer capacity between two zones, calculated from the capabilities of the underlying network. Users often estimate this as a derated sum of the ratings of the transmission lines connecting these zones, but there are other ways to do it (e.g., consult the WECC Path Rating Catalog). At any rate, these are definitely not transmission lines, and it could bug electrical engineers to hear them called that. So we should probably rename them as transfer corridors, transmission corridors, transfer paths or similar. That will also keep the name space open for actual transmission line modeling later.

It would also be helpful to add bidirectional ratings for these paths (at least for initial capacity, and possibly different costs to update the capacity in each direction, based on a detailed transmission study). This reflects the fact that transfer paths often have different ratings in each direction.

Naming conventions

I am working with some of the set arrays, and realized that we have two different naming conventions going. We should probably standardize on one or the other.

Option 1: (index_set)_(listed_items), e.g.,
TS_TPS
PROJ_FUEL_USE_SEGMENTS
PERIOD_RELEVANT_TRANS_BUILDS

Option 2: (listed_items)_(preposition)_(index_set), e.g.,
PROJECTS_ACTIVE_IN_TIMEPOINT
CONNECTIONS_DIRECTED_INTO_WN
ACTIVE_PERIODS_FOR_PROJECT

Option 1 is more terse, but it is a little unclear where the index name ends and the item name begins. These can also have naming conflicts with product-style sets, e.g., if PROJ_DISPATCH_POINTS is a set of tuples from within PROJECTS x TIMEPOINTS, what should we call the indexed set of TIMEPOINTS for each PROJECT? So I would recommend standardizing on Option 2.

With Option 2, we might also want to decide whether the name should be a noun_adjective_in_something (PROJECTS_ACTIVE_IN_TIMEPOINT) or an adjective_noun_for_something (ACTIVE_PROJECTS_FOR_TIMEPOINT). I doubt we can standardize on which preposition to use (ACTIVE_PROJECTS_FOR_TIMEPOINT or ACTIVE_PROJECTS_IN_TIMEPOINT?), but that should be OK.

Another naming issue: we sometimes write out "project" or "PROJECT" in names of components and files, and sometimes write "proj" or "PROJ". We should probably standardize on the long or the short form. I don't really have an opinion about which.

I think if we straighten these out, it will help people in learning what each set does, and guessing the name of the set they should use (or create) for a particular task.

Standardize installation and execution

I think we need to standardize how SWITCH is installed and run, because we seem to have two different methods in use. I'll describe below how I do it, which is my recommended approach, but I would like to hear if people have other suggestions. I would then like to rewrite the instructions so users can get up and running pretty easily.

I install SWITCH by cloning the repository, then cd'ing into the switch directory and running python setup.py develop or python setup.py develop --user. This installs the package in-place, so I can edit it and use it at the same time (and if I want to change to another installation, I just go there and run python setup.py develop again). Users who don't want to edit the package can instead run python setup.py install or python setup.py install --user. Once we put the package on pypi and conda, they can just do pip install switch or conda install -c switch-model switch . These commands work well under anaconda, and would probably work with homebrew python. With the standard system python, users might need to use sudo with the system-wide versions.

These commands do a few useful things:

  • install the switch_mod package in the python system path
  • install dependencies specified in setup.py (Pyomo and its dependencies for now, but the list could be extended)
  • install the switch command-line script (equivalent to python -m switch_mod.main)

I then set up models in various locations in my file system (not inside the switch folder) and solve them by running switch solve or switch solve-scenarios. If I edit the local copy of the switch package or use git pull, those changes are automatically reflected the next time I use the command-line script. This works well, and makes it easy to set up other users and share models with them (via separate repositories for each model).

As I understand it from the INSTALL file, other SWITCH users are editing their PYTHONPATH to point to the switch repository and then using pip install --user -r pip_requirements.txt to install the dependencies. I don't know how other users are activating the switch command-line script, if they use it at all. This approach has a few disadvantages compared to my approach, which make it difficult to give general installation instructions for new users:

  • it is hard to write general instructions for users to modify their PYTHONPATH appropriately;
  • using pip requires installing pip on some systems, and may cause problems for people who use anaconda
  • it's nearly impossible to write cross-platform instructions to install the switch command-line script, which is very handy, especially for new users

All of these problems are addressed automatically by setuptools, which is used by setup.py. This takes care of all the cross-platform issues other than installing the solver.

Any objections to making setup.py the standard way to install switch, via the various commands listed in the second paragraph? Or making switch solve ... the standard way to run it?

Error while using Gurobi

What could be the source of this error?

Deterministic concurrent LP optimizer: primal simplex, dual simplex, and barrier
Showing barrier log only...

Root barrier log...

Elapsed ordering time = 5s
Ordering time: 139.02s

Barrier performed 0 iterations in 568.30 seconds
Error termination

Explored 0 nodes (0 simplex iterations) in 570.04 seconds
Thread count was 1 (of 12 available processors)

Solution count 0

Solve interrupted (error code 10001)
Best objective -, best bound -, gap -

Traceback (most recent call last):
  File "<stdin>", line 5, in <module>
  File "C:\Users\Public\Software\lib\site-packages\pyomo\solvers\plugins\solvers\GUROBI_RUN.py", line 114, in gurobi_run
    model.optimize()
  File "model.pxi", line 833, in gurobipy.Model.optimize
gurobipy.GurobiError: Out of memory

Traceback (most recent call last):
  File "C:\Users\Public\Software\Scripts\switch-script.py", line 11, in <module>
    load_entry_point('switch-model', 'console_scripts', 'switch')()
  File "c:\users\puneet chitkara\switch\switch_model\main.py", line 39, in main
    main()
  File "c:\users\puneet chitkara\switch\switch_model\solve.py", line 161, in main
    results = solve(instance)
  File "c:\users\puneet chitkara\switch\switch_model\solve.py", line 731, in solve
    results = model.solver_manager.solve(model, opt=model.solver, **solver_args)
  File "C:\Users\Public\Software\lib\site-packages\pyomo\opt\parallel\async_solver.py", line 28, in solve
    return self.execute(*args, **kwds)
  File "C:\Users\Public\Software\lib\site-packages\pyomo\opt\parallel\manager.py", line 107, in execute
    ah = self.queue(*args, **kwds)
  File "C:\Users\Public\Software\lib\site-packages\pyomo\opt\parallel\manager.py", line 122, in queue
    return self._perform_queue(ah, *args, **kwds)
  File "C:\Users\Public\Software\lib\site-packages\pyomo\opt\parallel\local.py", line 58, in _perform_queue
    results = opt.solve(*args, **kwds)
  File "C:\Users\Public\Software\lib\site-packages\pyomo\opt\base\solvers.py", line 600, in solve
    "Solver (%s) did not exit normally" % self.name)

Solve.py MIPgap definition

Solve.py defines solver_options_string to pass settings such as the maximum number of iterations, the MIP gap and others (screenshot of the relevant lines omitted). In what format do I have to write, for example, solver_options_string mipgap=0.001?
I have tried many ways but it doesn't work for me.

Thank you very much.
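For reference, the usual way to pass these settings is through the --solver-options-string argument, on the command line or in options.txt (assuming a version of Switch that supports this flag), e.g.:

	switch solve --solver gurobi --solver-options-string "mipgap=0.001"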

Updating Pyomo version

Dear @mfripp and @josiahjohnston,

I was updating the WECC branch of switch to the most recent stable release from the upstream, but found some errors.

First, it looks like the master branch of switch as currently coded only works with Pyomo==5.6.5 and PyUtilib==5.8.0. If we upgrade to Pyomo 5.7, we can no longer use PyUtilib==5.8.0 due to the deprecation of the enum module. The setup.py does not pin these versions, so by default it installs the most recent ones and does not work properly. An easy fix would be to pin these versions in setup.py. However, I think in the long run we want to update to the most recent version of Pyomo. Some people have also reported this issue: #130

Here is the error when trying to run switch with the most recent version of Pyomo:

ValueError: Unexpected keyword options found while constructing 'AbstractOrderedSimpleSet': rule

Pyomo deprecated the rule argument for abstract Sets; I think we could simply replace that argument with initialize or validate, depending on the set.
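A sketch of the kind of change involved (EXAMPLE_SET and its initializer are hypothetical placeholders):

# Old style, rejected by Pyomo >= 5.7:
mod.EXAMPLE_SET = Set(dimen=2, rule=lambda m: build_pairs(m))

# Replacement accepted by newer Pyomo versions:
mod.EXAMPLE_SET = Set(dimen=2, initialize=lambda m: build_pairs(m))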

I am going to create a pull request for fixing this, but want to know if you are already working on this.

cc @PatyHidalgo

Merge improvements from SWITCH-WECC

This issue will serve as a central point to discuss merging the issues from the SWITCH WECC repo.
As of now the plan is as follows.

Action plan

  1. Move the SWITCH WECC latest version into a wecc branch in this repo.
  2. Move all wecc specific files into a wecc package.
  3. Create branches master-with-black-formatted-history and wecc-with-black-formatted-history which are the master and wecc branches rewritten but with their history modified such that every commit is formatted according to black. This reduces merge conflicts in the next step (see notes on how this was done below).
  4. Merge master into wecc
  5. Create merge requests one after another to include the changes from the wecc branch into the main branch as small features. These merge requests will have to be merged more or less in order. See chart below.

WECC features to include into master

  • Graph generation (status: not started)
  • Auto input data loading (status: not started)
  • Tracking of other GHGs (status: not started)
  • Variable scaling for numerical properties (status: not started)
  • Addition of documentation (status: not started)
  • Addition of switch drop tool (status: not started)
  • Addition of some dual values in the output (status: not started)
  • Improving README and docs (e.g. switch to .md format) (status: not started)
  • Include wecc folder for others to see / use (status: not started)
  • New policy modules (status: not started)
  • Add switch compare (status: not started)

master features to include in WECC

  • Specification of dimen in set definition

Conflicting changes

  • Changes related to simultaneous upgrade to latest version of Pyomo
  • Use of ordered=False in wecc but unique_list in master
  • wecc specific examples e.g. ca_policies or stochastic example
  • config.yaml approach to generating scenarios

Notes

Process for creating master-with-black-formatted-history and wecc-with-black-formatted-history

This is achieved with the following commands in PowerShell.

  1. Run git merge-base wecc master to find the commit where the two branches first diverged.

  2. Checkout that commit on a new branch called common-wecc-parent.

  3. Reformat the code with black --fast -t py310 --extend-exclude 'switch_wecc/wecc' ./switch_model and make a new commit.

  4. Checkout master on switch to a new branch master-with-black-formatted-history.

  5. Run git rebase --rebase-merges --exec "black --fast -t py310 --extend-exclude 'switch_wecc/wecc' ./switch_model; git commit -a --amend --allow-empty" common-wecc-parent
    a. The --rebase-merges flag ensures we keep the topology of the merges
    b. The --exec "black --fast -t py310 --extend-exclude 'switch_wecc/wecc' ./switch_model; git commit -a --amend --allow-empty" means that after every commit, we amend the commit such that the switch_model files are reformatted.

  6. This will generate significant merge conflicts. For each one run
    a. git restore --theirs -s REBASE_HEAD . to update the local files to the commit that is being applied.
    b. black --fast -t py310 --extend-exclude 'switch_wecc/wecc' ./switch_model to reformat the files.
    c. git add . to mark any conflicts as resolved.
    d. git rebase --continue to continue with the rebase. To avoid the popup to edit the commit message see instructions here.
    To make these commands automatically run up to 10 times in a loop, in PowerShell simply run
    1..10 | % {git restore --theirs -s REBASE_HEAD . ; black --fast -t py310 --extend-exclude 'switch_wecc/wecc' ./switch_model ; git add . ; git rebase --continue}

  7. Check that there are no differences between master-with-black-formatted-history and master (you might need to run black on master)

  8. Redo step 4 to 7 but for the wecc branch.

Pandas Sort Deprecated

New installation fails because Pandas deprecated sort() in favor of sort_values() or sort_index(). Should I require a pandas version before 0.20, or attempt to update the calls from sort() to sort_values()?
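For reference, the change in question looks like this (the DataFrame and column name are illustrative):

# pandas < 0.20:
df = df.sort('total_cost')

# pandas >= 0.17 replacement API:
df = df.sort_values('total_cost')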

Removing DumpPower variable

I tried to remove the DumpPower variable, but some examples increased their total cost. The geothermal projects have to lower their baseload power output to exactly match demand in the first timepoints, so more fuel has to be burned in later timepoints.

Rodrigo and I believe that the variable should be eliminated, even if it changes the outputs of some examples (we would update them). A solution to reduce the cost increases would be to allow free power curtailment for baseload projects. Variable O&M costs for operating a baseload plant at a certain dispatch level would need to be accounted for, but less fuel would be burned over the whole horizon of the simulation.

invalid literal for int() with base 10: 'c'

Hi all,

I am interested in using your SWITCH tool for research on alternative ES and generation systems. I tried setting up SWITCH, but am running into the following errors when I run run_tests.py:

invalid literal for int() with base 10: 'c'
invalid literal for int() with base 10: 'c'
invalid literal for int() with base 10: 'c'
invalid literal for int() with base 10: 'c'
invalid literal for int() with base 10: 'c'
invalid literal for int() with base 10: 'c'
invalid literal for int() with base 10: 'c'
invalid literal for int() with base 10: 'c'
invalid literal for int() with base 10: 'c'
invalid literal for int() with base 10: 'c'
invalid literal for int() with base 10: 'c'
invalid literal for int() with base 10: 'c'
invalid literal for int() with base 10: 'c'
To exit: use 'exit', 'quit', or Ctrl-D.
An exception has occurred, use %tb to see the full traceback.

SystemExit: True

I will admit, trying to get SWITCH set up was a bit confusing for me, so this could very well be an issue on my end. To help debug this issue, here are some details about what I have installed and the process I took to set up SWITCH:

Environment:

  • Windows 10
  • Python 2.7.12 :: Anaconda 4.1.1 (32-bit)
  • Spyder IDE

Process to set up SWITCH:

  • Long ago I had tried installing Pyomo, so that was already on my computer. I also had gone through the process to install the GLPK solver. I remember that being a huge headache, but I believe I did get that fully installed based on glpsol --help working from any directory in the Windows command prompt (see http://www.pyomo.org/blog/2015/8/24/checking-glpk-installation). I wish I could remember the exact process I used, but I don't recall. When I type glpsol --version in the command prompt it says: GLPSOL: GLPK LP/MIP Solver, v4.59
  • I forked switch-model/switch to my own GitHub
  • In the Windows command prompt within the switch folder I used: pip install --user -r pip_requirements.txt
  • In the Windows command prompt I used: python setup.py develop
  • Added all switch folders/subfolders to my PYTHONPATH
  • I have tried going to https://projects.coin-or.org/Cbc multiple times, but it keeps timing out

After all of this, I try running run_tests.py in Spyder and it gives me the above errors. Any ideas what the problem could be? Thanks for any help.

Miles

Edit: I forgot to also mention that I have not been able to successfully run make_doc.sh. I downloaded Cygwin, went to the doc folder from there, and tried

bash make_doc.sh

It then gives me:

no Python documentation found for '../\r'

Switch doesn't allow having zero existing builds

Switch currently requires there to be at least one existing build.

Steps to reproduce: Take examples/copperplate0 and modify it by deleting all rows from proj_existing_builds.tab and proj_build_costs.tab. Run switch_mod.solve.

Expected result: No error.

Actual result: Switch gives the following error:

ERROR: Constructing component 'proj_existing_cap' from data={None: 'PROJECT'} failed:
    RuntimeError: Failed to set value for param=proj_existing_cap, index=None, value=PROJECT.
        source error message="Error setting parameter value: Index 'None' is not valid for array Param 'proj_existing_cap'"
Traceback (most recent call last):
...
  File ".../pyomo/core/base/param.py", line 802, in construct
    % (self.cname(True), str(key), str(val), str(msg)) )
RuntimeError: Failed to set value for param=proj_existing_cap, index=None, value=PROJECT.
    source error message="Error setting parameter value: Index 'None' is not valid for array Param 'proj_existing_cap'"

I debugged this and I discovered that Pyomo has a dubious special case for 1-line .tab files. (See the elif len(tmp) == 1 case in pyomo/core/plugins/data/text.py.)

I would expect Pyomo to treat a 1-line file as containing no rows of data (just a row of headings). However, instead Pyomo seems to be converting the file to the declaration param proj_existing_cap := PROJECT.

A possible fix would be to change load_aug() in utilities.py so that it skips any .tab files that contain zero rows. load_aug() already has a check for that, but it's only enabled when it's called with optional=True.
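A rough sketch of that kind of check (load_aug's actual signature may differ; the helper name is hypothetical):

def has_data_rows(path):
    """Return True if a .tab file has at least one row after the header."""
    with open(path) as f:
        next(f, None)  # skip the header row
        return any(line.strip() for line in f)

# inside load_aug(), before handing the file to Pyomo:
# if not has_data_rows(path):
#     return  # skip empty files rather than letting Pyomo misparse them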

Argument parser not correctly working

If arguments that set options for the solve module are passed on the command line along with the python -m switch_mod.solve command, then they are also passed to the model's _ArgumentParser, which aims to get command-line options for other modules. This raises an error, since no module uses the option verbose, for example.

I think a way to get around this would be to modify the define_AbstractModel() and create_model() functions in the utilities module so that their default values are empty lists instead of all of the command-line options: change sys.argv[1:] to []. That way, the solve module will be able to parse the basic options, and none of them will be passed to the model's module option parser.

If developers are writing a new module which requires a command-line option, they can pass an argument list explicitly instead of leaving the default value.

An alternative would be to write some lines to filter the system's argument list for non-solve options and pass that list as the default.

I'm not sure which approach would be more suitable given pending pull request #17. Maybe defaulting to an empty list would be better in the very short term, so that new users can get SWITCH working on their first run.

Streamline dependencies and installation

There are a number of ideas in pull request #115 that won't make it into the 2.0.6 release, so I'm gathering them here for future consideration.

First, unaddressed goals from the start of that pull request:

  • simplify dependencies
    • group optional dependencies into one additional set? (conda install switch_model_extras / pip install switch_model[extras])
    • require users to explicitly install some specialized dependencies ad hoc? (rpy2 and numpy for iterative demand response, psycopg2-binary or eventually google-cloud-bigquery for accessing the Hawaii back-end database)
    • should we include testing-oriented packages (pint, maybe testfixtures?) in extras, or just list them as something to add ad hoc if running tests? (may be moot if we can figure out a way to get our tests to run without needing pint, which pyomo needs during testing but does not include in the distribution)
  • streamline installation instructions
    • recommend this sequence for most users?
      1. conda install -c defaults -c conda-forge switch_model or pip install switch_model (+find glpk somewhere) for most users?
      • these users can look at the source code on github or in their system install directory if needed, which is how I interact with Pyomo
      • in my experience, long-term dependency problems are reduced by making conda-forge the top-priority channel (conda config --add channels new_channel). Should we recommend this for all users, or would that be too much meddling in people's system configuration?
    • recommend this sequence only for people who want to edit the source code and contribute back?
      1. git clone https://github.com/switch-model/switch.git && cd switch
      2. if running anaconda: conda install --only-deps switch_model
      3. pip install --editable . or python setup.py develop (note: conda develop doesn't install command-line scripts or dependencies; see conda/conda-build#1992 (comment))
  • maybe add some commands to give easier access to source code hidden in a site-packages directory:
    • switch find <module>: report file path to specified module (possibly just a submodule within switch_model)
      • then users can view or edit source code via commands like atom `switch find switch_model` , mate `switch find pyomo` or maybe atom `switch find discrete_commit` .
    • switch install examples: copy examples directory to local directory
    • switch solve --trace [[<module>|<file>[:<function>|<line>]], [<module>[:<function>]], ...]: invoke the debugger (a) when particular callbacks are called, (b) when any callback in the specified module is called (if no function specified), or (c) whenever any callback is called (if no modules specified).
      • This could be implemented either via tests when we call callbacks, or by creating tracing wrappers for the specified callbacks, or by subclassing Pdb and setting breakpoints in the right places (that is nice and general but may run slower and ties us to Pdb?)

Autogenerating Documentation

I would really like to have documentation for each module with a number of features:

  • compact but comprehensive list of components defined in that module
  • descriptions of the components where needed
  • concise representation of the rules used to define each component

We have a lot of this in the Supplemental Information file for the Switch 2.0 paper. But ideally these elements would be automatically extracted from the source code, to make sure we cover everything. In the near term, this would be helpful for cross-checking that everything is covered in the Supplemental Information file, and possibly to add some extra detail there (cross-reference Python names for the Latex terms, list the tables that parameters are defined in, etc.) In the longer term, this could help us create web-based documentation that uses Python terms rather than Latex, is more readable than our current source code, and allows cross-referencing terms between modules.

I've been playing around this week to see what might be possible along these lines. I'm pretty sure now that we could do this by inserting our detailed comments throughout the main code in each module, using docstring format (triple-quotes). This text would be similar (often identical) to the comments currently written at the top of each module, but would be dispersed throughout the module instead. Once that is done, I'm pretty sure we (I) could automatically generate documentation pages for each module by following these steps:

  • read the module using AST, then scan through all the standard callback functions in the module and...
  • add docstrings directly to a reStructuredText (rst) file (in order)
  • convert Constraint, Expression, Set, Param, Param.default and Var.bound rules into easier-to-read Python-style expressions and insert them into the rst file (in sequence with the docstrings)
  • insert function definitions into the rst file (in sequence with the docstrings)
  • move component definition expressions just above the rule functions they rely on and add "where Constraint_rule is given by:" to improve readability
  • accumulate lists of Params, Expressions, Vars and Sets and relevant descriptors (indexing set, domain, has default value, etc.)
  • accumulate extra information on Params from the load_aug calls in the load_inputs function (filename where params come from)
  • write summary tables of all components defined in the module at the top of the rst file.

Once this is done, it's fairly easy to convert the rst file into HTML, Tex, PDF, etc.
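As a starting point, here is a minimal sketch of the extraction step using ast (a toy version of the first two bullets above; the rst generation and rule translation would build on this):

import ast

def collect_docstrings(path):
    """Return (module_docstring, {function_name: docstring}) for a source file."""
    with open(path) as f:
        tree = ast.parse(f.read())
    module_doc = ast.get_docstring(tree)
    function_docs = {
        node.name: ast.get_docstring(node)
        for node in ast.walk(tree)
        if isinstance(node, ast.FunctionDef)
    }
    return module_doc, function_docs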

At a later stage, we may be able to use standard translations to convert the Python component names (and eventually maybe even the sum()-type expressions) from this rst file into equivalent Tex terms (i.e., shorter variable names), and write additional Tex-oriented rst files. Then it might be pretty quick work to tweak that to good Tex code and/or we could use a system to retain any translated code that has been manually tweaked, until the corresponding Python code changes (this would probably be something for later in the year).

To help you see how this could work, I have attached a zip file commit_autodoc.zip (but see new version in comment below) containing three example files:

  • a Python module (generators.core.commit.operate) which has had about 2/3 of the comments interleaved in the body
  • an rst file which I created manually from this, but which I think I could produce automatically after a day or two of work
  • an html file generated from the rst file (needs a better stylesheet, but you get the idea)

So my questions are:

  1. does this sound like a good idea?
    • pro: unified management of code and documentation, "literate programming", less mental effort to link the documentation (formerly at the top of the file) to the code
    • con: the code is a little more cluttered (but this may be made up for by having easier-to-read documentation)
  2. Rodrigo has done amazing work to prepare the Tex documentation and Josiah did amazing work to write documentation at the top of all the module files. This would potentially unify those two threads of work. Is that helpful?

switch package name (formerly 'split switch_mod, examples and sandbox')

As currently set up, the switch repository is designed to be used like this (I think):

switch/
    switch_mod/
    user_module.py
    inputs/
    outputs/

This has two problems:

  1. Any files added within the switch/ folder (e.g., user_module.py) need to be added to switch/.gitignore, which then also has to ignore itself, or else collect a list of every module every user has added and ignored. Alternatively, users can leave these files untracked and live with git's warning messages.
  2. If you want user_module.py to be part of your own local repository, you are probably out of luck (since it falls within the area dedicated to the switch repository).

We have partially resolved these issues for switch-hawaii by using directory structures like this:

switch-hawaii-models/ (repo)
    project1/
        switch/ (repo)
            switch_mod/
        switch-hawaii/ (repo)
        user_module.py
        solve.py

In this setup, switch-hawaii-models/.gitignore is set to ignore the switch and switch-hawaii directories within it, so it only includes the project-specific code (as desired). However, this makes the Python path requirements a little complicated. I find switch_mod by including code in project1/solve.py that adds the adjacent switch directory to the Python path before running. Then I can just execute cd switch-hawaii-models/project1; python solve.py.

This works fine, but I would now like to start using the standard switch_mod.solve module. To do that, I need to have both project1/switch and project1 in my python path. It's easy to add project1 to the path simply by cd'ing into that directory before starting. And if switch_mod resided at the top level instead of the second level, it would also be in the path at that point (along with switch-hawaii). i.e., I would like to have a directory structure like this:

switch-hawaii-models/ (repo)
    project1/
        switch_mod/ (repo)
        switch-hawaii/ (repo)
        user_module.py
        (solve.py no longer needed)

This would have a couple of additional advantages:

  • users who don't want the examples don't have to download them, and
  • it would help smooth the path toward making switch a standard python package (in that case, switch would be installed within the site-packages directory, and would always be in the path, but should not include editable examples).

Along with this, I would like to suggest renaming switch_mod to simply switch. I know it's a pain, but it would be a much more natural name for this module.

Increase input file extensions

Hey @josiahjohnston

I was wondering if we could add more extensions for the input files, such as *.csv, *.tsv, etc. I think this would give more flexibility to some users of switch. I can do the pull request for this; it is an easy feature to implement.

Parallelization

I was wondering if there is any way that Switch simulations could be parallelized to speed up runs. It has more to do with Pyomo rather than Switch itself, but I thought this would be the ideal place to ask first.

I've found that there are two main processes that take up most of the time in my simulations: model instantiation (the load_inputs function) and writing the LP file that is passed to Gurobi, the solver. Actual optimization time is of the same order of magnitude as these processes, but it's already parallelized by Gurobi, so no speed-ups seem possible in that area.

Both model instantiation and the pre-solve step (writing the LP file) use only one core of the server and take significant time to complete. Do you know if it's possible to speed them up? As for the pre-solve process, I tried using a direct Python interface to Gurobi (loading the model directly from Pyomo components instead of writing and parsing an LP file), but it was even a bit slower.

Git workflow

I have been pondering the SWITCH git workflow and have some questions/comments:

Why are we always doing pull requests from branches in our own forks of the repo, instead of pushing our changes to a new branch in the same SWITCH repo and doing the pull request from there?

I think it would simplify the work flow, since it would be:

(Option 1) When your changes consist of several commits that make it easier to follow the history:

-Write changes / new features into a new branch of your local SWITCH repo
-Push them to a new branch in the switch_model repo
-Do a Pull Request from that new branch to the master branch
-Make changes/additions according to the peer review

Once everything is approved, then:
-If the master branch received new commits from someone else in the meantime:
----Pull them to your local repo
----Rebase the local master branch into your new_branch to get those commits into the branch's past history
-Rebase your local new_branch into the local master branch to put your commits after master's HEAD (this way you avoid getting that extra merge commit that only says "this is a merge", and you get a linear history)
-Push your local master branch to the remote. I tested it, and GitHub even recognizes that you included all changes from the Pull Request and automatically closes it, tagging it as "merged". The master branch's history becomes linear, with the original commits, then the commits someone else made, and then your new features.
-Delete the remote branch to avoid accumulating branches

(Option 2) When you are making some small or very condensed changes, so only 1 commit is necessary to understand them:

-Write changes / new features into a new branch of your local SWITCH repo
-Push them to a new branch in the switch_model repo
-Do a Pull Request from that new branch to the master branch
-Make changes/additions according to the peer review

Once everything is approved, then:
-Merge and squash the changes into a single commit. This provides a linear history, makes it really easy for inexperienced Git users to include changes (I count myself in that group) and is really quick (only 1 click is needed in the GitHub interface). I did this for my latest commits; the history remains linear and this is effective for small changes.

What do you think about this workflow? I find this option very convenient, since I like experimenting with my Fork and then just manually writing the features I want to add into the main SWITCH repo files. It would be simple if I could just push that new branch and then directly do a Pull Request.

Installation with pyomo 5.7 fails

Dear all,

Thanks for your work on Switch. It seems really interesting.
However, I have failed to install Switch and run the examples. I have installed both from conda and in dev mode with pip install, with the same results. When running most tests or examples, I get the error:

Traceback (most recent call last):
  File "C:\Users\TGi\AppData\Local\Continuum\anaconda3\envs\switch\Scripts\switch-script.py", line 9, in <module>
    sys.exit(main())
  File "C:\Users\TGi\AppData\Local\Continuum\anaconda3\envs\switch\lib\site-packages\switch_model\main.py", line 39, in main
    main()
  File "C:\Users\TGi\AppData\Local\Continuum\anaconda3\envs\switch\lib\site-packages\switch_model\solve.py", line 90, in main
    model = create_model(modules, args=args)
  File "C:\Users\TGi\AppData\Local\Continuum\anaconda3\envs\switch\lib\site-packages\switch_model\utilities.py", line 100, in create_model
    module.define_components(model)
  File "C:\Users\TGi\AppData\Local\Continuum\anaconda3\envs\switch\lib\site-packages\switch_model\generators\core\no_commit.py", line 83, in define_components
    rule=lambda m:
  File "C:\Users\TGi\AppData\Local\Continuum\anaconda3\envs\switch\lib\site-packages\pyomo\core\base\set.py", line 2208, in __init__
    Set.__init__(self, **kwds)
  File "C:\Users\TGi\AppData\Local\Continuum\anaconda3\envs\switch\lib\site-packages\pyomo\core\base\set.py", line 1934, in __init__
    IndexedComponent.__init__(self, *args, **kwds)
  File "C:\Users\TGi\AppData\Local\Continuum\anaconda3\envs\switch\lib\site-packages\pyomo\core\base\indexed_component.py", line 182, in __init__
    Component.__init__(self, **kwds)
  File "C:\Users\TGi\AppData\Local\Continuum\anaconda3\envs\switch\lib\site-packages\pyomo\core\base\component.py", line 402, in __init__
    % ( type(self).__name__, ','.join(sorted(kwds.keys())) ))
ValueError: Unexpected keyword options found while constructing 'AbstractOrderedSimpleSet':
        rule

It seems this has to do with Pyomo. I tried installing older versions of Pyomo instead (4.4.1 and 5.6.4), but then there was another error:

Traceback (most recent call last):
  File "C:\Code\switch\switch-model\tests\utilities_test.py", line 26, in test_save_inputs_as_dat
    return_model=True, return_instance=True
  File "C:\Code\switch\switch-model\switch_model\solve.py", line 90, in main
    model = create_model(modules, args=args)
  File "C:\Code\switch\switch-model\switch_model\utilities.py", line 100, in create_model
    module.define_components(model)
  File "C:\Code\switch\switch-model\switch_model\generators\core\no_commit.py", line 83, in define_components
    rule=lambda m:
  File "C:\Users\TGi\AppData\Local\Continuum\anaconda3\envs\switch\lib\site-packages\pyomo\core\base\set.py", line 2208, in __init__
    Set.__init__(self, **kwds)
  File "C:\Users\TGi\AppData\Local\Continuum\anaconda3\envs\switch\lib\site-packages\pyomo\core\base\set.py", line 1934, in __init__
    IndexedComponent.__init__(self, *args, **kwds)
  File "C:\Users\TGi\AppData\Local\Continuum\anaconda3\envs\switch\lib\site-packages\pyomo\core\base\indexed_component.py", line 182, in __init__
    Component.__init__(self, **kwds)
  File "C:\Users\TGi\AppData\Local\Continuum\anaconda3\envs\switch\lib\site-packages\pyomo\core\base\component.py", line 402, in __init__
    % ( type(self).__name__, ','.join(sorted(kwds.keys())) ))
ValueError: Unexpected keyword options found while constructing 'AbstractOrderedSimpleSet':
        rule

I am using Python 3.7.
Thank you for your help.

T

Support predetermined builds after period start

Hello there!

I'm opening this issue so that we don't forget about something we recently discovered in the Switch code. We don't have time to fix it right now; however, I'd like to keep track of it.

Currently if we have a period named 2020 and we have generation being pre-built in the year 2020, the following code behaves unexpectedly.

def gen_build_can_operate_in_period(m, g, build_year, period):
    if build_year in m.PERIODS:
        online = m.period_start[build_year]
    else:
        online = build_year

When this function is called for generation being pre-built in the year 2020, build_year in m.PERIODS is true despite the project not being a period build. This means the project will retire sooner than normal (i.e. generation pre-built in 2020 will retire before generation pre-built in 2019).

I don't think there's an easy fix, as gen_build_costs.csv currently doesn't distinguish pre-built generation from investment builds, so there's no way to tell the two apart. The best course of action is likely to display a warning/error if pre-build years conflict with a period label.
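The suggested warning might look something like this (PREDETERMINED_GEN_BLD_YRS is assumed to be the set of pre-built (generator, build year) pairs; actual component names may differ):

def check_prebuild_years(m):
    """Warn about predetermined build years that coincide with a period label."""
    for (g, bld_yr) in m.PREDETERMINED_GEN_BLD_YRS:  # assumed set name
        if bld_yr in m.PERIODS:
            print(
                "WARNING: {} has a predetermined build year {} that matches a "
                "period label and will be treated as a period build.".format(g, bld_yr)
            )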

I wonder if the switch code wasn't designed to have pre-build years after a period starts.

Thank you!

Martin

run_tests.py scan fails on fresh installation

Reed had trouble with run_tests.py on a fresh install. I'll ask him to assign himself as well to join in the conversation.

Reeds-MacBook-Air-2:/ reedHaubenstock$ ls -l /Users/reedHaubenstock/Desktop/switch_py
/Users/reedHaubenstock/Desktop/switch_py:
total 72
-rwxr-xr-x@  1 reedHaubenstock  staff    342 Jul  8 20:15 AUTHORS
-rwxr-xr-x@  1 reedHaubenstock  staff   1104 Jul  8 20:15 INSTALL
-rwxr-xr-x@  1 reedHaubenstock  staff  11555 Jul  8 20:15 LICENSE
-rwxr-xr-x@  1 reedHaubenstock  staff    732 Jul  8 20:15 LICENSE.BOILERPLATE
-rwxr-xr-x@  1 reedHaubenstock  staff    956 Jul  8 20:15 README
drwxr-xr-x@ 30 reedHaubenstock  staff   1020 Jul  8 20:15 doc
drwxr-xr-x@  9 reedHaubenstock  staff    306 Jul 14 12:24 examples
-rwxr-xr-x@  1 reedHaubenstock  staff   2257 Jul  8 20:15 run_tests.py
-rw-r--r--+  1 reedHaubenstock  staff   1609 Jul 14 12:19 run_tests.pyc
drwxr-xr-x@ 15 reedHaubenstock  staff    510 Jul  8 20:15 sandbox_dev
drwxr-xr-x@ 32 reedHaubenstock  staff   1088 Jul 14 12:18 switch_mod
drwxr-xr-x@  4 reedHaubenstock  staff    136 Jul  8 20:15 tests
Reeds-MacBook-Air-2:/ reedHaubenstock$ echo $PYTHONPATH
:/Users/reedHaubenstock/Desktop/switch_py
Reeds-MacBook-Air-2:/ reedHaubenstock$ cd Users/reedHaubenstock/Desktop/switch_py 
Reeds-MacBook-Air-2:switch_py reedHaubenstock$ ./run_tests.py

----------------------------------------------------------------------
Ran 0 tests in 0.000s

OK

Improved documentation

Our documentation needs a lot of improvement. Extensive documentation is embedded in the docstrings of most modules and functions, but the pydoc results are crap. When learning the model, the Switch-Mexico team chose to read the code directly instead of the pydoc output and re-write the documentation in LaTeX.

Our working plans for cleaning this up are:

  • Set up Sphinx, and write rst documentation source that is independent of the code. Portions of the docstrings will be migrated over. This will pave the way to writing more how-to guides and high-level documentation that can include equations and figures as needed.
  • Move the component documentation from the pydoc string of each module's define_components() to attributes of the components themselves. See pull request #57 for some discussion. Write some new code to construct a model, then introspect it to describe each component and its documentation (possibly including equations). The module- and component-level documentation could either be compiled into an rst file or otherwise passed into Sphinx.
    • Use the "doc" attribute as a very short text label (<= 1 sentence). This gets printed out in Pyomo dumps & pprints.
    • Add a "rst_doc" attribute as succinct documentation (<= 1 paragraph). This could include formatted equations, but no references to external files/figures. Anything more extensive needs to go in its own rst file.
    • Add a "units" attribute to assist people in manual unit checks. In an ideal world a Pyomo utility library could introspect each equation and ensure that units cancelled out appropriately.
  • Use Sphinx's auto API documentation engine to scan the directories and summarize the API (the remaining pydoc strings, modules, interfaces, etc.)

Note, the component documentation could either go in the doc argument of its definition, or be assigned to an arbitrary attribute immediately after the definition. I'm pretty sure we will have to write some custom code to inspect the Pyomo model, so assigning our extra documentation to our own extra attributes seems like a fairly clean solution.

Versioning conventions

This post assumes you have read Choosing a Versioning Scheme, which recommends Semantic Versioning.

Right now we are in beta phase 2.0.0bX. We'll shift to release candidate 2.0.0rcX around the time the paper is being reviewed, and a full release 2.0.0 sometime after the paper is published. Subsequently, we'll need to iterate version numbers every time we push to PyPI or Anaconda.

Semantic Versioning Summary

Given a version number MAJOR.MINOR.PATCH, increment the:

  1. MAJOR version when you make incompatible API changes,
  2. MINOR version when you add functionality in a backwards-compatible manner, and
  3. PATCH version when you make backwards-compatible bug fixes.

Additional labels for pre-release and build metadata are available as extensions to the MAJOR.MINOR.PATCH format.

Ideally, every validated pull request to the main branch would increment the version number, and there is probably some software available to help automate that. Minimally, we will need to manually increment the version number whenever we introduce a change that requires reformatting/upgrading the input files. Typically, those changes would increment MINOR, but we are currently in beta stage, so we are incrementing the beta suffix instead.

We also have an option to use developer versions 2.0.0devX for playing around with things that are primarily meant for other developers, but I don't know if that will offer any benefit over feature-specific git branches.

Does anyone have questions, comments, amendments or counter-proposals?

Data outputs should be optional

Various standard modules now have post_solve() functions with built-in report-writing behavior. There are no command-line options to control this, so users have no easy way to turn this off.

This can be a problem when running iterative models on an HPC system, e.g., I am currently running a model that solves for 2 years of hourly data in each of 6 study periods, which makes about 1 GB of output per iteration. None of that is needed, because I only use a few kB of diagnostic statistics each iteration, which are written by a different module. But all this output burdens the HPC's network file system and could slow down the iterations -- they only take a couple of minutes each when running a lot of solutions in parallel, but may take much longer when the file system is backlogged.

I am able to turn off a lot of output by leaving out switch_model.reporting, but it's not so easy to turn off the post_solve() functions in the standard modules. I am currently doing that from one of my custom modules via monkey-patching, as follows:

# suppress standard reporting to minimize disk access (ugh)
from importlib import import_module
for module in [
    'balancing.load_zones', 'generators.core.build', 'generators.core.dispatch',
    'generators.extensions.storage'
]:
    imported_module = import_module('switch_model.' + module)
    del imported_module.post_solve

But this is not a good long-term solution.

I would recommend moving these standard outputs into the reporting module, and having it 'duck type' the outputs, i.e., only generate outputs for components that are present in the model, and not worry about the others. Then these outputs can be suppressed simply by omitting the reporting module. This would also move reporting to a higher level ("report this element if present, regardless of what module created it"), which would make the output more standardized (roughly the same outputs whether you use the standard modules or some alternative replacements) and avoid repetition of reporting code in alternative models.
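A sketch of the duck-typing idea (the component names are examples, and save_table stands in for whatever writer the reporting module uses):

def post_solve(instance, outdir):
    # report each known component only if the current model actually defines it
    for component_name in ('BuildGen', 'DispatchGen', 'StateOfCharge'):
        component = getattr(instance, component_name, None)
        if component is None:
            continue  # module not included in this model; nothing to report
        save_table(component, outdir)  # hypothetical writer function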

I would also recommend adding some command-line flags to control the level of reporting -- per-variable outputs, per-expression outputs, only certain variables and expressions, horizontal-table output or only certain horizontal-table output.

Output file names or directories should reflect scenario name

Various functions now produce standardized output files. However, when multiple scenarios are run in the same directory, the later output files usually overwrite the earlier ones, making them irretrievable.

We should modify these output functions so that, if a scenario name is defined in model.options.scenario_name, it is appended to the file name or used to define a subdirectory within outputs/ where all the output files are stored. Either approach would make it possible to save results from multiple scenarios and retrieve them later.

The reporting routines in the switch.hawaii modules append the scenario name to the end of each file. This makes it easy to browse for a particular output file, e.g., to open it in Excel. The subdirectory option would be neater, but would also take a little more digging to inspect the results.
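A minimal sketch of a helper supporting either scheme (assuming model.options.scenario_name may be unset):

import os

def scenario_output_path(model, filename, use_subdir=False):
    # Fall back to the plain outputs/ path when no scenario name is set.
    scenario = getattr(model.options, 'scenario_name', None)
    if scenario is None:
        return os.path.join('outputs', filename)
    if use_subdir:
        # outputs/<scenario>/<filename>
        return os.path.join('outputs', scenario, filename)
    # outputs/<base>_<scenario><ext>, like the switch.hawaii modules
    base, ext = os.path.splitext(filename)
    return os.path.join('outputs', base + '_' + scenario + ext)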

Potential severe bug in storage module

The following lines of code seem to be wrong: with the current implementation, StorageEnergyInstallCosts is identical in every period.

mod.StorageEnergyInstallCosts = Expression(
    mod.PERIODS,
    rule=lambda m, p: sum(
        m.BuildStorageEnergy[g, bld_yr]
        * m.gen_storage_energy_overnight_cost[g, bld_yr]
        * crf(m.interest_rate, m.gen_max_age[g])
        for (g, bld_yr) in m.STORAGE_GEN_BLD_YRS))
mod.Cost_Components_Per_Period.append('StorageEnergyInstallCosts')

I think the correct expression would be similar to how GenCapitalCosts is implemented, where only the build years relevant to each period are included in the sum:

mod.GenCapitalCosts = Expression(
    mod.GENERATION_PROJECTS, mod.PERIODS,
    rule=lambda m, g, p: sum(
        m.BuildGen[g, bld_yr] * m.gen_capital_cost_annual[g, bld_yr]
        for bld_yr in m.BLD_YRS_FOR_GEN_PERIOD[g, p]))

For example, if we consider a project with a 20-year lifetime, the overnight cost is currently annualized over 20 years but charged in every period, possibly covering a lot more than 20 years.
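A sketch of what the fix might look like, reusing the existing BLD_YRS_FOR_GEN_PERIOD set to filter the sum (this assumes that set covers storage projects, which it should, since they are generation projects):

mod.StorageEnergyInstallCosts = Expression(
    mod.PERIODS,
    rule=lambda m, p: sum(
        m.BuildStorageEnergy[g, bld_yr]
        * m.gen_storage_energy_overnight_cost[g, bld_yr]
        * crf(m.interest_rate, m.gen_max_age[g])
        # only count build years whose capacity is still active in period p
        for (g, bld_yr) in m.STORAGE_GEN_BLD_YRS
        if bld_yr in m.BLD_YRS_FOR_GEN_PERIOD[g, p]))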

Permuting reference input sets; Tracking data history

We need more robust methods for re-using reference input sets that let us:
a) specify permutations of inputs to explore a wider space
b) avoid duplicating data on disk
c) produce clean and compact diffs
d) deploy data sets readily
e) track development, history, and the stakeholder approval process
f) easily enable derivative work

These issues of permutation and history tracking may have distinct solutions, but I am wondering if we could design a way to use git and some data organizational conventions to accomplish all of these.
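To make that concrete, here is one possible convention (just a sketch; the directory names and helper are invented): keep each reference input set in its own git repository, and express each permutation as a small overlay directory holding only the files that differ from the base, composed at run time:

import os
import shutil

def build_inputs(base_dir, overlay_dir, target_dir):
    """Assemble a scenario's inputs: copy the shared base set, then
    overwrite just the files the permutation changes (a flat directory
    of CSVs is assumed)."""
    shutil.copytree(base_dir, target_dir)  # target_dir must not exist yet
    for name in os.listdir(overlay_dir):
        shutil.copy(os.path.join(overlay_dir, name),
                    os.path.join(target_dir, name))

Since each permutation stores only its deltas, diffs stay compact (c), data is not duplicated on disk (b), and the git history doubles as a record of development and stakeholder sign-off (e).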

Matthias constantly has use cases for a-c.

Sergio and his team are actively working through d as they compile data for Switch-Mexico. They have a chance to do it well and could use some help figuring out how to navigate the tools. They are using Google Drive; I suggested moving to git (and GitHub, if their repositories fit within its size restrictions).

I've had separate conversations with Mark and Ana about these issues lately.

That's it for now. I wanted to start a thread on this topic before leaving on vacation for the week.

-Josiah

Resolution of Missing Upper Bound Issue When Solving With GLPK


Reed: Hey Josiah,
I just wanted to give you an update on investigating the GLPK issue.

I first ran all of the example models from the examples folder with cbc as the solver, and they all worked. Then I tried to run each of them using GLPK as the solver, which resulted in 3zone_toy, copperplate1, and discrete_build producing errors, while copperplate0, custom_extension, and production_cost_models ran without error.

Furthermore, the errors from 3zone_toy, copperplate1, and discrete_build were all actually the same error: "missing upper bound". I then looked into the GLPK code to see where this error message was being produced and found that it comes from the parsing_bounds function in the glpcpx.c file. The token found after the "<=" (less than or equal to) token was a symbolic-name token, which doesn't match any of the if statements and falls through to the else case that throws the error we see.

I haven't been able to figure out more than that, but I will keep working on it. In all honesty, I'm not sure whether I should be examining the Python code, how Pyomo works, or how GLPK works to find out what is going wrong, but I'll check the web to see if anything similar has happened to anyone else.


Josiah: That's weird. I think the problem lies in the fuel_markets module, because it only appears in the problematic examples. The strange thing is that neither of the two constraints in fuel_markets uses a "<="; both are equality constraints.

I'm not sure of the best way to proceed either, so it makes sense to check the web for similar problems and/or make a simple example that reproduces this error and post it as a question on the developers' list or Stack Overflow.


Reed: I haven't tried looking into the fuel_markets module, but I'll look into it when I get the chance.

I think the error is fairly simple, but I'm not sure how to fix it. The actual error I was getting looked something like this:

ERROR: "[base]/site-packages/pyomo/opt/base/solvers.py", 428, solve
    Solver (glpk) returned non-zero return code (1)
ERROR: "[base]/site-packages/pyomo/opt/base/solvers.py", 433, solve
    Solver log:
    GLPSOL: GLPK LP/MIP Solver, v4.55
    Parameter(s) specified in the command line:
     --write /var/folders/fp/86bgl9md7q90fc9jztz5m4980000gn/T/tmpfKTxhX.glpk.raw
     --wglp /var/folders/fp/86bgl9md7q90fc9jztz5m4980000gn/T/tmpA8GzVb.glpk.glp
     --cpxlp /var/folders/fp/86bgl9md7q90fc9jztz5m4980000gn/T/tmpKOUxAv.pyomo.lp
    Reading problem data from '/var/folders/fp/86bgl9md7q90fc9jztz5m4980000gn/T/tmpKOUxAv.pyomo.lp'...
    /var/folders/fp/86bgl9md7q90fc9jztz5m4980000gn/T/tmpKOUxAv.pyomo.lp:5257: missing upper bound name 
    CPLEX LP file processing error
Traceback (most recent call last):
  File "solve.py", line 30, in <module>
    results = opt.solve(switch_instance, keepfiles=False, tee=False)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/pyomo/opt/base/solvers.py", line 435, in solve
    "Solver (%s) did not exit normally" % self.name )
pyutilib.common._exceptions.ApplicationError: Solver (glpk) did not exit normally

So I went into the file /var/folders/fp/86bgl9md7q90fc9jztz5m4980000gn/T/tmpKOUxAv.pyomo.lp and looked at line 5257 to see what was causing the error, and sure enough the line was just "0 <= x1294 <= inf". For some reason GLPK only seems to recognize infinity when it is preceded by a + or - sign, which is kind of interesting because the 5,256 lines before it contained plenty of +inf's that didn't cause GLPK any problems.

Thanks for the advice regarding what to do next, and I might end up posting something on Stack Exchange if I can't figure it out.


Josiah: I replicated your process on copperplate1, edited that line from inf to +inf, and got glpsol to solve it, which supports your assessment:

siah-macbookpro:copperplate1 josiah$ ./solve.py 
ERROR: "[base]/site-packages/pyomo/opt/base/solvers.py", 428, solve
    Solver (glpk) returned non-zero return code (1)
ERROR: "[base]/site-packages/pyomo/opt/base/solvers.py", 433, solve
    Solver log:
    GLPSOL: GLPK LP/MIP Solver, v4.52
    Parameter(s) specified in the command line:
     --write /var/folders/yv/q91xdmmh8xj3z80059_xrqzr0000gn/T/tmptblBpo.glpk.raw
     --wglp /var/folders/yv/q91xdmmh8xj3z80059_xrqzr0000gn/T/tmpcrjXQm.glpk.glp
     --cpxlp /var/folders/yv/q91xdmmh8xj3z80059_xrqzr0000gn/T/tmpWmfzOA.pyomo.lp
    Reading problem data from `/var/folders/yv/q91xdmmh8xj3z80059_xrqzr0000gn/T/tmpWmfzOA.pyomo.lp'...
    /var/folders/yv/q91xdmmh8xj3z80059_xrqzr0000gn/T/tmpWmfzOA.pyomo.lp:337: missing upper bound
    CPLEX LP file processing error
Traceback (most recent call last):
  File "./solve.py", line 26, in <module>
    results = opt.solve(switch_instance, keepfiles=False, tee=False)
  File "/Library/Python/2.7/site-packages/pyomo/opt/base/solvers.py", line 435, in solve
    "Solver (%s) did not exit normally" % self.name )
pyutilib.common._exceptions.ApplicationError: Solver (glpk) did not exit normally
siah-macbookpro:copperplate1 josiah$ bbedit /var/folders/yv/q91xdmmh8xj3z80059_xrqzr0000gn/T/tmpWmfzOA.pyomo.lp
siah-macbookpro:copperplate1 josiah$ glpsol --cpxlp /var/folders/yv/q91xdmmh8xj3z80059_xrqzr0000gn/T/tmpWmfzOA.pyomo.lp
GLPSOL: GLPK LP/MIP Solver, v4.52
Parameter(s) specified in the command line:
 --cpxlp /var/folders/yv/q91xdmmh8xj3z80059_xrqzr0000gn/T/tmpWmfzOA.pyomo.lp
Reading problem data from `/var/folders/yv/q91xdmmh8xj3z80059_xrqzr0000gn/T/tmpWmfzOA.pyomo.lp'...
52 rows, 44 columns, 111 non-zeros
342 lines were read
GLPK Simplex Optimizer, v4.52
52 rows, 44 columns, 111 non-zeros
Preprocessing...
23 rows, 27 columns, 55 non-zeros
Scaling...
 A: min|aij| =  5.586e-01  max|aij| =  4.383e+03  ratio =  7.846e+03
GM: min|aij| =  6.357e-01  max|aij| =  1.573e+00  ratio =  2.474e+00
EQ: min|aij| =  4.067e-01  max|aij| =  1.000e+00  ratio =  2.459e+00
Constructing initial basis...
Size of triangular part is 23
      0: obj =   2.029492572e+07  infeas =  7.797e+01 (0)
*     9: obj =   3.939574534e+07  infeas =  0.000e+00 (0)
*    15: obj =   2.667139398e+07  infeas =  1.339e-30 (0)
OPTIMAL LP SOLUTION FOUND
Time used:   0.0 secs
Memory used: 0.1 Mb (74014 bytes)
siah-macbookpro:copperplate1 josiah$ 

I skimmed through Pyomo's interface to GLPK (pyomo/solvers/plugins/solvers/GLPK.py), but had trouble tracking down where the cpxlp file actually gets written out. I finally grep'ed their codebase for cpxlp and found pyomo/repn/plugins/cpxlp.py. Searching that file for 'inf' brought me to lines 771-774, which have an if statement that translates no upper bound to " <= +inf\n" and otherwise prints the value of the upper bound. In Python, positive infinity is printed as 'inf'. I edited the if statement to be:

            if vardata_ub is not None and value(vardata_ub) != float('inf'):
                output_file.write(ub_string_template % value(vardata_ub))
            else:
                output_file.write(" <= +inf\n")

and that fixed the problem.

I looked at fuel_markets.py and found the source of the problem. The upper bound on the fuel you can purchase in a given supply tier at a particular price defaults to infinity if no upper bound is provided, effectively giving an unlimited supply at that price. This parameter is used as the upper bound for the decision variable FuelConsumptionByTier and is translated into a constraint. Since the upper bound was defined, Pyomo wrote out the Python representation of its value, which comes to 'inf'. I added some logic to the lines that specify the upper bound to replace float('inf') with None, and it works now. I just pushed the fix to GitHub.
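The fix would have looked something like this (component names approximate those in fuel_markets.py; this is a reconstruction, not the actual commit):

# inside define_components(mod):
def fuel_tier_bounds(m, rfm, p, st):
    # Treat an infinite tier limit as "no upper bound" (None) so the LP
    # writer never emits a bare 'inf'.
    limit = m.rfm_supply_tier_limit[rfm, p, st]
    return (0, None if limit == float('inf') else limit)

mod.FuelConsumptionByTier = Var(
    mod.RFM_SUPPLY_TIERS, bounds=fuel_tier_bounds)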

About installing the "advanced" dependencies

Dear sir,
I want to use certain advanced features of Switch. When I install the advanced dependencies via "pip install --upgrade --editable .[advanced]", the following situation occurs:
(base) C:\ProgramData\Anaconda2\Lib\site-packages\switch-2.0.2>pip install --upgrade --editable .[advanced]
DEPRECATION: Python 2.7 will reach the end of its life on January 1st, 2020. Please upgrade your Python as Python 2.7 won't be maintained after that date. A future version of pip will drop support for Python 2.7.
Obtaining file:///C:/ProgramData/Anaconda2/Lib/site-packages/switch-2.0.2
Requirement already satisfied, skipping upgrade: Pyomo>=4.4.1 in c:\programdata\anaconda2\lib\site-packages (from switch-model==2.0.2) (5.6.5)
Requirement already satisfied, skipping upgrade: testfixtures in c:\programdata\anaconda2\lib\site-packages (from switch-model==2.0.2) (6.9.0)
Requirement already satisfied, skipping upgrade: pandas in c:\programdata\anaconda2\lib\site-packages (from switch-model==2.0.2) (0.24.2)
Requirement already satisfied, skipping upgrade: numpy in c:\programdata\anaconda2\lib\site-packages (from switch-model==2.0.2) (1.16.2)
Requirement already satisfied, skipping upgrade: scipy in c:\programdata\anaconda2\lib\site-packages (from switch-model==2.0.2) (1.2.1)
Collecting rpy2 (from switch-model==2.0.2)
Using cached https://files.pythonhosted.org/packages/8d/7c/826eb74dee57e54608346966ed931674b521cf098759647ed1a103ccfa79/rpy2-3.0.4.tar.gz
Complete output from command python setup.py egg_info:
rpy2 is no longer supporting Python < 3.Consider using an older rpy2 release when using an older Python release.

----------------------------------------

Command "python setup.py egg_info" failed with error code 1 in c:\users\kevin_xo\appdata\local\temp\pip-install-3etqdj\rpy2\

So what should I do? Thanks a lot!
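The quoted error message itself points at the usual workaround: on Python 2, pin an rpy2 release that predates the Python-3-only series before installing the extras. For example (the exact version cutoff below is an assumption; verify it against rpy2's release notes):

pip install "rpy2<2.9"
pip install --upgrade --editable .[advanced]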

Compatibility with Pyomo 5.6.1

Pyomo 5.6.1 introduces some changes that can break Switch models. One is that it adds a util object to pyomo.environ, which masks a util module imported in switch_model.hawaii.save_results, used by the main Hawaii model (users get "AttributeError: 'module' object has no attribute 'write_table'").
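The mechanics of the clash, roughly (module paths here are assumptions based on the description above):

from switch_model.hawaii import util   # local helper providing write_table()
from pyomo.environ import *            # Pyomo 5.6.1+ also exports 'util'

# The star-import rebinds the name, so util.write_table(...) now resolves
# against pyomo.environ.util and raises the AttributeError above.

# One fix: rebind the local module after the star-import.
import switch_model.hawaii.util as util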

I also saw problems (maybe the same one) when trying to run the advanced demand response model with Pyomo 5.6.1.

A quick fix is to roll back to Pyomo 5.1.1, but that is not sustainable long-term.
