
ipet's People

Contributors

alexhoen, ambros-gleixner, fschloesser, fserra, gregorch, jakobwitzig, leoneifler, sofranac-boro, spoorendonk


ipet's Issues

Wrong classification of instances as better

Two issues; I am not sure whether they are related:

  1. In the MINLP evaluation of the release report, some solved_not_verified instances were classified as better (-> limit) because of a bug having to do with the =bestdual= status in the solu file.

  2. I just encountered the same in this MIP comparison:
    https://rubberband.zib.de/result/AWBY7plxU7LMof4FxoM9?compare=AWFres7tqjghvac026R1#summary
    4 instances (e.g. csched07) are also marked as better instead of solved_not_verified.

Suspicion: Do we need to reimport the old runs?

@fschloesser

Evaluation may try to construct an index automatically.

I don't know yet how this would work exactly, but for a start we could pick the values from a fixed set of keys that are usually present (like ProblemName, Settings, Seeds, Version, LPSolver, LogFileName), use the number of distinct values of each, and check which combinations lead to an indexing that pivots as much as possible into the rows (so that the table fits the screen).
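The distinct-value heuristic described above could be sketched as follows; the helper name guess_index and the plain list-of-dicts row format are assumptions for illustration, not ipet API:

```python
from itertools import combinations

# keys that are "usually present", as listed above
CANDIDATE_KEYS = ["ProblemName", "Settings", "Seeds", "Version", "LPSolver", "LogFileName"]

def guess_index(rows, candidates=CANDIDATE_KEYS):
    """Pick the first/smallest combination of candidate key columns that
    identifies as many rows as possible uniquely (rows must be nonempty)."""
    present = [c for c in candidates if all(c in r for r in rows)]
    best, best_score = None, -1.0
    for r in range(1, len(present) + 1):
        for combo in combinations(present, r):
            # fraction of rows uniquely identified by this column combination
            distinct = {tuple(row[c] for c in combo) for row in rows}
            score = len(distinct) / len(rows)
            if score > best_score:  # strict comparison prefers fewer columns
                best, best_score = list(combo), score
        if best_score == 1.0:      # a unique index was found; stop early
            break
    return best
```

A single unique column wins over any larger combination, which keeps the pivoted table as long and narrow as possible.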

Defaultgroup

You set the defaultgroup in the evaluation GUI via "Arg1:Arg2 ...", which is then parsed and represented as "('Arg1', 'Arg2')". The representation in the input field should not change.

Improving ipet gui

Some bugs and missing convenience features I noticed while using ipet to evaluate SCIP test runs, primarily relating to the GUI. Note that the GUI used was not the one from the ipet-gui script, but the ipet-evaluate script with the argument -A.

First, a checklist and overview of the points. A more detailed explanation of every point follows below.

Bugs and missing convenience features in ipet gui

  • Filter groups filtertype 'union' not working
  • Default columns not readable
  • Order of columns and filter groups fixed
  • Adding children pops up all subtrees

-- Filter groups filtertype 'union' not working --
Setting a filter group's filtertype to 'union' from 'intersection' does nothing. It still generates the intersection of its filters.
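For reference, the expected semantics can be stated in a few lines; this is a minimal sketch with a hypothetical function name, not the actual ipet filter classes: 'intersection' keeps rows that pass all filters, 'union' keeps rows that pass at least one.

```python
def apply_filtergroup(rows, filters, filtertype="intersection"):
    """Combine boolean filter predicates over rows.

    'union' should OR the individual filter results; 'intersection' (the
    default) ANDs them.  The reported bug is that 'union' behaves like
    'intersection'.
    """
    if filtertype == "union":
        return [r for r in rows if any(f(r) for f in filters)]
    return [r for r in rows if all(f(r) for f in filters)]
```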

-- Default columns not readable --
Adding a new column and saving it to the evaluation file without changing anything will render the evaluation file unusable; this can only be fixed by opening the .xml in an editor and deleting the column. This might be intended, but it is a trap nonetheless.

-- Order of columns and filter groups fixed --
Being able to change the order of columns and filter groups would be a nice convenience feature. Nothing fancy like drag and drop is needed, just an up and a down arrow would already be amazing.

-- Adding children pops up all subtrees --
Adding a child to anything will pop up every hidden subtree instead of just the selected one.

@GregorCH
@fschloesser

function to "average" statuses

Hi.
When analyzing runs with seeds, I wanted to obtain a column with the "averaged" status of all runs and had to implement this extra functionality myself. I leave my implementation here in case it is of any use:

def getAverageStatus(listofstatuses):
    '''
    Given a list of statuses, return the worst one. From worst to best, the
    statuses are:
    fail_abort, readerror, fail_dual_bound, failed_objective_value,
    failed_solution_on_infeasible_instance, failed_solution_infeasible, fail,
    limit statuses other than timeout, timeout, unknown, solved_not_verified, ok
    '''
    bad_statuses = ["fail_abort", "readerror", "fail_dual_bound",
            "failed_objective_value", "failed_solution_on_infeasible_instance",
            "failed_solution_infeasible", "fail"]
    good_statuses = ["timeout", "unknown", "solved_not_verified", "ok"]
    return_status = "ok"
    for status in listofstatuses:
        if return_status in bad_statuses:
            if status in bad_statuses and bad_statuses.index(status) < bad_statuses.index(return_status):
                return_status = status
            continue
        if status in bad_statuses:
            return_status = status
            continue
        if return_status in good_statuses:
            if status not in good_statuses or good_statuses.index(status) < good_statuses.index(return_status):
                return_status = status
    return return_status

Old README

The README should be deleted or updated

Incorrect primal integral result for SCIP

I am finding that the primal integral values reported by ipet are incorrect. The issue seems to be related to when the First Solution is captured: it is added to the end of the primal bound list, and when the primal integral is calculated it is supposed to be moved to the start of the list. However, this does not happen.

I don't understand the getProcessPlotData function, so I am unable to find a fix for this issue. The current workaround is to simply ignore the table entry "First Solution"; by doing so, a good approximation of the primal integral is calculated.

problem with writing data and then reading as external data

A status of fail (solution infeasible) is written as three whitespace-separated words when writing with ipet-evaluate.py blabla -f txt, which makes it impossible to read the file back in with ipet-evaluate.py blabla -E inst_combined.txt (I get something like ValueError: Expected 9 fields in line 5, saw 17)
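One possible fix, sketched here with the standard csv module (the helper names are placeholders, not ipet functions): quote fields when writing the whitespace-delimited table, so that a multi-word status survives the round trip as a single field.

```python
import csv
import io

def write_rows(rows, out):
    """Write whitespace-delimited rows, quoting any field that contains the
    delimiter, so that e.g. 'fail (solution infeasible)' stays one field."""
    writer = csv.writer(out, delimiter=" ", quoting=csv.QUOTE_MINIMAL)
    writer.writerows(rows)

def read_rows(inp):
    """Read the rows back; csv.reader honors the quoting."""
    return list(csv.reader(inp, delimiter=" "))
```

A round trip through a StringIO then preserves the field count regardless of spaces inside the status string.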

Bug - loading output files from gui

Running ipet-gui, loading readers-example.xml and evaluation.xml and subsequently selecting

Experiment -> Load Output Files -> check.mipdev-complete.scip-4.0.0.2.linux.x86_64.gnu.opt.spx2.none.M610.default.out

leads to:

Traceback (most recent call last):
File "/nfs/OPTI/bzfviern/workspace/ipet/venv/lib/python3.5/site-packages/ipet-0.0.9-py3.5.egg/ipetgui/IPETParserWindow.py", line 120, in updateView
if idx > linesend:
TypeError: unorderable types: int() > NoneType()

@GregorCH
@fschloesser

Computing better mean integral plots

The current code for plotting mean integrals (ipet/misc/integrals.py) uses a fixed number of equidistant points to compute the average primal and dual gap as a function of time. However, one can do better by

  1. collecting all the histories
  2. storing them in a priority queue sorted by their next point in time when a primal (dual) bound changes
  3. in each step, pop the history with the smallest next point in time from the queue, and append to the mean integral the new point in time and the marginal change of the mean function

In this way, one would obtain an "exact" plot of the primal and dual bound histories that is arguably faster than the current method.
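The three steps above could be sketched as follows; the function name and the (time, gap) step-history format are assumptions for illustration:

```python
import heapq

def mean_gap_curve(histories):
    """Merge per-instance (time, gap) step histories into an exact mean-gap
    step function, visiting only the times where some history changes.

    histories: list of lists of (time, gap) pairs, each sorted by time and
    starting at time 0.0.
    """
    n = len(histories)
    current = [h[0][1] for h in histories]  # current gap of each history
    mean = sum(current) / n
    curve = [(0.0, mean)]
    # priority queue of (next_change_time, history_index, position)
    heap = [(h[1][0], i, 1) for i, h in enumerate(histories) if len(h) > 1]
    heapq.heapify(heap)
    while heap:
        t, i, pos = heapq.heappop(heap)
        # apply the marginal change of history i to the running mean
        new_gap = histories[i][pos][1]
        mean += (new_gap - current[i]) / n
        current[i] = new_gap
        curve.append((t, mean))
        if pos + 1 < len(histories[i]):
            heapq.heappush(heap, (histories[i][pos + 1][0], i, pos + 1))
    return curve
```

Each change event is processed once, so the cost is linear in the total number of bound changes (up to the logarithmic heap factor), independent of any sampling resolution.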

Additional Information for Rubberband

Rubberband assumes the existence of some particular columns that ipet used to provide, such as several "Constaints_Number_*" columns.
These should be added to the SCIP Solver class.
(Maybe some code can be reactivated from the previous version of IPET?)

Unused fields in gui

There are some fields in the GUI that are not used anymore and should be removed.
These include 'sortlevel' and 'groupkey' in the Evaluation, and possibly 'translevel' in Columns?

Aggregating multiple columns with a regular expression fails

IPET-Version: 9b26c87
virtualenv-Version: 15.0.1
Evaluation file: bug.xml.txt (github won't allow the xml extension)
Example out-file: check.lookahead_bug_60.scip-lookahead-1.3-15s.linux.x86_64.gnu.opt.cpx.none.M620.ALAB.out.txt

Trying to aggregate multiple columns with a regular expression fails with the following error:

2017-09-12 13:46:14,399 - INFO - No external data file
2017-09-12 13:46:14,836 - WARNING - An error occurred for the column 'OtherCutoffs':
{'regex': 'BranchingRules_Cutoffs_.*', 'active': 'True', 'reduction': 'meanOrConcat', 'name': 'OtherCutoffs'}
Traceback (most recent call last):
  File "/nfs/optimi/kombadon/bzfschuc/ipet/master-venv/lib/python3.5/site-packages/pandas/indexes/base.py", line 1876, in get_loc
    return self._engine.get_loc(key)
  File "pandas/index.pyx", line 137, in pandas.index.IndexEngine.get_loc (pandas/index.c:4027)
  File "pandas/index.pyx", line 157, in pandas.index.IndexEngine.get_loc (pandas/index.c:3891)
  File "pandas/hashtable.pyx", line 675, in pandas.hashtable.PyObjectHashTable.get_item (pandas/hashtable.c:12408)
  File "pandas/hashtable.pyx", line 683, in pandas.hashtable.PyObjectHashTable.get_item (pandas/hashtable.c:12359)
KeyError: 'OtherCutoffs'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/nfs/optimi/kombadon/bzfschuc/ipet/master-venv/lib/python3.5/site-packages/pandas/core/internals.py", line 3350, in set
    loc = self.items.get_loc(item)
  File "/nfs/optimi/kombadon/bzfschuc/ipet/master-venv/lib/python3.5/site-packages/pandas/indexes/base.py", line 1878, in get_loc
    return self._engine.get_loc(self._maybe_cast_indexer(key))
  File "pandas/index.pyx", line 137, in pandas.index.IndexEngine.get_loc (pandas/index.c:4027)
  File "pandas/index.pyx", line 157, in pandas.index.IndexEngine.get_loc (pandas/index.c:3891)
  File "pandas/hashtable.pyx", line 675, in pandas.hashtable.PyObjectHashTable.get_item (pandas/hashtable.c:12408)
  File "pandas/hashtable.pyx", line 683, in pandas.hashtable.PyObjectHashTable.get_item (pandas/hashtable.c:12359)
KeyError: 'OtherCutoffs'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/nfs/optimi/kombadon/bzfschuc/ipet/master-venv/bin/ipet-evaluate", line 4, in <module>
    __import__('pkg_resources').run_script('ipet==0.0.9', 'ipet-evaluate')
  File "/nfs/optimi/kombadon/bzfschuc/ipet/master-venv/lib/python3.5/site-packages/pkg_resources/__init__.py", line 719, in run_script
    self.require(requires)[0].run_script(script_name, ns)
  File "/nfs/optimi/kombadon/bzfschuc/ipet/master-venv/lib/python3.5/site-packages/pkg_resources/__init__.py", line 1504, in run_script
    exec(code, namespace, namespace)
  File "/nfs/optimi/kombadon/bzfschuc/ipet/master-venv/lib/python3.5/site-packages/ipet-0.0.9-py3.5.egg/EGG-INFO/scripts/ipet-evaluate", line 205, in <module>
    theeval.evaluate(ExperimentManagement.getExperiment())
  File "/nfs/optimi/kombadon/bzfschuc/ipet/master-venv/lib/python3.5/site-packages/ipet-0.0.9-py3.5.egg/ipet/evaluation/IPETEvalTable.py", line 1394, in evaluate
    reduceddata = self.reduceToColumns(data, reduceddata)
  File "/nfs/optimi/kombadon/bzfschuc/ipet/master-venv/lib/python3.5/site-packages/ipet-0.0.9-py3.5.egg/ipet/evaluation/IPETEvalTable.py", line 912, in reduceToColumns
    raise e
  File "/nfs/optimi/kombadon/bzfschuc/ipet/master-venv/lib/python3.5/site-packages/ipet-0.0.9-py3.5.egg/ipet/evaluation/IPETEvalTable.py", line 909, in reduceToColumns
    df_long, df_target = col.getColumnData(df_long, df_target, evalindexcols)
  File "/nfs/optimi/kombadon/bzfschuc/ipet/master-venv/lib/python3.5/site-packages/ipet-0.0.9-py3.5.egg/ipet/evaluation/IPETEvalTable.py", line 506, in getColumnData
    df_long[self.getName()] = result
  File "/nfs/optimi/kombadon/bzfschuc/ipet/master-venv/lib/python3.5/site-packages/pandas/core/frame.py", line 2348, in __setitem__
    self._set_item(key, value)
  File "/nfs/optimi/kombadon/bzfschuc/ipet/master-venv/lib/python3.5/site-packages/pandas/core/frame.py", line 2415, in _set_item
    NDFrame._set_item(self, key, value)
  File "/nfs/optimi/kombadon/bzfschuc/ipet/master-venv/lib/python3.5/site-packages/pandas/core/generic.py", line 1459, in _set_item
    self._data.set(key, value)
  File "/nfs/optimi/kombadon/bzfschuc/ipet/master-venv/lib/python3.5/site-packages/pandas/core/internals.py", line 3353, in set
    self.insert(len(self.items), item, value)
  File "/nfs/optimi/kombadon/bzfschuc/ipet/master-venv/lib/python3.5/site-packages/pandas/core/internals.py", line 3454, in insert
    placement=slice(loc, loc + 1))
  File "/nfs/optimi/kombadon/bzfschuc/ipet/master-venv/lib/python3.5/site-packages/pandas/core/internals.py", line 2461, in make_block
    return klass(values, ndim=ndim, fastpath=fastpath, placement=placement)
  File "/nfs/optimi/kombadon/bzfschuc/ipet/master-venv/lib/python3.5/site-packages/pandas/core/internals.py", line 84, in __init__
    len(self.mgr_locs)))
ValueError: Wrong number of items passed 13, placement implies 1

Evaluations with respect to reference run

In the aggregated Ipet table I would like to display

  • number of instances faster/slower by a certain threshold
  • create a filter group with all instances that have the same solving path as the default, heuristically identified by same number of nodes and LP iterations

I guess that both require a similar extension to how Ipet currently computes evaluations. I assign this to @GregorCH for now.

@fschloesser

issues I had during installation

Hey Gregor

good to see you here.

I just want to comment on some things I had to do to get ipet working again:
In scripts/ipet-gui.py I changed qipetapplication = QIPETApplication() to qipetapplication = QIPETApplication.QIPETApplication().
In install-pyqt4-in-virtual-environment.sh I had to use #!/bin/bash as the first line.
In ipet/Experiment.py

     if arguments.saveexperiment is not False:
-        experiment.saveToFile(arguments.experimentfile)
+        experiment.saveToFile(arguments.saveexperiment)

Display evaluated tables in evaluation browser

  • implement a data model for the data frames that the evaluation produces
  • use the Qt widget for displaying tables
  • enable automatic updates
  • add a boolean parameter for automatic updates
  • add a button for manual reevaluation

Tests

Especially for ipet-evaluate, there should be tests against mistakes in user input.

Create a custom formatter for tables

The formatter class should be representable as XML element tree. It should be able to do the following:

  • allow to transpose the resulting data frame
  • reorder and delete columns that are not necessary (this can be done in one step by keeping a list of the necessary columns)
  • allow to rename the axes. The names must pass the tex-conversion.
  • allow to format the precision for individual columns.

ipet cannot handle empty lines in a solution file

When a solution file has empty lines I get

Traceback (most recent call last):
  File "<my-path>/ipet/scripts/ipet-parse", line 140, in <module>
    experiment.addSoluFile(solufile)
  File "<my-path>/ipet/ipet/Experiment.py", line 109, in addSoluFile
    self.validation = Validation(solufilename, self.gaptol)
  File "<my-path>/ipet/ipet/validation/Validation.py", line 51, in __init__
    self.referencedict = self.readSoluFile(solufilename)
  File "<my-path>/ipet/ipet/validation/Validation.py", line 80, in readSoluFile
    marker = spline[0]
IndexError: list index out of range

It would be nice if this easy case could be handled by ipet without a crash.
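A hedged sketch of the fix: skip blank lines before indexing into the split result. The record layout below (marker, name, optional value) is an assumption based on the =opt= lines shown elsewhere in this document, not the exact readSoluFile implementation.

```python
def read_solu_lines(lines):
    """Parse solu-file lines, skipping blank lines instead of crashing."""
    referencedict = {}
    for line in lines:
        spline = line.split()
        if not spline:            # blank or whitespace-only line: skip
            continue              # (the crash came from spline[0] here)
        marker = spline[0]        # e.g. '=opt=', '=inf=', '=best='
        name = spline[1]
        value = float(spline[2]) if len(spline) > 2 else None
        referencedict[name] = (marker, value)
    return referencedict

def read_solu_file(solufilename):
    with open(solufilename) as f:
        return read_solu_lines(f)
```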

Filter throws unorderable type error

This happens when a filter refers to a column that was discarded earlier, i.e. an original column that is not used in the evaluation's columns. The filter then treats the column name as a string and cannot compare it to any values.
Is this behaviour intended?

Bug - evaluating .trn files from console with default evaluation.xml

Running

ipet-evaluate -t check.mipdev-complete.scip-4.0.0.2.linux.x86_64.gnu.opt.spx2.none.M610.default.trn -e evaluation.xml

leads to:

Traceback (most recent call last): File "/nfs/OPTI/bzfviern/workspace/ipet/venv/bin/ipet-evaluate", line 4, in <module> __import__('pkg_resources').run_script('ipet==0.0.9', 'ipet-evaluate')
[...]
File "/nfs/OPTI/bzfviern/workspace/ipet/venv/lib/python3.5/site-packages/ipet-0.0.9-py3.5.egg/ipet/evaluation/IPETEvalTable.py", line 728, in addComparisonColumns compcol = dict(list(grouped))[self.defaultgroup] KeyError: 'benchmark.gurobi.out'

However, replacing the line

Evaluation defaultgroup="benchmark.gurobi.out" index="ProblemName LogFileName"

from evaluation.xml with

Evaluation comparecolformat="%.3f" defaultgroup="default" evaluateoptauto="True" groupkey="Status" index="ProblemName Settings" indexsplit="-1" sortlevel="0"

resolves the issue.

@GregorCH
@fschloesser

Finish interactive parsing tool

  • enable out file display, one instance at a time
  • write semi-automated detection of Custom Reader attributes for text selection
  • add a user-prefix parameter to facilitate the quick generation of custom readers

Levelonedf unused

If we will never use this again, it should be removed from the code.

KeyError if index has nonexistent column name

If the user specifies a column as index that doesn't exist, a KeyError is thrown.
What is the way to handle this? We could
a) throw an error (one that explains the problem to the user better than the current one) and abort, or
b) warn and try to fix this by either deleting the faulty column specifier or generating a new index from scratch.
I would go with variant a).
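Variant a) could look like this; the helper name is hypothetical, not existing ipet code:

```python
def validate_index(indexcols, datacols):
    """Raise a descriptive error when the user-specified index refers to
    columns that do not exist in the data, instead of a bare KeyError later."""
    missing = [c for c in indexcols if c not in datacols]
    if missing:
        raise KeyError(
            "index column(s) %s not found in the data; available columns are: %s"
            % (missing, sorted(datacols)))
    return indexcols
```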

Better solution for requirements

Currently, we have requirements.txt as well as requirements_3.9.txt. With even newer Python versions (e.g. 3.11) neither works, and I had to bump the numpy and scipy versions to newer ones than in requirements_3.9.txt.

Does anyone see a good solution for making the pip install work with all these different versions? I can code it, but I don't really know a clean solution right now (maybe having a bunch of these requirements files and choosing based on the Python version?).
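One option that avoids maintaining multiple requirements files is PEP 508 environment markers, which pip evaluates per interpreter; a single requirements.txt can then branch on the Python version. The version bounds below are placeholders for illustration, not tested pins:

```
numpy<1.22; python_version < "3.9"
numpy>=1.22,<1.24; python_version >= "3.9" and python_version < "3.11"
numpy>=1.24; python_version >= "3.11"
scipy<1.8; python_version < "3.9"
scipy>=1.8,<1.10; python_version >= "3.9" and python_version < "3.11"
scipy>=1.10; python_version >= "3.11"
```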

Use individual column reductions that may be different from the overall indexing of the evaluation

Currently, an index is created and all columns are reduced in the same way; only in case the index is not unique, a reduction is applied to the value list. What we need is an individual, column-based reduction index. I suggest introducing this functionality very flexibly:

  1. introduce a new "reductionidx" member of a column that accepts simple integers, "None" (the default), and string tuples.
  2. Assume an evaluation index of three elements [a,b,c]. The current functionality would use this evaluation index and reduce the values in the data frame to match this three-level index in case there are more rows. However, it might be interesting to learn about the "minimum" value for every index element in a x b regardless of the values of c. A reduction over c would then be achieved by specifying a reduction index of 2 or -2. Similarly, it should be possible to reduce by specifying 'a b' as the reduction index ('c' is now left out). Whenever a sublist of [a b c] is wanted as reduction index, an integer is enough. In the more complicated case that a reduction index consists of nonconsecutive or otherwise arbitrary parts of the evaluation index, a string tuple must be specified.
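The reduction over c in point 2 could look like this in pandas; the column names and data are illustrative only:

```python
import pandas as pd

# evaluation index [a, b, c]; think of c as, e.g., a seed
df = pd.DataFrame({
    "a": ["p1", "p1", "p1", "p2"],
    "b": ["s1", "s1", "s2", "s1"],
    "c": [0, 1, 0, 0],
    "Time": [10.0, 20.0, 40.0, 5.0],
})

# reduction index 2 (equivalently 'a b'): minimum over c for every (a, b) pair
reduced = df.groupby(["a", "b"])["Time"].min()
```

The (p1, s1) pair has two rows (seeds 0 and 1), so the reduction picks the minimum of 10.0 and 20.0; the unique pairs pass through unchanged.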

Experiment.py possible problem with string "objectiveLimit"

I stumbled upon this line and I guess that the variable objlimitreached will always be False; I think the string "objectiveLimit" has to be replaced by some SolverStatusCode from Key.py.

ipet/Experiment.py:417:        # TODO What is this "objectiveLimit", where is it set and what does it imply?
ipet/Experiment.py:418:        objlimitreached = (solverstatus == "objectiveLimit")

SoPlex hash

It would be nice if ipet would also parse the SoPlex hash (if SoPlex was used), so that rubberband can read this data.

SCIP's time limit is not read / used

Somehow the time limit is not read, or not correctly used, by ipet. I would expect the solving time to be capped by ipet if SCIP exceeds its time limit slightly, e.g., 7201.3 instead of 7200 seconds. Currently, ipet prints a time quotient != 1.0 if SCIP slightly exceeds the time limit in two different runs (e.g., 7200.3 and 7202.0) on the same instance.

It would be problematic if ipet used the uncapped solving time for its internal calculations, which is particularly bad if a small time limit is used.
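The intended capping behaviour fits in a few lines; this is a hypothetical helper illustrating the request, not current ipet code:

```python
def capped_time(solvingtime, timelimit):
    """Cap the reported solving time at the time limit, so that two runs that
    both slightly exceed the limit compare with a time quotient of exactly 1.0."""
    if timelimit is None:
        return solvingtime
    return min(solvingtime, timelimit)
```

With this, 7200.3 and 7202.0 both cap to 7200 and the reported quotient is 1.0.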

Thanks for fixing.
Jakob

Tests for corner cases of evaluation

There should be tests for some corner cases of the evaluation (empty row/column index) in combination with 0, 1, or several columns with aggregations. We are always running into cases where pandas internally transposes such corner cases into a shape that we do not like.

Fix default evaluation file

Running

ipet-evaluate -e empty_evaluation.xml -t raw/check.mipdev-complete.scip-4.0.0.2.linux.x86_64.gnu.opt.spx2.none.M610.default.trn -A

with the .trn file generated from the cluster script works fine. However, running

ipet-parse -l MIPDEV_cluster_run/check.mipdev-complete.scip-4.0.0.2.linux.x86_64.gnu.opt.spx2.none.M610.default.out

on the generated .out file and subsequently

ipet-evaluate -e empty_evaluation.xml -t MIPDEV_cluster_run/check.mipdev-complete.scip-4.0.0.2.linux.x86_64.gnu.opt.spx2.none.M610.default.trn -A

on the resulting .trn file leads to

Traceback (most recent call last):
  File "/nfs/OPTI/bzfviern/workspace/ipet/venv/lib/python3.5/site-packages/pandas/indexes/base.py", line 1876, in get_loc
    return self._engine.get_loc(key)
  File "pandas/index.pyx", line 137, in pandas.index.IndexEngine.get_loc (pandas/index.c:4027)
  File "pandas/index.pyx", line 157, in pandas.index.IndexEngine.get_loc (pandas/index.c:3891)
  File "pandas/hashtable.pyx", line 675, in pandas.hashtable.PyObjectHashTable.get_item (pandas/hashtable.c:12408)
  File "pandas/hashtable.pyx", line 683, in pandas.hashtable.PyObjectHashTable.get_item (pandas/hashtable.c:12359)
KeyError: 'ProblemName'

@GregorCH
@fschloesser

bug in gui

When loading or saving an evaluation, the title of the window is the same: "Evaluation editor - Load an evaluation".

if instances in the solu file don't have an extension, they are not recognized

Hi

I have an out file which contains an instance smallinvSNPr2b020-022.osil.
This instance gets stored as smallinvSNPr2b020-022.osil in the testrun (at least that is what I get when I call testrun.getProblems).
However, the solu file has
=opt= smallinvSNPr2b020-022 0.261126842
and so in storeToStatistics of StatisticReader_SoluFileReader, the condition self.testrun.hasInstance(instance) is false.

I am guessing that this is not really intended behavior.
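An extension-insensitive comparison would fix this case; the helpers below are a hypothetical replacement for the exact-match testrun.hasInstance check, not existing ipet code:

```python
import os

def strip_extension(instancename):
    """'smallinvSNPr2b020-022.osil' -> 'smallinvSNPr2b020-022';
    names without an extension pass through unchanged."""
    return os.path.splitext(os.path.basename(instancename))[0]

def has_instance(known_instances, instance):
    # compare instance names with their extensions stripped on both sides
    stripped = {strip_extension(i) for i in known_instances}
    return strip_extension(instance) in stripped
```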

OptAuto

Will optauto be fixed or discarded?
