

europa-pso's Issues

Handle gcc versioning issues regarding non-standard extensions



For now, the solution will be conditional includes. So:

    * Update code
    * Test with old and new compilers 
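A minimal sketch of what the conditional includes could look like, assuming a single alias macro (EUROPA_HASH_MAP is a hypothetical name, not an existing EUROPA symbol). The C++11 branch is a forward-looking addition beyond the compilers discussed below; on gcc >= 4.3 the tr1 headers mentioned in the email are used, and older compilers fall back to the `__gnu_cxx` extension:

```cpp
// Sketch: pick a hash-map implementation based on compiler/standard support.
#if __cplusplus >= 201103L
#  include <unordered_map>
#  define EUROPA_HASH_MAP std::unordered_map
#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3))
#  include <tr1/unordered_map>
#  define EUROPA_HASH_MAP std::tr1::unordered_map
#else
#  include <ext/hash_map>          // non-standard GNU extension, pre-4.3
#  define EUROPA_HASH_MAP __gnu_cxx::hash_map
#endif
```

Code would then use `EUROPA_HASH_MAP<K, V>` everywhere instead of naming a specific implementation, which localizes the versioning problem to one header.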

From the email discussion:

Yes, it's a versioning problem--one of the many potential issues when using
non-standard library extensions. I use g++ 4.0.1, so I haven't run into
this or the other compile issues. We should probably move up to supporting
4.3.1, if it's catching syntactic errors that prior versions aren't. Also,
both 4.0.1 and, I imagine, 4.3.1 have hash-table implementations in tr1/,
which is likely not to change until the C++0x standard is ratified and
supported, so maybe we should go through the code and see if anything with
gnu_cxx:: has a tr1:: analog. Or we can just start using Boost for everything.

~MJI

On Jun 24, 2008, at 3:55 PM, Matthew E. Boyce wrote:

    You'll possibly want to be running on a slightly older version of gcc,
or you'll need to update the parts of EUROPA which require the gcc hash map
extension stuff... looks like the files which might require some fixing are:

    src/PLASMA/Resource/component/SAVH_FlowProfile.hh
    src/PLASMA/Resource/component/SAVH_MaxFlow.hh
    src/PLASMA/Resource/component/SAVH_Types.hh
    src/PLASMA/Solvers/base/FlawManager.cc
    src/PLASMA/Solvers/base/FlawManager.hh
    src/PLASMA/Utils/base/HashPriorityQueue.hh
    src/PLASMA/Utils/base/LabelStr.cc
    src/PLASMA/Utils/base/LabelStr.hh

    I'd guess it's limited to LabelStr.hh though... I've CCed Michael Iatauro,
who made the change from the STL map to the gcc extension and might be more
able to help with any updating. I'm fairly certain EUROPA has been tested
up to version 4.2 of gcc, so you won't have to go too far back to avoid
needing to make such modifications.

    ~MEB

On Jun 24, 2008, at 3:20 PM, Tristan Smith wrote:

        Thoughts?

        --
        Tristan B. Smith
        Mission Critical Technologies, Inc.
        MCT Contractor at NASA Ames Research Center
        Intelligent Systems Division/Planning & Scheduling Group
        t: 650.604.1661  f: 650.604.7563
        office: 269/239
        tsmith@…

        From: "Philip L Courtney" <pcourtney@…>
        Date: June 24, 2008 3:05:39 PM PDT
        To: "'Tristan Smith'" <tsmith@…>
        Subject: RE: EUROPA build error

        Hi Tristan,

        Thanks for your reply. I checked out the PlanWorks and PLASMA trunks
(at revision 5022), and attempted to do a PLASMA ant build. The build did
continue to compile the targets, but I got a number of errors. I have
attached the output from the build. Most of them were related to
"Utils/base/LabelStr.hh:17:26: error: ext/hash_fun.h: No such file or
directory". This may be a gcc version issue. I am using gcc version 4.3.1
20080612 (Red Hat 4.3.1-2). The hash_fun.h file is in the
/usr/include/c++/4.3.1/backward directory (not the /ext directory). There
were also a number of errors related to symbols that were not declared in
scope and some deprecated header warnings. Should I be using an older gcc
version? Your installation page just states GCC version 3.3+.

        Thanks,
        Phil


Original issue reported on code.google.com by [email protected] on 15 Sep 2009 at 3:55

Debug output segfaults

See the attached sample problem.  In Debug.cfg, add the line:

:Solver

Running the problem results in a seg fault due to a failed attempt to
output some internal data because a transaction on the stack is deleted due
to relaxations etc going on elsewhere.

Attached is a trail of email discussion involving the bug (the 'Debug'
message describes the details of what is going on while the other 7 are a
discussion amongst us as to how this should be handled).

Original issue reported on code.google.com by [email protected] on 9 Sep 2009 at 11:09

Attachments:

Expose initial capacity through Resource API

Currently it's not possible to get to the initial capacity through the
Resource API. Until issue #21 is taken care of, a getInitialCapacity()
method needs to be added.

An alternative is to try to be consistent with what is proposed in issue
#21 now, and have the user specify initial capacity through a fact instead
of through an arg in the Resource constructor.
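A trivial sketch of the requested accessor; ResourceSketch and its constructor signature are illustrative stand-ins, not EUROPA's actual Resource class:

```cpp
#include <cassert>

// Hypothetical sketch: expose the capacity that was passed at construction
// time through a read-only accessor, as the issue requests.
class ResourceSketch {
 public:
  explicit ResourceSketch(double initialCapacity)
      : m_initialCapacity(initialCapacity) {}
  double getInitialCapacity() const { return m_initialCapacity; }
 private:
  const double m_initialCapacity;  // fixed at construction, per current API
};
```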

Original issue reported on code.google.com by [email protected] on 9 Sep 2009 at 11:51

Clean up constraints

First, use consistent naming conventions.  See ConstraintLibraryReference.
My suggestion (but we should see what users typically do) is to use the
concise lowerCase version of everything, and to implement such versions for
constraints that don't have them. For example, perhaps:

    * Define and use lt (and then use lt consistently everywhere there is
currently a LessThan)
    * Add testLT, ltSum, etc. 

Second, eliminate deprecated names, if possible (and it won't cause trouble
for users).

Original issue reported on code.google.com by [email protected] on 9 Sep 2009 at 8:05

Object domains should contain entity keys rather than double-cast pointers

Object domains are currently enumerated domains containing the addresses of
the objects cast to doubles. This is untenable on 64-bit platforms because
pointers can conceivably be larger than is accurately representable by a
standard double. As a result, I suggest moving to holding integer entity
keys in those domains and providing a method on Entity for getting Ids from
keys.
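The precision hazard can be demonstrated directly. This standalone check (roundTripsThroughDouble is an illustrative helper, not EUROPA code) shows that values above 2^53 do not survive a trip through double, which is exactly the risk for 64-bit pointer values:

```cpp
#include <cassert>
#include <cstdint>

// A double has a 52-bit significand (53 bits of effective precision), so
// integer values above 2^53 cannot all be represented exactly. Storing a
// 64-bit pointer's address in a double can therefore silently corrupt it.
bool roundTripsThroughDouble(uintptr_t value) {
  double d = static_cast<double>(value);
  return static_cast<uintptr_t>(d) == value;
}
```

Small integer entity keys, by contrast, always fit comfortably within double precision, which is what motivates the suggestion above.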

Original issue reported on code.google.com by [email protected] on 15 Sep 2009 at 3:59

ant build in PSUI expects an NDDLHelloWorld directory that is not present

What steps will reproduce the problem?
1. cd $EUROPA_HOME/src/PLASMA/System/component/PSUI
2. ant


What is the expected output? Something should run.

What do you see instead?

run:
     [echo] Running NDDLHelloWorld project
     [java] /wg/adw/mcgann/ros/ros-pkg/wg-ros-pkg/stacks/trex/trex_core/PLASMA/src/PLASMA/System/component/PSUI/build.xml:107: /u/mcgann/workspace/NDDLHelloWorld is not a valid directory
     [java]     at org.apache.tools.ant.taskdefs.Java.fork(Java.java:732)
     [java]     at org.apache.tools.ant.taskdefs.Java.executeJava(Java.java:171)
     [java]     at org.apache.tools.ant.taskdefs.Java.execute(Java.java:84)
     [java]     at org.apache.tools.ant.UnknownElement.execute(UnknownElement.java:275)
     [java]     at org.apache.tools.ant.Task.perform(Task.java:364)
     [java]     at org.apache.tools.ant.Target.execute(Target.java:341)
     [java]     at org.apache.tools.ant.Target.performTasks(Target.java:369)
     [java]     at org.apache.tools.ant.Project.executeSortedTargets(Project.java:1216)
     [java]     at org.apache.tools.ant.Project.executeTarget(Project.java:1185)
     [java]     at org.apache.tools.ant.helper.DefaultExecutor.executeTargets(DefaultExecutor.java:40)
     [java]     at org.apache.tools.ant.Project.executeTargets(Project.java:1068)
     [java]     at org.apache.tools.ant.Main.runBuild(Main.java:668)
     [java]     at org.apache.tools.ant.Main.startAnt(Main.java:187)
     [java]     at org.apache.tools.ant.launch.Launcher.run(Launcher.java:246)
     [java]     at org.apache.tools.ant.launch.Launcher.main(Launcher.java:67)

Original issue reported on code.google.com by [email protected] on 9 Sep 2009 at 2:32

check_error (irrecoverable assertion failure) vs. check_runtime_error (recoverable runtime error)



check_error seems to be used to address at least a couple of scenarios:

- to specify debug assertions (extra checks that catch run-time errors, and
that can be compiled out depending on the debug level)
- to specify run-time errors which a client (of the method that contains
the check_error statement) could choose to ignore or recover from.

We need at least two versions of check_error (the second one could be
called check_runtime_error), where the second one throws an exception
that the client can choose to ignore or recover from. We also need to go
through the code and classify all the calls to check_error appropriately
(is it an irrecoverable failure, or a run-time error?).
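One way the split could look, as a hedged sketch; the macro names, the EUROPA_FAST guard, and the exception type are hypothetical, and EUROPA's real check_error machinery differs:

```cpp
#include <cstdlib>
#include <iostream>
#include <stdexcept>
#include <string>

// Recoverable error: always enabled, throws so the client can catch it.
struct RuntimeError : std::runtime_error {
  explicit RuntimeError(const std::string& msg) : std::runtime_error(msg) {}
};

#define CHECK_RUNTIME_ERROR(cond, msg) \
  do { if (!(cond)) throw RuntimeError(msg); } while (0)

// Irrecoverable assertion: aborts, and can be compiled out entirely in an
// optimized build (EUROPA_FAST is an assumed build flag name).
#ifdef EUROPA_FAST
#  define CHECK_ERROR(cond, msg) ((void)0)
#else
#  define CHECK_ERROR(cond, msg) \
     do { if (!(cond)) { std::cerr << (msg) << std::endl; std::abort(); } } while (0)
#endif
```

The classification pass described above would then amount to deciding, call site by call site, which of the two macros applies.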

Mike: let's discuss in some more detail when you're ready to tackle this. - Javier

Original issue reported on code.google.com by [email protected] on 15 Sep 2009 at 3:23

Add NDDL methods

For the procedural part of NDDL, add the ability to define methods with the
following syntax:

methodDefinition : returnDataType methodName '(' args ')' '{' methodStmt '}'

where methodStmt is any of the procedural constructs already available in NDDL.

Original issue reported on code.google.com by [email protected] on 11 Aug 2009 at 12:15

Transactions re-factor and re-design

There are a few problems with the DbClientTransactionPlayer and Log.
Firstly, they both deal with XML directly, rather than a more abstract
Transaction data structure. This creates a dependency on XML parsers and
makes it more difficult to generalize the code to interpret the
transactions, since most of the "state" transactions have two distinct and
nearly incompatible forms: a direct form with a tag that is the name of the
transaction and one that uses the "invoke" tag. Second, the usage of a
client interface and a transaction log/player should be extended to other
modules (really just the constraint engine).

Original issue reported on code.google.com by [email protected] on 15 Sep 2009 at 4:06

Parts of the NDDL XML transaction language need inspection



There are many bizarre things about the XML transaction language. For
instance: 
-There are two distinct representations for the constrain, free, activate,
merge, reject, cancel, and specify transactions. 
-The "goal" tag can be used to create new tokens or to introduce a temporal
relation between two tokens.

Things such as these should be remedied before somebody else sees them.

Original issue reported on code.google.com by [email protected] on 15 Sep 2009 at 4:09

Add constraint-violation explanations to NDDL

Having constraint-violation explanations in the NDDL model can be very useful,
for example:

{{{
end <= dueDate - releaseBuffer : "Due date is not met" 
use (resource,qty,start,end) : " Not enough fuel"
}}}

Getting messages like "Due date is not met" and "Not enough fuel" would be
much more useful than the generic messages referring to variables and object
ids that we can currently generate.

The initial syntax is:

':' string

It can be made more sophisticated later to allow for expressions that refer
to the scope where the constraint is created.

The mechanisms for this are already in place in C++ (see
Constraint::getViolationExpl()). This mostly entails exposing it through NDDL.


Original issue reported on code.google.com by [email protected] on 11 Aug 2009 at 12:07

Remove redundant registration for constraint types

Currently most constraint types are registered with more than one name.
See ModuleConstraintEngine::initialize.

Each constraint type should only be registered under one name.
Naming should be consistent; camel case and full words should be used, for
instance: AddEqual, LessThan, Equal, etc.

Regression tests and wiki docs will need to be updated accordingly.

Original issue reported on code.google.com by [email protected] on 24 Aug 2009 at 10:53

Support 64-bit numeric data types



The ensemble team wants to be able to take advantage of the full range of
representation for 64-bit numeric data types.

Paul Morris has already done some exploration on a branch, see note below.

Assigning to Mike for now since he seems to have thought about this the most.

From Paul (see also attached diff files) :

I have been able to successfully modify Europa to use the 'long' type for
Time on the openeuropa-new 64-bit machine.  It is probably too late to use
this for the G3 delivery because I would recommend prolonged use before
delivering this to acquire confidence.  However, I would like to record
here the changes needed, and the issues that needed to be addressed, for
future use.

The core Europa files that needed to be modified are the following:

M      TemporalNetwork/base/DistanceGraph.hh
M      TemporalNetwork/TemporalNetworkDefs.hh
M      Utils/CommonDefs.hh
M      Utils/base/Utils.cc
M      ConstraintEngine/component/IntervalIntDomain.hh
M      ConstraintEngine/component/IntervalIntDomain.cc

The most extensive changes are to the IntervalIntDomain files, which were
modified to use an IntInt datatype instead of int.  I have attached diff
files for those.  IntInt is then typedefed to long in CommonDefs.

Note that because the internal representation of values in ALL Domains uses
the double type, IntervalIntDomains and hence Time must still be restricted
to at most 52 or 53 bits (the significand part of double).  I have
restricted it to 50 bits in CommonDefs just to be on the safe side.  This
is still large enough to represent milliseconds.

A further issue that arises is that PLUS_INFINITY and MINUS_INFINITY are
used as generalized infinities through the Europa core code, including uses
as ints, unsigned ints, and floats, so I couldn't just reset those to 50
bit sizes.  Instead I separated out PLUS_INFINITE_TIME from PLUS_INFINITY
as separate values, and used the former for Time and IntInt uses, and kept
the latter for generalized uses.

One caveat is that durations (but not start/end times) are still restricted
to [1, PLUS_INFINITY] because there are several places in the 
NddlResource.cc and NddlToken.cc files, where IntervalIntDomain(1,
PLUS_INFINITY) is called for durations, that I didn't want to change.
Another caveat is that PLUS_INFINITY needs to be chosen so that

       PLUS_INFINITY = (int)(float)PLUS_INFINITY

because the +inff float value in the NDDL Resource definitions gets
translated to PLUS_INFINITY and is then checked to be <= PLUS_INFINITY,
which could fail if there is roundoff error in the (int)(float) cast.
I was able to make PLUS_INFINITY be INT_MAX/2 + 1, which casts ok.
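Paul's cast requirement can be verified in isolation (survivesFloatCast is an illustrative helper, not EUROPA code). INT_MAX/2 + 1 is 2^30, which a float's 24-bit significand represents exactly, while values needing more than 24 significant bits generally do not survive the round trip:

```cpp
#include <cassert>
#include <climits>

// Paul's condition: PLUS_INFINITY must satisfy
//   PLUS_INFINITY == (int)(float)PLUS_INFINITY
// INT_MAX/2 + 1 is a power of two (2^30), so the float conversion is exact.
const int PLUS_INFINITY = INT_MAX / 2 + 1;

bool survivesFloatCast(int v) {
  return static_cast<int>(static_cast<float>(v)) == v;
}
```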

These considerations allowed the following definitions to work in the
CommonDefs.hh file.

typedef long IntInt;
DECLARE_GLOBAL_CONST(IntInt, g_maxInt);
DECLARE_GLOBAL_CONST(IntInt, g_infiniteTime);
DECLARE_GLOBAL_CONST(IntInt, g_noTime);

#define PLUS_INFINITE_TIME (1125899906842624) // 2^50 (double mantissa)
#define MINUS_INFINITE_TIME (-1125899906842624)

// NDDL use of float +inf relies on
// PLUS_INFINITY = (int)(float)PLUS_INFINITY
#define PLUS_INFINITY ( INT_MAX/2 + 1 )
#define MINUS_INFINITY ( -INT_MAX/2 - 1 )

(These are the effective changes for INFINITE_TIME, but should be done in a
cleaner way in terms of g_maxInt and g_infiniteTime.)

Now the 'long' definitions of Time in previous versions of the
TemporalNetwork could be restored and work with a 64-bit long.

I also had to replace PLUS/MINUS_INFINITY in the DynamicEuropa files by
PLUS/MINUS_INFINITE_TIME where appropriate (i.e. for Time uses).

Paul


Original issue reported on code.google.com by [email protected] on 15 Sep 2009 at 3:47

Re-integrate transaction replay into system tests

Now that the transaction player can filter transactions and play them
backwards, we can re-integrate the replay tests in System/test which were
excluded because replaying the model caused conflicts. 

Original issue reported on code.google.com by [email protected] on 15 Sep 2009 at 4:10

Review semantics for Timeline::constrain and Object::constrain

See [http://babelfish.arc.nasa.gov/trac/europa/changeset/5007]. 

If we already have A -> B, and Timeline::constrain is asked to put C between
A and B, it will call Object::constrain twice (there are two additional
constraints to add). If auto-propagation is on, the first can cause things
to be inconsistent.

Is this acceptable? Should Timeline::constrain check consistency before
going ahead with the extra constrain? How do we want Europa to behave now
that inconsistencies are allowed?

Original issue reported on code.google.com by [email protected] on 9 Sep 2009 at 11:58

Output of Examples should be included in tests

The RUN results from the various Examples now run during the tests should
be compared with saved results so that we know behavior isn't changing.

Do we have any similar tests set up and going somewhere (i.e. checking
current results against old ones)?

See also #14, whose solution might solve this too.  Note that examples are
already run when testing, it's just that results aren't considered.

Original issue reported on code.google.com by [email protected] on 9 Sep 2009 at 10:39

Investigate uses of LabelStr as a hash function

There are many places where LabelStr is used as a hash function where it
doesn't really need to be, unnecessarily polluting the limited space of
LabelStrs with strings that aren't user data. Any place where LabelStr is
used to map from string -> double but not the reverse should use a regular
hash map instead.
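Where only the string -> double direction is needed, a plain hash map suffices. This sketch uses std::unordered_map (the tr1 equivalent at the time) with illustrative names; it deliberately supports no reverse lookup, which is what distinguishes these call sites from true LabelStr uses:

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// Hypothetical replacement for LabelStr at string->double-only call sites:
// a private hash map that never consumes the shared LabelStr key space.
class NameRegistry {
 public:
  void set(const std::string& name, double value) { m_values[name] = value; }
  // Returns false if the name is unknown; no double->string lookup exists.
  bool lookup(const std::string& name, double& out) const {
    std::unordered_map<std::string, double>::const_iterator it =
        m_values.find(name);
    if (it == m_values.end()) return false;
    out = it->second;
    return true;
  }
 private:
  std::unordered_map<std::string, double> m_values;
};
```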

Original issue reported on code.google.com by [email protected] on 15 Sep 2009 at 4:02

Interpreter should allow Unary resource to substitute for Timeline

This would allow the MCC model to not get more complicated only because we
want constraint violation information reported, for example.

Mike reports that you could do this in code generation (and pointed me to
System/test stuff).



From email by Mike that Javier responded to:

Michael Iatauro (ARC-TI)[QSS GROUP INC] wrote:


The code generator allows you to swap in token implementation classes as
well, so you can say stuff like:

  <binding nddl="Timeline" cpp="EUROPA::SAVH::UnaryTimeline" include="SAVH_Reusable.hh"/>
ok, this can be accomplished by the couple of lines I sent Tristan in my
previous message

    <binding nddl="Timeline.*" cpp="NddlUnaryToken" include="SAVH_Reusable.hh"/>

right, I had forgotten about this and the fact that it takes advantage of
C++ inheritance to get the desired behavior. This needs to be added to the
interpreter; it shouldn't be hard, but it'll take more than a few minutes.
I'll try to take care of it next week.

Original issue reported on code.google.com by [email protected] on 15 Sep 2009 at 3:42

Transactions getting created without owners



From Paul:

In Resource/component/Reservoir.cc:

TransactionId trans = (new Transaction(t->getTime(), t->getQuantity(),
t->isConsumer()))->getId();

Note the call does not pass the m_owner value to the new transaction. As a
consequence I am getting null for the getOwner() accessor.

Interestingly, adding the resource id to the transaction constructor call
causes transactions to live forever:

    [exec] ExecuteTarget RUN_runProblem_SOLVER_g_rt.backtr.xml.DefaultPlannerConfig.xml.nddl-xml
[exec] FAILED = DID NOT CLEAN UP ALLOCATED IDs: [exec] 10
N6EUROPA11TransactionE [exec] Id Contents: (13998240, 2,N6EUROPA8DataTypeE)
(16914112, 3,N6EUROPA8DataTypeE) (16914144, 6,N6EUROPA8DataTypeE)
(16914176, 5,N6EUROPA8DataTypeE) (16914208, 4,N6EUROPA8DataTypeE)
(16914272, 7,N6EUROPA8DataTypeE) (16914304, 1,N6EUROPA8DataTypeE)
(18984112, 3411,N6EUROPA11TransactionE) (19027584,
2265,N6EUROPA11TransactionE) (19164672, 3876,N6EUROPA11TransactionE)
(19296592, 502,N6EUROPA11TransactionE) (19350560,
663,N6EUROPA11TransactionE) (19492864, 4009,N6EUROPA11TransactionE)
(19528720, 4128,N6EUROPA11TransactionE) (19606112,
5053,N6EUROPA11TransactionE) (19654480, 5299,N6EUROPA11TransactionE)
(19691232, 5422,N6EUROPA11TransactionE) [exec] Were 7 IDs before; 17 now


Original issue reported on code.google.com by [email protected] on 15 Sep 2009 at 3:22

NDDL docs need update

See the phrase "There are 2 ways to introduce a token into the plan
database using NDDL transactions."

Only one (goal) is described in the subsequent text!

Original issue reported on code.google.com by [email protected] on 9 Sep 2009 at 11:47

Jam clean/recompile doesn't work right

To reproduce, go to one of the example directories:

1. Run 'jam clean' 
2. Run 'jam' - this build will fail. 
3. Run 'jam' again - this time it will succeed.

It looks like the first try is doing things in the wrong order (nddl stuff
is expected before it is built), and it appears to be related to PSUI, so
I'm hoping you know exactly what the problem is. :) 

Original issue reported on code.google.com by [email protected] on 30 Sep 2009 at 6:24

Add Plan Comparison to System/base regression tests

The regression tests in System/base should have an extra step where the
plans generated through code generation and by the interpreter are dumped
out and compared to see that they match exactly. The comparator needs to be
a little smart, to ignore things like differences in token and variable ids.




From Mike:

Here's the script that I wrote way back when for comparing the output of
the PlanWriter. It assumes that it's working on two different directories,
but it shouldn't be hard to modify it so that it just works with files of
different extensions.

~MJI

{{{
#!/usr/bin/perl -w
BEGIN {
  push @INC, "/home/miatauro/lib/perl5/site_perl/5.8.0";
}
use warnings qw/all/;
use strict;

use Algorithm::Diff;

my %plans = ();
my $token_key_rx = qr/Key=\d+\s+Master=(?:\d+|NONE)/;
my $merge_key_rx = qr/Merged Key=\d+/;

collect_plan_output_files($_) foreach @ARGV;

foreach(sort keys %plans) {
  if(@{$plans{$_}} != @ARGV) {
    print "Not every set of output files has $_.  Skipping.\n";
    next;
  }
  plan_compare(@{$plans{$_}});
}

sub collect_plan_output_files {
  my $dir = shift;
  local $_;
  opendir my $dh, $dir or die "Couldn't open directory $dir: $!\n";
  my @files = grep {/RUN_.+\.output/} readdir $dh;
  closedir $dh;

  foreach my $plan (@files) {
    $plan =~ /RUN_(.+?)\./;
    $plans{$1} = [] unless exists $plans{$1};
    push @{$plans{$1}}, "$dir/$plan";
  }
}

sub plan_compare {
  my $planfile1 = shift;
  my $planfile2 = shift;

  my $plan1 = get_plan($planfile1);
  my $plan2 = get_plan($planfile2);

  my $diff = Algorithm::Diff->new($plan1, $plan2);

  my %diffs1 = ();
  my %diffs2 = ();
  my @lines1 = ();
  my @lines2 = ();

  $diff->Base(1); #use line numbers
  while($diff->Next()) {
    next if($diff->Same()); #skip anything that's the same
    my @items1 = $diff->Items(1);
    my @items2 = $diff->Items(2);

    @items1 = remove_rx($token_key_rx, @items1); #token key differences don't matter
    @items2 = remove_rx($token_key_rx, @items2);
    next if (@items1 == @items2 && @items1 == 0);

    if(@items1 == @items2) {
      @items1 = remove_rx($merge_key_rx, @items1); #merged key differences don't matter
      @items2 = remove_rx($merge_key_rx, @items2); #as long as there are the same number of merged tokens
      next if(@items1 == @items2 && @items1 == 0);
    }

    @items1 = remove_rx(qr/world\./, @items1); #differences in the world object don't matter
    @items2 = remove_rx(qr/world\./, @items2);

    @items1 = remove_rx(qr/ound plan/, @items1); #differences in step numbers don't matter
    @items2 = remove_rx(qr/ound plan/, @items2);
    next if(@items1 == @items2 && @items1 == 0);

    if(@items1 > 0) {
      $diffs1{$diff->Min(1)} = \@items1;
      push @lines1, $diff->Min(1);
    }
    if(@items2 > 0) {
      $diffs2{$diff->Min(2)} = \@items2 ;
      push @lines2, $diff->Min(2);
    }
  }

  #if(@lines1 != @lines2) {
  #  print "Plans $planfile1 and $planfile2 are very definitely different.\n";
  #}

  my $min = (@lines1 < @lines2 ? @lines1 : @lines2);

  foreach my $i (0..$min) {
    next if !(defined($lines1[$i]) && defined($lines2[$i]));

    my $subdiff = Algorithm::Diff->new($diffs1{$lines1[$i]}, $diffs2{$lines2[$i]});
    while($subdiff->Next()) {
      next if ($subdiff->Same());
      my @subitems1 = $subdiff->Items(1);
      my @subitems2 = $subdiff->Items(2);
      print "=====================\n";
      print "$planfile1: [", $lines1[$i] + $subdiff->Min(1), "]\n";
      map {print $_} @subitems1;
      print "======================\n";
      print "$planfile2: [", $lines2[$i] + $subdiff->Min(2), "]\n";
      map {print $_} @subitems2;
    }
  }

  if(@lines1 > @lines2) {
    foreach($min+1..$#lines1) {
      print "===================\n";
      print "$planfile1: [", $lines1[$_], "]\n";
      map {print $_} @{$diffs1{$lines1[$_]}};
    }
  }
  elsif(@lines2 > @lines1) {
    foreach($min+1..$#lines2) {
      print "===================\n";
      print "$planfile2: [", $lines2[$_], "]\n";
      map {print $_} @{$diffs2{$lines2[$_]}};
    }
  }
}

sub get_plan {
  my $file = shift;
  open my $fh, $file or die "Failed to open file $file: $!\n";
  return extract_plan($fh);
}

sub extract_plan {
  my $fh = shift;
  my @plan = ();
  local $_;
  while(<$fh>) {
    #print "$.: $_";
    last if /Objects\s+\*+/;
  }
  #print "Pushing line: $_";
  push @plan, $_;
  while(<$fh>) {
    #given the current plan output
    #there are only merged and inactive tokens after this point,
    #which don't really matter
    last if(/Merged Tokens:\s*\*{4,}/);

    if(/.+\s*\*{4,}/ || # Objects **** or Variables ***** etc.
       /.+=.+:.+/ || #object.var=type:DOMAIN
       /\[\s+.+:.+\s+\]/ || # [ INT_INTERVAL:CLOSED[50, 65] ]
       /\.+\((?:.+=.+[}\]])*\)/ || #object.predicate(parameter=type:DOMAINparameter=type:DOMAIN)
       /$token_key_rx/ || #Key=123 Master=none
       /$merge_key_rx/ ) {
    #print "Pushing line: $_";
      push @plan, $_;
    }
  }
  return \@plan;
}

sub remove_rx {
  my $rx = shift;
  return grep {$_ !~ /$rx/} @_;
}

}}}

Original issue reported on code.google.com by [email protected] on 9 Sep 2009 at 10:37

Create unittests for throwing exceptions from parser

As of recently, the NDDL parser throws all lexer/parser exceptions wrapped
in a single object. Need to add unit tests verifying that:

1. The combined exception reaches C++ code
2. The combined exception reaches Java code (through Swig)
3. The AST parser returns the combined exception as part of the returned
string (C++ test would be enough).

Original issue reported on code.google.com by [email protected] on 16 Sep 2009 at 7:14

add performance tests to autobuild

We need to add time measurements for planning to the nightly build. Some of
the tests that involve generating a plan need to be timed, and the results
included in the summary for the autobuild.

Original issue reported on code.google.com by [email protected] on 15 Sep 2009 at 3:52

Schema browser

Add a schema browser. We need both Eclipse and Swing versions.

Original issue reported on code.google.com by [email protected] on 28 Apr 2009 at 3:24

Purged objects don't notify variables pointing to them...



See [http://babelfish.arc.nasa.gov/trac/europa/changeset/4967]. The problem
was that:

    * The plan database is cleaned up first, so objects are discarded
    * Because purging is turned on, the object does not notify the plan
database that it has disappeared
    * Therefore, the plan database doesn't do anything about the fact that
there might be variables sitting around pointing to the object (e.g. a
token's 'OBJECT' variable)
    * Later, the constraint engine is purged and if debugging is turned on
(ConstrainedVariable::handleDiscard, specifically), a message tries to
print out info about the variable getting deleted, which includes a pointer
to the object, which has disappeared... 

So, the current situation is that no info is printed during purging. Is
this acceptable, or a sign of design issues we might want to address?

Original issue reported on code.google.com by [email protected] on 9 Sep 2009 at 11:46

Update constraint docs

First, we need documentation on the alternative ways to represent
constraints. For example, the table currently gives 'a contained_by b'
syntax but doesn't mention the more common 'contained_by(a myNameForA)' and
variants of that. 

Second, we need documentation for all the changes/improvements that have
been made by the WillowGarage team this summer.

Original issue reported on code.google.com by [email protected] on 9 Sep 2009 at 10:31

FlowProfile and IncrementalFlowProfile behave differently

See the attached CrewPlanning example.  Running 'make' results in a
solution found after 336 steps.  If you replace "IncrementalFlowProfile"
with "FlowProfile" on line 27 in the *model.nddl file, it instead gets
exhausted after a handful of steps (which incidentally, is exactly what
happens when TimetableProfile is used).

Apparently Flow and IncrementalFlow should have the same behavior.

Original issue reported on code.google.com by [email protected] on 15 Sep 2009 at 3:49

Attachments:

Cross-platform ability to load shared libraries in C++


We want the equivalent of LibraryLoader.java for C++. The two pieces
missing in p_dlopen are:

    * Given just the name, determine the library name (from X determine
libX_g.so but in a cross-platform way)
    * Given just the name, use LD_LIBRARY_PATH to find the above library. 

When this is implemented, it should probably be used in lieu of the two
'addModule' calls currently in X-Main.cc (produced by makeproject).
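The first missing piece might be sketched as follows; the function name and the per-platform naming conventions (including the libX_g debug suffix mentioned above) are assumptions, not an existing EUROPA API:

```cpp
#include <string>

// Hypothetical sketch: derive a platform-specific shared-library file name
// from a bare module name, mirroring the "from X determine libX_g.so"
// requirement. The second missing piece, searching LD_LIBRARY_PATH (or the
// platform's equivalent), would be layered on top of this.
std::string sharedLibraryName(const std::string& module, bool debug) {
  const std::string suffix = debug ? "_g" : "";
#if defined(__APPLE__)
  return "lib" + module + suffix + ".dylib";
#elif defined(_WIN32)
  return module + suffix + ".dll";   // no "lib" prefix on Windows
#else
  return "lib" + module + suffix + ".so";
#endif
}
```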

(I'm assigning this to myself only to avoid annoying someone else with it,
not because I have any clue how to do it :) )

Original issue reported on code.google.com by [email protected] on 9 Sep 2009 at 11:56

Add support for Capacity Limits Profile to Resources



The current resource implementation allows the user to specify :
- one [upperBound,lowerBound] interval for the initial capacity
- one [upperBound,lowerBound] interval for the valid capacity limits on the
Resource's level at any time

The user should be able to specify a capacity limit profile instead, that
is, a set of tuples {instant,upperBound,lowerBound} that specifies the
capacity limits over time. For instance, if we're modeling a person or a
machine, we should be able to specify intervals where overtime is valid
this way. To deal with situations like overtime, the current approach
requires the user to come up with a max possible upper limit, then lower
the limit with artificial activities to bring capacity down to "normal"
levels. That workaround complicates modeling and app development and has
adverse performance implications.
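One possible shape for such a profile, as an illustrative sketch (types and method names are hypothetical): a sorted map from instant to limits, queried with the last tuple at or before a given time, so that an overtime window is just an extra pair of tuples.

```cpp
#include <cassert>
#include <map>

struct CapacityLimits { double lowerBound; double upperBound; };

// Piecewise-constant capacity-limit profile: each tuple
// {instant, lowerBound, upperBound} holds until the next instant.
class CapacityProfile {
 public:
  void setLimits(long instant, double lb, double ub) {
    m_limits[instant] = CapacityLimits{lb, ub};
  }
  CapacityLimits limitsAt(long instant) const {
    std::map<long, CapacityLimits>::const_iterator it =
        m_limits.upper_bound(instant);
    if (it == m_limits.begin())
      return CapacityLimits{0.0, 0.0};  // before the profile starts
    --it;                               // last tuple at or before 'instant'
    return it->second;
  }
 private:
  std::map<long, CapacityLimits> m_limits;
};
```

Modeling a person with an overtime window then becomes, e.g., `setLimits(0, 0, 8); setLimits(100, 0, 12); setLimits(200, 0, 8);` with no artificial limit-lowering activities.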

Also, it seems to be a cleaner modeling interface to have the user set the
capacity at time 0 by a explicit fact in the initial state, instead of
specifying it as "initialCapacity" in the constructor for the resource.

Original issue reported on code.google.com by [email protected] on 9 Sep 2009 at 11:49

Compile Ids as straight C++ pointers through build flag



Having a build flag that compiles Ids to straight C++ pointers would be
useful for:

- speed reasons: for an app that is stable and wants to get every last
ounce of performance
- helping drop object wrapping in PSEngine (see #114). Once we have this,
we can expose internal classes through SWIG without having to deal with
template issues.

Original issue reported on code.google.com by [email protected] on 15 Sep 2009 at 4:00

Nddl 'if' statement not handled correctly



See the attached code (we discussed it in the EUROPA meeting yesterday).
There are two alternative 'if' statements that should be equivalent.
Michael believes there's a bug in how: if( thing.isHappy == true ) is
handled, where thing is a member variable pointing to a non-unique object
and isHappy is a boolean member variable in that object.

Original issue reported on code.google.com by [email protected] on 15 Sep 2009 at 3:56

Attachments:

Solver::noMoreFlaws() can lie, which seems unfair

What steps will reproduce the problem?
1. Load an initial state and use a solver to solve it.  Solver.hasFlaws()
correctly returns false.
2. Add something (for example, parse some new NDDL).
3. Solver.hasFlaws() INCORRECTLY returns false.

The problem is that the solver has an internal variable m_noFlawsFound that
has no way of knowing that things have changed (so it becomes stale).

An obvious solution is to remove that variable and force the code to
recheck every time hasFlaws() is called, i.e. call
allocateNewDecisionPoint().  However, this doesn't feel like a method that
should have side effects.  A discussion is currently under way on the
developers mailing list.
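The staleness pattern, and one lightweight alternative to re-allocating a decision point on every call, can be illustrated with a toy class (names are illustrative, not EUROPA's actual Solver API): invalidate the cached answer whenever the database changes, and recompute lazily inside hasFlaws().

```cpp
#include <cassert>

// Sketch of the bug and a fix: the cached flag is explicitly invalidated by
// any mutation, so hasFlaws() can never return a stale answer.
class SolverSketch {
 public:
  SolverSketch() : m_flawCount(0), m_cacheValid(false), m_cachedHasFlaws(false) {}
  void addFlaw()    { ++m_flawCount; m_cacheValid = false; }  // e.g. new NDDL parsed
  void resolveAll() { m_flawCount = 0; m_cacheValid = false; }
  bool hasFlaws() {
    if (!m_cacheValid) {                 // recompute instead of trusting a stale flag
      m_cachedHasFlaws = (m_flawCount > 0);
      m_cacheValid = true;
    }
    return m_cachedHasFlaws;
  }
 private:
  int  m_flawCount;
  bool m_cacheValid;
  bool m_cachedHasFlaws;
};
```

The hard part in the real Solver is arranging for every relevant mutation (including ones made outside the Solver, like parsing new NDDL) to trigger the invalidation, which is where listeners would come in.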

Original issue reported on code.google.com by [email protected] on 10 Aug 2009 at 10:27

Weird error if default PlannerConfig.xml used

See the example problem with ticket #199.  Replace the existing
PlannerConfig.xml with PlannerConfig.bak.xml (i.e. the original PlannerConfig
that makeproject created).  Running make produces an error.

This is probably a bug; you should be able to successfully run however
things are configured.  At the very least, it would be nice to have the
warning suggest appropriate changes in PlannerConfig.xml.

NOTE:  I try to assign bugs to the person who could most quickly track them
down, not necessarily the person who should track them down :)

Original issue reported on code.google.com by [email protected] on 15 Sep 2009 at 3:58

Make all listeners behave consistently

If possible, see:

 * [http://babelfish.arc.nasa.gov/trac/europa/changeset/5093] which changes
behavior for one type of listener.
 * [http://babelfish.arc.nasa.gov/trac/europa/changeset/5098] which fixed a
bug in the above change.

Repeat those changes for all other listeners.  Here's the email describing
why:

Hi all,

Short form: to guarantee that listeners will be notified in the order they
are added (in the reverse order, actually), I am going to change them from
being stored in a set to being stored in a vector, unless people object. We
were already asserting that a listener never gets added twice.

Long form: I was getting non-deterministic behavior because I had two plan
database listeners: one internal to EUROPA and one that I was creating in
my Java application. Because these were added to the set, their order
depended on the pointers involved. The order of events is interesting:

A) If the java listener is first:

1. Java activates X
2. Java listener notified that X activated
3. C++ listener notified that X activated
4. C++ activates slave token Y
5. Java listener notified that Y activated
6. C++ listener notified that Y activated

B) If the C++ listener is first:

1. Java activates X
2. C++ listener notified that X activated
3. C++ activates slave token Y
4. C++ listener notified that Y activated
5. Java listener notified that Y activated
6. Java listener notified that X activated

Notice that the java listener gets the messages 'backward' in case B, hence
the decision to use a reverse iterator over the vector of listeners (i.e.,
since the internal listeners are most likely to be the impolite ones with
side effects, notify the external listeners first).

Tristan

P.S. Thanks to Matt for solving the conundrum! :) 
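The set-to-vector change described in the email can be sketched roughly as follows; Listener and Publisher are illustrative stand-ins, not the actual EUROPA classes:

```cpp
#include <algorithm>
#include <cassert>
#include <string>
#include <vector>

// Sketch of the set -> vector change: listeners are stored in insertion
// order and notified in reverse, so externally added listeners (added
// last) hear events first. Names are illustrative, not EUROPA's API.
class Listener {
public:
    Listener(std::string name, std::vector<std::string>& log)
        : m_name(std::move(name)), m_log(log) {}
    void notify(const std::string& event) {
        m_log.push_back(m_name + ":" + event);
    }
private:
    std::string m_name;
    std::vector<std::string>& m_log;
};

class Publisher {
public:
    void addListener(Listener* l) {
        // was: m_listeners.insert(l) on a std::set, so order depended on
        // pointer values; the duplicate check was already being asserted
        assert(std::find(m_listeners.begin(), m_listeners.end(), l)
               == m_listeners.end());
        m_listeners.push_back(l);
    }
    void publish(const std::string& event) {
        // reverse order: most recently added (external) listeners first
        for (auto it = m_listeners.rbegin(); it != m_listeners.rend(); ++it)
            (*it)->notify(event);
    }
private:
    std::vector<Listener*> m_listeners;
};
```

With this storage, notification order is a deterministic function of registration order rather than of pointer values.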


Some other comments:

There are a few changes that would be nice:

   1. The ability to give listeners priorities so we can order them however
we want. This is orthogonal to other changes.
   2. No matter what order the listeners are in, each listener should get
events in the order they occur.  This probably requires each publisher to
have a list of messages received, and never start publishing a new message
until it completes sending the previous one to all listeners, to avoid the
problem of #207.
   3. Internal listeners should have higher priority than external
listeners. This should not be done until the previous item is complete, to
avoid ruining the fix in [5093].
   4. Can we unify all listeners?  Is there an external listener library
that would be worth using?
   5. All changes should also have relevant tests added to our new cppUnit
framework. 


Original issue reported on code.google.com by [email protected] on 9 Sep 2009 at 11:43

Add type checking for all constraint types

ConstraintType::checkArgTypes() allows a constraint type to perform type
checking on the arguments that are passed to a constraint. This is used by
the parser at compile time to perform type checking.

Currently only the AbsoluteValue and AddEqual constraints take advantage of
this; we need to implement it for all subclasses of Constraint that are
exposed to the user.

See ModuleConstraintLibrary::initialize for registration and 
Constraints.hh/.cc for definition.
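A rough sketch of what implementing checkArgTypes for a constraint subclass might look like; the signature and the string-based type representation below are simplifications for illustration, not EUROPA's actual API:

```cpp
#include <cassert>
#include <string>
#include <vector>

// Sketch of compile-time argument checking for a constraint type.
// checkArgTypes is the method named in the issue; everything else here
// (signature, string type names) is an invented simplification.
class ConstraintType {
public:
    virtual ~ConstraintType() = default;
    // Return an error message, or "" if the argument types are acceptable.
    virtual std::string
    checkArgTypes(const std::vector<std::string>& argTypes) const {
        return "";                           // default: accept anything
    }
};

class AddEqualCT : public ConstraintType {   // addEq(x, y, z): all numeric
public:
    std::string
    checkArgTypes(const std::vector<std::string>& argTypes) const override {
        if (argTypes.size() != 3)
            return "addEq expects exactly 3 arguments";
        for (const std::string& t : argTypes)
            if (t != "int" && t != "float")
                return "addEq expects numeric arguments, got " + t;
        return "";
    }
};
```

The parser can then call checkArgTypes on each registered constraint type at compile time and report the returned message as a parse error.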

Original issue reported on code.google.com by [email protected] on 24 Aug 2009 at 10:50

Add setBaseDomain method to variable

Currently, the syntax

{{{
int x=5;
}}}

means "create an int var and set the base domain to [5,5]".
To set the current domain you can say:

{{{
x.specify(5);
}}}


I think this choice of semantics is confusing for the user; let's add an
explicit setBaseDomain() method and make '=' and specify() mean the same
thing.
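A sketch of the distinction between the two domains, using an invented IntVar class with a simple integer-interval domain (not EUROPA's actual variable API):

```cpp
#include <algorithm>
#include <cassert>

// Sketch of base vs. current domain semantics: setBaseDomain() permanently
// narrows what the variable may ever be, while specify() is a retractable
// restriction of the current domain. IntVar is illustrative only.
class IntVar {
public:
    IntVar(int lb, int ub)
        : m_baseLb(lb), m_baseUb(ub), m_curLb(lb), m_curUb(ub) {}
    void setBaseDomain(int lb, int ub) {   // what 'int x = 5;' means today
        m_baseLb = lb; m_baseUb = ub;
        // keep the current domain inside the new base domain
        m_curLb = std::max(m_curLb, lb);
        m_curUb = std::min(m_curUb, ub);
    }
    void specify(int v) { m_curLb = v; m_curUb = v; }  // current domain only
    void reset() { m_curLb = m_baseLb; m_curUb = m_baseUb; }
    int curLb() const { return m_curLb; }
    int curUb() const { return m_curUb; }
private:
    int m_baseLb, m_baseUb;
    int m_curLb, m_curUb;
};
```

Under the proposal, '=' in NDDL would map to specify() (retractable), and narrowing the base domain would require the explicit setBaseDomain() call.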

Original issue reported on code.google.com by [email protected] on 11 Aug 2009 at 12:43

Clean up resource search operator notes



See ResourceSearchNotes

These were created for internal use, but I've included them as a link off
our main documentation page, because I think it's an important piece to
have available to users.

It needs cleanup and some fleshing out so it can be useful to
non-super-users :)

Original issue reported on code.google.com by [email protected] on 15 Sep 2009 at 3:51

Upgrade to latest version of ANTLR C



We're currently using an ANTLR C release from Jan-08; there have been
numerous bug fixes and improvements since then. Let's upgrade to the latest
stable release, which is available at:
http://fisheye2.atlassian.com/browse/antlr/runtime/C/dist

Also, let's look into requiring ANTLR C as a prerequisite (for developers;
users should still get it packaged with the binary distribution) instead of
making it part of the build. Hopefully this will remove the need for the
hack we added to make it work on 64-bit platforms. We'd need to harvest the
ANTLR libraries and include them in the binary release.

We may also need to upgrade the ANTLR jar that we keep in our ext dir.

Original issue reported on code.google.com by [email protected] on 15 Sep 2009 at 3:53
