
tarski's Introduction

Tarski - An AI Planning Modeling Framework


What is Tarski

Tarski is a framework for the specification, modeling and manipulation of AI planning problems. Tarski is written in Python and includes parsers for major modeling languages (e.g., PDDL, FSTRIPS, RDDL), along with modules to perform other common tasks such as logical transformations, reachability analysis, grounding of first-order representations and problem reformulations.
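
For a quick flavor of the modeling layer, here is a minimal sketch (the constructors below follow the patterns used in the project's own tests, but exact API details may vary across versions):

import tarski
from tarski.syntax import land

# Declare a simple blocksworld-style language
lang = tarski.language("blocksworld")
block = lang.sort("block")
on = lang.predicate("on", block, block)
clear = lang.predicate("clear", block)
a = lang.constant("a", block)
b = lang.constant("b", block)

phi = land(on(a, b), clear(a))  # the formula on(a,b) and clear(a)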

Installation: Check the installation instructions.

Documentation: Read the documentation of the project.

Testing: Most tests can be run by executing pytest on the root directory. Alternatively, they can be run through tox, for which several testing environments are defined.

How to Cite

If you find Tarski useful in your research, you can cite it with the following BibTeX entry:

@misc{tarski:github:18,
  author = {Guillem Franc\'{e}s and Miquel Ramirez and Collaborators},
  title = {Tarski: An {AI} Planning Modeling Framework},
  year = {2018},
  publisher = {{GitHub}},
  journal = {{GitHub} repository},
  howpublished = {\url{https://github.com/aig-upf/tarski}}
}

License

Tarski is licensed under the Apache-2.0 License.

tarski's People

Contributors

abcorrea, anubhav-cs, camcunningham, emilkeyder-invitae, gfrances, mejrpete, miquelramirez, phoeft670


tarski's Issues

Implement Writers and Readers

We would like to implement writers and readers in two different formats: JSON and PDDL. This will definitely help adjust and improve the current code design / API. The code should live under the tarski.io module.
I am not sure whether we want to reuse code from the FS planner and/or FD's Python parser; that might be a reasonable first step. What do you think, @miquelramirez ?
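
A rough sketch of the shape such an API could take. FstripsReader is the existing entry point; the JSON counterpart is purely a placeholder for what this issue proposes:

from tarski.io import FstripsReader

reader = FstripsReader()
problem = reader.read_problem("domain.pddl", "instance.pddl")

# writer = JsonWriter()                  # hypothetical class under tarski.io
# writer.write(problem, "problem.json")  # hypothetical API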

The status of Axiomatic Formulas

Axiomatic formulae have been disabled in this iteration. Quoting the comment in the code:

The distinction between whether a formula is axiomatic, external, etc. should probably be done elsewhere, not here (possibly at the evaluation level)

Like the implication, "axiomatic" formulas of the form

\phi \iff \varphi

can be rewritten, compiling the equivalence operator away. This form of syntactic sugar is somewhat controversial because of the tendency in the planning literature to conflate the semantics of the constraint (phi and varphi need to be simultaneously satisfiable in all worlds) with the vagaries of the syntactic restrictions enforced by planners so as to make transition functions, heuristics and whatnot tractable/simple/easy.

My idea with axiomatic formulas in Tarski is that they are what they are: the biconditional. How they're compiled, handled, or what restrictions are forced on other elements of the language, such as action effects, is something for the back ends to handle as it suits them best.
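
For illustration, a minimal sketch of that compilation, assuming the land / implies formula constructors in tarski.syntax:

from tarski.syntax import land, implies

def compile_iff(phi, psi):
    # phi <-> psi rewritten as (phi -> psi) and (psi -> phi)
    return land(implies(phi, psi), implies(psi, phi))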

Fluent Symbol and State Variable Detection

At the moment the unit tests:

  • test_task_index_process_symbols_fluents
  • test_task_index_create_state_variables

are broken.

The reason for this is that the predicate clear(x) is not being picked up as a fluent. This is expected, as clear has been defined as part of a state constraint, without specific syntax giving it away as a fluent symbol.

How can we figure out that the value of clear will change, without dedicated syntactic sugar and without compiling the constraints into actions? I think the way to do it is to look at the interpretation given as the initial state of the FSTRIPS problem, isn't it?
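
For contrast, here is a rough sketch of the purely syntactic rule (not Tarski's actual code; it assumes a problem.actions dict and simple add/delete effects exposing an .atom head): mark as fluent any symbol appearing in the head of some action effect, and let everything else default to static. A symbol like clear that is only constrained elsewhere is exactly what this rule misses.

def detect_fluents(problem):
    fluents = set()
    for action in problem.actions.values():
        for eff in action.effects:
            # assumes simple add/delete effects with an .atom head
            fluents.add(eff.atom.predicate)
    return fluents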

Keeping Parser Files in the Repository

As discussed elsewhere, at one point we had the pre-generated parser files in the repo.

I removed them because they are incompatible with different versions of the ANTLR Python runtime. I need to double check which is the latest version, and make sure that all setup.py scripts are pointing to the right version before reintroducing them.

Parsing blocksworld fails with DuplicatePredicateDefinition

This can be replicated with:

r = tarski.io.FstripsReader()
r.read_problem('blocks/domain.pddl', 'blocks/probBLOCKS-10-0.pddl')

Resulting in:

Traceback (most recent call last):
File "", line 1, in
File "/usr/local/lib/python3.7/site-packages/tarski-0.1.0-py3.7.egg/tarski/io/fstrips.py", line 27, in read_problem
self.parse_domain(domain)
File "/usr/local/lib/python3.7/site-packages/tarski-0.1.0-py3.7.egg/tarski/io/fstrips.py", line 37, in parse_domain
self.parse_file(filename, 'domain')
File "/usr/local/lib/python3.7/site-packages/tarski-0.1.0-py3.7.egg/tarski/io/fstrips.py", line 34, in parse_file
self.parser.visit(domain_parse_tree)
File "/usr/local/lib/python3.7/site-packages/antlr4_python3_runtime-4.7.1-py3.7.egg/antlr4/tree/Tree.py", line 34, in visit
return tree.accept(self)
File "/usr/local/lib/python3.7/site-packages/tarski-0.1.0-py3.7.egg/tarski/io/_fstrips/parser/parser.py", line 790, in accept
return visitor.visitDomain(self)
File "/usr/local/lib/python3.7/site-packages/tarski-0.1.0-py3.7.egg/tarski/io/_fstrips/parser/visitor.py", line 19, in visitDomain
return self.visitChildren(ctx)
File "/usr/local/lib/python3.7/site-packages/antlr4_python3_runtime-4.7.1-py3.7.egg/antlr4/tree/Tree.py", line 44, in visitChildren
childResult = c.accept(self)
File "/usr/local/lib/python3.7/site-packages/tarski-0.1.0-py3.7.egg/tarski/io/_fstrips/parser/parser.py", line 2191, in accept
return visitor.visitPredicate_definition_block(self)
File "/usr/local/lib/python3.7/site-packages/tarski-0.1.0-py3.7.egg/tarski/io/_fstrips/parser/visitor.py", line 134, in visitPredicate_definition_block
return self.visitChildren(ctx)
File "/usr/local/lib/python3.7/site-packages/antlr4_python3_runtime-4.7.1-py3.7.egg/antlr4/tree/Tree.py", line 44, in visitChildren
childResult = c.accept(self)
File "/usr/local/lib/python3.7/site-packages/tarski-0.1.0-py3.7.egg/tarski/io/_fstrips/parser/parser.py", line 2256, in accept
return visitor.visitSingle_predicate_definition(self)
File "/usr/local/lib/python3.7/site-packages/tarski-0.1.0-py3.7.egg/tarski/io/_fstrips/reader.py", line 108, in visitSingle_predicate_definition
return self.language.predicate(predicate, *argument_types)
File "/usr/local/lib/python3.7/site-packages/tarski-0.1.0-py3.7.egg/tarski/fol.py", line 234, in predicate
self._check_name_not_defined(name, self._predicates, err.DuplicatePredicateDefinition)
File "/usr/local/lib/python3.7/site-packages/tarski-0.1.0-py3.7.egg/tarski/fol.py", line 229, in _check_name_not_defined
raise exception(name, where[name])
tarski.errors.DuplicatePredicateDefinition: Duplicate definition of element "on": "on/2"

Tutorial

I have finished doing a pass over the old Tutorials in the notebooks folder, and I would need some feedback on them. @gfrances could you take a look through them?

Also, I was wondering if it would be worth the effort to integrate the example with pyperplan, just for the purpose of illustration.

RDDL Support Incoming for both Tarski and FS

After conferring with Scott Sanner and one of his students (?), Thiago Bueno, I was made aware that there is a fully functional Python 3 RDDL parser available:

https://github.com/miquelramirez/pyrddl

which is pip installable. Over the next few days I will be mimicking the FSTRIPSWriter/Reader classes so we can work directly on Scott's domains (and use Tarski to produce RDDL models).

Be consistent in FSTRIPS hierarchy attributes: `symbol`, `name`, `head`

This is a spin-off from #53: we want to be more consistent in the way we name attributes in different classes of the FSTRIPS hierarchy. Quoting a couple of comments from that issue:

  • in one of them the main attribute is called symbol, whereas in the other it's _symbol, and then a @property is there to help make things uniform. This "symbol" is sometimes a string, sometimes an object of type BuiltinSymbol, etc.
  • I see that you haven't transformed all terms yet (constants have a symbol not a name), and that there's now a new member called head defined for all CompoundTerms (but the subclasses haven't been changed).
  • The general rationale for changing symbol to name was to call "symbol" those attributes that are indeed a symbol (i.e. a function symbol or a predicate symbol); and call "name" those attributes that are just strings or literals. OTOH, the rationale for changing the interface of CompoundTerms was to have both CompoundTerm and Atom present the same interface (a head plus a number of subterms).

This is half implemented in the branch dev-0.2.0-symbols. The idea would be to fully finish this work, check for problems in clients of the library, and then merge to dev-0.2.0.

Remove dependency on scipy

The dependency on scipy should definitely be optional and loaded only when needed. Besides, if the only need we have for it at the moment is defining the symbol Pi, we can probably get rid of it altogether :-)
But surely it'd be better to think about a mechanism for importing it on demand.
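
A minimal sketch of the import-on-demand idea (assuming scipy exposes pi, with a stdlib fallback):

def pi_value():
    # Import scipy only when the value of Pi is actually requested
    try:
        import scipy
        return scipy.pi
    except ImportError:
        import math
        return math.pi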

Implement support for parsing of derived predicates

The PDDL / FSTRIPS parser is mostly working, but support for parsing PDDL axioms, aka "derived predicates", is still lacking.
Related: #12, where we had a long discussion about axioms in Tarski. However, the current issue is a bit different: we just want to be able to parse (and represent) PDDL axioms as defined in the language spec. That should be simpler.

Explore alternatives to the current ANTLR parser: PLY, SLY

This Python module just recently came to my attention:

http://www.dabeaz.com/ply/index.html

PLY is a 100% Python implementation of the common parsing tools lex and yacc.

This looks like a possible alternative to ANTLR - it is quite mature and seems to be well supported. I haven't looked at how PLY handles semantic attachments to the parser. That was the prime motivator for me to use ANTLR (that, and a cleaner syntax for both the parser and the lexer).

FSTRIPS Language: Change Syntax for External Procedures

I'd like to change the syntax we use to declare external procedures in PDDL files to something simpler, inspired by the way GPT handles this. Specifically, I would get rid of the need to prefix external symbols with an @, and simply have an extra top-level block in the PDDL declaring which symbols (usually functions or predicates) are external, e.g.:

(:predicates (valid ?p - position))
(:external valid)

Implement Grounding via compilation into ASP

From the e-mail:

ASP-based grounding: migrate your gringo interface so that, instead of taking care of parsing the PDDL itself, it takes a Tarski representation of the problem and performs the reachability analysis. The exact output format it should produce still needs to be settled.

Tasks:

  • Review existing ASP-based grounder and identify key components
  • Define output format
  • Refactor ASP grounder (if necessary)
  • Write interface adapter for the ASP grounder

Implement Simple Validation Routines

A nice thing to have in Tarski would be a simple validation routine of any FSTRIPS instance, which checks e.g. that the problem is well-formed: all functions are total, etc.

Fix Term Evaluation

There's currently a subtle but important bug in the evaluation of constant terms. The code has some explanatory comments, but basically, the denotation of a constant might not always be model-independent. Technically, a constant is a nullary function, and as such, in a planning context, it won't necessarily have a fixed denotation. A nullary function symbol looking_at in blocksworld, for instance, is a constant, although it will likely be a fluent.
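
To make the point concrete, a hedged sketch (API details may differ across versions):

import tarski

lang = tarski.language("blocks")
block = lang.sort("block")
looking_at = lang.function("looking_at", block)  # arity 0, codomain block

t = looking_at()  # syntactically a constant term...
# ...but its denotation depends on the state, so evaluating it
# requires a model rather than a fixed, model-independent value.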

PyFS Convergence

We expect several changes need to be made to the Tarski interfaces to ease the development of PyFS. So far we have identified:

  • Hashable Predicate
  • Hashable Function

Provide basic search capabilities

One thing that would be really useful for some of my projects would be to have some basic search capabilities integrated into Tarski (the code doesn't really need to live within this repo, to be discussed). I think at some point Miquel and I mentioned that an option would be to integrate pyperplan; perhaps that's not even necessary, as we already do ourselves most of what pyperplan does in terms of parsing. I am just thinking of a basic breadth-first search that lists all reachable states, or something along those lines. As a user, I'd like to use Tarski to parse a given (small) instance and then tell it to give me all reachable states, so that I can do cool stuff with them.

Again, I have this kind-of implemented with Pyperplan, but I would want to do it in a clean and elegant manner with Tarski.
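
A sketch of the kind of enumeration I mean; `successors` is a hypothetical callback mapping a state to its successor states, which Tarski would have to provide on top of a grounded model:

from collections import deque

def reachable_states(initial_state, successors):
    # Breadth-first enumeration of all states reachable from initial_state
    seen = {initial_state}
    queue = deque([initial_state])
    while queue:
        s = queue.popleft()
        yield s
        for t in successors(s):
            if t not in seen:
                seen.add(t)
                queue.append(t)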

ASP Parser - are equal / not_equal predicates necessary?

Just as a reminder: when importing the ASP preprocessor, I'd like to check whether the equal / not_equal predicates used in the ASP model (see e.g. grounder.py:181) can always (or sometimes?) be replaced by Clingo's builtin equality / inequality predicates. I need to take a deeper look at that; there might be some good reason not to do it.

FSTRIPS Writer Cannot Handle Term References

Problem: @emilkeyder is running into trouble using the FSTRIPS printing functions in tarski.io.fstrips due to print_init being out of step with the implementation of predicate and function extensions. The issue is that we needed to define the entries in the extensions of declared functions and predicates as tuples of TermReferences, but the print_init method expects raw terms in lines 88 to 96.

Proposed fix: change the calls to print_term_list to print_term_ref_list in print_init.

Provide Reachability Analysis Capabilities Through ASP

We have several open issues regarding grounding and the integration of the ASP models that we already have in the FS planner to do that task (See #15, #16, #17, #19). Just to add to the wishlist, I think when we implement that we should use the opportunity to provide basic reachability-analysis capabilities to the Tarski user. With this I simply mean having some class / methods that are able to tell whether a certain atom is ever reachable from a given state / compute the set of all atoms reachable from a certain state. We don't want to reinvent the wheel here, just adapt the ASP generator code that we already have in a modular manner.
This would probably be restricted to some fragment of FSTRIPS without function symbols, but still useful.

Revise type casting

We need to think carefully about implicit type casting. There are a couple of failing tests related to this which I think should be addressed (e.g. a lang.constant(1.0, lang.Real) should be a term, namely an instance of the class Constant, not a double, right? I might have introduced this bug in the last refactoring, but just wanted to be sure before changing the code).

More generally, however, we need to think about what to do with e.g. arithmetic built-in operations. Take for instance the builtin function symbol "+". Technically speaking, from a FOL point of view, we should consider that we have different functions "+" for different sorts: we might have an addition function symbol for reals, another for ints, another for naturals, etc. Each of them has a different sort, i.e. the sort of "+_R" is <Real, Real, Real>; the sort of "+_N" is <Natural, Natural, Natural>. This is of course not what we're doing at the moment, but it gets even worse if we want, for instance, to create a term which is the sum of a Natural and an Integer, since there is no addition symbol with signature <Natural, Integer, Integer>. These are two distinct issues. For the second one, we should consider implicitly upcasting the subtype to the supertype, in this case naturals to integers. I assume you already had something like this in mind, @miquelramirez ? One thing I think would be good from the beginning is to let the library user enable or disable these "implicit typecasts" by invoking some method in the language.

This is relevant not only to the arithmetic module, but to the core language, as we have the same considerations e.g. with the equality symbol "=".
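
For the upcasting part, the natural rule is to type mixed-sort applications at the least upper bound of the argument sorts. A sketch, with `parent` standing in for whatever accessor returns a sort's immediate supersort (None at the top):

def least_upper_bound(s1, s2, parent):
    ancestors = []
    s = s1
    while s is not None:
        ancestors.append(s)
        s = parent(s)
    s = s2
    while s is not None:
        if s in ancestors:
            return s
        s = parent(s)
    raise TypeError(f"sorts {s1} and {s2} have no common supersort")

Under this rule, adding a Natural and an Integer would produce a term of sort least_upper_bound(Natural, Integer), i.e. Integer.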

Removal of extension tuple

The entry variable here is unused, so I guess that the removal will have no effect? OTOH, the construction of the frozenset will raise an exception when tup is None, right?

AST involving terms and Python implementation of polymorphism

Python's implementation of polymorphism does not extend to built-in methods such as __add__ or __eq__. This requires subclasses to override the default implementations and call the base class method directly.

The first pass on the implementation took this into account, but introduced a fair bit of clutter. I will restore the correctness (and usefulness) of the implementation, while at the same time curbing clutter, by having Term and its subclasses acquire these overloads when the module tarski.syntax.arithmetic is loaded.
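
A sketch of that mechanism; make_arith_term is a placeholder for whatever actually builds the compound term:

def make_arith_term(op, lhs, rhs):
    # placeholder: in Tarski this would build the CompoundTerm for `op`
    raise NotImplementedError

def _binary(op):
    def overload(self, other):
        return make_arith_term(op, self, other)
    return overload

def attach_overloads(term_class):
    # executed once, when tarski.syntax.arithmetic is imported
    term_class.__add__ = _binary("+")
    term_class.__sub__ = _binary("-")
    term_class.__mul__ = _binary("*")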

Implement PDDL-like parsing

From the e-mail:

Parsing: a boring but essential point - parsing PDDL / FSTRIPS problems into an intermediate format. I expect to finish this this week, adapting the grammar you already had in the FS+ parser. That will be our standard grammar, and we will be able to throw away all the code inherited from FD, the ASP preprocessor's parser, etc.

TODO list:

  • Review the existing ANTLR grammar
  • Rewrite the callbacks for parsing events so that they capture the components modeled by the Tarski language and planning-problem classes
  • Introduce a facade that kickstarts the parsing process

Clarify use of TaskIndex

Hi @miquelramirez ,
I am trying to understand and fix / improve the TaskIndex class, which has changed from the lite branch to the dev-2.0 branch, and is breaking some of my code. In particular, assume that index is a TaskIndex... when I invoke index.process_symbols(problem), now index.fluent_symbols no longer contains symbols, but FormulaReferences. Is there any reason for this? My understanding is the following: a symbol is either fluent or static, depending on whether its denotation changes over any state in the problem. fluent_symbols used to collect which symbols in the signature of my language were fluent, and for that we don't need TermReferences or FormulaReferences (of course, we can also speak of fluent atoms and fluent terms; interestingly, not every atom formed from a fluent predicate symbol will necessarily be fluent, as it could be that a reachability analysis detects that it is static over some particular instance).

At the same time, in visitors.py a comment reads:

Visitor method to sort atoms and terms into the
"fluent" and "static" categories. Note that a given
symbol can be in both sets, this means that it gets
"votes" as static and fluent... the post_process() method
is meant to settle the issue (and potentially allow for
more elaborate/clever heuristics).

which makes me think that I am missing something. What are the cases in which a symbol can be both fluent and static? I tried to look for the post_process() method, but it doesn't seem to exist anymore.

What are the general use cases of TaskIndex in your code? Are you using it just to compute state variables? Or any other thing?

Implement Algebraic Data Types for Tarski

A long pursued objective - I have a few interesting use cases for this :-)
The objective would be to be able to deal at least with lists and sets within the specification of a problem.
These datatypes should likely be imported as a separate module:

import tarski as tsk
import tarski.list

bread_t = tsk.sort("bread")
gluten_free_bread = tsk.list('gluten_free_bread', bread_t)

Or something along those lines

The status of Externally Defined Formulas

Externally defined formulas have been disabled in this iteration. The comment in the code is as follows:

The distinction between whether a formula is axiomatic, external, etc.
should probably be done elsewhere, not here (possibly at the evaluation level)

I think that externally defined formulae do not need to be a distinct entity in the Tarski formula hierarchy; they are entirely a concern of the back ends.

Transformations: Compile function symbols away

We'd like to implement a Tarski transformation that takes any basic Functional STRIPS problem and converts it into a plain STRIPS problem (i.e. no function symbols) by applying Patrik Haslum's "Skolemization in reverse" procedure. "Basic" FSTRIPS here means that this should not deal with semantic attachments and other "advanced" FSTRIPS features.
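
For illustration, the core of the trick replaces each function with a relation that pairs the function's arguments with its value (a sketch only; API details may differ):

import tarski

lang = tarski.language("demo")
s = lang.sort("s")
t = lang.sort("t")
f = lang.function("f", s, t)        # original: f maps s into t
p_f = lang.predicate("p_f", s, t)   # relational encoding of f
# Every atom of the form  f(x) = y  is rewritten as  p_f(x, y), and the
# transformation must keep p_f functional and total across action effects.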

Possible typo in "super" hierarchy element

This line here seems to be the result of a typo, as super is a built-in name that has nothing to do with the type hierarchy, and will surely always be different from None.
BTW, I am not sure the code inside the if is correct either - isn't it missing the creation of an equality symbol for that sort, for instance?

Transformations: Compile existential quantifiers away

We'd like to implement a Tarski transformation that takes any STRIPS / FSTRIPS problem with existential quantifiers and compiles these quantifiers away, either by pushing the quantified variables into the list of action parameters (if the variable appears in an action precondition and the user so desires), or by expanding the quantification into a disjunction.
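
For illustration (using the same notation as the projection example elsewhere on this page), consider:

action a(x)
PRE: Exists z [p(x, z)]
EFF: q(x)

Pushing the variable into the parameter list yields action a(x, z) with PRE: p(x, z); expanding the quantification instead, with the domain of z being {c1, c2}, yields PRE: p(x, c1) or p(x, c2).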

Clean up copy / deepcopy usage and implementation

We have a few overrides of the special methods __copy__ and __deepcopy__ in the FOL syntax classes in Tarski. Most of them are not necessary, as they do exactly what copy.copy and copy.deepcopy would do anyway. One of them is key, however, and does something different: the one in the Sort class:

    def __deepcopy__(self, memo):
        memo[id(self)] = self
        return self

This prevents an otherwise undesirable circular recursion, namely the one that leads from a Sort back to itself, because Sort objects have a language attribute, and language objects contain the sorts in many places. copy.deepcopy should deal with this correctly, but it doesn't. I've traced down the cause of this "error": the Sort object is only partially built while it is being deepcopied, but the deepcopy process requires the value of some of its attributes to be set, e.g. to compute its hash value, etc.

Without going too much into detail on why that happens, I think it'd be better to decide how we want to deepcopy these objects, because the (circular) dependencies are many, and the Language objects are quite heavy. For that, it'd be good to collect the use cases.
@miquelramirez , you mentioned that this is necessary for transformations, etc.
Could you elaborate a bit on that?

In case the above is not too clear (sorry, in a bit of a rush now), another way of looking at this is: if I want to deepcopy, say, the Constant object Constant(1)... do I want this deepcopy to trigger the copy of the entire first-order language, including duplicating all sorts, all other constants, functions, predicates, and so on? Probably not, although I'm not 100% sure. If we have a clear idea of what we want this for, we'll be able to "short-circuit" the deep-copying mechanism in the appropriate places (as in the method above, which stops the recursion and does a simple shallow copy), and document that adequately.

Backends: SAS / FDR writer

We'd like to implement a SAS / FDR writer that takes a STRIPS / FSTRIPS problem and outputs a file in the corresponding format, which can then be fed into planners such as Fast Downward. The reference for that would be the Fast Downward preprocessor and Malte's JAIR / AIJ articles ("The Fast Downward Planning System", "Concise finite-domain representations for PDDL planning tasks").
This will likely depend on #61, #62, if we want it to support interesting FSTRIPS models.

Should languages have the EQUALITY theory attached by default?

Consider the following test case in

https://github.com/aig-upf/tarski/blob/dev-0.2.0/tests/fol/test_interpretations.py

def test_predicate_extensions2():
    import numpy

    lang = tarski.language(theories=[Theory.EQUALITY])
    leq = lang.predicate('leq', lang.Real, lang.Real)
    w = lang.function('w', lang.Object, lang.Real)
    o1 = lang.constant("o1", lang.Object)
    o2 = lang.constant("o2", lang.Object)

    model = Model(lang)
    model.evaluator = evaluate

    model.setx(w(o1), 1.0)
    model.setx(w(o2), 2.0)
    for x in numpy.arange(0.0, 5.0):
        for y in numpy.arange(x, 5.0):
            model.add(leq, x, y)

    assert model[leq(w(o1), w(o2))] == True
    assert model[leq(w(o2), w(o1))] == False

If Theory.EQUALITY is not specified, we get an exception when trying to evaluate the formulas

leq(w(o1), w(o2))

and

leq(w(o2), w(o1))

The reason for that seems to be that we need the operator == defined for constants which are mappable to integral types. I am not 100% sure why we're not running into issues with arbitrary constants; probably because at some point '==' is being applied over the id() of the objects?

I tend to think that this is a bug: my understanding of @gfrances' design is that one could define all of algebra from first principles... that seems not to be the case.

Develop a Library of Problem Generators

I should find time to port to Tarski some of the countless problem generators that I have written for different papers / projects. Having some generators submodule (again, no need for it to live under this repo) that is basically a user module of Tarski and its io module, and that provides off-the-shelf generators for standard problems (blocksworld, gripper, and so on), would be a great way of showcasing the capabilities of Tarski (in fact, the change from my old Python generator scripts to developing generators using Tarski has really saved me a lot of time). Eventually it could grow into a good library of problem generators that fosters collaboration from the community, etc.

Refactor Interpretation / Model Objects

I'd like to refactor our design for FOL models a bit. The current Model class is a great starting point, but we should have a (perhaps different) "planning model" object which takes into account information about which predicate and function symbols are static, to minimize the memory footprint, and which, essentially, is able to do the same key operation as the current models: computing the denotation of any formula or term. The distinction between static / dynamic symbols unfortunately does not belong to FOL per se, which is why I am reluctant to modify the current Model class. We should stick to the principle that "pure FOL" concepts are implemented as cleanly as possible, and planning concepts are implemented separately, possibly as wrappers of the FOL concepts.

The driving motive of this change is not just keeping the memory footprint low by identifying static info, but also achieving a full integration with the DLModels that I have in the lite branch. These are Description Logic models, and their key operation is to return the denotation of any DL concept or role.
But concepts and roles are just a fragment of FOL, so DLModel should essentially be a thin wrapper over a standard FOLModel (or maybe we don't even need a wrapper). The reason why I implemented that independently was that I needed the static distinction.

We have to keep in mind that a state is a model. I would like to have a couple of creators / adaptors / whatever design pattern we want that take e.g. a pyperplan state and convert it into a Tarski planning model. These will be very simple classes, nothing too complicated here.

Finally, the current models are based on Python strings, but eventually we'll want to move to assigning numeric IDs to predicate / function / constant symbols, and keeping everything a bit more performant. I wouldn't do this yet, though; better to first get the design of the rest of things right.

Implement Naive Grounding

From the e-mail

Naive grounding: not ideal, but we won't always be able to use the ASP grounder (e.g. when functions are available, etc.). An interesting question is whether we want to "force" the use of the ASP grounder when possible, e.g. by reformulating the problem into STRIPS without functions (just for the purpose of feeding it into the ASP module), and then extracting the info from there.

Tasks:

  • Define output format
  • Implement state variable calculation procedure
  • Implement action grounding procedure
  • Implement universal quantifier elimination
  • Implement constraint grounding procedure
  • Implement HDF5 writer for storing groundings of symbols

Uniformize Predicate and Function interface

The current design is not too consistent between classes Predicate and Function. They share no code, which is not nice (given that they are extremely similar, except for functions having a codomain, which for predicates is assumed to be boolean). Even worse, in one of them the main attribute is called symbol, whereas in the other it's _symbol, and then a @property is there to help make things uniform. This "symbol" is sometimes a string, sometimes an object of type BuiltinSymbol, etc.

In short, I would like to address these somewhat minor inconsistencies, which nevertheless make other code in many places unnecessarily complex by forcing it to special-case predicates and functions, etc.

Functional STRIPS with action costs refactorings

Following up on the changes proposed by @emilkeyder on #38 :

  1. There is no need to add specific attributes to tarski.fstrips.Problem in order to generate domain definitions compatible with IPC planners. I would rather do this by adding rules to determine requirements in tarski.io._fstrips.common.
  2. IncreaseEffect needs to be refactored to avoid complicating the generation of compatible PDDL action effects. We do not need to call the constructor of the super class; we need to ensure that the interface is consistent with the requirements of the type hierarchy, that is, that all attributes are defined.
  3. The ARITHMETIC theory is not needed for Parc Printer, as we are not using any of the arithmetic operators (i.e. +, etc.)

Rethink builtin symbols

I'm starting to think that it might be more elegant to provide a "core language" module and then have all the builtins that we want to provide (perhaps we don't want to provide them monolithically, e.g. the arithmetic function symbols might only be loaded if explicitly required, etc.) separately. I.e. we create a language and then configure it with the "addons" that we want; each addon provides builtin symbols, etc. Both at the syntax and semantic level. To be thought about.

This is somewhat related to issue #6.
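
A sketch of what such opt-in configuration could look like, close in spirit to the Theory.EQUALITY flag that already appears in the test quoted elsewhere on this page:

import tarski
from tarski.theories import Theory

# Only the requested "addons" attach their builtin symbols to the language
lang = tarski.language("mylang", theories=[Theory.EQUALITY, Theory.ARITHMETIC])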

`resolve_function_symbol` is not suitable

to account for function symbols with arity different from 2. The previous iteration included the suffix _2 to indicate that it was the version of the method with 2 parameters. That was not a great solution, as it harks back to the symbol/arity convention of Prolog, which was a PITA.

resolve_function_symbol needs to be refactored so that its argument list is variadic (i.e. using *args).
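
A sketch of the variadic shape; get_function stands in for however the language's symbol table is actually accessed:

def resolve_function_symbol(lang, symbol, *args):
    # Variadic version: valid for any arity, not just 2
    func = lang.get_function(symbol)  # assumes an accessor by name
    if func.arity != len(args):
        raise ValueError(f"{symbol}/{func.arity} applied to {len(args)} arguments")
    return func(*args)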

Transformations: Project away non-effect variables

We'd like to implement a Tarski transformation that takes any STRIPS / Functional STRIPS problem and, for those action schemas that contain parameters (variables) appearing only in the precondition of the schema but not in the effects, turns these variables into existentially quantified variables. To illustrate:

action a(x, y, z)
PRE: p(x, y) and q(y, z)
EFF: not p(x, y)

would become:

action a(x, y)
PRE: Exists z [p(x, y) and q(y, z)]
EFF: not p(x, y)

where the action schema has one parameter less.

Implement Some Axiom Inference Algorithm

This is a more far-fetched project, but I'd like to implement some algorithms (perhaps a good starting point would be Miura & Fukunaga, ICAPS 2017) to infer axioms from the problem description.

Discuss how to deal with shadowing of Python built-in function names

We are currently overriding a few of Python's builtin names with our own function names, namely: pow, min, max, abs. I am not in principle against this, but at the same time it is true that I've seen some

from blablabla import *

in our code. The conjunction of both things worries me a bit more, since we're opening the door for the user to inadvertently override common functions like pow. I understand that it is the responsibility of the user whether to use * imports, but in general we might spare potential users some headaches if we found some workaround for this. What do you think, @miquelramirez ? (A simple possibility would be to rename pow to power, and similar renamings.)
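
Another lightweight mitigation, sketched under the assumption that the overloads live in a single module (the exported names below are placeholders): control the star-import surface with __all__, so that pow, min, max and abs must be imported explicitly.

# In the module defining the overloads; "summation" and "ite" are
# placeholder names for whatever the module legitimately exports.
__all__ = ["summation", "ite"]  # deliberately omits pow, min, max, abs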

Possible bug when parsing domain with universally-quantified effects?

Commit 25b6555 introduces a new test on a domain with universally quantified effects; the test is failing, I suspect due to some bug with those effects, but I don't have time to check that now. It could also be that by now this has been solved in the dev branch, but I thought that having the test on that simple domain wouldn't hurt either.
