caelan / pddlstream

PDDLStream: Integrating Symbolic Planners and Blackbox Samplers

Home Page: https://arxiv.org/abs/1802.08705

License: GNU General Public License v3.0

Python 99.98% Shell 0.02%
Topics: pddl, robotics, motion-planning, automated-planning, fastdownward, pybullet, pybullet-planning, planning-algorithms, artificial-intelligence, task-and-motion-planning

pddlstream's Introduction

pddlstream

PDDLStream is a planning framework consisting of an action language and a suite of algorithms for Artificial Intelligence (AI) planning in the presence of sampling procedures. PDDLStream extends the Planning Domain Definition Language (PDDL) by introducing streams: declarative specifications of sampling procedures. PDDLStream algorithms are domain independent and solve PDDLStream problems given only a blackbox description of each sampler. The motivating application of PDDLStream was general-purpose robot Task and Motion Planning (TAMP).

The default pddlstream branch (main) is the newest stable "release" of pddlstream. The downward pddlstream branch is the most recent and advanced version of pddlstream, but it is also somewhat experimental.

Publications

Citation

Caelan R. Garrett, Tomás Lozano-Pérez, Leslie P. Kaelbling. PDDLStream: Integrating Symbolic Planners and Blackbox Samplers via Optimistic Adaptive Planning, International Conference on Automated Planning and Scheduling (ICAPS), 2020.

Contact

Caelan Garrett: [username]@csail.mit.edu

History

PDDLStream is the "third version" of the PDDLStream/STRIPStream planning framework, intended to supersede previous versions:

  1. https://github.com/caelan/stripstream
  2. https://github.com/caelan/ss

PDDLStream makes several representational and algorithmic improvements over these versions. Most notably, it adheres to PDDL conventions and syntax whenever possible and contains several new algorithms.

Installation

$ git clone --recursive --branch main git@github.com:caelan/pddlstream.git
$ cd pddlstream
pddlstream$ git submodule update --init --recursive
pddlstream$ ./downward/build.py

If necessary, see FastDownward's documentation for more detailed installation instructions.

PDDLStream actively supports Python 2.7 as well as the most recent version of Python 3.

Make sure to recursively update pddlstream's submodules when pulling new commits.

pddlstream$ git pull --recurse-submodules

Examples

This repository contains several robotic and non-robotic PDDLStream example domains.

PyBullet

Install PyBullet on OS X or Linux using:

$ pip install pybullet numpy scipy

Examples:

  • PR2 TAMP: pddlstream$ python -m examples.pybullet.tamp.run
  • PR2 Cleaning and Cooking: pddlstream$ python -m examples.pybullet.pr2.run
  • Turtlebot Rovers: pddlstream$ python -m examples.pybullet.turtlebot_rovers.run
  • PR2 Rovers: pddlstream$ python -m examples.pybullet.pr2_rovers.run
  • PR2 Planning and Execution: pddlstream$ python -m examples.pybullet.pr2_belief.run
  • Kuka Cleaning and Cooking: pddlstream$ python -m examples.pybullet.kuka.run

See https://github.com/caelan/pybullet-planning for more information about my PyBullet planning primitives library.

Python TKinter

Install numpy and Python TKinter on Linux using:

$ sudo apt-get install python-tk
$ pip install numpy

Examples:

  • 1D Continuous TAMP: pddlstream$ python -m examples.continuous_tamp.run
  • 2D Motion Planning: pddlstream$ python -m examples.motion.run
  • Discrete TAMP: pddlstream$ python -m examples.discrete_tamp.run
  • Discrete TAMP with pushing: pddlstream$ python -m examples.discrete_tamp.run

Pure Python

Simple examples that can be run without additional dependencies:

  • Blocksworld: pddlstream$ python -m examples.blocksworld.run
  • Blocksworld with Derived Predicates: pddlstream$ python -m examples.blocksworld.run_derived
  • Kitchen (debug streams): pddlstream$ python -m examples.kitchen.run

Advanced Functionality

Test cases or advanced (and undocumented) functionality:

  • Action Description Language (ADL): pddlstream$ python -m examples.advanced.adl.run
  • Deferred streams (postponed evaluation): pddlstream$ python -m examples.advanced.defer.run
  • Exogenous streams (observations): pddlstream$ python -m examples.advanced.exogenous.run
  • Fluent streams (state constraints): pddlstream$ python -m examples.advanced.fluent.run
  • Constraint satisfaction: pddlstream$ python -m examples.advanced.satisfy.run
  • Wild streams (ad hoc certification): pddlstream$ python -m examples.advanced.wild.run

International Planning Competition (IPC)

Unmodified PDDL IPC examples solved using PDDLStream's modified translator:

  • Rovers: pddlstream$ python -m examples.ipc.rovers.run
  • Satellites: pddlstream$ python -m examples.ipc.satellites.run

Applications

External projects that make use of PDDLStream:

Algorithms

PDDLStream is a planning framework with a single planning language but multiple planning algorithms. Some of the algorithms are radically different from others (e.g. Incremental vs. Focused), so planning time can also vary substantially. The Adaptive algorithm typically performs best in domains with many possible sampling pathways, such as robot manipulation domains.

The meta procedure solve(...) allows the user to toggle between the available algorithms using the keyword argument algorithm={}.
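Purely as an illustration of this interface (the function bodies below are placeholders, not pddlstream's implementation), such a meta procedure can be sketched as a dictionary dispatch on the algorithm keyword:

```python
# Illustrative sketch only: a meta solve() that dispatches on an
# `algorithm` keyword argument. The function bodies are placeholders.
def solve_incremental(problem, **kwargs):
    return 'incremental', problem

def solve_adaptive(problem, **kwargs):
    return 'adaptive', problem

ALGORITHMS = {
    'incremental': solve_incremental,
    'adaptive': solve_adaptive,
}

def solve(problem, algorithm='adaptive', **kwargs):
    # Toggle between the available algorithms by name.
    if algorithm not in ALGORITHMS:
        raise ValueError('Unknown algorithm: {}'.format(algorithm))
    return ALGORITHMS[algorithm](problem, **kwargs)
```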

Property descriptions:

  • Method: the Python function that calls the algorithm
  • Negated streams: whether the algorithm supports inverting test streams
  • Fluent streams: whether the algorithm supports fluent streams that additionally condition on the fluent state
  • Wild streams: whether the algorithm supports streams that additionally can certify ad hoc facts

Adaptive

  • Method: solve_adaptive(...)
  • Negated streams: supported
  • Fluent streams: supported
  • Wild streams: supported

Binding

  • Method: solve_binding(...)
  • Negated streams: supported
  • Fluent streams: supported
  • Wild streams: supported

Focused

Incremental

  • Method: solve_incremental(...)
  • Negated streams: not supported
  • Fluent streams: not supported
  • Wild streams: supported

Search Subroutines

Many (but not all) pddlstream algorithms have a discrete planning phase that can be implemented using any finite state-space search algorithm, such as Breadth-First Search (BFS) or Uniform-Cost Search (UCS). However, because pddlstream extends PDDL, this planning phase can also be implemented by state-of-the-art classical planning algorithms, which leverage the factored structure of action languages such as PDDL to vastly improve empirical planning efficiency. Best-first heuristic search algorithms, which automatically derive heuristics in a domain-independent manner, are one example class of these algorithms.
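For intuition, the discrete planning phase can be pictured as a generic graph search. Here is a minimal, self-contained BFS sketch over an abstract state space (an illustrative stand-in, not pddlstream's own search code):

```python
from collections import deque

def breadth_first_search(start, is_goal, successors):
    """Return a list of actions reaching a goal state, or None on failure.

    `successors(state)` yields (action, next_state) pairs.
    """
    parents = {start: None}  # state -> (previous state, action taken)
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if is_goal(state):
            plan = []
            while parents[state] is not None:  # walk back to the start
                state, action = parents[state]
                plan.append(action)
            return plan[::-1]
        for action, successor in successors(state):
            if successor not in parents:  # first visit is the shortest
                parents[successor] = (state, action)
                queue.append(successor)
    return None  # state space exhausted without reaching the goal

# Toy domain: count from 0 up to 3 using an 'inc' action.
plan = breadth_first_search(0, lambda s: s == 3,
                            lambda s: [('inc', s + 1)] if s < 3 else [])
```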

FastDownward

pddlstream comes pre-packaged with FastDownward, a widely used library containing many best-first heuristic search PDDL planning algorithms. I've preconfigured a small number of effective and general search algorithms in SEARCH_OPTIONS, which can be toggled using the keyword argument planner=?. I've roughly ranked them in order of least lazy (lowest cost) to most lazy (lowest runtime):

The runtime of the discrete planning phase varies depending on the selected search algorithm. For many non-adversarial problems, these algorithms will either solve a problem nearly instantaneously or, if they aren't greedy enough, fail to terminate within 10 minutes. I recommend starting with a greedier configuration and moving toward a less greedy one if desired.

Other PDDL Planners

Any PDDL planning algorithm could be used in place of FastDownward; however, a caveat is that some of these planners support only a limited set of representational features (e.g. no conditional effects, no derived predicates, etc.), which can make modeling more difficult and ultimately planning less efficient in many real-world (non-IPC) planning domains. While I heavily recommend FastDownward, some PDDL planners that I've interfaced with in the past with some success include:

Classical Planners:

Numeric Planners:

Temporal Planners:

Diverse Planners:

Resources

Retired

"Retired" folders indicate code that no longer is continuously supported and thus is likely outdated.

Drake

Install Drake on OS X or Ubuntu by following the instructions at http://drake.mit.edu/installation.html.

Alternatively, install Drake through Docker by following the instructions at http://manipulation.csail.mit.edu/install_drake_docker.html. Use the appropriate docker_run_bash script with docker tag drake-20181128.

Examples:

  • Kuka IIWA task and motion planning: ~/pddlstream$ python -m examples.drake.run

Additional PDDLStream + Drake examples can be found at: https://github.com/RobotLocomotion/6-881-examples.

pddlstream's People

Contributors

aidan-curtis, caelan, cpaxton, sea-bass


pddlstream's Issues

Optimistic algorithms unable to solve TAMP problem with movable obstacles

Hello,

I'm trying to solve a TAMP problem where a robot r is in an office-like map in a starting position s, and it has to traverse n doors {d0, d1, ..., dn}, initially closed, to reach a final destination g. Walls of the map are fixed obstacles, while doors are movable obstacles.
I have two actions: open and move. The move action moves the robot between two locations while avoiding obstacles (the doors and the walls); I have a stream that tries to compute a collision-free path via RRT and an ad hoc collision checker. The open action allows the robot to open a door when positioned in front of it, like pushing a button (b0 for d0, b1 for d1, ..., bn for dn). Once the button is pushed, the door configuration changes instantaneously from closed to open.

The incremental algorithm (with FastDownward) can find a valid plan. For example, if n=2:

move(r, s, b0, d0, c0, d1, c1) # move r from s to button b0 with d0 and d1 closed
open(d0, c0, o0) # open d0
move(r, b0, b1, d0, o0, d1, c1) # move r from button b0 to button b1 with d0 open and d1 closed
open(d1, c1, o1) # open d1
move(r, b1, g, d0, o0, d1, o1) # move from button b1 to g with all doors open

All the other algorithms (focused, binding, adaptive, with FastDownward), however, are unable to solve the problem even when the number of doors is small. For example, with n=2, the focused algorithm exits with:

Stream plan (inf, 0, inf): False
Action plan (inf, inf): False
Summary: {complexity: 2, cost: inf, evaluations: 31, iterations: 10, length: inf, run_time: 36.669, sample_time: 35.535, search_time: 1.134, skeletons: 0, solutions: 0, solved: False, timeout: False}

Precisely, the status is INFEASIBLE because exhausted=True when calling iterative_plan_streams() in refinement.py.

Is this behavior reasonable? Could you please help me solve this issue?

Does PDDLStream support multi-agent planning?

In the example figure below, the goal is to put block B in the red region. First, I want r0 to move block B closer towards r1. Then r1 takes block B towards the red region.

If it supports multi-agent planning, where do I specify that?

(figure omitted)

Some issues when running drake

Hello, I'm trying to run the demo in Drake. However, I find that pydrake now only supports Python 3, and the Drake API documentation differs a lot from your code. Can you please share the environment settings for the Drake demo? The settings should include your Ubuntu version, Python version, and the date of the binary package. Thanks a lot!

incremental algorithm on kitchen example

Hi Caelan, thanks for sharing your code.
I chose to start with the simplest algorithm, so I'm trying to run the incremental algorithm on the kitchen example. I commented out "stream sample-motion" in stream.pddl and uncommented stream sample-motion-h. The incremental algorithm now runs on the PDDLStream problem, but the sample-motion-h stream always generates the same pose, so the algorithm never solves the problem.
Perhaps there are some other things I should edit in the code to get the algorithm running properly. Can you please guide me on how to do that?

What is :rule ?

In the stream file for the discrete TAMP example, the first few lines read:

  (:rule
    :inputs (?q ?p)
    :domain (Kin ?q ?p)
    :certified (and (Conf ?q) (Pose ?p))
  )

If I comment this out, no plan is found. I'm wondering what this :rule is for and what it does.

No module named pddl.f_expression

python -m examples.continuous_tamp.run

pddlstream/pddlstream/algorithms/downward.py", line 58, in <module>
    import pddl.f_expression
ModuleNotFoundError: No module named 'pddl.f_expression'

FastDownward version broken?

There seems to be something wrong with the version of FastDownward that is included here. I think it has to do with the blind search heuristic: it fails to solve trivial problems.

Here is a minimal domain and problem file that reproduce the bug:

(define (domain sanity)
    (:requirements :strips)
    (:predicates
        (isA ?obj)
        (isB ?obj)
    )
)
(define (problem check) 
 (:domain sanity)
    (:objects
        A
        B
    )
    (:INIT
        (isA A)
        (isB B)
    )
    (:goal 
        (or (isA A) (isB B))
    )
)

Planner output:

# /pddlstream/FastDownward/fast-downward.py --plan-file plan pddl/sanity/domain.pddl pddl/sanity/problem.pddl  --heuristic "h=blind()" --search "astar(h)"
INFO     Running translator.
INFO     translator stdin: None
INFO     translator time limit: None
INFO     translator memory limit: None
INFO     translator command line string: /opt/conda/envs/ikea/bin/python /pddlstream/FastDownward/builds/release32/bin/translate/translate.py pddl/sanity/domain.pddl pddl/sanity/problem.pddl --sas-file output.sas
Parsing...
Parsing: [0.000s CPU, 0.002s wall-clock]
Normalizing task... [0.000s CPU, 0.000s wall-clock]
Instantiating...
Generating Datalog program... [0.000s CPU, 0.000s wall-clock]
Normalizing Datalog program...
Normalizing Datalog program: [0.000s CPU, 0.000s wall-clock]
Preparing model... [0.000s CPU, 0.000s wall-clock]
Generated 5 rules.
Computing model... [0.000s CPU, 0.000s wall-clock]
10 relevant atoms
0 auxiliary atoms
10 final queue length
11 total queue pushes
Completing instantiation... [0.000s CPU, 0.000s wall-clock]
Instantiating: [0.000s CPU, 0.001s wall-clock]
Computing fact groups...
Finding invariants...
0 initial candidates
Finding invariants: [0.000s CPU, 0.000s wall-clock]
Checking invariant weight... [0.000s CPU, 0.000s wall-clock]
Instantiating groups... [0.000s CPU, 0.000s wall-clock]
Collecting mutex groups... [0.000s CPU, 0.000s wall-clock]
Choosing groups...
1 uncovered facts
Choosing groups: [0.000s CPU, 0.000s wall-clock]
Building translation key... [0.000s CPU, 0.000s wall-clock]
Computing fact groups: [0.000s CPU, 0.000s wall-clock]
Building STRIPS to SAS dictionary... [0.000s CPU, 0.000s wall-clock]
Building dictionary for full mutex groups... [0.000s CPU, 0.000s wall-clock]
Building mutex information...
Building mutex information: [0.000s CPU, 0.000s wall-clock]
Translating task...
Processing axioms...
Simplifying axioms... [0.000s CPU, 0.000s wall-clock]
Processing axioms: [0.000s CPU, 0.000s wall-clock]
Translating task: [0.000s CPU, 0.000s wall-clock]
0 effect conditions simplified
0 implied preconditions added
Detecting unreachable propositions...
0 operators removed
0 axioms removed
0 propositions removed
Detecting unreachable propositions: [0.000s CPU, 0.000s wall-clock]
Reordering and filtering variables...
1 of 1 variables necessary.
0 of 0 mutex groups necessary.
0 of 0 operators necessary.
1 of 1 axiom rules necessary.
Reordering and filtering variables: [0.000s CPU, 0.000s wall-clock]
Translator variables: 1
Translator derived variables: 1
Translator facts: 2
Translator goal facts: 1
Translator mutex groups: 0
Translator total mutex groups size: 0
Translator operators: 0
Translator axioms: 1
Translator task size: 5
Translator peak memory: 37164 KB
Writing output... [0.000s CPU, 0.000s wall-clock]
Done! [0.000s CPU, 0.005s wall-clock]

translate exit code: 0
INFO     Running search (release32).
INFO     search stdin: output.sas
INFO     search time limit: None
INFO     search memory limit: None
INFO     search command line string: /pddlstream/FastDownward/builds/release32/bin/downward --heuristic 'h=blind()' --search 'astar(h)' --internal-plan-file plan < output.sas
reading input... [t=2.1421e-05s]
done reading input! [t=6.4301e-05s]
Initializing blind search heuristic...
Building successor generator...done! [t=0.000172728s]
peak memory difference for successor generator creation: 0 KB
time for successor generation creation: 2.683e-06s
Variables: 1
FactPairs: 2
Bytes per state: 4
Conducting best first search with reopening closed nodes, (real) bound = 2147483647
Initial state is a dead end.
Initial heuristic value for blind: infinity
pruning method: none
Completely explored state space -- no solution!
Actual search time: 7.493e-06s [t=0.000232899s]
Expanded 0 state(s).
Reopened 0 state(s).
Evaluated 1 state(s).
Evaluations: 1
Generated 0 state(s).
Dead ends: 0 state(s).
Number of registered states: 1
Int hash set load factor: 1/1 = 1
Int hash set resizes: 0
Search time: 2.8906e-05s
Total time: 0.000236778s
Search stopped without finding a solution.
Peak memory: 4908 KB

search exit code: 12
Driver aborting after search

Include individual action costs in solution

When you solve a problem using PDDLStream, you get access to all the individual actions and the total cost of the plan as the first and second elements of the solution tuple.

However, it would be really convenient to also get the cost of each action individually as part of this solution.

The easiest thing to do would be for the Action namedtuple to include name, args, and an additional cost field.

Right now there is the workaround of re-evaluating each cost function for each action in the plan, but it would be nice to avoid having to do this.
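A minimal sketch of the proposal, assuming a namedtuple-based Action (the default-value handling below is my own suggestion, not existing pddlstream code):

```python
from collections import namedtuple

# Hypothetical extended Action with a per-action cost; `cost` defaults to
# None so existing (name, args) call sites would keep working. This is a
# proposal sketch, not pddlstream's current definition.
Action = namedtuple('Action', ['name', 'args', 'cost'])
Action.__new__.__defaults__ = (None,)  # default applies to `cost` only

plan = [Action('move', ('q0', 'q1'), cost=2.5),
        Action('pick', ('block',))]
total_cost = sum(a.cost or 0 for a in plan)
```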

'examples.pybullet.kuka.run' failed when I used 'AtPose'

Hi Caelan,

I really appreciate your great work. I was able to run the example code "examples.pybullet.kuka.run", but when I tried to revise the goal state, it could not run successfully. I tried 'AtPose', but it didn't work. To eliminate the influence of other variables, I set the goal object pose to the initial pose. The code in the 'pddlstream_from_problem' function is:

    #Line 93 function:pddlstream_from_problem
    body = movable[0]
    pose = BodyPose(body, get_pose(body))
    goal = ('and',
            ('AtConf', conf),
            ('AtPose', body, pose),
    )

The code with the above goal could not be performed successfully. No matter what pose parameters I tried, all failed. The log shows the Stream plan and Action plan are None:

    Attempt: 1 | Results: 24 | Depth: 0 | Success: False
    Stream plan (inf, 0, inf): False
    Action plan (inf, inf): False

If I comment out the 'AtPose' line, the code runs successfully. I tried to debug it but could not find the reason. Could you help me check this problem? I really appreciate it.

Thanks,
Nick Tian

file handle leak

I noticed that there are some file handle leaks in the code. The symptom is that if you run the planner, e.g., solve_focused, in an infinite loop, it eventually crashes with OSError: [Errno 24] Too many open files: 'temp/'. I've been looking for where the leak happens, but no luck so far. You did a really great job of preventing such leaks by using context managers for opening files. Hence I suspect the leak happens in the FastDownward process as opposed to the main Python process, but it's just a guess.

How to control the order of the objects to be manipulated?

Hi Caelan,

Thanks for your great work. I have tried the example code. I tried to manipulate a set of identical objects, in no particular order, onto the goal panel. The final result shows that the algorithm randomly chooses the object to be manipulated. My question is: if I want the robot to first try to manipulate a subset of the objects, how can I control the order?

Thanks for your help. Looking forward to your reply.

Nick

examples.pybullet.pr2.run failed

Hi Caelan,

Thanks for the great repo! I was able to run the kuka example, but when I was trying to run examples.pybullet.pr2.run, it gives me the following error:

assert(isinstance(fd, pddl.Literal) and not fd.negated)

Any idea what might be the cause?

Collision disabled

Hi,

When running the tamp example (from the main branch), with or without the --cfree flag, plans ignore collisions. Is there something else I need to change? I am also getting the following warnings, which seem related.

b3Printf: No inertial data for link, using mass=1, localinertiadiagonal = 1,1,1, identity local inertial frame
b3Printf: b3Warning[examples/Importers/ImportURDFDemo/BulletUrdfImporter.cpp,126]:

(the same warning pair repeats for several links: r_gripper_tool_frame, l_gripper_led_frame, l_gripper_tool_frame, l_forearm_cam_optical_frame)
Thanks in advance for any help!

:typing support

Hi. As the number of arguments in my PDDL actions has grown, I've noticed that solving the underlying PDDL problem has become more and more of a limiting factor. I believe this could be mitigated substantially if my action preconditions were typed, so as to reduce the number of actions considered by the planner; however, this is not supported at the moment. Could you give some insight as to why that is the case (my intuition is that typing the output of streams might be tricky for some reason)? If you've given any thought to adding support for this feature, do you know what would need to be changed in the codebase to enable it?

Write a manual

Hello,

Can you write a short manual on how to use your solver? I'd like to try pddlstream on my custom problem, and it's very hard to get started. I'm especially interested in using it in the PyBullet Kuka environment.

-Dmitry

Make a package

Hello,

Can you make a package to simplify its use?

-Dmitry

Can't find a plan in the 'Kuka Cleaning and Cooking' scenario using a Panda

Hello, I've tried to change the robot in the 'Kuka Cleaning and Cooking' scenario, using the Franka Panda arm (without the hand). Here are the changes I made:

  • pybullet_tools/kuka_primitives.py
TOOL_FRAMES = {
    'panda': 'panda_link8',
  }
  • pybullet_tools/utils.py
PANDA_ARM_URDF = "models/franka_description/robots/panda_arm.urdf"
  • models/franka_description/robots/panda_arm.urdf
    • replace every occurrence of package://franka_description with ..
  • examples/pybullet/kuka/run.py
robot = load_model(PANDA_ARM_URDF, fixed_base=True)

However, the plan exploration stops after just a few attempts and I get the following result:

Solved: False
Cost: inf
Length: inf
Deferred: 0
Evaluations: 2

The program exits at https://github.com/caelan/pddlstream/blob/main/examples/pybullet/kuka/run.py#L192 (because plan is None).

Maybe I need to change something else to use another robot arm? Any (other) hints?
Thanks!
Matteo

Issue with running the Tamp and Kuka examples with args.simulate=True

I've been trying to run examples.pybullet.tamp.run and examples.pybullet.kuka.run with args.simulate set to True; however, once I reach the simulation (specifically, the lines with control_commands() and command.control(), respectively), it freezes. In TAMP, the simulation runs through four steps before freezing; the problem seems to be that it cannot step forward in the simulation because some condition is unsatisfied. In Kuka, as soon as the simulation starts, the platform and all objects immediately drop from the plane they are on, and the robot is unable to find them, so it freezes soon after. This can be recreated by inserting args.simulate=True immediately after args is defined in each of the examples.

Customize environments

Hello!

I am trying to modify the code to be able to compare it to other planning approaches. I would be glad if you could provide some guidance. The first task I would like to solve is moving objects in 3D, e.g. from some initial pose to some goal pose. Can you please advise how to modify the code to solve this problem with custom objects (created as blocks of a certain size or constructed from meshes) in a custom environment (a table of a certain size, a certain pose of the robot)? Maybe you can point me to some files/functions that I would need to update in order to customize the environment. Thank you very much in advance!

Easiest way to extract predicate value

What would be the easiest way to tell whether a predicate is true given the current state of the environment? It looks like the evaluations returned by solve_focused are for the entire plan.

PyBullet visualization problem

The installation of PDDLStream and PyBullet went well. But when I tried the PR2 demo, python -m examples.pybullet.pr2.run, the visualization didn't work (snapshot below), while the planning itself seems to work smoothly. The only things I can see in the Bullet browser are two moving "cooked" and "cleaned" labels...

I know it is not necessarily PDDLStream's problem and likely to be a pybullet thing, but any hint on this? The pybullet install on my Ubuntu 16.04 is clean without any modification.

(screenshot omitted)

EDIT: This error has been reproduced on two of my Ubuntu 16.04 virtual machines.

TypeError: unsupported operand type(s) for -: 'tuple' and 'float'

I encountered this issue when I tried to run the command python -m examples.pybullet.tamp.run.
File "/home/lu/Desktop/Code/pddlstream/examples/pybullet/utils/pybullet_tools/utils.py", line 3728, in <genexpr>
    return tuple(circular_difference(value2, value1) if circular else (value2 - value1)
TypeError: unsupported operand type(s) for -: 'tuple' and 'float'
After debugging I found the problem was caused by the apply function in pr2_primitives.py.

    def apply(self, state, **kwargs):
        joints = get_gripper_joints(self.robot, self.arm)
        start_conf = get_joint_positions(self.robot, joints)
        end_conf = [self.position] * len(joints)
        if self.teleport:
            path = [start_conf, end_conf]
        else:
            extend_fn = get_extend_fn(self.robot, joints)
            path = [start_conf] + list(extend_fn(start_conf, end_conf))
        for positions in path:
            set_joint_positions(self.robot, joints, positions)
            yield positions

The type of start_conf is tuple with value (0.548, 0.548, 0.548, 0.548),
however the end_conf is list with value [(0.4298039215686276, 0.4298039215686276, 0.4298039215686276, 0.4298039215686276), (0.4298039215686276, 0.4298039215686276, 0.4298039215686276, 0.4298039215686276), (0.4298039215686276, 0.4298039215686276, 0.4298039215686276, 0.4298039215686276), (0.4298039215686276, 0.4298039215686276, 0.4298039215686276, 0.4298039215686276)].

According to the joint_from_name functions, there are four joints: l_gripper_l_finger_joint, l_gripper_r_finger_joint, l_gripper_l_finger_tip_joint, and l_gripper_r_finger_tip_joint.

I guess the problem is caused by the dimension mismatch between start_conf and end_conf: the code calls get_extend_fn, which in turn calls get_difference_fn, and the program finally crashes at line 3728.
I would appreciate it if anyone could offer any advice.
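To make the report concrete, here is a self-contained reproduction of the type mismatch together with one possible fix; the isinstance-based guard is an assumption of mine, not a confirmed patch:

```python
# Reproduction of the mismatch: when `position` is already a tuple of
# joint values, [position] * len(joints) yields a list of tuples, so a
# later element-wise (value2 - value1) subtraction mixes tuple and float.
joints = ['l_gripper_l_finger_joint', 'l_gripper_r_finger_joint',
          'l_gripper_l_finger_tip_joint', 'l_gripper_r_finger_tip_joint']
position = (0.4298, 0.4298, 0.4298, 0.4298)

broken_end_conf = [position] * len(joints)  # list of 4 tuples, not floats

# One possible fix (my assumption, not a confirmed patch): treat a
# sequence as the full configuration and only replicate scalar values.
if isinstance(position, (tuple, list)):
    end_conf = list(position)
else:
    end_conf = [position] * len(joints)

start_conf = (0.548, 0.548, 0.548, 0.548)
deltas = tuple(e - s for e, s in zip(end_conf, start_conf))  # well-typed now
```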
