
robotics_transformer's Introduction

Google Research

This repository contains code released by Google Research.

All datasets in this repository are released under the CC BY 4.0 International license, which can be found here: https://creativecommons.org/licenses/by/4.0/legalcode. All source files in this repository are released under the Apache 2.0 license, the text of which can be found in the LICENSE file.


Because the repo is large, we recommend you download only the subdirectory of interest:

SUBDIR=foo
svn export https://github.com/google-research/google-research/trunk/$SUBDIR

If you'd like to submit a pull request, you'll need to clone the repository; we recommend making a shallow clone (without history).

git clone git@github.com:google-research/google-research.git --depth=1

Disclaimer: This is not an official Google product.

Updated in 2023.

robotics_transformer's People

Contributors

eltociear, fxia22, keerthanpg, yaolug


robotics_transformer's Issues

using RGBD data as input

Amazing work!
I wonder if it is possible to use RGBD input instead of RGB.
Hopefully, with the extra depth information, the model could be trained to better accuracy.

Why not use 3D data for training?

Hi, thanks for your great work!

In manipulation tasks, a depth camera is a nearly indispensable sensor and is easy to deploy. Why was this kind of camera not used in your project and training procedure?

Thanks for your attention; I look forward to your response.

how to run inference

Hello,

I'm new to TensorFlow and I know this may be a silly question.
But how can I run inference on a prompt and a sequence of images using the code in this repository?

Thank you very much!
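For reference, here is a minimal sketch of one possible inference setup. It assumes the released trained_checkpoints/rt1main SavedModel and a 512-d sentence embedding for the instruction (e.g., the Universal Sentence Encoder, matching the natural_language_embedding spec shown in later issues); both the checkpoint path and the embedding model are assumptions, not confirmed by the authors.

import tensorflow_hub as hub
from tf_agents.policies import py_tf_eager_policy

# Hypothetical checkpoint path; adjust to your local checkout.
policy = py_tf_eager_policy.SavedModelPyTFEagerPolicy(
    model_path="trained_checkpoints/rt1main",
    load_specs_from_pbtxt=True,
    use_tf_function=True,
)

# 512-d language embedding for the instruction (an assumption; the policy
# expects a (1, 512) natural_language_embedding observation).
embed = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")
instruction_embedding = embed(["pick up the apple"])

policy_state = policy.get_initial_state(batch_size=1)
# Build a TimeStep whose observation dict matches policy.time_step_spec
# (a full example appears in the "How to use loaded checkpoint?" issue below),
# then call policy.action(time_step, policy_state) once per camera frame.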

Error with 'bazel test ...' command

Hi, thanks for your excellent work and for open-sourcing it!

I am new to Bazel and have some trouble running the command: bazel test ...

Is it possible for you to upload the Bazel BUILD files, or is there anything else I need in order to run the Bazel commands?

How to do distributed training with a large-scale dataset?

Hi, many thanks for this project and code.

I wonder how to do distributed training with a large-scale dataset. Which framework do you use (like Megatron for PyTorch)? We haven't found a good framework for this so far; it would be great if you could recommend one.

Thanks
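Not an answer from the authors, but for completeness, one generic TensorFlow option is tf.distribute, which handles synchronous data-parallel training; the same pattern scales from multi-GPU (MirroredStrategy) to multi-host (MultiWorkerMirroredStrategy). A minimal, self-contained sketch with a toy Keras model follows; the actual RT-1 training setup is not specified here.

import tensorflow as tf

# Synchronous data-parallel training across local GPUs; swap in
# MultiWorkerMirroredStrategy for multi-host jobs.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

with strategy.scope():
    # Anything built inside the scope is replicated across devices.
    model = tf.keras.Sequential([tf.keras.layers.Dense(8)])
    model.compile(optimizer="adam", loss="mse")

# model.fit shards the dataset across replicas automatically.
xs = tf.random.normal([256, 4])
ys = tf.random.normal([256, 8])
model.fit(tf.data.Dataset.from_tensor_slices((xs, ys)).batch(32), epochs=1)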

The initial values of the network_state for transformer_network

I find that in the unit test (e.g., transformer_network_test.py), the network_state is randomly sampled as:

network_state = tensor_spec.sample_spec_nest(
        network.state_spec, outer_dims=[BATCH_SIZE])

If I want to do inference in my own environment, what should the initial values of the network_state be?
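For reference, a common tf_agents convention is to start from an all-zeros state rather than a random sample; the exported SavedModel policy exposes the same thing via get_initial_state. A minimal sketch (assuming zero initialization is the right choice here, which the unit test itself does not confirm):

from tf_agents.specs import tensor_spec

# Zero-initialized network state with a batch dimension of 1.
network_state = tensor_spec.zero_spec_nest(
    network.state_spec, outer_dims=[1])

# Equivalently, when using the exported policy:
# policy_state = policy.get_initial_state(batch_size=1)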

Release Simulated Environments?

Are there any plans to release the simulated environments used in the paper? (I.e. those used for the "Absorbing simulation data" experiments)

The availability of these environments would make it much easier to test/play with the RT-1 pretrained models.

Thanks!

How to use loaded checkpoint?

I'm trying to pass time_step into the policy for inference, but it raises the following error:

InvalidArgumentError       (note: full exception trace is shown but execution is paused at: _run_module_as_main)
Graph execution error:

Detected at node 'transformer_network_1/Reshape_7' defined at (most recent call last):
Node: 'transformer_network_1/Reshape_7'
Input to reshape is a tensor with 0 values, but the requested shape has 1
	 [[{{node transformer_network_1/Reshape_7}}]] [Op:__inference_restored_function_body_123572]
  File "/home/robot/miniconda3/envs/rt1/lib/python3.10/site-packages/tensorflow/python/eager/execute.py", line 53, in quick_execute

The code I am using is attached below; I would like to know how to use the provided model correctly. The TimeStep object is constructed with reference to the time_step_spec of the loaded policy.

import numpy as np
import tensorflow as tf
from tf_agents.policies import py_tf_eager_policy
from tf_agents.trajectories.time_step import StepType, TimeStep

HEIGHT = 256
WIDTH = 320

policy = py_tf_eager_policy.SavedModelPyTFEagerPolicy(
    model_path="src/rt1_ros/src/robotics_transformer/trained_checkpoints/rt1main",
    load_specs_from_pbtxt=True,
    use_tf_function=True,
)

def create_time_step(seed=0, t=0):
    np.random.seed(seed)
    observations = {
        "image": tf.constant(0, shape=[1, HEIGHT, WIDTH, 3], dtype=tf.dtypes.uint8),
        "natural_language_embedding": tf.constant(0.0, shape=(1, 512), dtype=tf.dtypes.float32),
        "natural_language_instruction": tf.constant("", shape=(1,), dtype=tf.dtypes.string),
        "workspace_bounds": tf.constant(0.0, shape=(1, 3, 3), dtype=tf.dtypes.float32),
        "base_pose_tool_reached": tf.constant(0.0, shape=(1, 7), dtype=tf.dtypes.float32),
        "gripper_closed": tf.constant(0.0, shape=(1, 1), dtype=tf.dtypes.float32),
        "gripper_closedness_commanded": tf.constant(0.0, shape=(1, 1), dtype=tf.dtypes.float32),
        "height_to_bottom": tf.constant(0.0, shape=(1, 1), dtype=tf.dtypes.float32),
        "orientation_box": tf.constant(0.0, shape=(1, 2, 3), dtype=tf.dtypes.float32),
        "orientation_start": tf.constant(0.0, shape=(1, 4), dtype=tf.dtypes.float32),
        "robot_orientation_positions_box": tf.constant(0.0, shape=(1, 3, 3), dtype=tf.dtypes.float32),
        "rotation_delta_to_go": tf.constant(0.0, shape=(1, 3), dtype=tf.dtypes.float32),
        "src_rotation": tf.constant(0.0, shape=(1, 4), dtype=tf.dtypes.float32),
        "vector_to_go": tf.constant(0.0, shape=(1, 3), dtype=tf.dtypes.float32),
    }
    time_step = TimeStep(
        observation=observations,
        reward=tf.constant(0.0, shape=(1,), dtype=tf.dtypes.float32),
        discount=tf.constant(0.0, shape=(1,), dtype=tf.dtypes.float32),
        step_type=tf.constant(t, shape=(1,), dtype=tf.dtypes.int32),
    )
    return time_step

time_step = create_time_step()
policy_state = policy.get_initial_state(batch_size=1)

action_step = policy.action(time_step, policy_state)
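One detail that may matter here: the transformer policy is stateful, so on a real rollout the state returned by each action() call should be fed back into the next call. A hedged sketch reusing the create_time_step helper above (step types and episode length are illustrative only):

policy_state = policy.get_initial_state(batch_size=1)
for step_index in range(10):  # arbitrary episode length for illustration
    step_type = 0 if step_index == 0 else 1  # FIRST on the first frame, MID after
    time_step = create_time_step(t=step_type)
    policy_step = policy.action(time_step, policy_state)
    action = policy_step.action        # dict of action tensors
    policy_state = policy_step.state   # carry state into the next call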

Data collection for navigation

Hi, thanks for this great work!

Data collection for manipulation is described in the paper: each robot autonomously approaches its station at the beginning of an episode when collecting data.

The network can also predict the base pose for a given task. How was the navigation data for training this part of the network collected? Will the predicted base pose keep changing while running, or does it represent the final position around a given station?

Thanks for your attention; I look forward to your response.

What is the correct way to restore the checkpoints?

When I run

tf.saved_model.load('./robotics_transformer/trained_checkpoints/rt1main')

I got a following error,

IndexError: Read less bytes than requested

All my attempts to restore the provided checkpoint have failed.
For example, the following code also did not work for me.

from tf_agents.utils.common import Checkpointer

checkpointer = Checkpointer(
    agent=agent,
    ckpt_dir='./robotics_transformer/trained_checkpoints/rt1main',
)
checkpointer.initialize_or_restore()

What is the correct way to restore the checkpoints?
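Not a confirmed answer, but "Read less bytes than requested" is often the symptom of a truncated or incompletely downloaded variables file, so it may be worth sanity-checking the download before changing the loading code. A small check, assuming the standard SavedModel layout under trained_checkpoints/rt1main:

import tensorflow as tf

# Listing the saved variables reads through the checkpoint shards; a
# truncated download typically fails here with the same error.
reader = tf.train.load_checkpoint(
    './robotics_transformer/trained_checkpoints/rt1main/variables/variables')
print(len(reader.get_variable_to_shape_map()), 'variables found')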

Question about action_tokens usage in _assemble_input_token_sequence

Hi, thanks for your great work!
I have read the code and found that in transformer_network._assemble_input_token_sequence, action_tokens are assembled as part of the input sequence, but their values are all set to zero (action_tokens = tf.zeros_like(action_tokens)).
My question is: why not use this data here? Have you run any experiments showing that it performs worse?
Thanks for your attention; I look forward to your response!

def _assemble_input_token_sequence(self, context_image_tokens, action_tokens,
                                   batch_size):
  action_tokens = tf.one_hot(action_tokens, self._vocab_size)
  action_tokens = self._action_token_emb(action_tokens)
  action_tokens = tf.zeros_like(action_tokens)
  action_tokens = tf.expand_dims(action_tokens, axis=-2)
  input_token_sequence = tf.concat([context_image_tokens, action_tokens],
                                   axis=2)
  input_token_sequence = tf.reshape(
      input_token_sequence, [batch_size, -1, self._token_embedding_size])
  return input_token_sequence

Dataset release

Hi! Congratulations on the paper and the huge effort!

Is there any plan to release the dataset in the near future?

Failing at loading checkpoints

Hi,

I am trying to load the checkpoints. I followed #11 and ran this code:

saved_path = './trained_checkpoints/rt1main'
from tf_agents.policies import py_tf_eager_policy

py_tf_eager_policy.SavedModelPyTFEagerPolicy(
    model_path=saved_path,
    load_specs_from_pbtxt=True,
    use_tf_function=True,
)

But I am getting this error:

Traceback (most recent call last):
  File "/home/ali/workspace/repos/google-research/robotics_transformer/load_checkpoints.py", line 7, in <module>
    py_tf_eager_policy.SavedModelPyTFEagerPolicy(
  File "/home/ali/anaconda3/envs/rt9/lib/python3.8/site-packages/gin/config.py", line 1605, in gin_wrapper
    utils.augment_exception_message_and_reraise(e, err_str)
  File "/home/ali/anaconda3/envs/rt9/lib/python3.8/site-packages/gin/utils.py", line 41, in augment_exception_message_and_reraise
    raise proxy.with_traceback(exception.__traceback__) from None
  File "/home/ali/anaconda3/envs/rt9/lib/python3.8/site-packages/gin/config.py", line 1582, in gin_wrapper
    return fn(*new_args, **new_kwargs)
  File "/home/ali/anaconda3/envs/rt9/lib/python3.8/site-packages/tf_agents/policies/py_tf_eager_policy.py", line 179, in __init__
    policy = tf.compat.v2.saved_model.load(model_path)
  File "/home/ali/anaconda3/envs/rt9/lib/python3.8/site-packages/tensorflow/python/saved_model/load.py", line 936, in load
    result = load_internal(export_dir, tags, options)["root"]
  File "/home/ali/anaconda3/envs/rt9/lib/python3.8/site-packages/tensorflow/python/saved_model/load.py", line 974, in load_internal
    loader = loader_cls(object_graph_proto, saved_model_proto, export_dir,
  File "/home/ali/anaconda3/envs/rt9/lib/python3.8/site-packages/tensorflow/python/saved_model/load.py", line 187, in __init__
    self._restore_checkpoint()
  File "/home/ali/anaconda3/envs/rt9/lib/python3.8/site-packages/tensorflow/python/saved_model/load.py", line 560, in _restore_checkpoint
    load_status = saver.restore(variables_path, self._checkpoint_options)
  File "/home/ali/anaconda3/envs/rt9/lib/python3.8/site-packages/tensorflow/python/training/tracking/util.py", line 1351, in restore
    object_graph_string = reader.get_tensor(base.OBJECT_GRAPH_PROTO_KEY)
  File "/home/ali/anaconda3/envs/rt9/lib/python3.8/site-packages/tensorflow/python/training/py_checkpoint_reader.py", line 66, in get_tensor
    return CheckpointReader.CheckpointReader_GetTensor(
IndexError: Read less bytes than requested
  In call to configurable 'SavedModelPyTFEagerPolicy' (<class 'tf_agents.policies.py_tf_eager_policy.SavedModelPyTFEagerPolicy'>)

Process finished with exit code 1

I am using Python 3.8.0 and the following packages:

(rt9) λ › pip list                                                                                      workspace/repos
Package                       Version
----------------------------- ---------
absl-py                       1.4.0
astunparse                    1.6.3
cachetools                    5.3.0
certifi                       2022.12.7
charset-normalizer            3.1.0
cloudpickle                   2.2.1
decorator                     5.1.1
dill                          0.3.6
dm-tree                       0.1.8
etils                         1.1.1
flatbuffers                   23.3.3
gast                          0.5.3
gin-config                    0.5.0
google-auth                   2.16.2
google-auth-oauthlib          0.4.6
google-pasta                  0.2.0
googleapis-common-protos      1.59.0
grpcio                        1.51.3
gym                           0.26.2
gym-notices                   0.0.8
h5py                          3.8.0
idna                          3.4
importlib-metadata            6.1.0
importlib-resources           5.12.0
keras                         2.8.0
Keras-Preprocessing           1.1.2
libclang                      15.0.6.1
Markdown                      3.4.1
MarkupSafe                    2.1.2
numpy                         1.24.2
oauthlib                      3.2.2
opt-einsum                    3.3.0
packaging                     23.0
Pillow                        9.4.0
pip                           23.0.1
promise                       2.3
protobuf                      3.19.6
pyasn1                        0.4.8
pyasn1-modules                0.2.8
requests                      2.28.2
requests-oauthlib             1.3.1
rsa                           4.9
setuptools                    65.6.3
six                           1.16.0
tensorboard                   2.8.0
tensorboard-data-server       0.6.1
tensorboard-plugin-wit        1.8.1
tensorflow                    2.8.2
tensorflow-addons             0.17.1
tensorflow-datasets           4.6.0
tensorflow-estimator          2.8.0
tensorflow-hub                0.12.0
tensorflow-io-gcs-filesystem  0.26.0
tensorflow-metadata           1.9.0
tensorflow-model-optimization 0.7.2
tensorflow-probability        0.16.0
tensorflow-text               2.8.2
termcolor                     2.2.0
tf-agents                     0.12.0
toml                          0.10.2
tqdm                          4.65.0
typeguard                     3.0.1
typing_extensions             4.5.0
urllib3                       1.26.15
Werkzeug                      2.2.3
wheel                         0.38.4
wrapt                         1.15.0
zipp                          3.15.0

TensorFlow version issue when trying to convert weights to a TensorRT graph

Installed packages:
packages.txt

GTX 1070 with 8 GB memory, Ryzen 5, Ubuntu 20.04

When I try to convert the model weights to TensorRT engines I get the following error:

  File "/home/dt/miniconda3/envs/TensorRT/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 497, in _import_graph_def_internal
    graph._c_graph, serialized, options)  # pylint: disable=protected-access
tensorflow.python.framework.errors_impl.InvalidArgumentError: NodeDef mentions attr 'validate_shape' not in Op<name=AssignVariableOp; signature=resource:resource, value:dtype -> ; attr=dtype:type; is_stateful=true>; NodeDef: {{node AssignNewValue}}. (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/home/dt/miniconda3/envs/TensorRT/lib/python3.6/site-packages/tensorflow/python/compiler/tensorrt/trt_convert.py", line 1085, in convert
    self._input_saved_model_tags)
  File "/home/dt/miniconda3/envs/TensorRT/lib/python3.6/site-packages/tensorflow/python/saved_model/load.py", line 578, in load
    return load_internal(export_dir, tags)
  File "/home/dt/miniconda3/envs/TensorRT/lib/python3.6/site-packages/tensorflow/python/saved_model/load.py", line 604, in load_internal
    export_dir)
  File "/home/dt/miniconda3/envs/TensorRT/lib/python3.6/site-packages/tensorflow/python/saved_model/load.py", line 116, in __init__
    meta_graph.graph_def.library))
  File "/home/dt/miniconda3/envs/TensorRT/lib/python3.6/site-packages/tensorflow/python/saved_model/function_deserialization.py", line 311, in load_function_def_library
    func_graph = function_def_lib.function_def_to_graph(copy)
  File "/home/dt/miniconda3/envs/TensorRT/lib/python3.6/site-packages/tensorflow/python/framework/function_def_to_graph.py", line 63, in function_def_to_graph
    importer.import_graph_def_for_function(graph_def, name="")
  File "/home/dt/miniconda3/envs/TensorRT/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 412, in import_graph_def_for_function
    graph_def, validate_colocation_constraints=False, name=name)
  File "/home/dt/miniconda3/envs/TensorRT/lib/python3.6/site-packages/tensorflow/python/framework/importer.py", line 501, in _import_graph_def_internal
    raise ValueError(str(e))
ValueError: NodeDef mentions attr 'validate_shape' not in Op<name=AssignVariableOp; signature=resource:resource, value:dtype -> ; attr=dtype:type; is_stateful=true>; NodeDef: {{node AssignNewValue}}. (Check whether your GraphDef-interpreting binary is up to date with your GraphDef-generating binary.).

Which TensorFlow version was used to save these weights? I am using the same requirements.txt for my environment but am still running into this issue.

ValueError occurred when calling the restored 'action' function from the saved model

Hi there!

I was trying to use the RT-1 saved model but encountered a ValueError as shown below. Did I do something wrong with the arguments (i.e., time_step and policy_state)? I am new to TensorFlow, and any suggestions will be appreciated!

Many thanks in advance.


ValueError Traceback (most recent call last)
in <cell line: 3>()
1 time_step = create_time_step()
2 init_state = policy.get_initial_state(1)
----> 3 policy_state = policy.action(time_step, init_state)

1 frames
/usr/local/lib/python3.10/dist-packages/tensorflow/python/util/traceback_utils.py in error_handler(*args, **kwargs)
151 except Exception as e:
152 filtered_tb = _process_traceback_frames(e.traceback)
--> 153 raise e.with_traceback(filtered_tb) from None
154 finally:
155 del filtered_tb

/usr/local/lib/python3.10/dist-packages/tensorflow/python/saved_model/function_deserialization.py in restored_function_body(*args, **kwargs)
299 "Option {}:\n {}\n Keyword arguments: {}".format(
300 index + 1, _pretty_format_positional(positional), keyword))
--> 301 raise ValueError(
302 "Could not find matching concrete function to call loaded from the "
303 f"SavedModel. Got:\n {_pretty_format_positional(args)}\n Keyword "

ValueError: Could not find matching concrete function to call loaded from the SavedModel. Got:
Positional arguments (3 total):
* TimeStep(
{'discount': <tf.Tensor 'time_step_2:0' shape=(1,) dtype=float32>,
'observation': {'base_pose_tool_reached': <tf.Tensor 'time_step_12:0' shape=(1, 7) dtype=float32>,
'gripper_closed': <tf.Tensor 'time_step_11:0' shape=(1, 1) dtype=float32>,
'gripper_closedness_commanded': <tf.Tensor 'time_step_16:0' shape=(1, 1) dtype=float32>,
'height_to_bottom': <tf.Tensor 'time_step_8:0' shape=(1, 1) dtype=float32>,
'image': <tf.Tensor 'time_step_10:0' shape=(1, 256, 320, 3) dtype=uint8>,
'natural_language_embedding': <tf.Tensor 'time_step_6:0' shape=(1, 512) dtype=float32>,
'natural_language_instruction': <tf.Tensor 'time_step_4:0' shape=(1,) dtype=string>,
'orientation_box': <tf.Tensor 'time_step_13:0' shape=(1, 2, 3) dtype=float32>,
'orientation_start': <tf.Tensor 'time_step_3:0' shape=(1, 4) dtype=float32>,
'robot_orientation_position_box': <tf.Tensor 'time_step_14:0' shape=(1, 3, 3) dtype=float32>,
'rotation_delta_to_go': <tf.Tensor 'time_step_5:0' shape=(1, 3) dtype=float32>,
'src_rotation': <tf.Tensor 'time_step_9:0' shape=(1, 4) dtype=float32>,
'vector_to_go': <tf.Tensor 'time_step_7:0' shape=(1, 3) dtype=float32>,
'workspace_bounds': <tf.Tensor 'time_step_15:0' shape=(1, 3, 3) dtype=float32>},
'reward': <tf.Tensor 'time_step_1:0' shape=(1,) dtype=float32>,
'step_type': <tf.Tensor 'time_step:0' shape=(1,) dtype=int32>})
* {'action_tokens': <tf.Tensor 'policy_state_3:0' shape=(1, 6, 11, 1, 1) dtype=int32>,
'image': <tf.Tensor 'policy_state:0' shape=(1, 6, 256, 320, 3) dtype=uint8>,
'step_num': <tf.Tensor 'policy_state_2:0' shape=(1, 1, 1, 1, 1) dtype=int32>,
't': <tf.Tensor 'policy_state_1:0' shape=(1, 1, 1, 1, 1) dtype=int32>}
* None
Keyword arguments: {}

Expected these arguments to match one of the following 2 option(s):

Option 1:
Positional arguments (3 total):
* TimeStep(step_type=TensorSpec(shape=(None,), dtype=tf.int32, name='step_type'), reward=TensorSpec(shape=(None,), dtype=tf.float32, name='reward'), discount=TensorSpec(shape=(None,), dtype=tf.float32, name='discount'), observation={'orientation_start': TensorSpec(shape=(None, 4), dtype=tf.float32, name='observation/orientation_start'), 'vector_to_go': TensorSpec(shape=(None, 3), dtype=tf.float32, name='observation/vector_to_go'), 'orientation_box': TensorSpec(shape=(None, 2, 3), dtype=tf.float32, name='observation/orientation_box'), 'image': TensorSpec(shape=(None, 256, 320, 3), dtype=tf.uint8, name='observation/image'), 'robot_orientation_positions_box': TensorSpec(shape=(None, 3, 3), dtype=tf.float32, name='observation/robot_orientation_positions_box'), 'natural_language_embedding': TensorSpec(shape=(None, 512), dtype=tf.float32, name='observation/natural_language_embedding'), 'rotation_delta_to_go': TensorSpec(shape=(None, 3), dtype=tf.float32, name='observation/rotation_delta_to_go'), 'natural_language_instruction': TensorSpec(shape=(None,), dtype=tf.string, name='observation/natural_language_instruction'), 'gripper_closedness_commanded': TensorSpec(shape=(None, 1), dtype=tf.float32, name='observation/gripper_closedness_commanded'), 'base_pose_tool_reached': TensorSpec(shape=(None, 7), dtype=tf.float32, name='observation/base_pose_tool_reached'), 'workspace_bounds': TensorSpec(shape=(None, 3, 3), dtype=tf.float32, name='observation/workspace_bounds'), 'src_rotatio...
* {'action_tokens': TensorSpec(shape=(None, 6, 11, 1, 1), dtype=tf.int32, name='action_tokens'),
'image': TensorSpec(shape=(None, 6, 256, 320, 3), dtype=tf.uint8, name='image'),
'step_num': TensorSpec(shape=(None, 1, 1, 1, 1), dtype=tf.int32, name='step_num'),
't': TensorSpec(shape=(None, 1, 1, 1, 1), dtype=tf.int32, name='t')}
* None
Keyword arguments: {}

Option 2:
Positional arguments (3 total):
* TimeStep(step_type=TensorSpec(shape=(None,), dtype=tf.int32, name='time_step_step_type'), reward=TensorSpec(shape=(None,), dtype=tf.float32, name='time_step_reward'), discount=TensorSpec(shape=(None,), dtype=tf.float32, name='time_step_discount'), observation={'base_pose_tool_reached': TensorSpec(shape=(None, 7), dtype=tf.float32, name='time_step_observation_base_pose_tool_reached'), 'workspace_bounds': TensorSpec(shape=(None, 3, 3), dtype=tf.float32, name='time_step_observation_workspace_bounds'), 'image': TensorSpec(shape=(None, 256, 320, 3), dtype=tf.uint8, name='time_step_observation_image'), 'gripper_closedness_commanded': TensorSpec(shape=(None, 1), dtype=tf.float32, name='time_step_observation_gripper_closedness_commanded'), 'orientation_start': TensorSpec(shape=(None, 4), dtype=tf.float32, name='time_step_observation_orientation_start'), 'src_rotation': TensorSpec(shape=(None, 4), dtype=tf.float32, name='time_step_observation_src_rotation'), 'orientation_box': TensorSpec(shape=(None, 2, 3), dtype=tf.float32, name='time_step_observation_orientation_box'), 'height_to_bottom': TensorSpec(shape=(None, 1), dtype=tf.float32, name='time_step_observation_height_to_bottom'), 'rotation_delta_to_go': TensorSpec(shape=(None, 3), dtype=tf.float32, name='time_step_observation_rotation_delta_to_go'), 'gripper_closed': TensorSpec(shape=(None, 1), dtype=tf.float32, name='time_step_observation_gripper_closed'), 'robot_orientation_positions_box': TensorSpec(shape=(None, 3, 3), ...
* {'action_tokens': TensorSpec(shape=(None, 6, 11, 1, 1), dtype=tf.int32, name='policy_state_action_tokens'),
'image': TensorSpec(shape=(None, 6, 256, 320, 3), dtype=tf.uint8, name='policy_state_image'),
'step_num': TensorSpec(shape=(None, 1, 1, 1, 1), dtype=tf.int32, name='policy_state_step_num'),
't': TensorSpec(shape=(None, 1, 1, 1, 1), dtype=tf.int32, name='policy_state_t')}
* None
Keyword arguments: {}

The code snippet is shown below:

import os

import tensorflow as tf
from tf_agents.policies import py_tf_eager_policy

model_path = os.path.join(os.getcwd(), '../trained_checkpoints/rt1main')
print('model path', model_path)
policy = py_tf_eager_policy.SavedModelPyTFEagerPolicy(
    model_path=model_path,
    load_specs_from_pbtxt=True,
    use_tf_function=True)

def create_time_step(seed=123, t=0):
    import numpy as np
    from tf_agents.trajectories.time_step import StepType, TimeStep
    HEIGHT, WIDTH = 256, 320
    np.random.seed(seed)
    observations = {
        'orientation_start': tf.constant(0.0, shape=(1, 4), dtype=tf.dtypes.float32),
        'natural_language_instruction': tf.constant('', shape=(1,), dtype=tf.dtypes.string),
        'rotation_delta_to_go': tf.constant(0.0, shape=(1, 3), dtype=tf.dtypes.float32),
        'natural_language_embedding': tf.constant(0.0, shape=(1, 512), dtype=tf.dtypes.float32),
        'vector_to_go': tf.constant(0.0, shape=(1, 3), dtype=tf.dtypes.float32),
        'height_to_bottom': tf.constant(0.0, shape=(1, 1), dtype=tf.dtypes.float32),
        'src_rotation': tf.constant(0.0, shape=(1, 4), dtype=tf.dtypes.float32),
        'image': tf.constant(0, shape=(1, 256, 320, 3), dtype=tf.dtypes.uint8),
        'gripper_closed': tf.constant(0.0, shape=(1, 1), dtype=tf.dtypes.float32),
        'base_pose_tool_reached': tf.constant(0.0, shape=(1, 7), dtype=tf.dtypes.float32),
        'orientation_box': tf.constant(0.0, shape=(1, 2, 3), dtype=tf.dtypes.float32),
        'robot_orientation_position_box': tf.constant(0.0, shape=(1, 3, 3), dtype=tf.dtypes.float32),
        'workspace_bounds': tf.constant(0.0, shape=(1, 3, 3), dtype=tf.dtypes.float32),
        'gripper_closedness_commanded': tf.constant(0.0, shape=(1, 1), dtype=tf.dtypes.float32),
    }
    time_step = TimeStep(
        observation=observations,
        reward=tf.constant(0.0, shape=(1,), dtype=tf.dtypes.float32),
        discount=tf.constant(0.0, shape=(1,), dtype=tf.dtypes.float32),
        step_type=tf.constant(t, shape=(1,), dtype=tf.dtypes.int32),
    )
    return time_step

time_step = create_time_step()
init_state = policy.get_initial_state(1)
policy_state = policy.action(time_step, init_state)
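One detail visible when comparing the "Got" and expected specs in the error output above: the observation dict uses the key 'robot_orientation_position_box', while both expected options spell it 'robot_orientation_positions_box' (plural "positions"). A one-line fix applied to the observations dict built in create_time_step might be:

# Rename the key so the nested observation structure matches the SavedModel spec.
observations['robot_orientation_positions_box'] = observations.pop(
    'robot_orientation_position_box')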

The inference results with the trained checkpoint (rt1main) look weird.

After downloading the RT-1 dataset, I ran inference on it with your checkpoint (rt1main). However, the inference results are far from those in the dataset. When plotted, the shapes of the trajectories look similar between the dataset and the inference output, but a scale shift and drift are visible in the inference. Moreover, the training loss drops greatly when I additionally train the checkpoint on the dataset. It seems that your checkpoint was trained on a different dataset from the one you shared, or maybe I am running inference incorrectly.

I appreciate your efforts on large-scale robot learning and want to reproduce your results. Could you comment on the mismatch between the inference results and the ground truth in your dataset?

Here is pseudo-code of what I did:

# Data source loading
import tensorflow_datasets as tfds
builder = tfds.builder_from_directory(builder_dir=[RT1 data dir])
ds = builder.as_data_source(split=split, decoders=tfds.decode.SkipDecoding())
example = tf.train.Example.FromString(ds.data_source[episode_id])
features = example.features.feature

images = tf.io.decode_image(features["steps/observations/images"].bytes_list.value)
rotation_delta = tf.constant(features["steps/actions/rotation_delta"].float_list.value, dtype=np.float32)
...
observations = {'images': images, ...}
actions_gt = {'actions': rotation_delta , ...}

# Inference
from tf_agents.trajectories.time_step import TimeStep
time_step = TimeStep(
        observation=observations,
        ...
    )
policy = tf.saved_model.load(pb_path_of_rt1main)
policy_state = policy.get_initial_state(batch_size=1)
action_step = policy.action(time_step, policy_state)

# !! action_step is different_from actions_gt

Error installing requirements.txt (tensor2robot)

Hello, thanks a lot for sharing the code related to this interesting research!

I tried to follow the instructions in the README, and I have a problem with the pip install -r robotics_transformer/requirements.txt step. In particular, I tried with Python 3.11, 3.10, 3.8, and 3.6, and it fails in each case. Which version of Python should be used? Thanks a lot in advance.

The errors are listed in the following.

Python 3.8

(robtrans) traversaro@IITICUBLAP257:~/robotics_transformer$ pip install -r ./requirements.txt
Collecting git+https://github.com/google-research/tensor2robot#tensor2robot (from -r ./requirements.txt (line 9))
  Cloning https://github.com/google-research/tensor2robot to /tmp/pip-req-build-w12cm8j5
  Running command git clone --filter=blob:none --quiet https://github.com/google-research/tensor2robot /tmp/pip-req-build-w12cm8j5
  Resolved https://github.com/google-research/tensor2robot to commit 8fbc4d696f35ce44a63841cae13f26af6c334004
ERROR: git+https://github.com/google-research/tensor2robot#tensor2robot (from -r ./requirements.txt (line 9)) does not appear to be a Python project: neither 'setup.py' nor 'pyproject.toml' found.

Python 3.10

(robtrans) traversaro@IITICUBLAP257:~/robotics_transformer$ pip install -r ./requirements.txt
Collecting git+https://github.com/google-research/tensor2robot#tensor2robot (from -r ./requirements.txt (line 9))
  Cloning https://github.com/google-research/tensor2robot to /tmp/pip-req-build-w12cm8j5
  Running command git clone --filter=blob:none --quiet https://github.com/google-research/tensor2robot /tmp/pip-req-build-w12cm8j5
  Resolved https://github.com/google-research/tensor2robot to commit 8fbc4d696f35ce44a63841cae13f26af6c334004
ERROR: git+https://github.com/google-research/tensor2robot#tensor2robot (from -r ./requirements.txt (line 9)) does not appear to be a Python project: neither 'setup.py' nor 'pyproject.toml' found.

Python 3.6

(robtrans) traversaro@IITICUBLAP257:~$ pip install -r robotics_transformer/requirements.txt
Collecting git+https://github.com/google-research/tensor2robot#tensor2robot (from -r robotics_transformer/requirements.txt (line 9))
  Cloning https://github.com/google-research/tensor2robot to /tmp/pip-um6_0r33-build
    Complete output from command python setup.py egg_info:
    Traceback (most recent call last):
      File "<string>", line 1, in <module>
      File "/usr/lib/python3.6/tokenize.py", line 452, in open
        buffer = _builtin_open(filename, 'rb')
    FileNotFoundError: [Errno 2] No such file or directory: '/tmp/pip-um6_0r33-build/setup.py'

    ----------------------------------------
Command "python setup.py egg_info" failed with error code 1 in /tmp/pip-um6_0r33-build/

QQ discussion group

If anyone is interested in discussing this newly released Google library, please join QQ group: 2665793027

How to access the model's tf.Module object

Hi, Thanks for sharing the code and dataset.

I am trying to get the RT-1 model working in PyTorch by transpiling the TensorFlow model using Ivy. I am able to load the TF model, but it's in a tensorflow.python.saved_model.load.Loader object, and I didn't find a way to extract a tf.Module object with the RT-1 model from there, which is the object I need to transpile to PyTorch using Ivy. Is there a way to get this object?

Thank you very much in advance.

Victor

ModuleNotFoundError: No module named 'tensorflow.contrib'

My environment:

GPU driver
(google_RT1) robot@robot:/usr/local$ ls -al cuda
lrwxrwxrwx 1 root root 20 Sep 20 15:10 cuda -> /usr/local/cuda-10.0

(google_RT1) robot@robot:/usr/local$ nvidia-smi
Fri Dec 16 11:11:13 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 418.88 Driver Version: 418.88 CUDA Version: 10.1 |
|-------------------------------+----------------------+----------------------+
| GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
|===============================+======================+======================|
| 0 GeForce RTX 208... Off | 00000000:01:00.0 On | N/A |
| 0% 39C P8 16W / 257W | 185MiB / 10986MiB | 5% Default |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| 0 1142 G /usr/lib/xorg/Xorg 183MiB |
+-----------------------------------------------------------------------------+

python3:

(google_RT1) robot@robot:/usr/local$ python
Python 3.8.15 (default, Nov 24 2022, 15:19:38)
[GCC 11.2.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.

(google_RT1) robot@robot:~/ref$ pip install -r robotics_transformer/requirements.txt 
Looking in indexes: http://pypi.douban.com/simple
Requirement already satisfied: absl-py>=0.5.0 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from -r robotics_transformer/requirements.txt (line 1)) (1.3.0)
Requirement already satisfied: numpy>=1.13.3 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from -r robotics_transformer/requirements.txt (line 2)) (1.23.5)
Requirement already satisfied: tensorflow>=1.13.0 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from -r robotics_transformer/requirements.txt (line 3)) (2.11.0)
Requirement already satisfied: tensorflow-serving-api>=1.13.0 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from -r robotics_transformer/requirements.txt (line 4)) (2.11.0)
Requirement already satisfied: gin-config>=0.1.4 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from -r robotics_transformer/requirements.txt (line 5)) (0.5.0)
Requirement already satisfied: tensorflow-probability>=0.6.0 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from -r robotics_transformer/requirements.txt (line 6)) (0.19.0)
Requirement already satisfied: tf-agents>=0.3.0 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from -r robotics_transformer/requirements.txt (line 7)) (0.15.0)
Requirement already satisfied: tf-slim>=1.0 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from -r robotics_transformer/requirements.txt (line 8)) (1.1.0)
Requirement already satisfied: flatbuffers>=2.0 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (22.12.6)
Requirement already satisfied: termcolor>=1.1.0 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (2.1.1)
Requirement already satisfied: tensorboard<2.12,>=2.11 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (2.11.0)
Requirement already satisfied: opt-einsum>=2.3.2 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (3.3.0)
Requirement already satisfied: packaging in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (22.0)
Requirement already satisfied: tensorflow-io-gcs-filesystem>=0.23.1 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (0.28.0)
Requirement already satisfied: h5py>=2.9.0 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (3.7.0)
Requirement already satisfied: libclang>=13.0.0 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (14.0.6)
Requirement already satisfied: six>=1.12.0 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (1.16.0)
Requirement already satisfied: typing-extensions>=3.6.6 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (4.4.0)
Requirement already satisfied: tensorflow-estimator<2.12,>=2.11.0 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (2.11.0)
Requirement already satisfied: gast<=0.4.0,>=0.2.1 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (0.4.0)
Requirement already satisfied: google-pasta>=0.1.1 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (0.2.0)
Requirement already satisfied: keras<2.12,>=2.11.0 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (2.11.0)
Requirement already satisfied: protobuf<3.20,>=3.9.2 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (3.19.6)
Requirement already satisfied: setuptools in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (65.5.0)
Requirement already satisfied: wrapt>=1.11.0 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (1.14.1)
Requirement already satisfied: astunparse>=1.6.0 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (1.6.3)
Requirement already satisfied: grpcio<2.0,>=1.24.3 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (1.51.1)
Requirement already satisfied: dm-tree in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tensorflow-probability>=0.6.0->-r robotics_transformer/requirements.txt (line 6)) (0.1.7)
Requirement already satisfied: decorator in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tensorflow-probability>=0.6.0->-r robotics_transformer/requirements.txt (line 6)) (5.1.1)
Requirement already satisfied: cloudpickle>=1.3 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tensorflow-probability>=0.6.0->-r robotics_transformer/requirements.txt (line 6)) (2.2.0)
Requirement already satisfied: gym<=0.23.0,>=0.17.0 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tf-agents>=0.3.0->-r robotics_transformer/requirements.txt (line 7)) (0.23.0)
Requirement already satisfied: pillow in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tf-agents>=0.3.0->-r robotics_transformer/requirements.txt (line 7)) (5.3.0)
Requirement already satisfied: pygame==2.1.0 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tf-agents>=0.3.0->-r robotics_transformer/requirements.txt (line 7)) (2.1.0)
Requirement already satisfied: wheel<1.0,>=0.23.0 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from astunparse>=1.6.0->tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (0.37.1)
Requirement already satisfied: gym-notices>=0.0.4 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from gym<=0.23.0,>=0.17.0->tf-agents>=0.3.0->-r robotics_transformer/requirements.txt (line 7)) (0.0.8)
Requirement already satisfied: importlib-metadata>=4.10.0 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from gym<=0.23.0,>=0.17.0->tf-agents>=0.3.0->-r robotics_transformer/requirements.txt (line 7)) (5.1.0)
Requirement already satisfied: werkzeug>=1.0.1 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tensorboard<2.12,>=2.11->tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (2.2.2)
Requirement already satisfied: markdown>=2.6.8 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tensorboard<2.12,>=2.11->tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (3.4.1)
Requirement already satisfied: tensorboard-data-server<0.7.0,>=0.6.0 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tensorboard<2.12,>=2.11->tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (0.6.1)
Requirement already satisfied: google-auth<3,>=1.6.3 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tensorboard<2.12,>=2.11->tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (2.15.0)
Requirement already satisfied: requests<3,>=2.21.0 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tensorboard<2.12,>=2.11->tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (2.28.1)
Requirement already satisfied: google-auth-oauthlib<0.5,>=0.4.1 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tensorboard<2.12,>=2.11->tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (0.4.6)
Requirement already satisfied: tensorboard-plugin-wit>=1.6.0 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from tensorboard<2.12,>=2.11->tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (1.8.1)
Requirement already satisfied: rsa<5,>=3.1.4 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from google-auth<3,>=1.6.3->tensorboard<2.12,>=2.11->tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (4.9)
Requirement already satisfied: pyasn1-modules>=0.2.1 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from google-auth<3,>=1.6.3->tensorboard<2.12,>=2.11->tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (0.2.8)
Requirement already satisfied: cachetools<6.0,>=2.0.0 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from google-auth<3,>=1.6.3->tensorboard<2.12,>=2.11->tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (5.2.0)
Requirement already satisfied: requests-oauthlib>=0.7.0 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.12,>=2.11->tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (1.3.1)
Requirement already satisfied: zipp>=0.5 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from importlib-metadata>=4.10.0->gym<=0.23.0,>=0.17.0->tf-agents>=0.3.0->-r robotics_transformer/requirements.txt (line 7)) (3.11.0)
Requirement already satisfied: charset-normalizer<3,>=2 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from requests<3,>=2.21.0->tensorboard<2.12,>=2.11->tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (2.1.1)
Requirement already satisfied: idna<4,>=2.5 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from requests<3,>=2.21.0->tensorboard<2.12,>=2.11->tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (3.4)
Requirement already satisfied: urllib3<1.27,>=1.21.1 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from requests<3,>=2.21.0->tensorboard<2.12,>=2.11->tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (1.26.13)
Requirement already satisfied: certifi>=2017.4.17 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from requests<3,>=2.21.0->tensorboard<2.12,>=2.11->tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (2022.9.24)
Requirement already satisfied: MarkupSafe>=2.1.1 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from werkzeug>=1.0.1->tensorboard<2.12,>=2.11->tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (2.1.1)
Requirement already satisfied: pyasn1<0.5.0,>=0.4.6 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from pyasn1-modules>=0.2.1->google-auth<3,>=1.6.3->tensorboard<2.12,>=2.11->tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (0.4.8)
Requirement already satisfied: oauthlib>=3.0.0 in /home/robot/anaconda3/envs/google_RT1/lib/python3.8/site-packages (from requests-oauthlib>=0.7.0->google-auth-oauthlib<0.5,>=0.4.1->tensorboard<2.12,>=2.11->tensorflow>=1.13.0->-r robotics_transformer/requirements.txt (line 3)) (3.2.2)

Question:
I have successfully configured the environment following your installation instructions:

pybullet build time: Dec 16 2022 10:07:09
Running tests under Python 3.8.15: /home/robot/anaconda3/envs/google_RT1/bin/python3.8
[ RUN      ] PoseEnvTest.test_PoseEnv
[       OK ] PoseEnvTest.test_PoseEnv
----------------------------------------------------------------------
Ran 1 test in 0.065s

But when I run
(google_RT1) robot@robot:~/ref$ python -m tensor2robot.research.pose_env.pose_env_models_test
it shows the following information: ModuleNotFoundError: No module named 'tensorflow.contrib'


(google_RT1) robot@robot:~/ref$ python -m tensor2robot.research.pose_env.pose_env_models_test
2022-12-16 11:06:35.907244: I tensorflow/core/platform/cpu_feature_guard.cc:193] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations:  AVX2 FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
2022-12-16 11:06:35.980578: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/ros/kinetic/share/euslisp/jskeus/eus//Linux64/lib:/opt/ros/kinetic/lib:/opt/ros/kinetic/lib/x86_64-linux-gnu:/usr/local/cuda/lib64
2022-12-16 11:06:35.980598: I tensorflow/compiler/xla/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
2022-12-16 11:06:36.418803: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/ros/kinetic/share/euslisp/jskeus/eus//Linux64/lib:/opt/ros/kinetic/lib:/opt/ros/kinetic/lib/x86_64-linux-gnu:/usr/local/cuda/lib64
2022-12-16 11:06:36.418856: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory; LD_LIBRARY_PATH: /opt/ros/kinetic/share/euslisp/jskeus/eus//Linux64/lib:/opt/ros/kinetic/lib:/opt/ros/kinetic/lib/x86_64-linux-gnu:/usr/local/cuda/lib64
2022-12-16 11:06:36.418865: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.
Traceback (most recent call last):
  File "/home/robot/anaconda3/envs/google_RT1/lib/python3.8/runpy.py", line 194, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/home/robot/anaconda3/envs/google_RT1/lib/python3.8/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/home/robot/ref/tensor2robot/research/pose_env/pose_env_models_test.py", line 23, in <module>
    from tensor2robot.input_generators import default_input_generator
  File "/home/robot/ref/tensor2robot/input_generators/__init__.py", line 17, in <module>
    from tensor2robot.input_generators import abstract_input_generator
  File "/home/robot/ref/tensor2robot/input_generators/abstract_input_generator.py", line 25, in <module>
    from tensor2robot.models import abstract_model
  File "/home/robot/ref/tensor2robot/models/abstract_model.py", line 28, in <module>
    from tensor2robot.models import model_interface
  File "/home/robot/ref/tensor2robot/models/model_interface.py", line 33, in <module>
    from tensor2robot.preprocessors import abstract_preprocessor
  File "/home/robot/ref/tensor2robot/preprocessors/__init__.py", line 17, in <module>
    from tensor2robot.preprocessors import abstract_preprocessor
  File "/home/robot/ref/tensor2robot/preprocessors/abstract_preprocessor.py", line 22, in <module>
    from tensor2robot.utils import tensorspec_utils
  File "/home/robot/ref/tensor2robot/utils/tensorspec_utils.py", line 31, in <module>
    from tensorflow.contrib import framework as contrib_framework
**ModuleNotFoundError: No module named 'tensorflow.contrib'**

How to use the dataset

Hi, thank you for the great work!

I have found and downloaded the data for 'RT_1_paper_release'. However, when I use the function 'builder_from_directory' to load the data, I'm not sure what to do next. Could you please provide more information on how to load the released dataset?

Thank you very much!
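For reference, a minimal sketch of iterating episodes once the builder is constructed. The builder_dir below is a placeholder, and the nested steps/observation/action layout is an assumption based on the pseudo-code in the inference issue above:

import tensorflow_datasets as tfds

# Placeholder path: point this at the downloaded RT_1_paper_release directory.
builder = tfds.builder_from_directory(builder_dir='/path/to/RT_1_paper_release')
ds = builder.as_dataset(split='train')

for episode in ds.take(1):
    # Each episode contains a nested dataset of steps (RLDS-style layout).
    for step in episode['steps']:
        image = step['observation']['image']
        instruction = step['observation']['natural_language_instruction']
        action = step['action']
        # ...feed into the policy or a training pipeline.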
