
PC^2: Projection-Conditioned Point Cloud Diffusion for Single-Image 3D Reconstruction

CVPR 2023 (Highlight)


Explanatory Video

Code Overview

This repository uses PyTorch3D for most 3D operations and Hydra for configuration; the config is located at config/structured.py. The training entrypoints are main.py for the point cloud diffusion model and main_coloring.py for the point cloud coloring model. Shared utilities live in diffusion_utils.py and training_utils.py. The dataset is Co3Dv2.

I substantially refactored the repository for the public release to use the diffusers library from HuggingFace. As a result, most of the code differs from the original code used for the paper. Only the Co3Dv2 dataset is implemented in this version of the code, but it should be straightforward to adapt to other datasets if needed.

If you have any questions or contributions, feel free to open an issue or submit a pull request.

Abstract

Reconstructing the 3D shape of an object from a single RGB image is a long-standing and highly challenging problem in computer vision. In this paper, we propose a novel method for single-image 3D reconstruction which generates a sparse point cloud via a conditional denoising diffusion process. Our method takes as input a single RGB image along with its camera pose and gradually denoises a set of 3D points, whose positions are initially sampled randomly from a three-dimensional Gaussian distribution, into the shape of an object. The key to our method is a geometrically-consistent conditioning process which we call projection conditioning: at each step in the diffusion process, we project local image features onto the partially-denoised point cloud from the given camera pose. This projection conditioning process enables us to generate high-resolution sparse geometries that are well-aligned with the input image, and can additionally be used to predict point colors after shape reconstruction. Moreover, due to the probabilistic nature of the diffusion process, our method is naturally capable of generating multiple different shapes consistent with a single input image. In contrast to prior work, our approach not only performs well on synthetic benchmarks, but also gives large qualitative improvements on complex real-world data.
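The projection conditioning described above can be sketched as follows. This is a purely illustrative numpy toy, not the repository's actual model or API: a stand-in feature-projection step (pinhole projection plus nearest-pixel lookup) feeding a simplified DDPM-style reverse step. All function and variable names here are assumptions made for illustration.

```python
import numpy as np

def project_features(points, image_feats, K):
    """Toy stand-in for projection conditioning: project each 3D point with a
    pinhole camera K and look up the nearest-pixel image feature."""
    uvw = points @ K.T                       # (N, 3) homogeneous pixel coords
    uv = uvw[:, :2] / uvw[:, 2:3]            # perspective divide
    h, w, _ = image_feats.shape
    u = np.clip(uv[:, 0].round().astype(int), 0, w - 1)
    v = np.clip(uv[:, 1].round().astype(int), 0, h - 1)
    return image_feats[v, u]                 # (N, C) per-point features

def reverse_step(x_t, t, eps_model, cond, betas):
    """One simplified DDPM denoising step x_t -> x_{t-1}, where eps_model
    predicts the noise given the partially-denoised points and the
    projected per-point features."""
    beta = betas[t]
    alpha = 1.0 - beta
    alpha_bar = np.prod(1.0 - betas[: t + 1])
    eps = eps_model(x_t, t, cond)
    mean = (x_t - beta / np.sqrt(1.0 - alpha_bar) * eps) / np.sqrt(alpha)
    noise = np.random.randn(*x_t.shape) if t > 0 else 0.0
    return mean + np.sqrt(beta) * noise
```

In the actual method the features are re-projected onto the partially-denoised cloud at every step, so the conditioning stays geometrically consistent as the shape emerges.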

Examples

Method

Diagram

Running the code

Dependencies

Dependencies may be installed with pip:

pip install -r requirements.txt

PyTorch and PyTorch3D are not included in requirements.txt because installing them via pip can break conda environments by re-installing PyTorch. I assume you have already installed them yourself. If not, you can use a command such as:

mamba install pytorch torchvision pytorch-cuda=11.7 pytorch3d -c pytorch -c nvidia -c pytorch3d

Data

For our data, we use Co3Dv2. Full information about the dataset is provided on the GitHub page.

We train on individual categories, so you can just download one category or a subset of the categories (for example hydrants or teddy bears).

Then you can set the environment variable CO3DV2_DATASET_ROOT to the dataset root:

export CO3DV2_DATASET_ROOT="your_dataset_root_folder"
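As a quick sanity check before training, you can verify that the variable is set and that your chosen category exists under it. This helper is hypothetical (not part of the repository):

```python
import os
from pathlib import Path

def check_co3d_root(category: str) -> Path:
    """Hypothetical helper: verify CO3DV2_DATASET_ROOT is set and that the
    requested category folder exists under it."""
    root = os.environ.get("CO3DV2_DATASET_ROOT")
    if not root:
        raise RuntimeError("CO3DV2_DATASET_ROOT is not set")
    category_dir = Path(root) / category
    if not category_dir.is_dir():
        raise RuntimeError(f"category folder not found: {category_dir}")
    return category_dir
```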

Training

The config is in config/structured.py.

You can specify your job mode using run.job=train, run.job=train_coloring, run.job=sample, or run.job=sample_coloring. By default, the mode is set to train.

An example training command is:

python main.py dataset.category=hydrant dataloader.batch_size=24 dataloader.num_workers=8 run.vis_before_training=True run.val_before_training=True run.name=train__hydrant__ebs_24

To run multiple jobs in parallel on a SLURM cluster, you can use a script such as:

python scripts/example-slurm.py --partition ${PARTITION_NAME} --submit

Separately, you can train a coloring model to predict the color of points with fixed locations in 3D space.

An example command is:

python main_coloring.py run.job=train_coloring model=coloring_model run.mixed_precision=no dataset.category=hydrant dataloader.batch_size=24 run.max_steps=20_000 run.coloring_training_noise_std=0.1 run.name=train_coloring__hydrant__ebs_24

Sampling

For sampling point clouds, use run.job=sample.

For example:

python main.py run.job=sample dataloader.batch_size=16 dataloader.num_workers=6 dataset.category=hydrant checkpoint.resume="/path/to/checkpoint/like/train__hydrant__ebs_24/2022-11-01--17-04-36/checkpoint-latest.pth" run.name=sample__hydrant__ebs_24

Results will be saved to your output directory.
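To sanity-check the sampled outputs before opening them in a viewer such as MeshLab, you can count the points in a file. This small helper assumes the samples are saved as ASCII .ply point clouds (adjust for your actual output format); it is an illustration, not part of the codebase:

```python
def count_ply_vertices(path):
    """Read an ASCII PLY header and return the declared vertex count."""
    with open(path) as f:
        for line in f:
            line = line.strip()
            if line.startswith("element vertex"):
                return int(line.split()[-1])
            if line == "end_header":
                break
    raise ValueError("no 'element vertex' line found in PLY header")
```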

Afterwards, you can predict colors using the point clouds obtained from the sampling procedure above, specifying them with the argument run.coloring_sample_dir.

For example:

python main_coloring.py run.job=sample_coloring dataset.category=hydrant dataloader.batch_size=8 model=coloring_model checkpoint.resume="/path/to/coloring/model/checkpoint-latest.pth" run.coloring_sample_dir="/path/to/sample/dir/like/sample__hydrant__ebs_24/2022-09-22--18-03-20/sample/" run.name=sample_coloring__hydrant__ebs_24

Side note: although this is called "sample_coloring" in the code, it is not really doing any sampling because the coloring model is deterministic.

Pretrained checkpoints

You can download example checkpoints here:

# Downloads checkpoint and logs (1.2G)
bash ./scripts/download-example-logs-and-checkpoints.sh
# Downloads visualizations over the course of training, as an example. Since
# these are large (3.5G), we have made them a separate download.
bash ./scripts/download-example-vis.sh

These are newly-trained models with this codebase. We can train and upload models for other categories as well if you would like; just let us know.

Common issues

(1) If you get an error of the form Error building extension '_pvcnn_backend', make sure you have installed gcc and g++. Then check the path in model/pvcnn/modules/functional/backend.py and edit it to your desired location.

(2) PyTorch3D has changed substantially in recent releases, and some of its code may now be broken. I am using version 0.7.3 with a patch on line 634 of pytorch3d/implicitron/dataset/frame_data.py:

image_rgb = torch.from_numpy(load_image(self._local_path(path)))

(3) You may also have to patch the accelerate library in order to properly batch the FrameData objects from PyTorch3D. To fix this, I replaced the following lines in accelerate/utils/operations.py (L91-99):

elif isinstance(data, Mapping):
    return type(data)(
        {
            k: recursively_apply(
                func, v, *args, test_type=test_type, error_on_other_type=error_on_other_type, **kwargs
            )
            for k, v in data.items()
        }
    )

with the following lines:

elif isinstance(data, Mapping):
    from pytorch3d.implicitron.dataset.data_loader_map_provider import FrameData
    if isinstance(data, (FrameData)):
        return type(data)(
            **{
                k: recursively_apply(
                    func, v, *args, test_type=test_type, error_on_other_type=error_on_other_type, **kwargs
                )
                for k, v in data.items()
            }
        )
    else:
        return type(data)(
            {
                k: recursively_apply(
                    func, v, *args, test_type=test_type, error_on_other_type=error_on_other_type, **kwargs
                )
                for k, v in data.items()
            }
        )

Acknowledgement

  • The PyTorch3D library.
  • The diffusers library.
  • The Co3D and Co3Dv2 datasets.
  • Our funding: Luke Melas-Kyriazi is supported by the Rhodes Trust. Andrea Vedaldi and Christian Rupprecht are supported by ERC-UNION-CoG-101001212. Christian Rupprecht is also supported by VisualAI EP/T028572/1.

Citation

@misc{melaskyriazi2023projection,
  doi = {10.48550/ARXIV.2302.10668},
  url = {https://arxiv.org/abs/2302.10668},
  author = {Melas-Kyriazi, Luke and Rupprecht, Christian and Vedaldi, Andrea},
  title = {PC^2 Projection-Conditioned Point Cloud Diffusion for Single-Image 3D Reconstruction},
  publisher = {arXiv},
  year = {2023},
}

'point_cloud_model.model.pvcnn.sa_layers.1.0.voxel_layers.0.weight', 'point_cloud_model.model.pvcnn.sa_layers.1.0.voxel_layers.0.bias', 'point_cloud_model.model.pvcnn.sa_layers.1.0.voxel_layers.1.weight', 
'point_cloud_model.model.pvcnn.sa_layers.1.0.voxel_layers.1.bias', 'point_cloud_model.model.pvcnn.sa_layers.1.0.voxel_layers.4.weight', 'point_cloud_model.model.pvcnn.sa_layers.1.0.voxel_layers.4.bias', 
'point_cloud_model.model.pvcnn.sa_layers.1.0.voxel_layers.5.weight', 'point_cloud_model.model.pvcnn.sa_layers.1.0.voxel_layers.5.bias', 'point_cloud_model.model.pvcnn.sa_layers.1.0.voxel_layers.6.q.weight', 
'point_cloud_model.model.pvcnn.sa_layers.1.0.voxel_layers.6.q.bias', 'point_cloud_model.model.pvcnn.sa_layers.1.0.voxel_layers.6.k.weight', 'point_cloud_model.model.pvcnn.sa_layers.1.0.voxel_layers.6.k.bias', 
'point_cloud_model.model.pvcnn.sa_layers.1.0.voxel_layers.6.v.weight', 'point_cloud_model.model.pvcnn.sa_layers.1.0.voxel_layers.6.v.bias', 'point_cloud_model.model.pvcnn.sa_layers.1.0.voxel_layers.6.out.weight', 
'point_cloud_model.model.pvcnn.sa_layers.1.0.voxel_layers.6.out.bias', 'point_cloud_model.model.pvcnn.sa_layers.1.0.voxel_layers.6.norm.weight', 
'point_cloud_model.model.pvcnn.sa_layers.1.0.voxel_layers.6.norm.bias', 'point_cloud_model.model.pvcnn.sa_layers.1.0.voxel_layers.7.fc.0.weight', 
'point_cloud_model.model.pvcnn.sa_layers.1.0.voxel_layers.7.fc.2.weight', 'point_cloud_model.model.pvcnn.sa_layers.1.0.point_features.layers.0.weight', 
'point_cloud_model.model.pvcnn.sa_layers.1.0.point_features.layers.0.bias', 'point_cloud_model.model.pvcnn.sa_layers.1.0.point_features.layers.1.weight', 
'point_cloud_model.model.pvcnn.sa_layers.1.0.point_features.layers.1.bias', 'point_cloud_model.model.pvcnn.sa_layers.1.1.mlps.0.layers.0.weight', 'point_cloud_model.model.pvcnn.sa_layers.1.1.mlps.0.layers.0.bias', 
'point_cloud_model.model.pvcnn.sa_layers.1.1.mlps.0.layers.1.weight', 'point_cloud_model.model.pvcnn.sa_layers.1.1.mlps.0.layers.1.bias', 'point_cloud_model.model.pvcnn.sa_layers.1.1.mlps.0.layers.3.weight', 
'point_cloud_model.model.pvcnn.sa_layers.1.1.mlps.0.layers.3.bias', 'point_cloud_model.model.pvcnn.sa_layers.1.1.mlps.0.layers.4.weight', 'point_cloud_model.model.pvcnn.sa_layers.1.1.mlps.0.layers.4.bias', 
'point_cloud_model.model.pvcnn.sa_layers.2.0.voxel_layers.0.weight', 'point_cloud_model.model.pvcnn.sa_layers.2.0.voxel_layers.0.bias', 'point_cloud_model.model.pvcnn.sa_layers.2.0.voxel_layers.1.weight', 
'point_cloud_model.model.pvcnn.sa_layers.2.0.voxel_layers.1.bias', 'point_cloud_model.model.pvcnn.sa_layers.2.0.voxel_layers.4.weight', 'point_cloud_model.model.pvcnn.sa_layers.2.0.voxel_layers.4.bias', 
'point_cloud_model.model.pvcnn.sa_layers.2.0.voxel_layers.5.weight', 'point_cloud_model.model.pvcnn.sa_layers.2.0.voxel_layers.5.bias', 'point_cloud_model.model.pvcnn.sa_layers.2.0.voxel_layers.7.fc.0.weight', 
'point_cloud_model.model.pvcnn.sa_layers.2.0.voxel_layers.7.fc.2.weight', 'point_cloud_model.model.pvcnn.sa_layers.2.0.point_features.layers.0.weight', 
'point_cloud_model.model.pvcnn.sa_layers.2.0.point_features.layers.0.bias', 'point_cloud_model.model.pvcnn.sa_layers.2.0.point_features.layers.1.weight', 
'point_cloud_model.model.pvcnn.sa_layers.2.0.point_features.layers.1.bias', 'point_cloud_model.model.pvcnn.sa_layers.2.1.mlps.0.layers.0.weight', 'point_cloud_model.model.pvcnn.sa_layers.2.1.mlps.0.layers.0.bias', 
'point_cloud_model.model.pvcnn.sa_layers.2.1.mlps.0.layers.1.weight', 'point_cloud_model.model.pvcnn.sa_layers.2.1.mlps.0.layers.1.bias', 'point_cloud_model.model.pvcnn.sa_layers.2.1.mlps.0.layers.3.weight', 
'point_cloud_model.model.pvcnn.sa_layers.2.1.mlps.0.layers.3.bias', 'point_cloud_model.model.pvcnn.sa_layers.2.1.mlps.0.layers.4.weight', 'point_cloud_model.model.pvcnn.sa_layers.2.1.mlps.0.layers.4.bias', 
'point_cloud_model.model.pvcnn.sa_layers.3.mlps.0.layers.0.weight', 'point_cloud_model.model.pvcnn.sa_layers.3.mlps.0.layers.0.bias', 'point_cloud_model.model.pvcnn.sa_layers.3.mlps.0.layers.1.weight', 
'point_cloud_model.model.pvcnn.sa_layers.3.mlps.0.layers.1.bias', 'point_cloud_model.model.pvcnn.sa_layers.3.mlps.0.layers.3.weight', 'point_cloud_model.model.pvcnn.sa_layers.3.mlps.0.layers.3.bias', 
'point_cloud_model.model.pvcnn.sa_layers.3.mlps.0.layers.4.weight', 'point_cloud_model.model.pvcnn.sa_layers.3.mlps.0.layers.4.bias', 'point_cloud_model.model.pvcnn.sa_layers.3.mlps.0.layers.6.weight', 
'point_cloud_model.model.pvcnn.sa_layers.3.mlps.0.layers.6.bias', 'point_cloud_model.model.pvcnn.sa_layers.3.mlps.0.layers.7.weight', 'point_cloud_model.model.pvcnn.sa_layers.3.mlps.0.layers.7.bias', 
'point_cloud_model.model.pvcnn.global_att.q.weight', 'point_cloud_model.model.pvcnn.global_att.q.bias', 'point_cloud_model.model.pvcnn.global_att.k.weight', 'point_cloud_model.model.pvcnn.global_att.k.bias', 
'point_cloud_model.model.pvcnn.global_att.v.weight', 'point_cloud_model.model.pvcnn.global_att.v.bias', 'point_cloud_model.model.pvcnn.global_att.out.weight', 'point_cloud_model.model.pvcnn.global_att.out.bias', 
'point_cloud_model.model.pvcnn.global_att.norm.weight', 'point_cloud_model.model.pvcnn.global_att.norm.bias', 'point_cloud_model.model.pvcnn.fp_layers.0.0.mlp.layers.0.weight', 
'point_cloud_model.model.pvcnn.fp_layers.0.0.mlp.layers.0.bias', 'point_cloud_model.model.pvcnn.fp_layers.0.0.mlp.layers.1.weight', 'point_cloud_model.model.pvcnn.fp_layers.0.0.mlp.layers.1.bias', 
'point_cloud_model.model.pvcnn.fp_layers.0.0.mlp.layers.3.weight', 'point_cloud_model.model.pvcnn.fp_layers.0.0.mlp.layers.3.bias', 'point_cloud_model.model.pvcnn.fp_layers.0.0.mlp.layers.4.weight', 
'point_cloud_model.model.pvcnn.fp_layers.0.0.mlp.layers.4.bias', 'point_cloud_model.model.pvcnn.fp_layers.0.1.voxel_layers.0.weight', 'point_cloud_model.model.pvcnn.fp_layers.0.1.voxel_layers.0.bias', 
'point_cloud_model.model.pvcnn.fp_layers.0.1.voxel_layers.1.weight', 'point_cloud_model.model.pvcnn.fp_layers.0.1.voxel_layers.1.bias', 'point_cloud_model.model.pvcnn.fp_layers.0.1.voxel_layers.4.weight', 
'point_cloud_model.model.pvcnn.fp_layers.0.1.voxel_layers.4.bias', 'point_cloud_model.model.pvcnn.fp_layers.0.1.voxel_layers.5.weight', 'point_cloud_model.model.pvcnn.fp_layers.0.1.voxel_layers.5.bias', 
'point_cloud_model.model.pvcnn.fp_layers.0.1.voxel_layers.7.fc.0.weight', 'point_cloud_model.model.pvcnn.fp_layers.0.1.voxel_layers.7.fc.2.weight', 
'point_cloud_model.model.pvcnn.fp_layers.0.1.point_features.layers.0.weight', 'point_cloud_model.model.pvcnn.fp_layers.0.1.point_features.layers.0.bias', 
'point_cloud_model.model.pvcnn.fp_layers.0.1.point_features.layers.1.weight', 'point_cloud_model.model.pvcnn.fp_layers.0.1.point_features.layers.1.bias', 
'point_cloud_model.model.pvcnn.fp_layers.0.2.voxel_layers.0.weight', 'point_cloud_model.model.pvcnn.fp_layers.0.2.voxel_layers.0.bias', 'point_cloud_model.model.pvcnn.fp_layers.0.2.voxel_layers.1.weight', 
'point_cloud_model.model.pvcnn.fp_layers.0.2.voxel_layers.1.bias', 'point_cloud_model.model.pvcnn.fp_layers.0.2.voxel_layers.4.weight', 'point_cloud_model.model.pvcnn.fp_layers.0.2.voxel_layers.4.bias', 
'point_cloud_model.model.pvcnn.fp_layers.0.2.voxel_layers.5.weight', 'point_cloud_model.model.pvcnn.fp_layers.0.2.voxel_layers.5.bias', 'point_cloud_model.model.pvcnn.fp_layers.0.2.voxel_layers.7.fc.0.weight', 
'point_cloud_model.model.pvcnn.fp_layers.0.2.voxel_layers.7.fc.2.weight', 'point_cloud_model.model.pvcnn.fp_layers.0.2.point_features.layers.0.weight', 
'point_cloud_model.model.pvcnn.fp_layers.0.2.point_features.layers.0.bias', 'point_cloud_model.model.pvcnn.fp_layers.0.2.point_features.layers.1.weight', 
'point_cloud_model.model.pvcnn.fp_layers.0.2.point_features.layers.1.bias', 'point_cloud_model.model.pvcnn.fp_layers.0.3.voxel_layers.0.weight', 'point_cloud_model.model.pvcnn.fp_layers.0.3.voxel_layers.0.bias', 
'point_cloud_model.model.pvcnn.fp_layers.0.3.voxel_layers.1.weight', 'point_cloud_model.model.pvcnn.fp_layers.0.3.voxel_layers.1.bias', 'point_cloud_model.model.pvcnn.fp_layers.0.3.voxel_layers.4.weight', 
'point_cloud_model.model.pvcnn.fp_layers.0.3.voxel_layers.4.bias', 'point_cloud_model.model.pvcnn.fp_layers.0.3.voxel_layers.5.weight', 'point_cloud_model.model.pvcnn.fp_layers.0.3.voxel_layers.5.bias', 
'point_cloud_model.model.pvcnn.fp_layers.0.3.voxel_layers.7.fc.0.weight', 'point_cloud_model.model.pvcnn.fp_layers.0.3.voxel_layers.7.fc.2.weight', 
'point_cloud_model.model.pvcnn.fp_layers.0.3.point_features.layers.0.weight', 'point_cloud_model.model.pvcnn.fp_layers.0.3.point_features.layers.0.bias', 
'point_cloud_model.model.pvcnn.fp_layers.0.3.point_features.layers.1.weight', 'point_cloud_model.model.pvcnn.fp_layers.0.3.point_features.layers.1.bias', 
'point_cloud_model.model.pvcnn.fp_layers.1.0.mlp.layers.0.weight', 'point_cloud_model.model.pvcnn.fp_layers.1.0.mlp.layers.0.bias', 'point_cloud_model.model.pvcnn.fp_layers.1.0.mlp.layers.1.weight', 
'point_cloud_model.model.pvcnn.fp_layers.1.0.mlp.layers.1.bias', 'point_cloud_model.model.pvcnn.fp_layers.1.0.mlp.layers.3.weight', 'point_cloud_model.model.pvcnn.fp_layers.1.0.mlp.layers.3.bias', 
'point_cloud_model.model.pvcnn.fp_layers.1.0.mlp.layers.4.weight', 'point_cloud_model.model.pvcnn.fp_layers.1.0.mlp.layers.4.bias', 'point_cloud_model.model.pvcnn.fp_layers.1.1.voxel_layers.0.weight', 
'point_cloud_model.model.pvcnn.fp_layers.1.1.voxel_layers.0.bias', 'point_cloud_model.model.pvcnn.fp_layers.1.1.voxel_layers.1.weight', 'point_cloud_model.model.pvcnn.fp_layers.1.1.voxel_layers.1.bias', 
'point_cloud_model.model.pvcnn.fp_layers.1.1.voxel_layers.4.weight', 'point_cloud_model.model.pvcnn.fp_layers.1.1.voxel_layers.4.bias', 'point_cloud_model.model.pvcnn.fp_layers.1.1.voxel_layers.5.weight', 
'point_cloud_model.model.pvcnn.fp_layers.1.1.voxel_layers.5.bias', 'point_cloud_model.model.pvcnn.fp_layers.1.1.voxel_layers.7.fc.0.weight', 'point_cloud_model.model.pvcnn.fp_layers.1.1.voxel_layers.7.fc.2.weight',
'point_cloud_model.model.pvcnn.fp_layers.1.1.point_features.layers.0.weight', 'point_cloud_model.model.pvcnn.fp_layers.1.1.point_features.layers.0.bias', 
'point_cloud_model.model.pvcnn.fp_layers.1.1.point_features.layers.1.weight', 'point_cloud_model.model.pvcnn.fp_layers.1.1.point_features.layers.1.bias', 
'point_cloud_model.model.pvcnn.fp_layers.1.2.voxel_layers.0.weight', 'point_cloud_model.model.pvcnn.fp_layers.1.2.voxel_layers.0.bias', 'point_cloud_model.model.pvcnn.fp_layers.1.2.voxel_layers.1.weight', 
'point_cloud_model.model.pvcnn.fp_layers.1.2.voxel_layers.1.bias', 'point_cloud_model.model.pvcnn.fp_layers.1.2.voxel_layers.4.weight', 'point_cloud_model.model.pvcnn.fp_layers.1.2.voxel_layers.4.bias', 
'point_cloud_model.model.pvcnn.fp_layers.1.2.voxel_layers.5.weight', 'point_cloud_model.model.pvcnn.fp_layers.1.2.voxel_layers.5.bias', 'point_cloud_model.model.pvcnn.fp_layers.1.2.voxel_layers.7.fc.0.weight', 
'point_cloud_model.model.pvcnn.fp_layers.1.2.voxel_layers.7.fc.2.weight', 'point_cloud_model.model.pvcnn.fp_layers.1.2.point_features.layers.0.weight', 
'point_cloud_model.model.pvcnn.fp_layers.1.2.point_features.layers.0.bias', 'point_cloud_model.model.pvcnn.fp_layers.1.2.point_features.layers.1.weight', 
'point_cloud_model.model.pvcnn.fp_layers.1.2.point_features.layers.1.bias', 'point_cloud_model.model.pvcnn.fp_layers.1.3.voxel_layers.0.weight', 'point_cloud_model.model.pvcnn.fp_layers.1.3.voxel_layers.0.bias', 
'point_cloud_model.model.pvcnn.fp_layers.1.3.voxel_layers.1.weight', 'point_cloud_model.model.pvcnn.fp_layers.1.3.voxel_layers.1.bias', 'point_cloud_model.model.pvcnn.fp_layers.1.3.voxel_layers.4.weight', 
'point_cloud_model.model.pvcnn.fp_layers.1.3.voxel_layers.4.bias', 'point_cloud_model.model.pvcnn.fp_layers.1.3.voxel_layers.5.weight', 'point_cloud_model.model.pvcnn.fp_layers.1.3.voxel_layers.5.bias', 
'point_cloud_model.model.pvcnn.fp_layers.1.3.voxel_layers.7.fc.0.weight', 'point_cloud_model.model.pvcnn.fp_layers.1.3.voxel_layers.7.fc.2.weight', 
'point_cloud_model.model.pvcnn.fp_layers.1.3.point_features.layers.0.weight', 'point_cloud_model.model.pvcnn.fp_layers.1.3.point_features.layers.0.bias', 
'point_cloud_model.model.pvcnn.fp_layers.1.3.point_features.layers.1.weight', 'point_cloud_model.model.pvcnn.fp_layers.1.3.point_features.layers.1.bias', 
'point_cloud_model.model.pvcnn.fp_layers.2.0.mlp.layers.0.weight', 'point_cloud_model.model.pvcnn.fp_layers.2.0.mlp.layers.0.bias', 'point_cloud_model.model.pvcnn.fp_layers.2.0.mlp.layers.1.weight', 
'point_cloud_model.model.pvcnn.fp_layers.2.0.mlp.layers.1.bias', 'point_cloud_model.model.pvcnn.fp_layers.2.0.mlp.layers.3.weight', 'point_cloud_model.model.pvcnn.fp_layers.2.0.mlp.layers.3.bias', 
'point_cloud_model.model.pvcnn.fp_layers.2.0.mlp.layers.4.weight', 'point_cloud_model.model.pvcnn.fp_layers.2.0.mlp.layers.4.bias', 'point_cloud_model.model.pvcnn.fp_layers.2.1.voxel_layers.0.weight', 
'point_cloud_model.model.pvcnn.fp_layers.2.1.voxel_layers.0.bias', 'point_cloud_model.model.pvcnn.fp_layers.2.1.voxel_layers.1.weight', 'point_cloud_model.model.pvcnn.fp_layers.2.1.voxel_layers.1.bias', 
'point_cloud_model.model.pvcnn.fp_layers.2.1.voxel_layers.4.weight', 'point_cloud_model.model.pvcnn.fp_layers.2.1.voxel_layers.4.bias', 'point_cloud_model.model.pvcnn.fp_layers.2.1.voxel_layers.5.weight', 
'point_cloud_model.model.pvcnn.fp_layers.2.1.voxel_layers.5.bias', 'point_cloud_model.model.pvcnn.fp_layers.2.1.voxel_layers.7.fc.0.weight', 'point_cloud_model.model.pvcnn.fp_layers.2.1.voxel_layers.7.fc.2.weight',
'point_cloud_model.model.pvcnn.fp_layers.2.1.point_features.layers.0.weight', 'point_cloud_model.model.pvcnn.fp_layers.2.1.point_features.layers.0.bias', 
'point_cloud_model.model.pvcnn.fp_layers.2.1.point_features.layers.1.weight', 'point_cloud_model.model.pvcnn.fp_layers.2.1.point_features.layers.1.bias', 
'point_cloud_model.model.pvcnn.fp_layers.2.2.voxel_layers.0.weight', 'point_cloud_model.model.pvcnn.fp_layers.2.2.voxel_layers.0.bias', 'point_cloud_model.model.pvcnn.fp_layers.2.2.voxel_layers.1.weight', 
'point_cloud_model.model.pvcnn.fp_layers.2.2.voxel_layers.1.bias', 'point_cloud_model.model.pvcnn.fp_layers.2.2.voxel_layers.4.weight', 'point_cloud_model.model.pvcnn.fp_layers.2.2.voxel_layers.4.bias', 
'point_cloud_model.model.pvcnn.fp_layers.2.2.voxel_layers.5.weight', 'point_cloud_model.model.pvcnn.fp_layers.2.2.voxel_layers.5.bias', 'point_cloud_model.model.pvcnn.fp_layers.2.2.voxel_layers.7.fc.0.weight', 
'point_cloud_model.model.pvcnn.fp_layers.2.2.voxel_layers.7.fc.2.weight', 'point_cloud_model.model.pvcnn.fp_layers.2.2.point_features.layers.0.weight', 
'point_cloud_model.model.pvcnn.fp_layers.2.2.point_features.layers.0.bias', 'point_cloud_model.model.pvcnn.fp_layers.2.2.point_features.layers.1.weight', 
'point_cloud_model.model.pvcnn.fp_layers.2.2.point_features.layers.1.bias', 'point_cloud_model.model.pvcnn.fp_layers.3.0.mlp.layers.0.weight', 'point_cloud_model.model.pvcnn.fp_layers.3.0.mlp.layers.0.bias', 
'point_cloud_model.model.pvcnn.fp_layers.3.0.mlp.layers.1.weight', 'point_cloud_model.model.pvcnn.fp_layers.3.0.mlp.layers.1.bias', 'point_cloud_model.model.pvcnn.fp_layers.3.0.mlp.layers.3.weight', 
'point_cloud_model.model.pvcnn.fp_layers.3.0.mlp.layers.3.bias', 'point_cloud_model.model.pvcnn.fp_layers.3.0.mlp.layers.4.weight', 'point_cloud_model.model.pvcnn.fp_layers.3.0.mlp.layers.4.bias', 
'point_cloud_model.model.pvcnn.fp_layers.3.0.mlp.layers.6.weight', 'point_cloud_model.model.pvcnn.fp_layers.3.0.mlp.layers.6.bias', 'point_cloud_model.model.pvcnn.fp_layers.3.0.mlp.layers.7.weight', 
'point_cloud_model.model.pvcnn.fp_layers.3.0.mlp.layers.7.bias', 'point_cloud_model.model.pvcnn.fp_layers.3.1.voxel_layers.0.weight', 'point_cloud_model.model.pvcnn.fp_layers.3.1.voxel_layers.0.bias', 
'point_cloud_model.model.pvcnn.fp_layers.3.1.voxel_layers.1.weight', 'point_cloud_model.model.pvcnn.fp_layers.3.1.voxel_layers.1.bias', 'point_cloud_model.model.pvcnn.fp_layers.3.1.voxel_layers.4.weight', 
'point_cloud_model.model.pvcnn.fp_layers.3.1.voxel_layers.4.bias', 'point_cloud_model.model.pvcnn.fp_layers.3.1.voxel_layers.5.weight', 'point_cloud_model.model.pvcnn.fp_layers.3.1.voxel_layers.5.bias', 
'point_cloud_model.model.pvcnn.fp_layers.3.1.voxel_layers.7.fc.0.weight', 'point_cloud_model.model.pvcnn.fp_layers.3.1.voxel_layers.7.fc.2.weight', 
'point_cloud_model.model.pvcnn.fp_layers.3.1.point_features.layers.0.weight', 'point_cloud_model.model.pvcnn.fp_layers.3.1.point_features.layers.0.bias', 
'point_cloud_model.model.pvcnn.fp_layers.3.1.point_features.layers.1.weight', 'point_cloud_model.model.pvcnn.fp_layers.3.1.point_features.layers.1.bias', 
'point_cloud_model.model.pvcnn.fp_layers.3.2.voxel_layers.0.weight', 'point_cloud_model.model.pvcnn.fp_layers.3.2.voxel_layers.0.bias', 'point_cloud_model.model.pvcnn.fp_layers.3.2.voxel_layers.1.weight', 
'point_cloud_model.model.pvcnn.fp_layers.3.2.voxel_layers.1.bias', 'point_cloud_model.model.pvcnn.fp_layers.3.2.voxel_layers.4.weight', 'point_cloud_model.model.pvcnn.fp_layers.3.2.voxel_layers.4.bias', 
'point_cloud_model.model.pvcnn.fp_layers.3.2.voxel_layers.5.weight', 'point_cloud_model.model.pvcnn.fp_layers.3.2.voxel_layers.5.bias', 'point_cloud_model.model.pvcnn.fp_layers.3.2.voxel_layers.7.fc.0.weight', 
'point_cloud_model.model.pvcnn.fp_layers.3.2.voxel_layers.7.fc.2.weight', 'point_cloud_model.model.pvcnn.fp_layers.3.2.point_features.layers.0.weight', 
'point_cloud_model.model.pvcnn.fp_layers.3.2.point_features.layers.0.bias', 'point_cloud_model.model.pvcnn.fp_layers.3.2.point_features.layers.1.weight', 
'point_cloud_model.model.pvcnn.fp_layers.3.2.point_features.layers.1.bias', 'point_cloud_model.model.pvcnn.classifier.0.layers.0.weight', 'point_cloud_model.model.pvcnn.classifier.0.layers.0.bias', 
'point_cloud_model.model.pvcnn.classifier.0.layers.1.weight', 'point_cloud_model.model.pvcnn.classifier.0.layers.1.bias', 'point_cloud_model.model.pvcnn.classifier.2.weight', 
'point_cloud_model.model.pvcnn.classifier.2.bias', 'point_cloud_model.model.pvcnn.embedf.0.weight', 'point_cloud_model.model.pvcnn.embedf.0.bias', 'point_cloud_model.model.pvcnn.embedf.2.weight', 
'point_cloud_model.model.pvcnn.embedf.2.bias', 'point_cloud_model.model.output_projection.0.layers.0.weight', 'point_cloud_model.model.output_projection.0.layers.0.bias', 
'point_cloud_model.model.output_projection.0.layers.1.weight', 'point_cloud_model.model.output_projection.0.layers.1.bias', 'point_cloud_model.model.output_projection.2.weight', 
'point_cloud_model.model.output_projection.2.bias']
298 missing, 328 unexpected! total 448 modules.

and this is the predicted mesh (file outputs/sample__hydrant__ebs_24/2023-07-22--21-29-07/sample/pred/hydrant/147_16374_32167.ply):
[screenshot of the predicted point cloud]

Can you check what is going wrong with the checkpoint? Thank you very much!

Best,
Xianghui
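A missing/unexpected count this large usually means the checkpoint and the instantiated model disagree on key prefixes or on the architecture itself. As a rough diagnostic (a generic sketch, not code from this repository; the wrapper prefixes tried below are assumptions), one can compare the two key sets and probe for a common prefix offset:

```python
# Generic sketch for diagnosing a state-dict key mismatch.
# The prefixes tried below ('module.', 'model.') are common wrappers
# (DataParallel, nested submodules); they are assumptions, not taken
# from this repository's checkpoints.

def diagnose_key_mismatch(model_keys, ckpt_keys, prefixes=('module.', 'model.')):
    model_keys, ckpt_keys = set(model_keys), set(ckpt_keys)
    report = {
        'missing': sorted(model_keys - ckpt_keys),     # in model, not in checkpoint
        'unexpected': sorted(ckpt_keys - model_keys),  # in checkpoint, not in model
    }
    # If stripping a wrapper prefix makes the sets line up, the weights are
    # fine and only the key names need rewriting before load_state_dict.
    for p in prefixes:
        stripped = {k[len(p):] for k in ckpt_keys if k.startswith(p)}
        if stripped == model_keys:
            report['hint'] = f"checkpoint keys carry an extra '{p}' prefix"
    return report

report = diagnose_key_mismatch(
    ['linear.weight', 'linear.bias'],
    ['module.linear.weight', 'module.linear.bias'],
)
```

If no prefix rewrite makes the sets match, the checkpoint was most likely saved from a different model configuration than the one being instantiated.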

AttributeError: 'NoneType' object has no attribute 'path'

I am trying to run inference (sampling) on some data using the "co3dv2 --single_sequence_subset" dataset. However, I met an error like this:

[screenshot of the AttributeError traceback]

Following the hint in the issue, I found that it may be caused by the following part of the code in the file "projection-conditioned-point-cloud-diffusion-main/experiments/dataset/__init__.py":

[screenshot of the relevant code]

It seems that something is wrong with the path of the point cloud. I also see that the author wrote a comment like "PATCH BUG WITH POINT CLOUD LOCATION", but I don't know how to deal with this issue. Can someone help me? Thanks a lot.
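If the crash comes from sequence annotations whose point-cloud entry is None (so that accessing `.path` raises the AttributeError), one defensive fix is to filter those sequences out before touching the path. This is a generic sketch, not the repository's actual code; the class and field names below are illustrative:

```python
# Hedged sketch: skip sequence annotations that ship no point cloud,
# so that `.path` is never read from None. `PointCloud` and
# `SeqAnnotation` here are illustrative stand-ins, not the real classes.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PointCloud:
    path: str

@dataclass
class SeqAnnotation:
    name: str
    point_cloud: Optional[PointCloud]

def usable_sequences(annotations: List[SeqAnnotation]) -> List[SeqAnnotation]:
    """Keep only sequences that actually have a point cloud file."""
    return [a for a in annotations if a.point_cloud is not None]

seqs = [SeqAnnotation('a', PointCloud('a/pointcloud.ply')),
        SeqAnnotation('b', None)]
good = usable_sequences(seqs)
# good[0].point_cloud.path is now safe to access; sequence 'b' was skipped
```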

Error running sampling code

Hi there,

Thanks for the open-source code. When I run the sampling code using the following command (slightly modified from the one provided in the README),

python main.py run.job=sample dataloader.batch_size=16 dataloader.num_workers=6 dataset.category=teddybear checkpoint.resume="./example-logs-and-checkpoints/train__teddybear__ebs_48/2022-09-19--16-11-43/checkpoint-latest.pth" run.name=sample__teddybear__ebs_24

I got this error:

ValidationError: Incompatible value 'None' for field of type 'str'
    full_key: root
    object_type=CO3DConfig

It originates from ./experiments/config/structured.py:282:

cs.store(group='dataset', name='co3d', node=CO3DConfig)

Do you have any idea on this issue? Thanks in advance.
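For context, this kind of ValidationError is what a structured config produces when a field annotated `str` ends up holding None, typically because the dataset root was never set. The snippet below is a stdlib stand-in that mimics the check (it is not the real OmegaConf/Hydra code, and the field name `root` is the only thing taken from the error message):

```python
# Hedged sketch: reproduce the spirit of the structured-config check that
# yields "Incompatible value 'None' for field of type 'str'". In the real
# config, `root` is presumably populated from the dataset location; if it
# resolves to None, validation fails before anything runs.
from dataclasses import dataclass, fields

@dataclass
class CO3DConfigSketch:
    root: str = None  # unset dataset root triggers the failure

def validate(cfg):
    for f in fields(cfg):
        value = getattr(cfg, f.name)
        if f.type is str and not isinstance(value, str):
            raise ValueError(
                f"Incompatible value '{value}' for field of type 'str': {f.name}")

try:
    validate(CO3DConfigSketch())
    message = ''
except ValueError as e:
    message = str(e)
```

So the likely fix is to point the config at a real dataset location (e.g. an explicit `dataset.root=...` override on the command line, if the config exposes that key) rather than leaving it unset.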

How to load ShapeNet data for training?

Dear Luke Melas-Kyriazi:
Sorry to interrupt you!
I'm currently trying to reproduce the results on the ShapeNet dataset, but I'm failing at the data-loading stage. Specifically, I imitated the .json files you provided for the CO3D dataset and created corresponding .json files for ShapeNet. However, because the corresponding frame_annotations.jgz and sequence_annotations.jgz files do not exist for ShapeNet, data loading still fails.
Would you be willing to provide the ShapeNet-related configuration files (i.e. set_lists, eval_batches, frame_annotations.jgz, sequence_annotations.jgz) and the data-loading source files, so that the code can run on the ShapeNet dataset? I would be grateful if you could provide them!

In addition, should the 13 categories of ShapeNet be trained separately or jointly?
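For what it's worth, the CO3D `.jgz` annotation files are gzip-compressed JSON, so equivalent files for another dataset can be produced with the standard library alone. This is a hedged sketch: the annotation fields below are illustrative placeholders, not the exact schema (the real schema follows the CO3D/PyTorch3D annotation dataclasses):

```python
# Hedged sketch: write a gzip-compressed JSON annotation file in the
# spirit of CO3D's frame_annotations.jgz. Field names are illustrative,
# not the real CO3D schema.
import gzip
import json
import os
import tempfile

frames = [{
    'sequence_name': '02691156/abc123',  # hypothetical ShapeNet synset/model id
    'frame_number': 0,
    'image': {'path': '02691156/abc123/images/frame000000.jpg',
              'size': [224, 224]},
}]

path = os.path.join(tempfile.mkdtemp(), 'frame_annotations.jgz')
with gzip.open(path, 'wt', encoding='utf-8') as f:
    json.dump(frames, f)

# Round-trip to confirm the file reads back as plain JSON under gzip.
with gzip.open(path, 'rt', encoding='utf-8') as f:
    loaded = json.load(f)
```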

Dataset

Can you please describe what your dataset looks like? I am trying to run your work [your code], but I guess something is wrong with the way I downloaded the dataset: it's showing num_sample=0.

Your work is amazing, and I am working on a similar topic. Your reply on this issue will be very helpful.
thank you again.

ShapeNet checkpoint & test scripts

Hi,

The current sampling code appears to be limited to the CO3D dataset. Would it be possible to also release the checkpoint and test scripts for the ShapeNet dataset?

Thanks in advance!

Test code

May I ask whether you could release the test code? Thank you very much!

Color normalization

Do we have to normalize the colors before training the coloring model? What is the impact of not doing color normalization?

Inquiry about Inference Time for Real Examples

Hello @lukemelas ,

I've been exploring your project, and I'm particularly interested in the inference capabilities of your model. Could you provide some information on how long it typically takes to run an inference on a real-world example?

Specifically, I'm looking to understand the performance I can expect in a production environment. Any details on inference time across different hardware setups or complexities of input data would be extremely helpful.

Thank you in advance for your support and for the great work on this project!

Best regards,
woody

How do you manage the structure of the CO3D dataset?

Hi, thanks for your nice work. I want to re-run your project, so I downloaded the tv subset from CO3D and put it under the project_root/datasets folder like this:
[screenshot of the directory layout]
[screenshot of the directory layout]
Then I got an error like this:
[screenshot of the first error]
I then found that tv_000 and tv_001 both contain a "frame_annotations.jgz" file, so I tried moving all the files under tv/tv_000/ to tv/:
[screenshot of the layout after moving tv_000]
Then the error became this:
[screenshot of the second error]
So I moved all the files under tv/tv_001/ to tv/ as well:
[screenshot of the layout after moving tv_001]
Then I got a new error:
[screenshot of the third error]

I used PyCharm to debug, but I have no idea what caused it. Can you share how you organize the dataset and successfully run the project?
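For reference, the chunk folders such as tv_000 and tv_001 are an artifact of the CO3D download: their contents are meant to be merged into a single category directory rather than kept side by side. Roughly (an assumption based on the public CO3D release, not confirmed in this thread), the loader expects a layout like:

```
<dataset_root>/
  tv/
    frame_annotations.jgz
    sequence_annotations.jgz
    set_lists/
    eval_batches/
    <sequence_name>/
      images/
      masks/
      pointcloud.ply
```

so moving everything from tv_000/ and tv_001/ directly under tv/ (and resolving the duplicate annotation files, if any) is the right general direction.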
