chatsim's Introduction

ChatSim

Editable Scene Simulation for Autonomous Driving via LLM-Agent Collaboration

Arxiv | Project Page | Video

(teaser figure)

News

[06/12/2024] 🔥🔥🔥 Background rendering speed-up! 3D Gaussian Splatting is integrated as a background rendering engine, rendering 50 frames within 30s.

[06/12/2024] 🔥🔥🔥 Foreground rendering speed-up! Blender now renders in multiple parallel processes, producing 50 frames within 5 minutes.

Requirement

  • Ubuntu version >= 20.04 (for using Blender 3.+)
  • Python >= 3.8
  • Pytorch >= 1.13
  • CUDA >= 11.6
  • COLMAP or Metashape software (optional; we provide recalibrated poses)
  • OpenAI API Key (you can also use other models' APIs from NVIDIA AI for free)

Installation

First clone this repo recursively.

git clone https://github.com/yifanlu0227/ChatSim.git --recursive

Step 1: Setup environment

conda create -n chatsim python=3.9 git-lfs
conda activate chatsim

Step 2: Install background rendering engine

We offer two background rendering methods: McNeRF, used in our paper, and 3D Gaussian Splatting. McNeRF encodes the exposure time and achieves brightness-consistent rendering. 3D Gaussian Splatting renders much faster (about 50×) and reaches higher PSNR on training views, but strong perspective shifts result in noticeable artifacts.

McNeRF

mcnerf.mp4

3D Gaussian Splatting

3dgs.mp4

Installing either one is OK! If you want high rendering speed and do not care about brightness inconsistency, choose 3D Gaussian Splatting.

Install McNeRF (official implementation in the paper)
pip install torch==1.13.1+cu117 torchvision==0.14.1+cu117 torchaudio==0.13.1 --extra-index-url https://download.pytorch.org/whl/cu117

pip install -r requirements.txt
imageio_download_bin freeimage

The installation is the same as F2-NeRF. Please go through the following steps.

cd chatsim/background/mcnerf/

# mcnerf uses the same data directory.
ln -s ../../../data .

Step 2.1: Install dependencies

For Debian based Linux distributions:

sudo apt install zlib1g-dev

For Arch based Linux distributions:

sudo pacman -S zlib

Step 2.2: Download pre-compiled LibTorch

Taking torch-1.13.1+cu117 as an example.

cd chatsim/background/mcnerf
cd External

# modify the version if you use a different pytorch installation
wget https://download.pytorch.org/libtorch/cu117/libtorch-cxx11-abi-shared-with-deps-1.13.1%2Bcu117.zip
unzip ./libtorch-cxx11-abi-shared-with-deps-1.13.1+cu117.zip
rm ./libtorch-cxx11-abi-shared-with-deps-1.13.1+cu117.zip

Step 2.3: Compile

The minimum supported g++ version is 7.5.0.

cd ..
cmake . -B build
cmake --build build --target main --config RelWithDebInfo -j

Whenever the mcnerf code is modified, re-run the last two commands.

Install 3D Gaussian Splatting

3DGS has much faster inference and higher rendering quality, but the HDR sky is not supported in this case.

Installing 3DGS requires that your CUDA (nvcc) version matches your PyTorch CUDA version.

# make CUDA (nvcc) version consistent with the pytorch CUDA version.

# first check your CUDA (nvcc) version
nvcc -V # for example: Build cuda_11.8.r11.8

# go to https://pytorch.org/get-started/previous-versions/ to find a corresponding one. The PyTorch version itself should be >= 1.13.

# We list a few options here for quick setup.
# CUDA 11.6 
pip install torch==1.13.0+cu116 torchvision==0.14.0+cu116 torchaudio==0.13.0 --extra-index-url https://download.pytorch.org/whl/cu116
# CUDA 11.7
pip install torch==1.13.0+cu117 torchvision==0.14.0+cu117 torchaudio==0.13.0 --extra-index-url https://download.pytorch.org/whl/cu117
# CUDA 11.8
conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 pytorch-cuda=11.8 -c pytorch -c nvidia
# CUDA 12.1
conda install pytorch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 pytorch-cuda=12.1 -c pytorch -c nvidia

pip install -r requirements.txt
imageio_download_bin freeimage

cd chatsim/background/gaussian-splatting/
pip install submodules/simple-knn
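
If the extension build fails or you are unsure whether your environment is consistent, a quick check like the following (illustrative, not part of the official setup) prints the CUDA version your PyTorch build was compiled with, which should match the nvcc version shown by nvcc -V above:

# Quick sanity check (illustrative): PyTorch's CUDA build should match `nvcc -V`.
import torch
print("torch:", torch.__version__)
print("built with CUDA:", torch.version.cuda)
print("GPU available:", torch.cuda.is_available())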

Step 3: Install Inpainting tools

Step 3.1: Setup Video Inpainting

cd ../inpainting/Inpaint-Anything/
python -m pip install -e segment_anything
gdown https://drive.google.com/drive/folders/1wpY-upCo4GIW4wVPnlMh_ym779lLIG2A -O pretrained_models --folder
gdown https://drive.google.com/drive/folders/1SERTIfS7JYyOOmXWujAva4CDQf-W7fjv -O pytracking/pretrain --folder

Step 3.2: Setup Image Inpainting

cd ../latent-diffusion
pip install -e git+https://github.com/CompVis/taming-transformers.git@master#egg=taming-transformers
pip install -e git+https://github.com/openai/CLIP.git@main#egg=clip
pip install -e .

# download pretrained ldm
wget -O models/ldm/inpainting_big/last.ckpt https://heibox.uni-heidelberg.de/f/4d9ac7ea40c64582b7c9/?dl=1

Step 4: Install Blender Software and our Blender Utils

We tested with Blender 3.5.1. Note that Blender 3+ requires Ubuntu version >= 20.04.

Step 4.1: Install Blender software

cd ../../Blender
wget https://download.blender.org/release/Blender3.5/blender-3.5.1-linux-x64.tar.xz
tar -xvf blender-3.5.1-linux-x64.tar.xz
rm blender-3.5.1-linux-x64.tar.xz

Step 4.2: Install blender utils for Blender's python

Locate Blender's bundled Python, for example blender-3.5.1-linux-x64/3.5/python/bin/python3.10, and export it:

export blender_py=$PWD/blender-3.5.1-linux-x64/3.5/python/bin/python3.10

cd utils

# install dependencies (add -i https://pypi.tuna.tsinghua.edu.cn/simple if you are in mainland China)
$blender_py -m pip install -r requirements.txt 
$blender_py -m pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple

$blender_py setup.py develop

Step 5: Setup Trajectory Tracking Module (optional)

If you want smoother and more realistic trajectories, install the trajectory module and set the parameter motion_agent-motion_tracking to True in the .yaml file. For installation (both the code and the pre-trained model), run the following commands in the terminal. This requires PyTorch >= 1.13.

pip install frozendict gym==0.26.2 stable-baselines3[extra] protobuf==3.20.1

cd chatsim/foreground
git clone --recursive git@github.com:MARMOTatZJU/drl-based-trajectory-tracking.git -b v1.0.0

cd drl-based-trajectory-tracking
source setup-minimum.sh

When the parameter motion_agent-motion_tracking is set to True, each trajectory will be tracked by this module to make it smoother and more realistic; a sketch for checking this switch is shown below.
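
To double-check that the switch is on before launching a simulation, a small read-only sketch like the following can help (the nesting under agents -> motion_agent is an assumption; adjust it to match your yaml):

# Sketch: verify the motion-tracking switch in a config yaml.
# The key nesting is an assumption; edit the yaml itself to change the value.
import yaml

with open("config/waymo-1137.yaml") as f:   # pick the config you actually use
    cfg = yaml.safe_load(f)

print(cfg["agents"]["motion_agent"]["motion_tracking"])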

Step 6: Install McLight (optional)

If you want to train the skydome model, follow the README in chatsim/foreground/mclight/skydome_lighting/readme.md. You can download our provided skydome HDRI in the next section and start the simulation.

Usage

Data Preparation

Download and extract Waymo data

mkdir data
mkdir data/waymo_tfrecords
mkdir data/waymo_tfrecords/1.4.2

Download the Waymo Perception Dataset v1.4.2 to data/waymo_tfrecords/1.4.2. In the Google Cloud console, the correct folder path is waymo_open_dataset_v_1_4_2/individual_files/training or waymo_open_dataset_v_1_4_2/individual_files/validation. The static scenes we used are listed below. Use the console's filter to find them quickly, or use gcloud to download them in batch.

gcloud CLI installation for Ubuntu 18.04+ (requires sudo)
sudo apt-get update
sudo apt-get install apt-transport-https ca-certificates gnupg curl
curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/cloud.google.gpg
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee -a /etc/apt/sources.list.d/google-cloud-sdk.list
sudo apt-get update && sudo apt-get install google-cloud-cli # for clash proxy user, you may need https://blog.csdn.net/m0_53694308/article/details/134874757
Static waymo scenes in training set

segment-11379226583756500423_6230_810_6250_810_with_camera_labels segment-12879640240483815315_5852_605_5872_605_with_camera_labels segment-13196796799137805454_3036_940_3056_940_with_camera_labels segment-14333744981238305769_5658_260_5678_260_with_camera_labels segment-14424804287031718399_1281_030_1301_030_with_camera_labels segment-16470190748368943792_4369_490_4389_490_with_camera_labels segment-17761959194352517553_5448_420_5468_420_with_camera_labels segment-4058410353286511411_3980_000_4000_000_with_camera_labels segment-10676267326664322837_311_180_331_180_with_camera_labels segment-1172406780360799916_1660_000_1680_000_with_camera_labels segment-13085453465864374565_2040_000_2060_000_with_camera_labels segment-13142190313715360621_3888_090_3908_090_with_camera_labels segment-13238419657658219864_4630_850_4650_850_with_camera_labels segment-13469905891836363794_4429_660_4449_660_with_camera_labels segment-14004546003548947884_2331_861_2351_861_with_camera_labels segment-14348136031422182645_3360_000_3380_000_with_camera_labels segment-14869732972903148657_2420_000_2440_000_with_camera_labels segment-15221704733958986648_1400_000_1420_000_with_camera_labels segment-15270638100874320175_2720_000_2740_000_with_camera_labels segment-15349503153813328111_2160_000_2180_000_with_camera_labels segment-15365821471737026848_1160_000_1180_000_with_camera_labels segment-15868625208244306149_4340_000_4360_000_with_camera_labels segment-16345319168590318167_1420_000_1440_000_with_camera_labels segment-16608525782988721413_100_000_120_000_with_camera_labels segment-16646360389507147817_3320_000_3340_000_with_camera_labels (deprecated) segment-3425716115468765803_977_756_997_756_with_camera_labels segment-3988957004231180266_5566_500_5586_500_with_camera_labels segment-8811210064692949185_3066_770_3086_770_with_camera_labels segment-9385013624094020582_2547_650_2567_650_with_camera_labels

Static waymo scenes in validation set

segment-10247954040621004675_2180_000_2200_000_with_camera_labels segment-10061305430875486848_1080_000_1100_000_with_camera_labels segment-10275144660749673822_5755_561_5775_561_with_camera_labels

If you have installed gcloud, you can download the above tfrecords via

bash data_utils/download_waymo.sh data_utils/waymo_static_32.lst data/waymo_tfrecords/1.4.2

After downloading tfrecords, you should see a folder structure like the following. If you download the tfrecord files from the console, you will also have prefixes like individual_files_training_ or individual_files_validation_.

data
|-- ...
|-- ...
`-- waymo_tfrecords
    `-- 1.4.2
        |-- segment-10247954040621004675_2180_000_2200_000_with_camera_labels.tfrecord
        |-- segment-11379226583756500423_6230_810_6250_810_with_camera_labels.tfrecord
        |-- ...
        `-- segment-1172406780360799916_1660_000_1680_000_with_camera_labels.tfrecord

We extract the images, camera poses, LiDAR files, etc. from the tfrecord files with data_utils/process_waymo_script.py:

cd data_utils
python process_waymo_script.py --waymo_data_dir=../data/waymo_tfrecords/1.4.2 --nerf_data_dir=../data/waymo_multi_view

This will generate the data folder data/waymo_multi_view.
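
A quick way to confirm the extraction succeeded is to list the generated scene folders (purely illustrative; run from the repository root):

# Illustrative: list the scene folders generated under data/waymo_multi_view.
from pathlib import Path

for scene_dir in sorted(Path("data/waymo_multi_view").glob("segment-*")):
    print(scene_dir.name)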

Recalibrate Waymo data

Download our recalibrated files
cd ../data

# calibration files using metashape
# you can also go to https://drive.google.com/file/d/1ms4yhjH5pEDMhyf_CfzNEYq5kj4HILki/view?usp=sharing to download manually
gdown 1ms4yhjH5pEDMhyf_CfzNEYq5kj4HILki
unzip recalibrated_poses.zip
rsync -av recalibrated_poses/ waymo_multi_view/
rm -r recalibrated_poses*


# if you use 3D Gaussian Splatting, you also need to download the following files
# calibration files using colmap, also the point cloud for 3DGS training
# you can also go to https://huggingface.co/datasets/yifanlu/waymo_recalibrated_poses_colmap/tree/main to download manually
git lfs install
git clone https://huggingface.co/datasets/yifanlu/waymo_recalibrated_poses_colmap
cd waymo_recalibrated_poses_colmap
git lfs pull # ~ 2GB
tar xvf waymo_recalibrated_poses_colmap.tar
cd ..
rsync -av waymo_recalibrated_poses_colmap/waymo_multi_view/ waymo_multi_view/
rm -rf waymo_recalibrated_poses_colmap
Or recalibrate by yourself

If you want to do the recalibration yourself, use COLMAP or Metashape to calibrate the images in the data/waymo_multi_view/{SCENE_NAME}/images folder and convert the poses back to the Waymo world coordinate. Please follow the tutorial in data_utils/README.md. The final camera extrinsics and intrinsics are stored as cams_meta.npy (Metashape case) or colmap/sparse_undistorted/cams_meta.npy (COLMAP case, necessary for 3DGS training).

(calibration comparison figure)

The final data folder will be like:

data
`-- waymo_multi_view
    |-- ...
    `-- segment-1172406780360799916_1660_000_1680_000_with_camera_labels
        |-- 3d_boxes.npy                # 3d bounding boxes of the first frame
        |-- images                      # a clip of waymo images used in chatsim (typically 40 frames)
        |-- images_all                  # full waymo images (typically 198 frames)
        |-- map.pkl                     # map data of this scene
        |-- point_cloud                 # point cloud file of the first frame
        |-- cams_meta.npy               # Camera ext&int calibrated by metashape and transformed to waymo coordinate system.
        |-- cams_meta_metashape.npy     # Camera ext&int calibrated by metashape (intermediate file, relative scale, not required by simulation inference)
        |-- cams_meta_colmap.npy        # Camera ext&int calibrated by colmap (intermediate file, relative scale, not required by simulation inference)
        |-- cams_meta_waymo.npy         # Camera ext&int from original waymo dataset (intermediate file, not required by simulation inference)
        |-- shutters                    # normalized exposure time (mean=0 std=1)
        |-- tracking_info.pkl           # tracking data
        |-- vehi2veh0.npy               # transformation matrix from i-th frame's vehicle coordinate to the first frame's vehicle coordinate
        |-- camera.xml                  # calibration file from Metashape (intermediate file, not required by simulation inference)
        `-- colmap/sparse_undistorted/[images/cams_meta.npy/points3D_waymo.ply]   # calibration files from COLMAP (intermediate file, only required when using 3dgs rendering)
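
To double-check that these files are in place for a scene, a small script like the following can help (purely illustrative; substitute one of your own scene names):

# Illustrative check that the calibration files the pipeline expects are present.
from pathlib import Path

scene = Path("data/waymo_multi_view") / \
    "segment-1172406780360799916_1660_000_1680_000_with_camera_labels"

print((scene / "cams_meta.npy").exists())                             # Metashape recalibration (McNeRF)
print((scene / "colmap/sparse_undistorted/cams_meta.npy").exists())   # COLMAP recalibration (3DGS)
print((scene / "colmap/sparse_undistorted/points3D_waymo.ply").exists())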
        

Coordinate Convention

  • Points in point_cloud/000_xxx.pcd are in the ego vehicle's coordinate frame.
  • Camera poses in camera.xml follow the RDF convention (x-right, y-down, z-front).
  • Camera poses in cams_meta.npy follow the RUB convention (x-right, y-up, z-back); a conversion sketch follows this list.
  • vehi2veh0.npy stores the transformation between vehicle coordinate frames; vehicle coordinates follow the FLU convention (x-front, y-left, z-up), as illustrated in the Waymo paper.
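
For reference, converting a camera-to-world pose between the two camera conventions above only flips the camera's y and z axes. The snippet below is an illustrative sketch (function and variable names are ours), assuming 4x4 camera-to-world matrices:

# Illustrative: convert a camera-to-world pose from RDF (camera.xml) to RUB (cams_meta.npy).
# Flipping the camera's y and z axes maps (x-right, y-down, z-front) to (x-right, y-up, z-back).
import numpy as np

def rdf_to_rub(c2w_rdf: np.ndarray) -> np.ndarray:
    """c2w_rdf: 4x4 camera-to-world matrix with RDF camera axes."""
    flip = np.diag([1.0, -1.0, -1.0, 1.0])
    return c2w_rdf @ flip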

cams_meta.npy instruction

cams_meta.shape = (N, 27)
cams_meta[:, 0 :12]: flattened camera poses in RUB; the world coordinate is the starting frame's vehicle coordinate.
cams_meta[:, 12:21]: flattened camera intrinsics
cams_meta[:, 21:25]: distortion params [k1, k2, p1, p2]
cams_meta[:, 25:27]: bounds [z_near, z_far] (not used)
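
A minimal sketch of reading this file, assuming the 12 pose values and the 9 intrinsic values are row-major flattenings of 3x4 and 3x3 matrices respectively (variable names below are ours, and <SCENE_NAME> is a placeholder):

# Illustrative parsing of cams_meta.npy following the layout described above.
import numpy as np

cams_meta  = np.load("data/waymo_multi_view/<SCENE_NAME>/cams_meta.npy")  # shape (N, 27)
c2w        = cams_meta[:, 0:12].reshape(-1, 3, 4)   # camera-to-world poses, RUB convention
intrinsics = cams_meta[:, 12:21].reshape(-1, 3, 3)  # 3x3 intrinsic matrices
distortion = cams_meta[:, 21:25]                    # [k1, k2, p1, p2]
bounds     = cams_meta[:, 25:27]                    # [z_near, z_far] (not used)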

Download Blender 3D Assets

  • Blender Assets. Download with the following command and make sure they are in data/blender_assets.
# suppose you are in ChatSim/data
git lfs install
git clone https://huggingface.co/datasets/yifanlu/Blender_3D_assets
cd Blender_3D_assets
git lfs pull # about 1 GB. You might see `Error updating the Git index: (1/1), 1.0 GB | 7.4 MB/s` when `git lfs pull` finishes. It doesn't matter; please continue.

cd ..
mv Blender_3D_assets/assets.zip ./
unzip assets.zip
rm assets.zip
rm -rf Blender_3D_assets
mv assets blender_assets

Our 3D models are collected from the Internet. We tried our best to contact the authors of the models and ensure that copyright issues are properly handled (our open-source project is not for profit). If you are the author of a model and our use infringes your copyright, please contact us immediately and we will delete the model.

Download Skydome HDRI

  • Skydome HDRI. Download with the following command and make sure they are in data/waymo_skydome.
# suppose you are in ChatSim/data
git lfs install
git clone https://huggingface.co/datasets/yifanlu/Skydome_HDRI
mv Skydome_HDRI/waymo_skydome ./
rm -rf Skydome_HDRI

You can also train the skydome estimation network yourself. Go to chatsim/foreground/mclight/skydome_lighting and follow chatsim/foreground/mclight/skydome_lighting/readme.md for the training.

Train and simulation

Train either McNeRF or 3D Gaussian Splatting, depending on your installation.

Train McNeRF
cd chatsim/background/mcnerf

Make sure the data folder links to ../../../data. If it doesn't, run ln -s ../../../data data. Then train your model with

python scripts/run.py --config-name=wanjinyou_big \
dataset_name=waymo_multi_view case_name=${CASE_NAME} \
exp_name=${EXP_NAME} dataset.shutter_coefficient=0.15 mode=train_hdr_shutter +work_dir=$(pwd) 

where ${CASE_NAME} is a scene name like segment-11379226583756500423_6230_810_6250_810_with_camera_labels and ${EXP_NAME} can be anything, e.g. exp_coeff_0.15. Both dataset.shutter_coefficient=0.15 and dataset.shutter_coefficient=0.3 work well.

You can simply run scripts like bash train-1137.sh for training and bash render_novel_view-1137.sh for testing.

Train 3D Gaussian Splatting
cd chatsim/background/gaussian-splatting

Make sure the data folder links to ../../../data. If it doesn't, run ln -s ../../../data data. Then train your model with

# example
SCENE_NAME=segment-11379226583756500423_6230_810_6250_810_with_camera_labels

python train.py --config configs/chatsim/original.yaml source_path=data/waymo_multi_view/${SCENE_NAME}/colmap/sparse_undistorted model_path=output/${SCENE_NAME}

# rendering
python render.py -m output/${SCENE_NAME}

You can simply run scripts like bash train-1137.sh for training.

Start simulation

Set the API key as an environment variable. Also set OPENAI_API_BASE if you have network issues (especially in mainland China).

export OPENAI_API_KEY=<your api key>

Now you can start the simulation with

python main.py -y ${CONFIG YAML} \
               -p ${PROMPT} \
               [-s ${SIMULATION NAME}]
  • ${CONFIG YAML} specifies the scene information; the yamls are stored in the config folder, e.g. config/waymo-1137.yaml.

  • ${PROMPT} is your input prompt, which should be wrapped in quotation marks, e.g. "add a straight driving car in the scene".

  • ${SIMULATION NAME} determines the name of the folder when saving results. The default is demo.

You can try

# if you train nerf
python main.py -y config/waymo-1137.yaml -p "Add a Benz G in front of me, driving away fast."
# if you train 3DGS
python main.py -y config/3dgs-waymo-1137.yaml -p "Add a Benz G in front of me, driving away fast."

The rendered results are saved in results/1137_demo_%Y_%m_%d_%H_%M_%S. Intermediate files are saved in results/cache/1137_demo_%Y_%m_%d_%H_%M_%S for debugging and visualization if save_cache is enabled in config/waymo-1137.yaml.
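
For reference, the timestamped suffix follows the strftime pattern above; a sketch of how such a folder name resolves (names are illustrative):

# Illustrative: how a timestamped result folder name is formed.
from datetime import datetime

simulation_name = "1137_demo"
timestamp = datetime.now().strftime("%Y_%m_%d_%H_%M_%S")
print(f"results/{simulation_name}_{timestamp}")  # e.g. results/1137_demo_2024_06_11_17_12_20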

Config file explanation

config/waymo-1137.yaml contains a detailed explanation of each entry. We give some extra explanation here. Suppose the yaml is read into config_dict (a loading sketch follows the list below):

  • config_dict['scene']['is_wide_angle'] determines the rendering view. If set to True, we expand Waymo's intrinsics (width -> 3 x width) to render wide-angle images. Also note that is_wide_angle = True comes with rendering_mode = 'render_wide_angle_hdr_shutter', and is_wide_angle = False comes with rendering_mode = 'render_hdr_shutter'.

  • config_dict['scene']['frames'] sets the number of frames to render.

  • config_dict['agents']['background_rendering_agent']['nerf_quiet_render'] determines whether to suppress McNeRF's output in the terminal. Set it to False for debugging.

  • config_dict['agents']['foreground_rendering_agent']['use_surrounding_lighting'] defines whether we use the surrounding lighting. Currently use_surrounding_lighting = True only takes effect when exactly one vehicle is added, because the HDRI is a global illumination in Blender and it is difficult to set a separate HDRI for each car. use_surrounding_lighting = True can also slow down rendering, since it calls the NeRF once per frame. We set it to False in each default yaml.

  • config_dict['agents']['foreground_rendering_agent']['skydome_hdri_idx'] is the filename (without extension) we choose from data/waymo_skydome/${SCENE_NAME}/. It is the skydome HDRI estimation from the first frame ('000') by default, but you can manually select a better estimation from another frame. To view the HDRI, we recommend VERIV for VS Code and tev for the desktop environment.
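
To inspect these entries quickly, you can load the yaml into a plain dictionary as assumed above (a sketch; the key paths follow the explanations in this section):

# Illustrative: load a config and print the entries discussed above.
import yaml

with open("config/waymo-1137.yaml") as f:
    config_dict = yaml.safe_load(f)

print(config_dict['scene']['is_wide_angle'])
print(config_dict['scene']['frames'])
print(config_dict['agents']['background_rendering_agent']['nerf_quiet_render'])
print(config_dict['agents']['foreground_rendering_agent']['use_surrounding_lighting'])
print(config_dict['agents']['foreground_rendering_agent']['skydome_hdri_idx'])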

Todo

Citation

@InProceedings{wei2024editable,
      title={Editable Scene Simulation for Autonomous Driving via Collaborative LLM-Agents}, 
      author={Yuxi Wei and Zi Wang and Yifan Lu and Chenxin Xu and Changxing Liu and Hao Zhao and Siheng Chen and Yanfeng Wang},
      booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
      month={June},
      year={2024},
}

chatsim's People

Contributors

michaelfan30, vfishc, yifanjiang111, yifanlu0227, ziwang1105


chatsim's Issues

UnboundLocalError: local variable 'sorted_destination' referenced before assignment

python3 main.py -y config/waymo-3425.yaml -p 'add serval different colors traffic_cones such as yellow red black in the scene' -s demo

Traceback (most recent call last):
File "/home/ChatSim/main.py", line 135, in
chatsim.execute_llms(args.prompt)
File "/home/ChatSim/main.py", line 99, in execute_llms
self.project_manager.dispatch_task(self.scene, task, self.tech_agents)
File "/home/ChatSim/chatsim/agents/project_manager.py", line 167, in dispatch_task
self.addition_operation(scene, task, tech_agents)
File "/home/ChatSim/chatsim/agents/project_manager.py", line 230, in addition_operation
motion_agent.func_placement_and_motion_single_vehicle(scene, added_car_name)
File "/home/ChatSim/chatsim/agents/motion_agent.py", line 314, in func_placement_and_motion_single_vehicle
motion_result = vehicle_motion(
File "/home/ChatSim/chatsim/foreground/motion_tools/placement_and_motion.py", line 87, in vehicle_motion
end = np.array([sorted_destination[0], sorted_destination[1]])
UnboundLocalError: local variable 'sorted_destination' referenced before assignment

About the mclight part

Hi! Thanks for the amazing work!
I am really interested in your multi-camera-images-to-HDR-skydome part. I tried to run infer.py in mc_to_sky/tools, and I copied the config info from the Pretrain zip you provided. The inference part worked well after I ignored all the requests to import a unet module from the sub_model file. I ran some tests on the waymo segment "segment-11379226583756500423_6230_810_6250_810_with_camera_labels" and imported the .exr files whose names do not end with '_sky' into Blender, but the shadow direction doesn't seem to be right. Am I setting the config wrong? And when I change the config to infer from only a single image, the image rendered from Blender seems to lack sunshine. I also noticed the line written at the top of infer.py, which gives the command to infer the HDR from a single image, but I can't find the config '/home/yfl/workspace/LDR_to_HDR/mc_to_sky/logs/pred_hdr_pano_from_single_1012_195149/config.yaml'. Maybe you could post it?
Thanks in advance!

The view adjust agent is broken.

command: python3 main.py -y config/roan-10-not-wide.yaml -p 'move the viewpoint to 10m ahead' -s demo
output:
[User prompt] move the viewpoint to 10m ahead

[Project Manager] decomposing tasks
[Raw Response>>>] The broken down actions for this requirement are:

{1: 'Move the viewpoint to 10m ahead'}

This only requires a single action without any additional information about the scene or any vehicles.
[Extracted Response>>>] {1: 'Move the viewpoint to 10m ahead'}

[Performing Single Prompt] Move the viewpoint to 10m ahead

[Project Manager] dispatching each task
[Raw Response>>>] {'operation': 3}
[Extracted Response>>>] 3. (adjusting the viewpoint)

[View Adjust Agent LLM] reasoning the view motion
[Raw Response>>>] This description is related to view motion. Therefore, the dictionary would be:

{'if_view_motion': 1}
[Extracted Response>>>] {'if_view_motion': 1}

[View Adjust Agent LLM] generating the ego motion
[Raw Response>>>] I'm sorry, as an AI language model, I don't have the capability to actually move your viewpoint physically. However, I can provide you with assistance and information related to computer vision and image processing. Is there anything specific that you would like help with?
substring not found
Traceback (most recent call last):
File "/home/ChatSim/chatsim/agents/view_adjust_agent.py", line 115, in llm_view_motion_gen
start = answer.index("{")
ValueError: substring not found
Traceback (most recent call last):
File "/home/ChatSim/main.py", line 135, in
chatsim.execute_llms(args.prompt)
File "/home/ChatSim/main.py", line 99, in execute_llms
self.project_manager.dispatch_task(self.scene, task, self.tech_agents)
File "/home/ChatSim/chatsim/agents/project_manager.py", line 173, in dispatch_task
self.view_adjust_operation(scene, task, tech_agents)
File "/home/ChatSim/chatsim/agents/project_manager.py", line 278, in view_adjust_operation
start_frame_in_nerf, end_frame_in_nerf = view_adjust_agent.llm_view_motion_gen(scene, task)
ValueError: too many values to unpack (expected 2)

Can you provide a dockerfile?

@yifanlu0227
Hi, first of all, this is great work. 3D production should find vertical application scenarios to be practical, otherwise it becomes digital-asset garbage. I recently saw PRISM-1 from a UK autonomous-driving company, which is also a 4D scene reconstruction framework and very impressive.
Secondly, can you provide a Dockerfile so that we can quickly deploy and see the results? Otherwise, the configuration takes a long time and different problems occur on different machines.
Finally, thank you very much !

ValueError: need at least one array to concatenate

Nice work!!
When I was running 'start simulation', executing this line of code: python main.py -y config/waymo-1137.yaml -p 'add a straight driving car in the scene' -s demo
I encountered this error:
(screenshot)
I found that the issue was because the values for 'centerline' and 'boundary' read from the map.pkl file are empty.

(screenshot)

Additionally, when I am doing Data Preparation, the instruction is: "python process_waymo_script.py --waymo_data_dir=../data/waymo_tfrecords/1.4.2 --nerf_data_dir=../data/waymo_multi_view".
I encounter this display:
(screenshot)

and I'm unsure if it's related to the issue mentioned above.

Could you please advise me on how to resolve this problem?
Thanks a lot!

Add assets in 3dgs scene

Hi,

Me again, sorry:).

I ran the command python main.py -y config/3dgs-waymo-1137.yaml -p "Add a Benz G in front of me, driving away fast."
The result shows a static scene with nothing added:
(output video)
I found that the rendering log shows:
(screenshot)

I guess this is because ego_motion is false in the agent function:
(screenshot)
So I manually set it to True; then I get a result with an added car but still a static scene:
(output video)

I've checked that the rendered images in gaussian-splatting do have moving frames. I don't know what to do about it. Could you help me fix it and explain a bit about what the ego_motion setting here means?

FileNotFoundError: No such file: '/home/lferris/ChatSim/results/cache/1137_demo_2024_07_09_11_04_41/blender_output/0/RGB_composite.png'

Currently, during the replication process, I encountered the following issue:
(screenshot)
when I run the command python main.py -y config/waymo-1137.yaml -p "Add a Benz G in front of me, driving away fast." for testing, I receive the following error:

FileNotFoundError: No such file: '/home/lferris/ChatSim/results/cache/1137_demo_2024_07_09_11_04_41/blender_output/0/RGB_composite.png'

The McNeRF has already been fully trained, and I have also called the valid OpenAI API.

About multi-camera alignment

Hi, I am very interested in your promising work. I have some questions about multi-camera alignment.

  1. Why did you use Metashape to refine the camera poses? Is the result better than COLMAP for multi-camera pose estimation?
  2. Did you only refine the multi-camera extrinsics (cami2ego)? And did you use the ego car's pose from the vehicle sensor to transform the camera coordinates to the world?

Thanks in advance.

ChatSim and Blender rendering with my own scene data

Hi, I met some problems when blending the background RGB and 3D assets.
It works when I use your provided scene data to add a traffic cone on the ground.
However, the traffic cone cannot be placed on the ground (setting z to 0) when I use my own scene data.

The config looks like this:

cars:
- blender_file: blend_3d_assets/data_assets/Traffic_cone.blend
  insert_pos:
  - 10.459
  - -0.0068
  - 2
  insert_rot:
  - 0
  - 0
  - -0.0045
  model_obj_name: Car
  new_obj_name: cone2
- blender_file: blend_3d_assets/data_assets/Traffic_cone.blend
  insert_pos:
  - 10.459
  - -0.0068
  - 0
  insert_rot:
  - 0
  - 0
  - -0.0045
  model_obj_name: Car
  new_obj_name: cone3

The cam2world matrix is:

[[ 0.0112684  -0.02057352  0.99972486  1.9004364 ]
 [-0.9999064   0.00751276  0.01142506 -0.03219445]
 [-0.00774574 -0.9997601  -0.02048694  1.4025325 ]
 [ 0.          0.          0.          1.        ]]

I want to know whether there is any hard-coding for the Waymo setting in the Blender Python scripts, and how I can debug the script in the VS Code debugger?
Thanks in advance.
Best wishes.

The mc_to_light code does not run

I followed your readme and installed the skydomexxx package, then ran the first training stage, but there are many package import problems and lots of small errors. The most typical one is

from mc_to_sky.data_utils.holicity_sdr_centered_dataset import HoliCitySDRCenteredDataset

This file does not exist at all.......

About the data structure of Blender models

Hi, I want to ask about how to convert a '.fbx' model to the '.blend' format that ChatSim supports.
I simply imported the '.fbx' into Blender and saved it as '.blend'. However, I cannot get the result, and it reports the message:

WARN (bke.customdata): source/blender/blenkernel/intern/customdata.cc:4319 CustomData_layer_ensure_data_exists: CustomDataLayer->data is NULL for type 46.
Blender 3.5.1 (hash e1ccd9d4a1d3 built 2023-04-24 23:31:15)
Writing: /tmp/blender.crash.txt
Writing: /tmp/blender.crash.txt
Writing: /tmp/blender.crash.txt
Writing: /tmp/blender.crash.txt

Could you provide some help if you have any time?

Waymo data different from the data in process_waymo_script.py

(screenshot)

Below is process_waymo_script:
(screenshot)
I downloaded the Waymo dataset as described in your readme (v1.4.2/individual_files/training), but the file names are different from the scene names in process_waymo_script.py, so I can't execute the command to get the correct data.

Error when compositing video

Hi,

First, I would like to thank for your great work and really cool demos!

Recently, I've encountered a bug when testing the simulation. At the video composition stage, it seems like the imageio-ffmpeg package has a problem being imported. I wonder what may cause this problem and how to solve it?

I've followed all steps for installation and environment setup + trained McNeRF. I did not install the trajectory tracking module since it is listed as optional.

The logs output are provided below for your reference. Thank you in advance!

use shutter                                                                               
[Inpaint] No inpainting.                                                                  
[Blender] Start rendering 50 images.                                                      
see the log in results/cache/1006_demo_2024_06_11_17_12_20/blender.log if save_cache is enabled
100%|██████████| 50/50 [08:58<00:00, 10.76s/it]
[Blender] Finish rendering 50 images.
[Blender] Copying Remaining 0 images.
[Compositing video] start...
  0%|          | 0/50 [00:00<?, ?it/s]
Traceback (most recent call last):
  File "/home/shenlong/Documents/maxhsu/ChatSim/main.py", line 136, in <module>
    chatsim.execute_funcs()
  File "/home/shenlong/Documents/maxhsu/ChatSim/main.py", line 124, in execute_funcs
    generate_video(self.scene, self.current_prompt)
  File "/home/shenlong/Documents/maxhsu/ChatSim/chatsim/agents/utils.py", line 41, in generate_video
    writer.append_data(frame)
  File "/home/shenlong/miniconda3/envs/chatsim/lib/python3.9/site-packages/imageio/core/format.py", line 590, in append_data
    return self._append_data(im, total_meta)
  File "/home/shenlong/miniconda3/envs/chatsim/lib/python3.9/site-packages/imageio/plugins/ffmpeg.py", line 587, in _append_data
    self._initialize()
  File "/home/shenlong/miniconda3/envs/chatsim/lib/python3.9/site-packages/imageio/plugins/ffmpeg.py", line 648, in _initialize
    self._write_gen.send(None)
  File "/home/shenlong/miniconda3/envs/chatsim/lib/python3.9/site-packages/imageio_ffmpeg/_io.py", line 508, in write_frames
    codec = get_first_available_h264_encoder()
  File "/home/shenlong/miniconda3/envs/chatsim/lib/python3.9/site-packages/imageio_ffmpeg/_io.py", line 124, in get_first_available_h264_encoder
    compiled_encoders = get_compiled_h264_encoders()
  File "/home/shenlong/miniconda3/envs/chatsim/lib/python3.9/site-packages/imageio_ffmpeg/_io.py", line 58, in get_compiled_h264_encoders
    cmd = [get_ffmpeg_exe(), "-hide_banner", "-encoders"]
  File "/home/shenlong/miniconda3/envs/chatsim/lib/python3.9/site-packages/imageio_ffmpeg/_utils.py", line 28, in get_ffmpeg_exe
    exe = _get_ffmpeg_exe()
  File "/home/shenlong/miniconda3/envs/chatsim/lib/python3.9/site-packages/imageio_ffmpeg/_utils.py", line 44, in _get_ffmpeg_exe
    exe = os.path.join(_get_bin_dir(), FNAME_PER_PLATFORM.get(plat, ""))
  File "/home/shenlong/miniconda3/envs/chatsim/lib/python3.9/site-packages/imageio_ffmpeg/_utils.py", line 69, in _get_bin_dir
    ref = importlib.resources.files("imageio_ffmpeg.binaries") / "__init__.py"
  File "/home/shenlong/miniconda3/envs/chatsim/lib/python3.9/importlib/resources.py", line 147, in files
    return _common.from_package(_get_package(package))
  File "/home/shenlong/miniconda3/envs/chatsim/lib/python3.9/importlib/_common.py", line 14, in from_package
    return fallback_resources(package.__spec__)
  File "/home/shenlong/miniconda3/envs/chatsim/lib/python3.9/importlib/_common.py", line 18, in fallback_resources
    package_directory = pathlib.Path(spec.origin).parent
  File "/home/shenlong/miniconda3/envs/chatsim/lib/python3.9/pathlib.py", line 1082, in __new__
    self = cls._from_parts(args, init=False)
  File "/home/shenlong/miniconda3/envs/chatsim/lib/python3.9/pathlib.py", line 707, in _from_parts
    drv, root, parts = self._parse_args(args)
  File "/home/shenlong/miniconda3/envs/chatsim/lib/python3.9/pathlib.py", line 691, in _parse_args
    a = os.fspath(a)
TypeError: expected str, bytes or os.PathLike object, not NoneType

render only foreground in Blender

Hi Yifan,

Thanks for the great code release. The paper shows the foreground-only rendering from Blender (see below), but the code only produces the final composed image. Would you please advise how to change the code to render the foreground only?
(figure from the paper)

cmake --build build --target main --config RelWithDebInfo -j

root@ip-:/home/ChatSim/chatsim/background/mcnerf# cmake --build build --target main --config RelWithDebInfo -j
[  3%] Built target fmt
[ 45%] Built target yaml-cpp
Consolidate compiler generated dependencies of target tiny-cuda-nn
[ 46%] Building CUDA object External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/src/common.cu.o
[ 48%] Building CUDA object External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/src/common_device.cu.o
[ 49%] Building CUDA object External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/src/cpp_api.cu.o
[ 50%] Building CUDA object External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/src/cutlass_mlp.cu.o
[ 51%] Building CUDA object External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/src/encoding.cu.o
[ 53%] Building CUDA object External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/src/loss.cu.o
[ 54%] Building CUDA object External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/src/network.cu.o
[ 55%] Building CUDA object External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/src/optimizer.cu.o
[ 57%] Building CUDA object External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/src/object.cu.o
[ 58%] Building CUDA object External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/src/reduce_sum.cu.o
[ 59%] Building CUDA object External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/src/fully_fused_mlp.cu.o
/usr/include/c++/11/bits/std_function.h:435:145: error: parameter packs not expanded with ‘...’:
  435 |         function(_Functor&& __f)
      |                                                                                                                                                 ^
/usr/include/c++/11/bits/std_function.h:435:145: note:         ‘_ArgTypes’
/usr/include/c++/11/bits/std_function.h:530:146: error: parameter packs not expanded with ‘...’:
  530 |         operator=(_Functor&& __f)
      |                                                                                                                                                  ^
/usr/include/c++/11/bits/std_function.h:530:146: note:         ‘_ArgTypes’
/usr/include/c++/11/bits/std_function.h:435:145: error: parameter packs not expanded with ‘...’:
  435 |         function(_Functor&& __f)
      |                                                                                                                                                 ^
/usr/include/c++/11/bits/std_function.h:435:145: note:         ‘_ArgTypes’
/usr/include/c++/11/bits/std_function.h:530:146: error: parameter packs not expanded with ‘...’:
  530 |         operator=(_Functor&& __f)
      |                                                                                                                                                  ^
/usr/include/c++/11/bits/std_function.h:530:146: note:         ‘_ArgTypes’
/usr/include/c++/11/bits/std_function.h:435:145: error: parameter packs not expanded with ‘...’:
  435 |         function(_Functor&& __f)
      |                                                                                                                                                 ^
/usr/include/c++/11/bits/std_function.h:435:145: note:         ‘_ArgTypes’
/usr/include/c++/11/bits/std_function.h:530:146: error: parameter packs not expanded with ‘...’:
  530 |         operator=(_Functor&& __f)
      |                                                                                                                                                  ^
/usr/include/c++/11/bits/std_function.h:530:146: note:         ‘_ArgTypes’
/usr/include/c++/11/bits/std_function.h:435:145: error: parameter packs not expanded with ‘...’:
  435 |         function(_Functor&& __f)
      |                                                                                                                                                 ^
/usr/include/c++/11/bits/std_function.h:435:145: note:         ‘_ArgTypes’
/usr/include/c++/11/bits/std_function.h:530:146: error: parameter packs not expanded with ‘...’:
  530 |         operator=(_Functor&& __f)
      |                                                                                                                                                  ^
/usr/include/c++/11/bits/std_function.h:530:146: note:         ‘_ArgTypes’
/usr/include/c++/11/bits/std_function.h:435:145: error: parameter packs not expanded with ‘...’:
  435 |         function(_Functor&& __f)
      |                                                                                                                                                 ^
/usr/include/c++/11/bits/std_function.h:435:145: note:         ‘_ArgTypes’
/usr/include/c++/11/bits/std_function.h:530:146: error: parameter packs not expanded with ‘...’:
  530 |         operator=(_Functor&& __f)
      |                                                                                                                                                  ^
/usr/include/c++/11/bits/std_function.h:530:146: note:         ‘_ArgTypes’
/usr/include/c++/11/bits/std_function.h:435:145: error: parameter packs not expanded with ‘...’:
  435 |         function(_Functor&& __f)
      |                                                                                                                                                 ^
/usr/include/c++/11/bits/std_function.h:435:145: note:         ‘_ArgTypes’
/usr/include/c++/11/bits/std_function.h:530:146: error: parameter packs not expanded with ‘...’:
  530 |         operator=(_Functor&& __f)
      |                                                                                                                                                  ^
/usr/include/c++/11/bits/std_function.h:530:146: note:         ‘_ArgTypes’
/usr/include/c++/11/bits/std_function.h:435:145: error: parameter packs not expanded with ‘...’:
  435 |         function(_Functor&& __f)
      |                                                                                                                                                 ^
/usr/include/c++/11/bits/std_function.h:435:145: note:         ‘_ArgTypes’
/usr/include/c++/11/bits/std_function.h:530:146: error: parameter packs not expanded with ‘...’:
  530 |         operator=(_Functor&& __f)
      |                                                                                                                                                  ^
/usr/include/c++/11/bits/std_function.h:530:146: note:         ‘_ArgTypes’
gmake[3]: *** [External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/build.make:174: External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/src/object.cu.o] Error 1
gmake[3]: *** Waiting for unfinished jobs....
gmake[3]: *** [External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/build.make:76: External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/src/common.cu.o] Error 1
gmake[3]: *** [External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/build.make:90: External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/src/common_device.cu.o] Error 1
gmake[3]: *** [External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/build.make:202: External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/src/reduce_sum.cu.o] Error 1
gmake[3]: *** [External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/build.make:160: External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/src/network.cu.o] Error 1
gmake[3]: *** [External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/build.make:146: External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/src/loss.cu.o] Error 1
gmake[3]: *** [External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/build.make:104: External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/src/cpp_api.cu.o] Error 1
/usr/include/c++/11/bits/std_function.h:435:145: error: parameter packs not expanded with ‘...’:
  435 |         function(_Functor&& __f)
      |                                                                                                                                                 ^
/usr/include/c++/11/bits/std_function.h:435:145: note:         ‘_ArgTypes’
/usr/include/c++/11/bits/std_function.h:530:146: error: parameter packs not expanded with ‘...’:
  530 |         operator=(_Functor&& __f)
      |                                                                                                                                                  ^
/usr/include/c++/11/bits/std_function.h:530:146: note:         ‘_ArgTypes’
gmake[3]: *** [External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/build.make:188: External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/src/optimizer.cu.o] Error 1
^Cnvcc error   : 'cicc' died due to signal 2
gmake[3]: *** [External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/build.make:118: External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/src/cutlass_mlp.cu.o] Interrupt
nvcc error   : 'cicc' died due to signal 2
gmake[3]: *** [External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/build.make:216: External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/src/fully_fused_mlp.cu.o] Interrupt
nvcc error   : 'cicc' died due to signal 2
gmake[3]: *** [External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/build.make:132: External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/src/encoding.cu.o] Interrupt
gmake[2]: *** [CMakeFiles/Makefile2:212: External/tiny-cuda-nn/CMakeFiles/tiny-cuda-nn.dir/all] Interrupt
gmake[1]: *** [CMakeFiles/Makefile2:193: CMakeFiles/main.dir/rule] Interrupt
gmake: *** [Makefile:125: main] Interrupt

I'm running the repo on an AWS G4 instance due to GPU constraints.

Can anyone help solve the above error?

RuntimeError: The size of tensor a (50) must match the size of tensor b (10) at non-singleton dimension 1 #25

I change the frames from 50 to 10, and run the following command:
python main.py -y config/waymo-1006.yaml -p 'Remove all cars.Viewpoints ahead slowly and A chevrolet driving away from me fast.' -s demo

I got the following output:

/root/AImodel/wenke/ChatSim/chatsim/background/inpainting/Inpaint-Anything/segment_anything/segment_anything/modeling/tiny_vit_sam.py:657: UserWarning: Overwriting tiny[0/1927]_512 in registry with segment_anything.modeling.tiny_vit_sam.tiny_vit_21m_512. This is because the name being registered conflicts with an existing name. Please check if this is not expected.
  return register_model(fn_wrapper)
sttn
Traceback (most recent call last):
  File "remove_anything_video_npy.py", line 288, in <module>
    all_frame_rm_w_mask = model.forward_inpainter(frames, masks)
  File "remove_anything_video_npy.py", line 132, in forward_inpainter
    frames = inpaint_video_with_builded_sttn(
  File "/root/miniconda3/envs/chatsim/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
    return func(*args, **kwargs)
  File "/root/AImodel/wenke/ChatSim/chatsim/background/inpainting/Inpaint-Anything/sttn_video_inpaint.py", line 91, in inpaint_video_with_builded_sttn
    feats = (feats * (1 - _masks).float()).view(video_length, 3, h, w)
RuntimeError: The size of tensor a (50) must match the size of tensor b (10) at non-singleton dimension 1

This won't happen if I set frames to 50.
I think a hardcoded value remains somewhere, but I failed to find it.
Do you know how to generate a video with a different frame number? Many thanks!

Why can't the car appear in the camera area?

(screenshot)
The prompt is 'add a black car on the front of the scene'. I use my own data. I don't know where the problem is. It seems that my camera's pose is incorrect; it cannot face forward along the x-axis. The McNeRF training is fine. Is there a fixed code parameter written during the camera generation process?

FileNotFoundError: No such file:

When I try to test the simulation with the command python main.py -y config/waymo-1137.yaml -p "Add a Benz G in front of me, driving away fast.", I encounter the following error. I have already trained McNeRF, and I cannot find the target image file in the file path of this error. How should I resolve this issue? How should the image be generated?
(screenshot)

Why do the extrinsics need to be multiplied by the trans_mat?

In process_waymo.py:

trans_mat = np.array([[[ 0.,  0.,  1.,  0.],
                       [-1., -0., -0.,  0.],
                       [-0., -1., -0.,  0.],
                       [-0.,  0., -0.,  1.]]])
extrinsics_ = np.matmul(extrinsics_, trans_mat)

Is this to convert the vehicle coordinate to the OpenCV coordinate?

Simulation 3dgs training problem

Hi,

I successfully trained the scene with 3dgs but encountered a problem during simulation, shown below:
(screenshot)

It looks like rendering works but the results are not saved in cache? I also found that the 'blender_output' folder is empty. Do you have any idea how to fix it?

Thanks!

Could NOT find CUDNN (missing: CUDNN_LIBRARY_PATH CUDNN_INCLUDE_PATH)

Wonderful work!!!
I was trying Step 2.3: cmake . -B build.
However, I encountered the following problem:

(chatsim) root@admin:~/AImodel/wenke/ChatSim/chatsim/background/mcnerf# cmake . -B build
-- Obtained CUDA architectures automatically from installed GPUs
-- Targeting CUDA architectures: 80
-- Module support is disabled.
-- Version: 9.1.1
-- Build type: Release
-- Caffe2: CUDA detected: 11.7
-- Caffe2: CUDA nvcc is: /usr/local/cuda-11.7/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda-11.7
-- Caffe2: Header version is: 11.7
-- Could NOT find CUDNN (missing: CUDNN_LIBRARY_PATH CUDNN_INCLUDE_PATH)
CMake Warning at External/libtorch/share/cmake/Caffe2/public/cuda.cmake:120 (message):
Caffe2: Cannot find cuDNN library. Turning the option off
Call Stack (most recent call first):
External/libtorch/share/cmake/Caffe2/Caffe2Config.cmake:92 (include)
External/libtorch/share/cmake/Torch/TorchConfig.cmake:68 (find_package)
CMakeLists.txt:39 (find_package)

-- /usr/local/cuda-11.7/lib64/libnvrtc.so shorthash is d833c4f3
-- Autodetected CUDA architecture(s): 8.0 8.0 8.0 8.0 8.0 8.0 8.0 8.0
-- Added CUDA NVCC flags for: -gencode;arch=compute_80,code=sm_80
CMake Error at External/libtorch/share/cmake/Caffe2/Caffe2Config.cmake:100 (message):
Your installed Caffe2 version uses cuDNN but I cannot find the cuDNN
libraries. Please set the proper cuDNN prefixes and / or install cuDNN.
Call Stack (most recent call first):
External/libtorch/share/cmake/Torch/TorchConfig.cmake:68 (find_package)
CMakeLists.txt:39 (find_package)

-- Configuring incomplete, errors occurred!
See also "/root/AImodel/wenke/ChatSim/chatsim/background/mcnerf/build/CMakeFiles/CMakeOutput.log".
See also "/root/AImodel/wenke/ChatSim/chatsim/background/mcnerf/build/CMakeFiles/CMakeError.log".

Do you have any idea? Many thanks!!

About the 3D detection augmentation

I'm new to NeRF. How do you obtain annotation information, such as 3D boxes, from simulation data that can be used for 3D detector training?

Data could not be found as per directed

I downloaded the data and stored it in the mentioned location:

(screenshot)

The following command does not create the waymo_multi_view folder, and it seems the script is not running:

(screenshot)

Code release

@yifanlu0227 Thanks for sharing your wonderful work. When is the tentative time of the code and weight release?
Thanks in advance.

Could not find TrajectoryTracker in the simulator package

(screenshot)

I got an error when the motion agent is activated.
The error is a module error: the simulator package has no module TrajectoryTracker.

In the image below I have printed all the modules from the library, but I could not find TrajectoryTracking among them.

(screenshot)

Hope you guys have a solution for this

Thanks in advance

Errors when running rendering and sim

I followed the README.md to install and train the NeRF successfully, but when I tried to run the sim or rendering, this error showed up.
I checked the dir, and there is no such file; the training process ended well (no errors shown).

(screenshot)

open file failed because of errno 2 on fopen: , file path: /code/ChatSim/chatsim/background/mcnerf/exp/segment-10275144660749673822_5755_561_5775_561_with_camera_labels/exp_coeff_0.3/checkpoints/latest/scalars.pt

(screenshot)

waymo-1137-not-wide does not work

Hi,
Thanks for open-sourcing your fancy work.

I tried waymo-1137-not-wide.yaml for rendering efficiency, but the result did not contain any object mentioned in the prompt (even for the default prompt: "add a straight driving car in the scene").

I noticed that the rendering_mode is "render_hdr_shutter" in the not-wide yaml. However, there is no mode named "render_hdr_shutter" in the ExpRunner file.

Is this the reason why no object appears in the final rendering result?

Segmentation fault (core dumped)

Nice work!!
When I did Step 6, Start simulation, I encountered the following problem:

Segmentation fault (core dumped)

Other than that, there are no further hints.
Do you have any idea?
Thanks a lot!

Installation on RTX 4090

Hi,

I have tried to install ChatSim environment on NVIDIA RTX 4090, and I have encountered some problems during installation.

OS: Ubuntu 22.04.4 LTS
gcc: 11.4.0
cmake: 3.22.1
cuda: 11.8

I have modified several steps to make it compatible with the 4090 (since it is compute_89).

conda create -n chatsim python=3.9 git-lfs
conda activate chatsim

##### Install different version of PyTorch & CUDA #####
conda install pytorch==2.0.0 torchvision==0.15.0 torchaudio==2.0.0 pytorch-cuda=11.8 -c pytorch -c nvidia
conda install -c "nvidia/label/cuda-11.8.0" cuda-toolkit

pip install -r requirements.txt
imageio_download_bin freeimage

cd chatsim/background/mcnerf/

# mcnerf use the same data directory. 
ln -s ../../../data .

sudo apt install zlib1g-dev

cd chatsim/background/mcnerf
cd External

##### Download different version of LibTorch #####
wget https://download.pytorch.org/libtorch/cu118/libtorch-cxx11-abi-shared-with-deps-2.0.0%2Bcu118.zip
unzip ./libtorch-cxx11-abi-shared-with-deps-2.0.0+cu118.zip
rm ./libtorch-cxx11-abi-shared-with-deps-2.0.0+cu118.zip

As I tried to run cmake . -B build, it came with a warning message.

-- Obtained CUDA architectures automatically from installed GPUs
-- Targeting CUDA architectures: 89
-- Module support is disabled.
-- Version: 9.1.1
-- Build type: Release
-- Caffe2: CUDA detected: 11.8
-- Caffe2: CUDA nvcc is: /home/haoyuyh3/miniconda3/envs/chatsim/bin/nvcc
-- Caffe2: CUDA toolkit directory: /home/haoyuyh3/miniconda3/envs/chatsim
-- Caffe2: Header version is: 11.8
-- /usr/lib/x86_64-linux-gnu/libnvrtc.so shorthash is 65f2c18b
-- USE_CUDNN is set to 0. Compiling without cuDNN support
-- Autodetected CUDA architecture(s):  8.9
-- Added CUDA NVCC flags for: -gencode;arch=compute_89,code=sm_89
-- Configuring done
CMake Warning at CMakeLists.txt:74 (add_executable):
  Cannot generate a safe runtime search path for target main because files in
  some directories may conflict with libraries in implicit directories:

    runtime library [libnvrtc.so.11.2] in /usr/lib/x86_64-linux-gnu may be hidden by files in:
      /home/haoyuyh3/miniconda3/envs/chatsim/lib
    runtime library [libnvToolsExt.so.1] in /usr/lib/x86_64-linux-gnu may be hidden by files in:
      /home/haoyuyh3/miniconda3/envs/chatsim/lib
    runtime library [libz.so.1] in /usr/lib/x86_64-linux-gnu may be hidden by files in:
      /home/haoyuyh3/miniconda3/envs/chatsim/lib

  Some of these libraries may not be found correctly.


-- Generating done
-- Build files have been written to: /home/haoyuyh3/Documents/maxhsu/editing-related-works/ChatSim/chatsim/background/mcnerf/build

Afterwards, I ran cmake --build build --target main --config RelWithDebInfo -j and came across this error.

(chats) root@server:~/Documents/maxhsu/editing-related-works/ChatSim/chatsim/background/mcnerf$ cmake --build build --target main --config RelWithDebInfo -j
Consolidate compiler generated dependencies of target fmt
Consolidate compiler generated dependencies of target yaml-cpp
[  3%] Built target fmt
Consolidate compiler generated dependencies of target tiny-cuda-nn
[ 45%] Built target yaml-cpp
[ 61%] Built target tiny-cuda-nn
Consolidate compiler generated dependencies of target main
[ 62%] Linking CXX executable main
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiActivityApi.cpp.o): in function `libkineto::CuptiActivityApi::pushCorrelationID(int, libkineto::CuptiActivityApi::CorrelationFlowType)':
CuptiActivityApi.cpp:(.text+0x95b): undefined reference to `cuptiActivityPushExternalCorrelationId'
/usr/bin/ld: CuptiActivityApi.cpp:(.text+0x976): undefined reference to `cuptiGetResultString'
/usr/bin/ld: CuptiActivityApi.cpp:(.text+0xa09): undefined reference to `cuptiActivityPushExternalCorrelationId'
/usr/bin/ld: CuptiActivityApi.cpp:(.text+0xa28): undefined reference to `cuptiGetResultString'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiActivityApi.cpp.o): in function `libkineto::CuptiActivityApi::popCorrelationID(libkineto::CuptiActivityApi::CorrelationFlowType)::{lambda()#1}::operator()() const [clone .isra.216]':
CuptiActivityApi.cpp:(.text+0xbe6): undefined reference to `cuptiActivityPopExternalCorrelationId'
/usr/bin/ld: CuptiActivityApi.cpp:(.text+0xc29): undefined reference to `cuptiGetResultString'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiActivityApi.cpp.o): in function `libkineto::CuptiActivityApi::popCorrelationID(libkineto::CuptiActivityApi::CorrelationFlowType)::{lambda()#2}::operator()() const [clone .isra.217]':
CuptiActivityApi.cpp:(.text+0xd36): undefined reference to `cuptiActivityPopExternalCorrelationId'
/usr/bin/ld: CuptiActivityApi.cpp:(.text+0xd79): undefined reference to `cuptiGetResultString'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiActivityApi.cpp.o): in function `libkineto::CuptiActivityApi::setDeviceBufferSize(unsigned long)::{lambda()#1}::operator()() const':
CuptiActivityApi.cpp:(.text+0xeb8): undefined reference to `cuptiActivitySetAttribute'
/usr/bin/ld: CuptiActivityApi.cpp:(.text+0xf01): undefined reference to `cuptiGetResultString'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiActivityApi.cpp.o): in function `libkineto::CuptiActivityApi::setDeviceBufferPoolLimit(unsigned long)::{lambda()#1}::operator()() const':
CuptiActivityApi.cpp:(.text+0x107b): undefined reference to `cuptiActivitySetAttribute'
/usr/bin/ld: CuptiActivityApi.cpp:(.text+0x10c1): undefined reference to `cuptiGetResultString'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiActivityApi.cpp.o): in function `libkineto::CuptiActivityApi::forceLoadCupti()::{lambda()#1}::operator()() const [clone .isra.218]':
CuptiActivityApi.cpp:(.text+0x1234): undefined reference to `cuptiActivityEnable'
/usr/bin/ld: CuptiActivityApi.cpp:(.text+0x1279): undefined reference to `cuptiGetResultString'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiActivityApi.cpp.o): in function `libkineto::CuptiActivityApi::activityBuffers()':
CuptiActivityApi.cpp:(.text+0x1402): undefined reference to `cuptiActivityFlushAll'
/usr/bin/ld: CuptiActivityApi.cpp:(.text+0x1571): undefined reference to `cuptiGetResultString'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiActivityApi.cpp.o): in function `libkineto::nextActivityRecord(unsigned char*, unsigned long, CUpti_Activity*&)::{lambda()#1}::operator()() const':
CuptiActivityApi.cpp:(.text+0x1751): undefined reference to `cuptiGetResultString'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiActivityApi.cpp.o): in function `libkineto::CuptiActivityApi::processActivitiesForBuffer(unsigned char*, unsigned long, std::function<void (CUpti_Activity const*)>)':
CuptiActivityApi.cpp:(.text+0x188a): undefined reference to `cuptiActivityGetNextRecord'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiActivityApi.cpp.o): in function `libkineto::CuptiActivityApi::clearActivities()::{lambda()#1}::operator()() const [clone .isra.221]':
CuptiActivityApi.cpp:(.text+0x1a71): undefined reference to `cuptiActivityFlushAll'
/usr/bin/ld: CuptiActivityApi.cpp:(.text+0x1ab1): undefined reference to `cuptiGetResultString'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiActivityApi.cpp.o): in function `libkineto::CuptiActivityApi::bufferCompleted(CUctx_st*, unsigned int, unsigned char*, unsigned long, unsigned long)':
CuptiActivityApi.cpp:(.text+0x1fec): undefined reference to `cuptiActivityGetNumDroppedRecords'
/usr/bin/ld: CuptiActivityApi.cpp:(.text+0x21a6): undefined reference to `cuptiGetResultString'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiActivityApi.cpp.o): in function `libkineto::CuptiActivityApi::enableCuptiActivities(std::set<libkineto::ActivityType, std::less<libkineto::ActivityType>, std::allocator<libkineto::ActivityType> > const&)::{lambda()#1}::operator()() const [clone .isra.222]':
CuptiActivityApi.cpp:(.text+0x231d): undefined reference to `cuptiActivityRegisterCallbacks'
/usr/bin/ld: CuptiActivityApi.cpp:(.text+0x2361): undefined reference to `cuptiGetResultString'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiActivityApi.cpp.o): in function `libkineto::CuptiActivityApi::enableCuptiActivities(std::set<libkineto::ActivityType, std::less<libkineto::ActivityType>, std::allocator<libkineto::ActivityType> > const&)::{lambda()#2}::operator()() const [clone .isra.223]':
CuptiActivityApi.cpp:(.text+0x2464): undefined reference to `cuptiActivityEnable'
/usr/bin/ld: CuptiActivityApi.cpp:(.text+0x24a9): undefined reference to `cuptiGetResultString'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiActivityApi.cpp.o): in function `libkineto::CuptiActivityApi::enableCuptiActivities(std::set<libkineto::ActivityType, std::less<libkineto::ActivityType>, std::allocator<libkineto::ActivityType> > const&)::{lambda()#3}::operator()() const [clone .isra.224]':
CuptiActivityApi.cpp:(.text+0x25b4): undefined reference to `cuptiActivityEnable'
/usr/bin/ld: CuptiActivityApi.cpp:(.text+0x25f9): undefined reference to `cuptiGetResultString'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiActivityApi.cpp.o): in function `libkineto::CuptiActivityApi::enableCuptiActivities(std::set<libkineto::ActivityType, std::less<libkineto::ActivityType>, std::allocator<libkineto::ActivityType> > const&)::{lambda()#4}::operator()() const [clone .isra.225]':
CuptiActivityApi.cpp:(.text+0x2704): undefined reference to `cuptiActivityEnable'
/usr/bin/ld: CuptiActivityApi.cpp:(.text+0x2749): undefined reference to `cuptiGetResultString'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiActivityApi.cpp.o): in function `libkineto::CuptiActivityApi::enableCuptiActivities(std::set<libkineto::ActivityType, std::less<libkineto::ActivityType>, std::allocator<libkineto::ActivityType> > const&)::{lambda()#5}::operator()() const [clone .isra.226]':
CuptiActivityApi.cpp:(.text+0x2854): undefined reference to `cuptiActivityEnable'
/usr/bin/ld: CuptiActivityApi.cpp:(.text+0x2899): undefined reference to `cuptiGetResultString'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiActivityApi.cpp.o): in function `libkineto::CuptiActivityApi::enableCuptiActivities(std::set<libkineto::ActivityType, std::less<libkineto::ActivityType>, std::allocator<libkineto::ActivityType> > const&)::{lambda()#6}::operator()() const [clone .isra.227]':
CuptiActivityApi.cpp:(.text+0x29a4): undefined reference to `cuptiActivityEnable'
/usr/bin/ld: CuptiActivityApi.cpp:(.text+0x29e9): undefined reference to `cuptiGetResultString'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiActivityApi.cpp.o): in function `libkineto::CuptiActivityApi::enableCuptiActivities(std::set<libkineto::ActivityType, std::less<libkineto::ActivityType>, std::allocator<libkineto::ActivityType> > const&)::{lambda()#7}::operator()() const [clone .isra.228]':
CuptiActivityApi.cpp:(.text+0x2af4): undefined reference to `cuptiActivityEnable'
/usr/bin/ld: CuptiActivityApi.cpp:(.text+0x2b39): undefined reference to `cuptiGetResultString'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiActivityApi.cpp.o): in function `libkineto::CuptiActivityApi::disableCuptiActivities(std::set<libkineto::ActivityType, std::less<libkineto::ActivityType>, std::allocator<libkineto::ActivityType> > const&)::{lambda()#1}::operator()() const [clone .isra.229]':
CuptiActivityApi.cpp:(.text+0x2d24): undefined reference to `cuptiActivityDisable'
/usr/bin/ld: CuptiActivityApi.cpp:(.text+0x2d69): undefined reference to `cuptiGetResultString'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiActivityApi.cpp.o): in function `libkineto::CuptiActivityApi::disableCuptiActivities(std::set<libkineto::ActivityType, std::less<libkineto::ActivityType>, std::allocator<libkineto::ActivityType> > const&)::{lambda()#2}::operator()() const [clone .isra.230]':
CuptiActivityApi.cpp:(.text+0x2e74): undefined reference to `cuptiActivityDisable'
/usr/bin/ld: CuptiActivityApi.cpp:(.text+0x2eb9): undefined reference to `cuptiGetResultString'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiActivityApi.cpp.o): in function `libkineto::CuptiActivityApi::disableCuptiActivities(std::set<libkineto::ActivityType, std::less<libkineto::ActivityType>, std::allocator<libkineto::ActivityType> > const&)::{lambda()#3}::operator()() const [clone .isra.231]':
CuptiActivityApi.cpp:(.text+0x2fc4): undefined reference to `cuptiActivityDisable'
/usr/bin/ld: CuptiActivityApi.cpp:(.text+0x3009): undefined reference to `cuptiGetResultString'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiActivityApi.cpp.o): in function `libkineto::CuptiActivityApi::disableCuptiActivities(std::set<libkineto::ActivityType, std::less<libkineto::ActivityType>, std::allocator<libkineto::ActivityType> > const&)::{lambda()#4}::operator()() const [clone .isra.232]':
CuptiActivityApi.cpp:(.text+0x3114): undefined reference to `cuptiActivityDisable'
/usr/bin/ld: CuptiActivityApi.cpp:(.text+0x3159): undefined reference to `cuptiGetResultString'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiActivityApi.cpp.o): in function `libkineto::CuptiActivityApi::disableCuptiActivities(std::set<libkineto::ActivityType, std::less<libkineto::ActivityType>, std::allocator<libkineto::ActivityType> > const&)::{lambda()#5}::operator()() const [clone .isra.233]':
CuptiActivityApi.cpp:(.text+0x3264): undefined reference to `cuptiActivityDisable'
/usr/bin/ld: CuptiActivityApi.cpp:(.text+0x32a9): undefined reference to `cuptiGetResultString'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiActivityApi.cpp.o): in function `libkineto::CuptiActivityApi::disableCuptiActivities(std::set<libkineto::ActivityType, std::less<libkineto::ActivityType>, std::allocator<libkineto::ActivityType> > const&)::{lambda()#6}::operator()() const [clone .isra.234]':
CuptiActivityApi.cpp:(.text+0x33b4): undefined reference to `cuptiActivityDisable'
/usr/bin/ld: CuptiActivityApi.cpp:(.text+0x33f9): undefined reference to `cuptiGetResultString'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiActivityApi.cpp.o): in function `libkineto::CuptiActivityApi::teardownContext()::{lambda()#1}::operator()() const':
CuptiActivityApi.cpp:(.text+0x3766): undefined reference to `cuptiActivityFlushAll'
/usr/bin/ld: CuptiActivityApi.cpp:(.text+0x3951): undefined reference to `cuptiGetResultString'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiCallbackApi.cpp.o): in function `libkineto::CuptiCallbackApi::initCallbackApi()':
CuptiCallbackApi.cpp:(.text+0x1f): undefined reference to `cuptiSubscribe'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiCallbackApi.cpp.o): in function `libkineto::CuptiCallbackApi::enableCallback(CUpti_CallbackDomain, unsigned int)':
CuptiCallbackApi.cpp:(.text+0x64b): undefined reference to `cuptiEnableCallback'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiCallbackApi.cpp.o): in function `libkineto::CuptiCallbackApi::enableCallbackDomain(CUpti_CallbackDomain)':
CuptiCallbackApi.cpp:(.text+0x7ca): undefined reference to `cuptiEnableDomain'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiCallbackApi.cpp.o): in function `libkineto::CuptiCallbackApi::reenableCallbacks()':
CuptiCallbackApi.cpp:(.text+0x8f8): undefined reference to `cuptiEnableCallback'
/usr/bin/ld: CuptiCallbackApi.cpp:(.text+0x931): undefined reference to `cuptiEnableDomain'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiCallbackApi.cpp.o): in function `libkineto::CuptiCallbackApi::disableCallback(CUpti_CallbackDomain, unsigned int)':
CuptiCallbackApi.cpp:(.text+0xbbf): undefined reference to `cuptiEnableCallback'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiCallbackApi.cpp.o): in function `libkineto::CuptiCallbackApi::disableCallbackDomain(CUpti_CallbackDomain)':
CuptiCallbackApi.cpp:(.text+0xccd): undefined reference to `cuptiEnableDomain'
/usr/bin/ld: ../External/libtorch/lib/libkineto.a(CuptiCallbackApi.cpp.o): in function `libkineto::CuptiCallbackApi::__callback_switchboard(CUpti_CallbackDomain, unsigned int, CUpti_CallbackData const*)':
CuptiCallbackApi.cpp:(.text+0x1041): undefined reference to `cuptiFinalize'
/usr/bin/ld: CuptiCallbackApi.cpp:(.text+0x1111): undefined reference to `cuptiGetResultString'
collect2: error: ld returned 1 exit status
gmake[3]: *** [CMakeFiles/main.dir/build.make:571: main] Error 1
gmake[2]: *** [CMakeFiles/Makefile2:186: CMakeFiles/main.dir/all] Error 2
gmake[1]: *** [CMakeFiles/Makefile2:193: CMakeFiles/main.dir/rule] Error 2
gmake: *** [Makefile:125: main] Error 2

I would like to know which part I got wrong and what the proper way to fix it is. Thanks!
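For reference, the undefined cupti* symbols indicate the linker cannot find the CUPTI library that ships with the CUDA toolkit. A first check along these lines may help (a sketch only, assuming the conda-installed CUDA 11.8 toolkit from the steps above; not a confirmed fix):

# locate libcupti inside the conda environment
find "$CONDA_PREFIX" -name "libcupti*" 2>/dev/null

# if it is present (commonly under lib/ or extras/CUPTI/lib64/), expose it to the linker
# and loader, then re-run the cmake configure and build steps
export LIBRARY_PATH="$CONDA_PREFIX/lib:$LIBRARY_PATH"
export LD_LIBRARY_PATH="$CONDA_PREFIX/lib:$LD_LIBRARY_PATH"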
