
nuscenes-devkit's Introduction

nuScenes™ devkit

Welcome to the nuTonomy® devkit for the nuScenes and nuImages driverless vehicle datasets.

Overview

Changelog

  • Sep. 25, 2023: Devkit v1.1.11: Specify version for various pip requirements.
  • Feb. 13, 2023: Devkit v1.1.10: Specify version for various pip requirements.
  • Sep. 20, 2021: Devkit v1.1.9: Refactor tracking eval code for custom datasets with different classes.
  • Sep. 17, 2021: Devkit v1.1.8: Add PAT metric to Panoptic nuScenes.
  • Aug. 23, 2021: Devkit v1.1.7: Add more panoptic tracking metrics to Panoptic nuScenes code.
  • Jul. 29, 2021: Devkit v1.1.6: Panoptic nuScenes v1.0 code, NeurIPS challenge announcement.
  • Apr. 5, 2021: Devkit v1.1.3: Bug fixes and pip requirements.
  • Nov. 23, 2020: Devkit v1.1.2: Release map-expansion v1.3 with lidar basemap.
  • Nov. 9, 2020: Devkit v1.1.1: Lidarseg evaluation code, NeurIPS challenge announcement.
  • Aug. 31, 2020: Devkit v1.1.0: nuImages v1.0 and nuScenes-lidarseg v1.0 code release.
  • Jul. 7, 2020: Devkit v1.0.9: Misc updates on map and prediction code.
  • Apr. 30, 2020: nuImages v0.1 code release.
  • Apr. 1, 2020: Devkit v1.0.8: Relax pip requirements and reorganize prediction code.
  • Mar. 24, 2020: Devkit v1.0.7: nuScenes prediction challenge code released.
  • Feb. 12, 2020: Devkit v1.0.6: CAN bus expansion released.
  • Dec. 11, 2019: Devkit v1.0.5: Remove weight factor from AMOTA tracking metrics.
  • Nov. 1, 2019: Tracking eval code released and detection eval code reorganized.
  • Jul. 1, 2019: Map expansion released.
  • Apr. 30, 2019: Devkit v1.0.1: Loosen pip requirements, refine detection challenge, add script to export 2D annotations.
  • Mar. 26, 2019: Full dataset, paper, & devkit v1.0.0 released. Support dropped for teaser data.
  • Dec. 20, 2018: Initial evaluation code released. Devkit folders restructured, which breaks backward compatibility.
  • Nov. 21, 2018: RADAR filtering and multi sweep aggregation.
  • Oct. 4, 2018: Code to parse RADAR data released.
  • Sep. 12, 2018: Devkit for teaser dataset released.

Devkit setup

We use a common devkit for nuScenes and nuImages. The devkit is tested for Python 3.6 and Python 3.7. To install Python, please check here.

Our devkit is available on PyPI and can be installed via pip:

pip install nuscenes-devkit

For an advanced installation, see installation for detailed instructions.

nuImages

nuImages is a stand-alone large-scale image dataset. It uses the same sensor setup as the 3D nuScenes dataset. The structure is similar to nuScenes and both use the same devkit, which makes the installation process simple.

nuImages setup

To download nuImages you need to go to the Download page, create an account and agree to the nuScenes Terms of Use. For the devkit to work you will need to download at least the metadata and samples; the sweeps are optional. Please unpack the archives to the /data/sets/nuimages folder *without* overwriting folders that occur in multiple archives. Eventually you should have the following folder structure:

/data/sets/nuimages
    samples	-	Sensor data for keyframes (annotated images).
    sweeps  -   Sensor data for intermediate frames (unannotated images).
    v1.0-*	-	JSON tables that include all the meta data and annotations. Each split (train, val, test, mini) is provided in a separate folder.

If you want to use another folder, specify the dataroot parameter of the NuImages class (see tutorial).
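A minimal sketch of pointing the devkit at a custom folder (the version and dataroot below are assumptions; adjust them to your local setup):

    from nuimages import NuImages

    # Use the dataroot parameter to point at a non-default folder.
    nuim = NuImages(version='v1.0-mini', dataroot='/data/sets/nuimages', verbose=True, lazy=True)
    print(len(nuim.sample))  # number of annotated keyframes in this split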

Getting started with nuImages

Please follow these steps to make yourself familiar with the nuImages dataset:

jupyter notebook $HOME/nuscenes-devkit/python-sdk/tutorials/nuimages_tutorial.ipynb

nuScenes

nuScenes setup

To download nuScenes you need to go to the Download page, create an account and agree to the nuScenes Terms of Use. After logging in you will see multiple archives. For the devkit to work you will need to download all archives. Please unpack the archives to the /data/sets/nuscenes folder *without* overwriting folders that occur in multiple archives. Eventually you should have the following folder structure:

/data/sets/nuscenes
    samples	-	Sensor data for keyframes.
    sweeps	-	Sensor data for intermediate frames.
    maps	-	Folder for all map files: rasterized .png images and vectorized .json files.
    v1.0-*	-	JSON tables that include all the meta data and annotations. Each split (trainval, test, mini) is provided in a separate folder.

If you want to use another folder, specify the dataroot parameter of the NuScenes class (see tutorial).
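A minimal sketch of pointing the devkit at a custom folder (the version and dataroot below are assumptions; use the split you downloaded, e.g. v1.0-mini or v1.0-trainval):

    from nuscenes.nuscenes import NuScenes

    # Use the dataroot parameter to point at a non-default folder.
    nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes', verbose=True)
    nusc.list_scenes()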

Panoptic nuScenes

In August 2021 we published Panoptic nuScenes which contains the panoptic labels of the point clouds for the approximately 40,000 keyframes in nuScenes. To install Panoptic nuScenes, please follow these steps:

  • Download the dataset from the Download page.
  • Extract the panoptic and v1.0-* folders to your nuScenes root directory (e.g. /data/sets/nuscenes/panoptic, /data/sets/nuscenes/v1.0-*).
  • Get the latest version of the nuscenes-devkit.
  • Get started with the tutorial.

nuScenes-lidarseg

In August 2020 we published nuScenes-lidarseg which contains the semantic labels of the point clouds for the approximately 40,000 keyframes in nuScenes. To install nuScenes-lidarseg, please follow these steps:

  • Download the dataset from the Download page.
  • Extract the lidarseg and v1.0-* folders to your nuScenes root directory (e.g. /data/sets/nuscenes/lidarseg, /data/sets/nuscenes/v1.0-*).
  • Get the latest version of the nuscenes-devkit.
  • If you already have a previous version of the devkit, update the pip requirements (see details): pip install -r setup/requirements.txt
  • Get started with the tutorial.

Prediction challenge

In March 2020 we released code for the nuScenes prediction challenge. To get started:

  • Download version 1.2 of the map expansion (see below).
  • Download the trajectory sets for CoverNet from here.
  • Go through the prediction tutorial.
  • For information on how submissions will be scored, visit the challenge website.
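As a rough sketch of how the prediction API can be queried once the dataset and map expansion are in place (the tokens below are hypothetical placeholders, and the exact calls should be cross-checked against the prediction tutorial):

    from nuscenes.nuscenes import NuScenes
    from nuscenes.prediction import PredictHelper

    nusc = NuScenes(version='v1.0-mini', dataroot='/data/sets/nuscenes')
    helper = PredictHelper(nusc)

    # instance_token and sample_token are hypothetical placeholders.
    instance_token, sample_token = '<instance_token>', '<sample_token>'
    future = helper.get_future_for_agent(instance_token, sample_token,
                                         seconds=6, in_agent_frame=True)
    print(future.shape)  # array of future xy positions of the agent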

CAN bus expansion

In February 2020 we published the CAN bus expansion. It contains low-level vehicle data about the vehicle route, IMU, pose, steering angle feedback, battery, brakes, gear position, signals, wheel speeds, throttle, torque, solar sensors, odometry and more. To install this expansion, please follow these steps:

  • Download the expansion from the Download page.
  • Extract the can_bus folder to your nuScenes root directory (e.g. /data/sets/nuscenes/can_bus).
  • Get the latest version of the nuscenes-devkit.
  • If you already have a previous version of the devkit, update the pip requirements (see details): pip install -r setup/requirements.txt
  • Get started with the CAN bus readme or tutorial.

Map expansion

In July 2019 we published a map expansion with 11 semantic layers (crosswalk, sidewalk, traffic lights, stop lines, lanes, etc.). To install this expansion, please follow these steps:

  • Download the expansion from the Download page.
  • Extract the contents (folders basemap, expansion and prediction) to your nuScenes maps folder.
  • Get the latest version of the nuscenes-devkit.
  • If you already have a previous version of the devkit, update the pip requirements (see details): pip install -r setup/requirements.txt
  • Get started with the map expansion tutorial. For more information, see the map versions below.

Map versions

Here we give a brief overview of the different map versions:

  • v1.3: Add BitMap class that supports new lidar basemap and legacy semantic prior map. Remove one broken lane.
  • v1.2: Expand devkit and maps to include arcline paths and lane connectivity for the prediction challenge.
  • v1.1: Resolved issues with ego poses being off the drivable surface.
  • v1.0: Initial map expansion release from July 2019. Supports 11 semantic layers.
  • nuScenes v1.0: Came with a bitmap for the semantic prior. All code is contained in nuscenes.py.

Getting started with nuScenes

Please follow these steps to make yourself familiar with the nuScenes dataset:

jupyter notebook $HOME/nuscenes-devkit/python-sdk/tutorials/nuscenes_tutorial.ipynb

Known issues

Great care has been taken to collate the nuScenes dataset and many users have praised the quality of the data and annotations. However, some minor issues remain:

Maps:

  • For singapore-hollandvillage and singapore-queenstown the traffic light 3d poses are all 0 (except for tz).
  • For boston-seaport, the ego poses of 3 scenes (499, 515, 517) are slightly incorrect and 2 scenes (501, 502) are outside the annotated area.
  • For singapore-onenorth, the ego poses of about 10 scenes were off the drivable surface. This has been resolved in map v1.1.
  • Some lanes are disconnected from the rest of the lanes. We chose to keep these as they still provide valuable information.

Annotations:

  • A small number of 3D bounding boxes are annotated even though the object is temporarily occluded. For this reason we filter out objects without lidar or radar points in the nuScenes benchmarks. See issue 366.
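For reference, a minimal sketch of such filtering using the num_lidar_pts and num_radar_pts fields of the sample_annotation table (nusc below is assumed to be an initialized NuScenes instance):

    # Keep only annotations that contain at least one lidar or radar point.
    sample = nusc.sample[0]
    kept = []
    for ann_token in sample['anns']:
        ann = nusc.get('sample_annotation', ann_token)
        if ann['num_lidar_pts'] + ann['num_radar_pts'] > 0:
            kept.append(ann)
    print('%d of %d annotations kept' % (len(kept), len(sample['anns'])))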

Citation

Please use the following citation when referencing nuScenes or nuImages:

@article{nuscenes2019,
  title={nuScenes: A multimodal dataset for autonomous driving},
  author={Holger Caesar and Varun Bankiti and Alex H. Lang and Sourabh Vora and 
          Venice Erin Liong and Qiang Xu and Anush Krishnan and Yu Pan and 
          Giancarlo Baldan and Oscar Beijbom},
  journal={arXiv preprint arXiv:1903.11027},
  year={2019}
}

Please use the following citation when referencing Panoptic nuScenes or nuScenes-lidarseg:

@article{fong2021panoptic,
  title={Panoptic nuScenes: A Large-Scale Benchmark for LiDAR Panoptic Segmentation and Tracking},
  author={Fong, Whye Kit and Mohan, Rohit and Hurtado, Juana Valeria and Zhou, Lubing and Caesar, Holger and
          Beijbom, Oscar and Valada, Abhinav},
  journal={arXiv preprint arXiv:2109.03805},
  year={2021}
}

nuscenes-devkit's People

Contributors

alex-nutonomy, cdicle-motional, charmve, chris-li-nutonomy, derduher, dh-nutonomy, emwolff, ericwiener, eskjorg, freddyaboulton, gdippolito, gnsjhenjie, holger-motional, hongyisun, jean-lucas, kiwoo-nutonomy, ktro2828, lubing-motional, marcso, mengnutonomy, michael-hoss, mohammed-deifallah, mohan1914, mzahran001, oscar-nutonomy, qiang-xu, ruoning-ng, sourabh-nutonomy, sunilchomal, whyekit-motional


nuscenes-devkit's Issues

NameError: name 'Dict' is not defined

Traceback (most recent call last):
3.7.2 (tags/v3.7.2:9a3ffc0492, Dec 23 2018, 22:20:52) [MSC v.1916 32 bit (Intel)]
  File "play.py", line 1, in <module>
    from nuscenes.nuscenes import NuScenes
  File "A:\Sync\work\nuscenes-devkit\python-sdk\nuscenes\nuscenes.py", line 30, in <module>
    from nuscenes.utils.data_classes import LidarPointCloud, RadarPointCloud, Box
  File "A:\Sync\work\nuscenes-devkit\python-sdk\nuscenes\utils\data_classes.py", line 20, in <module>
    class PointCloud(ABC):
  File "A:\Sync\work\nuscenes-devkit\python-sdk\nuscenes\utils\data_classes.py", line 60, in PointCloud
    min_distance: float=1.0) -> Tuple[self, np.ndarray]:
NameError: name 'Dict' is not defined

Typo? should be dict?
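This is not a typo in the annotation itself; the error usually means the name was not imported from the typing module. A minimal sketch of the likely fix:

    # At the top of data_classes.py, import the typing names used in the annotations.
    from typing import Dict, Tuple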

linewidth datatype changed in def render_cv2

The definition that works:

    def render_cv2(self, im: np.ndarray, view: np.ndarray=np.eye(3), normalize: bool=False,
                   colors: Tuple=((0, 0, 255), (255, 0, 0), (155, 155, 155)), linewidth: int=2) -> None:

Untar files

Hello,

When I untar the files v1.0-trainval*_blobs.tgz on Ubuntu 18.04.1 LTS, I get the following error:

sweeps/RADAR_FRONT/n015-2018-07-27-11-36-48+0800__RADAR_FRONT__1532663153790733.pcd

gzip: stdin: invalid compressed data--crc error
sweeps/RADAR_FRONT/n008-2018-07-26-12-13-50-0400__RADAR_FRONT__1532622141160534.pcd
sweeps/RADAR_FRONT/n015-2018-07-27-11-36-48+0800__RADAR_FRONT__1532663021491223.pcd
.v1.0-trainval02_blobs.txt
tar: Child returned status 1
tar: Error is not recoverable: exiting now

The command I run is tar xvzf <filename>

Thank you very much

Distance to other vehicles

What is the easiest way to get the distance between the ego vehicle and other vehicles in the scenes?
Is it just the xyz field of the object box that we get from nusc.get_sample_data?
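One hedged sketch: both the annotation boxes and the ego pose are given in the global frame, so the distance can be computed from the sample_annotation translation and the ego_pose of the keyframe's lidar sample_data (nusc below is assumed to be an initialized NuScenes instance):

    import numpy as np

    sample = nusc.sample[0]
    lidar_sd = nusc.get('sample_data', sample['data']['LIDAR_TOP'])
    ego_pose = nusc.get('ego_pose', lidar_sd['ego_pose_token'])
    ego_xyz = np.array(ego_pose['translation'])

    for ann_token in sample['anns']:
        ann = nusc.get('sample_annotation', ann_token)
        dist = np.linalg.norm(np.array(ann['translation']) - ego_xyz)
        print('%s: %.1f m' % (ann['category_name'], dist))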

Issue with default data in example code

Hello,

I run into an error when I run
python examples/export_pointclouds_as_obj.py with the default options.

FileNotFoundError: [Errno 2] No such file or directory: 'nuScenes/sweeps/LIDAR_TOP/n015-2018-07-24-11-22-45+0800__LIDAR_TOP__1532402927747489.pcd.bin'

The path is correct, but when I check the downloaded dataset, I cannot find the file n015-2018-07-24-11-22-45+0800__LIDAR_TOP__1532402927747489.pcd.bin. The closest one I could find is n015-2018-07-24-11-22-45+0800__LIDAR_TOP__1532402927797806.pcd.bin. Are there any files missing? Just want to be sure before I delve further into the code to figure this out.

Calibrated_sensors location

1. It is written that the translation and rotation parameters are given with respect to the ego vehicle body frame. What is the [0, 0, 0] location? I thought it was the center of the 3D bounding box (the same reference point as the translation values for the ego_pose and other objects), but the numbers don't add up.
2. Is the translation of the sensors written as [l, w, h]?
3. The rotation of the ego_pose and of other objects is relative to which axis? Are they all expressed in the same coordinate system?
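For reference, a minimal sketch of how the calibrated_sensor record is typically applied: its rotation (quaternion) and translation (in meters) map points from the sensor frame into the ego vehicle frame (nusc and sd_token below are assumptions):

    import numpy as np
    from pyquaternion import Quaternion

    # sd_token is a placeholder for a sample_data token of the sensor in question.
    sd = nusc.get('sample_data', sd_token)
    cs = nusc.get('calibrated_sensor', sd['calibrated_sensor_token'])

    # Map a point from the sensor frame into the ego vehicle frame.
    point_sensor = np.array([1.0, 0.0, 0.0])
    point_ego = Quaternion(cs['rotation']).rotation_matrix @ point_sensor + np.array(cs['translation'])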

Terrain models

Does the dataset use any terrain models? It seems that for each sample the ground always has zero elevation: after transforming lidar points to the global frame, the ones on the ground have a height of 0. If I place an object on the ground in the global frame, how can I get the right elevation so that it can be correctly projected onto images?

Same route, different time/weather?

Hi,

Is there an overview of the scenes (beyond nusc.list_scenes) that would enable the user to know which scenes correspond to the same route, just at a different time or under different weather conditions? I see the "singapore-onenorth" field for example, but there are multiple routes that the car took in scenes under this description.

(Based on your familiarity with the dataset, if you have a couple of scenes you already know correspond to the same route under different weather, that would be great!)

tp_metrics crashes for classes without attributes

eval_utils.attr_acc() will return np.nan for classes without attributes (e.g. barrier).

This will eventually lead to assert np.nanmin(metric_vals) >= 0 failing when tp_metrics() is run for the barrier class.

Thus, the nuscenes_eval script will always crash, I believe. I haven't checked the code thoroughly, but I think that is the case.

Trouble for downloading dataset

Hi, I want to download the full nuScenes dataset, but the download speed is too slow.
After starting the downloads in Chrome (on Windows), the speed shows 0 KB/s (never more than 50 KB/s).

This is strange, so I want to ask whether there is a solution for it.
Is the Great Firewall blocking it?

Thanks.

Setting visibility threshold for bounding boxes

What is the parameter for setting the visibility threshold for the bounding boxes? By default it is set to BoxVisibility.ANY; how can I change this to only keep bounding boxes whose visibility is greater than 70%?

Thanks in advance.
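To the best of our knowledge there are two related knobs: the box_vis_level argument of get_sample_data controls whether boxes must lie fully or partially inside the image, while the visibility_token of each sample_annotation encodes how much of the object is visible across all cameras. A hedged sketch (nusc and cam_token are assumptions):

    from nuscenes.utils.geometry_utils import BoxVisibility

    # cam_token is a placeholder for a camera sample_data token.
    data_path, boxes, cam_intrinsic = nusc.get_sample_data(cam_token, box_vis_level=BoxVisibility.ALL)

    # Alternatively, filter annotations by their visibility bin
    # (token '4' corresponds to the 80-100% visibility level).
    sample = nusc.sample[0]
    anns = [nusc.get('sample_annotation', t) for t in sample['anns']]
    highly_visible = [a for a in anns if a['visibility_token'] == '4']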

Setting up nu-scenes on windows

Hi,
I need some help setting up nuScenes on Windows (I have been stuck for several weeks):
I have installed python 3.7 through anaconda prompt
I have installed the packages from the requirements.txt
Completed the data download (meta, point_samples, point_sweep, image_samples, image_sweep, dev-kit)

From this point, how do I proceed further?
I have tried to work out a solution by going through the existing reported issues, but no luck so far.
I would appreciate it a lot if somebody could help.
I am a beginner in Python.

name 'PointCloud' is not defined

Using python 3.7.2

3.7.2 (tags/v3.7.2:9a3ffc0492, Dec 23 2018, 22:20:52) [MSC v.1916 32 bit (Intel)]
Traceback (most recent call last):
  File "play.py", line 1, in <module>
    from nuscenes.nuscenes import NuScenes
  File "A:\Sync\work\nuscenes-devkit\python-sdk\nuscenes\nuscenes.py", line 30, in <module>
    from nuscenes.utils.data_classes import LidarPointCloud, RadarPointCloud, Box
  File "A:\Sync\work\nuscenes-devkit\python-sdk\nuscenes\utils\data_classes.py", line 20, in <module>
    class PointCloud(ABC):
  File "A:\Sync\work\nuscenes-devkit\python-sdk\nuscenes\utils\data_classes.py", line 47, in PointCloud
    def from_file(cls, file_name: str) -> PointCloud:
NameError: name 'PointCloud' is not defined
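This is a forward-reference problem rather than a Python-version problem: the class name is used in a type annotation inside its own body, before the class object exists. A minimal sketch of the usual fixes (quote the name, or defer annotation evaluation on Python 3.7+):

    from __future__ import annotations  # Python 3.7+: postpone evaluation of annotations
    from abc import ABC

    class PointCloud(ABC):
        @classmethod
        def from_file(cls, file_name: str) -> 'PointCloud':  # quoting also works on older versions
            ...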

Handling of invalid Lidar points

Hi,
I am trying to apply a spherical projection to the data, but I am not quite sure how invalid lidar points are handled in your dataset. By invalid I mean rays that never returned to the sensor due to absorption, diffusion, etc.
Have they just been left out of the point clouds or is it still possible to detect those points?

Thanks in advance!

Radars and Camera Specs

Hi,

Could you please share the following specification:
1. Radar vertical and horizontal resolution and FOV
2. Camera FOV

Best Regards,

Issues with the Jupyter Notebook

I am trying to run the tutorial.ipynb notebook. I am able to set up virtualenvwrapper and virtualenv easily, and the notebook opens, but I get an error whenever I try to run the first cell: AssertionError: Database version not found: /data/nuscenes/v0.1

Please help me out.

Trouble downloading data

After making an account to download the data I keep getting an error. On a Linux machine (Ubuntu 16.04) I get the error "This XML file does not appear to have any style information associated with it" and the file will not download. On a Windows machine I can download the data, but when I extract it using WinRAR I get just the name of the tar file with no file type extension. Reading your documentation, I would expect it to extract into a folder, but that doesn't seem to be the case. Let me know what I need to do to fix the issue. Thank you.

Poor python skills

I cannot run it under Python 3; this syntax is really not Pythonic:

    def __init__(self, version: str='v0.1', dataroot: str='/data/nuscenes', verbose: bool=True):

A minor issue for virtual environment setup

If the nuScenes-devkit is the first virtual environment a user creates, the procedure needs a small tweak. Before the "Create the virtual environment" step, $PATH needs to be updated to include the path of [VIRTUAL_ENV_LOCATION]. I guess "source ~/.profile" will work; if so, adding a "source ~/.profile" line to .bashrc in the previous step "Install virtualenvwrapper" will do. But I used a dumb reboot to resolve the "error: virtualenvwrapper could not find virtualenv in your path" that I encountered by following the original steps.

Identities of train and val scenes

The Download page says the trainval split consists of: "850 scenes, 700 train, 150 val." I am assuming this distinction is intended to standardize the training and validation sets. How is each scene identified as belonging to either train or val? I can't find these annotations or descriptions. Thanks!
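Recent devkit versions expose the official split as lists of scene names; a hedged sketch using nuscenes.utils.splits (the exact module layout may differ between devkit versions):

    from nuscenes.utils.splits import create_splits_scenes

    splits = create_splits_scenes()
    print(len(splits['train']), len(splits['val']))  # 700 train and 150 val scene names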

Issue with example code

I have downloaded the NuScenes dataset to visualize the radar point cloud.
I am trying to execute the code given on GitHub:

import os
import os.path as osp
import argparse
from typing import Tuple

import numpy as np
from PIL import Image
from pyquaternion import Quaternion
from tqdm import tqdm

from nuscenes_utils.data_classes import PointCloud
from nuscenes_utils.geometry_utils import view_points
from nuscenes_utils.nuscenes import NuScenes, NuScenesExplorer


def export_scene_pointcloud(explorer: NuScenesExplorer, out_path: str, scene_token: str, channel: str='LIDAR_TOP',
                            min_dist: float=3.0, max_dist: float=30.0, verbose: bool=True) -> None:
    """
    Export fused point clouds of a scene to a Wavefront OBJ file.
    This point-cloud can be viewed in your favorite 3D rendering tool, e.g. Meshlab or Maya.
    :param explorer: NuScenesExplorer instance.
    :param out_path: Output path to write the point-cloud to.
    :param scene_token: Unique identifier of scene to render.
    :param channel: Channel to render.
    :param min_dist: Minimum distance to ego vehicle below which points are dropped.
    :param max_dist: Maximum distance to ego vehicle above which points are dropped.
    :param verbose: Whether to print messages to stdout.
    :return: <None>
    """

    # Check inputs.
    valid_channels = ['LIDAR_TOP', 'RADAR_FRONT', 'RADAR_FRONT_RIGHT', 'RADAR_FRONT_LEFT', 'RADAR_BACK_LEFT',
                      'RADAR_BACK_RIGHT']
    camera_channels = ['CAM_FRONT_LEFT', 'CAM_FRONT', 'CAM_FRONT_RIGHT', 'CAM_BACK_LEFT', 'CAM_BACK', 'CAM_BACK_RIGHT']
    assert channel in valid_channels, 'Input channel {} not valid.'.format(channel)

    # Get records from DB.
    scene_rec = explorer.nusc.get('scene', scene_token)
    start_sample_rec = explorer.nusc.get('sample', scene_rec['first_sample_token'])
    sd_rec = explorer.nusc.get('sample_data', start_sample_rec['data'][channel])

    # Make list of frames
    cur_sd_rec = sd_rec
    sd_tokens = []
    while cur_sd_rec['next'] != '':
        cur_sd_rec = explorer.nusc.get('sample_data', cur_sd_rec['next'])
        sd_tokens.append(cur_sd_rec['token'])

    # Write point-cloud.
    with open(out_path, 'w') as f:
        f.write("OBJ File:\n")

        for sd_token in tqdm(sd_tokens):
            if verbose:
                print('Processing {}'.format(sd_rec['filename']))
            sc_rec = explorer.nusc.get('sample_data', sd_token)
            sample_rec = explorer.nusc.get('sample', sc_rec['sample_token'])
            lidar_token = sd_rec['token']
            lidar_rec = explorer.nusc.get('sample_data', lidar_token)
            pc = PointCloud.from_file(osp.join(explorer.nusc.dataroot, lidar_rec['filename']))

            # Get point cloud colors.
            coloring = np.ones((3, pc.points.shape[1])) * -1
            for channel in camera_channels:
                camera_token = sample_rec['data'][channel]
                cam_coloring, cam_mask = pointcloud_color_from_image(nusc, lidar_token, camera_token)
                coloring[:, cam_mask] = cam_coloring

            # Points live in their own reference frame. So they need to be transformed via global to the image plane.
            # First step: transform the point cloud to the ego vehicle frame for the timestamp of the sweep.
            cs_record = explorer.nusc.get('calibrated_sensor', lidar_rec['calibrated_sensor_token'])
            pc.rotate(Quaternion(cs_record['rotation']).rotation_matrix)
            pc.translate(np.array(cs_record['translation']))

            # Optional Filter by distance to remove the ego vehicle.
            dists_origin = np.sqrt(np.sum(pc.points[:3, :] ** 2, axis=0))
            keep = np.logical_and(min_dist <= dists_origin, dists_origin <= max_dist)
            pc.points = pc.points[:, keep]
            coloring = coloring[:, keep]
            if verbose:
                print('Distance filter: Keeping %d of %d points...' % (keep.sum(), len(keep)))

            # Second step: transform to the global frame.
            poserecord = explorer.nusc.get('ego_pose', lidar_rec['ego_pose_token'])
            pc.rotate(Quaternion(poserecord['rotation']).rotation_matrix)
            pc.translate(np.array(poserecord['translation']))

            # Write points to file
            for (v, c) in zip(pc.points.transpose(), coloring.transpose()):
                if (c == -1).any():
                    # Ignore points without a color.
                    pass
                else:
                    f.write("v {v[0]:.8f} {v[1]:.8f} {v[2]:.8f} {c[0]:.4f} {c[1]:.4f} {c[2]:.4f}\n".format(v=v, c=c/255.0))

            if not sd_rec['next'] == "":
                sd_rec = explorer.nusc.get('sample_data', sd_rec['next'])


def pointcloud_color_from_image(nusc, pointsensor_token: str, camera_token: str) -> Tuple[np.array, np.array]:
    """
    Given a point sensor (lidar/radar) token and camera sample_data token, load point-cloud and map it to the image
    plane, then retrieve the colors of the closest image pixels.
    :param pointsensor_token: Lidar/radar sample_data token.
    :param camera_token: Camera sample data token.
    :return (coloring <np.float: 3, n>, mask <np.bool: m>). Returns the colors for n points that reproject into the
        image out of m total points. The mask indicates which points are selected.
    """

    cam = nusc.get('sample_data', camera_token)
    pointsensor = nusc.get('sample_data', pointsensor_token)

    pc = PointCloud.from_file(osp.join(nusc.dataroot, pointsensor['filename']))
    im = Image.open(osp.join(nusc.dataroot, cam['filename']))

    # Points live in the point sensor frame. So they need to be transformed via global to the image plane.
    # First step: transform the point-cloud to the ego vehicle frame for the timestamp of the sweep.
    cs_record = nusc.get('calibrated_sensor', pointsensor['calibrated_sensor_token'])
    pc.rotate(Quaternion(cs_record['rotation']).rotation_matrix)
    pc.translate(np.array(cs_record['translation']))

    # Second step: transform to the global frame.
    poserecord = nusc.get('ego_pose', pointsensor['ego_pose_token'])
    pc.rotate(Quaternion(poserecord['rotation']).rotation_matrix)
    pc.translate(np.array(poserecord['translation']))

    # Third step: transform into the ego vehicle frame for the timestamp of the image.
    poserecord = nusc.get('ego_pose', cam['ego_pose_token'])
    pc.translate(-np.array(poserecord['translation']))
    pc.rotate(Quaternion(poserecord['rotation']).rotation_matrix.T)

    # Fourth step: transform into the camera.
    cs_record = nusc.get('calibrated_sensor', cam['calibrated_sensor_token'])
    pc.translate(-np.array(cs_record['translation']))
    pc.rotate(Quaternion(cs_record['rotation']).rotation_matrix.T)

    # Fifth step: actually take a "picture" of the point cloud.
    # Grab the depths (camera frame z axis points away from the camera).
    depths = pc.points[2, :]

    # Take the actual picture (matrix multiplication with camera-matrix + renormalization).
    points = view_points(pc.points[:3, :], np.array(cs_record['camera_intrinsic']), normalize=True)

    # Remove points that are either outside or behind the camera. Leave a margin of 1 pixel for aesthetic reasons.
    mask = np.ones(depths.shape[0], dtype=bool)
    mask = np.logical_and(mask, depths > 0)
    mask = np.logical_and(mask, points[0, :] > 1)
    mask = np.logical_and(mask, points[0, :] < im.size[0] - 1)
    mask = np.logical_and(mask, points[1, :] > 1)
    mask = np.logical_and(mask, points[1, :] < im.size[1] - 1)
    points = points[:, mask]

    # Pick the colors of the points
    im_data = np.array(im)
    coloring = np.zeros(points.shape)
    for i, p in enumerate(points.transpose()):
        point = p[:2].round().astype(np.int32)
        coloring[:, i] = im_data[point[1], point[0], :]

    return coloring, mask


if __name__ == '__main__':
    # Read input parameters
    parser = argparse.ArgumentParser(description='Export a scene in Wavefront point cloud format.',
                                     formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    parser.add_argument('--scene', default='scene-0061', type=str, help='Name of a scene, e.g. scene-0061')
    parser.add_argument('--out_dir', default='', type=str, help='Output folder')
    parser.add_argument('--verbose', default=0, type=int, help='Whether to print outputs to stdout')
    args = parser.parse_args()
    out_dir = args.out_dir
    scene_name = args.scene
    verbose = bool(args.verbose)

    out_path = osp.join(out_dir, '%s.obj' % scene_name)
    if osp.exists(out_path):
        print('=> File {} already exists. Aborting.'.format(out_path))
        exit()
    else:
        print('=> Extracting scene {} to {}'.format(scene_name, out_path))

    # Create output folder
    if not out_dir == '' and not osp.isdir(out_dir):
        os.makedirs(out_dir)

    # Extract point-cloud for the specified scene
    nusc = NuScenes()
    scene_tokens = [s['token'] for s in nusc.scene if s['name'] == scene_name]
    assert len(scene_tokens) == 1, 'Error: Invalid scene %s' % scene_name

    export_scene_pointcloud(nusc.explorer, out_path, scene_tokens[0], channel='LIDAR_TOP', verbose=verbose)

But when I execute it an error comes up:

File "export_pointclouds_as_obj.py", line 16
def export_scene_pointcloud(explorer: NuScenesExplorer, out_path: str, scene_token: str, channel: str='RADAR_FRONT',
^
SyntaxError: invalid syntax

Why does this error appear? Thank you.

not support python3.6 ?

When I use your tutorial code for testing, the following error is raised:

---> 47 def from_file(cls, file_name: str) -> PointCloud:
48 """
49 Loads point cloud from disk.

NameError: name 'PointCloud' is not defined

Please help me!

Running nuscenes_eval without any TPs gives uninformative error message

I want to try the nuscenes_eval.py script with fake results. I.e. there might be 0 TPs in the results file.

In this case the assert np.nanmin(metric_vals) >= 0 fails, since metric_vals contains only np.nan. Took me a while to understand why.

How about adding an error message to the assert statement?
Or should the case of 0 TPs be handled in another way?

IMU measurements - Velocity, Acceleration

Do we have access to IMU measurements such as the velocity and acceleration of the ego vehicle? Or do we obtain them by differentiating the vehicle's translation with respect to time?
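Since the CAN bus expansion (February 2020), low-level vehicle data including IMU measurements is available; a hedged sketch of reading it (the message names and fields should be verified against the CAN bus readme):

    from nuscenes.can_bus.can_bus_api import NuScenesCanBus

    # The dataroot is an assumption; the can_bus folder must have been extracted there.
    nusc_can = NuScenesCanBus(dataroot='/data/sets/nuscenes')

    # 'ms_imu' messages contain linear acceleration and rotation rate,
    # while 'pose' messages contain the ego pose and velocity.
    imu_msgs = nusc_can.get_messages('scene-0001', 'ms_imu')
    pose_msgs = nusc_can.get_messages('scene-0001', 'pose')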

can not render radar point

The description of the function "render_pointcloud_in_image" says that the RADAR or LIDAR channel is supported, but it cannot render the radar points yet.

construct coordinate frames parallel to gravity

I would like to represent points in a coordinate system with a z-axis parallel to the direction of gravity. From the diagram here, all coordinate frames attached to the ego vehicle have z-axes parallel to the local ground plane (the legend says "downward from ground" and "upward from ground", which leads me to believe that each axis is normal to the local ground plane that contains the points where the four wheels touch the ground). How can I rotate these frames to be parallel with the direction of gravity? Relatedly, how is the global coordinate frame constructed? Are there any guarantees for its axes, possibly related to the direction of gravity?

Orientation angle of the bounding box with respect to the camera

Hello,

For every annotation, the object Box returns the orientation of the bounding box in degrees:

    filename, boxes, kk = nusc.get_sample_data(token_sd, box_vis_level=0)
    for box in boxes:
        angle = box.orientation.degrees

Can you please clarify what the angle refers to? I expected the angle to be the orientation in the XZ plane of the bounding box with respect to the camera. However, people facing the camera sometimes have positive angles, sometimes negative angles.

In general, how can I obtain the orientation of the bounding box with respect to the camera?

Thank you very much
Lorenzo

2D bounding box

Hello,
Do you provide a 2D bounding box of the detected objects (xy coordinates in pixels)?
I saw that view_points in geometry_utils.py allows one to project the corners of the 3D bounding box into the image plane. However, projecting those corners (e.g. all 8 corners or only the frontal ones) does not exactly match a 2D bounding box, depending on the position of the instance in the image.

Thank you
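A common hedged workaround is to project all 8 corners of each 3D box into the image with view_points and take the axis-aligned min/max, optionally clipping to the image bounds; the devkit also ships a script to export 2D annotations (see the v1.0.1 changelog entry). A sketch (nusc and cam_token are assumptions):

    import numpy as np
    from nuscenes.utils.geometry_utils import view_points

    # cam_token is a placeholder for a camera sample_data token.
    _, boxes, cam_intrinsic = nusc.get_sample_data(cam_token)

    for box in boxes:
        corners = view_points(box.corners(), cam_intrinsic, normalize=True)[:2, :]  # 2 x 8
        x_min, y_min = corners.min(axis=1)
        x_max, y_max = corners.max(axis=1)
        print(box.name, x_min, y_min, x_max, y_max)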

Devkit with the mini dataset?

Hello,

I'd like to play around with some of the scenes of the newly released dataset, but I see on the devkit readme: "For the devkit to work you will need to download all archives"; does that mean we have to download the whole 250GB of data for the provided notebook to work?

I've seen the rendered notebook online, which is great, but I'd like to run some things on my laptop. Is there a way to just work with the mini version?

Repeated samples

Hi,

There are a few repeated samples. The images and annotations are the same (occasionally additional annotations).

e.g. sample 343 and sample 369 (the camera keys point to the same filenames).

This issue extends from samples 343-357, which are the same as samples 369-383.

Was this intentional? If so, why?

Thank you.

How are samples created?

The documentation doesn't discuss how samples are created. The nusc object contains 3,977 samples. Assuming they are taken at 2Hz (the same frequency as the labels, as mentioned in the dataset overview), the total number of seconds in the sample records, as computed in a modified list_scenes function, should be equivalent to 3,977 / 2. However, I got total length: 1931.2375695705414 -> 3862 != 3977. Here is the modified list_scenes() function I used to compute the length. The 13 second overlap as mentioned in #8 is not enough to account for the discrepancy. How are the extra samples created?

    def list_scenes(self) -> None:
        """ Lists all scenes with some meta data. """

        def ann_count(record):
            count = 0
            sample = self.nusc.get('sample', record['first_sample_token'])
            while not sample['next'] == "":
                count += len(sample['anns'])
                sample = self.nusc.get('sample', sample['next'])
            return count

        recs = [(self.nusc.get('sample', record['first_sample_token'])['timestamp'], record) for record in
                self.nusc.scene]

        total_length = 0
        for start_time, record in sorted(recs):
            start_time = self.nusc.get('sample', record['first_sample_token'])['timestamp'] / 1000000
            length_time = self.nusc.get('sample', record['last_sample_token'])['timestamp'] / 1000000 - start_time
            location = self.nusc.get('log', record['log_token'])['location']
            total_length += length_time
            desc = record['name'] + ', ' + record['description']
            if len(desc) > 55:
                desc = desc[:51] + "..."

            print('{:16} [{}] {:4.0f}s, {}, #anns:{}'.format(
                desc, datetime.utcfromtimestamp(start_time).strftime('%y-%m-%d %H:%M:%S'),
                length_time, location, ann_count(record)))
        print("total length: {}".format(total_length))

Numpy version

Is numpy==1.14.5 necessary? Some other dependencies of mine require numpy>=1.16.1. It would be convenient to be able to install both in the same environment without conflict. How about, for example, using numpy==1.16 in nuScenes?

One hack that seems to work right now is:

  1. pip install nuscenes-devkit
  2. pip install --upgrade numpy
  3. pip install other-package

Homography matrix from ground plane to camera plane

Hello,

I want to compute the homography matrix from the ground plane to the camera plane. To do so, I use the following matrix:
H = K * T * R * A
where K is the intrinsic matrix, T the translation matrix, R the rotation matrix and A the 3d to 2d projection matrix.
I got wrong results until I tweaked the values, setting the focal length to a third of its value and setting the translation to values that do not correspond to the ones given. I know I am doing something wrong, because your projection works perfectly fine when projecting the lidar onto the camera plane.

Can you give me an idea of what could possibly be wrong with my approach?
Many thanks.
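For reference, a minimal numpy sketch of the standard ground-plane homography, assuming a world frame whose z = 0 plane is the ground and a camera pose given as a rotation R (world to camera) and translation t; none of the variables below come from the devkit directly, and the frame conventions must be matched to calibrated_sensor/ego_pose:

    import numpy as np

    # Assumed inputs (not taken from the devkit):
    #   K: 3x3 camera intrinsic matrix
    #   R: 3x3 rotation mapping world coordinates into the camera frame
    #   t: translation of the world origin expressed in the camera frame (3,)
    # For the plane z = 0, a world point (X, Y, 0, 1) maps to the image as
    # x_img ~ K [r1 r2 t] [X Y 1]^T, so the homography uses the first two columns of R and t.
    def ground_plane_homography(K, R, t):
        H = K @ np.column_stack((R[:, 0], R[:, 1], np.asarray(t).reshape(3)))
        return H / H[2, 2]  # normalize so the bottom-right entry is 1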

How do I read pcd.bin with pcl?

Hi,
I know this might be a noob question, but I am trying to read the point cloud of a lidar .pcd.bin file using the PCL library in C++. Since the extension is .pcd.bin, loadPCDFile() does not work. I have already tried PCDReader, but the same error occurs:

[pcl::PCDReader::readHeader] No points to read

How can I read the binary point clouds using PCL in C++?
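The .pcd.bin files are not standard PCD files but raw float32 arrays, so PCL's PCD reader cannot parse them. A hedged sketch of reading one directly (shown in Python, mirroring what LidarPointCloud.from_file does; a PCL cloud can then be filled from the resulting array):

    import numpy as np

    # Lidar .pcd.bin files store float32 values, 5 per point: x, y, z, intensity, ring index.
    points = np.fromfile('/data/sets/nuscenes/samples/LIDAR_TOP/<file>.pcd.bin',
                         dtype=np.float32).reshape(-1, 5)
    xyz = points[:, :3]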

Getting the 3D bonding box points value

I was just playing around with the notebook and was wondering how we can print the actual 3D bounding box points. Is there any way of getting these points?
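A hedged sketch: the Box objects returned by get_sample_data expose a corners() method with the 8 corner coordinates, so they can simply be printed (nusc and sd_token below are assumptions):

    # sd_token is a placeholder for a sample_data token.
    _, boxes, _ = nusc.get_sample_data(sd_token)
    for box in boxes:
        print(box.name)
        print(box.corners())  # 3 x 8 array of the box corner coordinates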

Camera distortion parameters

Hello,

Is there a place where we can get the distortion parameters (k1, k2, p1, p2) for the cameras? I found the calibration parameters in the calibrated_sensor table, but the distortion parameters don't seem to be there. Thanks!

Suggest using conda instead of virtualenv

I strongly suggest using Anaconda instead of virtualenv, because virtualenv gives me lots of errors between Python 3.7 and the rest of the system (Ubuntu 14.04).
