
pykitti's Introduction

pykitti

This package provides a minimal set of tools for working with the KITTI dataset [1] in Python. So far only the raw datasets and odometry benchmark datasets are supported, but we're working on adding support for the others. We welcome contributions from the community.

Installation

Using pip

You can install pykitti via pip using

pip install pykitti

From source

To install the package from source, simply clone or download the repository to your machine

git clone https://github.com/utiasSTARS/pykitti.git

and run the provided setup tool

cd pykitti
python setup.py install

Assumptions

This package assumes that you have also downloaded the calibration data associated with the sequences you want to work on (these are separate files from the sequences themselves), and that the directory structure is unchanged from the original structure laid out in the KITTI zip files.

Notation

Homogeneous coordinate transformations are provided as 4x4 numpy.array objects and are denoted as T_destinationFrame_originFrame.

Pinhole camera intrinsics for camera N are provided as 3x3 numpy.array objects and are denoted as K_camN. Stereo pair baselines are given in meters as b_gray for the monochrome stereo pair (cam0 and cam1), and b_rgb for the color stereo pair (cam2 and cam3).
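
For example (a minimal sketch, assuming a raw drive has been loaded as in the Example section below), these quantities appear as attributes of the calibration named tuple:

import pykitti

data = pykitti.raw('/your/dataset/dir', '2011_09_26', '0019')

K_cam2 = data.calib.K_cam2            # 3x3 intrinsics of the left color camera
b_rgb = data.calib.b_rgb              # color stereo baseline in meters
T_cam0_velo = data.calib.T_cam0_velo  # 4x4 transform taking velodyne points to cam0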

Example

More detailed examples can be found in the demos directory, but the general idea is to specify what dataset you want to load, then access the parts you need and do something with them.

Camera and velodyne data are available via generators for easy sequential access (e.g., for visual odometry), and by indexed getter methods for random access (e.g., for deep learning). Images are loaded as PIL.Image objects using Pillow.

import numpy as np

import pykitti

basedir = '/your/dataset/dir'
date = '2011_09_26'
drive = '0019'

# The 'frames' argument is optional - default: None, which loads the whole dataset.
# Calibration, timestamps, and IMU data are read automatically. 
# Camera and velodyne data are available via properties that create generators
# when accessed, or through getter methods that provide random access.
data = pykitti.raw(basedir, date, drive, frames=range(0, 50, 5))

# dataset.calib:         Calibration data are accessible as a named tuple
# dataset.timestamps:    Timestamps are parsed into a list of datetime objects
# dataset.oxts:          List of OXTS packets and 6-dof poses as named tuples
# dataset.camN:          Returns a generator that loads individual images from camera N
# dataset.get_camN(idx): Returns the image from camera N at idx  
# dataset.gray:          Returns a generator that loads monochrome stereo pairs (cam0, cam1)
# dataset.get_gray(idx): Returns the monochrome stereo pair at idx  
# dataset.rgb:           Returns a generator that loads RGB stereo pairs (cam2, cam3)
# dataset.get_rgb(idx):  Returns the RGB stereo pair at idx  
# dataset.velo:          Returns a generator that loads velodyne scans as [x,y,z,reflectance]
# dataset.get_velo(idx): Returns the velodyne scan at idx  

point_velo = np.array([0,0,0,1])
point_cam0 = data.calib.T_cam0_velo.dot(point_velo)

point_imu = np.array([0,0,0,1])
point_w = [o.T_w_imu.dot(point_imu) for o in data.oxts]

for cam0_image in data.cam0:
    # do something
    pass

cam2_image, cam3_image = data.get_rgb(3)

OpenCV

PIL Image data can be converted to an OpenCV-friendly format using numpy and cv2.cvtColor:

import cv2
import numpy as np

img_np = np.array(img)
img_cv2 = cv2.cvtColor(img_np, cv2.COLOR_RGB2BGR)

Note: This package does not actually require that OpenCV be installed on your system, except to run demo_raw_cv2.py.

References

[1] A. Geiger, P. Lenz, C. Stiller, and R. Urtasun, "Vision meets robotics: The KITTI dataset," Int. J. Robot. Research (IJRR), vol. 32, no. 11, pp. 1231–1237, Sep. 2013. http://www.cvlibs.net/datasets/kitti/

pykitti's People

Contributors

alexandrx, ar13pit, jkelly-stars, leeclemnet, michaelnutt2, noahrjohnson

pykitti's Issues

Cannot import tracking (Not listed in __init__.py)

I wanted to import the tracking class but was unable to do so; I realized the tracking module is a relatively recent addition. Is there a reason not to include it yet? If not, it would be nice to enable it.

P.S. Thank you for releasing this toolkit, saved a lot of time and headaches. ^_^

Cheers!

A question about Velodyne Data Label

Hi, @leeclemnet

Thanks for sharing such good work!
I found that in the intel-isl/Open3D-PointNet2-Semantic3D project, the author uses your work to handle the KITTI Velodyne data. It's a segmentation task, where each point has a label indicating which class it belongs to. But I found that the velodyne point data only has 4 channels (x, y, z, reflectance). Do you have access to the ground-truth label of each point?

Thanks for your help !

Allow access to T_cam0unrect_velo

Hi,

I'd suggest making the rigid transformation from velodyne coordinates to unrectified cam0 coordinates accessible by adding
data['Tr_velo_to_cam'] = T_cam0unrect_velo
in line 142 in raw.py

Cheers,
Moritz

Erroneous angular velocity in Date: 2011_10_03 Drive: 0027 (Seq 00)

I do a simple Euler integration using the angular velocity provided in raw dataset 0027, starting from the initial ground-truth orientation. I would expect the estimated orientation to overlap with the ground truth until some timestep and then drift away. Instead I get the following orientation estimates vs. ground truth.

[figure: estimated roll/pitch/yaw vs. ground truth for drive 0027]

The roll and pitch estimates are incorrect: they make huge jumps, and after each jump the estimate's shape is consistent with the ground truth but sits far above or below it because of the jumps. When I use the same algorithm on other drives, I don't get such wild estimates at all. I get figures like these:

[figures: orientation estimates vs. ground truth for other drives]

Anyway, to check whether anything was wrong with my Euler-integration-based orientation estimate, I also computed the true angular velocity by differencing consecutive ground-truth orientations (roll, pitch and yaw) and dividing by the elapsed time. The result is extremely different from the angular velocity readings collected from the OXTS, and there is no bias or scale term that could correct this behavior, because the difference isn't of a form that allows for it.

Have you ever encountered similar behavior in dataset 0027 (Seq 00)? I think the angular velocity readings are wrong.

Mistake in Documentation?!

Hello,

Is this a mistake?
Shouldn't it be # Load the rigid transformation from IMU to velodyne.
LMK if I understood it wrong.

Thanks.

Random access to the image data

Since the data streams in the Odometry class are all generators, doesn't that make random access slow?
For example, if I want to get the 50th point cloud and do next(iter(itertools.islice(odometry.velo, 49, None))) as in the example, it will read the first 49 point clouds and only then yield the 50th.
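
For reference, a minimal sketch of the indexed getters that avoid this, with a hypothetical dataset path, and assuming the odometry loader exposes the same get_* methods used in the demos (get_velo, get_gray, etc.):

import pykitti

dataset = pykitti.odometry('/your/dataset/dir', '00')

# Reads only the 50th files from disk instead of decoding the 49 before them
velo_50 = dataset.get_velo(49)
gray_50 = dataset.get_gray(49)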

Using numpy.fromfile might be platform-dependent?

According to the official documentation, numpy.fromfile assumes a platform-dependent binary format and hence should not be used to transfer data between machines with different architectures.

The load_velo_scan is currently defined as:

def load_velo_scan(file):
    """Load and parse a velodyne binary file."""
    scan = np.fromfile(file, dtype=np.float32)
    return scan.reshape((-1, 4))

I am concerned about using the function to load data from the velodyne bin files provided by KITTI. Is this approach guaranteed to work regardless of the CPU architecture on which the code is being run?
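
A minimal sketch of a more defensive loader, assuming the KITTI .bin files are always written as little-endian float32: pinning the byte order in the dtype makes the read independent of the host's native endianness.

import numpy as np

def load_velo_scan(file):
    """Load and parse a velodyne binary file, forcing little-endian float32."""
    scan = np.fromfile(file, dtype=np.dtype('<f4'))
    # Convert to the native byte order for fast downstream computation
    return scan.reshape((-1, 4)).astype(np.float32)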

How do I transfer the pose data in the kitti raw data packet to the pose data in the kitti odometry data packet?

How do I convert the pose data in the KITTI raw dataset to the pose data in the KITTI odometry dataset? If the conversion is correct, the output should be consistent with the ground truth in the KITTI odometry dataset. My formula is as follows:
T_cam0_to_world = kitti.oxts.T_w_imu.dot(inv(kitti.calib.T_cam0_imu))
But the result is not ideal. Can you point out my mistakes, or provide a tool or some code to convert the raw pose data into the form of the ground truth in the KITTI odometry dataset?
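
A rough sketch of one way to line the two up, under the assumption that the odometry ground truth is the pose of cam0 expressed relative to the cam0 frame of the first image; data is a pykitti.raw dataset loaded as in the README. Since the raw OXTS poses come straight from GPS/IMU, the result will be close to, but not identical to, the released odometry ground truth.

import numpy as np
from numpy.linalg import inv

# Pose of cam0 in the world frame for every raw frame
T_w_cam0 = [o.T_w_imu.dot(inv(data.calib.T_cam0_imu)) for o in data.oxts]

# Re-express every pose relative to the first frame, odometry-benchmark style
T_0_w = inv(T_w_cam0[0])
poses_odometry_style = [T_0_w.dot(T) for T in T_w_cam0]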

A Question About Velodyne Data

Hi, can you help me with a question?
I've been working with velodyne data for a while, and KITTI is a good dataset since it has 3D tracklets so I don't have to label things myself, but I'm running into a problem.
Previously in my project I used raw points from the velodyne, i.e. points with distance and reflectivity that come in firing order. It seems the velodyne data in the KITTI dataset is not stored in firing order, nor is it possible to convert the (x, y, z) coordinates directly back to the raw data. In fact, the number of points in each file is not even a multiple of 64. Can you suggest how I should convert the KITTI velodyne data so that I can put the points in firing order, or at least group the 64 points with the same rotation together?
Many thanks.

Unable to run demo_odometry.py on Jetson TX2 (Ubuntu 16.04). python-matplotlib was installed with sudo apt-get install python-matplotlib

Error is as shown:
nvidia@tegra-ubuntu:~/kitti/pykitti/demos$ python demo_odometry.py
Traceback (most recent call last):
  File "demo_odometry.py", line 5, in <module>
    from mpl_toolkits.mplot3d import Axes3D
  File "/usr/local/lib/python2.7/dist-packages/mpl_toolkits/mplot3d/__init__.py", line 6, in <module>
    from .axes3d import Axes3D
  File "/usr/local/lib/python2.7/dist-packages/mpl_toolkits/mplot3d/axes3d.py", line 32, in <module>
    from matplotlib.cbook import _backports
ImportError: cannot import name _backports

Kindly help me resolve this issue.

Thanks

Evaluate BEV detections

I'm using the KITTI cpp code (evaluate_object_3d_offline.cpp) to evaluate my bird's-eye view bounding box detections. The code filters the detections into easy, moderate and hard by the height of the detected object's bounding box in image space (i.e. pixels). I don't have image detections at all, so all my detections are being ignored (lines 447-452). I'm experimenting solely on BEV data. Any idea how to evaluate ONLY BEV detections (without any corresponding image detections)? Is there a reference somewhere for this?

demo_odometry.py "TypeError: list object is not an iterator"

After modifying the basedir variable in demo_odometry.py, running the script gives me an error

Traceback (most recent call last):
  File "demo_odometry.py", line 34, in <module>
    first_gray = next(dataset.gray)
TypeError: list object is not an iterator

It seems that Line 34

first_gray = next(dataset.gray)

should be replaced with

first_gray = next(iter(dataset.gray))

Same with the several lines below it.

Tr_Cam_to_Road

I'm looking for the script that generated the Tr_cam_to_road matrix in the calibration files used in the training and test data sets, or an explanation of how this matrix is formed.

download seq13

Thank you :)
I just want to download one sequence of the dataset, such as seq 13, but I get the failure below. Does demo_odometry.py need any changes?

  File "pykitti-master/demos/demo_odometry.py", line 32, in <module>
    first_gray = next(iter(dataset.gray))
StopIteration

The demo lines in question are:
first_gray = next(iter(dataset.gray))
first_cam1 = next(iter(dataset.cam1))
first_rgb = dataset.get_rgb(0)
first_cam2 = dataset.get_cam2(0)
third_velo = dataset.get_velo(2)

Projection from pointcloud on image plane

Hello. I tried to use [x, y, z, 1] = T_cam2_velo * [x_velo, y_velo, z_velo, 1] and [u, v, 1] = K_cam2 * [x/z, y/z, 1] to calculate the coordinates of the point cloud on the image plane of camera 2, using the .bin point-cloud files from the raw data. The range of values for u and v in my result is extremely large (10^-8 < u < 10^8, 10^-7 < v < 10^7), which looks really weird. Is that a normal range of u, v values for the KITTI point clouds, or did I use the wrong equations?
Thanks for your help.
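
For what it's worth, a sketch of the projection with one extra step that is easy to miss: points behind the camera (z <= 0) must be discarded before the perspective divide, otherwise they wrap around and produce exactly the huge u, v ranges described above. Here data is a pykitti.raw dataset loaded as in the README example.

import numpy as np

velo = data.get_velo(0)                               # (N, 4): x, y, z, reflectance
velo_h = np.hstack([velo[:, :3], np.ones((velo.shape[0], 1))])

cam2 = data.calib.T_cam2_velo.dot(velo_h.T)           # (4, N) in cam2 coordinates
cam2 = cam2[:, cam2[2, :] > 0]                        # keep only points in front of the camera

uv = data.calib.K_cam2.dot(cam2[:3, :] / cam2[2, :])  # perspective divide
u, v = uv[0, :], uv[1, :]                             # pixel coordinates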

Erroneous trajectory of egocentric vehicle

Using the code in the README for converting IMU to world coordinates, I plot the trajectory of the egocentric vehicle in the 2D (xy) plane, and the trajectory doesn't seem to be correct in some of the drives.

For instance, in the City category for drive 93, I get this plot, which doesn't look right at all:

[figure: incorrect ego trajectory]

after plotting the point_w array below

point_imu = np.array([0,0,0,1])
point_w = [o.T_w_imu.dot(point_imu) for o in data.oxts]

Whereas the same code for drive 15 in the Road category gives the correct trajectory of the egocentric vehicle below:

[figure: correct ego trajectory]

Are the GPS/IMU measurements noisy, and are x and y in meters here?

How to convert velodyne coordinates to world coordinates?

Hello,

How does one convert from Velodyne to world coordinates using GPS/IMU?

As a first step,
I was able to go from Velodyne to the IMU coordinate system (which is still in the car frame).

Now using these coordinates, how do I get to the world coordinates?
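
A minimal sketch, assuming the raw calibration named tuple also exposes T_velo_imu (the IMU-to-velodyne transform, following the T_destinationFrame_originFrame convention): invert it to go velodyne -> IMU, then apply the per-frame T_w_imu pose.

import numpy as np
from numpy.linalg import inv

point_velo = np.array([0.0, 0.0, 0.0, 1.0])

T_imu_velo = inv(data.calib.T_velo_imu)   # velodyne -> IMU
points_w = [o.T_w_imu.dot(T_imu_velo).dot(point_velo) for o in data.oxts]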

pose_from_oxts_packet method

Hello, thanks for your implementation !

I have been looking at your code to get a pose vector from gps coordinates here : https://github.com/utiasSTARS/pykitti/blob/master/pykitti/utils.py#L63

At first the Mercator projection made sense, but I wonder how far off it can become compared to real XYZ data, i.e. Earth-Centered, Earth-Fixed (ECEF) x-y-z values.

I'd say that if you are not outrageously close to a pole it should not be a problem, especially because RTK data is at best 1 cm precise.

So converting GPS to ECEF and then back to a rotated centered system is clearly overkill.

What I don't understand, then, is why use a function as complicated as the Mercator projection. If we compute the derivative of the function used for ty, we get a 1/cos(lat) factor that cancels the scale applied in the first place, so the Taylor expansion is the same as simply using ty = er * (lat - lat_0) (where lat_0 is the latitude of the first frame).

In addition, just using ty = er * (lat - lat_0) might seem more closely related to the actual position, because the induced error is the same whether we go north or south, which is not the case for Mercator, which expands lengths going north and compresses them going south.

The main point remains that on the German roads in KITTI we don't have north-pole problems, so why not use a simpler function for ty?
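
To make the comparison concrete, here is a small numerical sketch. The Mercator formula below is a paraphrase of the one in pykitti.utils, and lat0 = 49 degrees is a made-up reference latitude roughly matching the KITTI recordings. Near the reference latitude the two northing formulas agree closely, and the relative difference grows only on the order of tan(lat0) * (lat - lat0).

import numpy as np

er = 6378137.0                        # Earth radius in meters, as used by pykitti
lat0 = 49.0                           # hypothetical reference latitude (first frame)
scale = np.cos(lat0 * np.pi / 180.0)

def ty_mercator(lat):
    return scale * er * np.log(np.tan((90.0 + lat) * np.pi / 360.0))

def ty_linear(lat):
    return er * (lat - lat0) * np.pi / 180.0

for dlat in (0.001, 0.01, 0.05):      # roughly 0.1 km to 5.5 km of northing
    lat = lat0 + dlat
    print(dlat, ty_mercator(lat) - ty_mercator(lat0), ty_linear(lat))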

Missing Velodyne calibration data: "KeyError: 'Tr'" (and the solution)

This is just a minor issue which confused me a little at first; it could be nice to mention in, e.g., the README.
Namely, if one simply downloads the KITTI odometry dataset (e.g., just http://www.cvlibs.net/download.php?file=data_odometry_gray.zip), then the sequence folders will already have calibration files in them, but these files will not have the velodyne-to-rectified-camera-coords information.

One also needs to download the separate calibration data (http://www.cvlibs.net/download.php?file=data_odometry_calib.zip) archive, and overwrite all old calib.txt files (which only contain P0-P3 matrices) with the new ones which also contain the Tr key, providing the velodyne-to-camera transform matrices.

I was getting a few KeyError: 'Tr' before I figured this out. Again, this isn't a major issue, but I just thought I'd point it out in case anyone else was confused.

issue in utils.read_calib_file

Hi,
The read_calib_file() cannot read the calib files in the KITTI object data correctly, since there is a trailing "\n" at the end of each line.
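
A minimal sketch of a more tolerant parser (not the pykitti implementation, just an illustration): stripping each line before splitting removes the trailing newline, and non-numeric values are simply skipped.

def read_calib_file(filepath):
    """Read a KITTI calibration file into a dict of float lists (sketch)."""
    data = {}
    with open(filepath, 'r') as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            key, value = line.split(':', 1)
            try:
                data[key] = [float(x) for x in value.split()]
            except ValueError:
                pass  # skip non-numeric entries such as calibration dates
    return data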

Difference between wz and wu in oxts file

In the dataformat.txt file for oxts,
wz: angular rate around z (rad/s)
wu: angular rate around upward axis (rad/s)

What is the difference between z and upward axis?

I need the yaw angular velocity of the ego vehicle.

Error between camera relative poses for odometry Vs raw labels

@leeclemnet Hi there. I'm using the code below to compute camera relative poses within sequence 00, for the odometry and raw labels respectively:

odometry

    # Transformation from world coordinates to camera 2 (colour) in frame i
    T_i = torch.matmul(self.T_cam0tocam2, torch.inverse(torch.FloatTensor(self.dataset.poses[frame_cami])))

    # Extract rotation and translation from world to camera frame i
    R_i = torch.FloatTensor(T_i)[0:3, 0:3]
    t_i = torch.FloatTensor(T_i)[0:3, 3].unsqueeze(-1)

    # Transformation from world coordinates to camera 2 (colour) in frame j
    T_j = torch.matmul(self.T_cam0tocam2, torch.inverse(torch.FloatTensor(self.dataset.poses[frame_camj])))

    # Extract rotation and translation from world to camera frame j
    R_j = torch.FloatTensor(T_j)[0:3, 0:3]
    t_j = torch.FloatTensor(T_j)[0:3, 3].unsqueeze(-1)

    # Relative pose: transformation from frame i to frame j coordinates
    R_itoj = torch.matmul(R_j, tp(R_i, -2, -1))
    t_itoj = t_j - torch.matmul(R_itoj, t_i)

raw

    # Transformation from world coordinates to camera 2 (colour) in frame i
    T_i = torch.matmul(torch.FloatTensor(self.dataset.calib.T_cam2_imu), torch.inverse(torch.FloatTensor(self.dataset.oxts[frame_cami].T_w_imu)))

    # Extract rotation and translation from world to camera frame i
    R_i = torch.FloatTensor(T_i)[0:3, 0:3]
    t_i = torch.FloatTensor(T_i)[0:3, 3].unsqueeze(-1)

    # Transformation from world coordinates to camera 2 (colour) in frame j
    T_j = torch.matmul(torch.FloatTensor(self.dataset.calib.T_cam2_imu), torch.inverse(torch.FloatTensor(self.dataset.oxts[frame_camj].T_w_imu)))

    # Extract rotation and translation from world to camera frame j
    R_j = torch.FloatTensor(T_j)[0:3, 0:3]
    t_j = torch.FloatTensor(T_j)[0:3, 3].unsqueeze(-1)

    # Relative pose: transformation from frame i to frame j coordinates
    R_itoj = torch.matmul(R_j, tp(R_i, -2, -1))
    t_itoj = t_j - torch.matmul(R_itoj, t_i)

There seems to be an error between these methods. For example, for the first X relative poses the error looks like this, where the x-axis corresponds to each pair (i.e. frames 0-->1, 0-->2, ..., 0-->10, 1-->2, 1-->3, ..., 1-->11, 2-->3, etc.):

[figure: relative pose error, raw vs. odometry]

Any idea why this might be? Did the odometry dataset do further processing on the GPS/IMU raw measurements? Thanks!

point projection gives different results comparing to kitti depth dataset

In my project I have to downsample the KITTI velodyne lidar data to 16 lines (the KITTI depth dataset provides the original 64-line projected PNG files for cam02 and cam03), so I need to:
(1) downsample the 64-line bin lidar data to 16 lines (already done);
(2) use the calib parameters to project the points from the bin files onto the 2D images of cam02 and cam03.
However, I ran into a problem in step 2: the points in my projected PNGs look slightly shifted. To test my projection procedure, I projected the original 64-line lidar data onto the cam02 image and merged my projected image with the KITTI-provided projected image for cam02 (the dataset image is used as the green and blue channels, and my own projected image as the red channel).
Most points land at the same pixel location, but points at the far right and far left of the image are shifted from the correct location (especially points on objects close to the camera; correctly projected points show up in white in the picture below).

[screenshot: merged projection; correctly aligned points appear white]

My code is here: [screenshot: projection code]

Can't use the "dataset.get_velo(idx)" method from pykitti

Hi, how do you set up the folder structure for a dataset downloaded from the KITTI website? When I try to load a velodyne point cloud using dataset.get_velo(0), I get this error:

Traceback (most recent call last):
  File "D:/Documents/Python Projects/kitti velodyne/kitti5.py", line 26, in <module>
    print(data.get_cam0(0))
  File "C:\Users\Welly\Anaconda3\envs\py36\lib\site-packages\pykitti\raw.py", line 48, in get_cam0
    return utils.load_image(self.cam0_files[idx], mode='L')
IndexError: list index out of range

my configuration is like this:
basedir = 'data'
date = '2011_09_26'
drive = '0005'

Calibration script for kitti dataset

Hello,

Is there any script available to do the calibration using the KITTI calibration dataset?

OpenCV chessboard detection detects only one chessboard, whereas the KITTI calibration images contain multiple chessboards in a single image. Any help on this topic?

demo_odometry.py KeyError: 'Tr' error

After running demo_odometry.py, I get this error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python2.7/dist-packages/spyder/utils/site/sitecustomize.py", line 866, in runfile
    execfile(filename, namespace)
  File "/usr/local/lib/python2.7/dist-packages/spyder/utils/site/sitecustomize.py", line 94, in execfile
    builtins.execfile(filename, *where)
  File "/home/acs/Documents/python_projects/DOKTORA/Monocular Odometry/demos/demo_odometry.py", line 23, in <module>
    dataset = pykitti.odometry(basedir, sequence, frames=range(0, 20, 5))
  File "build/bdist.linux-x86_64/egg/pykitti/odometry.py", line 31, in __init__
  File "build/bdist.linux-x86_64/egg/pykitti/odometry.py", line 156, in _load_calib
KeyError: 'Tr'

My setup:

xubuntu: Ubuntu 16.04.2 LTS
Editor: Spyder
Python 2.7

Add fileformat kwarg to pykitti.raw()

Hi,
I'd suggest adding a fileformat option to the call to pykitti.raw(), such that .jpg files can be loaded as well.

I've hacked it by adding
self.fileformat = kwargs.get('fileformat', 'png')
to the __init__ and replacing '.png' with
'.{}'.format(self.fileformat)
in the generators.

This way other fileformats such as 'jpg' should work as well.

Cheers,
Moritz

Estimating pose and warping gives incorrect results

Hi,
I'm trying to warp frame1 to the view of frame2. When we do that, all static objects should overlap perfectly (only moving objects will not). I'm estimating depth using OpenCV from the stereo images. Incorrect depth estimation can introduce some error, but overall the warped image should look similar to frame2.

I'm using this code to estimate the poses of frame1 and frame2. When I warp using these, the warped image is completely black. When I looked at the relative transformation matrix, the translation values were far too large. Any idea how I can compute the relative transformation between frame1 and frame2?

I tried scaling the translation values down by a factor of 10000. In this case, the warped image is not completely black and there is some transformation. However, the transformation is completely different from what is observed, which suggests that the rotation matrix may also be incorrect. Or am I using them incorrectly?

next(dataset.cam0) doesn't work

It always returns the first frame's data no matter how many times I call it.

Which version of python does it require? Python 2 or 3? I use Python3.

ValueError: invalid literal for int() with base 10: '4.00000000000000'

/usr/local/lib/python3.5/dist-packages/pykitti/raw.py in load_oxts(self)
    200                     # Last five entries are flags and counts
    201                     line[:-5] = [float(x) for x in line[:-5]]
--> 202                     line[-5:] = [int(x) for x in line[-5:]]
    203 
    204                     data = OxtsPacket(*line)

This file 2011_09_30/2011_09_30_drive_0018_sync/oxts/data/0000001513.txt has the last 5 fields encoded as floats:

49.05107247768027 8.39634710035407 112.97637430827200 0.02133146666667 0.02549146666667 -3.12968378038470 -0.02869999635360 -6.11906527162626 6.12277726787053 0.04792340039509 -0.01607654044456 -1.69708811845858 0.40287935775269 9.71601390782342 -1.45137920409012 0.20791130147662 9.76489679132369 0.02139399051063 0.00560646235189 0.02384900619543 0.02202315097739 0.00481946270660 0.02341972805644 0.01891981911463 0.01199306884014 4.00000000000000 10.00000000000000 -1.00000000000000 -1.00000000000000 -1.00000000000000.
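
A minimal workaround sketch, assuming the last five flag/count fields are always integer-valued even when written with a decimal point: parse them through float first.

# In load_oxts, instead of int(x) directly:
line[-5:] = [int(float(x)) for x in line[-5:]]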

Creation of T_camX_velo all use R_rect_00

This is what is instructed in the paper, but this seems like a typo?
I'm projecting the pointcloud onto the image plane using T_cam2_velo, and I'm getting slightly better results when using R_rect_20.

Finding pixel coordinates from projected values on images

Sorry if this isn't the best place to ask and if I am missing something, but I was wondering what the values of a Velodyne point mean once it has been projected onto the images; in particular, how do I translate them into pixel coordinates for each camera?

So when calling point_cam0 = data.calib.T_cam0_velo.dot(point_velo), I get float values that I'm not sure how to interpret, since I don't know their range or where the image centre is. In particular, how do I get the actual pixel coordinates on each image from those values?

raw time stamp parsing error

I don't think the raw time stamps are being parsed correctly in raw.py. When I try this in Python 3.7 on macOS, I get the same result as when I ignore the fractional seconds:

>>> s = '2011-09-28 12:26:18.089952593'
>>> time.mktime(datetime.datetime.strptime(s[:-4], "%Y-%m-%d %I:%M:%S.%f").timetuple())
1317191178.0
>>> time.mktime(datetime.datetime.strptime(s[:-10], "%Y-%m-%d %I:%M:%S").timetuple())
1317191178.0

They are the same!

However, if I just turn the fractional part into a float and add it, I get (I think) the correct answer:

>>> time.mktime(datetime.datetime.strptime(s[:-10], "%Y-%m-%d %I:%M:%S").timetuple()) + float(s[-11:])
1317191186.0899527

Looking at the time stamp, I think this (with the fractional part) is correct.
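
For reference, a sketch that keeps the fractional seconds without going through timetuple()/mktime(), which drops them: %f handles at most microseconds, so the nanosecond digits are truncated. Note also that s[-11:] in the snippet above picks up one digit too many (the 8 of the seconds field), which is why the result ends in ...86 rather than ...78; s[-10:] gives the intended fraction.

import datetime

s = '2011-09-28 12:26:18.089952593'

# Drop the last three (nanosecond) digits so %f can parse the fraction
t = datetime.datetime.strptime(s[:-3], '%Y-%m-%d %H:%M:%S.%f')
print(t.timestamp())   # fractional seconds preserved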

Intrinsic matrix

Hi, thanks for the code! When we use the Matlab devkit to read the KITTI camera calibration file, we can use (x - cx)/fx to transform x from image coordinates to a 3D point in the camera frame. When we use pykitti, since row and column indices start at 0 in Python instead of 1 as in Matlab, should we use (x - cx + 1)/fx?
