
argoverse-forecasting's People

Contributors

jagjeet-singh


argoverse-forecasting's Issues

Visualization

Hi, thanks for your great work.

How should I set the "--viz_seq_id" parameter when I visualise the trajectories?

Thank you!
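For anyone else hitting this, a guess at how to gather candidate sequence IDs from the features pickle; this assumes --viz_seq_id accepts a pickle file containing a list of sequence IDs, so check the argparse definition in eval_forecasting_helper.py to confirm:

import pickle
import pandas as pd

# Hypothetical helper: pull a few sequence IDs out of the computed features
# and dump them to a pickle that --viz_seq_id can point at.
df = pd.read_pickle("features/forecasting_features_val.pkl")
seq_ids = df["SEQUENCE"].head(10).tolist()
with open("viz_seq_ids.pkl", "wb") as f:
    pickle.dump(seq_ids, f)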

function help

Thanks for sharing. I have a question about the following function:

def sort_lanes_based_on_point_in_polygon_score(
        self,
        lane_seqs: List[List[int]],
        xy_seq: np.ndarray,
        city_name: str,
        avm: ArgoverseMap,
) -> Tuple[List[List[int]], List[float]]:
    """Sort lane_seqs by the number of trajectory coordinates inside the bounding polygon of the lanes.

    Args:
        lane_seqs: Sequence of lane sequences
        xy_seq: Trajectory coordinates
        city_name: City name (PITT/MIA)
        avm: Argoverse map_api instance
    Returns:
        sorted_lane_seqs: Lane sequences sorted by their point_in_polygon score
        sorted_scores: The corresponding scores, in the same order

    """
    point_in_polygon_scores = []
    for lane_seq in lane_seqs:
        point_in_polygon_scores.append(
            self.get_point_in_polygon_score(lane_seq, xy_seq, city_name,
                                            avm))
    randomized_tiebreaker = np.random.random(len(point_in_polygon_scores))

    sorted_point_in_polygon_scores_idx = np.lexsort(
        (randomized_tiebreaker, np.array(point_in_polygon_scores)))[::-1]
    sorted_lane_seqs = [
        lane_seqs[i] for i in sorted_point_in_polygon_scores_idx
    ]
    sorted_scores = [
        point_in_polygon_scores[i]
        for i in sorted_point_in_polygon_scores_idx
    ]
    return sorted_lane_seqs, sorted_scores

Why is randomized_tiebreaker used in this function?
Does it sort by point_in_polygon_scores first and then break ties with the random value?
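For reference, a minimal sketch of np.lexsort semantics: the last key in the tuple is the primary sort key, so the sort is by score, and the random values only break ties between equal scores ([::-1] then flips the order to descending):

import numpy as np

scores = np.array([3, 1, 3, 2])
tiebreak = np.random.random(len(scores))

# lexsort sorts by the LAST key first, so scores dominate; the random values
# only decide the relative order of the two entries that share score 3.
order = np.lexsort((tiebreak, scores))[::-1]
print(order)  # e.g. [0 2 3 1] or [2 0 3 1]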

Loading model fail

Hi, after training, when I load the model (e.g. LSTM_rollout30.pth.tar) it fails with: 'Missing key(s) in state_dict: "linear1.weight", "linear1.bias", "lstm1.weight_ih", "lstm1.weight_hh", "lstm1.bias_ih", "lstm1.bias_hh".'

Any ideas how to solve it? Thanks.
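One common cause of this error (an assumption here, since the training setup isn't shown) is a checkpoint saved from a model wrapped in torch.nn.DataParallel: every key then carries a "module." prefix that a bare model does not expect. A minimal sketch of stripping the prefix before loading, where encoder is your instantiated encoder model and "encoder_state_dict" is a hypothetical checkpoint key to be checked against your saved file:

import torch

checkpoint = torch.load("LSTM_rollout30.pth.tar", map_location="cpu")
state_dict = checkpoint.get("encoder_state_dict", checkpoint)

# nn.DataParallel stores parameters as "module.linear1.weight" etc.;
# strip the prefix so a bare (non-DataParallel) model accepts them.
clean_state_dict = {k.replace("module.", "", 1): v for k, v in state_dict.items()}
encoder.load_state_dict(clean_state_dict)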

cannot visualize trajectory

Hi, when I run the visualization Python script I get this error:

File "argoverse-api/argoverse/utils/manhattan_search.py", line 90, in find_all_polygon_bboxes_overlapping_query_bbox
overlaps_left = (query_min_x <= bboxes_x2) & (bboxes_x2 <= query_max_x)
ValueError: operands could not be broadcast together with shapes (2,) (4952,)

Is this an error in the argoverse-api, or am I using the wrong command:
python eval_forecasting_helper.py --viz --gt trajectory_predict/const_vel_traj_predict.pkl --forecast trajectory_predict/lstm_social_traj.pkl --horizon 30 --obs_len 20 --features features_pre/forecasting_features_test.pkl
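For context, a guess at the cause: find_all_polygon_bboxes_overlapping_query_bbox compares the corners of the query bbox against arrays covering all lane bboxes (4952 of them here), and it expects the query as a flat array of four numbers. An operand of shape (2,) suggests a bare (x, y) point, or a 2x2 array, reached the function instead. A sketch of the expected shape (example coordinates are made up):

import numpy as np

# Expected query format: [min_x, min_y, max_x, max_y], shape (4,)
query_bbox = np.array([2590.0, 1200.0, 2610.0, 1220.0])

# Passing a point like np.array([2600.0, 1210.0]) (shape (2,)) makes the
# unpacked query_min_x an array and triggers the broadcast error above.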

How can I find the ground truth data?

I ran this command:
python eval_forecasting_helper.py --metrics --gt forecasting_features_val.pkl --forecast result/cont_vel_predict.pkl --horizon 30 --obs_len 20 --miss_threshold 2 --features forecasting_features_test.pkl --max_n_guesses 6

But I got this error. I think forecasting_features_val.pkl is not in the same format as the ground truth data; could you please release the ground truth data for evaluation?

[error screenshot]

data compression

Can the dataset be compressed by the Douglas-Peucker algorithm?
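In principle the trajectories are plain 2D polylines, so Douglas-Peucker applies; note, though, that the baselines assume the fixed 10 Hz sampling, so simplification would help for storage rather than as direct model input. A self-contained sketch (a hypothetical helper, not part of this repo):

import numpy as np

def douglas_peucker(points: np.ndarray, epsilon: float) -> np.ndarray:
    """Recursively simplify a 2D polyline, keeping points farther than
    epsilon (same units as the trajectory) from the start-end chord."""
    start, end = points[0], points[-1]
    chord = end - start
    norm = np.linalg.norm(chord)
    if norm == 0.0:
        dists = np.linalg.norm(points - start, axis=1)
    else:
        # Perpendicular distance of every point to the chord (2D cross product).
        dists = np.abs(np.cross(chord, points - start)) / norm
    idx = int(np.argmax(dists))
    if dists[idx] > epsilon:
        left = douglas_peucker(points[: idx + 1], epsilon)
        right = douglas_peucker(points[idx:], epsilon)
        return np.vstack([left[:-1], right])  # drop the duplicated split point
    return np.vstack([start, end])

traj = np.array([[0.0, 0.0], [1.0, 0.05], [2.0, 0.0], [3.0, 1.0]])
print(douglas_peucker(traj, epsilon=0.1))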

Map based LSTM

Hi guys!

These baselines are really wonderful for getting started with motion forecasting. In my particular case, I am trying to include map information in the LSTM in order to compute multimodal predictions directed towards plausible lanes (computed using map_utils.py). However, I have not seen in the train function that the input to the encoder is influenced by the lanes:

[screenshot: encoder input in the train function]

But, on the other hand, in the infer_map function, you specify some CANDIDATE_CENTERLINES and modify the input of the encoder:

[screenshot: encoder input in infer_map with CANDIDATE_CENTERLINES]

Do you use these map lanes only during inference or also while training?

About downloading Argoverse forecasting and feature computation

1. The link to the Argoverse forecasting data in the suggested repository is missing; can you provide the correct link?
2. In the feature computation step, the link to the precomputed features is missing; can you provide the correct link?
Moreover, in
$ python compute_features.py --data_dir <path/to/data> --feature_dir <directory/where/features/to/be/saved> --mode <train/val/test> --obs_len 20 --pred_len 30
what should --data_dir <path/to/data> point to?

[K-Nearest Neighbors] evaluation problem.

I trained with this command (K-Nearest Neighbors, using map prior):

python nn_train_test.py --train_features /media/han/E46A4C3C6A4C0E2C/hsi_ws/argoverse-forecasting/forecasting_features/forecasting_features_train.pkl --val_features /media/han/E46A4C3C6A4C0E2C/hsi_ws/argoverse-forecasting/forecasting_features/forecasting_features_val.pkl --test_features /media/han/E46A4C3C6A4C0E2C/hsi_ws/argoverse-forecasting/forecasting_features/forecasting_features_test.pkl --use_map --use_delta --obs_len 20 --pred_len 30 --n_neigh 3 --model_path /media/han/E46A4C3C6A4C0E2C/hsi_ws/argoverse-forecasting/model_path/knn_model_map.pkl --traj_save_path /media/han/E46A4C3C6A4C0E2C/hsi_ws/argoverse-forecasting/forecasted_trajectories/knn_map.pkl

and then I tried evaluating the K-NN baseline that uses the map for pruning, allowing 6 guesses, with the command below:

python eval_forecasting_helper.py --metrics --gt /media/han/E46A4C3C6A4C0E2C/hsi_ws/argoverse-forecasting/ground_truth_data/ground_truth_val.pkl --forecast /media/han/E46A4C3C6A4C0E2C/hsi_ws/argoverse-forecasting/forecasted_trajectories/knn_map.pkl --horizon 30 --obs_len 20 --features /media/han/E46A4C3C6A4C0E2C/hsi_ws/argoverse-forecasting/forecasting_features/forecasting_features_val.pkl --prune_n_guesses 6

Then I got this error:

[error screenshot]

Would you help me?

Ground truth pkl file for val & train

I was able to find the raw ground truth, but not the ground truth pkl file for the validation set needed to run the simple evaluator given in the README:

$ python eval_forecasting_helper.py --metrics --gt <path/to/ground/truth/pkl/file> --forecast <path/to/forecasted/trajectories/pkl/file> --horizon 30 --obs_len 20 --miss_threshold 2 --features <path/to/test/features> --max_n_guesses 6

Could you please share a link to it, or explain how to generate it from the raw data without computing features?

How do I get the ground truth?

I am following the baselines. When I evaluate a baseline, I cannot find any ground truth file. I passed the validation feature pickle file as the ground truth, which raises a KeyError.

I think there is a specific ground truth file format.

Can anyone help me?

Precomputed Features

Hi, could you share the precomputed features? The link provided is invalid. Thank you very much!

Centerlines missing in train and validation precomputed features

Hi there,

Thank you for sharing the baselines and setting up the benchmark!

I've noticed that, in the train/val sets of precomputed features:

 "candidate_centerlines" -> array of Nones
 "oracle_centerlines" -> OK

But in the test set:

 "candidate_centerlines" -> OK
 "oracle_centerlines" -> array of Nones

Is this intended, and if so, why?

Question about the training process

def train(
        train_loader: Any,
        epoch: int,
        criterion: Any,
        logger: Logger,
        encoder: Any,
        decoder: Any,
        encoder_optimizer: Any,
        decoder_optimizer: Any,
        model_utils: ModelUtils,
        rollout_len: int = 30,
) -> None:
    for i, (_input, target, helpers) in enumerate(train_loader):
        _input = _input.to(device)   # !!! here one whole batch of data is loaded !!!
        target = target.to(device)

        # Set to train mode
        encoder.train()
        decoder.train()

        # Zero the gradients
        encoder_optimizer.zero_grad()
        decoder_optimizer.zero_grad()

        # Encoder
        batch_size = _input.shape[0]
        input_length = _input.shape[1]
        # output_length = target.shape[1]
        # input_shape = _input.shape[2]

        # Initialize encoder hidden state
        encoder_hidden = model_utils.init_hidden(
            batch_size,
            encoder.module.hidden_size if use_cuda else encoder.hidden_size)

        # Initialize losses
        loss = 0

        # Encode observed trajectory
        for ei in range(input_length):       # !!! in this loop, the complete batch of 2 s of observations is fed through the encoder !!!
            encoder_input = _input[:, ei, :]    # !!! each iteration selects the data of one time stamp, ei * 0.1 s, for the whole batch !!!
            encoder_hidden = encoder(encoder_input, encoder_hidden)

        # Initialize decoder input with last coordinate in encoder
        decoder_input = encoder_input[:, :2]    # !!! which data in the batch is used?? I don't clearly understand this !!!

        # Initialize decoder hidden state as encoder hidden state
        decoder_hidden = encoder_hidden

        decoder_outputs = torch.zeros(target.shape).to(device)

        # Decode hidden state in future trajectory
        for di in range(rollout_len):
            decoder_output, decoder_hidden = decoder(decoder_input, decoder_hidden)
            decoder_outputs[:, di, :] = decoder_output

            # Update loss
            loss += criterion(decoder_output[:, :2], target[:, di, :2])

            # Use own predictions as inputs at next step
            decoder_input = decoder_output
I don't clearly understand the code above, especially the positions where I added comments.

  1. During training with a batch of data, the encoder is fed all samples of the same recording time step at once. Is this the correct procedure, and does it affect the LSTM's internal state?

  2. After that, the decoder takes encoder_input[:, :2] as its initial input. What exactly is this data? Is it the last recorded position of one trajectory in the batch, or the positions at the same (last) time step of all trajectories in the whole batch?

Thanks for more explanation on this.

BR, Song
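For what it's worth, a minimal shape sketch of what those two slices select (an illustration, not repo code): _input has shape (batch, obs_len, features), so _input[:, ei, :] is time step ei of every trajectory in the batch at once, and encoder_input[:, :2] after the loop is the last observed (x, y) of every trajectory, one 2-vector per batch element:

import torch

batch_size, obs_len, num_features = 4, 20, 2
_input = torch.randn(batch_size, obs_len, num_features)

for ei in range(obs_len):
    encoder_input = _input[:, ei, :]  # shape (4, 2): time step ei of ALL four trajectories

# After the loop, encoder_input holds the last time step (ei = 19) of every
# trajectory; [:, :2] keeps the (x, y) columns of each batch element.
decoder_input = encoder_input[:, :2]  # shape (4, 2)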

Is the Leaderboard correctly evaluating the test predictions?

I am trying to evaluate my model, which is currently unimodal (so I repeat the trajectory 6 times, assuming the same confidence for each), but after improving on my validation metrics, the test results stay the same.

On validation I improved the model from 2.53/4.47 (ADE k=1, FDE k=1) to 1.701 and 3.765:

[screenshot: validation metrics]

Then, how is it possible to obtain the same metrics for k=1 on the leaderboard? (Even the other metrics are pretty much the same.)

[screenshots: leaderboard metrics]

Moreover, how is it possible to obtain 1.5-1.8/3.7-4.1 on train and validation, but 9 and 19 on test?

Best regards,

Results of LSTM Social model on validation set are worse than the const velocity model.

Hi, @jagjeet-singh

Thanks for sharing the baseline code. I'm trying to train the LSTM Social model and evaluate it on the validation set.
The results of the LSTM Social model on the validation set are much worse than those of the constant velocity model.

Results on LSTM Social Model:

------------------------------------------------
Prediction Horizon : 30, Max #guesses (K): 6
------------------------------------------------
{'minADE': 13.345417598687021, 'minFDE': 25.38952803770351, 'MR': 0.9912342926631537, 'DAC': 0.9880674908796109}

Results on Const Velocity Model:

------------------------------------------------
Prediction Horizon : 30, Max #guesses (K): 6
------------------------------------------------
{'minADE': 2.7151615658689465, 'minFDE': 6.05341305248324, 'MR': 0.742146331576814, 'DAC': 0.9222993514389948}

I'm using the default parameters for training. Could you please help me sort out this issue?

I'm using the following scripts to do the same:

*Training*:

python lstm_train_test.py \
--train_features ../features/forecasting_features/forecasting_features_train.pkl \
--val_features ../features/forecasting_features/forecasting_features_val.pkl \
--test_features ../features/forecasting_features/forecasting_features_val.pkl \
--use_social --use_delta --normalize --obs_len 20 --pred_len 30  \
--model_path ./saved_models \
--traj_save_path ./saved_trajectories/lstm_social/rollout30_traj_sept.pkl

*Generating Forecast*:

python lstm_train_test.py \
--test_features ../features/forecasting_features/forecasting_features_val.pkl \
--use_social --use_delta --normalize --obs_len 20 --pred_len 30  --test \
--model_path ./saved_models/lstm_social/LSTM_rollout30.pth.tar \
--traj_save_path ./saved_trajectories/lstm_social/rollout30_traj_sept.pkl

*Metrics*:

python eval_forecasting_helper.py --metrics \
--gt ../features/dataset/ground_truth/ground_truth_val.pkl \
--forecast ./saved_trajectories/lstm_social/rollout30_traj_sept.pkl \
--horizon 30 --obs_len 20 \
--features ../features/forecasting_features/forecasting_features_val.pkl \
--miss_threshold 2  --max_n_guesses 6 

Generating Ground Truth:

import pandas as pd
import os
import pickle

df = pd.read_pickle("./forecasting_features/forecasting_features_val.pkl")
save_path = "./ground_truth_data"
if not os.path.exists(save_path):
    os.makedirs(save_path)
    
val_gt = {}
for i in range(len(df)):
    seq_id = df.iloc[i]['SEQUENCE']
    # Future part of the trajectory: time steps after the 20 observed ones;
    # columns 3:5 of the FEATURES array hold the (x, y) coordinates
    curr_arr = df.iloc[i]['FEATURES'][20:][:, 3:5]
    val_gt[seq_id] = curr_arr

with open(save_path + '/ground_truth_val.pkl', 'wb') as f:
    pickle.dump(val_gt, f)

KeyError Constant Velocity model

(argoverse) han@han:/media/han/E46A4C3C6A4C0E2C/hsi_ws/argoverse-forecasting$ python eval_forecasting_helper.py --metrics --gt /media/han/E46A4C3C6A4C0E2C/hsi_ws/argoverse-forecasting/ground_truth_data/ground_truth_val.pkl --forecast /media/han/E46A4C3C6A4C0E2C/hsi_ws/argoverse-forecasting/forecasted_trajectories/const_vel.pkl --horizon 30 --obs_len 20 --features /media/han/E46A4C3C6A4C0E2C/hsi_ws/argoverse-forecasting/forecasting_features/forecasting_features_val.pkl --max_n_guesses 6
Traceback (most recent call last):
  File "eval_forecasting_helper.py", line 252, in <module>
    args.miss_threshold,
  File "/media/han/E46A4C3C6A4C0E2C/hsi_ws/argoverse-api/argoverse/evaluation/eval_forecasting.py", line 215, in compute_forecasting_metrics
    forecasted_probabilities,
  File "/media/han/E46A4C3C6A4C0E2C/hsi_ws/argoverse-api/argoverse/evaluation/eval_forecasting.py", line 94, in get_displacement_errors_and_miss_rate
    max_num_traj = min(max_guesses, len(forecasted_trajectories[k]))
KeyError: 31171

I used the code below to make the ground truth file:

import os
import pickle

import pandas as pd

df = pd.read_pickle("./forecasting_features/forecasting_features_val.pkl")
print(df)
save_path = "./ground_truth_data"

if not os.path.exists(save_path):
    os.makedirs(save_path)

val_gt = {}
for i in range(len(df)):
    seq_id = df.iloc[i]['SEQUENCE']
    curr_arr = df.iloc[i]['FEATURES'][20:][:, 3:5]
    val_gt[seq_id] = curr_arr

with open(save_path + '/ground_truth_val.pkl', 'wb') as f:
    pickle.dump(val_gt, f)

[error screenshot]

I tried to run the evaluation metrics with the constant velocity baseline.
How can I solve this?
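A quick diagnostic sketch (paths as in the command above): the KeyError means sequence 31171 appears in the ground truth dict but not in the forecast pickle, so comparing the two key sets shows what is missing:

import pickle

with open("ground_truth_data/ground_truth_val.pkl", "rb") as f:
    gt = pickle.load(f)
with open("forecasted_trajectories/const_vel.pkl", "rb") as f:
    forecasts = pickle.load(f)

# Sequences that have ground truth but no forecast (these raise the KeyError)
missing = set(gt) - set(forecasts)
print(len(missing), sorted(missing)[:5])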

code mistake in velocity compute

vel_x, vel_y = zip(*[(
    x_coord[i] - x_coord[i - 1] /
    (float(timestamp[i]) - float(timestamp[i - 1])),
    y_coord[i] - y_coord[i - 1] /
    (float(timestamp[i]) - float(timestamp[i - 1])),
) for i in range(1, len(timestamp))])

The parentheses around x_coord[i] - x_coord[i - 1] and y_coord[i] - y_coord[i - 1] are missing, so only the coordinate at i - 1 is divided by the time delta. This leads to huge velocities, and vehicles will always be judged to be non-stationary.
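Agreed; with the parentheses restored, the intended computation (a sketch of the fix, using the same variables as above) would be:

vel_x, vel_y = zip(*[(
    (x_coord[i] - x_coord[i - 1]) /
    (float(timestamp[i]) - float(timestamp[i - 1])),
    (y_coord[i] - y_coord[i - 1]) /
    (float(timestamp[i]) - float(timestamp[i - 1])),
) for i in range(1, len(timestamp))])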

How to get the object class for the forecasting data?

Hi! On the Argoverse download page, I saw that object classes are annotated for the 3D tracking challenge, but they are not included in the motion forecasting data. Is there any way to get the object classes in the forecasting challenge? Could you provide a mapping from tracking_id to object class?

Best regards,

Phyllis
