openpcdet's People

Contributors

acivgin1, cheng052, chreisinger, danish87, deepsthewarrior, dingry, djiajunustc, dk-liang, fnozarian, frezaeix, gus-guo, honggyuchoi, jihanyang, julianschoep, lea-v, lookquad, ltphy, lynnpepin, martinhahner, nopileos2, shashankag14, shijianjian, sshaoshuai, starrah, xaviersantos, xiangruhuang, xiazhongyv, yangtianyu92, yezhen17, yzichen


openpcdet's Issues

Incorrect number of GTs and Preds used in eval.py

I was debugging the evaluation part of the code and found that total_dt_num and total_gt_num are interchanged when unpacking the values returned by calculate_iou_partly().

calculate_iou_partly() returns the following:

return overlaps, parted_overlaps, total_gt_num, total_dt_num

However, when these are used in L474, the positions of total_dt_num and total_gt_num are interchanged:

rets = calculate_iou_partly(dt_annos, gt_annos, metric, num_parts)
overlaps, parted_overlaps, total_dt_num, total_gt_num = rets
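If the swap is real, the minimal fix is to align the call with the function's own order. A hedged sketch, assuming the signature is calculate_iou_partly(gt_annos, dt_annos, metric, num_parts); note that the call above also passes dt_annos first, so the double swap may be intentional and self-consistent, and is worth verifying before changing anything:

# Sketch: pass arguments in signature order and unpack in return order.
# Caveat: swapping the arguments also transposes the per-example overlap
# matrices, so downstream indexing would need to be checked as well.
rets = calculate_iou_partly(gt_annos, dt_annos, metric, num_parts)
overlaps, parted_overlaps, total_gt_num, total_dt_num = rets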

Error after each epoch

An error occurs at the end of each epoch (or at the beginning of the next one), but training continues without any problem. It is probably related to our new torchmetrics stats.

train: 100%|██████████| 37/37 [00:42<00:00,  1.18s/it, total_it=36]
epochs:   0%|          | 0/60 [00:42<?, ?it/s, loss=3.25, lr=0.00104, d_time=0.00(0.01), f_time=0.82(0.67), b_time=1.07(1.00)]
epochs:   2%|▏         | 1/60 [00:42<41:49, 42.54s/it, loss=3.25, lr=0.00104, d_time=0.00(0.01), f_time=0.82(0.67), b_time=1.07(1.00)]Exception ignored in: <function _MultiProcessingDataLoaderIter.__del__ at 0x7f9f104e6280>
Traceback (most recent call last):
  File "/home/farzad/anaconda3/envs/pcdet/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1328, in __del__
    self._shutdown_workers()
  File "/home/farzad/anaconda3/envs/pcdet/lib/python3.8/site-packages/torch/utils/data/dataloader.py", line 1320, in _shutdown_workers
    if w.is_alive():
  File "/home/farzad/anaconda3/envs/pcdet/lib/python3.8/multiprocessing/process.py", line 160, in is_alive
    assert self._parent_pid == os.getpid(), 'can only test a child process'
AssertionError: can only test a child process
[the same "can only test a child process" traceback is repeated three more times: twice immediately, and once more after the next epoch's train loop starts]

train:   0%|          | 0/37 [00:00<?, ?it/s]

train:   3%|▎         | 1/37 [00:01<01:08,  1.91s/it]
epochs:   2%|▏         | 1/60 [00:44<41:49, 42.54s/it, loss=7.24, lr=0.00104, d_time=0.83(0.83), f_time=0.82(0.82), b_time=1.91(1.91)]
train:   5%|▌         | 2/37 [00:03<01:00,  1.73s/it, total_it=38]
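This assertion is typically raised when a multiprocessing DataLoader iterator is garbage-collected inside a worker process instead of the parent. A hedged workaround sketch, assuming PyTorch >= 1.7; train_dataset and the loader arguments are placeholders, not the repo's actual trainer code:

import torch

# Keep workers alive across epochs so the iterator is not torn down and
# re-created at every epoch boundary, a common trigger for the
# "can only test a child process" message during cleanup.
train_loader = torch.utils.data.DataLoader(
    train_dataset,            # placeholder dataset variable
    batch_size=4,
    num_workers=4,
    persistent_workers=True,  # requires num_workers > 0
)

Since training continues unaffected, the message may also simply be ignored.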

Mean computation for per-batch states

The following per-batch states are updated with mean values in update():

self.pred_ious = torch.tensor(pred_ious).cuda().mean()
self.pred_accs = torch.tensor(pred_accs).cuda().mean()
self.pred_fgs = torch.tensor(pred_fgs).cuda().mean()
self.sem_score_fgs = torch.tensor(sem_score_fgs).cuda().mean()
self.sem_score_bgs = torch.tensor(sem_score_bgs).cuda().mean()
self.num_pred_boxes = torch.tensor(num_pred_boxes).cuda().float().mean()
self.num_gt_boxes = torch.tensor(num_gt_boxes).cuda().float().mean()

When compute() is called, these states are already scalar tensors, so the mean computation can be avoided here:

def compute(self, stats_only=True):
    results = {'pred_ious': self.pred_ious.mean(), 'pred_accs': self.pred_accs.mean(),
               'pred_fgs': self.pred_fgs.mean(), 'sem_score_fgs': self.sem_score_fgs.mean(),
               'sem_score_bgs': self.sem_score_bgs.mean(), 'num_pred_boxes': self.num_pred_boxes.mean(),
               'num_gt_boxes': self.num_gt_boxes.mean()}
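A minimal sketch of the simplification; the caveat is that if a distributed reduce function stacks these states into vectors across processes, the .mean() calls would still be needed:

def compute(self, stats_only=True):
    # Each state is already a scalar after update(), so return it directly.
    results = {'pred_ious': self.pred_ious, 'pred_accs': self.pred_accs,
               'pred_fgs': self.pred_fgs, 'sem_score_fgs': self.sem_score_fgs,
               'sem_score_bgs': self.sem_score_bgs, 'num_pred_boxes': self.num_pred_boxes,
               'num_gt_boxes': self.num_gt_boxes}
    return results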

[DA-7] Repetitive BG samples due to mismatch in ROIs per image

Currently we have 100 max_overlaps, i.e. 100 proposals coming from the teacher:

gt_scores = batch_dict['pred_scores_ema'][index]
gt_boxes = batch_dict['gt_boxes'][index]
sampled_inds = self.subsample_rois(max_overlaps=gt_scores)

Whereas, when subsampling based on the FG-BG ratio, we have self.roi_sampler_cfg.ROI_PER_IMAGE=128, due to which we sample some repetitive BGs:

bg_rois_per_this_image = self.roi_sampler_cfg.ROI_PER_IMAGE - fg_rois_per_this_image

Example of sampled indices for BGs: (screenshot omitted)

Can we avoid this repetition by keeping self.roi_sampler_cfg.ROI_PER_IMAGE=100 and marking the remaining ROIs and their corresponding reg_valid_mask and rcnn_cls_labels as invalid (as we did before in _override_unlabeled_target)? See the sketch below.
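To make the mismatch concrete, here is a small illustration; all counts except the 100 proposals and ROI_PER_IMAGE=128 are hypothetical:

# With 100 teacher proposals but ROI_PER_IMAGE = 128, the sampler must draw
# more BG indices than there are distinct BG candidates, so indices repeat.
num_proposals = 100            # proposals coming from the teacher
roi_per_image = 128            # self.roi_sampler_cfg.ROI_PER_IMAGE
fg_rois_per_this_image = 32    # hypothetical FG count after subsampling
bg_rois_per_this_image = roi_per_image - fg_rois_per_this_image   # 96 needed
num_bg_candidates = num_proposals - fg_rois_per_this_image        # 68 available
assert bg_rois_per_this_image > num_bg_candidates  # repetition is unavoidable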

Incorrect mAP calculation when precision is not fully determined for all 41 sample points

precision, recall and detailed_stats are initialized with 41 sample points, but most of the time they are not filled completely. mAP, for example, is therefore calculated wrongly by averaging over all 41 points, most of which are zeros:

precision = np.zeros([num_class, num_minoverlap, N_SAMPLE_PTS])
recall = np.zeros([num_class, num_minoverlap, N_SAMPLE_PTS])
detailed_stats = np.zeros([num_class, num_minoverlap, N_SAMPLE_PTS, 5])

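A small numpy sketch of the concern; the cutoff index is hypothetical:

import numpy as np

# Averaging over all 41 sample points dilutes mAP with unfilled zeros.
N_SAMPLE_PTS = 41
precision = np.zeros(N_SAMPLE_PTS)
num_filled = 25                 # hypothetical: only 25 points were computed
precision[:num_filled] = 0.8    # constant precision, for clarity

map_all = precision.mean()                  # ~0.49, diluted by trailing zeros
map_filled = precision[:num_filled].mean()  # 0.8, the intended value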

Checks for ignored_det and ignored_gt in compute_statistics_jit()

ignored_det and ignored_gt are assigned either 0 or -1, based on the valid class index, in clean_data:

def clean_data(gt_anno, dt_anno, current_class):
    dc_bboxes, ignored_gt, ignored_dt = [], [], []
    # TODO(farzad) hardcoded
    num_gt = gt_anno.numel() // 8  # len(gt_anno["name"])
    num_dt = dt_anno.numel() // 9  # len(dt_anno["name"])
    num_valid_gt = 0
    # TODO(farzad) cleanup and parallelize
    for i in range(num_gt):
        gt_cls_ind = gt_anno[i][-1]
        if gt_cls_ind == current_class:
            ignored_gt.append(0)
            num_valid_gt += 1
        else:
            ignored_gt.append(-1)
    for i in range(num_dt):
        dt_cls_ind = dt_anno[i][-2]
        if dt_cls_ind == current_class:
            ignored_dt.append(0)
        else:
            ignored_dt.append(-1)
    return num_valid_gt, ignored_gt, ignored_dt, dc_bboxes

However, in compute_statistics_jit, we check for cases where ignored_det == 1 and where ignored_gt == 1:

    elif (compute_fp and (overlap > min_overlap)
            and (valid_detection == NO_DETECTION)
            and ignored_det[j] == 1):
        det_idx = j
        valid_detection = 1
        assigned_ignored_det = True

if (valid_detection == NO_DETECTION) and ignored_gt[i] == 0:
    fn += 1
elif ((valid_detection != NO_DETECTION)
        and (ignored_gt[i] == 1 or ignored_det[det_idx] == 1)):
    assigned_detection[det_idx] = True
elif valid_detection != NO_DETECTION:
    tp += 1
    # thresholds.append(dt_scores[det_idx])
    thresholds[thresh_idx] = dt_scores[det_idx]
    thresh_idx += 1
    assigned_detection[det_idx] = True

if compute_fp:
    for i in range(det_size):
        if (not (assigned_detection[i] or ignored_det[i] == -1
                 or ignored_det[i] == 1 or ignored_threshold[i])):
            fp += 1

return tp, fp, fn, similarity, thresholds[:thresh_idx]

Is there any case in which these two values (ignored_det and ignored_gt) are ever assigned 1?
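For reference, the value 1 appears to be inherited from the original KITTI evaluation, where clean_data can mark a box as "ignored" (1) in addition to valid (0) and rejected (-1). A rough, paraphrased sketch of that upstream logic (from memory; check kitti-object-eval-python for the exact rules):

# Paraphrase of upstream behavior, not this repo's code: a GT of a
# neighboring class (e.g. Van for Car) or one that is too hard for the
# current difficulty is "ignored" (1) rather than rejected (-1).
def classify_gt(gt_name, current_cls_name, too_hard):
    if gt_name == current_cls_name:
        valid_class = 1
    elif current_cls_name == 'Car' and gt_name == 'Van':
        valid_class = 0
    else:
        valid_class = -1
    if valid_class == 1 and not too_hard:
        return 0    # valid
    if valid_class == 0 or (valid_class == 1 and too_hard):
        return 1    # ignored: the case the simplified clean_data dropped
    return -1       # rejected

print(classify_gt('Van', 'Car', False))  # -> 1

Since the simplified clean_data above only ever appends 0 or -1, the == 1 branches in compute_statistics_jit look like dead code here.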

Recall values getting overridden in calc_statistics_new()

While iterating over the classes, the max/mean recall values in statistics are overridden by each class's new values:

# Get calculated recall
for c, cls_name in enumerate(['Car', 'Pedestrian', 'Cyclist']):
    statistics['max_recall'] = results['raw_recall'][c].max().item()
    statistics['mean_recall'] = results['raw_recall'][c].mean().item()

Maybe we can do what we already do for storing the precision, i.e. collect the value for each class in class_metrics_all and store this dictionary in statistics:

# Get calculated precision
for m, metric_name in enumerate(['mAP_3d', 'mAP_3d_R40']):
    class_metrics_all = {}
    for c, cls_name in enumerate(['Car', 'Pedestrian', 'Cyclist']):
        metric_value = results[metric_name][c].item()
        class_metrics_all[cls_name] = metric_value
    statistics[metric_name] = class_metrics_all
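A hedged sketch of the analogous fix for recall; the 'recall' key name is illustrative:

# Store per-class recall in a dict instead of overwriting a single scalar.
recall_all = {}
for c, cls_name in enumerate(['Car', 'Pedestrian', 'Cyclist']):
    recall_all[cls_name] = {'max_recall': results['raw_recall'][c].max().item(),
                            'mean_recall': results['raw_recall'][c].mean().item()}
statistics['recall'] = recall_all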

Missing key(s) in state_dict when dsnorm is true

When dsnorm is true, some keys are missing from the state dict, so loading cannot be done properly:

File "../pcdet/models/detectors/detector3d_template.py", line 380, in load_params_from_file state_dict, update_model_state = self._load_state_dict(model_state_disk, strict=strict) File "../pcdet/models/detectors/detector3d_template.py", line 356, in _load_state_dict self.load_state_dict(update_model_state) File "/opt/conda/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1070, in load_state_dict raise RuntimeError('Error(s) in loading state_dict for {}:\n\t{}'.format( RuntimeError: Error(s) in loading state_dict for PVRCNN: Missing key(s) in state_dict: "backbone_3d.conv_input.1.running_mean", "backbone_3d.conv_input.1.running_mean", "backbone_3d.conv_input.1.running_var", "backbone_3d.conv_input.1.running_var", "backbone_3d.conv1.0.1.running_mean", "backbone_3d.conv1.0.1.running_mean", "backbone_3d.conv1.0.1.running_var", "backbone_3d.conv1.0.1.running_var", "backbone_3d.conv2.0.1.running_mean", "backbone_3d.conv2.0.1.running_mean", "backbone_3d.conv2.0.1.running_var", "backbone_3d.conv2.0.1.running_var", "backbone_3d.conv2.1.1.running_mean", "backbone_3d.conv2.1.1.running_mean", "backbone_3d.conv2.1.1.running_var", "backbone_3d.conv2.1.1.running_var", "backbone_3d.conv2.2.1.running_mean", "backbone_3d.conv2.2.1.running_mean", "backbone_3d.conv2.2.1.running_var", "backbone_3d.conv2.2.1.running_var", "backbone_3d.conv3.0.1.running_mean", "backbone_3d.conv3.0.1.running_mean", "backbone_3d.conv3.0.1.running_var", "backbone_3d.conv3.0.1.running_var", "backbone_3d.conv3.1.1.running_mean", "backbone_3d.conv3.1.1.running_mean", "backbone_3d.conv3.1.1.running_var", "backbone_3d.conv3.1.1.running_var", "backbone_3d.conv3.2.1.running_mean", "backbone_3d.conv3.2.1.running_mean", "backbone_3d.conv3.2.1.running_var", "backbone_3d.conv3.2.1.running_var", "backbone_3d.conv4.0.1.running_mean", "backbone_3d.conv4.0.1.running_mean", "backbone_3d.conv4.0.1.running_var", "backbone_3d.conv4.0.1.running_var", "backbone_3d.conv4.1.1.running_mean", "backbone_3d.conv4.1.1.running_mean", "backbone_3d.conv4.1.1.running_var", "backbone_3d.conv4.1.1.running_var", "backbone_3d.conv4.2.1.running_mean", "backbone_3d.conv4.2.1.running_mean", "backbone_3d.conv4.2.1.running_var", "backbone_3d.conv4.2.1.running_var", "backbone_3d.conv_out.1.running_mean", "backbone_3d.conv_out.1.running_mean", "backbone_3d.conv_out.1.running_var", "backbone_3d.conv_out.1.running_var", "pfe.SA_layers.0.mlps.0.1.running_mean", "pfe.SA_layers.0.mlps.0.1.running_mean", "pfe.SA_layers.0.mlps.0.1.running_var", "pfe.SA_layers.0.mlps.0.1.running_var", "pfe.SA_layers.0.mlps.0.4.running_mean", "pfe.SA_layers.0.mlps.0.4.running_mean", "pfe.SA_layers.0.mlps.0.4.running_var", "pfe.SA_layers.0.mlps.0.4.running_var", "pfe.SA_layers.0.mlps.1.1.running_mean", "pfe.SA_layers.0.mlps.1.1.running_mean", "pfe.SA_layers.0.mlps.1.1.running_var", "pfe.SA_layers.0.mlps.1.1.running_var", "pfe.SA_layers.0.mlps.1.4.running_mean", "pfe.SA_layers.0.mlps.1.4.running_mean", "pfe.SA_layers.0.mlps.1.4.running_var", "pfe.SA_layers.0.mlps.1.4.running_var", "pfe.SA_layers.1.mlps.0.1.running_mean", "pfe.SA_layers.1.mlps.0.1.running_mean", "pfe.SA_layers.1.mlps.0.1.running_var", "pfe.SA_layers.1.mlps.0.1.running_var", "pfe.SA_layers.1.mlps.0.4.running_mean", "pfe.SA_layers.1.mlps.0.4.running_mean", "pfe.SA_layers.1.mlps.0.4.running_var", "pfe.SA_layers.1.mlps.0.4.running_var", "pfe.SA_layers.1.mlps.1.1.running_mean", "pfe.SA_layers.1.mlps.1.1.running_mean", "pfe.SA_layers.1.mlps.1.1.running_var", "pfe.SA_layers.1.mlps.1.1.running_var", 
"pfe.SA_layers.1.mlps.1.4.running_mean", "pfe.SA_layers.1.mlps.1.4.running_mean", "pfe.SA_layers.1.mlps.1.4.running_var", "pfe.SA_layers.1.mlps.1.4.running_var", "pfe.SA_rawpoints.mlps.0.1.running_mean", "pfe.SA_rawpoints.mlps.0.1.running_mean", "pfe.SA_rawpoints.mlps.0.1.running_var", "pfe.SA_rawpoints.mlps.0.1.running_var", "pfe.SA_rawpoints.mlps.0.4.running_mean", "pfe.SA_rawpoints.mlps.0.4.running_mean", "pfe.SA_rawpoints.mlps.0.4.running_var", "pfe.SA_rawpoints.mlps.0.4.running_var", "pfe.SA_rawpoints.mlps.1.1.running_mean", "pfe.SA_rawpoints.mlps.1.1.running_mean", "pfe.SA_rawpoints.mlps.1.1.running_var", "pfe.SA_rawpoints.mlps.1.1.running_var", "pfe.SA_rawpoints.mlps.1.4.running_mean", "pfe.SA_rawpoints.mlps.1.4.running_mean", "pfe.SA_rawpoints.mlps.1.4.running_var", "pfe.SA_rawpoints.mlps.1.4.running_var", "pfe.vsa_point_feature_fusion.1.running_mean", "pfe.vsa_point_feature_fusion.1.running_mean", "pfe.vsa_point_feature_fusion.1.running_var", "pfe.vsa_point_feature_fusion.1.running_var", "backbone_2d.blocks.0.2.running_mean", "backbone_2d.blocks.0.2.running_mean", "backbone_2d.blocks.0.2.running_var", "backbone_2d.blocks.0.2.running_var", "backbone_2d.blocks.0.5.running_mean", "backbone_2d.blocks.0.5.running_mean", "backbone_2d.blocks.0.5.running_var", "backbone_2d.blocks.0.5.running_var", "backbone_2d.blocks.0.8.running_mean", "backbone_2d.blocks.0.8.running_mean", "backbone_2d.blocks.0.8.running_var", "backbone_2d.blocks.0.8.running_var", "backbone_2d.blocks.0.11.running_mean", "backbone_2d.blocks.0.11.running_mean", "backbone_2d.blocks.0.11.running_var", "backbone_2d.blocks.0.11.running_var", "backbone_2d.blocks.0.14.running_mean", "backbone_2d.blocks.0.14.running_mean", "backbone_2d.blocks.0.14.running_var", "backbone_2d.blocks.0.14.running_var", "backbone_2d.blocks.0.17.running_mean", "backbone_2d.blocks.0.17.running_mean", "backbone_2d.blocks.0.17.running_var", "backbone_2d.blocks.0.17.running_var", "backbone_2d.blocks.1.2.running_mean", "backbone_2d.blocks.1.2.running_mean", "backbone_2d.blocks.1.2.running_var", "backbone_2d.blocks.1.2.running_var", "backbone_2d.blocks.1.5.running_mean", "backbone_2d.blocks.1.5.running_mean", "backbone_2d.blocks.1.5.running_var", "backbone_2d.blocks.1.5.running_var", "backbone_2d.blocks.1.8.running_mean", "backbone_2d.blocks.1.8.running_mean", "backbone_2d.blocks.1.8.running_var", "backbone_2d.blocks.1.8.running_var", "backbone_2d.blocks.1.11.running_mean", "backbone_2d.blocks.1.11.running_mean", "backbone_2d.blocks.1.11.running_var", "backbone_2d.blocks.1.11.running_var", "backbone_2d.blocks.1.14.running_mean", "backbone_2d.blocks.1.14.running_mean", "backbone_2d.blocks.1.14.running_var", "backbone_2d.blocks.1.14.running_var", "backbone_2d.blocks.1.17.running_mean", "backbone_2d.blocks.1.17.running_mean", "backbone_2d.blocks.1.17.running_var", "backbone_2d.blocks.1.17.running_var", "backbone_2d.deblocks.0.1.running_mean", "backbone_2d.deblocks.0.1.running_mean", "backbone_2d.deblocks.0.1.running_var", "backbone_2d.deblocks.0.1.running_var", "backbone_2d.deblocks.1.1.running_mean", "backbone_2d.deblocks.1.1.running_mean", "backbone_2d.deblocks.1.1.running_var", "backbone_2d.deblocks.1.1.running_var", "point_head.cls_layers.1.running_mean", "point_head.cls_layers.1.running_mean", "point_head.cls_layers.1.running_var", "point_head.cls_layers.1.running_var", "point_head.cls_layers.4.running_mean", "point_head.cls_layers.4.running_mean", "point_head.cls_layers.4.running_var", "point_head.cls_layers.4.running_var", 
"roi_head.roi_grid_pool_layer.mlps.0.1.running_mean", "roi_head.roi_grid_pool_layer.mlps.0.1.running_mean", "roi_head.roi_grid_pool_layer.mlps.0.1.running_var", "roi_head.roi_grid_pool_layer.mlps.0.1.running_var", "roi_head.roi_grid_pool_layer.mlps.0.4.running_mean", "roi_head.roi_grid_pool_layer.mlps.0.4.running_mean", "roi_head.roi_grid_pool_layer.mlps.0.4.running_var", "roi_head.roi_grid_pool_layer.mlps.0.4.running_var", "roi_head.roi_grid_pool_layer.mlps.1.1.running_mean", "roi_head.roi_grid_pool_layer.mlps.1.1.running_mean", "roi_head.roi_grid_pool_layer.mlps.1.1.running_var", "roi_head.roi_grid_pool_layer.mlps.1.1.running_var", "roi_head.roi_grid_pool_layer.mlps.1.4.running_mean", "roi_head.roi_grid_pool_layer.mlps.1.4.running_mean", "roi_head.roi_grid_pool_layer.mlps.1.4.running_var", "roi_head.roi_grid_pool_layer.mlps.1.4.running_var", "roi_head.shared_fc_layer.1.running_mean", "roi_head.shared_fc_layer.1.running_mean", "roi_head.shared_fc_layer.1.running_var", "roi_head.shared_fc_layer.1.running_var", "roi_head.shared_fc_layer.5.running_mean", "roi_head.shared_fc_layer.5.running_mean", "roi_head.shared_fc_layer.5.running_var", "roi_head.shared_fc_layer.5.running_var", "roi_head.cls_layers.1.running_mean", "roi_head.cls_layers.1.running_mean", "roi_head.cls_layers.1.running_var", "roi_head.cls_layers.1.running_var", "roi_head.cls_layers.5.running_mean", "roi_head.cls_layers.5.running_mean", "roi_head.cls_layers.5.running_var", "roi_head.cls_layers.5.running_var", "roi_head.reg_layers.1.running_mean", "roi_head.reg_layers.1.running_mean", "roi_head.reg_layers.1.running_var", "roi_head.reg_layers.1.running_var", "roi_head.reg_layers.5.running_mean", "roi_head.reg_layers.5.running_mean", "roi_head.reg_layers.5.running_var", "roi_head.reg_layers.5.running_var". srun: error: kyoto: task 0: Exited with exit code 1 srun: Terminating job step 358824.0 srun: error: serv-9222: task 1: Terminated srun: Force Terminated job step 358824.0

continue if there is no valid pred box

An exception is raised when there is no valid prediction: in that case valid_pred_boxes is empty and the cat operation cannot be performed, because pred_scores[i] was filled with a zero placeholder in the previous stage (that filling should instead use -1, so that pred_scores[i] can be filtered here):

valid_pred_boxes = preds[i][valid_preds_mask]
valid_gt_boxes = targets[i][valid_gts_mask]
# Starting class indices from zero
valid_pred_boxes[:, -1] -= 1
valid_gt_boxes[:, -1] -= 1
# Adding predicted scores as the last column
valid_pred_boxes = torch.cat([valid_pred_boxes, pred_scores[i].unsqueeze(dim=-1)], dim=-1)

I am not sure, but we should check whether we could:

  1. Skip the i-th iteration if valid_preds_mask.sum()==0 or (not len(valid_pred_boxes))
  2. Assert that valid_preds_mask.sum()==len(pred_scores[i])

A minimal guard for option 1 is sketched after the filling snippet below.

Filling part:

pseudo_scores.append(pseudo_label.new_zeros((1,)).float())
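A minimal sketch of the guard from option 1 (hypothetical placement at the top of the per-image loop, before the concatenation above):

# Skip images without any valid prediction so torch.cat never pairs an
# empty box tensor with the zero-filled placeholder score.
if valid_preds_mask.sum() == 0:
    continue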

Bug: missing mask for roi case in metrics update

@shashankag14 The following line has been changed in the main consistency branch. Please fix it in the DA-20-roi-aug branch and make sure this has not influenced your analysis.

Before:

pred_boxes = targets_dict['gt_of_rois'][uind][mask, :-1] if pred_type == 'pl' else targets_dict['rois'][uind]

After (note the added [mask] on the rois branch):

pred_boxes = targets_dict['gt_of_rois'][uind][mask, :-1] if pred_type == 'pl' else targets_dict['rois'][uind][mask]

scores (cls_preds) are already sigmoid-normalised

The scores in L263 are already normalized, since they come from the output of the post-processing stage. Do we still need to normalize them in L269?

for cur_module in self.pv_rcnn.module_list:
    if cur_module.model_cfg['NAME'] == 'PVRCNNHead' and self.model_cfg['ROI_HEAD'].get('ENABLE_RCNN_CONSISTENCY', False):
        # Pass the teacher's proposals to the student.
        # To let the proposal_layer continue for labeled data, we pass rois with an _ema postfix
        batch_dict['rois_ema'] = batch_dict_ema['rois'].detach().clone()
        batch_dict['roi_scores_ema'] = torch.sigmoid(batch_dict_ema['roi_scores'].detach().clone())
        batch_dict['roi_labels_ema'] = batch_dict_ema['roi_labels'].detach().clone()
        batch_dict = self.apply_augmentation(batch_dict, batch_dict, unlabeled_inds, key='rois_ema')
        if self.model_cfg['ROI_HEAD'].get('ENABLE_RELIABILITY', False):
            # pseudo-labels used for training the roi head
            pred_dicts = self.ensemble_post_processing(batch_dict_ema, batch_dict_ema_wa, unlabeled_inds,
                                                       ensemble_option='mean_no_nms')
        else:
            pred_dicts, _ = self.pv_rcnn_ema.post_processing(batch_dict_ema, no_recall_dict=True,
                                                             override_thresh=0.0, no_nms_for_unlabeled=True)
        boxes, labels, scores, sem_scores, boxes_var, scores_var = self._unpack_predictions(pred_dicts, unlabeled_inds)
        pseudo_boxes = [torch.cat([box, label.unsqueeze(-1)], dim=-1) for box, label in zip(boxes, labels)]
        self._fill_with_pseudo_labels(batch_dict, pseudo_boxes, unlabeled_inds, labeled_inds)
        batch_dict = self.apply_augmentation(batch_dict, batch_dict, unlabeled_inds, key='gt_boxes')
        batch_dict['pred_scores_ema'] = torch.zeros_like(batch_dict['roi_scores_ema'])
        for i, ui in enumerate(unlabeled_inds):
            batch_dict['pred_scores_ema'][ui] = torch.sigmoid(scores[i].detach().clone())
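If the scores were normalized twice, the second sigmoid would squash values that are already in [0, 1] into the narrow range (0.5, 0.731) and invalidate any threshold tuned on the original scale. A quick check:

import torch

scores = torch.tensor([0.05, 0.50, 0.95])  # already sigmoid-normalised
print(torch.sigmoid(scores))               # tensor([0.5125, 0.6225, 0.7211])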

DA-18 Running code on a headless server (no integrated display)

The current code imports mayavi in pv_rcnn_ssl.py and roi_head_template.py to visualize the boxes at different stages:

import mayavi.mlab as mlab
from visual_utils import visualize_utils as V

Running the same code on the cluster fails, since we do not have display support on that server. The following workaround imports mayavi only when a display is present at runtime, avoiding the failure:

import os

if os.name == 'posix' and "DISPLAY" not in os.environ:
    headless_server = True
else:
    headless_server = False
    import mayavi.mlab as mlab
    from visual_utils import visualize_utils as V
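Call sites can then guard the visualization; a hypothetical usage example (draw_scenes stands in for whichever helper is actually called):

# Only visualize when a display is available.
if not headless_server:
    V.draw_scenes(points=points, gt_boxes=gt_boxes)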
