
anchor3dlane's People

Contributors

spyflying

anchor3dlane's Issues

question about the ablation experiment

Thanks for your amazing work!

I'm very confused about your ablation experiment in Table 5. You mention: "including warping FV image to BEV image (line 1) and warping FV feature to BEV feature (line 2), and keep the other settings the same as our original Anchor3DLane."

I would like to know how you warp the image or feature. Do you use the IPM method, i.e., sampling from the original image with a homography matrix, or just a linear layer such as nn.Linear(input_h * input_w, output_h * output_w)?

Does the released code contain this part of the experiment?
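A rough sketch of what I mean by IPM-style warping (my own illustration, not the authors' implementation; the BEV grid ranges and the assumption that the ground plane is Z=0 in the camera/ego frame are mine):

import torch
import torch.nn.functional as F

def ipm_warp(image, P, bev_h=200, bev_w=100, x_range=(-10.0, 10.0), y_range=(3.0, 103.0)):
    """image: [B, C, H, W] front-view image; P: [B, 3, 4] camera projection matrix."""
    B, C, H, W = image.shape
    xs = torch.linspace(x_range[0], x_range[1], bev_w, device=image.device)
    ys = torch.linspace(y_range[0], y_range[1], bev_h, device=image.device)
    gy, gx = torch.meshgrid(ys, xs)                                   # 'ij' indexing
    ones = torch.ones_like(gx)
    # homogeneous ground points (X, Y, Z=0, 1), one per BEV cell
    pts = torch.stack([gx, gy, torch.zeros_like(gx), ones], dim=-1).view(1, -1, 4, 1)
    uvw = (P.view(B, 1, 3, 4) @ pts.expand(B, -1, -1, -1)).squeeze(-1)  # [B, N, 3]
    uv = uvw[..., :2] / uvw[..., 2:].clamp(min=1e-6)                  # pixel coordinates
    # normalize to [-1, 1] for grid_sample
    grid = torch.stack([uv[..., 0] / (W - 1) * 2 - 1,
                        uv[..., 1] / (H - 1) * 2 - 1], dim=-1).view(B, bev_h, bev_w, 2)
    return F.grid_sample(image, grid, align_corners=True)             # [B, C, bev_h, bev_w]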

AttributeError

Hello, thanks for your great work! When I test the code, I receive the error "AttributeError: Anchor3DLane: EfficientNet: module 'geffnet' has no attribute 'tf_efficientnet_b3_ns_s8'". How can I solve it? I cloned the code in July.

2023-08-29 09:40:23,293 - mmseg - INFO - Multi-processing start method is None
2023-08-29 09:40:23,293 - mmseg - INFO - OpenCV num_threads is `64
is_resample: True
Now loading annotations...
after load annotation
find 198 samples in /mnt/yuantiantian1/Anchor3DLane-main/data/OpenLane/data_lists/validation.txt.
anchor: 4431
Traceback (most recent call last):
File "/root/anaconda3/envs/lane3d/lib/python3.7/site-packages/mmcv/utils/registry.py", line 69, in build_from_cfg
return obj_cls(**args)
File "/mnt/yuantiantian1/Anchor3DLane-main/mmseg/models/backbones/efficientnet.py", line 131, in init
self.encoder = geffnet.tf_efficientnet_b3_ns_s8(pretrained=False)
AttributeError: module 'geffnet' has no attribute 'tf_efficientnet_b3_ns_s8'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "/root/anaconda3/envs/lane3d/lib/python3.7/site-packages/mmcv/utils/registry.py", line 69, in build_from_cfg
return obj_cls(**args)
File "/mnt/yuantiantian1/Anchor3DLane-main/mmseg/models/lane_detector/anchor_3dlane.py", line 109, in init
self.backbone = build_backbone(backbone)
File "/mnt/yuantiantian1/Anchor3DLane-main/mmseg/models/builder.py", line 24, in build_backbone
return BACKBONES.build(cfg)
File "/root/anaconda3/envs/lane3d/lib/python3.7/site-packages/mmcv/utils/registry.py", line 237, in build
return self.build_func(*args, **kwargs, registry=self)
File "/root/anaconda3/envs/lane3d/lib/python3.7/site-packages/mmcv/cnn/builder.py", line 27, in build_model_from_cfg
return build_from_cfg(cfg, registry, default_args)
File "/root/anaconda3/envs/lane3d/lib/python3.7/site-packages/mmcv/utils/registry.py", line 72, in build_from_cfg
raise type(e)(f'{obj_cls.__name__}: {e}')
AttributeError: EfficientNet: module 'geffnet' has no attribute 'tf_efficientnet_b3_ns_s8'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
File "tools/test.py", line 238, in
main()
File "tools/test.py", line 187, in main
model = build_lanedetector(cfg.model)
File "/mnt/yuantiantian1/Anchor3DLane-main/mmseg/models/builder.py", line 42, in build_lanedetector
return LANENET2S.build(cfg)
File "/root/anaconda3/envs/lane3d/lib/python3.7/site-packages/mmcv/utils/registry.py", line 237, in build
return self.build_func(*args, **kwargs, registry=self)
File "/root/anaconda3/envs/lane3d/lib/python3.7/site-packages/mmcv/cnn/builder.py", line 27, in build_model_from_cfg
return build_from_cfg(cfg, registry, default_args)
File "/root/anaconda3/envs/lane3d/lib/python3.7/site-packages/mmcv/utils/registry.py", line 72, in build_from_cfg
raise type(e)(f'{obj_cls.__name__}: {e}')
AttributeError: Anchor3DLane: EfficientNet: module 'geffnet' has no attribute 'tf_efficientnet_b3_ns_s8'
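A quick check of which EfficientNet variants the installed geffnet actually exposes (a debugging sketch of mine, not from the repository); the "_s8" suffix does not appear to be part of the upstream geffnet package, so it may come from a modified copy of geffnet rather than the PyPI release:

import geffnet

# print the installed version and the tf_efficientnet_b3 entrypoints the package provides
print(getattr(geffnet, "__version__", "unknown version"))
print([name for name in dir(geffnet) if name.startswith("tf_efficientnet_b3")])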

GPU resources when testing?

@spyflying
Thanks for your outstanding work!
I would like to know how much GPU memory is needed to run the test demo. In other words, what hardware resources are required to run it successfully?
Looking forward to your reply, sincerely.

Unsatisfactory effect

We trained on the OpenLane dataset with the iter config, but the results are not ideal. Could you point out what we may have configured incorrectly?

incorrect testing results

I ran testing on the Apollosim dataset but got results that did not make any sense. The classification scores were all around 0.5, so with prob_th set to the default of 0.7, no lanes were detected.

The model apollo_anchor3dlane.pth used for testing was downloaded from the model zoo: https://pan.baidu.com/s/1HPYxsNNSOO5CY7-RwAt9cw?pwd=bqvy.

There was a warning during testing:
'''
load checkpoint from local path: ./pretrained/apollo_anchor3dlane.pth
The model and loaded state dict do not match exactly

missing keys in source state_dict: cls_layer.1.layer.0.weight, cls_layer.1.layer.0.bias, cls_layer.1.layer.2.weight, cls_layer.1.layer.2.bias, cls_layer.1.layer.4.weight, cls_layer.1.layer.4.bias, reg_x_layer.1.layer.0.weight, reg_x_layer.1.layer.0.bias, reg_x_layer.1.layer.2.weight, reg_x_layer.1.layer.2.bias, reg_x_layer.1.layer.4.weight, reg_x_layer.1.layer.4.bias, reg_z_layer.1.layer.0.weight, reg_z_layer.1.layer.0.bias, reg_z_layer.1.layer.2.weight, reg_z_layer.1.layer.2.bias, reg_z_layer.1.layer.4.weight, reg_z_layer.1.layer.4.bias, reg_vis_layer.1.layer.0.weight, reg_vis_layer.1.layer.0.bias, reg_vis_layer.1.layer.2.weight, reg_vis_layer.1.layer.2.bias, reg_vis_layer.1.layer.4.weight, reg_vis_layer.1.layer.4.bias
'''

Is this the underlying problem? Please advise on how to fix it.

Thanks!
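A small inspection sketch (my own, not from the repository) to compare the checkpoint's parameter names with the ones the model reports as missing; if the cls_layer.1 / reg_*_layer.1 weights are really absent from the file, that part of the head would run with random weights, which would explain scores hovering around 0.5:

import torch

ckpt = torch.load("./pretrained/apollo_anchor3dlane.pth", map_location="cpu")
state_dict = ckpt.get("state_dict", ckpt)   # mmcv checkpoints usually nest weights under 'state_dict'
print(sorted(k for k in state_dict if k.startswith(("cls_layer", "reg_x_layer"))))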

GPU resource

What GPU resources did you use for training, and how much time does it take?

Question about the equal-width constraint

Hello! I have a question I'd like to ask.
In Section 3.4 of the paper, about the equal-width constraint optimization: how is the width computed? I could not follow Equations 9-11. Could you explain them?
[screenshots of Equations 9-11 from the paper]
I drew a diagram myself, but I am not sure whether my understanding is correct:
[hand-drawn diagram]
Does θ_jk refer to the angle between a predicted lane and the Y axis? The paper says it is the normal direction. (A sketch of my interpretation follows.)
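My own illustration of how I read the width computation (an assumption about Equations 9-11, not the paper's code): take the horizontal offset between two neighbouring lanes at the same y sample and project it onto the lane's normal direction, i.e. multiply by cos(θ_jk), where θ_jk is the angle between the lane tangent and the Y axis:

import numpy as np

def pairwise_widths(x_left, x_right, ys):
    """x_left, x_right, ys: 1-D arrays of lane x-offsets sampled at the same y positions."""
    dxdy = np.gradient(x_left, ys)                # lane slope relative to the Y axis
    theta = np.arctan(dxdy)                       # angle between the lane tangent and the Y axis
    return (x_right - x_left) * np.cos(theta)     # lateral offset projected onto the normal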

No data_lists folder in the OpenLane dataset

Hi, thanks for your great work. I have a small question about the dataset. As you mention in your README regarding "data_lists":
├── data/
| └── Apollosim
| └── data_splits
| └── standard
| └── train.json
| └── test.json
| └── ...
| └── data_lists/...
| └── images/...
| └── cache_dense/...
| └── OpenLane
| └── data_splits/...
| └── data_lists/...
| └── images/...
| └── lane3d_1000/...
| └── cache_dense/...
| └── prev_data_release/...
| └── ONCE/
| └── raw_data/
| └── cam01/...
| └── annotations/
| └── train/...
| └── val/...
| └── ...

but there is no data_lists folder in the OpenLane dataset (a workaround sketch is given after the quoted README tree below).
Looking forward to your reply, thanks!

The OpenLane dataset README shows:

├── images
| ├── training
| | ├── segment-xxx
| | | ├── xxx.jpg
| | | └── ...
| | ├── segment-xxx
| | | ├── xxx.jpg
| | | └── ...
| | └── ...
| └── validation
| ├── segment-xxx
| | ├── xxx.jpg
| | └── ...
| ├── segment-xxx
| | ├── xxx.jpg
| | └── ...
| └── ...
├── cipo
| ├── training
| | ├── segment-xxx
| | | ├── xxx.jpg.json
| | | └── ...
| | ├── segment-xxx
| | | ├── xxx.jpg.json
| | | └── ...
| | └── ...
| └── validation
| ├── segment-xxx
| | ├── xxx.jpg.json
| | └── ...
| ├── segment-xxx
| | ├── xxx.jpg.json
| | └── ...
| └── ...
├── lane3d_300
| ├── training
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | └── ...
| ├── validation
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | └── ...
| └── test
| ├── curve_case
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | └── ...
| ├── extreme_weather_case
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | └── ...
| ├── intersection_case
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | └── ...
| ├── merge_split_case
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | └── ...
| ├── night_case
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | └── ...
| ├── up_down_case
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | └── ...
| ├── curve.txt
| ├── extreme_weather.txt
| ├── intersection.txt
| ├── merge_split.txt
| ├── night.txt
| └── up_down.txt
├── lane3d_1000
| ├── training
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | └── ...
| ├── validation
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | └── ...
| └── test
| ├── curve_case
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | └── ...
| ├── extreme_weather_case
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | └── ...
| ├── intersection_case
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | └── ...
| ├── merge_split_case
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | └── ...
| ├── night_case
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | └── ...
| ├── up_down_case
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | ├── segment-xxx
| | | ├── xxx.json
| | | └── ...
| | └── ...
| ├── 1000_curve.txt
| ├── 1000_extreme_weather.txt
| ├── 1000_intersection.txt
| ├── 1000_merge_split.txt
| ├── 1000_night.txt
| └── 1000_up_down.txt
└── scene
└── SCENE
└── scene.json

@spyflying
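Since the official OpenLane release does not ship a data_lists/ folder, one possible workaround is to generate the split lists yourself from the annotation folders. This is just my sketch; the exact line format the repo's loaders expect should be checked against openlane.py, and the paths below are assumptions:

import os

def write_data_list(anno_root, split, out_txt):
    """Write one relative sample path per line, e.g. 'training/segment-xxx/xxx.jpg'."""
    lines = []
    split_dir = os.path.join(anno_root, split)
    for segment in sorted(os.listdir(split_dir)):
        for fname in sorted(os.listdir(os.path.join(split_dir, segment))):
            if fname.endswith(".json"):
                lines.append(os.path.join(split, segment, fname.replace(".json", ".jpg")))
    os.makedirs(os.path.dirname(out_txt), exist_ok=True)
    with open(out_txt, "w") as f:
        f.write("\n".join(lines) + "\n")

write_data_list("data/OpenLane/lane3d_1000", "training", "data/OpenLane/data_lists/training.txt")
write_data_list("data/OpenLane/lane3d_1000", "validation", "data/OpenLane/data_lists/validation.txt")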

Training with the new OpenLane v2 dataset

Hello, when using the newly released OpenLane v2 dataset, is the directory structure still the one you show below?
[screenshot of the expected directory structure]
Also, I could not find the corresponding data_splits/ folder in the downloaded OpenLane v2.

What is the version of geffnet?

Hello, when I want to train based on tf_efficientnet_b3_ns-9d44bf68.pth, I get the error "AttributeError: Anchor3DLane: EfficientNet: module 'geffnet' has no attribute 'tf_efficientnet_b3_ns_s8'". My geffnet version is 1.0.2, and I cannot find any version that contains tf_efficientnet_b3_ns_s8. Could you please tell me which geffnet version you used?

Segmentation fault (core dumped) when running train.py

I'm sorry to bother you, but when running train.py, the last statement runner.run(data_loaders, cfg.workflow) in the train function produces the following error:

  • mmseg - INFO - workflow: [('train', 50000)], max: 50000 iters
  • mmseg - INFO - Checkpoints will be saved to /xxx//output/apollosim/anchor3dlane by HardDiskBackend
  • Segmentation fault (core dumped)

Have you ever encountered such a problem?

Training on the OpenLane dataset

We trained on the public OpenLane dataset, but a CUDA error appeared halfway through training. We suspect that the lane categories exceed the 21-class limit. However, when processing the dataset with openlane.py, we saw that out-of-range categories are remapped:

                if lane_results['category'] >= 21:
                    lane_results['category'] = 20

We cannot figure out where the problem is; do you have any suggestions? The error message is as follows:

2023-07-28 00:20:42,672 - mmseg - INFO - Exp name: anchor3dlane_iter.py 2023-07-28 00:20:42,673 - mmseg - INFO - Iter [3000/60000] lr: 2.000e-04, eta: 10:10:30, time: 0.642, data_time: 0.014, memory: 11367, batch_positives: 12.7812, batch_negatives: 450.0000, cls_loss: 0.1614, reg_losses_x: 0.0256, reg_losses_z: 0.0040, reg_losses_vis: 0.0297, liou_losses_x: 0.3897, liou_losses_z: 0.2364, cls_loss0: 0.0699, reg_losses_x0: 0.0508, reg_losses_z0: 0.0053, reg_losses_vis0: 0.0249, liou_losses_x0: 0.5498, liou_losses_z0: 0.2677, loss: 1.8151 2023-07-28 00:20:49,113 - mmseg - INFO - Iter [3010/60000] lr: 2.000e-04, eta: 10:10:24, time: 0.644, data_time: 0.013, memory: 11367, batch_positives: 13.5938, batch_negatives: 450.0000, cls_loss: 0.1474, reg_losses_x: 0.0386, reg_losses_z: 0.0039, reg_losses_vis: 0.0316, liou_losses_x: 0.3700, liou_losses_z: 0.2279, cls_loss0: 0.0648, reg_losses_x0: 0.0706, reg_losses_z0: 0.0048, reg_losses_vis0: 0.0262, liou_losses_x0: 0.5458, liou_losses_z0: 0.2555, loss: 1.7871 2023-07-28 00:20:55,559 - mmseg - INFO - Iter [3020/60000] lr: 2.000e-04, eta: 10:10:18, time: 0.645, data_time: 0.014, memory: 11367, batch_positives: 13.1438, batch_negatives: 450.0000, cls_loss: 0.1464, reg_losses_x: 0.0312, reg_losses_z: 0.0052, reg_losses_vis: 0.0316, liou_losses_x: 0.3480, liou_losses_z: 0.2309, cls_loss0: 0.0611, reg_losses_x0: 0.0473, reg_losses_z0: 0.0065, reg_losses_vis0: 0.0261, liou_losses_x0: 0.5236, liou_losses_z0: 0.2592, loss: 1.7170 2023-07-28 00:21:01,921 - mmseg - INFO - Iter [3030/60000] lr: 2.000e-04, eta: 10:10:10, time: 0.636, data_time: 0.014, memory: 11367, batch_positives: 11.4375, batch_negatives: 450.0000, cls_loss: 0.1548, reg_losses_x: 0.0307, reg_losses_z: 0.0042, reg_losses_vis: 0.0278, liou_losses_x: 0.3571, liou_losses_z: 0.2288, cls_loss0: 0.0599, reg_losses_x0: 0.0616, reg_losses_z0: 0.0067, reg_losses_vis0: 0.0242, liou_losses_x0: 0.5342, liou_losses_z0: 0.2702, loss: 1.7603 2023-07-28 00:21:08,344 - mmseg - INFO - Iter [3040/60000] lr: 2.000e-04, eta: 10:10:04, time: 0.642, data_time: 0.014, memory: 11367, batch_positives: 13.1125, batch_negatives: 450.0000, cls_loss: 0.1414, reg_losses_x: 0.0200, reg_losses_z: 0.0052, reg_losses_vis: 0.0308, liou_losses_x: 0.3512, liou_losses_z: 0.2308, cls_loss0: 0.0537, reg_losses_x0: 0.0501, reg_losses_z0: 0.0058, reg_losses_vis0: 0.0270, liou_losses_x0: 0.5265, liou_losses_z0: 0.2559, loss: 1.6984 2023-07-28 00:21:14,719 - mmseg - INFO - Iter [3050/60000] lr: 2.000e-04, eta: 10:09:56, time: 0.637, data_time: 0.014, memory: 11367, batch_positives: 13.2000, batch_negatives: 450.0000, cls_loss: 0.1403, reg_losses_x: 0.0277, reg_losses_z: 0.0052, reg_losses_vis: 0.0311, liou_losses_x: 0.3714, liou_losses_z: 0.2311, cls_loss0: 0.0684, reg_losses_x0: 0.0540, reg_losses_z0: 0.0072, reg_losses_vis0: 0.0258, liou_losses_x0: 0.5408, liou_losses_z0: 0.2696, loss: 1.7728 2023-07-28 00:21:21,130 - mmseg - INFO - Iter [3060/60000] lr: 2.000e-04, eta: 10:09:49, time: 0.641, data_time: 0.013, memory: 11367, batch_positives: 11.3625, batch_negatives: 450.0000, cls_loss: 0.1447, reg_losses_x: 0.0208, reg_losses_z: 0.0035, reg_losses_vis: 0.0295, liou_losses_x: 0.3702, liou_losses_z: 0.2274, cls_loss0: 0.0565, reg_losses_x0: 0.0476, reg_losses_z0: 0.0047, reg_losses_vis0: 0.0268, liou_losses_x0: 0.5384, liou_losses_z0: 0.2676, loss: 1.7376 2023-07-28 00:21:27,589 - mmseg - INFO - Iter [3070/60000] lr: 2.000e-04, eta: 10:09:44, time: 0.646, data_time: 0.014, memory: 11367, batch_positives: 13.2188, batch_negatives: 
450.0000, cls_loss: 0.1481, reg_losses_x: 0.0324, reg_losses_z: 0.0038, reg_losses_vis: 0.0312, liou_losses_x: 0.3801, liou_losses_z: 0.2369, cls_loss0: 0.0596, reg_losses_x0: 0.0729, reg_losses_z0: 0.0042, reg_losses_vis0: 0.0266, liou_losses_x0: 0.5654, liou_losses_z0: 0.2595, loss: 1.8206 2023-07-28 00:21:33,933 - mmseg - INFO - Iter [3080/60000] lr: 2.000e-04, eta: 10:09:36, time: 0.634, data_time: 0.013, memory: 11367, batch_positives: 13.8812, batch_negatives: 450.0000, cls_loss: 0.1477, reg_losses_x: 0.0295, reg_losses_z: 0.0069, reg_losses_vis: 0.0318, liou_losses_x: 0.3902, liou_losses_z: 0.2495, cls_loss0: 0.0649, reg_losses_x0: 0.0831, reg_losses_z0: 0.0071, reg_losses_vis0: 0.0274, liou_losses_x0: 0.5694, liou_losses_z0: 0.2682, loss: 1.8756 2023-07-28 00:21:40,287 - mmseg - INFO - Iter [3090/60000] lr: 2.000e-04, eta: 10:09:28, time: 0.635, data_time: 0.013, memory: 11367, batch_positives: 13.5938, batch_negatives: 450.0000, cls_loss: 0.1450, reg_losses_x: 0.0237, reg_losses_z: 0.0068, reg_losses_vis: 0.0308, liou_losses_x: 0.3682, liou_losses_z: 0.2500, cls_loss0: 0.0605, reg_losses_x0: 0.0485, reg_losses_z0: 0.0093, reg_losses_vis0: 0.0261, liou_losses_x0: 0.5408, liou_losses_z0: 0.2832, loss: 1.7929 2023-07-28 00:21:46,753 - mmseg - INFO - Iter [3100/60000] lr: 2.000e-04, eta: 10:09:22, time: 0.647, data_time: 0.015, memory: 11367, batch_positives: 13.6750, batch_negatives: 450.0000, cls_loss: 0.1374, reg_losses_x: 0.0236, reg_losses_z: 0.0057, reg_losses_vis: 0.0305, liou_losses_x: 0.3791, liou_losses_z: 0.2349, cls_loss0: 0.0578, reg_losses_x0: 0.0576, reg_losses_z0: 0.0067, reg_losses_vis0: 0.0271, liou_losses_x0: 0.5623, liou_losses_z0: 0.2624, loss: 1.7851 2023-07-28 00:21:53,178 - mmseg - INFO - Iter [3110/60000] lr: 2.000e-04, eta: 10:09:16, time: 0.642, data_time: 0.013, memory: 11367, batch_positives: 13.3875, batch_negatives: 450.0000, cls_loss: 0.1396, reg_losses_x: 0.0203, reg_losses_z: 0.0043, reg_losses_vis: 0.0323, liou_losses_x: 0.3550, liou_losses_z: 0.2296, cls_loss0: 0.0614, reg_losses_x0: 0.0441, reg_losses_z0: 0.0054, reg_losses_vis0: 0.0289, liou_losses_x0: 0.5231, liou_losses_z0: 0.2590, loss: 1.7030 2023-07-28 00:21:59,601 - mmseg - INFO - Iter [3120/60000] lr: 2.000e-04, eta: 10:09:09, time: 0.642, data_time: 0.013, memory: 11367, batch_positives: 13.2500, batch_negatives: 450.0000, cls_loss: 0.1420, reg_losses_x: 0.0206, reg_losses_z: 0.0036, reg_losses_vis: 0.0315, liou_losses_x: 0.3702, liou_losses_z: 0.2274, cls_loss0: 0.0663, reg_losses_x0: 0.0586, reg_losses_z0: 0.0050, reg_losses_vis0: 0.0270, liou_losses_x0: 0.5430, liou_losses_z0: 0.2599, loss: 1.7553 2023-07-28 00:22:06,054 - mmseg - INFO - Iter [3130/60000] lr: 2.000e-04, eta: 10:09:03, time: 0.645, data_time: 0.014, memory: 11367, batch_positives: 12.5625, batch_negatives: 450.0000, cls_loss: 0.1473, reg_losses_x: 0.0194, reg_losses_z: 0.0051, reg_losses_vis: 0.0305, liou_losses_x: 0.3650, liou_losses_z: 0.2466, cls_loss0: 0.0599, reg_losses_x0: 0.0498, reg_losses_z0: 0.0066, reg_losses_vis0: 0.0257, liou_losses_x0: 0.5442, liou_losses_z0: 0.2827, loss: 1.7829 2023-07-28 00:22:12,533 - mmseg - INFO - Iter [3140/60000] lr: 2.000e-04, eta: 10:08:58, time: 0.648, data_time: 0.014, memory: 11367, batch_positives: 12.8063, batch_negatives: 450.0000, cls_loss: 0.1401, reg_losses_x: 0.0304, reg_losses_z: 0.0041, reg_losses_vis: 0.0299, liou_losses_x: 0.3633, liou_losses_z: 0.2325, cls_loss0: 0.0563, reg_losses_x0: 0.0659, reg_losses_z0: 0.0052, reg_losses_vis0: 0.0265, liou_losses_x0: 0.5352, 
liou_losses_z0: 0.2644, loss: 1.7539 2023-07-28 00:22:19,005 - mmseg - INFO - Iter [3150/60000] lr: 2.000e-04, eta: 10:08:52, time: 0.647, data_time: 0.014, memory: 11367, batch_positives: 12.8063, batch_negatives: 450.0000, cls_loss: 0.1518, reg_losses_x: 0.0198, reg_losses_z: 0.0054, reg_losses_vis: 0.0323, liou_losses_x: 0.3584, liou_losses_z: 0.2361, cls_loss0: 0.0587, reg_losses_x0: 0.0531, reg_losses_z0: 0.0068, reg_losses_vis0: 0.0268, liou_losses_x0: 0.5368, liou_losses_z0: 0.2713, loss: 1.7572 2023-07-28 00:22:25,480 - mmseg - INFO - Iter [3160/60000] lr: 2.000e-04, eta: 10:08:47, time: 0.648, data_time: 0.014, memory: 11367, batch_positives: 11.9625, batch_negatives: 450.0000, cls_loss: 0.1476, reg_losses_x: 0.0203, reg_losses_z: 0.0039, reg_losses_vis: 0.0286, liou_losses_x: 0.3592, liou_losses_z: 0.2322, cls_loss0: 0.0604, reg_losses_x0: 0.0450, reg_losses_z0: 0.0062, reg_losses_vis0: 0.0247, liou_losses_x0: 0.5252, liou_losses_z0: 0.2661, loss: 1.7194 2023-07-28 00:22:31,967 - mmseg - INFO - Iter [3170/60000] lr: 2.000e-04, eta: 10:08:41, time: 0.649, data_time: 0.014, memory: 11367, batch_positives: 13.4187, batch_negatives: 450.0000, cls_loss: 0.1473, reg_losses_x: 0.0182, reg_losses_z: 0.0046, reg_losses_vis: 0.0329, liou_losses_x: 0.3593, liou_losses_z: 0.2498, cls_loss0: 0.0609, reg_losses_x0: 0.0429, reg_losses_z0: 0.0058, reg_losses_vis0: 0.0275, liou_losses_x0: 0.5267, liou_losses_z0: 0.2846, loss: 1.7605 2023-07-28 00:22:38,414 - mmseg - INFO - Iter [3180/60000] lr: 2.000e-04, eta: 10:08:35, time: 0.645, data_time: 0.014, memory: 11367, batch_positives: 12.9000, batch_negatives: 450.0000, cls_loss: 0.1436, reg_losses_x: 0.0247, reg_losses_z: 0.0040, reg_losses_vis: 0.0299, liou_losses_x: 0.3473, liou_losses_z: 0.2335, cls_loss0: 0.0534, reg_losses_x0: 0.0536, reg_losses_z0: 0.0048, reg_losses_vis0: 0.0250, liou_losses_x0: 0.5128, liou_losses_z0: 0.2604, loss: 1.6928 2023-07-28 00:22:44,884 - mmseg - INFO - Iter [3190/60000] lr: 2.000e-04, eta: 10:08:30, time: 0.647, data_time: 0.014, memory: 11367, batch_positives: 14.3000, batch_negatives: 450.0000, cls_loss: 0.1401, reg_losses_x: 0.0227, reg_losses_z: 0.0049, reg_losses_vis: 0.0356, liou_losses_x: 0.3705, liou_losses_z: 0.2517, cls_loss0: 0.0573, reg_losses_x0: 0.0490, reg_losses_z0: 0.0060, reg_losses_vis0: 0.0310, liou_losses_x0: 0.5568, liou_losses_z0: 0.2878, loss: 1.8134 /pytorch/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:312: operator(): block: [0,0,0], thread: [8,0,0] Assertion idx_dim >= 0 && idx_dim < index_size && "index out of bounds"failed. /pytorch/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:312: operator(): block: [0,0,0], thread: [13,0,0] Assertionidx_dim >= 0 && idx_dim < index_size && "index out of bounds"failed. /pytorch/aten/src/ATen/native/cuda/ScatterGatherKernel.cu:312: operator(): block: [0,0,0], thread: [18,0,0] Assertionidx_dim >= 0 && idx_dim < index_size && "index out of bounds" failed. 
Traceback (most recent call last): File "/snap/pycharm-community/342/plugins/python-ce/helpers/pydev/pydevd.py", line 1500, in _exec pydev_imports.execfile(file, globals, locals) # execute the script File "/snap/pycharm-community/342/plugins/python-ce/helpers/pydev/_pydev_imps/_pydev_execfile.py", line 18, in execfile exec(compile(contents+"\n", file, 'exec'), glob, loc) File "/home/buaa/songyue/Anchor3DLane-main/tools/train.py", line 364, in <module> main() File "/home/buaa/songyue/Anchor3DLane-main/tools/train.py", line 354, in main train( File "/home/buaa/songyue/Anchor3DLane-main/tools/train.py", line 242, in train runner.run(data_loaders, cfg.workflow) File "/home/buaa/anaconda3/envs/lane3d/lib/python3.8/site-packages/mmcv/runner/iter_based_runner.py", line 144, in run iter_runner(iter_loaders[i], **kwargs) File "/home/buaa/anaconda3/envs/lane3d/lib/python3.8/site-packages/mmcv/runner/iter_based_runner.py", line 64, in train outputs = self.model.train_step(data_batch, self.optimizer, **kwargs) File "/home/buaa/anaconda3/envs/lane3d/lib/python3.8/site-packages/mmcv/parallel/data_parallel.py", line 77, in train_step return self.module.train_step(*inputs[0], **kwargs[0]) File "/home/buaa/songyue/Anchor3DLane-main/mmseg/models/lane_detector/anchor_3dlane.py", line 477, in train_step losses, other_vars = self(**data_batch) File "/home/buaa/anaconda3/envs/lane3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/buaa/songyue/Anchor3DLane-main/mmseg/models/lane_detector/anchor_3dlane.py", line 398, in forward return self.forward_train(img, mask, img_metas, **kwargs) File "/home/buaa/anaconda3/envs/lane3d/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 116, in new_func return old_func(*args, **kwargs) File "/home/buaa/songyue/Anchor3DLane-main/mmseg/models/lane_detector/anchor_3dlane.py", line 448, in forward_train losses, other_vars = self.loss(output, gt_3dlanes, output_aux) File "/home/buaa/anaconda3/envs/lane3d/lib/python3.8/site-packages/mmcv/runner/fp16_utils.py", line 205, in new_func return old_func(*args, **kwargs) File "/home/buaa/songyue/Anchor3DLane-main/mmseg/models/lane_detector/anchor_3dlane.py", line 411, in loss anchor_losses = self.lane_loss(proposals_list, gt_3dlanes) File "/home/buaa/anaconda3/envs/lane3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/buaa/songyue/Anchor3DLane-main/mmseg/models/losses/lane_loss.py", line 137, in forward cls_loss = focal_loss(cls_pred, cls_target) File "/home/buaa/anaconda3/envs/lane3d/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/home/buaa/songyue/Anchor3DLane-main/mmseg/models/losses/kornia_focal.py", line 145, in forward return focal_loss(input, target, self.alpha, self.gamma, self.reduction, self.eps) File "/home/buaa/songyue/Anchor3DLane-main/mmseg/models/losses/kornia_focal.py", line 84, in focal_loss target_one_hot: torch.Tensor = one_hot(target, num_classes=input.shape[1], device=input.device, dtype=input.dtype) # [b, c, h, w] File "/home/buaa/songyue/Anchor3DLane-main/mmseg/models/losses/kornia_focal.py", line 50, in one_hot return one_hot.scatter_(1, labels.unsqueeze(1), 1.0) + eps RuntimeError: CUDA error: device-side assert triggered
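A minimal debugging sketch (mine, not part of the repository): the device-side assert comes from scatter_ inside the one_hot call of the focal loss, which fails whenever a class target falls outside [0, num_classes). Checking the targets on CPU right before the loss should reveal which category value is out of range:

import torch

def check_cls_targets(cls_target, num_classes):
    """Raise a readable error if any class index would overflow the one-hot scatter."""
    bad = (cls_target < 0) | (cls_target >= num_classes)
    if bad.any():
        raise ValueError(
            f"{int(bad.sum())} class targets outside [0, {num_classes}): "
            f"{torch.unique(cls_target[bad]).tolist()}")

# hypothetical usage inside the lane loss, assuming cls_pred has shape [N, num_classes]:
# check_cls_targets(cls_target.cpu(), num_classes=cls_pred.shape[1])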

Issue splitting the ONCE dataset with once.py

Hello, we ran into some problems when splitting the ONCE dataset with the once.py you provide. In the extract_data function,
[screenshot of extract_data]
training samples with more than 8 lanes are skipped, while such samples are still processed when test_mode is true. However, in the transform_annotation function, max_lanes=8,
[screenshot of transform_annotation]
which means validation samples with more than 8 lanes will index out of bounds and raise an error. These two places seem contradictory; how do you handle this? (A small workaround sketch follows.)
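One hedged way to reconcile the two code paths (my suggestion, not the authors'): cap the number of lanes inside transform_annotation as well, so validation samples with more than max_lanes lanes are truncated instead of indexing out of bounds:

def cap_lanes(lanes, max_lanes=8):
    """Keep at most max_lanes lanes; 8 is the value quoted above."""
    return lanes[:max_lanes]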

EWC

May I ask where in the code the equal-width constraint is added?

Error occured while reproducing result

Hello, thanks for your great work. I am reproducing this code, but some errors appeared during training:
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:365: operator(): block: [0,0,0], thread: [7,0,0] Assertion idx_dim >= 0 && idx_dim < index_size && "index out of bounds" failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:365: operator(): block: [0,0,0], thread: [9,0,0] Assertion idx_dim >= 0 && idx_dim < index_size && "index out of bounds" failed.
../aten/src/ATen/native/cuda/ScatterGatherKernel.cu:365: operator(): block: [0,0,0], thread: [11,0,0] Assertion idx_dim >= 0 && idx_dim < index_size && "index out of bounds" failed.

[screenshot of the full traceback]

I used the OpenLane dataset and tried training with the configs "anchor3dlane_effb3.py" and "anchor3dlane_iter.py"; both trained for some iterations and then failed with the errors above.

Note that the OpenLane dataset did not seem to match this code at first (which was also reported in another open issue: #5), so I reconstructed "data_lists/training.txt" and "data_lists/validation.txt" from the file lists in cache_dense to circumvent the file-missing error, and ended up here.

I assume this index-exceeding error is caused by different versions of the OpenLane dataset, and that some hard-coded index is responsible? (A small inspection sketch follows.)
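A hedged sketch for narrowing this down (the pickle layout and field names are my assumptions, not taken from the repository): scan the cached annotations and report any lane category that reaches or exceeds the number of classes the config expects:

import glob
import pickle

NUM_CLASSES = 21  # assumed from the 21-class remapping mentioned in the issue above

for path in glob.glob("data/OpenLane/cache_dense/*.pkl"):
    with open(path, "rb") as f:
        anno = pickle.load(f)
    for lane in anno.get("lanes", []):            # hypothetical key
        cat = lane.get("category", 0)             # hypothetical key
        if not 0 <= cat < NUM_CLASSES:
            print(path, "has out-of-range category", cat)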

`anchor_assign = False` in multi-frame with iter

https://github.com/tusen-ai/Anchor3DLane/blob/bf64bd152e4550c43d9e61b19e4bc18b83661a32/configs/openlane/anchor3dlane_mf_iter.py#LL118C23-L118C28

Hi, I am confused: why does the Output-AUX branch use anchor assignment while the Output branch uses proposal assignment?

with torch.no_grad():
    if self.anchor_assign:
        # assign targets by matching the fixed 3D anchors against the ground-truth lanes
        positives_mask, negatives_mask, target_positives_indices = self.assigner.match_proposals_with_targets(
            anchors, target)
    else:
        # assign targets by matching the refined proposals against the ground-truth lanes
        positives_mask, negatives_mask, target_positives_indices = self.assigner.match_proposals_with_targets(
            proposals[:, :5+self.anchor_len*3], target)

Thanks !

How to evaluate on multiple gpus ?

Hi, I was replicating your experiments recently and got errors when running dist_test.sh for multi-GPU evaluation (single-GPU evaluation works fine).
I saw workflow = [('train', 10000000)] in the configs, which seems to bypass the multi-GPU evaluation process.
So what is the problem, and how can it be fixed?

Thanks a lot.

ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -11) local_rank: 0 (pid: 1416) of binary:

When I try to train the project with the command "bash tools/dist_train.sh /home/com14u07/changyongshu/projects/bev/Anchor3DLane/configs/openlane/anchor3dlane.py 1", the following error arises:

2023-06-07 13:25:33,779 - mmseg - INFO - Checkpoints will be saved to projects/bev/Anchor3DLane/output/openlane/anchor3dlane by HardDiskBackend.
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -11) local_rank: 0 (pid: 1416) of binary: /workspace/miniconda3/envs/lane3d/bin/python
Traceback (most recent call last):
File "/workspace/miniconda3/envs/lane3d/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"main", mod_spec)
File "/workspace/miniconda3/envs/lane3d/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/workspace/miniconda3/envs/lane3d/lib/python3.7/site-packages/torch/distributed/launch.py", line 193, in
main()
File "/workspace/miniconda3/envs/lane3d/lib/python3.7/site-packages/torch/distributed/launch.py", line 189, in main
launch(args)
File "/workspace/miniconda3/envs/lane3d/lib/python3.7/site-packages/torch/distributed/launch.py", line 174, in launch
run(args)
File "/workspace/miniconda3/envs/lane3d/lib/python3.7/site-packages/torch/distributed/run.py", line 692, in run
)(*cmd_args)
File "/workspace/miniconda3/envs/lane3d/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 116, in call
return launch_agent(self._config, self._entrypoint, list(args))
File "/workspace/miniconda3/envs/lane3d/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 246, in launch_agent
failures=result.failures,
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:


          tools/train.py FAILED              

=================================================
Root Cause:
[0]:
time: 2023-06-07_13:25:37
rank: 0 (local_rank: 0)
exitcode: -11 (pid: 1416)
error_file: <N/A>
msg: "Signal 11 (SIGSEGV) received by PID 1416"

Other Failures:
<NO_OTHER_FAILURES>


Issue splitting the OpenLane dataset with the openlane.py script

Hello, we ran into some problems when splitting the OpenLane dataset with the openlane.py you provide. After running it with --merge and --generate, the train.txt file under data_lists contains only a single line, as shown:
[screenshot of train.txt]
The .pkl files generated in the cache_dense folder also look incomplete, and when training starts we get:
[screenshot of the training error]
There seem to be some problems with your openlane.py. Looking forward to your reply, thank you!
