Training OpenPCDet in a Container
1. Pull a ready-to-run image
docker pull djiajun1206/pcdet:pytorch1.6
2. Create a container from the image
nvidia-docker run -it --name pcdet --privileged --shm-size=15G \
    -v /etc/localtime:/etc/localtime:ro \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -e DISPLAY=unix$DISPLAY -e GDK_SCALE -e GDK_DPI_SCALE \
    -v /home/user/data/:/data \
    b478940ec7f6 /bin/bash
3. Clone the code from Git, then build and install
git clone https://github.com/open-mmlab/OpenPCDet.git
Enter the repo root and run setup.py; the build commands are:
cd OpenPCDet
python setup.py develop
4. Prepare training data
Inside the container, go to the OpenPCDet directory and create your own dataset under the data subdirectory. Point clouds should be .npy files and labels .txt files. Each label line contains: box center, box dimensions (length, width, height), heading angle, and class name, for example:
1.50 1.46 0.10 5.12 1.85 4.13 1.56 Vehicle
5.54 0.57 0.41 1.08 0.74 1.95 1.57 Pedestrian
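Assuming the field order above (seven numbers: center x y z, dimensions dx dy dz, heading, then the class name), a label line can be parsed with a few lines of Python. This is an illustrative sketch, not a function from OpenPCDet itself:

```python
# Parse one custom-dataset label line, assumed format:
# x y z dx dy dz heading class_name
def parse_label_line(line: str) -> dict:
    parts = line.split()
    values = [float(v) for v in parts[:7]]
    return {
        "center": values[0:3],   # box center (x, y, z)
        "size": values[3:6],     # box dimensions (l, w, h)
        "heading": values[6],    # yaw angle in radians
        "name": parts[7],        # class name, e.g. 'Vehicle'
    }

box = parse_label_line("1.50 1.46 0.10 5.12 1.85 4.13 1.56 Vehicle")
# box["name"] -> 'Vehicle', box["heading"] -> 1.56
```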
The data is laid out as follows; the ImageSets directory holds the train/val split files:
OpenPCDet
├── data
│ ├── custom
│ │ │── ImageSets
│ │ │ │── train.txt
│ │ │ │── val.txt
│ │ │── points
│ │ │ │── 000000.npy
│ │ │ │── 999999.npy
│ │ │── labels
│ │ │ │── 000000.txt
│ │ │ │── 999999.txt
├── pcdet
├── tools
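Before generating infos, it is worth checking that every frame id listed in the split files actually has a matching point cloud and label file. A minimal sketch (paths follow the tree above; the helper name is made up for illustration):

```python
from pathlib import Path

def check_custom_split(root, split: str = "train"):
    """Return frame ids from ImageSets/<split>.txt that are missing
    either points/<id>.npy or labels/<id>.txt under data/custom."""
    root = Path(root)
    ids = (root / "ImageSets" / f"{split}.txt").read_text().split()
    return [i for i in ids
            if not (root / "points" / f"{i}.npy").exists()
            or not (root / "labels" / f"{i}.txt").exists()]

# An empty return value means the split is consistent, e.g.:
# check_custom_split("OpenPCDet/data/custom", "train")
```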
5. Configure training parameters
(1) Edit tools/cfgs/dataset_configs/custom_dataset.yaml
The main things to change are the point cloud range, the detection classes (no change is needed if your training classes are already ['Vehicle', 'Pedestrian', 'Cyclist']), the dataset split/info file names, and the voxel size:
DATASET: 'CustomDataset'
DATA_PATH: '../data/custom'
POINT_CLOUD_RANGE: [-75.2, -75.2, -2, 75.2, 75.2, 4]
MAP_CLASS_TO_KITTI: {
'Vehicle': 'Car',
'Pedestrian': 'Pedestrian',
'Cyclist': 'Cyclist',
}
DATA_SPLIT: {
'train': train,
'test': val
}
INFO_PATH: {
'train': [custom_infos_train.pkl],
'test': [custom_infos_val.pkl],
}
POINT_FEATURE_ENCODING: {
encoding_type: absolute_coordinates_encoding,
used_feature_list: ['x', 'y', 'z', 'intensity'],
src_feature_list: ['x', 'y', 'z', 'intensity'],
}
DATA_AUGMENTOR:
    DISABLE_AUG_LIST: ['placeholder']
    AUG_CONFIG_LIST:
        - NAME: gt_sampling
          USE_ROAD_PLANE: False
          DB_INFO_PATH:
              - custom_dbinfos_train.pkl
          PREPARE: {
             filter_by_min_points: ['Vehicle:5', 'Pedestrian:5', 'Cyclist:5'],
          }
          SAMPLE_GROUPS: ['Vehicle:20', 'Pedestrian:15', 'Cyclist:15']
          NUM_POINT_FEATURES: 4
          DATABASE_WITH_FAKELIDAR: False
          REMOVE_EXTRA_WIDTH: [0.0, 0.0, 0.0]
          LIMIT_WHOLE_SCENE: True

        - NAME: random_world_flip
          ALONG_AXIS_LIST: ['x', 'y']

        - NAME: random_world_rotation
          WORLD_ROT_ANGLE: [-0.78539816, 0.78539816]

        - NAME: random_world_scaling
          WORLD_SCALE_RANGE: [0.95, 1.05]

DATA_PROCESSOR:
    - NAME: mask_points_and_boxes_outside_range
      REMOVE_OUTSIDE_BOXES: True

    - NAME: shuffle_points
      SHUFFLE_ENABLED: {
        'train': True,
        'test': False
      }

    - NAME: transform_points_to_voxels
      VOXEL_SIZE: [0.1, 0.1, 0.15]
      MAX_POINTS_PER_VOXEL: 5
      MAX_NUMBER_OF_VOXELS: {
        'train': 150000,
        'test': 150000
      }
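POINT_CLOUD_RANGE and VOXEL_SIZE together determine the voxel grid resolution, which some backbones require to be divisible by their downsampling factor. A quick sanity check of the values above:

```python
# Voxel grid size per axis = (range_max - range_min) / voxel_size,
# using the values from custom_dataset.yaml above.
point_cloud_range = [-75.2, -75.2, -2.0, 75.2, 75.2, 4.0]
voxel_size = [0.1, 0.1, 0.15]

grid = [round((point_cloud_range[i + 3] - point_cloud_range[i]) / voxel_size[i])
        for i in range(3)]
# grid -> [1504, 1504, 40]  (x, y, z voxel counts)
```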
(2) The detection classes also need a matching change in pcdet/datasets/custom/custom_dataset.py:
create_custom_infos(
dataset_cfg=dataset_cfg,
class_names=['Vehicle', 'Pedestrian', 'Cyclist'],
data_path=ROOT_DIR / 'data' / 'custom',
save_path=ROOT_DIR / 'data' / 'custom',
)
6. Generate the training dataset infos
For a custom dataset, generate them as follows:
python -m pcdet.datasets.custom.custom_dataset create_custom_infos tools/cfgs/dataset_configs/custom_dataset.yaml
If you want to use the public KITTI dataset instead, download it from the official site (or from mirror links shared online), create the following directories, and unzip the data into them:
root@04f4d304ce39:/data# mkdir -p OpenPCDet/data/kitti/training
root@04f4d304ce39:/data# mkdir OpenPCDet/data/kitti/training/calib
root@04f4d304ce39:/data# mkdir OpenPCDet/data/kitti/training/velodyne
root@04f4d304ce39:/data# mkdir OpenPCDet/data/kitti/training/label_2
root@04f4d304ce39:/data# mkdir OpenPCDet/data/kitti/training/image_2
# Unzip, e.g.: unzip velodyne_training_1.zip -d OpenPCDet/data/kitti/training/velodyne/ ....
Then generate the infos:
python -m pcdet.datasets.kitti.kitti_dataset create_kitti_infos tools/cfgs/dataset_configs/kitti_dataset.yaml
7. Start training
#CUDA_VISIBLE_DEVICES=0 python train.py --cfg_file ${CONFIG_FILE}
CUDA_VISIBLE_DEVICES=1 python train.py --cfg_file ./cfgs/kitti_models/pointpillar.yaml
If the sparse convolution library throws errors, rebuild and reinstall spconv:
git clone -b v1.2.1 https://github.com/djiajunustc/spconv spconv --recursive
cd spconv
python setup.py bdist_wheel
cd ./dist
pip install *.whl
Training starts:
2022-12-08 09:36:45,180 INFO **********************Start training cfgs/kitti_models/pointpillar(default)**********************
epochs:   0%| | 0/80 [00:00<?, ?it/s]
2022-12-08 09:36:49,308
INFO epoch: 0/80, acc_iter=1, cur_iter=0/928, batch_size=4, time_cost(epoch): 00:03/49:01, time_cost(all): 00:04/65:22:11, loss=3.2621874809265137, d_time=1.18(1.18), f_time=1.99(1.99), b_time=3.17(3.17), lr=0.0002999999999999999
2022-12-08 09:37:13,825
INFO epoch: 0/80, acc_iter=50, cur_iter=49/928, batch_size=4, time_cost(epoch): 00:27/08:06, time_cost(all): 00:28/11:24:42, loss=1.8049968481063843, d_time=0.00(0.03), f_time=0.38(0.53), b_time=0.38(0.55), lr=0.00030001813839255174
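To watch the loss trend without extra tooling, the loss values can be scraped from such log lines with a regex. A small sketch, with the log format taken from the lines above:

```python
import re

# Matches "loss=<float>" in an OpenPCDet training log line.
LOSS_RE = re.compile(r"loss=([0-9]+\.[0-9]+)")

def extract_losses(lines):
    """Return the loss value from each log line that reports one."""
    return [float(m.group(1)) for line in lines
            if (m := LOSS_RE.search(line))]

log = [
    "INFO epoch: 0/80, acc_iter=1, loss=3.2621874809265137, lr=0.0003",
    "INFO epoch: 0/80, acc_iter=50, loss=1.8049968481063843, lr=0.0003",
]
losses = extract_losses(log)
# losses -> [3.2621874809265137, 1.8049968481063843]
```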
8. Visualization test
python demo.py --cfg_file ./cfgs/kitti_models/pointpillar.yaml --data_path kitti/testing/velodyne/ --ckpt pointpillar/default/ckpt/checkpoint_epoch_80.pth
Issue 1:
root@04f4d304ce39:/data/wyf/OpenPCDet# pip install torch==1.9 https://pypi.tuna.tsinghua.edu.cn/simple Collecting https://pypi.tuna.tsinghua.edu.cn/simple Downloading https://pypi.tuna.tsinghua.edu.cn/simple (24.7 MB) ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 24.7/24.7 MB 52.7 kB/s eta 0:00:00
ERROR: Cannot unpack file /tmp/pip-unpack-cp2uotjr/simple.html
(downloaded from /tmp/pip-req-build-b3e8cmaa, content-type: text/html);
cannot detect archive format ERROR: Cannot determine archive format
of /tmp/pip-req-build-b3e8cmaa
# Fix (the index URL must be passed with -i, not as a positional argument):
pip install -i https://pypi.douban.com/simple --trusted-host pypi.douban.com torch==1.9
Issue 2:
File "../pcdet/datasets/augmentor/database_sampler.py", line 498, in __call__
data_dict, sampled_gt_boxes, total_valid_sampled_dict, sampled_mv_height, sampled_gt_boxes2d
File "../pcdet/datasets/augmentor/database_sampler.py", line 372, in add_sampled_boxes_to_scene
sampled_gt_boxes, data_dict['road_plane'], data_dict['calib']
KeyError: 'road_plane'
Set USE_ROAD_PLANE to False in the data augmentation section of the config file:
DATA_AUGMENTOR:
    DISABLE_AUG_LIST: ['placeholder']
    AUG_CONFIG_LIST:
        - NAME: gt_sampling
          USE_ROAD_PLANE: False
          DB_INFO_PATH:
              - kitti_dbinfos_train.pkl
          PREPARE: {
             filter_by_min_points: ['Car:5', 'Pedestrian:5', 'Cyclist:5'],
             filter_by_difficulty: [-1],
          }
Issue 3:
nvcc fatal : Unsupported gpu architecture 'compute_80' means the GPU's compute capability is not supported by the installed PyTorch/CUDA version; upgrade PyTorch.
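The `compute_80` in the message is just nvcc's name for compute capability 8.0 (Ampere). The mapping from a GPU's capability tuple to that flag can be sketched as follows; the helper name is made up for illustration:

```python
# nvcc virtual-architecture flags are "compute_<major><minor>" built from
# the GPU's compute capability, e.g. (8, 0) for an Ampere A100.
def nvcc_arch_flag(capability: tuple) -> str:
    major, minor = capability
    return f"compute_{major}{minor}"

# With torch installed, you could pass in
# torch.cuda.get_device_capability(0) here.
print(nvcc_arch_flag((8, 0)))  # prints "compute_80"
```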