Commit fa231d7 (v1.0), parent 6b7e8ac. 64 files changed, +7416 -0 lines.

DATASETS.md

# Dataset setup

# [Human3.6M](http://vision.imar.ro/human3.6m/)
The code for Human3.6M data preparation is borrowed from [VideoPose3D](https://github.com/facebookresearch/VideoPose3D), [SemGCN](https://github.com/garyzhao/SemGCN), and [EvoSkeleton](https://github.com/Nicholasli1995/EvoSkeleton).

## Prepare the ground-truth 2D-3D data pairs for Human3.6M
* Setup from the original source (recommended)
    * Please follow the instructions from [VideoPose3D](https://github.com/facebookresearch/VideoPose3D/blob/master/DATASETS.md) to process the data from the official [Human3.6M](http://vision.imar.ro/human3.6m/) website.

* Setup from the preprocessed dataset
    * Get the preprocessed `h36m.zip`: please follow the instructions from [SemGCN](https://github.com/garyzhao/SemGCN/blob/master/data/README.md) to obtain `h36m.zip`.
    * Convert `h36m.zip` to the ground-truth 2D-3D npz files: process `h36m.zip` with `prepare_data_h36m.py` to get `data_3d_h36m.npz` and `data_2d_h36m_gt.npz`:

```sh
cd data
python prepare_data_h36m.py --from-archive h36m.zip
cd ..
```
After this step, you should end up with two files in the `data` directory: `data_3d_h36m.npz` for the 3D poses and `data_2d_h36m_gt.npz` for the ground-truth 2D poses. The directory will look like:
```
${PoseAug}
├── data
    ├── data_3d_h36m.npz
    ├── data_2d_h36m_gt.npz
```
## Prepare other detected 2D poses for Human3.6M (optional)
In this step, you need to download the detected 2D pose npz files and delete the Neck/Nose axis for every subject and action (i.e., the array shape changes from n×17×2 to n×16×2, where n is the number of frames).

* To download the Det and CPN 2D poses, please follow the instructions of [VideoPose3D](https://github.com/facebookresearch/VideoPose3D/blob/master/DATASETS.md) and download `data_2d_h36m_cpn_ft_h36m_dbb.npz` and `data_2d_h36m_detectron_ft_h36m.npz`.

```sh
cd data
wget https://dl.fbaipublicfiles.com/video-pose-3d/data_2d_h36m_cpn_ft_h36m_dbb.npz
wget https://dl.fbaipublicfiles.com/video-pose-3d/data_2d_h36m_detectron_ft_h36m.npz
cd ..
```

* To download the HHR 2D poses, please follow the instructions of [EvoSkeleton](https://github.com/Nicholasli1995/EvoSkeleton/blob/master/docs/HHR.md), download `twoDPose_HRN_test.npy` and `twoDPose_HRN_train.npy`, and convert them to the same format as `data_2d_h36m_gt.npz`.

* You can also directly download our preprocessed 16-joint detected 2D poses from [detect2D](https://drive.google.com/drive/folders/1jVyz9HdT0Jq3-YPZnOQ6GEcOVDRZAifK?usp=sharing).
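The Neck/Nose removal described above can be sketched in a few lines. This is a minimal sketch, assuming the Neck/Nose joint sits at index 9 of the 17-joint Human3.6M ordering (verify this index against your own data); `drop_neck_nose` is a hypothetical helper, not part of the repository:

```python
import numpy as np

# Assumed index of the Neck/Nose joint in the 17-joint H36M ordering;
# verify against your data before converting.
NECK_NOSE_IDX = 9

def drop_neck_nose(poses_17):
    """Drop one joint axis: (n, 17, 2) -> (n, 16, 2)."""
    return np.delete(poses_17, NECK_NOSE_IDX, axis=1)

# Example on synthetic data:
fake = np.zeros((100, 17, 2))
print(drop_neck_nose(fake).shape)  # (100, 16, 2)
```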
Until here, you will have a data folder:
```
${PoseAug}
├── data
    ├── data_3d_h36m.npz
    ├── data_2d_h36m_gt.npz
    ├── data_2d_h36m_detectron_ft_h36m.npz
    ├── data_2d_h36m_cpn_ft_h36m_dbb.npz
    ├── data_2d_h36m_hr.npz
```
Please make sure the 2D data all use the 16-joint setting.
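To double-check the joint count, a small sanity check can be run over each npz file. This is a sketch assuming the VideoPose3D-style layout, where each 2D npz stores a nested `{subject: {action: [per-camera arrays]}}` dict under the `positions_2d` key; adjust the key names if your files differ:

```python
import numpy as np

def check_16_joints(npz_path):
    """Assert that every 2D pose array in the file has 16 joints."""
    data = np.load(npz_path, allow_pickle=True)
    positions = data['positions_2d'].item()  # nested dict of pose arrays
    for subject, actions in positions.items():
        for action, cameras in actions.items():
            for poses in cameras:
                assert poses.shape[-2] == 16, (
                    f'{subject}/{action}: expected 16 joints, got {poses.shape[-2]}')

# Usage (paths as in the data folder above):
# check_16_joints('data/data_2d_h36m_gt.npz')
```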
# [3DHP](http://gvv.mpi-inf.mpg.de/3dhp-dataset/)
The code for 3DHP data preparation is borrowed from [SPIN](https://github.com/nkolot/SPIN).

* Please follow the instructions from [SPIN](https://github.com/nkolot/SPIN/blob/master/fetch_data.sh) to download the preprocessed compressed file `dataset_extras.tar.gz`, then unzip it to get `mpi_inf_3dhp_valid.npz` (24 joints) and put it at `data_extra/dataset_extras/mpi_inf_3dhp_valid.npz`.
* Then process `dataset_extras/mpi_inf_3dhp_valid.npz` with `prepare_data_3dhp.py` or `prepare_data_3dhp.ipynb` to get `test_3dhp.npz` (16 joints) and place it at `data_extra/test_set`.

Until here, you will have a data_extra folder:
```
${PoseAug}
├── data_extra
    ├── bone_length_npy
        ├── hm36s15678_bl_templates.npy
    ├── dataset_extras
        ├── mpi_inf_3dhp_valid.npz
        ├── ... (not in use)
    ├── test_set
        ├── test_3dhp.npz
    ├── prepare_data_3dhp.ipynb
    ├── prepare_data_3dhp.py
```

All the data are set up.

TRAIN.md

# PoseAug

# Installation

### Environment
* **Supported OS:** Ubuntu 16.04
* **Packages:**
    * Python: 3.6.9
    * PyTorch: 1.0.1.post2 ([https://pytorch.org/](https://pytorch.org/))
* **Build the environment:**
```sh
cd PoseAug
conda create -n poseaugEnv python=3.6.9
conda activate poseaugEnv
pip install -r requirements.txt
```

### Dataset
* Please follow `DATASETS.md` to get the data ready.
# Train
* There are 32 experiments in total (16 for baseline training, 16 for PoseAug training), covering four pose estimators ([SemGCN](https://github.com/garyzhao/SemGCN), [SimpleBaseline](https://github.com/una-dinosauria/3d-pose-baseline), [ST-GCN](https://github.com/vanoracai/Exploiting-Spatial-temporal-Relationships-for-3D-Pose-Estimation-via-Graph-Convolutional-Networks), [VideoPose](https://github.com/facebookresearch/VideoPose3D)) and four 2D pose settings (Ground Truth, CPN, DET, HR-Net).
### Pretrain
```sh
# gcn
python3 run_baseline.py --note pretrain --dropout 0 --lr 2e-2 --epochs 100 --posenet_name 'gcn' --checkpoint './checkpoint/pretrain_baseline' --keypoints gt
python3 run_baseline.py --note pretrain --dropout 0 --lr 2e-2 --epochs 100 --posenet_name 'gcn' --checkpoint './checkpoint/pretrain_baseline' --keypoints cpn_ft_h36m_dbb
python3 run_baseline.py --note pretrain --dropout 0 --lr 2e-2 --epochs 100 --posenet_name 'gcn' --checkpoint './checkpoint/pretrain_baseline' --keypoints detectron_ft_h36m
python3 run_baseline.py --note pretrain --dropout 0 --lr 2e-2 --epochs 100 --posenet_name 'gcn' --checkpoint './checkpoint/pretrain_baseline' --keypoints hr

# videopose
python3 run_baseline.py --note pretrain --lr 1e-3 --posenet_name 'videopose' --checkpoint './checkpoint/pretrain_baseline' --keypoints gt
python3 run_baseline.py --note pretrain --lr 1e-3 --posenet_name 'videopose' --checkpoint './checkpoint/pretrain_baseline' --keypoints cpn_ft_h36m_dbb
python3 run_baseline.py --note pretrain --lr 1e-3 --posenet_name 'videopose' --checkpoint './checkpoint/pretrain_baseline' --keypoints detectron_ft_h36m
python3 run_baseline.py --note pretrain --lr 1e-3 --posenet_name 'videopose' --checkpoint './checkpoint/pretrain_baseline' --keypoints hr

# mlp
python3 run_baseline.py --note pretrain --lr 1e-3 --stages 2 --posenet_name 'mlp' --checkpoint './checkpoint/pretrain_baseline' --keypoints gt
python3 run_baseline.py --note pretrain --lr 1e-3 --stages 2 --posenet_name 'mlp' --checkpoint './checkpoint/pretrain_baseline' --keypoints cpn_ft_h36m_dbb
python3 run_baseline.py --note pretrain --lr 1e-3 --stages 2 --posenet_name 'mlp' --checkpoint './checkpoint/pretrain_baseline' --keypoints detectron_ft_h36m
python3 run_baseline.py --note pretrain --lr 1e-3 --stages 2 --posenet_name 'mlp' --checkpoint './checkpoint/pretrain_baseline' --keypoints hr

# st-gcn
python3 run_baseline.py --note pretrain --dropout -1 --lr 1e-3 --posenet_name 'stgcn' --checkpoint './checkpoint/pretrain_baseline' --keypoints gt
python3 run_baseline.py --note pretrain --dropout -1 --lr 1e-3 --posenet_name 'stgcn' --checkpoint './checkpoint/pretrain_baseline' --keypoints cpn_ft_h36m_dbb
python3 run_baseline.py --note pretrain --dropout -1 --lr 1e-3 --posenet_name 'stgcn' --checkpoint './checkpoint/pretrain_baseline' --keypoints detectron_ft_h36m
python3 run_baseline.py --note pretrain --dropout -1 --lr 1e-3 --posenet_name 'stgcn' --checkpoint './checkpoint/pretrain_baseline' --keypoints hr
# note: for st-gcn, baseline training requires a different dropout setting (-1, the default), while PoseAug training requires dropout=0.
```
### PoseAug
```sh
# gcn
python3 run_poseaug.py --note poseaug --dropout 0 --posenet_name 'gcn' --lr_p 1e-3 --checkpoint './checkpoint/poseaug' --keypoints gt
python3 run_poseaug.py --note poseaug --dropout 0 --posenet_name 'gcn' --lr_p 1e-3 --checkpoint './checkpoint/poseaug' --keypoints cpn_ft_h36m_dbb
python3 run_poseaug.py --note poseaug --dropout 0 --posenet_name 'gcn' --lr_p 1e-3 --checkpoint './checkpoint/poseaug' --keypoints detectron_ft_h36m
python3 run_poseaug.py --note poseaug --dropout 0 --posenet_name 'gcn' --lr_p 1e-3 --checkpoint './checkpoint/poseaug' --keypoints hr

# videopose
python3 run_poseaug.py --note poseaug --posenet_name 'videopose' --lr_p 1e-4 --checkpoint './checkpoint/poseaug' --keypoints gt
python3 run_poseaug.py --note poseaug --posenet_name 'videopose' --lr_p 1e-4 --checkpoint './checkpoint/poseaug' --keypoints cpn_ft_h36m_dbb
python3 run_poseaug.py --note poseaug --posenet_name 'videopose' --lr_p 1e-4 --checkpoint './checkpoint/poseaug' --keypoints detectron_ft_h36m
python3 run_poseaug.py --note poseaug --posenet_name 'videopose' --lr_p 1e-4 --checkpoint './checkpoint/poseaug' --keypoints hr

# mlp
python3 run_poseaug.py --note poseaug --posenet_name 'mlp' --lr_p 1e-4 --stages 2 --checkpoint './checkpoint/poseaug' --keypoints gt
python3 run_poseaug.py --note poseaug --posenet_name 'mlp' --lr_p 1e-4 --stages 2 --checkpoint './checkpoint/poseaug' --keypoints cpn_ft_h36m_dbb
python3 run_poseaug.py --note poseaug --posenet_name 'mlp' --lr_p 1e-4 --stages 2 --checkpoint './checkpoint/poseaug' --keypoints detectron_ft_h36m
python3 run_poseaug.py --note poseaug --posenet_name 'mlp' --lr_p 1e-4 --stages 2 --checkpoint './checkpoint/poseaug' --keypoints hr

# st-gcn
python3 run_poseaug.py --note poseaug --dropout 0 --posenet_name 'stgcn' --lr_p 1e-4 --checkpoint './checkpoint/poseaug' --keypoints gt
python3 run_poseaug.py --note poseaug --dropout 0 --posenet_name 'stgcn' --lr_p 1e-4 --checkpoint './checkpoint/poseaug' --keypoints cpn_ft_h36m_dbb
python3 run_poseaug.py --note poseaug --dropout 0 --posenet_name 'stgcn' --lr_p 1e-4 --checkpoint './checkpoint/poseaug' --keypoints detectron_ft_h36m
python3 run_poseaug.py --note poseaug --dropout 0 --posenet_name 'stgcn' --lr_p 1e-4 --checkpoint './checkpoint/poseaug' --keypoints hr
```
### Comment
* For simplicity, all the hyper-parameters are the same. If you want to explore better performance for a specific setting, try changing the hyper-parameters.
* The GAN training may collapse; changing the hyper-parameters (e.g., random_seed) and re-training the models will solve the problem.

### Monitor the PoseAug training process
```sh
cd ./checkpoint/poseaug
tensorboard --logdir=/path/to/eventfile
```
### Analyze the results
We provide `checkpoint/PoseAug_result_summary.ipynb`, which generates the result summary table for all 16 experiments.

### Evaluate
```sh
python3 run_evaluate.py --posenet_name 'videopose' --keypoints gt --evaluate '/path/to/checkpoint'
```
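To sweep all 16 estimator/keypoint combinations, the evaluation commands can be generated programmatically. A minimal sketch; the checkpoint path pattern below is hypothetical, so substitute the paths your training runs actually produced:

```python
from itertools import product

estimators = ['gcn', 'videopose', 'mlp', 'stgcn']
keypoint_sets = ['gt', 'cpn_ft_h36m_dbb', 'detectron_ft_h36m', 'hr']

# Build one run_evaluate.py command per estimator/keypoint pair.
commands = [
    ['python3', 'run_evaluate.py',
     '--posenet_name', net,
     '--keypoints', kpt,
     # hypothetical checkpoint path; replace with your actual one
     '--evaluate', f'./checkpoint/poseaug/{net}_{kpt}/ckpt_best.pth.tar']
    for net, kpt in product(estimators, keypoint_sets)
]

for cmd in commands:
    print(' '.join(cmd))
# Execute each with subprocess.run(cmd, check=True) once the paths are filled in.
```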
