Commit 83ff41e

Author: sli (committed)
0 parents  commit 83ff41e

File tree

84 files changed (+12527 -0 lines)


.gitignore

+139
@@ -0,0 +1,139 @@
assets/
data/
videos/
ros/devel
ros/build
weights/
# Created by https://www.gitignore.io/api/vim

### Vim ###
# swap
[._]*.s[a-v][a-z]
[._]*.sw[a-p]
[._]s[a-v][a-z]
[._]sw[a-p]
# session
Session.vim
# temporary
.netrwhist
*~
# auto-generated tag files
tags

# End of https://www.gitignore.io/api/vim

# Created by https://www.gitignore.io/api/python

### Python ###
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/

# .idea
.idea/

# End of https://www.gitignore.io/api/python
ros/src/shadow_teleop/launch/human_Depth.launch
ros/src/shadow_teleop/src/human_depth.cpp
ros/src/shadow_teleop/scripts/human_image_crop.py

ros/src/shadow_teleop/src/gazebo_multi_bioik_dataset.cpp
*.bag

README.md

+129
@@ -0,0 +1,129 @@
# Vision-based Teleoperation of Shadow Dexterous Hand using End-to-End Deep Neural Network

Authors' email: sli@informatik.uni-hamburg.de, jeasinema@gmail.com


This package produces visually similar robot hand poses from depth images of the human hand in an end-to-end fashion. It is a collaborative work between TAMS and Fuchun Sun's lab at Tsinghua University.

The special structure of TeachNet, combined with a consistency loss function, handles the differences in appearance and anatomy between human and robotic hands. A synchronized human-robot training set is generated from an existing dataset of labeled depth images of the human hand and from simulated depth images of a robotic hand.

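For readers curious what such a consistency objective can look like in practice, the snippet below is a minimal, hypothetical PyTorch sketch (not the released training code): both branches regress the robot joint angles from their respective depth images, and an extra term pulls the human-branch prediction toward the robot-branch prediction. The function name and weighting are illustrative only.

```python
import torch.nn.functional as F

def teleop_loss(human_pred, robot_pred, target_joints, lam=1.0):
    """Toy consistency objective (illustration, not the paper's exact loss).
    human_pred / robot_pred: (batch, n_joints) predictions of the two branches.
    target_joints: ground-truth robot joint angles."""
    loss_human = F.mse_loss(human_pred, target_joints)              # human branch regression
    loss_robot = F.mse_loss(robot_pred, target_joints)              # robot (teacher) branch regression
    loss_consistency = F.mse_loss(human_pred, robot_pred.detach())  # align student with teacher
    return loss_human + loss_robot + lam * loss_consistency
```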
Please cite this paper ([Vision-based Teleoperation of Shadow Dexterous Hand using End-to-End Deep Neural Network](https://arxiv.org/abs/1809.06268)) if you use our released code.

<img src="img/pipeline.svg" width="100%">

## Video
<a href="https://www.youtube.com/watch?v=I1FTJ87CtDs" target="_blank"><img src="img/video.png" alt="video" border="0" /></a>

## Installation Instructions
### OS
- [ROS Kinetic and Ubuntu 16.04]
- [CUDA 9]

### ROS Dependency
- [bio_ik](https://github.com/TAMS-Group/bio_ik.git)
- [Moveit](https://github.com/ros-planning/moveit)
- [common_resources](https://github.com/shadow-robot/common_resources.git)
- [sr-config](https://github.com/shadow-robot/sr-config.git)
- [sr_interface](https://github.com/shadow-robot/sr_interface.git)
- [sr-ros-interface-ethercat](https://github.com/shadow-robot/sr-ros-interface-ethercat.git)
- [ros_ethercat](https://github.com/shadow-robot/ros_ethercat.git)
- [sr_common](https://github.com/shadow-robot/sr_common.git)
- [sr_core](https://github.com/shadow-robot/sr_core.git)
- [sr-ros-interface](https://github.com/shadow-robot/sr-ros-interface.git)
- [sr_tools](https://github.com/shadow-robot/sr_tools.git)
- [ros_control_robot](https://github.com/shadow-robot/ros_control_robot.git)

### Python Dependency
- python2.7
- PyTorch
- mayavi
- numpy
- tensorboard
- matplotlib
- pickle
- pandas
- seaborn

### Camera Driver
- librealsense

## Setup
- Install the necessary packages (kinetic branch) for the Shadow Hand.
- Install the Bio IK packages. Please follow the Basic Usage section of the bio_ik [README.md](https://github.com/TAMS-Group/bio_ik/blob/master/README.md) and set the correct kinematics solver.
- Install the RealSense camera package:
```
sudo apt install ros-kinetic-realsense-camera
```
- To keep things simple, you can put all of the above packages in one ROS workspace.
- Download our package into the same workspace, then build it with catkin_make.

## Dataset Generation
- Download the [BigHand2.2M dataset](http://icvl.ee.ic.ac.uk/hands17/challenge/). Put the label file `Training_Annotation.txt` into `ros/src/shadow_teleop/data/Human_label/`.
- Generate the robot mapping file from the human hand keypoints of the BigHand2.2M dataset (see the parsing sketch after this section). The generated file is saved to `ros/src/shadow_teleop/data/human_robot_mapdata.csv`.
```
python ros/src/shadow_teleop/scripts/human_robot_mappingfile.py
```
- Run the Shadow hand in Gazebo using our simulation world (./ros/src/shadow_teleop/data/world/shadowhand_multiview.world).
- Generate the dataset by running:
```
roslaunch shadow_teleop multi_shadow_sim_bio.launch
```
Please change the location of human_robot_mapdata.csv, the location of the saved depth images, and the location of the saved corresponding joints in this launch file.

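As a rough illustration of the label file mentioned above, the hypothetical reader below assumes the usual BigHand2.2M layout of one image name followed by 21 keypoints x 3 coordinates per line; please verify this against your copy of the dataset. The released `human_robot_mappingfile.py` is the authoritative implementation.

```python
# Hypothetical reader for Training_Annotation.txt (not the released script).
# Assumed layout per line: "<image_name> x1 y1 z1 ... x21 y21 z21"
import numpy as np

def load_bighand_labels(path):
    names, joints = [], []
    with open(path, "r") as f:
        for line in f:
            parts = line.split()
            if len(parts) < 64:          # skip malformed lines
                continue
            names.append(parts[0])
            joints.append(np.array(parts[1:64], dtype=np.float32).reshape(21, 3))
    return names, joints

# names, joints = load_bighand_labels(
#     "ros/src/shadow_teleop/data/Human_label/Training_Annotation.txt")
```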
## Pretrained Models
- Download the [pretrained models](https://tams.informatik.uni-hamburg.de/people/sli/data/TeachNet_model/) for the real-time test.

## Model Training
- If you want to train the network yourself instead of using a pretrained model, follow the steps below.

- Launch tensorboard for monitoring (a minimal logging sketch follows this section):
```bash
tensorboard --logdir ./assets/log --port 8080
```

and run an experiment for 200 epochs:
```
python main.py --epoch 200 --mode 'train' --batch-size 256 --lr 0.01 --gpu 1 --tag 'teachnet'
```

File name and corresponding experiment:
```
main.py --- Teach Hard-Early approach
main_baseline_human.py --- Single human
main_baseline_shadow.py --- Single shadow
main_gan.py --- Teach Soft-Early approach
```

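The curves shown by the tensorboard instance above come from event files written under `./assets/log`. The released `main*.py` scripts already write these logs; the hypothetical snippet below, assuming a `tensorboardX`-style writer, only shows where such curves would come from if you add your own experiment.

```python
# Hypothetical logging sketch (the released training scripts handle this already).
from tensorboardX import SummaryWriter

writer = SummaryWriter('./assets/log/my_experiment')
for epoch in range(200):
    train_loss = 0.0  # placeholder: compute your epoch loss here
    writer.add_scalar('train/loss', train_loss, epoch)  # appears under the "train" tab
writer.close()
```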
## RealSense F200 Realtime Demo
- Launch the RealSense F200 camera (if you use another camera suitable for close-range tracking, use the corresponding launch file). Alternatively, download the recorded [example rosbag](https://tams.informatik.uni-hamburg.de/people/sli/data/TeachNet_model/) and play the bag file:
```
rosbag play [-l] example.bag
```
- Run the Shadow hand in simulation or on the real robot.
- Run the collision check service:
```
rosrun shadow_teleop interpolate_traj_service
```

- Run the demo code.

- Change to the correct topic name for your camera (see the depth-topic sketch after this section).
- Limit your right hand to the viewpoint range of [30&deg;, 120&deg;] and the distance range of [15mm, 40mm] from the camera.
```
python shadow_demo.py --model-path pretrained-model-location [--gpu 1]
```
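The released `shadow_demo.py` already subscribes to the camera; the minimal, hypothetical sketch below only illustrates how to check that your depth topic delivers frames before running the demo. The topic name and encoding depend on your camera driver and launch file, so `DEPTH_TOPIC` is an assumption (use `rostopic list` to find yours).

```python
# Hypothetical depth-topic check (not part of shadow_demo.py).
import rospy
import numpy as np
from sensor_msgs.msg import Image
from cv_bridge import CvBridge

DEPTH_TOPIC = "/camera/depth/image_raw"   # assumption: adjust to your driver
bridge = CvBridge()

def on_depth(msg):
    # "passthrough" keeps the original encoding (e.g. 16UC1 depth in mm)
    depth = np.asarray(bridge.imgmsg_to_cv2(msg, desired_encoding="passthrough"),
                       dtype=np.float32)
    rospy.loginfo_throttle(1.0, "depth frame %dx%d" % (depth.shape[1], depth.shape[0]))

if __name__ == "__main__":
    rospy.init_node("depth_topic_check")
    rospy.Subscriber(DEPTH_TOPIC, Image, on_depth, queue_size=1)
    rospy.spin()
```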

## Citation
If you use this work, please cite:

```plain
@article{li2018vision,
  title={Vision-based Teleoperation of Shadow Dexterous Hand using End-to-End Deep Neural Network},
  author={Li, Shuang and Ma, Xiaojian and Liang, Hongzhuo and G{\"o}rner, Michael and Ruppel, Philipp and Fang, Bing and Sun, Fuchun and Zhang, Jianwei},
  journal={arXiv preprint arXiv:1809.06268},
  year={2018}
}
```

evaluation/.gitignore

+2
@@ -0,0 +1,2 @@
*/*.csv
*/*.pkl

evaluation/README.md

+17
@@ -0,0 +1,17 @@
### ShadowTeleop eval
0. Generate the prediction file `input.csv` by running main.py (uncomment some code in `test()`).
1. Put the generated joint prediction file `input.csv` in `predict`.

2. Convert the joints to xyz (`output.csv`):
```bash
$ roslaunch shadow_vision_telelop shadow_fk.launch
```

3. Convert xyz into uvd (remember to change the path in the script):
```bash
$ python cartesian2uvd.py
```
4. Plot the result (a rough comparison sketch follows below):
```bash
$ python eval.py <predict_pkl> <label_pkl>
```
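The released `eval.py` defines the actual plots and metrics. As a rough, hypothetical illustration of how the `uv{i}.pkl` files produced by `cartesian2uvd.py` (each a list of `(frame, 15x3 [u, v, depth_mm])` tuples, see the script below) can be compared, a sketch like this reports mean pixel and depth errors between a prediction file and a label file.

```python
# Hypothetical comparison sketch (not the released eval.py).
import sys
import pickle
import numpy as np

def load(path):
    # each entry is (frame_name, array of shape (15, 3): u, v, depth in mm)
    with open(path, "rb") as f:
        return {frame: np.asarray(uvd) for frame, uvd in pickle.load(f)}

def main(pred_path, label_path):
    pred, label = load(pred_path), load(label_path)
    frames = sorted(set(pred) & set(label))
    uv_err = np.mean([np.linalg.norm(pred[f][:, :2] - label[f][:, :2], axis=1).mean()
                      for f in frames])
    d_err = np.mean([np.abs(pred[f][:, 2] - label[f][:, 2]).mean() for f in frames])
    print("frames compared: %d" % len(frames))
    print("mean uv error (pixels): %.2f" % uv_err)
    print("mean depth error (mm): %.2f" % d_err)

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
```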

evaluation/cartesian2uvd.py

+68
@@ -0,0 +1,68 @@
import numpy as np
import math
import csv
import pickle
from numpy.linalg import inv
from pyquaternion import Quaternion


def main():
    """Project the 15 Cartesian hand keypoints per frame in output.csv into the
    image planes of the 9 simulated Gazebo cameras, and dump one uv{i}.pkl per
    view, each a list of (frame, 15x3 [u, v, depth_mm]) tuples."""
    # base_path = "./label/"
    # base_path = "./predict_shadow/"
    base_path = "./predict_human/"
    with open(base_path + "output.csv", "r") as data_file:
        lines = data_file.read().splitlines()

    # Gazebo camera intrinsics: focal lengths and principal point (cx, cy)
    mat = np.array([[554.255, 0, 320.5], [0, 554.25, 240.5], [0, 0, 1]])
    uv = np.zeros([15, 2])

    # Poses of the 9 simulated cameras (as set via /gazebo/model_state):
    # roll-pitch-yaw (for reference), orientation quaternion (w, x, y, z),
    # and translation of each camera in the world frame.
    camera_theta = np.array([[0, 0, 1.57], [0, 0.52, 0.84001], [0, 0.52, 2.4],
                             [0, -0.52, 0.872664626], [0, -0.52, 2.2689280276],
                             [0, 0, 1.04721975512], [0, 0, 2.0943951024], [0, 0.6, 1.57],
                             [0, -0.52, 1.57]])
    camera_qt = np.array([[0.707388269167, 0, 0, 0.706825181105],
                          [0.882398030774, -0.104828455998, 0.234736884563, 0.394060027311],
                          [0.350178902426, -0.239609122606, 0.0931551315033, 0.900713231908],
                          [0.875846686123, 0.108646979538, -0.232994085758, 0.408414216507],
                          [0.408413188959, 0.232994213223, -0.108646706188, 0.875847165277],
                          [0.866019791529, 0, 0, 0.500009720586],
                          [0.499997879273, 0, 0, 0.866026628184],
                          [0.675793825515, -0.208881123594, 0.209047527494, 0.675255886943],
                          [0.683612933973, 0.18171100765, -0.18185576664, 0.683068771313]])
    camera_tran = np.array([[0, -0.5, 0.35], [-0.35, -0.3, 0.65], [0.35, -0.3, 0.65],
                            [-0.3, -0.38, 0.11], [0.3, -0.35, 0.12], [-0.25, -0.4330127, 0.35],
                            [0.25, -0.4330127, 0.35], [0, -0.3, 0.5], [0, -0.4, 0.1]])

    uvd_view = [[] for _ in range(9)]
    for ln in lines:
        frame = ln.split(',')[0]
        print(frame)
        # 15 keypoints x 3 Cartesian coordinates per frame
        label_source = ln.split(',')[1:46]
        keypoints = np.array([float(ll) for ll in label_source]).reshape(15, 3)

        for i in range(0, 9):
            w, x, y, z = camera_qt[i]
            quat = Quaternion(w, x, y, z)

            R = quat.rotation_matrix
            R = R.T                      # world -> camera rotation
            camera_t = camera_tran[i]

            result = [frame]
            for j in range(0, len(keypoints)):
                # transform the keypoint into the camera frame
                key = keypoints[j] - camera_t
                cam_world = np.dot(R, key)
                # reorder axes to the optical frame (x right, y down, z forward)
                cam_world = np.array([-cam_world[1], -cam_world[2], cam_world[0]])
                # pinhole projection: uv = (K * X) / Z
                uv[j] = ((1 / cam_world[2]) * np.dot(mat, cam_world))[0:2]
                result.append(uv[j][0])
                result.append(uv[j][1])
                result.append(cam_world[2] * 1000)   # depth in millimetres
            uvd_view[i].append((frame, np.array(result[1:]).reshape(-1, 3)))

    for i in range(9):
        pickle.dump(uvd_view[i], open(base_path + 'uv{}.pkl'.format(i), 'wb'))


if __name__ == '__main__':
    main()
