Depth data acquired by current low-cost real-time depth cameras provides a more informative description of the hand pose, which can be exploited for gesture recognition. Following this rationale, we introduced various hand gesture recognition schemes based on depth information. We also introduced several datasets for this task and a library for rendering synthetic hands.
Key research topics include:
- In the basic framework, color and depth data are first used together to extract the hand and divide it into palm and finger regions. Then, different sets of feature descriptors are extracted, accounting for different cues such as the distances of the fingertips from the hand center, the curvature of the hand contour, or the geometry of the palm region. Finally, a multi-class SVM classifier is employed to recognize the performed gestures.
- Color-based descriptors have been combined with the depth-based ones.
- Several feature extraction schemes have been presented.
- The use of a Leap Motion sensor together with a depth sensor has been considered.
- Synthetic data obtained with an ad-hoc library, made available on this website, has been used for the training stage.
- The approach has been integrated into an augmented reality system that also includes a head-mounted display.
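The classification stage of the basic framework described above can be sketched as follows: the different feature sets are concatenated into a single descriptor per sample and fed to a multi-class SVM. This is only an illustrative sketch using scikit-learn with randomly generated placeholder data; the feature dimensions (5 fingertip distances, a 32-bin curvature histogram, 8 palm-geometry values) and the RBF kernel settings are assumptions, not the exact choices from the publications.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Placeholder feature blocks (dimensions are illustrative assumptions):
# fingertip-to-hand-center distances, contour-curvature histogram,
# and palm-geometry measurements, concatenated into one descriptor.
n_samples, n_gestures = 300, 10
distances = rng.random((n_samples, 5))
curvature = rng.random((n_samples, 32))
palm_geom = rng.random((n_samples, 8))
X = np.hstack([distances, curvature, palm_geom])
y = rng.integers(0, n_gestures, n_samples)  # random gesture labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Multi-class SVM (scikit-learn's SVC handles the one-vs-one
# decomposition internally); features are standardized first.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
```

With real descriptors in place of the random arrays, `pred` holds one gesture label per test sample.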
Selected publications:
Head-mounted gesture controlled interface for human-computer interaction Journal Article
In: Multimedia Tools and Applications, vol. 77, pp. 27–53, 2018.
Hand gesture recognition with jointly calibrated leap motion and depth sensor Journal Article
In: Multimedia Tools and Applications, vol. 75, pp. 14991–15015, 2016.
Hand gesture recognition with leap motion and kinect devices Proceedings Article
In: 2014 IEEE International Conference on Image Processing (ICIP), pp. 1565–1569, IEEE 2014.
Combining multiple depth-based descriptors for hand gesture recognition Journal Article
In: Pattern Recognition Letters, vol. 50, pp. 101–111, 2014.