In this paper we introduce our method for enabling dynamic recognition of hand gestures. Like a number of other research efforts on gesture recognition, we use a camera to track motion and interpret it as meaningful gestures; however, we emphasise tracking the fingers as well as the hand in order to cover a much wider range of gestures. Recognition proceeds in three key stages, with a fourth in development. The first stage processes the visual information from the camera and identifies the key regions and elements (such as the hand and fingers). This classified 2D information is passed to a 2D-to-3D module that maps it into full 3D space, fitting it to a calibrated hand model using inverse projection matrices and inverse kinematics. After simplifying this model into posture-curvature information, we apply it to a hidden Markov model (HMM), which is used to identify and differentiate between gestures, even ones using the same finger combinations. We briefly discuss our ongoing work on applying context awareness to this scenario, used in combination with the HMM to assign a different semantic to each gesture. This is especially useful given the large overlap in the semantics commonly attributed to hand gestures.
Published in: 2005 IEEE Instrumentation and Measurement Technology Conference Proceedings
Volume 3, pp. 1706-1711
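To make the HMM classification stage concrete, below is a minimal sketch of scoring a quantised finger-curvature sequence against competing gesture models with the forward algorithm. The gesture names, state counts, and all probabilities are invented for illustration and are not values from the paper.

```python
import math

def forward_log_likelihood(obs, start, trans, emit):
    """Log-likelihood of a discrete observation sequence under an HMM,
    computed with the forward algorithm (log domain for readability;
    adequate for short sequences)."""
    n = len(start)
    # initialise with the first observation
    alpha = [math.log(start[i] * emit[i][obs[0]]) for i in range(n)]
    for o in obs[1:]:
        # sum over predecessor states, then emit the current symbol
        alpha = [
            math.log(sum(math.exp(alpha[i]) * trans[i][j] for i in range(n))
                     * emit[j][o])
            for j in range(n)
        ]
    return math.log(sum(math.exp(a) for a in alpha))

# Two illustrative 2-state gesture models over three quantised
# finger-curvature symbols (0 = open, 1 = half-curled, 2 = curled).
# All probabilities here are assumptions made for this sketch.
swipe = dict(
    start=[0.9, 0.1],
    trans=[[0.6, 0.4], [0.1, 0.9]],
    emit=[[0.8, 0.15, 0.05], [0.05, 0.25, 0.7]],  # open, then curling
)
grab = dict(
    start=[0.5, 0.5],
    trans=[[0.9, 0.1], [0.1, 0.9]],
    emit=[[0.1, 0.3, 0.6], [0.1, 0.3, 0.6]],      # mostly curled throughout
)

seq = [0, 0, 1, 2, 2]  # hand closing over time
scores = {name: forward_log_likelihood(seq, **m)
          for name, m in [("swipe", swipe), ("grab", grab)]}
best = max(scores, key=scores.get)  # model that best explains the sequence
```

Training one HMM per gesture and taking the highest-likelihood model is the standard way such a recogniser distinguishes gestures that share finger combinations, since the models differ in their state dynamics even when their emissions overlap.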