Inertial-Sensor based Gesture Recognition
Displacement Estimation in Micro-sensor Motion Capture
Motion capture (Mocap) has been used extensively in education, training, and sports, and, as the technology has matured, in computer animation for television, cinema, and video games. Locomotion is a fundamental human skill, and many forms of locomotion, such as walking, running, jumping, and climbing, are studied and used in computer animation.
Unlike optical Mocap, data capture and processing in micro-sensor Mocap (MMocap) are done mainly in the sensor coordinate system and the subject's body coordinate system. There is no information about the subject's movement with reference to the global coordinate system, which is important for most applications.
- Build a human motion model to represent MMocap data.
- Fuse multimodal sensor data to achieve high accuracy and low drift.
- Fuse the segmented motion models to obtain a whole-body motion model.
- Build a power-efficient, low-error-rate motion model.
- The human motion system is complex: how should it be represented in MMocap?
- Given variations in sensor noise and the randomness and nonlinearity of body motion, how can high accuracy be achieved?
- How can the drift caused by integrating angular rate be minimized?
- How can all body-segment motions be fused into a consistent whole-body motion?
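The drift challenge above can be illustrated with a short numerical sketch (the sampling rate and bias value here are assumptions, not from the source): even a tiny constant gyroscope bias, once integrated to obtain an angle, produces an orientation error that grows linearly with time.

```python
import numpy as np

# Illustrative sketch: integrating a gyroscope reading that carries a
# small constant bias causes orientation drift that grows with time.
fs = 100.0                      # assumed sampling rate, Hz
t = np.arange(0, 60, 1 / fs)    # 60 s of data
true_rate = np.zeros_like(t)    # the sensor is actually stationary
bias = 0.01                     # assumed gyro bias, rad/s
measured = true_rate + bias

# Naive integration of angular rate to get an angle estimate
angle = np.cumsum(measured) / fs

# After 60 s the estimated angle has drifted by bias * 60 s
drift = angle[-1]
print(round(drift, 3))
```

This is why MMocap systems correct the integrated gyroscope output with drift-free references such as accelerometer gravity and magnetometer heading.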
- Gait event detection
- Segmental kinematics transmission
- Estimation of center-of-mass (CoM) displacement
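The first step above, gait event detection, is often implemented by flagging stance phases, i.e. intervals where the foot is momentarily stationary. A minimal sketch of such a zero-velocity-style detector follows; the thresholds and signals are assumptions for illustration, since the source does not specify its detector:

```python
import numpy as np

# Hypothetical zero-velocity (stance-phase) detector, a common
# gait-event technique. Thresholds are illustrative assumptions.
def detect_stance(acc, gyr, acc_tol=0.5, gyr_tol=0.5):
    """Mark samples where the foot is likely stationary.

    acc : (N, 3) accelerometer readings in m/s^2
    gyr : (N, 3) gyroscope readings in rad/s
    """
    g = 9.81
    acc_mag = np.linalg.norm(acc, axis=1)
    gyr_mag = np.linalg.norm(gyr, axis=1)
    # Stance: acceleration magnitude close to gravity and low rotation
    return (np.abs(acc_mag - g) < acc_tol) & (gyr_mag < gyr_tol)

# Toy example: 1 s stationary, then 1 s of movement at 100 Hz
fs = 100
still_acc = np.tile([0.0, 0.0, 9.81], (fs, 1))
move_acc = np.tile([2.0, 0.0, 9.81], (fs, 1))
acc = np.vstack([still_acc, move_acc])
gyr = np.vstack([np.zeros((fs, 3)), np.full((fs, 3), 1.0)])

stance = detect_stance(acc, gyr)
print(stance[:fs].all(), stance[fs:].any())
```

Detected stance intervals then anchor the per-stride integration used in the segmental kinematics and CoM displacement steps.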
Sensor subsystem: includes 16-20 micro-sensor nodes (each node contains a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis magnetometer) and a base station connected by a data bus. Sensor nodes are placed on the segments of the human body (head, shoulders, spine, upper and lower limbs) to collect motion data. The base station controls the data sampling rate (50 Hz-200 Hz) and sends data packets via USB or a high-speed wireless module to the CPU for processing by the data fusion and animation subsystems.
Data fusion subsystem: fuses sensory data and biomechanical constraints to obtain orientation information using a Kalman filter under Bayesian network theory, and obtains locomotion information through gait analysis.
Animation subsystem: uses the orientation and locomotion information to reconstruct the movements on a 3D avatar model.
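The fusion idea in the data fusion subsystem can be sketched in simplified form. The source uses a Kalman filter under Bayesian network theory; the complementary filter below is a deliberately simpler stand-in that illustrates the same principle of correcting integrated angular rate with a drift-free gravity reference (all signal values and the blend factor are assumptions):

```python
import numpy as np

# Simplified gyro/accelerometer fusion for tilt (pitch) estimation.
# A complementary filter stands in for the Kalman filter described in
# the source; both correct gyro integration with a gravity reference.
def complementary_pitch(gyro_y, acc_x, acc_z, fs, alpha=0.98):
    """Fuse pitch rate (rad/s) with accelerometer tilt (rad)."""
    dt = 1.0 / fs
    pitch = np.arctan2(-acc_x[0], acc_z[0])  # initialize from gravity
    out = []
    for w, ax, az in zip(gyro_y, acc_x, acc_z):
        acc_pitch = np.arctan2(-ax, az)      # gravity-based pitch
        # Blend short-term gyro integration with the long-term
        # accelerometer reference to suppress integration drift
        pitch = alpha * (pitch + w * dt) + (1 - alpha) * acc_pitch
        out.append(pitch)
    return np.array(out)

# Stationary sensor tilted 30 degrees, with a small gyro bias:
# the estimate stays near 30 degrees instead of drifting away.
fs = 100
n = 5 * fs
theta = np.deg2rad(30.0)
gyro_y = np.full(n, 0.002)            # assumed gyro bias, rad/s
acc_x = np.full(n, -9.81 * np.sin(theta))
acc_z = np.full(n, 9.81 * np.cos(theta))

est = complementary_pitch(gyro_y, acc_x, acc_z, fs)
print(round(float(np.rad2deg(est[-1])), 1))
```

A full MMocap fusion stage would additionally use the magnetometer for heading and biomechanical joint constraints, as described above.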
Real-time Action and Activity Recognition
Real-time activity/action recognition using body sensor networks is a challenging task with many potential applications in health care, assisted living, sports coaching, and interactive games. Distributed sensors are placed on the human body and continuous sensor observations are collected. The received sensor data are used to train models for different activities; these trained models are then used to predict the activity of any new observation.
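The train-then-predict pipeline above can be sketched as a sliding-window classifier. Everything here is an illustrative assumption (synthetic signals, toy features, a nearest-centroid model); the source does not specify its features or classifier:

```python
import numpy as np

# Illustrative sketch: segment continuous sensor data into sliding
# windows, extract simple statistics, train a nearest-centroid
# classifier, and predict the activity of a new window.
def windows(signal, size, step):
    return np.array([signal[i:i + size]
                     for i in range(0, len(signal) - size + 1, step)])

def features(win):
    # Mean and standard deviation per window as a toy feature vector
    return np.stack([win.mean(axis=1), win.std(axis=1)], axis=1)

rng = np.random.default_rng(0)
# Synthetic 1-axis accelerometer streams for two assumed activities
still = rng.normal(0.0, 0.1, 1000)     # e.g. "sitting"
active = rng.normal(0.0, 1.5, 1000)    # e.g. "walking"

X = np.vstack([features(windows(still, 100, 50)),
               features(windows(active, 100, 50))])
y = np.array([0] * 19 + [1] * 19)

# Train: one centroid per activity class
centroids = np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

# Predict a new high-variance window -> expected class 1 ("walking")
new = features(rng.normal(0.0, 1.5, 100)[None, :])
pred = int(np.argmin(np.linalg.norm(centroids - new, axis=1)))
print(pred)
```

A real-time system additionally has to make this prediction incrementally on streaming data, which is where the accuracy/timeliness trade-off below arises.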
- To build an online, real-time, robust human action recognition system with an optimal trade-off between accuracy and timeliness.
- Building a real-time system is quite challenging.
- Accurately identifying the start and end points of different activities or actions in real-time continuous data.
- Effectively capturing the variations among different subjects.
- A hierarchical predictive human action recognition algorithm.
- An adaptive, distributed approach for human action recognition.