Displacement Estimation in Micro-sensor Motion Capture


Introduction

Motion capture (Mocap) has been used extensively in education, training, and sports, and, as the technology matured, in computer animation for television, cinema, and video games. Locomotion is a fundamental human skill, and many forms of locomotion, such as walking, running, jumping, and climbing, are studied and reproduced in computer animation.

Unlike optical Mocap, data capture and processing in micro-sensor motion capture (MMocap) are mainly performed in the sensor coordinate system and the subject's body coordinate system. There is no direct information about the subject's movement with respect to the global coordinate system, which is important for most applications.

Objective

  1. Build a human motion model to represent MMocap data.
  2. Fuse multimodal sensor data to achieve high accuracy and low drift.
  3. Fuse the segmented motion models to obtain a whole-body motion model.
  4. Build a power-efficient motion model with a low error rate.

Challenges

  1. The human motion system is complicated: how to represent it in MMocap.
  2. How to achieve high accuracy despite variations in sensor noise and the randomness and nonlinearity of body motion.
  3. How to minimize the drift caused by integrating the angular rate.
  4. How to fuse the motions of all body segments into a consistent whole-body motion.

Approach

  1. Gait event detection
  2. Segmental kinematics transmission
  3. Estimation of CoM displacement (see the sketch after this list)
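
A minimal sketch of how steps 1 and 3 could fit together, assuming a single foot-mounted accelerometer sampled at a fixed rate: stance (zero-velocity) phases are detected from the acceleration magnitude, and displacement is obtained by double integration with zero-velocity updates to limit drift. The threshold, function names, and gravity-removal step are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

def detect_stance(acc, g=9.81, tol=0.5):
    """Flag samples whose acceleration magnitude stays near gravity
    (a simple stance / zero-velocity detector for a foot-mounted node)."""
    mag = np.linalg.norm(acc, axis=1)
    return np.abs(mag - g) < tol

def estimate_displacement(acc_global, stance, fs):
    """Double-integrate global-frame acceleration, resetting velocity
    to zero during detected stance phases (ZUPT) to limit drift."""
    dt = 1.0 / fs
    vel = np.zeros_like(acc_global)
    pos = np.zeros_like(acc_global)
    for k in range(1, len(acc_global)):
        vel[k] = vel[k - 1] + acc_global[k] * dt
        if stance[k]:                 # zero-velocity update at gait events
            vel[k] = 0.0
        pos[k] = pos[k - 1] + vel[k] * dt
    return pos

# Example with synthetic data: 2 s of standing still at 100 Hz.
fs = 100
acc = np.tile([0.0, 0.0, 9.81], (2 * fs, 1))    # raw sensor-frame readings
stance = detect_stance(acc)
acc_global = acc - np.array([0.0, 0.0, 9.81])   # gravity removed after orientation alignment
print(estimate_displacement(acc_global, stance, fs)[-1])  # ~[0, 0, 0]
```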

_______________________________________________________________________________

System Prototype


Sensor subsystem: consists of 16-20 micro-sensor nodes (each containing a 3-axis accelerometer, a 3-axis gyroscope, and a 3-axis magnetometer) and a base station, connected by a data bus. The sensor nodes are placed on the body segments (head, shoulders, spine, upper and lower limbs) to collect motion data. The base station controls the data sampling rate, which can be configured between 50 Hz and 200 Hz, and sends the data packets via USB or a high-speed wireless module to the CPU, where they are processed by the data fusion and animation subsystems.
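
As a rough illustration of how a synchronized frame of node readings might be represented on the processing side, the sketch below defines simple Python containers; the field names and layout are assumptions for illustration, not the actual bus or packet format.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class NodeSample:
    """One reading from a micro-sensor node (all vectors in the sensor frame)."""
    node_id: int                      # which body segment the node is attached to
    timestamp: float                  # seconds, from the base station clock
    acc: Tuple[float, float, float]   # 3-axis accelerometer, m/s^2
    gyr: Tuple[float, float, float]   # 3-axis gyroscope, rad/s
    mag: Tuple[float, float, float]   # 3-axis magnetometer, arbitrary units

@dataclass
class Packet:
    """One base-station packet: a synchronized frame of all node samples."""
    frame_index: int
    sample_rate_hz: float             # configured between 50 and 200 Hz
    samples: Tuple[NodeSample, ...]   # 16-20 entries, one per node
```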

Data fusion subsystem: fuses the sensory data with biomechanical constraints to obtain orientation information, using a Kalman filter within a Bayesian network framework, and derives locomotion information through gait analysis.
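
The fusion itself operates in 3-D over all sensor modalities and biomechanical constraints; the toy one-axis sketch below only illustrates the basic predict/update idea, with the gyroscope integral as the prediction and the accelerometer tilt angle as the measurement. The function name, noise parameters q and r, and the single-axis simplification are assumptions for illustration.

```python
import math

def kalman_tilt(gyro_rate, acc_y, acc_z, dt, q=1e-4, r=1e-2):
    """Fuse gyro rate (rad/s) and accelerometer tilt for one rotation axis.

    gyro_rate, acc_y, acc_z: equal-length sequences of samples.
    Returns the filtered tilt angle (rad) per sample.
    """
    angle, p = 0.0, 1.0          # state estimate and its variance
    out = []
    for w, ay, az in zip(gyro_rate, acc_y, acc_z):
        # Predict: integrate the angular rate and inflate the uncertainty.
        angle += w * dt
        p += q
        # Update: the accelerometer gives an absolute (but noisy) tilt measurement.
        z = math.atan2(ay, az)
        k = p / (p + r)          # Kalman gain
        angle += k * (z - angle)
        p *= (1.0 - k)
        out.append(angle)
    return out

# Stationary node lying flat: gyro reads ~0, gravity along +z.
n, dt = 200, 0.01
print(kalman_tilt([0.0] * n, [0.0] * n, [9.81] * n, dt)[-1])  # ~0 rad
```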

Animation subsystem: uses the orientation and locomotion information to reconstruct the movements on a 3D avatar model.
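
To make the reconstruction step concrete, the sketch below applies simple forward kinematics: each segment's orientation rotates a fixed-length bone vector, and joint positions are chained from a root. The chain layout, the local +y bone axis, and the segment lengths are illustrative assumptions, not the avatar model actually used.

```python
import numpy as np

def chain_positions(root_pos, orientations, lengths):
    """Place each joint of a serial chain (e.g. hip -> knee -> ankle).

    orientations: list of 3x3 rotation matrices, segment frame -> global frame.
    lengths: segment lengths in metres; each segment is assumed to extend
             along its local +y axis in this sketch.
    """
    positions = [np.asarray(root_pos, dtype=float)]
    for R, L in zip(orientations, lengths):
        offset = R @ np.array([0.0, L, 0.0])   # segment vector in the global frame
        positions.append(positions[-1] + offset)
    return positions

# Two-segment leg pointing straight down from the hip.
down = np.diag([1.0, -1.0, -1.0])              # 180 deg about x: local +y -> global -y
print(chain_positions([0.0, 1.0, 0.0], [down, down], [0.45, 0.43]))
```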

_______________________________________________________________________________

Real-time Action and Activity Recognition


Introduction

Real-time activity/action recognition using body sensor networks is a challenging task with many potential applications in health care, assisted living, sports coaching, and interactive games. Distributed sensors are placed on the human body, and continuous sensor observations are collected. The received sensor data are used to train models for the different activities; these trained models are then used to predict the activity of any new observation.
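
A minimal sketch of that train-then-predict workflow, assuming fixed-length windows over a single accelerometer-magnitude channel, hand-picked statistical features, and a generic scikit-learn classifier; none of these choices are taken from the actual system.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(signal, win=50):
    """Cut a 1-D sensor stream into fixed windows and extract simple features."""
    feats = []
    for start in range(0, len(signal) - win + 1, win):
        w = signal[start:start + win]
        feats.append([w.mean(), w.std(), w.max() - w.min()])
    return np.array(feats)

rng = np.random.default_rng(0)
still = rng.normal(0.0, 0.05, 2000)      # synthetic "standing" accelerometer magnitude
moving = rng.normal(0.0, 0.60, 2000)     # synthetic "walking" (higher variance)

X = np.vstack([window_features(still), window_features(moving)])
y = np.array([0] * 40 + [1] * 40)        # 0 = still, 1 = moving

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict(window_features(rng.normal(0.0, 0.60, 100))))  # expect mostly 1s
```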

Objective

To build an online, real-time, robust human action recognition system with an optimal trade-off between accuracy and timeliness.

Challenges

  1. Building a system that runs in real time is challenging in itself.
  2. Accurately identifying the start and end points of different activities or actions in a continuous real-time data stream.
  3. Effectively capturing the variations among different subjects.

Approach

  1. A hierarchical predictive human action recognition algorithm (sketched after this list).
  2. An adaptive, distributed approach to human action recognition.
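
As a rough illustration of the hierarchical idea in the first approach, the sketch below classifies each feature window coarsely (static vs. dynamic) before running a finer classifier for that branch, which keeps the per-window cost low for real-time use. The two-level split, class names, and choice of classifiers are assumptions for illustration, not the actual algorithm.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class HierarchicalRecognizer:
    """Two-level recognizer: a cheap coarse classifier routes each window
    to one of two finer classifiers (one per coarse branch)."""

    def __init__(self):
        self.coarse = DecisionTreeClassifier(max_depth=3)
        self.fine = {0: DecisionTreeClassifier(max_depth=3),   # static branch
                     1: DecisionTreeClassifier(max_depth=3)}   # dynamic branch

    def fit(self, X, y_coarse, y_fine):
        self.coarse.fit(X, y_coarse)
        for branch, clf in self.fine.items():
            mask = (y_coarse == branch)
            clf.fit(X[mask], y_fine[mask])
        return self

    def predict(self, X):
        branches = self.coarse.predict(X)
        out = np.empty(len(X), dtype=int)
        for branch, clf in self.fine.items():
            mask = (branches == branch)
            if mask.any():
                out[mask] = clf.predict(X[mask])
        return out

# Synthetic example: 0=sit, 1=stand (static); 2=walk, 3=run (dynamic).
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 3)) + np.repeat(np.arange(4), 100)[:, None]
y_fine = np.repeat(np.arange(4), 100)
y_coarse = (y_fine >= 2).astype(int)
model = HierarchicalRecognizer().fit(X, y_coarse, y_fine)
print(model.predict(X[:5]))   # expect mostly 0s (sit)
```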