Kent Academic Repository

A Bayesian Framework for Extracting Human Gait using Strong Prior Knowledge

Zhou, Ziheng, Prugel-Bennett, Adam, Damper, Robert I. (2006) A Bayesian Framework for Extracting Human Gait using Strong Prior Knowledge. IEEE Transactions on Pattern Analysis and Machine Intelligence, 28 (11). pp. 1738-1752. ISSN 0162-8828. (doi:10.1109/TPAMI.2006.214) (The full text of this publication is not currently available from this repository. You may be able to access a copy if URLs are provided) (KAR id:9053)

Official URL:
http://dx.doi.org/10.1109/TPAMI.2006.214

Abstract

Extracting full-body motion of walking people from monocular video sequences in complex, real-world environments is an important and difficult problem, going beyond simple tracking, whose satisfactory solution demands an appropriate balance between use of prior knowledge and learning from data. We propose a consistent Bayesian framework for introducing strong prior knowledge into a system for extracting human gait. In this work, the strong prior is built from a simple articulated model having both time-invariant (static) and time-variant (dynamic) parameters. The model is easily modified to cater to situations such as walkers wearing clothing that obscures the limbs. The statistics of the parameters are learned from high-quality (indoor laboratory) data, and the Bayesian framework then allows us to "bootstrap" to accurate gait extraction on the noisy images typical of cluttered, outdoor scenes. To achieve automatic fitting, we use a hidden Markov model to detect the phases of images in a walking cycle. We demonstrate our approach on silhouettes extracted from fronto-parallel ("sideways on") sequences of walkers under both high-quality indoor and noisy outdoor conditions. As well as high-quality data with synthetic noise and occlusions added, we also test walkers with rucksacks, skirts, and trench coats. Results are quantified in terms of chamfer distance and average pixel error between automatically extracted body points and corresponding hand-labeled points. No one part of the system is novel in itself, but the overall framework makes it feasible to extract gait from very much poorer-quality image sequences than hitherto. This is confirmed by comparing person identification by gait using our method and a well-established baseline recognition algorithm.
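The chamfer distance used above to quantify extraction error can be illustrated with a minimal sketch: for two 2-D point sets (e.g. automatically extracted versus hand-labeled body points), each point is matched to its nearest neighbour in the other set and the nearest-neighbour distances are averaged in both directions. The function name and the symmetric, point-set formulation here are illustrative assumptions, not the authors' implementation (which operates on image silhouettes).

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric chamfer distance between two 2-D point sets.

    Illustrative sketch: for each point in `a`, take the distance to
    its nearest point in `b`, and vice versa; sum the two mean
    nearest-neighbour distances.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    # Pairwise Euclidean distances, shape (len(a), len(b)).
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return d.min(axis=1).mean() + d.min(axis=0).mean()
```

Identical point sets give a distance of zero, and the measure grows as extracted points drift from their hand-labeled counterparts, which is what makes it a natural per-frame error score.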

Item Type: Article
DOI/Identification number: 10.1109/TPAMI.2006.214
Subjects: T Technology > TK Electrical engineering. Electronics. Nuclear engineering > TK7800 Electronics > TK7880 Applications of electronics > TK7885 Computer engineering. Computer hardware
Divisions: Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Engineering and Digital Arts
Depositing User: Yiqing Liang
Date Deposited: 16 Mar 2009 10:11 UTC
Last Modified: 05 Nov 2024 09:41 UTC
Resource URI: https://kar.kent.ac.uk/id/eprint/9053 (The current URI for this page, for reference purposes)

University of Kent Author Information

Zhou, Ziheng.

