Alsedais, Rawabi (2019) Shape-based Person Re-identification. Doctor of Engineering (EngDoc) thesis, University of Kent. (KAR id:80211)
Language: English
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
Abstract
The increasing demand for public security, including forensic security, has resulted in substantial growth of surveillance camera networks (i.e., closed-circuit television, or CCTV) in public areas. Significant advances in computer vision and machine learning have moved the traditional surveillance camera network (i.e., monitored by people) towards an intelligent surveillance system involving automated person detection, person tracking, activity recognition, and person re-identification. The field of person re-identification has recently received much attention from computer vision researchers. Person re-identification typically relies on appearance model-based features, which are built from elements of the subject's appearance such as texture, colour, and clothing. However, using body shape (one of the appearance model-based features) as a signature for person re-identification remains an open area of research.
This thesis presents the methodology, implementation, and experimental framework of a shape-based person re-identification system. The proposed system segments the human silhouette into the full Body plus eight further parts: Head & Neck, Shoulders, Middle, Lower, Upper Quarter, Upper Half, Torso, and Lower Half. These segments are defined on the basis of anthropometric studies. The system exploits the shape information of each segment, encoded with a Generic Fourier Descriptor (GFD), to build a subject-unique signature for person re-identification. The discriminative power of the shape-based signatures is assessed by classifying them using image-based and video-based approaches. The image-based approach classifies the signatures on a frame-by-frame basis using Linear Discriminant Analysis (LDA), evaluating the feasibility of re-identifying subjects from their static shape features. The video-based approach exploits the signatures of an entire sequence (i.e., multiple frames) to re-identify subjects from the dynamic features that emerge across a collection of frames, using Dynamic Time Warping (DTW). The outcomes of the image-based and video-based systems are analysed by comparing the performance of both systems on each segment individually. Finally, a rank list fusion method, which combines the image-based rank lists so that the lists generated by all frames in a sequence are replaced by a single rank list for the entire sequence, is implemented to enhance performance (an illustrative sketch of this pipeline follows the abstract).
Extensive experiments were conducted using publicly available datasets to evaluate the proposed shape-based person re-identification system. In scenarios where a subject maintains the same appearance and is identified and re-identified from the same viewing angle, the image-based and video-based approaches were found to outperform a number of state-of-the-art systems. In situations where the subject is identified and re-identified from different viewing angles (inter-view) or with a change in appearance (cross-scenario), the results show comparable performance. The rank list fusion results indicate a clear performance improvement in all situations, including the inter-view and cross-scenario cases.
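The abstract describes a concrete pipeline: per-segment Generic Fourier Descriptor (GFD) signatures, frame-by-frame classification with LDA, and sequence-level matching with DTW. The sketch below is a minimal, hedged illustration of those three ingredients in Python; the polar-grid resolution, the block of retained frequency coefficients, and all of the data are stand-in assumptions for demonstration, not the settings or data used in the thesis.

```python
# Illustrative sketch only: the 64x64 polar grid, the 4x9 block of retained
# frequencies, and the random stand-in data are assumptions, not the thesis's
# actual configuration.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis


def generic_fourier_descriptor(silhouette, n_radial=4, n_angular=9):
    """Generic Fourier Descriptor of a binary silhouette (H x W array).

    The shape is resampled on a polar grid about its centroid, a 2-D FFT is
    applied, and a small block of low-frequency magnitudes is kept,
    normalised by the DC term.
    """
    ys, xs = np.nonzero(silhouette)
    cy, cx = ys.mean(), xs.mean()                       # shape centroid
    r = np.hypot(ys - cy, xs - cx)
    theta = np.arctan2(ys - cy, xs - cx)

    # Polar raster image: rows = radius bins, columns = angle bins.
    R, T = 64, 64
    polar = np.zeros((R, T))
    ri = np.minimum((r / (r.max() + 1e-9) * (R - 1)).astype(int), R - 1)
    ti = ((theta + np.pi) / (2 * np.pi) * (T - 1)).astype(int)
    polar[ri, ti] = 1.0

    spectrum = np.abs(np.fft.fft2(polar))
    dc = spectrum[0, 0] + 1e-9
    return spectrum[:n_radial, :n_angular].ravel() / dc


def dtw_distance(seq_a, seq_b):
    """Plain dynamic-time-warping distance between two signature sequences."""
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(seq_a[i - 1] - seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]


# A crude elliptical "body segment" silhouette, standing in for a real one.
yy, xx = np.mgrid[:128, :96]
silhouette = ((yy - 64) / 50) ** 2 + ((xx - 48) / 25) ** 2 <= 1.0
probe_sig = generic_fourier_descriptor(silhouette).reshape(1, -1)

# Image-based route: per-frame signatures classified with LDA.
rng = np.random.default_rng(0)
train_sigs = rng.random((40, probe_sig.shape[1]))   # stand-in GFD signatures
train_ids = np.repeat(np.arange(4), 10)             # four hypothetical subjects
lda = LinearDiscriminantAnalysis().fit(train_sigs, train_ids)
print("image-based prediction:", lda.predict(probe_sig))

# Video-based route: whole sequences of signatures compared with DTW.
gallery_seq = rng.random((25, probe_sig.shape[1]))
probe_seq = rng.random((30, probe_sig.shape[1]))
print("sequence distance:", dtw_distance(probe_seq, gallery_seq))
```

The rank list fusion step described in the abstract would then merge the per-frame LDA rank lists of a sequence into a single sequence-level list; that step is omitted from this sketch.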
| Item Type: | Thesis (Doctor of Engineering (EngDoc)) |
|---|---|
| Thesis advisor: | Guest, Richard |
| Uncontrolled keywords: | Biometrics, re-identification, Silhouette, Shape-descriptor |
| Subjects: | T Technology |
| Divisions: | Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Engineering and Digital Arts |
| Funders: | [37325] UNSPECIFIED |
| SWORD Depositor: | System Moodle |
| Depositing User: | System Moodle |
| Date Deposited: | 24 Feb 2020 10:10 UTC |
| Last Modified: | 05 Nov 2024 12:45 UTC |
| Resource URI: | https://kar.kent.ac.uk/id/eprint/80211 (The current URI for this page, for reference purposes) |