Kent Academic Repository

2D recurrent neural networks: a high-performance tool for robust visual tracking in dynamic scenes

Masala, Giovanni Luca, Casu, Filippo, Golosio, Bruno, Grosso, Enrico (2019) 2D recurrent neural networks: a high-performance tool for robust visual tracking in dynamic scenes. Neural Computing and Applications, 29 (7). pp. 329-341. ISSN 0941-0643. (doi:10.1007/s00521-017-3235-x) (The full text of this publication is not currently available from this repository. You may be able to access a copy if URLs are provided) (KAR id:91405)

Official URL:
https://doi.org/10.1007/s00521-017-3235-x

Abstract

This paper proposes a novel method for robust visual tracking of arbitrary objects, based on the combination of image-based prediction and position refinement by weighted correlation. The effectiveness of the proposed approach is demonstrated on a challenging set of dynamic video sequences extracted from the triple jump final at the London 2012 Summer Olympics, and a comparison is made against five baseline tracking systems. The novel system shows remarkably superior performance with respect to the other methods in all considered cases, which are characterized by changing backgrounds and a large variety of articulated motions. The novel architecture, henceforth named the 2D Recurrent Neural Network (2D-RNN), is derived from the well-known recurrent neural network model and adopts nearest-neighborhood connections between the input and context layers in order to store the temporal information content of the video. Starting from the selection of the object of interest in the first frame, neural computation is applied to predict the position of the target in each video frame; normalized cross-correlation is then applied to refine the predicted position. The 2D-RNN ensures limited complexity, great adaptability and a very fast learning time. On the considered dataset it also shows fast execution times and very good accuracy, making this approach an excellent candidate for automated analysis of complex video streams.
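The refinement step described in the abstract — normalized cross-correlation around a predicted position — can be sketched in plain NumPy. This is a minimal illustration, not the authors' implementation: the function names, the local search window, and the `search_radius` parameter are assumptions introduced here for clarity.

```python
import numpy as np

def normalized_cross_correlation(frame, template):
    """Slide `template` over `frame` and return the NCC score map.

    Both inputs are 2D grayscale arrays; scores lie in [-1, 1], with 1
    indicating a perfect (zero-mean) match.
    """
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    fh, fw = frame.shape
    out = np.zeros((fh - th + 1, fw - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = frame[i:i + th, j:j + tw]
            p = patch - patch.mean()
            denom = t_norm * np.sqrt((p ** 2).sum())
            out[i, j] = (p * t).sum() / denom if denom > 0 else 0.0
    return out

def refine_position(frame, template, predicted_xy, search_radius=8):
    """Refine a predicted top-left target position by searching a small
    window around it for the best NCC match (hypothetical helper)."""
    x, y = predicted_xy
    th, tw = template.shape
    y0 = max(y - search_radius, 0)
    x0 = max(x - search_radius, 0)
    window = frame[y0:y0 + th + 2 * search_radius,
                   x0:x0 + tw + 2 * search_radius]
    scores = normalized_cross_correlation(window, template)
    dy, dx = np.unravel_index(scores.argmax(), scores.shape)
    return (int(x0 + dx), int(y0 + dy))
```

In a tracker following the abstract's scheme, the network's per-frame prediction would supply `predicted_xy`, and the corrected position returned here would feed both the output track and the template/state update for the next frame.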

Item Type: Article
DOI/Identification number: 10.1007/s00521-017-3235-x
Uncontrolled keywords: Recurrent neural network; Convolutional network; Video tracking; Automated video analysis
Subjects: Q Science > QA Mathematics (inc Computing science) > QA 75 Electronic computers. Computer science
Divisions: Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Computing
Depositing User: Amy Boaler
Date Deposited: 08 Nov 2021 10:20 UTC
Last Modified: 05 Nov 2024 12:57 UTC
Resource URI: https://kar.kent.ac.uk/id/eprint/91405 (The current URI for this page, for reference purposes)

University of Kent Author Information

Masala, Giovanni Luca.

Creator's ORCID: https://orcid.org/0000-0001-6734-9424
