Kent Academic Repository

New multiexpert architecture for high-performance object recognition

Fairhurst, Michael C. and Rahman, A. Fuad R. (1996) New multiexpert architecture for high-performance object recognition. In: Solomon, Susan S. and Batchelor, Bruce G. and Waltz, Frederick M., eds. Machine Vision Applications, Architectures, and Systems Integration V. Proceedings of SPIE . SPIE, pp. 140-151. ISBN 0-8194-2310-6. (doi:10.1117/12.257256) (The full text of this publication is not currently available from this repository. You may be able to access a copy if URLs are provided) (KAR id:19169)

Official URL:
http://dx.doi.org/10.1117/12.257256

Abstract

Considerable work has been reported in recent years on the utilisation of hierarchical architectures for the efficient classification of image data typically encountered in task domains such as automated inspection, part sorting and quality monitoring.(1) Such work has opened up the possibility of further enhancement through the more effective use of multiple experts within such structures, but a principal difficulty is formulating an efficient way to combine the decisions of individual experts into a consensus.(2) The approach proposed here can be envisaged as a structure with multiple layers of filters which separate an input object/image stream. In an n-way classification problem, the primary layer channels the input stream into n different streams, with subsequent processing dependent on the decisions taken at the earlier stages. The decision about combining the initially filtered streams is based on the degree of confusion among the classes present. The filter battery effectively produces two types of output: relatively well-behaved filtered streams corresponding to the defined target classes, and the patterns rejected by the filters as not belonging to any target stream. More specialised classifiers are then trained to recognise only their intended target classes, while the rejected patterns from all the second-layer filters are collected and presented to a reject-recovery classifier trained on all n input classes. Decision making thus becomes progressively more focused as the processing path is traversed, increasing the classification capability of the overall system.
In this paper, classification results are presented to illustrate the relative performance of single-expert classifiers compared with this type of multi-expert configuration, in which the single experts are integrated within the processing framework outlined above. A number of conclusions are drawn about the value and potential of hierarchical/multi-expert systems in general and, more importantly, some guidelines are offered for optimising classifier structures for particular application domains such as automated inspection.
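The routing logic described in the abstract — second-layer filters that either accept a pattern for their target class or reject it, with all rejects pooled and handed to a reject-recovery classifier trained on every class — can be sketched as follows. This is not the authors' implementation: the classifiers, feature representation, and all names here (`make_band_filter`, `classify`, `reject_recovery`) are invented for illustration, using a toy scalar feature in place of real image data.

```python
# Hypothetical sketch of the layered filter / reject-recovery scheme.
# A "filter" returns its target label when confident, else None (reject).
from typing import Callable, Optional

Filter = Callable[[float], Optional[str]]

def make_band_filter(label: str, lo: float, hi: float) -> Filter:
    """Accept a pattern (here just a scalar feature) inside [lo, hi)."""
    return lambda x: label if lo <= x < hi else None

def classify(x: float, specialists: list, reject_recovery: Callable[[float], str]):
    """Route a pattern through the second-layer filters; if every filter
    rejects it, fall back to the reject-recovery classifier, which is
    trained on all n input classes."""
    for f in specialists:
        label = f(x)
        if label is not None:
            return label, "specialist"
    return reject_recovery(x), "reject-recovery"

# Toy 3-class problem: each class occupies a band of the feature axis,
# with deliberately narrow specialist bands so borderline patterns
# between bands are rejected and fall through to the recovery stage.
specialists = [
    make_band_filter("A", 0.0, 0.9),
    make_band_filter("B", 1.1, 1.9),
    make_band_filter("C", 2.1, 3.0),
]

def reject_recovery(x: float) -> str:
    # Stands in for a classifier trained on all classes:
    # the nearest band centre decides.
    centres = {"A": 0.5, "B": 1.5, "C": 2.5}
    return min(centres, key=lambda c: abs(centres[c] - x))

# A clear-cut pattern is settled by a specialist; a borderline one
# (here x = 1.05, outside every band) reaches the recovery classifier.
print(classify(0.5, specialists, reject_recovery))   # ('A', 'specialist')
print(classify(1.05, specialists, reject_recovery))  # ('B', 'reject-recovery')
```

The split mirrors the paper's progressive focusing: each specialist only ever sees (and is trained on) its own well-behaved stream, while the harder, confusable patterns are concentrated in one place where a broader classifier can handle them.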

Item Type: Book section
DOI/Identification number: 10.1117/12.257256
Additional information: Conference on Machine Vision Applications, Architectures, and Systems Integration V BOSTON, MA, NOV 18-19, 1996 Soc Photo Opt Instrumentat Engineers
Uncontrolled keywords: object recognition; multiple-expert-classifiers; hierarchical structures
Subjects: Q Science > QA Mathematics (inc Computing science) > QA 75 Electronic computers. Computer science
Divisions: Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Engineering and Digital Arts
Depositing User: R.F. Xu
Date Deposited: 09 Jun 2009 14:17 UTC
Last Modified: 05 Nov 2024 09:55 UTC
Resource URI: https://kar.kent.ac.uk/id/eprint/19169 (The current URI for this page, for reference purposes)

University of Kent Author Information

Fairhurst, Michael C.

