Kent Academic Repository

Supervisor and searcher co-operation algorithms for stochastic optimisation with application to neural network training

Sirlantzis, Konstantinos (2002) Supervisor and searcher co-operation algorithms for stochastic optimisation with application to neural network training. Doctor of Philosophy (PhD) thesis, University of Kent at Canterbury, Canterbury, UK. (doi:10.22024/UniKent/01.02.7419) (KAR id:7419)

Abstract

In this thesis we studied a novel class of algorithms for unconstrained optimisation, with particular focus on the issues arising in noisy optimisation. These algorithms were developed using an innovative framework for the design of efficient and robust algorithms, namely the Supervisor and Searcher Co-operation (SSC) framework. This framework provides a systematic way to incorporate desirable characteristics of existing algorithms into a new, improved scheme.
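
The abstract does not detail the two components, but the name suggests a searcher that proposes trial points and a supervisor that monitors progress and steers the search. The sketch below is only a minimal illustration of such a split, with an assumed random-search searcher and an assumed step-size-adapting supervisor; it is not the thesis's SSC algorithm.

    import numpy as np

    def noisy_objective(x, rng):
        # Hypothetical test problem: a simple quadratic corrupted by Gaussian noise.
        return float(x @ x) + 0.1 * rng.standard_normal()

    def searcher_step(x, step_size, rng):
        # "Searcher": proposes a trial point. Plain random search is used here only
        # as a stand-in; the abstract does not specify the searcher component.
        return x + step_size * rng.standard_normal(x.shape)

    def supervisor_update(history, step_size):
        # "Supervisor": watches the observed (noisy) objective values and adapts
        # the searcher's step size. The rule below is purely illustrative.
        if len(history) >= 2 and history[-1] > history[-2]:
            return 0.5 * step_size   # shrink after an apparent failure
        return 1.1 * step_size       # grow cautiously otherwise

    def ssc_like_loop(x0, f, iters=200, step_size=0.5, seed=0):
        rng = np.random.default_rng(seed)
        x, history = np.asarray(x0, dtype=float), []
        for _ in range(iters):
            trial = searcher_step(x, step_size, rng)
            f_x, f_trial = f(x, rng), f(trial, rng)
            if f_trial < f_x:        # accept only apparent improvements
                x = trial
            history.append(min(f_x, f_trial))
            step_size = supervisor_update(history, step_size)
        return x

    print(ssc_like_loop([2.0, -1.5], noisy_objective))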

The aim was to explore the properties of the SSC-based algorithms, focusing on their behaviour in practice in the presence of stochastic noise. To this end, a basic algorithm was first proposed, together with a number of modifications and extensions. Their properties were then evaluated systematically through a variety of experiments involving a wide range of non-trivial deterministic and stochastic problems. Our findings suggest that the SSC algorithms are demonstrably efficient in the deterministic case and, more importantly, that they are robust enough to address successfully the difficulties arising in the presence of stochastic noise. They can also easily be modified to meet specific application requirements, while the resulting algorithms retain the desirable properties of the original algorithm.
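
As an illustration of how a stochastic test problem can be built from a deterministic one (the specific benchmarks used in the thesis are not listed in this abstract), one common approach is to corrupt each evaluation of a standard function, such as Rosenbrock's, with additive Gaussian noise:

    import numpy as np

    def rosenbrock(x):
        # Classic deterministic benchmark (not necessarily one used in the thesis).
        x = np.asarray(x, dtype=float)
        return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1.0 - x[:-1]) ** 2))

    def make_stochastic(f, noise_std, seed=0):
        # Wrap a deterministic objective so every evaluation is corrupted by
        # additive Gaussian noise -- one simple way to build a stochastic test case.
        rng = np.random.default_rng(seed)
        def noisy(x):
            return f(x) + noise_std * rng.standard_normal()
        return noisy

    noisy_rosenbrock = make_stochastic(rosenbrock, noise_std=0.5)
    print(noisy_rosenbrock([1.0, 1.0]))  # true minimum value is 0; observations vary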

Finally, to assess the applicability of the SSC algorithms to real-world problems, an adaptation of the basic SSC algorithm was proposed for use in multilayer neural network training. The corresponding evaluations were performed through statistical experiments on a number of regression and classification problems, designed to cover the complex issues associated with neural network learning, such as overtraining and, above all, generalisation ability. The SSC-based algorithm exhibited significantly better performance than the two algorithms used as benchmarks for comparison. Specifically, it was demonstrably faster with respect to reducing the error on the training set and, more importantly, it showed an increased ability to avoid overtraining and hence to generalise (perform successfully on unseen samples), which is the ultimate goal of learning in neural networks.
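
The abstract does not specify the experimental protocol, but a standard way to detect overtraining, sketched below purely for illustration, is to track the error on a held-out validation set and stop training once it no longer improves (early stopping); the function names and parameters here are hypothetical.

    def train_with_overtraining_check(train_step, train_error, val_error,
                                      max_epochs=500, patience=20):
        # Generic loop: train_step() updates the model in place; train_error()
        # and val_error() return the current errors. Training stops when the
        # validation error has not improved for `patience` epochs -- a standard
        # overtraining check, not necessarily the protocol used in the thesis.
        best_val, best_epoch = float("inf"), 0
        for epoch in range(max_epochs):
            train_step()
            tr_err, val_err = train_error(), val_error()
            if val_err < best_val:
                best_val, best_epoch = val_err, epoch
            elif epoch - best_epoch >= patience:
                break   # further training mostly fits noise in the training set
        return best_val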

Item Type: Thesis (Doctor of Philosophy (PhD))
DOI/Identification number: 10.22024/UniKent/01.02.7419
Subjects: T Technology > TK Electrical engineering. Electronics. Nuclear engineering > TK7800 Electronics
Divisions: Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Engineering and Digital Arts
Depositing User: Konstantinos Sirlantzis
Date Deposited: 18 Sep 2008 16:04 UTC
Last Modified: 05 Nov 2024 09:39 UTC
Resource URI: https://kar.kent.ac.uk/id/eprint/7419 (The current URI for this page, for reference purposes)

University of Kent Author Information

Sirlantzis, Konstantinos.

Creator's ORCID: https://orcid.org/0000-0002-0847-8880
