Kent Academic Repository

Evaluating evaluation: Assessing progress and practices in computational creativity research

Jordanous, Anna (2019) Evaluating evaluation: Assessing progress and practices in computational creativity research. In: Veale, Tony and Cardoso, Amílcar F., eds. Computational Creativity: The Philosophy and Engineering of Autonomously Creative Systems. First edition. Computational Synthesis and Creative Systems. Springer, Cham, Switzerland, pp. 211-236. ISBN 978-3-319-43608-1. E-ISBN 978-3-319-43610-4. (doi:10.1007/978-3-319-43610-4_10) (KAR id:80114)

The full text of this publication is not currently available from this repository. You may be able to access a copy if URLs are provided.
Official URL: https://link.springer.com/chapter/10.1007/978-3-319-43610-4_10

Abstract

Computational creativity research has produced many computational systems that are described as ‘creative’. Historically, these ‘creative systems’ have received little evaluation of their actual creativity, although this has recently attracted more attention as a research perspective. As a scientific research community, computational creativity researchers can benefit from more systematic and standardised approaches to evaluating the creativity of our systems, to help us progress in understanding creativity and modelling it computationally. A methodology for creativity evaluation should accommodate different manifestations of creativity but also requires a clear, definitive statement of the tests used for evaluation. Here a historical perspective is given on how computational creativity researchers have evaluated (or not evaluated) the creativity of their systems, considering the contextual reasons behind this. Different evaluation approaches and frameworks are currently available, though it is not yet clear which (if any) of several recently proposed methods are emerging as the preferred options to use. The Standardised Procedure for Evaluating Creative Systems (SPECS) forms an overarching set of guidelines for how to tackle evaluation of creative systems and can incorporate recent proposals for creativity evaluation. To help decide which evaluation method is best to use, this chapter concludes by exploring five meta-evaluation criteria devised from cross-disciplinary research into good evaluative practice. Together, these considerations help us explore best practice in computational creativity evaluation, developing the tools available to us as computational creativity researchers.

Item Type: Book section
DOI/Identification number: 10.1007/978-3-319-43610-4_10
Uncontrolled keywords: computational creativity, creative systems
Subjects: Q Science > Q Science (General) > Q335 Artificial intelligence
Q Science > QA Mathematics (inc Computing science) > QA 76 Software, computer programming > QA76.76 Computer software
Divisions: Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Computing
Depositing User: Anna Jordanous
Date Deposited: 18 Feb 2020 16:13 UTC
Last Modified: 05 Nov 2024 12:45 UTC
Resource URI: https://kar.kent.ac.uk/id/eprint/80114 (The current URI for this page, for reference purposes)

