Kent Academic Repository

Mechanising Conceptual Spaces using Variational Autoencoders

Peeperkorn, Max, Saunders, Rob, Bown, Oliver, Jordanous, Anna (2022) Mechanising Conceptual Spaces using Variational Autoencoders. In: Proceedings of the Thirteenth International Conference on Computational Creativity. (KAR id:95665)

Abstract

In this pilot study, we explore the Variational Autoencoder as a computational model for conceptual spaces in a social interaction context. Conceptually, the Variational Autoencoder is a natural fit for this purpose. We apply this idea in an agent-based social creativity simulation to explore and understand the effects of social interactions on adapting conceptual spaces. We demonstrate a simple simulation setup and run experiments focused on establishing a baseline. While ongoing work needs to determine whether the adaptation was appropriate, the results so far suggest that the Variational Autoencoder can adapt to new artefacts and has potential for modelling conceptual spaces.
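The abstract builds on the standard Variational Autoencoder machinery: an encoder maps artefacts to a distribution over a latent (conceptual) space, from which samples are drawn via the reparameterisation trick and regularised by a KL term. As a minimal, hypothetical sketch of those two pieces (not the authors' implementation; function names and the 2-dimensional latent are illustrative):

```python
import numpy as np

def reparameterise(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I), keeping sampling differentiable."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dimensions."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var, axis=-1)

rng = np.random.default_rng(0)
mu = np.zeros((4, 2))       # batch of 4 artefacts, 2-dimensional latent space
log_var = np.zeros((4, 2))  # unit variance: the prior itself
z = reparameterise(mu, log_var, rng)        # latent points in the "conceptual space"
kl = kl_to_standard_normal(mu, log_var)     # zero when the posterior equals the prior
```

In a simulation such as the one described, each agent's trained encoder/decoder pair would define its individual conceptual space, and retraining on artefacts received from other agents is what adapts that space.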

Item Type: Conference or workshop item (Paper)
Subjects: Q Science > Q Science (General) > Q335 Artificial intelligence
Divisions: Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Computing
Depositing User: Max Peeperkorn
Date Deposited: 04 Jul 2022 14:01 UTC
Last Modified: 05 Nov 2024 13:00 UTC
Resource URI: https://kar.kent.ac.uk/id/eprint/95665
