Sawicki, Piotr; Grzes, Marek; Goes, Fabricio; Brown, Dan; Peeperkorn, Max; Khatun, Aisha (2023). Bits of grass: does GPT already know how to write like Whitman? In: Proceedings of the 14th International Conference on Computational Creativity. (In press) (KAR id:101550)
PDF: Author's Accepted Manuscript (English, 175kB)
Abstract
This study examines the ability of the GPT-3.5, GPT-3.5-turbo (ChatGPT) and GPT-4 models to generate poems in the style of specific authors, using zero-shot and many-shot prompts (the latter filling the maximum context length of 8192 tokens). We assess, via automated evaluation, the performance of models that are not fine-tuned for generating poetry in the style of specific authors. Our findings indicate that without fine-tuning, even when provided with the maximum of 17 example poems (8192 tokens) in the prompt, these models do not generate poetry in the desired style.
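The abstract describes many-shot prompts that fill the model's 8192-token context with example poems (up to 17 in the authors' setup). The paper's exact prompt template and tokenizer are not given here, so the sketch below is purely illustrative: it packs example poems into a prompt under a token budget, approximating token counts with a words-times-constant heuristic in place of a real tokenizer.

```python
def build_many_shot_prompt(author, poems, budget_tokens=8192, tokens_per_word=1.3):
    """Pack example poems into a style prompt until the token budget is hit.

    Illustrative sketch only: the prompt wording, separator, and the
    tokens_per_word heuristic are assumptions, not the paper's method.
    """
    header = f"Write a new poem in the style of {author}.\n\nExamples:\n\n"
    used = len(header.split()) * tokens_per_word
    included = []
    for poem in poems:
        cost = len(poem.split()) * tokens_per_word
        if used + cost > budget_tokens:
            break  # next poem would overflow the context window
        included.append(poem)
        used += cost
    return header + "\n\n---\n\n".join(included), len(included)
```

With a real tokenizer (e.g. OpenAI's `tiktoken`), `cost` would be the exact encoded length of each poem; the packing logic stays the same.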
| Item Type: | Conference or workshop item (Poster) |
| --- | --- |
| Uncontrolled keywords: | GPT; NLP; poetry; fine-tuning; LLMs |
| Subjects: | Q Science > Q Science (General) > Q335 Artificial intelligence |
| Divisions: | Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Computing |
| Funders: | University of Kent (https://ror.org/00xkeyj56) |
| Depositing User: | Piotr Sawicki |
| Date Deposited: | 05 Jun 2023 17:10 UTC |
| Last Modified: | 09 Jun 2023 13:08 UTC |
| Resource URI: | https://kar.kent.ac.uk/id/eprint/101550 |