Saunders, Jack and Freitas, Alex A. (2022) Evaluating the predictive performance of positive-unlabelled classifiers: a brief critical review and practical recommendations for improvement. ACM SIGKDD Explorations Newsletter, 24 (2), pp. 5-11. doi:10.1145/3575637.3575642 (KAR id: 106802)
Official URL: https://dl.acm.org/doi/abs/10.1145/3575637.3575642
Abstract
Positive-Unlabelled (PU) learning is a growing area of machine learning that aims to learn classifiers from data consisting of labelled positive and unlabelled instances. Whilst much work has been done proposing methods for PU learning, little has been written on the subject of evaluating these methods. Many popular standard classification metrics cannot be precisely calculated due to the absence of fully labelled data, so alternative approaches must be taken. This short commentary paper critically reviews the main PU learning evaluation approaches and the choice of predictive accuracy measures in 51 articles proposing PU classifiers and provides practical recommendations for improvements in this area.
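
The evaluation difficulty the abstract describes can be made concrete with a small sketch. Under the commonly assumed "selected completely at random" (SCAR) labelling mechanism, recall can be estimated from the labelled positives alone, whereas precision cannot, because the unlabelled set may contain hidden positives; the Lee & Liu (2003) criterion, recall² / Pr(ŷ = 1), is one widely cited proxy that is computable from PU data and is proportional to precision × recall. The snippet below is only an illustration of that idea under the SCAR assumption; the function and variable names are hypothetical and it is not taken from the paper under review.

```python
import numpy as np

def pu_evaluation_sketch(labelled_pos, y_pred):
    """Illustrative PU evaluation under the SCAR assumption (hypothetical helper).

    labelled_pos : 1 where the instance is a labelled positive, 0 where unlabelled.
    y_pred       : 1 where the classifier predicts positive, 0 otherwise.

    Standard precision is not computable because unlabelled instances may be
    hidden positives. Recall on the labelled positives estimates true recall
    under SCAR, and the Lee & Liu (2003) criterion recall**2 / Pr(y_pred = 1)
    is proportional to precision * recall.
    """
    labelled_pos = np.asarray(labelled_pos)
    y_pred = np.asarray(y_pred)

    # Estimated recall: fraction of labelled positives that are predicted positive.
    n_labelled = labelled_pos.sum()
    recall_est = float((y_pred[labelled_pos == 1] == 1).mean()) if n_labelled > 0 else 0.0

    # Fraction of all instances predicted positive, Pr(y_pred = 1).
    pr_pred_pos = float(y_pred.mean())

    lee_liu = recall_est ** 2 / pr_pred_pos if pr_pred_pos > 0 else 0.0
    return {"recall_estimate": recall_est, "lee_liu_criterion": lee_liu}

# Hypothetical usage: 1 = labelled positive, 0 = unlabelled; predictions from some PU classifier.
labels = [1, 1, 0, 0, 0, 0, 1, 0]
preds  = [1, 0, 1, 0, 0, 1, 1, 0]
print(pu_evaluation_sketch(labels, preds))
```

For these toy arrays the estimated recall is 2/3 and the criterion value is roughly 0.89; the point is simply that both quantities can be computed without knowing the true classes of the unlabelled instances, which is the kind of alternative approach the abstract alludes to.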
| Item Type: | Article |
|---|---|
| DOI/Identification number: | 10.1145/3575637.3575642 |
| Uncontrolled keywords: | machine learning; data mining; positive-unlabelled learning; classification |
| Subjects: | Q Science > Q Science (General) > Q335 Artificial intelligence |
| Divisions: | Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Computing |
| Funders: | University of Kent (https://ror.org/00xkeyj56) |
| Depositing User: | Alex Freitas |
| Date Deposited: | 06 Aug 2024 15:39 UTC |
| Last Modified: | 05 Nov 2024 13:12 UTC |
| Resource URI: | https://kar.kent.ac.uk/id/eprint/106802 |