Angelov, Plamen, Gu, Xiaowei, Kangin, Dmitry, Principe, Jose (2016) Empirical data analysis: A new tool for data analytics. In: 2016 IEEE International Conference on Systems, Man, and Cybernetics (SMC), pp. 52-59. IEEE. ISBN 978-1-5090-1898-7. (doi:10.1109/SMC.2016.7844219) (KAR id:90177)
The full text of this publication is not currently available from this repository. You may be able to access a copy if URLs are provided.
Official URL: https://doi.org/10.1109/SMC.2016.7844219
Abstract
In this paper, a novel empirical data analysis (EDA) approach is introduced which is entirely data-driven and free from restrictive assumptions and pre-defined problem- or user-specific parameters and thresholds. It is well known that traditional probability theory rests on strong prior assumptions which are often impractical and do not hold in real problems. Machine learning methods, on the other hand, are closer to real problems, but they usually rely on problem- or user-specific parameters or thresholds, making them rather an art than a science. In this paper we introduce a theoretically sound yet practically unrestricted and widely applicable approach that is based on the density in the data space.

Since the data may take exactly the same value multiple times, we distinguish between data points and unique locations in the data space. The number of data points, k, is larger than or equal to the number of unique locations, l, and each unique location is occupied by at least one data point. The number of data points that share exactly the same location in the data space (equal value), f, can be seen as a frequency. Through the combination of the spatial density and the frequency of occurrence of discrete data points, a new concept called multimodal typicality, τ^MM, is proposed in this paper. It offers a closed analytical form that represents ensemble properties derived entirely from the empirical observations of the data. Moreover, it is close to, yet distinct from, histograms, the probability density function (pdf), and fuzzy set membership functions. Remarkably, no complicated pre-processing such as clustering is needed to obtain the multimodal representation. Moreover, the closed form for the Euclidean and Mahalanobis types of distance, as well as for some other forms (e.g. cosine-based dissimilarity), can be computed recursively, making it applicable to data streams and online algorithms. Inference/estimation of the typicality of data points that have not yet been observed can also be made. This new concept allows the very foundations of statistical and machine learning to be rethought, and enables a series of anomaly detection, clustering, classification, prediction, control and other algorithms to be developed.
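To make the idea of combining spatial density with frequency more concrete, below is a minimal sketch in Python. It is not the paper's exact formulation (the abstract does not give the closed form); it assumes a Cauchy-type empirical density built from the average squared Euclidean distance to all data points, expressed through the mean and the mean squared norm so that an online/recursive update would be possible. The function name `multimodal_typicality`, the normalisation over unique locations, and the density form are illustrative assumptions.

```python
import numpy as np

def multimodal_typicality(data):
    """Sketch of a multimodal-typicality-style score (illustrative, not the paper's formula).

    `data` is an (N, d) array; repeated rows are data points occupying the
    same unique location in the data space.
    """
    # Unique locations and the number of data points at each one (the frequencies f).
    unique_locs, freqs = np.unique(data, axis=0, return_counts=True)

    # Average squared Euclidean distance from a location x to all data points
    # equals ||x - mu||^2 + X - ||mu||^2, where mu is the mean and X the mean
    # squared norm -- both of which can be updated recursively on a stream.
    mu = data.mean(axis=0)
    X = np.mean(np.sum(data ** 2, axis=1))
    sq_dist_to_mean = np.sum((unique_locs - mu) ** 2, axis=1)

    # Assumed Cauchy-type density (higher where data are denser).
    density = 1.0 / (1.0 + sq_dist_to_mean + X - np.sum(mu ** 2))

    # Weight the spatial density by frequency and normalise over unique locations.
    weighted = freqs * density
    return unique_locs, weighted / weighted.sum()


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sample = np.round(rng.normal(size=(200, 2)), 1)  # rounding creates repeated locations
    locs, tau = multimodal_typicality(sample)
    print(locs[np.argmax(tau)], tau.max())           # the most "typical" unique location
```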
| Item Type: | Conference or workshop item (Paper) |
|---|---|
| DOI/Identification number: | 10.1109/SMC.2016.7844219 |
| Uncontrolled keywords: | Data analysis; Histograms; Temperature distribution; Probability density function; Meteorology; Conferences; Cybernetics; empirical data analysis; multimodal typicality; data-driven; recursive calculation; inference; estimation |
| Subjects: | Q Science > QA Mathematics (inc Computing science) > QA 75 Electronic computers. Computer science |
| Divisions: | Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Computing |
| Depositing User: | Amy Boaler |
| Date Deposited: | 13 Sep 2021 10:45 UTC |
| Last Modified: | 05 Nov 2024 12:55 UTC |
| Resource URI: | https://kar.kent.ac.uk/id/eprint/90177 |