Kent Academic Repository

Don't Trust Your Profiler: An Empirical Study on the Precision and Accuracy of Java Profilers

Burchell, Humphrey, Larose, Octave, Kaleba, Sophie, Marr, Stefan (2023) Don't Trust Your Profiler: An Empirical Study on the Precision and Accuracy of Java Profilers. In: MPLR '23, pp. 100-113. ACM (doi:10.1145/3617651.3622985) (KAR id:102818)

Abstract

To identify optimisation opportunities, Java developers often use sampling profilers that attribute a percentage of run time to the methods of a program. Even though these profilers use sampling, are probabilistic in nature, and may suffer, for instance, from safepoint bias, they are normally considered to be relatively reliable. However, unreliable or inaccurate profiles may misdirect developers in their quest to resolve performance issues by not correctly identifying the program parts that would benefit most from optimisations.
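To illustrate where safepoint bias comes from, consider a naive in-process sampler built on the JDK's own stack-walking API: Thread.getAllStackTraces() can only observe threads once they reach a JVM safepoint, so samples cluster at safepoint polls and code between them is systematically under-counted (async-profiler and Honest Profiler avoid this via AsyncGetCallTrace). The following is a minimal sketch, not the paper's methodology; the class name, sample count, and 10 ms interval are arbitrary illustrative choices.

    import java.util.HashMap;
    import java.util.Map;

    // Naive sampling profiler sketch: tally the top-of-stack method of
    // every thread at each sample, then report percentages. Because
    // Thread.getAllStackTraces() only sees threads at safepoints, the
    // resulting profile exhibits exactly the safepoint bias discussed above.
    public final class NaiveSampler {
        public static void main(String[] args) throws InterruptedException {
            Map<String, Integer> hits = new HashMap<>();
            for (int i = 0; i < 1_000; i++) {
                for (StackTraceElement[] frames : Thread.getAllStackTraces().values()) {
                    if (frames.length > 0) {
                        StackTraceElement top = frames[0];
                        // Attribute this sample to the top-of-stack method.
                        hits.merge(top.getClassName() + "." + top.getMethodName(),
                                   1, Integer::sum);
                    }
                }
                Thread.sleep(10); // 10 ms sampling interval (arbitrary choice)
            }
            int total = Math.max(1, hits.values().stream().mapToInt(Integer::intValue).sum());
            hits.entrySet().stream()
                .sorted(Map.Entry.<String, Integer>comparingByValue().reversed())
                .limit(5)
                .forEach(e -> System.out.printf("%6.2f%%  %s%n",
                         100.0 * e.getValue() / total, e.getKey()));
        }
    }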

With the wider adoption of profilers such as async-profiler and Honest Profiler, which are designed to avoid safepoint bias, we wanted to investigate how precise and accurate Java sampling profilers are today. We examine the precision, reliability, accuracy, and overhead of async-profiler, Honest Profiler, Java Flight Recorder, JProfiler, perf, and YourKit, all of which are actively maintained. We assess them on the fully deterministic Are We Fast Yet benchmarks to provide a stable foundation for the probabilistic profilers.

We find that profilers are relatively reliable over 30 runs and normally report the same hottest method. Unfortunately, this is not true for all benchmarks, which suggests their reliability may be application-specific. Different profilers also report different methods as hottest and cannot reliably agree on the set of top 5 hottest methods. On the positive side, the average run time overhead is in the range of 1% to 5.4% for the different profilers.
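One simple way to quantify the kind of cross-profiler disagreement described above is to compare the top-5 sets reported by two profilers and count shared methods. The sketch below is our own illustrative assumption, not the paper's exact agreement metric, and the method names are made up (loosely styled after Are We Fast Yet benchmarks).

    import java.util.LinkedHashSet;
    import java.util.List;
    import java.util.Set;

    // Measure agreement between two profilers as the overlap (0..5) of
    // their reported top-5 hottest methods.
    public final class Top5Agreement {
        static int overlap(List<String> a, List<String> b) {
            Set<String> top = new LinkedHashSet<>(a.subList(0, Math.min(5, a.size())));
            top.retainAll(b.subList(0, Math.min(5, b.size())));
            return top.size();
        }

        public static void main(String[] args) {
            // Hypothetical rankings from two profilers on the same benchmark.
            List<String> asyncProfiler = List.of("Mandelbrot.compute", "Vector.at",
                "Dictionary.hash", "Ball.bounce", "Random.next");
            List<String> jfr = List.of("Mandelbrot.compute", "Dictionary.hash",
                "Queens.place", "Vector.at", "Storage.build");
            System.out.println("Shared top-5 methods: " + overlap(asyncProfiler, jfr));
        }
    }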

Future work should investigate how results can become more reliable, perhaps by reducing the observer effect of profilers by using optimisation decisions of unprofiled runs, or by developing a principled approach to combining multiple profiles that explore different dynamic optimisations.
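As a rough reading of what "combining multiple profiles" could mean in practice, one could average each method's reported run-time share across several profiling runs. This simple mean is purely our own assumption for illustration; the paper leaves the principled combination approach as future work.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Merge several profiles (method -> % of run time) by averaging
    // each method's share across runs.
    public final class ProfileMerge {
        static Map<String, Double> merge(List<Map<String, Double>> profiles) {
            Map<String, Double> sum = new HashMap<>();
            for (Map<String, Double> p : profiles) {
                p.forEach((method, pct) -> sum.merge(method, pct, Double::sum));
            }
            sum.replaceAll((method, total) -> total / profiles.size());
            return sum;
        }

        public static void main(String[] args) {
            // Hypothetical per-run percentages for two profiled runs.
            Map<String, Double> run1 = Map.of("Mandelbrot.compute", 42.0, "Vector.at", 11.0);
            Map<String, Double> run2 = Map.of("Mandelbrot.compute", 38.0, "Vector.at", 15.0);
            System.out.println(merge(List.of(run1, run2)));
        }
    }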

Item Type: Conference or workshop item (Paper)
DOI/Identification number: 10.1145/3617651.3622985
Additional information: For the purpose of open access, the author has applied a CC BY public copyright licence to any Author Accepted Manuscript version arising from this submission.
Uncontrolled keywords: profiling; CPU sampling; Profiler comparison; Analysis tools; Profiler precision
Subjects: T Technology
Divisions: Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Computing
Funders: Engineering and Physical Sciences Research Council (https://ror.org/0439y7842)
Royal Society (https://ror.org/03wnrjx87)
Depositing User: Stefan Marr
Date Deposited: 18 Sep 2023 22:28 UTC
Last Modified: 06 Mar 2024 12:48 UTC
Resource URI: https://kar.kent.ac.uk/id/eprint/102818 (The current URI for this page, for reference purposes)
