Quantifying Performance Changes with Effect Size Confidence Intervals

Kalibera, Tomas and Jones, Richard E. (2012) Quantifying Performance Changes with Effect Size Confidence Intervals. Technical Report 4-12, University of Kent. (KAR id:30809)

The full text of this publication is not currently available from this repository. You may be able to access a copy if URLs are provided.
Official URL:
http://www.cs.kent.ac.uk/pubs/2012/3233

Abstract

Measuring performance and quantifying a performance change are core evaluation techniques in programming language and systems research. Out of 122 recent scientific papers published at PLDI, ASPLOS, ISMM, TOPLAS, and TACO, as many as 65 included experimental evaluation that quantified a performance change using a ratio of execution times. Unfortunately, few of these papers evaluated their results with the level of rigour that has come to be expected in other experimental sciences. The uncertainty of measured results was largely ignored. Scarcely any of the papers mentioned uncertainty in the ratio of the mean execution times, and most did not even mention uncertainty in the two means themselves. Furthermore, most of the papers failed to address the non-deterministic execution of computer programs (caused by factors such as memory placement, for example), and none addressed non-deterministic compilation (when a compiler creates different binaries from the same sources, which differ in performance, again for example because of effects on memory placement). It turns out that the statistical methods presented in the computer systems performance evaluation literature for the design and summary of experiments do not readily allow this either. This poses a hazard to the repeatability, reproducibility and even validity of quantitative results. Inspired by statistical methods used in other fields of science, and building on results in statistics that did not make it into introductory textbooks, we present a statistical model that allows us both to quantify uncertainty in the ratio of (execution time) means and to design experiments with a rigorous treatment of those multiple sources of non-determinism that might impact measured performance. Better still, under our framework summaries can be as simple as "system A is faster than system B by 5.5% ± 2.5%, with 95% confidence", a more natural statement than those derived from typical current practice, which are often misinterpreted.
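The core technique the abstract describes, reporting a performance change as a ratio of mean execution times together with a confidence interval, can be illustrated with a much simpler calculation than the paper's full model (which accounts for nested non-determinism across repetitions, executions, and compiled binaries). The Python sketch below is not the authors' method, only a minimal one-level percentile-bootstrap interval for the ratio of two means; the function, its parameters, and the sample timings are all hypothetical.

import numpy as np

rng = np.random.default_rng(0)

def ratio_ci(times_a, times_b, n_boot=10_000, level=0.95):
    # Percentile-bootstrap confidence interval for the speedup
    # mean(times_b) / mean(times_a): how much faster A is than B.
    a = np.asarray(times_a, dtype=float)
    b = np.asarray(times_b, dtype=float)
    ratios = np.empty(n_boot)
    for i in range(n_boot):
        # Resample each system's measurements with replacement.
        ratios[i] = rng.choice(b, b.size).mean() / rng.choice(a, a.size).mean()
    alpha = (1.0 - level) / 2.0
    lo, hi = np.quantile(ratios, [alpha, 1.0 - alpha])
    return b.mean() / a.mean(), lo, hi

# Hypothetical timings (seconds): system A is the new system,
# system B the baseline it is compared against.
times_a = rng.normal(9.45, 0.30, size=30)
times_b = rng.normal(10.00, 0.30, size=30)
point, lo, hi = ratio_ci(times_a, times_b)
print(f"speedup {point:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")

A point estimate of 1.055 with an interval of [1.030, 1.080] would be summarised, in the style the abstract advocates, as "system A is faster than system B by 5.5% ± 2.5%, with 95% confidence". Note that the paper's actual intervals come from a hierarchical random-effects model designed for the multiple sources of non-determinism listed above, not from this simple one-level bootstrap.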

Item Type: Reports and Papers (Technical report)
DOI/Identification number: 4-12 (technical report number, not a DOI)
Uncontrolled keywords: statistical methods, random effects, effect size
Subjects: Q Science > QA Mathematics (inc Computing science) > QA 76 Software, computer programming
Divisions: Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Computing
Depositing User: Richard Jones
Date Deposited: 21 Sep 2012 09:49 UTC
Last Modified: 16 Nov 2021 10:08 UTC
Resource URI: https://kar.kent.ac.uk/id/eprint/30809 (The current URI for this page, for reference purposes)

University of Kent Author Information

Kalibera, Tomas.

Jones, Richard E.

Creator's ORCID: https://orcid.org/0000-0002-8159-0297
