Kent Academic Repository

Truffle Interpreter Performance without the Holy Graal

Marr, Stefan, Larose, Octave, Kaleba, Sophie, Seaton, Chris (2022) Truffle Interpreter Performance without the Holy Graal. In: The 2022 Graal Workshop: Science, Art, Magic: Using and Developing The Graal Compiler, 2022-04-02, Online, Virtual. (Unpublished) (KAR id:93938)

Abstract

Language implementation frameworks such as Truffle+Graal and RPython make the promise of state-of-the-art performance by implementing “just” the interpreter, and leaving the rest to the frameworks, which add a just-in-time compiler, garbage collection, and various other bits “for free”. One important assumption for these frameworks is that real systems do not spend a lot of time interpreting user code, but reach highly-optimized compiled code quickly.

Unfortunately, for large codebases with millions of lines of code, this assumption does not hold as well as it does for common benchmarks: a significant amount of time is spent interpreting code. This is only exacerbated by modern development practices, under which what one would assume to be long-running server applications are updated every 30 minutes. In practice, this means that for large and actively developed codebases, interpreter performance is key.

This brings us to the question of how Truffle-based interpreters such as Graal.js, TruffleRuby, GraalPython, and TruffleSOM compare to commonly used interpreter implementations for the same language. We will present our results comparing these interpreters with and without just-in-time compilation on the Are We Fast Yet benchmarks, which were designed for cross-language comparison.

We will further analyze where these interpreters spend their time, and experiment with an approach to approximate “best case” performance, assuming an interpreter could perform optimizations at the method level without requiring just-in-time compilation.

Based on our observations, we will discuss a number of possible steps forward: supernodes, i.e., node combination; object inlining; and generating interpreters using Graal’s partial evaluator. All of these techniques attempt to mitigate the performance cost of the “everything is a node” implementation style of Truffle interpreters, which leads to a costly run-time program representation and a high degree of redundancy in the correctness checks performed during interpretation.
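
As an illustrative aside (not part of the abstract or the talk), the sketch below uses simplified, hypothetical stand-in classes rather than the real Truffle API to show the idea behind supernodes: in the “everything is a node” style, a simple expression is a tree of small node objects that dispatch to each other on every execution, whereas a supernode folds a recurring pattern into a single node and avoids that per-child dispatch and repeated checking.

```java
// Illustrative sketch only: simplified stand-in classes, NOT the real Truffle API.
// Shows the "everything is a node" style versus a combined supernode.

abstract class Node {
    // Hypothetical frame: an array of local variable slots.
    abstract long execute(long[] frame);
}

final class ReadLocalNode extends Node {
    private final int slot;
    ReadLocalNode(int slot) { this.slot = slot; }
    long execute(long[] frame) { return frame[slot]; }
}

final class ConstantNode extends Node {
    private final long value;
    ConstantNode(long value) { this.value = value; }
    long execute(long[] frame) { return value; }
}

final class AddNode extends Node {
    private final Node left, right; // two child nodes, each dispatched virtually
    AddNode(Node left, Node right) { this.left = left; this.right = right; }
    long execute(long[] frame) {
        // Every execution pays two virtual calls plus whatever checks the children repeat.
        return Math.addExact(left.execute(frame), right.execute(frame));
    }
}

// Supernode: the pattern "local + constant" combined into one node.
// One object, one execute call, no child dispatch, checks done once.
final class AddConstantToLocalNode extends Node {
    private final int slot;
    private final long constant;
    AddConstantToLocalNode(int slot, long constant) { this.slot = slot; this.constant = constant; }
    long execute(long[] frame) { return Math.addExact(frame[slot], constant); }
}

public class SupernodeSketch {
    public static void main(String[] args) {
        long[] frame = { 40 };
        Node generic  = new AddNode(new ReadLocalNode(0), new ConstantNode(2));
        Node combined = new AddConstantToLocalNode(0, 2);
        System.out.println(generic.execute(frame));  // 42, via three node objects
        System.out.println(combined.execute(frame)); // 42, via a single supernode
    }
}
```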

Item Type: Conference or workshop item (Speech)
Divisions: Divisions > Division of Computing, Engineering and Mathematical Sciences > School of Computing
Depositing User: Stefan Marr
Date Deposited: 06 Apr 2022 22:04 UTC
Last Modified: 08 Apr 2022 13:29 UTC
Resource URI: https://kar.kent.ac.uk/id/eprint/93938 (The current URI for this page, for reference purposes)

University of Kent Author Information

Marr, Stefan. ORCID: https://orcid.org/0000-0001-9059-5180
Larose, Octave.
Kaleba, Sophie.
