WIP: Interpreter performance optimizations (~16-24% speedup) #362
Merged
Conversation
Profile analysis of benchmark_closure.pl in interpreter mode:
- Identified ThreadLocal lookup overhead in CALL opcode
- CallerStack push/pop on every call even when caller() unused
- Deep call chain indirection for subroutine dispatch
- TreeMap lookup for line numbers

Optimization plan with 4 phases documented.

Generated with [Devin](https://cli.devin.ai/docs)
Co-Authored-By: Devin <158243242+devin-ai-integration[bot]@users.noreply.github.com>
…erence
- Cache InterpreterState.currentPackage.get() at start of execute()
- Reuse cached RuntimeScalar for SET_PACKAGE opcode
- Avoid repeated ThreadLocal lookups in CALL_SUB opcodes

No measurable speedup on benchmark_closure.pl, but cleaner code. Profile shows ~10% of time in getCallSiteInfo + getSourceLocationAccurate for caller() support - Phase 2 target.

Generated with [Devin](https://cli.devin.ai/docs)
Co-Authored-By: Devin <158243242+devin-ai-integration[bot]@users.noreply.github.com>
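A minimal sketch of the Phase 1 pattern, not the actual PerlOnJava code: the ThreadLocal field, the String payload, and the opcode constants below are simplified assumptions (the real code caches a RuntimeScalar obtained from InterpreterState.currentPackage).

```java
// Rough illustration: hoist the ThreadLocal lookup out of the dispatch loop so
// individual opcodes do not pay for a currentPackage.get() on every iteration.
public final class PackageCacheSketch {
    // Assumption: the interpreter tracks the current package per thread.
    private static final ThreadLocal<String> currentPackage =
            ThreadLocal.withInitial(() -> "main");

    static final int OP_SET_PACKAGE = 1;

    void execute(int[] opcodes, String[] operands) {
        // Cache the ThreadLocal value once per execute() call.
        String cachedPackage = currentPackage.get();

        for (int pc = 0; pc < opcodes.length; pc++) {
            switch (opcodes[pc]) {
                case OP_SET_PACKAGE -> {
                    // Update the local copy and write it back once, instead of
                    // hitting the ThreadLocal again on every later read.
                    cachedPackage = operands[pc];
                    currentPackage.set(cachedPackage);
                }
                default -> {
                    // Other opcodes read cachedPackage directly.
                }
            }
        }
    }
}
```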
Defer caller() info computation until actually needed:
- Add CallerStack.pushLazy() with lambda-based resolution
- CALL_SUB/CALL_METHOD now push lazy entries
- Line number computation only happens when caller() is called
- pop() skips resolution for unneeded entries

Benchmark improvement: 127s -> 103s = ~19% speedup on benchmark_closure.pl

Generated with [Devin](https://cli.devin.ai/docs)
Co-Authored-By: Devin <158243242+devin-ai-integration[bot]@users.noreply.github.com>
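A minimal sketch of the lazy-push idea, assuming a Supplier-based resolver; the CallerInfo fields, Entry class, and method signatures here are illustrative placeholders, not the repository's actual CallerStack API.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.function.Supplier;

// Push a resolver instead of eagerly computed caller info; only resolve it
// if caller() is actually invoked for that frame.
final class LazyCallerStackSketch {
    record CallerInfo(String packageName, String fileName, int line) {}

    static final class Entry {
        private Supplier<CallerInfo> resolver; // non-null until resolved
        private CallerInfo resolved;

        Entry(Supplier<CallerInfo> resolver) { this.resolver = resolver; }

        CallerInfo get() {
            if (resolved == null) {            // resolve lazily, at most once
                resolved = resolver.get();
                resolver = null;
            }
            return resolved;
        }
    }

    private final Deque<Entry> stack = new ArrayDeque<>();

    // pushLazy(): cheap on every call; the line-number lookup is deferred.
    void pushLazy(Supplier<CallerInfo> resolver) { stack.push(new Entry(resolver)); }

    // pop() never resolves: entries that were never inspected cost nothing extra.
    void pop() { stack.pop(); }

    // caller(): only here do we pay for the expensive source-location computation.
    CallerInfo caller() { return stack.peek().get(); }
}
```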
- Add fast path in CALL_SUB for InterpretedCode: call execute() directly
- Bypass RuntimeCode.apply() indirection chain for interpreter-to-interpreter calls
- Pass null for subroutineName to enable InterpreterFrame caching
- Apply same optimization to TAILCALL handling

Small improvement (~2%) combined with previous optimizations.

Generated with [Devin](https://cli.devin.ai/docs)
Co-Authored-By: Devin <158243242+devin-ai-integration[bot]@users.noreply.github.com>
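Roughly how such a fast path could look; the apply()/execute() signatures and argument types are placeholders, not the actual RuntimeCode or InterpretedCode API.

```java
// When the callee is interpreted code, skip the generic apply() indirection and
// enter the interpreter loop directly (the interpreter-to-interpreter fast path).
final class CallSubFastPathSketch {
    interface RuntimeCode { Object apply(Object args, int ctx); }

    static final class InterpretedCode implements RuntimeCode {
        Object execute(Object args, int ctx, String subroutineName) {
            // ... run the bytecode interpreter loop ...
            return null;
        }
        @Override public Object apply(Object args, int ctx) {
            return execute(args, ctx, "anon");
        }
    }

    static Object callSub(RuntimeCode code, Object args, int ctx) {
        if (code instanceof InterpretedCode ic) {
            // Fast path: no apply() indirection; per the commit, passing null for
            // subroutineName allows the interpreter frame to be cached.
            return ic.execute(args, ctx, null);
        }
        // Slow path: compiled or foreign code goes through the generic entry point.
        return code.apply(args, ctx);
    }
}
```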
- InterpretedCode.getRegisters() caches register arrays per-code-object
- Uses ThreadLocal for thread safety with recursion detection
- Recursive calls fall back to fresh allocation (no contention)
- BytecodeInterpreter.execute() releases registers in finally block

Benchmark: 97s from 101s baseline (4% improvement from allocation reduction)

Generated with [Devin](https://cli.devin.ai/docs)
Co-Authored-By: Devin <158243242+devin-ai-integration[bot]@users.noreply.github.com>
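A self-contained sketch of the pooling scheme under those assumptions; the register count, Object[] element type, and every method name other than getRegisters() are illustrative, not taken from the repository.

```java
// Each code object keeps one reusable register array per thread; a recursion
// flag forces fresh allocation for nested calls; execute() returns the array
// to the pool in a finally block.
final class RegisterPoolSketch {
    private final int registerCount = 32;                  // assumed size per code object

    private static final class Pooled {
        Object[] registers;
        boolean inUse;                                      // detects recursive re-entry
    }

    private final ThreadLocal<Pooled> pool = ThreadLocal.withInitial(Pooled::new);

    Object[] getRegisters() {
        Pooled p = pool.get();
        if (p.inUse) {
            // Recursive call: the cached array is busy, so allocate a fresh one.
            return new Object[registerCount];
        }
        if (p.registers == null || p.registers.length < registerCount) {
            p.registers = new Object[registerCount];        // first use on this thread
        }
        p.inUse = true;
        return p.registers;
    }

    void releaseRegisters(Object[] regs) {
        Pooled p = pool.get();
        if (regs == p.registers) {
            java.util.Arrays.fill(regs, null);              // avoid leaking references
            p.inUse = false;
        }
        // Fresh allocations from recursive calls are simply garbage-collected.
    }

    Object execute() {
        Object[] regs = getRegisters();
        try {
            // ... interpreter loop using regs ...
            return null;
        } finally {
            releaseRegisters(regs);                         // always return to the pool
        }
    }
}
```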
Total improvement: 127s → 97s (~24% speedup)
- Phase 3: Inline apply path (2% speedup)
- Phase 4: Register pooling (4% speedup)

Generated with [Devin](https://cli.devin.ai/docs)
Co-Authored-By: Devin <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Summary
Cherry-picked interpreter optimization work from PR #353 to a clean branch.
Optimizations included:
- Phase 1: ThreadLocal Caching - Cache the currentPackageScalar reference to avoid ThreadLocal lookups in the hot loop
- Phase 2: Lazy CallerStack (~19% speedup) - Defer line number computation until caller() is actually called, via pushLazy()
- Phase 3: Inline Apply Path (~2% speedup) - For InterpretedCode, bypass RuntimeCode.apply() and call BytecodeInterpreter.execute() directly
- Phase 4: Register Array Pooling (~4% speedup) - Cache register arrays per-code-object via getRegisters()

Benchmark results (from original PR): 127s → 97s on benchmark_closure.pl (~24% total speedup)
Files changed:
- src/main/java/org/perlonjava/backend/bytecode/BytecodeInterpreter.java
- src/main/java/org/perlonjava/backend/bytecode/InterpretedCode.java
- src/main/java/org/perlonjava/runtime/runtimetypes/CallerStack.java
- dev/design/INTERPRETER_OPTIMIZATION.md (design doc)

Status
Generated with Devin