High-resolution performance logging for Python with persistent session tracking.
LogPulse is designed for developers who need to monitor execution latency across multiple, independent script runs—perfect for benchmarking RAG systems, AI Agents, and ephemeral cloud functions.
- ⏱️ Nanosecond Precision: Uses `time.perf_counter_ns()` for the highest possible accuracy.
- 💾 Persistent Sessions: Automatically tracks run IDs across script restarts using a local state file.
- 🏷️ Session Tagging: Group your runs (e.g., `gpt-4o-test`, `v1-prompt`) for easy A/B comparison.
- 🔁 Multi-Module Smart Reuse (v0.2.0+): Detects and consolidates multiple LogPulse instances in the same execution, so no counter inflation!
- 📊 Built-in Visualization: One-line plotting for latency trends, distributions, and boxplots.
- ⚡ Lightweight: Zero-dependency core (Pandas only used for export/stats).
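The nanosecond-precision claim above rests on `time.perf_counter_ns()`, which returns an integer nanosecond count and avoids float rounding for short intervals. A minimal, library-independent sketch of a context-manager timer built on it (the `measure` helper and `results` list are illustrative, not LogPulse's API):

```python
import time
from contextlib import contextmanager

@contextmanager
def measure(label, results):
    """Record the elapsed wall-clock time for a block, in nanoseconds."""
    start = time.perf_counter_ns()
    try:
        yield
    finally:
        results.append((label, time.perf_counter_ns() - start))

results = []
with measure("sleep_1ms", results):
    time.sleep(0.001)

label, elapsed_ns = results[0]
print(label, elapsed_ns)  # elapsed_ns is a positive integer nanosecond count
```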
Install the core library:

```bash
pip install logpulse
```

Or include the visualization suite:

```bash
pip install "logpulse[viz]"
```

```python
from logpulse import LogPulse

# Initialize (creates logs/perf_metrics.csv by default)
tracker = LogPulse(session_tag="experiment-alpha")

# Use as a decorator
@tracker.timeit("heavy_task")
def my_function():
    # ... your code ...
    pass

# Or as a context manager
with tracker.measure("database_query"):
    # ... code to measure ...
    pass

# Save results to disk
tracker.save()
```

Because LogPulse remembers your previous runs, you can visualize trends across time:
```python
from logpulse.viz import PulseVisualizer

viz = PulseVisualizer()

# Compare different session tags side-by-side
viz.compare_sessions(tags=["gpt-4o-test", "gpt-3.5-test"])

# View the latency distribution (density plot)
viz.plot_distribution()
```

LogPulse automatically detects when multiple modules import it with the same `session_tag` within a single execution. Instead of inflating counters, all instances share the same run IDs and measurements list:
```python
# module_a.py
from logpulse import LogPulse

logger = LogPulse(session_tag="my_pipeline")
```

```python
# module_b.py
from logpulse import LogPulse

logger = LogPulse(session_tag="my_pipeline")  # Reuses run IDs from module_a!
# When any instance calls save(), all measurements are captured
```

This solves the "counter inflation" problem in multi-module projects: one execution = one accurate run count, regardless of how many modules create LogPulse instances.
Unlike other loggers, LogPulse stores global state in `.logpulse_state.json`. If you run your script 100 times in a row, the `run_id` will correctly increment from 1 to 100 in your CSV, allowing true time-series analysis of ephemeral scripts.
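The mechanics are simple enough to sketch with the stdlib: load the last run ID from disk, increment, write it back. The file name and schema below are illustrative, not LogPulse's actual state format:

```python
import json
from pathlib import Path

STATE_FILE = Path(".logpulse_state_demo.json")  # hypothetical name for this demo

def next_run_id(state_file=STATE_FILE):
    """Load the last run id from disk, increment it, and persist it back.

    Each process start therefore gets a monotonically increasing id,
    even though the process itself keeps no memory between runs.
    """
    state = {"run_id": 0}
    if state_file.exists():
        state = json.loads(state_file.read_text())
    state["run_id"] += 1
    state_file.write_text(json.dumps(state))
    return state["run_id"]

print(next_run_id())  # 1 on a fresh state file
print(next_run_id())  # 2 on the next call (or the next script run)
```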
To reset the global counter and start fresh:

```python
tracker = LogPulse()
tracker.clear_history(delete_logs=True)
```

LogPulse is built for RAG evaluation. Use different `session_tag` values to compare:
- Different LLM models (GPT-4 vs. Claude 3.5)
- Different chunk sizes
- Different embedding models
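Because the log is plain tabular data, a quick tag-by-tag comparison needs nothing beyond the stdlib. The column names below (`session_tag`, `duration_ms`) are assumptions for illustration; check the header of your actual export:

```python
import csv
import io
from collections import defaultdict
from statistics import mean

# Illustrative rows in the shape of an exported metrics CSV.
raw = """session_tag,duration_ms
gpt-4o-test,120.5
gpt-4o-test,98.2
gpt-3.5-test,61.0
gpt-3.5-test,74.3
"""

# Group durations by session tag, then report a per-tag mean latency.
by_tag = defaultdict(list)
for row in csv.DictReader(io.StringIO(raw)):
    by_tag[row["session_tag"]].append(float(row["duration_ms"]))

for tag, durations in sorted(by_tag.items()):
    print(f"{tag}: mean {mean(durations):.1f} ms over {len(durations)} runs")
```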
Distributed under the MIT License. See LICENSE for more information.