[DataLoader] Emit OpenTelemetry metrics for read operations#582

Open
ShreyeshArangath wants to merge 8 commits into linkedin:main from ShreyeshArangath:feat/reader-metrics

Conversation


@ShreyeshArangath ShreyeshArangath commented May 12, 2026

Summary

DataLoader reads have no built-in observability today, so callers cannot track read latency, retries, or errors. This PR emits OpenTelemetry metrics from the DataLoader so operators can monitor table-reading workloads.

Changes

  • Client-facing API Changes
  • Internal API Changes
  • Bug Fixes
  • New Features
  • Performance Improvements
  • Code Style
  • Refactoring
  • Documentation
  • Tests

For all the boxes checked, please include additional details of the changes made in this pull request.

Testing Done

  • Manually tested on local Docker setup. Please include commands run, and their output.
  • Added new tests for the changes made.
  • Updated existing tests to reflect the changes made.
  • No tests added or updated. Please explain why. If unsure, please feel free to ask for help.
  • Some other form of testing like staging or soak time in production. Please explain.

For all the boxes checked, include a detailed description of the testing done for the changes made in this pull request.

Additional Information

  • Breaking Changes
  • Deprecations
  • Large PR broken into smaller PRs, and PR plan linked in the description.

For all the boxes checked, include additional details of the changes made in this pull request.

Wire OpenTelemetry instruments into the dataloader read path so
operators can observe per-split and per-batch latency, throughput, and
failure rates per table. Adds load_table/plan_files duration & attempt
counters around the retry loop, and split/batch duration, row, byte,
batch-count and error instruments around DataLoaderSplit and
_TimedBatchIter. Common attributes (database, table, branch, and
caller-provided execution_context under openhouse.ctx.*) are attached
via a single build_attributes helper. execution_context is plumbed
through TableScanContext so worker-side splits carry the same labels.
DataLoaderContext gains a new ``metric_attributes`` field; values are
attached verbatim to every emitted metric, so callers choose exactly which
keys become metric dimensions (and at what cardinality) rather than the
loader auto-promoting every ``execution_context`` key. ``execution_context``
keeps its original transformer-only role.

Drops the unused MappingProxyType empty-context singleton in favor of
``default_factory=dict`` per Python dataclass idiom, removes the now-superfluous
branch attribute, and documents why metric names stay lowercase — OSS
OpenTelemetry SDK normalizes names to lowercase on export, which would
turn LinkedIn-spec PascalCase into unreadable concatenated tokens
(``OpenHouse.DataLoader.LoadTableTime`` → ``openhouse.dataloader.loadtabletime``).
PascalCase is only preserved by the linkedin.opentelemetry wrapper, which
this OSS library does not depend on.
Drops the dedicated metrics/instruments.py module; instruments are now
declared at the top of the file that uses them (data_loader.py for the
load_table / plan_files pair, data_loader_split.py for the split / batch
pairs). Matches the pattern used by OSS Python OTel libraries that emit
their own metrics — no central inventory file, no extra import indirection.

Metric and attribute names switch to LinkedIn's
``{Domain}.{Subsystem}.{MetricName}`` PascalCase convention
(``OpenHouse.DataLoader.LoadTableTime``, ``OpenHouse.Database``, etc.) per
observability-instrumentation/concepts/metrics.md. METER_NAME becomes
``OpenHouse.DataLoader``. The OSS opentelemetry-sdk lowercases instrument
names at registration time, so the stored/exported form ends up as
``openhouse.dataloader.loadtabletime``; tests assert against that lowercased
form and a docstring on the test helper points to the SDK source so the
discrepancy is obvious to a future reader. Source matches LinkedIn
schema-registry convention; runtime form is whatever the SDK emits.

build_attributes now merges caller-provided extras verbatim — no
``openhouse.ctx.*`` namespacing — so callers control dimension naming
entirely.
Replaces internal identifiers (load_table, plan_files, DataLoaderSplit,
RecordBatch) in instrument descriptions with prose suited to dashboard
labels. Also reverts the metrics package docstring to match master.
# Conflicts:
#	integrations/python/dataloader/src/openhouse/dataloader/data_loader.py
Replaces DataLoaderContext.metric_attributes with metric_attribute_keys,
a whitelist of keys pulled from execution_context. Single source of
truth for caller-supplied identifiers; callers opt in to which keys
become metric dimensions, keeping cardinality controlled.
The build_attributes helper merged OpenHouse.Database / OpenHouse.Table
with caller-supplied extras at every emission boundary. Move that merge
into the loader's cached _resolved_metric_attributes property so the
table-level identifier is part of the dict from the start, and drop
the helper plus its module.
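A pure-Python sketch of the whitelist-plus-cached-merge described above (field and property names come from the commit messages; the class shapes are assumptions, not the actual module):

```python
# Hypothetical sketch: metric_attribute_keys whitelists which
# execution_context keys become metric dimensions, and the merge with
# table-level identifiers happens once, in a cached property.
from dataclasses import dataclass, field
from functools import cached_property


@dataclass
class DataLoaderContext:
    database: str
    table: str
    # Free-form identifiers passed through to table transformers.
    execution_context: dict = field(default_factory=dict)
    # Opt-in whitelist: only these keys become metric dimensions.
    metric_attribute_keys: tuple = ()


class DataLoader:
    def __init__(self, ctx: DataLoaderContext):
        self._ctx = ctx

    @cached_property
    def _resolved_metric_attributes(self) -> dict:
        # Table-level identifiers are part of the dict from the start;
        # whitelisted caller keys are merged once and cached.
        attrs = {
            "OpenHouse.Database": self._ctx.database,
            "OpenHouse.Table": self._ctx.table,
        }
        for key in self._ctx.metric_attribute_keys:
            if key in self._ctx.execution_context:
                attrs[key] = self._ctx.execution_context[key]
        return attrs
```

Keys left off the whitelist never reach a metric, which keeps cardinality in the caller's hands.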
@ShreyeshArangath ShreyeshArangath marked this pull request as ready for review May 14, 2026 18:47
```python
    unit="s",
    description="Time spent loading the Iceberg table from the catalog.",
)
_load_table_attempts = _meter.create_counter(
```
Technically we can derive successful and failed table loads from this, but to me it feels more intuitive, and easier to alert on, to have metrics for load_table_success and load_table_failure instead of attempts. WDYT?

Same comment for file planning.

```python
    unit="1",
    description="Errors raised while reading a record batch.",
)
```


How about timing for the transformation when table transformer is provided too?

…form timing

- Replace LoadTable/PlanFiles Attempts counters with explicit Success and
  Failure counters so alerting can target final outcome directly without
  deriving from durations.
- Add a TransformTime histogram emitted per-batch from inside the existing
  _timed_transform generator so transformer execution is observable when a
  table transformer is configured.