lib: fluent-otel-proto: update to v0.12 (#11601)
Conversation
Signed-off-by: Eduardo Silva <eduardo@chronosphere.io>
Note: Reviews paused. This branch is under active development; to avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review.
📝 Walkthrough

Large addition of the CProfiles model: encode/decode support (OTLP + MsgPack + text) and variant helpers with strict dictionary/index validation; many lifecycle helpers and tests added; protobuf schemas updated for string-index fields; CI and CMake/tooling tweaks applied.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Test
    participant Decoder as cprof_decode_opentelemetry
    participant Dict as DictionaryTables
    participant Variant as VariantHelper
    participant StringTable as StringTable
    Test->>Decoder: cprof_decode_opentelemetry_create(buffer)
    Decoder->>Dict: check dictionary presence & table pointers
    alt dictionary missing or table NULL when index used
        Dict-->>Decoder: INVALID_ARGUMENT
        Decoder-->>Test: return INVALID_ARGUMENT_ERROR
    else dictionary present and tables valid
        Decoder->>Decoder: parse resources/scopes/profiles/samples
        Decoder->>Dict: validate indices (stack, location, function, attribute, link)
        alt index out-of-range or NULL mapping
            Dict-->>Decoder: INVALID_ARGUMENT
            Decoder-->>Test: return INVALID_ARGUMENT_ERROR
        else indices valid
            Decoder->>Variant: clone_variant(any_value, string_table, out_variant)
            Variant->>StringTable: resolve string_value_strindex / key_strindex
            alt resolution fails
                StringTable-->>Variant: NULL -> INVALID_ARGUMENT
                Variant-->>Decoder: return error
            else resolved
                Variant-->>Decoder: success + variant out-param
                Decoder-->>Test: return decoded cprof (offset set)
            end
        end
    end
```
Estimated code review effort: 🎯 5 (Critical) | ⏱️ ~120 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: e9915714e9
```c
if (input_sample->stack_index >= 0) {
    if (dictionary == NULL ||
        dictionary->stack_table == NULL ||
        (size_t) input_sample->stack_index >= dictionary->n_stack_table) {
        return CPROF_DECODE_OPENTELEMETRY_INVALID_ARGUMENT_ERROR;
```
Allow zero sentinel sample refs without a dictionary
When service_request->dictionary is omitted, Sample.stack_index and link_index still unpack as 0 because they are plain proto3 ints. The OTLP profiles schema documents 0 as the null / not-set sentinel for dictionary references (profiles.pb-c.h lines 47-49 and 403-404), but this new >= 0 check now turns that default into INVALID_ARGUMENT. In practice, any profile that legitimately has no stack/link table—or any sender that simply leaves those refs unset—will now fail to decode before its values are read. Please treat 0 as "unset" here (and in the analogous link_index check below) instead of requiring a dictionary entry.
Actionable comments posted: 3
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
lib/cprofiles/src/cprof_decode_opentelemetry.c (1)
368-369: ⚠️ Potential issue | 🟠 Major — Reject profile-level attribute references when `dictionary` is absent.

`Profile.attribute_indices` are still only resolved inside the outer `if (dictionary != NULL)` block. If a payload carries profile-level attribute references but omits `dictionary`, decode now succeeds and silently drops them, while `Sample.attribute_indices` correctly fail. This should return `CPROF_DECODE_OPENTELEMETRY_INVALID_ARGUMENT_ERROR` too.

🛡️ Suggested guard

```diff
 static int decode_profile_entry(struct cprof_profile *profile,
                                 Opentelemetry__Proto__Profiles__V1development__Profile *input_profile,
                                 Opentelemetry__Proto__Profiles__V1development__ProfilesDictionary *dictionary)
 {
+    if (dictionary == NULL && input_profile->n_attribute_indices > 0) {
+        return CPROF_DECODE_OPENTELEMETRY_INVALID_ARGUMENT_ERROR;
+    }
+
     /* Copy dictionary tables into profile when dictionary is present */
     if (dictionary != NULL) {
```

Also applies to: 531-542
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 3bc98562-2c5d-48a8-8445-65fcb17999c8
📒 Files selected for processing (14)

- lib/cprofiles/.github/workflows/packages.yaml
- lib/cprofiles/CMakeLists.txt
- lib/cprofiles/src/cprof_decode_opentelemetry.c
- lib/cprofiles/src/cprof_encode_text.c
- lib/cprofiles/src/cprof_opentelemetry_variant_helpers.c
- lib/cprofiles/tests/opentelemetry_transcoder.c
- lib/cprofiles/tests/text_encoder.c
- lib/fluent-otel-proto/proto_c/opentelemetry/proto/common/v1/common.pb-c.c
- lib/fluent-otel-proto/proto_c/opentelemetry/proto/common/v1/common.pb-c.h
- lib/fluent-otel-proto/proto_c/opentelemetry/proto/logs/v1/logs.pb-c.h
- lib/fluent-otel-proto/proto_c/opentelemetry/proto/profiles/v1development/profiles.pb-c.c
- lib/fluent-otel-proto/proto_c/opentelemetry/proto/profiles/v1development/profiles.pb-c.h
- lib/fluent-otel-proto/proto_c/protobuf-c/protobuf-c.c
- lib/fluent-otel-proto/proto_c/protobuf-c/protobuf-c.h
💤 Files with no reviewable changes (1)
- lib/fluent-otel-proto/proto_c/opentelemetry/proto/logs/v1/logs.pb-c.h
```c
if (source->key != NULL && source->key[0] != '\0') {
    resolved_key = source->key;
}
else if (string_table != NULL &&
         source->key_strindex >= 0 &&
         (size_t) source->key_strindex < string_table_len &&
         string_table[source->key_strindex] != NULL) {
    resolved_key = string_table[source->key_strindex];
}
else if (source->key != NULL) {
    resolved_key = source->key;
}
else {
    return CPROF_DECODE_OPENTELEMETRY_INVALID_ARGUMENT_ERROR;
}
```
Don't downgrade an unresolved key_strindex to an empty key.
Opentelemetry__Proto__Common__V1__KeyValue__INIT leaves source->key as protobuf_c_empty_string, so the else if (source->key != NULL) fallback accepts unresolved dictionary-backed keys as "" instead of failing. That bypasses the new invalid-index handling and lets malformed attributes decode under an empty key.
🛠️ Minimal fix

```diff
-    else if (source->key != NULL) {
-        resolved_key = source->key;
-    }
-    else {
+    else {
         return CPROF_DECODE_OPENTELEMETRY_INVALID_ARGUMENT_ERROR;
     }
```
```c
cprof_sample_type_str_create(profile, "cpu", "nanoseconds",
                             CPROF_AGGREGATION_TEMPORALITY_CUMULATIVE);
```
Check sample-type creation result before continuing fixture setup.
Line 733 currently ignores failure from cprof_sample_type_str_create(...), which can produce an invalid fixture and misleading test outcomes.
Suggested fix

```diff
-    cprof_sample_type_str_create(profile, "cpu", "nanoseconds",
-                                 CPROF_AGGREGATION_TEMPORALITY_CUMULATIVE);
+    if (cprof_sample_type_str_create(profile, "cpu", "nanoseconds",
+                                     CPROF_AGGREGATION_TEMPORALITY_CUMULATIVE) != 0) {
+        cprof_profile_destroy(profile);
+        cprof_scope_profiles_destroy(scope_profiles);
+        cprof_resource_profiles_destroy(resource_profiles);
+        cprof_destroy(cprof);
+        return NULL;
+    }
```
```c
if (result != CPROF_DECODE_OPENTELEMETRY_SUCCESS || decoded_context == NULL) {
    return;
}
```
Tighten cleanup on decode failure path.
If decode fails after partial allocation, returning immediately can leak decoded_context.
Suggested fix

```diff
 if (result != CPROF_DECODE_OPENTELEMETRY_SUCCESS || decoded_context == NULL) {
+    if (decoded_context != NULL) {
+        cprof_decode_opentelemetry_destroy(decoded_context);
+    }
     return;
 }
```
Actionable comments posted: 5
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
lib/cprofiles/.github/workflows/build.yaml (1)
119-140: ⚠️ Potential issue | 🟠 Major — The ARM64 `clang` leg is not actually testing clang.

Line 134 exports `CC` from `${{ env.compiler }}`, but this step-level `env` block only defines `CC`, not `compiler`. This causes the export to set `CC` to an empty string, overwriting the environment variable. Combined with the missing `clang` package installation, the clang matrix entry will not use clang. Install `clang` for that branch and use `${{ matrix.compiler }}` instead.

Suggested fix
```diff
 apt-get update && \
 apt-get install -y --no-install-recommends \
     build-essential \
     file \
     make \
     python3 \
     python3-pip
+if [ "${{ matrix.compiler }}" = "clang" ]; then
+    apt-get install -y --no-install-recommends clang
+fi
 python3 -m pip install --user --upgrade "cmake>=3.20,<4"
 export PATH="$(python3 -m site --user-base)/bin:$PATH"
-export CC=${{ env.compiler }}
+export CC=${{ matrix.compiler }}
```
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1922e7f7-cc48-44ef-8907-8a26138db50f
⛔ Files ignored due to path filters (1)

- `lib/cmetrics/tests/prometheus_remote_write_payload.bin` is excluded by `!**/*.bin`

📒 Files selected for processing (10)

- lib/cmetrics/.github/workflows/packages.yaml
- lib/cmetrics/CMakeLists.txt
- lib/cmetrics/include/cmetrics/cmt_compat.h
- lib/cmetrics/src/cmt_atomic_msvc.c
- lib/cmetrics/src/cmt_cat.c
- lib/cmetrics/src/cmt_decode_opentelemetry.c
- lib/cmetrics/tests/cat.c
- lib/cprofiles/.github/workflows/build.yaml
- lib/cprofiles/CMakeLists.txt
- lib/cprofiles/src/cprof_decode_opentelemetry.c
💤 Files with no reviewable changes (2)
- lib/cmetrics/tests/cat.c
- lib/cmetrics/src/cmt_cat.c
✅ Files skipped from review due to trivial changes (4)
- lib/cmetrics/include/cmetrics/cmt_compat.h
- lib/cmetrics/.github/workflows/packages.yaml
- lib/cmetrics/src/cmt_atomic_msvc.c
- lib/cmetrics/CMakeLists.txt
```c
else if (source->value_case == OPENTELEMETRY__PROTO__COMMON__V1__ANY_VALUE__VALUE_STRING_VALUE_STRINDEX) {
    result_instance = cfl_variant_create_from_string("");
}
```
Empty-string fallback will merge distinct label and metadata values.
This decoder has no string-table context, so coercing VALUE_STRING_VALUE_STRINDEX to "" silently changes the original value. For labels that can collapse distinct series into the same identity; for metadata it hides the actual payload. Please fail with CMT_DECODE_OPENTELEMETRY_INVALID_ARGUMENT_ERROR until real resolution is available.
Also applies to: 645-648
```c
if (source != NULL &&
    source->value_case == OPENTELEMETRY__PROTO__COMMON__V1__ANY_VALUE__VALUE_STRING_VALUE_STRINDEX) {
    return CMT_DECODE_OPENTELEMETRY_SUCCESS;
}
```
Don't drop string_value_strindex entries from arrays or kvlists.
These branches return success without inserting anything, so arrays lose elements and kvlists lose keys whenever a VALUE_STRING_VALUE_STRINDEX value appears. Either append the placeholder variant or reject the payload, but silently succeeding corrupts the decoded structure.
Also applies to: 207-210
```cmake
set(src
    cprofiles.c
    cprof_attribute_unit.c
    cprof_function.c
    cprof_instrumentation_scope.c
    cprof_line.c
    cprof_link.c
    cprof_location.c
    cprof_mapping.c
    cprof_profile.c
    cprof_resource_profiles.c
    cprof_resource.c
    cprof_sample.c
    cprof_scope_profiles.c
    cprof_decode_opentelemetry.c
    cprof_encode_opentelemetry.c
    cprof_encode_text.c
    cprof_encode_msgpack.c
    cprof_decode_msgpack.c
    cprof_mpack_utils.c
```
Use the src/ prefix for this source list.
CMake resolves these paths relative to lib/cprofiles/, so Line 28 is already failing in CI because cprofiles.c and the rest live under lib/cprofiles/src/.
Suggested fix

```diff
 set(src
-    cprofiles.c
-    cprof_attribute_unit.c
-    cprof_function.c
-    cprof_instrumentation_scope.c
-    cprof_line.c
-    cprof_link.c
-    cprof_location.c
-    cprof_mapping.c
-    cprof_profile.c
-    cprof_resource_profiles.c
-    cprof_resource.c
-    cprof_sample.c
-    cprof_scope_profiles.c
-    cprof_decode_opentelemetry.c
-    cprof_encode_opentelemetry.c
-    cprof_encode_text.c
-    cprof_encode_msgpack.c
-    cprof_decode_msgpack.c
-    cprof_mpack_utils.c
+    src/cprofiles.c
+    src/cprof_attribute_unit.c
+    src/cprof_function.c
+    src/cprof_instrumentation_scope.c
+    src/cprof_line.c
+    src/cprof_link.c
+    src/cprof_location.c
+    src/cprof_mapping.c
+    src/cprof_profile.c
+    src/cprof_resource_profiles.c
+    src/cprof_resource.c
+    src/cprof_sample.c
+    src/cprof_scope_profiles.c
+    src/cprof_decode_opentelemetry.c
+    src/cprof_encode_opentelemetry.c
+    src/cprof_encode_text.c
+    src/cprof_encode_msgpack.c
+    src/cprof_decode_msgpack.c
+    src/cprof_mpack_utils.c
 )
```

Also applies to: 27-28
```diff
 result = clone_variant(&indexed_attribute_value,
                        indexed_attribute_entry->value,
                        dictionary->string_table,
                        dictionary->n_string_table);

-if (indexed_attribute_value == NULL) {
-    return CPROF_DECODE_OPENTELEMETRY_ALLOCATION_ERROR;
+if (result != CPROF_DECODE_OPENTELEMETRY_SUCCESS) {
+    return result;
 }
```
Profiles with indexed attributes still pass when the dictionary is absent.
This entire path is skipped by the outer if (dictionary != NULL), so a payload with n_attribute_indices > 0 and no dictionary now decodes successfully but drops all profile-level attributes. Mirror the sample-path validation and reject that case before entering the dictionary-only section.
Suggested guard

```diff
 static int decode_profile_entry(struct cprof_profile *profile,
                                 Opentelemetry__Proto__Profiles__V1development__Profile *input_profile,
                                 Opentelemetry__Proto__Profiles__V1development__ProfilesDictionary *dictionary)
 {
+    if (dictionary == NULL &&
+        input_profile->n_attribute_indices > 0) {
+        return CPROF_DECODE_OPENTELEMETRY_INVALID_ARGUMENT_ERROR;
+    }
+
     /* Copy dictionary tables into profile when dictionary is present */
     if (dictionary != NULL) {
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```c
static int decode_profile_entry(struct cprof_profile *profile,
                                Opentelemetry__Proto__Profiles__V1development__Profile *input_profile,
                                Opentelemetry__Proto__Profiles__V1development__ProfilesDictionary *dictionary)
{
    if (dictionary == NULL &&
        input_profile->n_attribute_indices > 0) {
        return CPROF_DECODE_OPENTELEMETRY_INVALID_ARGUMENT_ERROR;
    }

    /* Copy dictionary tables into profile when dictionary is present */
    if (dictionary != NULL) {
        result = clone_variant(&indexed_attribute_value,
                               indexed_attribute_entry->value,
                               dictionary->string_table,
                               dictionary->n_string_table);

        if (result != CPROF_DECODE_OPENTELEMETRY_SUCCESS) {
            return result;
        }
    }
```
```c
if (offset != NULL) {
    *offset = in_size;
}
```
Guard offset before unpacking too.
The new null check only covers the success path. The unpack call above still evaluates in_size - *offset and &in_buf[*offset], so a NULL offset still crashes before this block, and an out-of-range offset still underflows the length calculation.
Suggested fix

```diff
 int cprof_decode_opentelemetry_create(struct cprof **result_context,
                                       unsigned char *in_buf, size_t in_size,
                                       size_t *offset)
 {
     Opentelemetry__Proto__Collector__Profiles__V1development__ExportProfilesServiceRequest *service_request;
     struct cprof *context;
     int result;
+    size_t unpack_offset;
@@
-    service_request = opentelemetry__proto__collector__profiles__v1development__export_profiles_service_request__unpack(
+    unpack_offset = offset != NULL ? *offset : 0;
+    if (unpack_offset > in_size) {
+        return CPROF_DECODE_OPENTELEMETRY_INVALID_ARGUMENT_ERROR;
+    }
+
+    service_request = opentelemetry__proto__collector__profiles__v1development__export_profiles_service_request__unpack(
         NULL,
-        in_size - *offset,
-        &in_buf[*offset]);
+        in_size - unpack_offset,
+        &in_buf[unpack_offset]);
```
Signed-off-by: Eduardo Silva <eduardo@chronosphere.io>
Force-pushed from 8fbb2fe to b23be53.
Actionable comments posted: 16
🧹 Nitpick comments (2)
lib/cprofiles/cprof_instrumentation_scope.c (1)
27-29: Minor formatting inconsistency: opening brace placement. The opening brace on line 29 appears after a blank line following the parameter list. This differs from the project's typical style, where the brace follows immediately.
💅 Suggested formatting fix
 struct cfl_kvlist *attributes, uint32_t dropped_attributes_count)
-
-{
+{
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/cprofiles/cprof_instrumentation_scope.c` around lines 27 - 29, Move the opening brace so it sits immediately after the function parameter list instead of on its own blank line: locate the function whose signature ends with "uint32_t dropped_attributes_count)" in cprof_instrumentation_scope.c and change the brace placement from on the following blank line to directly after the closing parenthesis, matching the project's typical brace style.
lib/cprofiles/cprof_attribute_unit.c (1)
38-47: Note: This destroy function has different semantics than cprof_line_destroy, cprof_function_destroy, and cprof_link_destroy. This implementation safely unlinks from the parent list before freeing, while the other entity destroyers rely on the parent to handle list iteration. Both approaches are valid, but the inconsistency could cause confusion. Consider aligning the pattern across all entity modules for maintainability.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/cprofiles/cprof_attribute_unit.c` around lines 38 - 47, The destroyer cprof_attribute_unit_destroy currently unlinks its list node (using cfl_list_entry_is_orphan and cfl_list_del) before freeing, which differs from cprof_line_destroy, cprof_function_destroy, and cprof_link_destroy; make the semantics consistent by removing the unlink logic from cprof_attribute_unit_destroy (leave only the NULL check and free(instance)) so that parent list iteration is responsible for removing entries, or conversely update the other destroyers to unlink themselves—pick one consistent pattern and apply it across cprof_attribute_unit_destroy and the other destroy functions (cprof_line_destroy, cprof_function_destroy, cprof_link_destroy).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@lib/cprofiles/cprof_decode_opentelemetry.c`:
- Around line 784-791: Validate the public decode entrypoint inputs before
dereferencing: check that offset, in_buf, and result_context are non-NULL and
that *offset <= in_size (compute remaining = in_size - *offset in a safe
unsigned/size_t-safe way) before calling
opentelemetry__proto__collector__profiles__v1development__export_profiles_service_request__unpack
and before writing to *result_context; return an error or NULL early if any
check fails and only set *result_context = NULL after validating pointers so you
never read/write through a NULL or perform an OOB read.
- Around line 325-343: The decoder currently accepts any non-empty
input_link->trace_id/span_id and truncates using minimum_size() and memcpy into
link->trace_id/span_id; change this to require exact OTLP lengths: validate
input_link->trace_id.len == 16 and input_link->span_id.len == 8 and reject the
link (return an error or mark invalid) when lengths differ instead of
copying/truncating; replace the minimum_size()/memcpy branches with explicit
length checks referencing input_link, link, trace_id, span_id, and remove the
silent normalization via minimum_size().
- Around line 228-233: The loop adding dictionary-backed attribute indices casts
potentially negative or out-of-range integers to uint64_t and must validate them
first: for each input_mapping->attribute_indices[index] (and likewise the
similar loop that adds attributes to locations), check the raw integer is >= 0
and < the dictionary size (the appropriate count for the attribute dictionary
used by the OTLP input) before casting; if the check fails, return
CPROF_DECODE_OPENTELEMETRY_INVALID_ARGUMENT_ERROR. Apply this validation around
the calls to cprof_mapping_add_attribute (and the analogous
cprof_location_add_attribute) so malformed OTLP cannot insert wrapped indices
into the in-memory model.
- Around line 660-682: On the failure path after decode_instrumentation_scope,
unlink the newly created profiles node from the parent list before destroying
it: locate resource_profiles->scope_profiles, if it equals profiles set
resource_profiles->scope_profiles = profiles->next, otherwise walk the list to
find the previous node and set previous->next = profiles->next, then call
cprof_scope_profiles_destroy(profiles) and return result; this ensures
cprof_resource_profiles_destroy() won't see a dangling pointer.
In `@lib/cprofiles/cprof_encode_msgpack.c`:
- Around line 748-772: The functions encode_cprof_instrumentation_scope and
encode_cprof_resource each call mpack_start_map but return without closing the
map; add a matching mpack_finish_map(&context->writer) before every return
(including error paths) so maps are completed and check
mpack_writer_error(&context->writer) after finishing to propagate writer errors.
In cprof_encode_msgpack_create, check the result of mpack_writer_destroy and
convert any non-MPACK_OK into a non-success CPROF_ENCODE_MSGPACK_* return code
(do not just log), and ensure you only build and return the output buffer when
the writer reports success (use mpack_writer_error/MPACK_OK to gate success).
In `@lib/cprofiles/cprof_encode_opentelemetry.c`:
- Around line 1063-1086: The dictionary-backed attributes are not being encoded:
update dict_build_mapping() and dict_build_location() to set their
attribute_indices to indices into dictionary->attribute_table (not leave zero),
implement build_profiles_dictionary() so it constructs
dictionary->attribute_table from cprof attribute keys/values, and change
pack_cprof_sample() to translate sample.attribute_indices into the corresponding
indices into dictionary->attribute_table instead of copying them verbatim;
ensure all references to attribute_indices (in dict_build_mapping,
dict_build_location, build_profiles_dictionary, and pack_cprof_sample)
consistently use the dictionary lookup/mapping so the decoder can resolve
non-empty attribute_indices.
- Around line 2198-2238: pack_cprof_profile currently only serializes
samples/time/period, so fields read by the decoder (profile_id,
dropped_attributes_count, attribute_indices, original_payload_format,
original_payload) are never emitted; update pack_cprof_profile (the block that
creates otlp_profile via initialize_profile and calls
pack_cprof_value_type/pack_cprof_sample) to also serialize those fields before
assigning *output_instance: set otlp_profile->profile_id from input_instance,
write otlp_profile->dropped_attributes_count (and use the existing helper or add
a small loop/pack function to populate otlp_profile->attribute_indices from
input_instance->attribute_indices), and emit original_payload_format and
original_payload (copy/serialize the buffer or call the encoder helper used
elsewhere for payloads); ensure you use destroy_profile on error like the other
pack_* calls and return the pack result on failure.
In `@lib/cprofiles/cprof_mpack_utils.c`:
- Around line 43-60: In cprof_mpack_consume_binary_or_nil_tag, avoid
dereferencing output_buffer on the nil path; before setting *output_buffer =
NULL after calling cprof_mpack_consume_nil_tag, check that output_buffer is
non-NULL (mirror the fix in cprof_mpack_consume_string_or_nil_tag) so you only
assign when the caller provided a buffer pointer; leave error handling and
return value logic unchanged.
- Around line 24-41: The nil-handling branch in
cprof_mpack_consume_string_or_nil_tag assigns to *output_buffer without checking
output_buffer; add a NULL check before that assignment: after calling
cprof_mpack_consume_nil_tag(reader) verify output_buffer is non-NULL and only
then set *output_buffer = NULL, otherwise set/return an appropriate error (e.g.
CPROF_MPACK_INVALID_ARG_ERROR) so the function mirrors the validation performed
by cprof_mpack_consume_string_tag.
- Around line 275-329: Add the same length validation used in
cprof_mpack_consume_string_tag to cprof_mpack_consume_binary_tag: after
computing string_length = mpack_tag_bin_length(&tag) in
cprof_mpack_consume_binary_tag, check that string_length is <=
CPROF_MPACK_MAX_STRING_LENGTH (and optionally non-negative if needed) and if it
exceeds the limit return the appropriate error (e.g.,
CPROF_MPACK_UNEXPECTED_DATA_TYPE_ERROR) without allocating; this prevents huge
allocations when a malformed tag claims an excessive binary length.
In `@lib/cprofiles/cprof_profile.c`:
- Around line 228-237: The function cprof_profile_string_add currently returns
size_t but uses -1 as an error which is invalid for unsigned types; change its
error-handling by either (A) switching the return type to a signed 64-bit type
(e.g., int64_t or ssize_t) and update the function prototype and all callers to
check for negative error values, or (B) keep size_t but use SIZE_MAX as the
documented sentinel error value (replace return -1 with return SIZE_MAX and
update the header docs and all callers to test against SIZE_MAX). Apply the
chosen approach consistently in cprof_profile_string_add (and its declaration)
and update any callers to use the new sentinel/checked type.
In `@lib/cprofiles/cprof_resource_profiles.c`:
- Around line 28-33: The code assigns instance->schema_url with
cfl_sds_create(schema_url) but doesn't check for allocation failure, so modify
the creation sequence in the function that allocates a cprof_resource_profiles:
call cfl_sds_create(schema_url) into a temporary pointer, if it returns NULL
free the partially constructed instance (and any other allocated fields) and
return NULL; only set instance->schema_url and call
cfl_list_init(&instance->scope_profiles) after successful duplication. Ensure
you reference the cprof_resource_profiles instance, the schema_url pointer, and
the cfl_sds_create and cfl_list_init calls when making the change.
In `@lib/cprofiles/cprof_sample.c`:
- Around line 269-279: In cprof_sample_type_destroy_all, the loop uses
cfl_list_foreach_safe to iterate sample types but fails to remove each node
before destroying it, mirroring the bug in cprof_sample_destroy_all; update the
loop in cprof_sample_type_destroy_all to call cfl_list_del(&sample_type->_head)
(or equivalent) for each struct cprof_value_type before calling
cprof_sample_type_destroy(sample_type) so nodes are removed from
profile->sample_type and no dangling pointers remain.
- Around line 203-213: The cprof_sample_destroy_all function removes each sample
without unlinking it from profile->samples, leaving dangling list pointers;
before calling cprof_sample_destroy(sample) you must call
cfl_list_del(&sample->_head) to remove the sample node from the list (same
approach as in cprof_profile_destroy). Update cprof_sample_destroy_all to
iterate the list and call cfl_list_del(&sample->_head) for each sample prior to
cprof_sample_destroy(sample), referencing the functions/fields
cprof_sample_destroy_all, profile->samples, sample->_head, cfl_list_del, and
cprof_sample_destroy.
In `@lib/cprofiles/cprof_scope_profiles.c`:
- Around line 31-38: The call to cfl_sds_create(schema_url) in the
cprof_scope_profiles allocation path can fail and is currently unchecked; if it
returns NULL you must not leave a partially-initialized cprof_scope_profiles in
the list. After calling cfl_sds_create in the function that creates/initializes
the instance (where you call cfl_sds_create(schema_url),
cfl_list_init(&instance->profiles) and cfl_list_add(&instance->_head,
&resource_profiles->scope_profiles)), check the returned pointer and on failure
free the instance (and any other allocated fields), do NOT call cfl_list_add,
and return NULL (or propagate the error) so callers never see a non-NULL
instance with instance->schema_url == NULL.
In `@lib/cprofiles/cprofiles.c`:
- Around line 1-65: The build fails because headers generated from templates
(cprof_info.h.in and cprof_version.h.in) are not produced; update the
lib/cprofiles/CMakeLists.txt to call configure_file() for those two .in
templates (e.g., configure_file(cprof_info.h.in cprof_info.h `@ONLY`) and
configure_file(cprof_version.h.in cprof_version.h `@ONLY`)) so the generated
headers exist before compiling cprofiles.c, and make sure the directory
containing the generated headers (CMAKE_CURRENT_BINARY_DIR or the target's
binary dir) is added to the include path for the target that builds cprofiles
(use target_include_directories or include_directories as appropriate) so
includes like `#include <cprofiles/cprof_info.h>` and `#include <cprofiles/cprof_version.h>` resolve.
---
Nitpick comments:
In `@lib/cprofiles/cprof_attribute_unit.c`:
- Around line 38-47: The destroyer cprof_attribute_unit_destroy currently
unlinks its list node (using cfl_list_entry_is_orphan and cfl_list_del) before
freeing, which differs from cprof_line_destroy, cprof_function_destroy, and
cprof_link_destroy; make the semantics consistent by removing the unlink logic
from cprof_attribute_unit_destroy (leave only the NULL check and free(instance))
so that parent list iteration is responsible for removing entries, or conversely
update the other destroyers to unlink themselves—pick one consistent pattern and
apply it across cprof_attribute_unit_destroy and the other destroy functions
(cprof_line_destroy, cprof_function_destroy, cprof_link_destroy).
In `@lib/cprofiles/cprof_instrumentation_scope.c`:
- Around line 27-29: Move the opening brace so it sits immediately after the
function parameter list instead of on its own blank line: locate the function
whose signature ends with "uint32_t dropped_attributes_count)" in
cprof_instrumentation_scope.c and change the brace placement from on the
following blank line to directly after the closing parenthesis, matching the
project's typical brace style.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 8e3c25f2-5e98-4089-b212-c31f6ea3f088
📒 Files selected for processing (20)
lib/cprofiles/cprof_attribute_unit.c
lib/cprofiles/cprof_decode_msgpack.c
lib/cprofiles/cprof_decode_opentelemetry.c
lib/cprofiles/cprof_encode_msgpack.c
lib/cprofiles/cprof_encode_opentelemetry.c
lib/cprofiles/cprof_encode_text.c
lib/cprofiles/cprof_function.c
lib/cprofiles/cprof_instrumentation_scope.c
lib/cprofiles/cprof_line.c
lib/cprofiles/cprof_link.c
lib/cprofiles/cprof_location.c
lib/cprofiles/cprof_mapping.c
lib/cprofiles/cprof_mpack_utils.c
lib/cprofiles/cprof_opentelemetry_variant_helpers.c
lib/cprofiles/cprof_profile.c
lib/cprofiles/cprof_resource.c
lib/cprofiles/cprof_resource_profiles.c
lib/cprofiles/cprof_sample.c
lib/cprofiles/cprof_scope_profiles.c
lib/cprofiles/cprofiles.c
for (index = 0; index < input_mapping->n_attribute_indices; index++) {
    result = cprof_mapping_add_attribute(mapping,
                                         (uint64_t)input_mapping->attribute_indices[index]);
    if (result != 0) {
        return CPROF_DECODE_OPENTELEMETRY_INVALID_ARGUMENT_ERROR;
    }
}
Validate dictionary-backed attribute indices in mappings and locations.
Unlike sample/profile attributes, these loops never reject negative or out-of-range indices before casting to uint64_t. Malformed OTLP input can therefore stash wrapped indices in the in-memory model instead of failing fast.
Also applies to: 288-293
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@lib/cprofiles/cprof_decode_opentelemetry.c` around lines 228 - 233, The loop
adding dictionary-backed attribute indices casts potentially negative or
out-of-range integers to uint64_t and must validate them first: for each
input_mapping->attribute_indices[index] (and likewise the similar loop that adds
attributes to locations), check the raw integer is >= 0 and < the dictionary
size (the appropriate count for the attribute dictionary used by the OTLP input)
before casting; if the check fails, return
CPROF_DECODE_OPENTELEMETRY_INVALID_ARGUMENT_ERROR. Apply this validation around
the calls to cprof_mapping_add_attribute (and the analogous
cprof_location_add_attribute) so malformed OTLP cannot insert wrapped indices
into the in-memory model.
Signed-off-by: Eduardo Silva <eduardo@chronosphere.io>
CI is failing on specific jobs due to socket issues.
Update fluent-otel-proto to otel-proto 1.10 and cprofiles
Test suite results
env FLUENT_BIT_BINARY=/home/edsiper/c/fluent-bit-worktree-otel-proto-1.10/build/bin/fluent-bit
./tests/fluent-bit-test-suite/.venv/bin/pytest -q tests/fluent-bit-test-suite
Result:
Binary under test:
Warnings:
Notable point: this run includes the newer scenarios as well, including in_forward, in_mqtt, expanded
in_opentelemetry OAuth/gRPC coverage, and the full out_opentelemetry set.
Fluent Bit is licensed under Apache 2.0, by submitting this pull request I understand that this code will be released under the terms of that license.
Summary by CodeRabbit