19 changes: 19 additions & 0 deletions bootstrap/sql/migrations/native/2.0.2/mysql/schemaChanges.sql
@@ -0,0 +1,19 @@
-- Restore a composite index on profiler_data_time_series(entityFQNHash, extension, timestamp).
--
-- Root cause: the 1.9.9 postDataMigrationSQLScript.sql created several temporary indexes for
-- the data migration and then dropped ALL of them at the end, including the only index that
-- covered entityFQNHash. After that migration the table retains only the unique constraint
-- (entityFQNHash, extension, operation, timestamp) where `operation` sits between `extension`
-- and `timestamp`. Queries of the form
--
-- SELECT entityFQNHash, MAX(timestamp) FROM profiler_data_time_series
-- WHERE entityFQNHash IN (...) AND extension = 'table.columnProfile'
-- GROUP BY entityFQNHash
--
-- cannot use that index efficiently for MAX(timestamp) because `operation` (nullable) breaks
-- the prefix. On a large table (millions of profile rows) this causes a full table scan and
-- 100+ second response times on the columns API when `fields=profile` is requested.
--
-- The new index covers the exact predicate pattern used by getLatestExtensionsBatch().
CREATE INDEX IF NOT EXISTS idx_pdts_fqnhash_ext_ts
ON profiler_data_time_series (entityFQNHash, extension, timestamp);
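
Once the syntax issue flagged below is resolved, a quick way to sanity-check the fix — a sketch, not part of the migration; the IN-list hashes are placeholders and the plan output depends on your data volume:

-- The EXPLAIN output's key column should show idx_pdts_fqnhash_ext_ts
-- (access type range/ref) instead of a full table scan (type ALL).
EXPLAIN
SELECT entityFQNHash, MAX(timestamp)
FROM profiler_data_time_series
WHERE entityFQNHash IN ('hash1', 'hash2')  -- placeholder values
  AND extension = 'table.columnProfile'
GROUP BY entityFQNHash;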
Comment on lines +18 to +19
🚨 Bug: MySQL migration uses unsupported IF NOT EXISTS for CREATE INDEX

The MySQL migration uses CREATE INDEX IF NOT EXISTS, but MySQL does not support IF NOT EXISTS on CREATE INDEX in any released version — the clause is MariaDB/PostgreSQL syntax. No other MySQL migration in the project uses this pattern: all 148 other CREATE INDEX statements in mysql/*.sql files use plain CREATE INDEX. Regardless of which 8.0.x patch level the project's docker-compose mysql:8.0 pin resolves to, this statement will fail with a syntax error and the migration will abort, blocking the upgrade.

PostgreSQL supports CREATE INDEX IF NOT EXISTS in all supported versions, so the Postgres file is fine.

Suggested fix:

-- Use a conditional check via stored procedure,
-- or match existing project pattern with plain CREATE INDEX
-- wrapped in a procedure that tolerates 'duplicate key' error:
SET @index_exists = (SELECT COUNT(1) FROM information_schema.STATISTICS
  WHERE TABLE_SCHEMA = DATABASE()
    AND TABLE_NAME = 'profiler_data_time_series'
    AND INDEX_NAME = 'idx_pdts_fqnhash_ext_ts');
SET @sql = IF(@index_exists = 0,
  'CREATE INDEX idx_pdts_fqnhash_ext_ts ON profiler_data_time_series (entityFQNHash, extension, timestamp)',
  'SELECT 1');
PREPARE stmt FROM @sql;
EXECUTE stmt;
DEALLOCATE PREPARE stmt;


Comment on lines +18 to +19

Copilot AI Apr 26, 2026

MySQL does not support CREATE INDEX IF NOT EXISTS syntax (unlike PostgreSQL). Existing MySQL migrations use a conditional information_schema.statistics check + prepared statement to keep index creation idempotent (e.g., bootstrap/sql/migrations/native/1.13.0/mysql/schemaChanges.sql:27-40). This statement will fail on MySQL during migration; please rewrite it using the established conditional pattern (or equivalent idempotent ALTER TABLE logic).

Suggested change
CREATE INDEX IF NOT EXISTS idx_pdts_fqnhash_ext_ts
ON profiler_data_time_series (entityFQNHash, extension, timestamp);
SET @sql = (
  SELECT IF(
    EXISTS(
      SELECT 1
      FROM information_schema.statistics
      WHERE table_schema = DATABASE()
        AND table_name = 'profiler_data_time_series'
        AND index_name = 'idx_pdts_fqnhash_ext_ts'
    ),
    'SELECT 1',
    'CREATE INDEX idx_pdts_fqnhash_ext_ts ON profiler_data_time_series (entityFQNHash, extension, timestamp)'
  )
);
PREPARE statement FROM @sql;
EXECUTE statement;
DEALLOCATE PREPARE statement;

23 changes: 23 additions & 0 deletions bootstrap/sql/migrations/native/2.0.2/postgres/schemaChanges.sql
@@ -0,0 +1,23 @@
-- Restore a composite index on profiler_data_time_series(entityFQNHash, extension, timestamp).
--
-- Root cause: the 1.9.9 schemaChanges.sql explicitly dropped the unique constraint
-- profiler_data_time_series_unique_hash_extension_ts (entityFQNHash, extension, operation, timestamp)
-- to allow changing the `operation` generated-column expression, but never recreated it.
-- After the 1.9.9 migration the table retains only
-- profiler_data_time_series_combined_id_ts (extension, timestamp)
-- which is useless for queries that filter by entityFQNHash.
--
-- The 1.9.9 postDataMigrationSQLScript.sql also created temporary indexes
-- (idx_pdts_entityFQNHash, idx_pdts_composite, etc.) during its bulk UPDATE pass and then
-- dropped them all, leaving no index on entityFQNHash.
--
-- Queries of the form
--
-- SELECT entityFQNHash, MAX(timestamp) FROM profiler_data_time_series
-- WHERE entityFQNHash IN (...) AND extension = 'table.columnProfile'
-- GROUP BY entityFQNHash
--
-- issued by getLatestExtensionsBatch() perform a full table scan without this index,
-- causing 100+ second response times on the columns API when `fields=profile` is requested.
CREATE INDEX IF NOT EXISTS idx_pdts_fqnhash_ext_ts
ON profiler_data_time_series (entityFQNHash, extension, timestamp);
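
For the Postgres side, a similar hedged sanity check — a sketch, not part of the migration; pg_indexes is the standard catalog view and the IN-list hashes are placeholders:

-- Confirm the index exists after the migration.
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'profiler_data_time_series'
  AND indexname = 'idx_pdts_fqnhash_ext_ts';

-- The hot query should now plan an index scan instead of a Seq Scan.
EXPLAIN
SELECT entityFQNHash, MAX(timestamp)
FROM profiler_data_time_series
WHERE entityFQNHash IN ('hash1', 'hash2')  -- placeholder values
  AND extension = 'table.columnProfile'
GROUP BY entityFQNHash;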
@@ -5900,4 +5900,138 @@
assertFalse(table.getTags().isEmpty(), "Table tags should not be empty");
}
}

  // ===================================================================
  // REGRESSION TEST - columns API with fields=profile (collate#3488)
  // ===================================================================

  @Test
  @Execution(ExecutionMode.SAME_THREAD)
  void test_getColumnsWithProfileField_correctnessAndNoBatchRegression(TestNamespace ns) {
    OpenMetadataClient client = SdkClients.adminClient();

    DatabaseService service = DatabaseServiceTestFactory.createPostgres(ns);
    DatabaseSchema schema = DatabaseSchemaTestFactory.createSimple(ns, service);

    CreateClassification createClassification =
        new CreateClassification()
            .withName(ns.prefix("profile_test_cls"))
            .withDescription("Classification for profile regression test");
    Classification cls = client.classifications().create(createClassification);

    CreateTag createTag =
        new CreateTag()
            .withName(ns.prefix("profile_test_tag"))
            .withDescription("Tag for profile regression test")
            .withClassification(cls.getName());
    Tag tag = client.tags().create(createTag);

    TagLabel tagLabel =
        new TagLabel()
            .withTagFQN(tag.getFullyQualifiedName())
            .withSource(TagLabel.TagSource.CLASSIFICATION);

    Column idCol = ColumnBuilder.of("id", "BIGINT").primaryKey().notNull().build();
    idCol.setTags(List.of(tagLabel));
    Column emailCol = ColumnBuilder.of("email", "VARCHAR").dataLength(255).build();
    emailCol.setTags(List.of(tagLabel));
    Column nameCol = ColumnBuilder.of("name", "VARCHAR").dataLength(255).build();

    CreateTable createRequest = createRequest(ns.prefix("profile_regression_table"), ns);
    createRequest.setDatabaseSchema(schema.getFullyQualifiedName());
    createRequest.setColumns(List.of(idCol, emailCol, nameCol));
    Table table = client.tables().create(createRequest);

Check failure on line 5943 in openmetadata-integration-tests/src/test/java/org/openmetadata/it/tests/TableResourceIT.java

View workflow job for this annotation

GitHub Actions / Test Report

TableResourceIT.test_getColumnsWithProfileField_correctnessAndNoBatchRegression(TestNamespace)

ApiException (500): java.sql.BatchUpdateException in TagUsageDAO.applyTagsBatchInternal — the batched INSERT INTO tag_usage ... ON CONFLICT (source, tagFQNHash, targetFQNHash) DO UPDATE ... was aborted with "ERROR: value too long for type character varying(256)", thrown while creating the test table with tagged columns (TableResourceIT.java:5943). The bound tagFQN, profile_test_cls__caf64e909edf497c878485a6e5bc0560__TableResourceIT__test_getColumnsWithProfileField_correctnessAndNoBatchRegression.profile_test_tag__caf64e909edf497c878485a6e5bc0560__TableResourceIT__test_getColumnsWithProfileField_correctnessAndNoBatchRegression, exceeds 256 characters, which appears to be what overflows the column. (Full statement and bind values elided; the raw output repeats the same BatchUpdateException.)
	at org.openmetadata.sdk.network.OpenMetadataHttpClient.handleErrorResponse(OpenMetadataHttpClient.java:339)
	at org.openmetadata.sdk.network.OpenMetadataHttpClient.handleResponse(OpenMetadataHttpClient.java:273)
	at org.openmetadata.sdk.network.OpenMetadataHttpClient.execute(OpenMetadataHttpClient.java:70)
	at org.openmetadata.sdk.network.OpenMetadataHttpClient.execute(OpenMetadataHttpClient.java:56)
	at org.openmetadata.sdk.services.dataassets.TableService.create(TableService.java:32)
	at org.openmetadata.it.tests.TableResourceIT.test_getColumnsWithProfileField_correctnessAndNoBatchRegression(TableResourceIT.java:5943)
	at java.base/java.lang.reflect.Method.invoke(Method.java:580)
	at java.base/java.util.concurrent.ForkJoinTask.doExec(ForkJoinTask.java:387)
	at java.base/java.util.concurrent.ForkJoinPool$WorkQueue.topLevelExec(ForkJoinPool.java:1312)
	at java.base/java.util.concurrent.ForkJoinPool.scan(ForkJoinPool.java:1843)
	at java.base/java.util.concurrent.ForkJoinPool.runWorker(ForkJoinPool.java:1808)
	at java.base/java.util.concurrent.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:188)

    Long timestamp = System.currentTimeMillis();
    ColumnProfile idProfile =
        new ColumnProfile()
            .withName("id")
            .withMin(1.0)
            .withMax(999.0)
            .withUniqueCount(100.0)
            .withTimestamp(timestamp);
    ColumnProfile emailProfile =
        new ColumnProfile()
            .withName("email")
            .withNullCount(5.0)
            .withNullProportion(0.05)
            .withTimestamp(timestamp);

    TableProfile tableProfile =
        new TableProfile().withRowCount(100.0).withColumnCount(3.0).withTimestamp(timestamp);

    CreateTableProfile createProfile =
        new CreateTableProfile()
            .withTableProfile(tableProfile)
            .withColumnProfile(List.of(idProfile, emailProfile));
    client.tables().updateTableProfile(table.getId(), createProfile);

    // Verify all four field combinations don't regress:
    // (a) fields=profile — profile data returned, no full-table-scan on profiler_data_time_series
    TableColumnList withProfile =
        assertTimeout(
            Duration.ofSeconds(30),
            () -> client.tables().getColumns(table.getId(), "profile"),
            "columns?fields=profile should complete within 30s");
Comment on lines +5969 to +5975

Copilot AI Apr 26, 2026

The test comment says it verifies "all four field combinations", but this test exercises three calls: profile; tags,customMetrics,extension,profile; and tags,profile. Please correct the comment (or add the missing 4th combination) so the test documentation matches what is actually asserted.

    assertEquals(3, withProfile.getData().size());
    Column returnedId =
        withProfile.getData().stream()
            .filter(c -> "id".equals(c.getName()))
            .findFirst()
            .orElse(null);
    Column returnedName =
        withProfile.getData().stream()
            .filter(c -> "name".equals(c.getName()))
            .findFirst()
            .orElse(null);
    assertNotNull(returnedId, "id column should be present");
    assertNotNull(returnedId.getProfile(), "id column should have profile data");
    assertEquals(1.0, returnedId.getProfile().getMin(), "id column min should match");
    assertEquals(999.0, returnedId.getProfile().getMax(), "id column max should match");
    assertNotNull(returnedName, "name column should be present");
    assertNull(returnedName.getProfile(), "name column has no profile, should be null");

    // (b) fields=tags,customMetrics,extension,profile — the exact production query
    TableColumnList withAllFields =
        assertTimeout(
            Duration.ofSeconds(30),
            () -> client.tables().getColumns(table.getId(), "tags,customMetrics,extension,profile"),
            "columns?fields=tags,customMetrics,extension,profile should complete within 30s");

    assertEquals(3, withAllFields.getData().size());

    Column idResult =
        withAllFields.getData().stream()
            .filter(c -> "id".equals(c.getName()))
            .findFirst()
            .orElse(null);
    assertNotNull(idResult, "id column must be present");
    assertNotNull(idResult.getProfile(), "id column must have profile");
    assertNotNull(idResult.getTags(), "id column must have tags");
    assertFalse(idResult.getTags().isEmpty(), "id column tags must not be empty");
    assertTrue(
        idResult.getTags().stream()
            .anyMatch(t -> tag.getFullyQualifiedName().equals(t.getTagFQN())),
        "id column should carry the test tag");

    // (c) fields=tags,profile — duplicate populateEntityFieldTags must not run twice
    TableColumnList withTagsAndProfile =
        assertTimeout(
            Duration.ofSeconds(30),
            () -> client.tables().getColumns(table.getId(), "tags,profile"),
            "columns?fields=tags,profile should complete within 30s");
Comment on lines +6018 to +6023

Copilot AI Apr 26, 2026

The comment claims this call validates that duplicate populateEntityFieldTags "must not run twice", but the assertions only check the correctness of the returned tags/profile (there is no assertion on the number of calls or queries). Please reword the comment to describe what is actually verified, or add an assertion/metric that specifically detects the duplicate-tag-population regression.
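
One shape such an assertion could take — a hypothetical, self-contained sketch; OpenMetadata exposes no such counter today, and every name below is invented purely for illustration:

import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical counting hook around the per-column tag lookup; only the
// shape of the assertion matters, not the names.
public class DuplicateTagPopulationSketch {
  static final AtomicInteger TAG_LOOKUPS = new AtomicInteger();

  static List<String> fetchColumnTags(String columnFqn) {
    TAG_LOOKUPS.incrementAndGet(); // count every tag lookup for the column
    return List.of(); // a real implementation would query tag_usage
  }

  public static void main(String[] args) {
    boolean fieldsContainTags = true; // simulating fields=tags,profile
    fetchColumnTags("table.id"); // tags branch populates once
    if (!fieldsContainTags) {
      fetchColumnTags("table.id"); // profile branch must skip re-population
    }
    if (TAG_LOOKUPS.get() != 1) {
      throw new AssertionError(
          "tag lookup ran " + TAG_LOOKUPS.get() + " times; expected exactly 1");
    }
  }
}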

    assertEquals(3, withTagsAndProfile.getData().size());
    Column idTagsProfile =
        withTagsAndProfile.getData().stream()
            .filter(c -> "id".equals(c.getName()))
            .findFirst()
            .orElse(null);
    assertNotNull(idTagsProfile);
    assertNotNull(idTagsProfile.getTags());
    assertFalse(
        idTagsProfile.getTags().isEmpty(), "Tags must be present even when profile requested");
    assertNotNull(idTagsProfile.getProfile(), "Profile must be present when profile requested");
  }
}
@@ -2891,8 +2891,21 @@ private ResultList<Column> getTableColumnsInternal(
    }

    if (fieldsParam != null && fieldsParam.contains("customMetrics")) {
      List<ExtensionRecord> allColumnMetricRecords =
          daoCollection
              .entityExtensionDAO()
              .getExtensions(table.getId(), CUSTOM_METRICS_EXTENSION + TABLE_COLUMN_EXTENSION);
      Map<String, List<CustomMetric>> metricsByColumn = new HashMap<>();
      for (ExtensionRecord record : allColumnMetricRecords) {
        CustomMetric metric = JsonUtils.readValue(record.extensionJson(), CustomMetric.class);
        if (metric != null && metric.getColumnName() != null) {
          metricsByColumn
              .computeIfAbsent(metric.getColumnName(), k -> new ArrayList<>())
              .add(metric);
        }
      }
      for (Column column : paginatedColumns) {
        column.setCustomMetrics(getCustomMetrics(table, column.getName()));
        column.setCustomMetrics(metricsByColumn.getOrDefault(column.getName(), new ArrayList<>()));
💡 Quality: Unnecessary ArrayList allocation per column without metrics

getOrDefault(column.getName(), new ArrayList<>()) allocates a new ArrayList for every column that has no custom metrics. Since the returned list is only set on the column and never mutated afterwards, an immutable empty list would avoid unnecessary allocations.

Suggested fix:

column.setCustomMetrics(
    metricsByColumn.getOrDefault(column.getName(), List.of()));


Copilot AI Apr 26, 2026

metricsByColumn.getOrDefault(column.getName(), new ArrayList<>()) allocates a new list for every column without metrics. Prefer Collections.emptyList() (or a shared constant empty list) to avoid unnecessary allocations and to signal the list is intentionally empty.

Suggested change
column.setCustomMetrics(metricsByColumn.getOrDefault(column.getName(), new ArrayList<>()));
column.setCustomMetrics(
    metricsByColumn.getOrDefault(column.getName(), Collections.emptyList()));

      }
    }

@@ -2904,7 +2917,9 @@ private ResultList<Column> getTableColumnsInternal(

    if (fieldsParam != null && fieldsParam.contains("profile")) {
      setColumnProfile(paginatedColumns);
      populateEntityFieldTags(entityType, paginatedColumns, table.getFullyQualifiedName(), true);
      if (!fieldsParam.contains("tags")) {
        populateEntityFieldTags(entityType, paginatedColumns, table.getFullyQualifiedName(), true);
      }
      paginatedColumns =
          piiOwners != null
              ? PIIMasker.getTableProfile(piiOwners, paginatedColumns, authorizer, securityContext)