From 241cc7c20cce6f4e199b5c52a770fc0f24b63a4b Mon Sep 17 00:00:00 2001
From: "rahul.munshi@canonical.com" <rahul.munshi@canonical.com>
Date: Thu, 19 Mar 2026 22:01:35 -0400
Subject: [PATCH 1/6] Doc: Updated additional steps for better clarity

---
 docs/how-to/migrate-gagent-to-otelcol.md | 82 ++++++++++++++++++++----
 1 file changed, 71 insertions(+), 11 deletions(-)

diff --git a/docs/how-to/migrate-gagent-to-otelcol.md b/docs/how-to/migrate-gagent-to-otelcol.md
index 65d9dd18..1929e27b 100644
--- a/docs/how-to/migrate-gagent-to-otelcol.md
+++ b/docs/how-to/migrate-gagent-to-otelcol.md
@@ -1,19 +1,79 @@
 # Migrate from Grafana Agent to OpenTelemetry Collector
 
-> Grafana Agent has reached End-of-Life (EOL) on November 1, 2025.
+```{important}
+Grafana Agent has reached End-of-Life (EOL) on November 1, 2025.
 Grafana Agent is no longer receiving support, security, or bug fixes from the vendor.
 Since it is part of COS, the charmed operators for Grafana Agent will continue to receive bug fixes until July 2026.
 You should plan to migrate from charmed Grafana Agent to charmed Opentelemetry Collector before that date.
-
+```
 
 These are the steps to follow:
-1. Ensure you are using Juju 3.6.
-1. Deploy the collector next to the agent charm
-1. Look at the relations for grafana-agent, and replicate them for the collector
-   - Note that some relation endpoints have slightly different names, for clarity:
-     - `logging-consumer` is now `send-loki-logs`
-     - `grafana-cloud-config` is now `cloud-config`
-1. Verify that data is appearing in the backends (Mimir, Prometheus, Loki, etc.)
-1. Remove grafana-agent from your deployment
+## Prerequisites
+- Ensure you are using Juju 3.6+. [Upgrade Juju](https://documentation.ubuntu.com/juju/latest/reference/upgrading-things/index.html) first if necessary.
+
+
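The Juju version prerequisite can be checked in a script; a minimal sketch (the `juju version` output string is hard-coded below for illustration — on a real client it would come from running `juju version`):

```shell
# Check that the client is on Juju 3.6 or newer.
# Hard-coded sample string; in practice: ver=$(juju version)
ver="3.6.4-genericlinux-amd64"

major=${ver%%.*}   # text before the first dot  -> "3"
rest=${ver#*.}     # text after the first dot   -> "6.4-genericlinux-amd64"
minor=${rest%%.*}  # text before the next dot   -> "6"

if [ "$major" -gt 3 ] || { [ "$major" -eq 3 ] && [ "$minor" -ge 6 ]; }; then
  echo "Juju $major.$minor: OK to migrate"
else
  echo "Juju $major.$minor: upgrade Juju first"
fi
```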
+
+### 1. Deploy the collector next to the agent charm
+#### Machine model
+
+Replace the value for `--base` to be consistent with your existing model.
+```
+juju deploy opentelemetry-collector otelco --channel 2/stable --base ubuntu@22.04 --config ports="metrics=8889"
+```
+The reason we changed the default port for metric from 8888 --> 8889 is because a some of the known applications like haproxy have conflicts with it.
+
+#### Kubernetes Model
+```
+juju deploy opentelemetry-collector-k8s otelco --channel 2/stable
+```
+
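The port override above exists because the collector's default metrics port can collide with other workloads. Picking a free port can be sketched as follows (the `busy_ports` list is hard-coded for illustration; on a real machine it would come from something like `juju ssh <unit> -- ss -tln`):

```shell
# Pick a free metrics port, starting from the collector default of 8888.
# Hard-coded sample of ports already listening on the machine; in practice
# this would come from something like: juju ssh <unit> -- ss -tln
busy_ports="80 443 8888"

port=8888
while echo " $busy_ports " | grep -q " $port "; do
  port=$((port + 1))   # walk forward until an unused port is found
done
echo "deploy with: --config ports=\"metrics=$port\""
```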
+
+### 2. Look at the relations for grafana-agent, and replicate them for the collector
+
+```{note}
+- Some relation endpoints have slightly different names, for clarity:
+  - `logging-consumer` is now `send-loki-logs`
+  - `grafana-cloud-config` is now `cloud-config`
+```
+
+The best way is to copy the workload charm relation endpoint that was connected to `grafana-agent`
+```
+juju status --relations grafana-agent | grep "grafana-agent:" | grep -v ":peers"
+```
+This is a sample relation output,
+```
+grafana-agent:grafana-dashboards-provider            grafana:grafana-dashboard        grafana_dashboard        regular
+keystone:juju-info                                   grafana-agent:juju-info          juju-info                subordinate
+prometheus-recieve-remote-write:receive-remote-write grafana-agent:send-remote-write  prometheus_remote_write  regular
+```
+Then integrate each of those relations with otelco, like so:
+```
+juju integrate otelco grafana:grafana-dashboard
+juju integrate otelco keystone:juju-info
+juju integrate otelco prometheus-recieve-remote-write:receive-remote-write
+```
+and so on..
+
+If you get a `quota limit exceeded` error then remove the relation from the payload first and then try again. For example:
+```
+ERROR cannot add relation "otelco:cos-agent openstack-exporter:cos-agent": establishing a new relation for openstack-exporter:cos-agent would exceed its maximum relation limit of 1 (quota limit exceeded)
+```
+
+```
+juju remove-relation grafana-agent openstack-exporter:cos-agent
+juju integrate otelco openstack-exporter:cos-agent
+```
+
+
+### 3. Verify that data is appearing in the backends (Mimir, Prometheus, Loki, etc.)
+```{tip}
+For metrics, the tags are visible in the Grafana dashboard section. For logs you can run a query from the Explore page and select one of the logs to see which `juju_application` ingested it.
+```
+
+### 4. Remove grafana-agent from your deployment
+```
+juju remove-application grafana-agent
+```
 
 ## Known Issues
-Unlike `grafana-agent`, OpenTelemetry Collector maintains state in-memory by default: this means that queued telemetry data will be lost on restart. This will be addressed in the future with the **File Storage extension**, tracked in [opentelemetry-collector-k8s#34](https://github.com/canonical/opentelemetry-collector-k8s-operator/issues/34).
+- Unlike `grafana-agent`, OpenTelemetry Collector maintains state in-memory by default: this means that queued telemetry data will be lost on restart. This will be addressed in the future with the **File Storage extension**, tracked in [opentelemetry-collector-k8s#34](https://github.com/canonical/opentelemetry-collector-k8s-operator/issues/34).
+- The [bug with metric port conflict](https://github.com/canonical/opentelemetry-collector-operator/issues/178) only gets patched in Track 2.
\ No newline at end of file

From 55ad528cc01e69b76432756ed92099fd76450b2b Mon Sep 17 00:00:00 2001
From: Rahul Munshi <rahul.munshi@canonical.com>
Date: Wed, 25 Mar 2026 14:59:31 -0400
Subject: [PATCH 3/6] Docs: Updated changes from comments

---
 docs/.custom_wordlist.txt                |  1 -
 docs/how-to/migrate-gagent-to-otelcol.md | 41 ++++++++++++------------
 2 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/docs/.custom_wordlist.txt b/docs/.custom_wordlist.txt
index 55031dec..59b2f248 100644
--- a/docs/.custom_wordlist.txt
+++ b/docs/.custom_wordlist.txt
@@ -111,7 +111,6 @@ OpentelemetryCollector
 OSD
 OSDs
 OTEL
-otelco
 otelcol
 otf
 OTLP
diff --git a/docs/how-to/migrate-gagent-to-otelcol.md b/docs/how-to/migrate-gagent-to-otelcol.md
index 1929e27b..6617e836 100644
--- a/docs/how-to/migrate-gagent-to-otelcol.md
+++ b/docs/how-to/migrate-gagent-to-otelcol.md
@@ -10,22 +10,24 @@ These are the steps to follow:
 
-### 1. Deploy the collector next to the agent charm
+### Deploy the collector next to the agent charm
 #### Machine model
 
 Replace the value for `--base` to be consistent with your existing model.
 ```
-juju deploy opentelemetry-collector otelco --channel 2/stable --base ubuntu@22.04 --config ports="metrics=8889"
+juju deploy opentelemetry-collector otelco \
+  --channel 2/stable \
+  --base ubuntu@22.04 \
+  --config ports="metrics=8889" # optional
 ```
-The reason we changed the default port for metric from 8888 --> 8889 is because a some of the known applications like haproxy have conflicts with it.
-
+Note that if port 8888 (or others) is already taken by another application (e.g. haproxy), use a config option to override the default with e.g. 8889.
 #### Kubernetes Model
 ```
-juju deploy opentelemetry-collector-k8s otelco --channel 2/stable
+juju deploy opentelemetry-collector-k8s otelcol --channel 2/stable
 ```
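For scripted rollouts, the substrate-specific deploy command above can be selected programmatically; a sketch (the `MODEL_TYPE` value is hard-coded for illustration — in practice it would be read from `juju show-model`; adjust `--base` to your model):

```shell
# Select the deploy command by substrate. MODEL_TYPE is hard-coded here;
# in practice it would be read from `juju show-model --format json`.
MODEL_TYPE="iaas"   # "iaas" = machine model, "caas" = Kubernetes model

if [ "$MODEL_TYPE" = "caas" ]; then
  cmd="juju deploy opentelemetry-collector-k8s otelcol --channel 2/stable"
else
  cmd="juju deploy opentelemetry-collector otelcol --channel 2/stable --base ubuntu@22.04 --config ports=metrics=8889"
fi
echo "$cmd"   # review the command before running it
```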
-### 2. Look at the relations for grafana-agent, and replicate them for the collector
+### Inspect grafana-agent integrations, and replicate them for the otel collector
 
 ```{note}
 - Some relation endpoints have slightly different names, for clarity:
@@ -37,43 +39,42 @@ The best way is to copy the workload charm relation endpoint that was connected
 ```
 juju status --relations grafana-agent | grep "grafana-agent:" | grep -v ":peers"
 ```
-This is a sample relation output,
+This is a sample relation output:
 ```
 grafana-agent:grafana-dashboards-provider            grafana:grafana-dashboard        grafana_dashboard        regular
 keystone:juju-info                                   grafana-agent:juju-info          juju-info                subordinate
 prometheus-recieve-remote-write:receive-remote-write grafana-agent:send-remote-write  prometheus_remote_write  regular
 ```
-Then integrate each of those relations with otelco, like so:
+Then integrate each of those relations with otelcol, like so:
 ```
-juju integrate otelco grafana:grafana-dashboard
-juju integrate otelco keystone:juju-info
-juju integrate otelco prometheus-recieve-remote-write:receive-remote-write
+juju integrate otelcol grafana:grafana-dashboard
+juju integrate otelcol keystone:juju-info
+juju integrate otelcol prometheus-receive-remote-write:receive-remote-write
 ```
-and so on..
+and so on.
 
-If you get a `quota limit exceeded` error then remove the relation from the payload first and then try again. For example:
+If you get a `quota limit exceeded` error, for example:
 ```
-ERROR cannot add relation "otelco:cos-agent openstack-exporter:cos-agent": establishing a new relation for openstack-exporter:cos-agent would exceed its maximum relation limit of 1 (quota limit exceeded)
+ERROR cannot add relation "otelcol:cos-agent openstack-exporter:cos-agent": establishing a new relation for openstack-exporter:cos-agent would exceed its maximum relation limit of 1 (quota limit exceeded)
```
+Then remove the relation from the workload first and then try again:
 ```
 juju remove-relation grafana-agent openstack-exporter:cos-agent
-juju integrate otelco openstack-exporter:cos-agent
+juju integrate otelcol openstack-exporter:cos-agent
 ```
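The inspect-and-replicate steps can also be scripted end to end; a minimal sketch, with sample `juju status --relations` output inlined (on a real model you would pipe the live output instead; the column layout shown above is assumed):

```shell
# Turn the filtered output of
#   juju status --relations grafana-agent | grep "grafana-agent:" | grep -v ":peers"
# into the `juju integrate otelcol ...` commands to run.
# The sample output is inlined here for illustration.
relations='grafana-agent:grafana-dashboards-provider  grafana:grafana-dashboard        grafana_dashboard        regular
keystone:juju-info                                    grafana-agent:juju-info          juju-info                subordinate
prometheus-receive-remote-write:receive-remote-write  grafana-agent:send-remote-write  prometheus_remote_write  regular'

echo "$relations" | while read -r provider requirer _interface _type; do
  # Keep whichever endpoint is NOT on grafana-agent: that is the peer
  # that must be re-integrated with otelcol.
  case "$provider" in
    grafana-agent:*) peer="$requirer" ;;
    *)               peer="$provider" ;;
  esac
  echo "juju integrate otelcol $peer"
done
```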
-### 3. Verify that data is appearing in the backends (Mimir, Prometheus, Loki, etc.)
+### Verify that data is appearing in the backends (Mimir, Prometheus, Loki, etc.)
 ```{tip}
 For metrics, the tags are visible in the Grafana dashboard section. For logs you can run a query from the Explore page and select one of the logs to see which `juju_application` ingested it.
 ```
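The metrics side of this check can also be done against the standard Prometheus HTTP API instead of the Grafana UI; a sketch (the endpoint address and the response below are illustrative placeholders):

```shell
# Ask Prometheus which juju_application values have shipped metrics:
#   curl -s "http://<prometheus-address>:9090/api/v1/label/juju_application/values"
# A sample response is hard-coded so the parsing can be shown end to end.
response='{"status":"success","data":["keystone","openstack-exporter","otelcol"]}'

# Strip the JSON wrapping and print one application name per line.
apps=$(echo "$response" | tr -d '"[]{}' | sed 's/.*data://' | tr ',' '\n')
echo "$apps"
```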
-### 4. Remove grafana-agent from your deployment
+### Remove grafana-agent from your deployment
 ```
-juju remove-application grafana-agent
+juju remove-application grafana-agent --destroy-storage
 ```
 
 ## Known Issues
-- Unlike `grafana-agent`, OpenTelemetry Collector maintains state in-memory by default: this means that queued telemetry data will be lost on restart. This will be addressed in the future with the **File Storage extension**, tracked in [opentelemetry-collector-k8s#34](https://github.com/canonical/opentelemetry-collector-k8s-operator/issues/34).
-- The [bug with metric port conflict](https://github.com/canonical/opentelemetry-collector-operator/issues/178) only gets patched in Track 2.
\ No newline at end of file

From 4ccb4e2035217dd23526caa594d5e386528ef526 Mon Sep 17 00:00:00 2001
From: Rahul Munshi <rahul.munshi@canonical.com>
Date: Wed, 25 Mar 2026 15:08:15 -0400
Subject: [PATCH 4/6] Docs: Updated changes from comments

---
 docs/how-to/migrate-gagent-to-otelcol.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/how-to/migrate-gagent-to-otelcol.md b/docs/how-to/migrate-gagent-to-otelcol.md
index 6617e836..ea1935ff 100644
--- a/docs/how-to/migrate-gagent-to-otelcol.md
+++ b/docs/how-to/migrate-gagent-to-otelcol.md
@@ -27,7 +27,7 @@ juju deploy opentelemetry-collector-k8s otelcol --channel 2/stable
 ```
-### Inspect grafana-agent integrations, and replicate them for the otel collector
+### Inspect grafana-agent integrations, and replicate them for the otecol collector
 
 ```{note}
 - Some relation endpoints have slightly different names, for clarity:
   - `logging-consumer` is now `send-loki-logs`
   - `grafana-cloud-config` is now `cloud-config`
 ```
@@ -45,7 +45,7 @@ grafana-agent:grafana-dashboards-provider            grafana:grafana-dashboard
 keystone:juju-info                                   grafana-agent:juju-info          juju-info                subordinate
 prometheus-recieve-remote-write:receive-remote-write grafana-agent:send-remote-write  prometheus_remote_write  regular
 ```
-Then integrate each of those relations with otelcol, like so:
+Then integrate each of those charms with otelcol, for example:
 ```
 juju integrate otelcol grafana:grafana-dashboard
 juju integrate otelcol keystone:juju-info

From 2430cd1474f6b081df30cb4a5bdb298f1bc1aa83 Mon Sep 17 00:00:00 2001
From: Rahul Munshi <rahul.munshi@canonical.com>
Date: Wed, 25 Mar 2026 15:30:38 -0400
Subject: [PATCH 5/6] Docs: Updated changes from comments

---
 docs/how-to/migrate-gagent-to-otelcol.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/how-to/migrate-gagent-to-otelcol.md b/docs/how-to/migrate-gagent-to-otelcol.md
index ea1935ff..9065cedf 100644
--- a/docs/how-to/migrate-gagent-to-otelcol.md
+++ b/docs/how-to/migrate-gagent-to-otelcol.md
@@ -27,7 +27,7 @@ juju deploy opentelemetry-collector-k8s otelcol --channel 2/stable
 ```
-### Inspect grafana-agent integrations, and replicate them for the otecol collector
+### Inspect grafana-agent integrations, and replicate them for the otelcol collector
 
 ```{note}
 - Some relation endpoints have slightly different names, for clarity:

From 8107816110095915d4baebae4009157c58162966 Mon Sep 17 00:00:00 2001
From: Rahul Munshi <rahul.munshi@canonical.com>
Date: Wed, 25 Mar 2026 16:57:51 -0400
Subject: [PATCH 6/6] Docs: Updating changes recommended from the docs

---
 docs/how-to/migrate-gagent-to-otelcol.md | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/docs/how-to/migrate-gagent-to-otelcol.md b/docs/how-to/migrate-gagent-to-otelcol.md
index 9065cedf..b1331eef 100644
--- a/docs/how-to/migrate-gagent-to-otelcol.md
+++ b/docs/how-to/migrate-gagent-to-otelcol.md
@@ -14,13 +14,18 @@ These are the steps to follow:
 #### Machine model
 
 Replace the value for `--base` to be consistent with your existing model.
+
+```{note}
+If port 8888 (or others) is already taken by another application (e.g. haproxy), use a config option to override the default with e.g. 8889.
 ```
+
+```
-juju deploy opentelemetry-collector otelco \
+juju deploy opentelemetry-collector otelcol \
   --channel 2/stable \
   --base ubuntu@22.04 \
   --config ports="metrics=8889" # optional
 ```
-Note that if port 8888 (or others) is already taken by another application (e.g. haproxy), use a config option to override the default with e.g. 8889.
+
 #### Kubernetes Model
 ```
 juju deploy opentelemetry-collector-k8s otelcol --channel 2/stable