diff --git a/docs/.custom_wordlist.txt b/docs/.custom_wordlist.txt
index 6a4ee4a3..59b2f248 100644
--- a/docs/.custom_wordlist.txt
+++ b/docs/.custom_wordlist.txt
@@ -58,6 +58,7 @@ Grafana's
gRPCs
gyre
HostHealth
+haproxy
hostname
hostPath
html
diff --git a/docs/how-to/migrate-gagent-to-otelcol.md b/docs/how-to/migrate-gagent-to-otelcol.md
index 65d9dd18..b1331eef 100644
--- a/docs/how-to/migrate-gagent-to-otelcol.md
+++ b/docs/how-to/migrate-gagent-to-otelcol.md
@@ -1,19 +1,85 @@
# Migrate from Grafana Agent to OpenTelemetry Collector
-> Grafana Agent has reached End-of-Life (EOL) on November 1, 2025.
+```{important}
+Grafana Agent has reached End-of-Life (EOL) on November 1, 2025.
Grafana Agent is no longer receiving support, security, or bug fixes from the vendor. Since it is part of COS, the charmed operators for Grafana Agent will continue to receive bug fixes until July 2026. You should plan to migrate from charmed Grafana Agent to charmed OpenTelemetry Collector before that date.
-
+```
These are the steps to follow:
-1. Ensure you are using Juju 3.6.
-1. Deploy the collector next to the agent charm
-1. Look at the relations for grafana-agent, and replicate them for the collector
- - Note that some relation endpoints have slightly different names, for clarity:
- - `logging-consumer` is now `send-loki-logs`
- - `grafana-cloud-config` is now `cloud-config`
-1. Verify that data is appearing in the backends (Mimir, Prometheus, Loki, etc.)
-1. Remove grafana-agent from your deployment
+## Prerequisites
+- Ensure you are using Juju 3.6+. [Upgrade Juju](https://documentation.ubuntu.com/juju/latest/reference/upgrading-things/index.html) first if necessary.
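+
+To confirm you meet the requirement, compare the output of `juju version` against `3.6.0`. A minimal sketch using `sort -V`; the `3.6.4` fallback below is only a sample value for illustration:
+
+```
+# version_ok succeeds when the given version is >= 3.6.0
+version_ok() {
+  [ "$(printf '%s\n' 3.6.0 "$1" | sort -V | head -n1)" = "3.6.0" ]
+}
+
+# Strip the build suffix from `juju version`, e.g. "3.6.4-genericlinux-amd64" -> "3.6.4".
+# The 3.6.4 fallback is a sample value used when juju is not on PATH.
+current="$(juju version 2>/dev/null | cut -d- -f1)"
+version_ok "${current:-3.6.4}" && echo "Juju is new enough" || echo "Upgrade Juju first"
+```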
+
+## Steps
+
+### Deploy the collector next to the agent charm
+#### Machine model
+
+Set the `--base` value to match the Ubuntu base used by your existing model.
+
+```{note}
+If port 8888 (or another default port) is already taken by a different application (e.g. haproxy), use the `ports` config option to override the default with e.g. 8889.
+```
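+
+You can check on the target machine whether the default port is already bound before deploying. A quick sketch using `ss` (from iproute2), with 8888 as the collector's default self-metrics port:
+
+```
+# Report whether anything is already listening on port 8888
+if ss -tln | grep -q ':8888 '; then
+  echo "port 8888 in use: override it with the ports config option"
+else
+  echo "port 8888 free"
+fi
+```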
+
+```
+juju deploy opentelemetry-collector otelcol \
+ --channel 2/stable \
+ --base ubuntu@22.04 \
+ --config ports="metrics=8889" # optional
+```
+
+#### Kubernetes model
+```
+juju deploy opentelemetry-collector-k8s otelcol --channel 2/stable
+```
+
+
+### Inspect the grafana-agent integrations and replicate them for otelcol
+
+```{note}
+- Some relation endpoints have slightly different names, for clarity:
+ - `logging-consumer` is now `send-loki-logs`
+ - `grafana-cloud-config` is now `cloud-config`
+```
+
+The easiest way is to list the relation endpoints currently connected to `grafana-agent`:
+```
+juju status --relations grafana-agent | grep "grafana-agent:" | grep -v ":peers"
+```
+This is a sample relation output:
+```
+grafana-agent:grafana-dashboards-provider grafana:grafana-dashboard grafana_dashboard regular
+keystone:juju-info grafana-agent:juju-info juju-info subordinate
+prometheus-receive-remote-write:receive-remote-write grafana-agent:send-remote-write prometheus_remote_write regular
+```
+Then integrate each of those charms with otelcol, for example:
+```
+juju integrate otelcol grafana:grafana-dashboard
+juju integrate otelcol keystone:juju-info
+juju integrate otelcol prometheus-receive-remote-write:receive-remote-write
+```
+and so on.
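+
+This replication can be scripted. A sketch that reads rows like the sample above (inlined here as a placeholder string), keeps the endpoint that is not on `grafana-agent`, and prints the matching `juju integrate` command; review the output before running it:
+
+```
+# Placeholder sample; in practice pipe in:
+#   juju status --relations grafana-agent | grep "grafana-agent:" | grep -v ":peers"
+rows='grafana-agent:grafana-dashboards-provider grafana:grafana-dashboard grafana_dashboard regular
+keystone:juju-info grafana-agent:juju-info juju-info subordinate'
+
+printf '%s\n' "$rows" | awk '{
+  # Keep whichever endpoint is not on grafana-agent and pair it with otelcol
+  ep = ($1 ~ /^grafana-agent:/) ? $2 : $1
+  print "juju integrate otelcol " ep
+}'
+```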
+
+If you get a `quota limit exceeded` error, for example:
+```
+ERROR cannot add relation "otelcol:cos-agent openstack-exporter:cos-agent": establishing a new relation for openstack-exporter:cos-agent would exceed its maximum relation limit of 1 (quota limit exceeded)
+```
+
+Remove the existing `grafana-agent` relation first, then try again:
+```
+juju remove-relation grafana-agent openstack-exporter:cos-agent
+juju integrate otelcol openstack-exporter:cos-agent
+```
+
+
+### Verify that data is appearing in the backends (Mimir, Prometheus, Loki, etc.)
+```{tip}
+For metrics, confirm that the expected tags are visible on the Grafana dashboards. For logs, run a query from the Explore page and select one of the log lines to see which `juju_application` ingested it.
+```
+
+### Remove grafana-agent from your deployment
+```
+juju remove-application grafana-agent --destroy-storage
+```
## Known Issues
-Unlike `grafana-agent`, OpenTelemetry Collector maintains state in-memory by default: this means that queued telemetry data will be lost on restart. This will be addressed in the future with the **File Storage extension**, tracked in [opentelemetry-collector-k8s#34](https://github.com/canonical/opentelemetry-collector-k8s-operator/issues/34).