diff --git a/examples/ai-transport-message-per-response/react/README.md b/examples/ai-transport-message-per-response/react/README.md
index bb10ac8f99..b0b5e85be2 100644
--- a/examples/ai-transport-message-per-response/react/README.md
+++ b/examples/ai-transport-message-per-response/react/README.md
@@ -18,7 +18,7 @@ Use the following components to implement AI Transport message-per-response stre
- [`rewind`](/docs/channels/options/rewind) channel option: enables seamless message recovery during reconnections, delivering historical messages as `message.update` events.
- [`appendMessage()`](/docs/api/realtime-sdk/channels#append-message): appends tokens to an existing message using its serial.
-Find out more about [AI Transport](/docs/ai-transport) and [message-per-response](/docs/ai-transport/features/token-streaming/message-per-response).
+Find out more about [AI Transport](/docs/ai-transport) and [message-per-response](/docs/ai-transport/token-streaming/message-per-response).
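+
+As a minimal sketch of the subscriber side (assuming the Ably JavaScript SDK; the channel name and `action` values shown are illustrative), the components above combine like this:
+
+```typescript
+import * as Ably from 'ably';
+
+// Attach with rewind so tokens published before (re)connection are
+// replayed as message.update events rather than lost.
+const realtime = new Ably.Realtime({ key: 'YOUR_ABLY_API_KEY' });
+const channel = realtime.channels.get('ai:response', {
+  params: { rewind: '2m' }, // replay up to 2 minutes of history on attach
+});
+
+let responseText = '';
+
+channel.subscribe((message) => {
+  // Appends to an existing message arrive as updates; accumulate them to
+  // rebuild the full response text.
+  if (message.action === 'message.update') {
+    responseText += message.data;
+  } else {
+    responseText = message.data; // the first token creates the message
+  }
+  console.log(responseText);
+});
+```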
## Getting started
diff --git a/src/pages/docs/ai-transport/messaging/citations.mdx b/src/pages/docs/ai-transport/messaging/citations.mdx
index 53d473b0be..194e451385 100644
--- a/src/pages/docs/ai-transport/messaging/citations.mdx
+++ b/src/pages/docs/ai-transport/messaging/citations.mdx
@@ -140,7 +140,7 @@ When streaming response tokens using the [message-per-response](/docs/ai-transpo
-Identify the agent with a [`clientId`](/docs/messages#properties) in order to attribute a citation to a specific agent. This is useful in multi-agent architectures where multiple agents may contribute citations to the same response. For more information, see [Agent identity](/docs/ai-transport/features/sessions-identity/identifying-users-and-agents#agent-identity).
+Identify the agent with a [`clientId`](/docs/messages#properties) so that each citation can be attributed to a specific agent. This is useful in multi-agent architectures where multiple agents may contribute citations to the same response. For more information, see [Agent identity](/docs/ai-transport/sessions-identity/identifying-users-and-agents#agent-identity).
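+
+As a sketch (the `clientId` value and channel name are illustrative), attribution only requires instantiating each agent's Realtime client with its own `clientId`:
+
+```typescript
+import * as Ably from 'ably';
+
+// Give each agent its own clientId so everything it publishes,
+// including citations, is attributed to that agent.
+const researchAgent = new Ably.Realtime({
+  key: 'YOUR_ABLY_API_KEY',
+  clientId: 'agent:research', // hypothetical naming scheme
+});
+
+const channel = researchAgent.channels.get('ai:response');
+
+// Consumers can read message.clientId to see which agent contributed.
+channel.subscribe((message) => {
+  console.log(`${message.clientId} published:`, message.data);
+});
+```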
diff --git a/src/pages/docs/ai-transport/sessions-identity/index.mdx b/src/pages/docs/ai-transport/sessions-identity/index.mdx
index 828fdd6538..16de4bee05 100644
--- a/src/pages/docs/ai-transport/sessions-identity/index.mdx
+++ b/src/pages/docs/ai-transport/sessions-identity/index.mdx
@@ -37,7 +37,7 @@ AI Transport uses a channel-oriented model where sessions persist independently
In this model, sessions are associated with the channel, enabling seamless reconnection, background agent work, and multi-device access without additional complexity.
-
+
The channel-oriented model provides key benefits for modern AI applications: sessions maintain continuity in the face of disconnections, users can refresh or navigate back to the ongoing session, multiple users or devices can participate in the same session, and agents can continue long-running or asynchronous workloads even when clients disconnect.
diff --git a/src/pages/docs/ai-transport/token-streaming/index.mdx b/src/pages/docs/ai-transport/token-streaming/index.mdx
index 9dfa3030b0..3de9566ea3 100644
--- a/src/pages/docs/ai-transport/token-streaming/index.mdx
+++ b/src/pages/docs/ai-transport/token-streaming/index.mdx
@@ -28,7 +28,7 @@ Ably AI Transport solves this by decoupling token delivery from connection state
2. Server responds with a unique ID for the session, which is used to identify the channel
3. All further communication happens over the channel
-
+
Dropping in AI Transport to handle the token stream completely changes the user's experience of device switching and failures. You do not need to add complex failure-handling code to your application or deploy additional infrastructure.
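+
+A minimal sketch of this flow, assuming the Ably JavaScript SDK; the endpoint paths and channel naming scheme are illustrative, not prescribed:
+
+```typescript
+import * as Ably from 'ably';
+
+async function startSession() {
+  // 1. Ask the backend to start a session (endpoint path is illustrative)
+  const res = await fetch('/api/sessions', { method: 'POST' });
+
+  // 2. The server responds with a unique session ID...
+  const { sessionId } = await res.json();
+
+  // 3. ...which names the channel that carries all further communication
+  const realtime = new Ably.Realtime({ authUrl: '/api/ably-token' });
+  const channel = realtime.channels.get(`session:${sessionId}`);
+
+  channel.subscribe((message) => {
+    // Tokens keep arriving here across refreshes and device switches,
+    // because delivery is tied to the channel, not an HTTP connection
+    console.log(message.data);
+  });
+}
+```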
diff --git a/src/pages/docs/ai-transport/token-streaming/token-rate-limits.mdx b/src/pages/docs/ai-transport/token-streaming/token-rate-limits.mdx
index 1efaeb4aaa..8ea7a13087 100644
--- a/src/pages/docs/ai-transport/token-streaming/token-rate-limits.mdx
+++ b/src/pages/docs/ai-transport/token-streaming/token-rate-limits.mdx
@@ -16,7 +16,7 @@ The limits in the second category, however, cannot be increased arbitrarily and
## Message-per-response
-The [message-per-response](/docs/ai-transport/features/token-streaming/message-per-response) pattern includes automatic rate limit protection. AI Transport prevents a single response stream from reaching the message rate limit for a connection by rolling up multiple appends into a single published message:
+The [message-per-response](/docs/ai-transport/token-streaming/message-per-response) pattern includes automatic rate limit protection. AI Transport prevents a single response stream from reaching the message rate limit for a connection by rolling up multiple appends into a single published message:
1. Your agent streams tokens to the channel at the model's output rate
2. Ably publishes the first token immediately, then automatically rolls up subsequent tokens on receipt
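+
+From the agent's side this could look like the following sketch. The `appendMessage()` parameters and how the publisher obtains the first message's serial are assumptions based on the description above, not a confirmed API:
+
+```typescript
+import * as Ably from 'ably';
+
+// Stand-in for the model's token stream.
+async function* modelTokens(): AsyncGenerator<string> {
+  yield 'Hello';
+  yield ', world';
+}
+
+async function streamResponse(channel: Ably.RealtimeChannel) {
+  const stream = modelTokens();
+  const first = await stream.next();
+
+  // Publish the first token as a new message; this page only states that
+  // appends reference the existing message's serial, so how the serial is
+  // surfaced to the publisher is assumed here.
+  await channel.publish('response', first.value);
+  const serial = '...'; // placeholder: the serial of the message above
+
+  for await (const token of stream) {
+    // Appends flow at the model's output rate; Ably rolls rapid appends
+    // into single published messages, so one response stream does not
+    // exhaust the per-connection message rate limit.
+    await channel.appendMessage(serial, token); // parameter order assumed
+  }
+}
+```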
@@ -56,7 +56,7 @@ If you configure the `appendRollupWindow` to allow a single response to use more
## Message-per-token
-The [message-per-token](/docs/ai-transport/features/token-streaming/message-per-token) pattern requires you to manage rate limits directly. Each token publishes as a separate message, so high-speed model output can cause per-connection or per-channel rate limits to be hit, as well as consuming overall message allowances quickly.
+The [message-per-token](/docs/ai-transport/token-streaming/message-per-token) pattern requires you to manage rate limits directly. Each token publishes as a separate message, so high-speed model output can hit per-connection or per-channel rate limits and quickly consume your overall message allowance.
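+
+One way to do this, as a sketch only (the flush interval, channel, and event name are illustrative choices, not recommendations), is to buffer tokens and publish them in small batches:
+
+```typescript
+import * as Ably from 'ably';
+
+const realtime = new Ably.Realtime({ key: 'YOUR_ABLY_API_KEY' });
+const channel = realtime.channels.get('ai:response');
+
+// Buffer tokens and flush on a fixed interval, so the publish rate is
+// bounded by the flush frequency rather than the model's output rate.
+const buffer: string[] = [];
+const FLUSH_INTERVAL_MS = 100; // ~10 messages/second; tune to your limits
+
+setInterval(() => {
+  if (buffer.length === 0) return;
+  const batch = buffer.splice(0, buffer.length).join('');
+  void channel.publish('tokens', batch); // one message carries many tokens
+}, FLUSH_INTERVAL_MS);
+
+// Call this for each token the model emits.
+export function onToken(token: string) {
+  buffer.push(token);
+}
+```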
To stay within limits:
@@ -70,5 +70,5 @@ If your application requires higher message rates than your current package allo
## Next steps
- Review [Ably platform limits](/docs/platform/pricing/limits) to understand rate limit thresholds for your package
-- Learn about the [message-per-response](/docs/ai-transport/features/token-streaming/message-per-response) pattern for automatic rate limit protection
-- Learn about the [message-per-token](/docs/ai-transport/features/token-streaming/message-per-token) pattern for fine-grained control
+- Learn about the [message-per-response](/docs/ai-transport/token-streaming/message-per-response) pattern for automatic rate limit protection
+- Learn about the [message-per-token](/docs/ai-transport/token-streaming/message-per-token) pattern for fine-grained control