Merged
33 changes: 13 additions & 20 deletions docs/services/index.md
@@ -1,22 +1,15 @@
# Supporting Services (Beta)

The pgEdge Control Plane lets you run services alongside your
databases. Services are applications that attach to a database, run on
any host in the cluster, and connect via automatically-managed
database credentials.

## What Are Supporting Services?

A supporting service is an application that runs alongside a database.
Each service instance runs on a single host and receives its own set of
database credentials scoped to that instance. The Control Plane supports
the following service types:
The pgEdge Control Plane lets you run services alongside your databases.
A supporting service is an application that attaches to a database,
runs on any host in the cluster, and connects using a database user you
specify with the `connect_as` field. The Control Plane supports the
following service types:

- The [pgEdge Postgres MCP Server](mcp.md) connects AI agents and
LLM-powered applications to your database, enabling natural language
queries and AI-powered data access.
- The pgEdge RAG Server *(coming soon)* enables retrieval-augmented
generation workflows using your database as a knowledge store.
LLM-powered applications to your database.
- The [pgEdge RAG Server](rag.md) enables retrieval-augmented generation
workflows using your database as a knowledge store.
- [PostgREST](postgrest.md) automatically generates a REST API from
your PostgreSQL schema, making your data accessible over HTTP without
writing backend code.
@@ -25,9 +18,9 @@ the following service types:

When you add a service to a database, the Control Plane creates one
service instance per host listed in the service's `host_ids`. Each
instance runs on a single host and receives its own database
credentials. Services can run on any host in the cluster; they do not
need to be co-located with database instances.
instance runs on a single host and connects to the database using the
credentials of the `connect_as` user. Services can run on any host in
the cluster; they do not need to be co-located with database instances.

The following table describes the lifecycle states for service
instances:
@@ -52,8 +45,8 @@ deployment patterns are common:
with no database instance, which isolates the service workload from
the database.
- In a multiple-instances topology, one service instance runs per host
for redundancy or regional proximity; each instance receives its own
credentials and connects to the database independently.
for redundancy or regional proximity; each instance connects to the
database independently using the same `connect_as` credentials.

In the following example, the service runs on the same host as the
database node (`host-1`):
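The example itself is collapsed in this diff view. Based on the field names used in the `managing.md` examples elsewhere in this change, a minimal spec of that shape might look like the following sketch (the node name, user name, and password are placeholders, not values from the original example):

```json
{
  "nodes": [
    { "name": "n1", "host_ids": ["host-1"] }
  ],
  "database_users": [
    {
      "username": "mcp_user",
      "password": "changeme",
      "db_owner": true,
      "attributes": ["LOGIN"]
    }
  ],
  "services": [
    {
      "service_id": "mcp-server",
      "service_type": "mcp",
      "version": "latest",
      "host_ids": ["host-1"],
      "port": 8080,
      "connect_as": "mcp_user"
    }
  ]
}
```

Here the service's `host_ids` entry matches the node's host, so the service is co-located with the database instance.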
15 changes: 13 additions & 2 deletions docs/services/managing.md
@@ -46,15 +46,24 @@ database with one MCP service instance:
"nodes": [
{ "name": "n1", "host_ids": ["host-1"] }
],
"database_users": [
{
"username": "mcp_user",
"password": "changeme",
"db_owner": true,
"attributes": ["LOGIN"]
}
],
"services": [
{
"service_id": "mcp-server",
"service_type": "mcp",
"version": "latest",
"host_ids": ["host-1"],
"port": 8080,
"connect_as": "mcp_user",
"config": {
"llm_enabled": true,
"llm_provider": "anthropic",
"llm_model": "claude-sonnet-4-5",
"anthropic_api_key": "sk-ant-..."
@@ -100,6 +109,7 @@ database with a PostgREST service instance. The service exposes the
"port": 3100,
"connect_as": "app",
"config": {
"db_anon_role": "web_anon",
"jwt_secret": "a-secret-key-of-at-least-32-characters"
}
}
@@ -145,6 +155,7 @@ use a different model:
"version": "latest",
"host_ids": ["host-1"],
"port": 8080,
"connect_as": "mcp_user",
"config": {
"llm_enabled": true,
"llm_provider": "anthropic",
@@ -161,7 +172,7 @@ use a different model:

To remove a service, submit an update request that omits the service
from the `services` array. The Control Plane stops and deletes all
service instances for that service and revokes its database credentials.
service instances for that service.
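For instance, assuming a database that previously declared only the `mcp-server` service, an update request with an empty `services` array would remove it. This is a sketch of the relevant fields, not a verbatim API payload:

```json
{
  "nodes": [
    { "name": "n1", "host_ids": ["host-1"] }
  ],
  "services": []
}
```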

!!! warning

54 changes: 43 additions & 11 deletions docs/services/mcp.md
@@ -1,19 +1,15 @@
# pgEdge Postgres MCP Server

The MCP service runs a [Model Context Protocol](https://modelcontextprotocol.io)
server alongside your database. AI agents and LLM-powered applications
use the MCP server to query and interact with your data. For more
information, see the
server alongside your database. The Control Plane provisions an MCP
server container on each specified host; the server connects to the
database using the credentials of the `connect_as` user. AI agents
and LLM-powered applications call the server's tools to query data,
inspect schemas, run EXPLAIN plans, and perform vector similarity
searches. For more information, see the
[pgEdge Postgres MCP](https://github.com/pgEdge/pgedge-postgres-mcp)
project.

## Overview

The Control Plane provisions an MCP server container on each specified
host. The server connects to the database using automatically-managed
credentials. AI agents call the server's tools to query data, inspect
schemas, run EXPLAIN plans, and perform vector similarity searches.

See [Managing Services](managing.md) for instructions on adding,
updating, and removing services. The sections below cover MCP-specific
configuration.
@@ -49,7 +45,7 @@ security configuration fields:

| Field | Type | Default | Description |
|------------------|---------|---------|-------------|
| `allow_writes` | boolean | `false` | When `true`, the service connects using the read-write database user (`svc_{service_id}_rw`) and the `query_database` tool can execute write statements. When `false`, the read-only user (`svc_{service_id}_ro`) is used and write statements are rejected at the database level. |
| `allow_writes` | boolean | `false` | When `true`, the `query_database` tool can execute write statements and the service connects to the primary node. When `false`, write statements are rejected by the MCP server and the service prefers a standby node. |
| `init_token` | string | — | A bootstrap token for initial access to the MCP server. See [Bootstrapping](#bootstrapping). |
| `init_users` | array | — | Initial user accounts to create on the MCP server. See [Bootstrapping](#bootstrapping). |
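Taken together, a `config` block that keeps the read-only default and supplies a bootstrap token might look like the following sketch (the token value echoes the bootstrapping example later in this file; `init_users` is omitted here because its entry shape is not shown in this diff):

```json
"config": {
  "allow_writes": false,
  "init_token": "my-bootstrap-token"
}
```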

@@ -161,13 +157,22 @@ you connect via an MCP client that supplies its own LLM:
"nodes": [
{ "name": "n1", "host_ids": ["host-1"] }
],
"database_users": [
{
"username": "mcp_user",
"password": "changeme",
"db_owner": true,
"attributes": ["LOGIN"]
}
],
"services": [
{
"service_id": "mcp-server",
"service_type": "mcp",
"version": "latest",
"host_ids": ["host-1"],
"port": 8080,
"connect_as": "mcp_user",
"config": {
"init_token": "my-bootstrap-token",
"init_users": [
@@ -197,13 +202,22 @@ Anthropic as the provider:
"nodes": [
{ "name": "n1", "host_ids": ["host-1"] }
],
"database_users": [
{
"username": "mcp_user",
"password": "changeme",
"db_owner": true,
"attributes": ["LOGIN"]
}
],
"services": [
{
"service_id": "mcp-server",
"service_type": "mcp",
"version": "latest",
"host_ids": ["host-1"],
"port": 8080,
"connect_as": "mcp_user",
"config": {
"llm_enabled": true,
"llm_provider": "anthropic",
@@ -237,13 +251,22 @@ OpenAI and configures embedding support:
"nodes": [
{ "name": "n1", "host_ids": ["host-1"] }
],
"database_users": [
{
"username": "mcp_user",
"password": "changeme",
"db_owner": true,
"attributes": ["LOGIN"]
}
],
"services": [
{
"service_id": "mcp-server",
"service_type": "mcp",
"version": "latest",
"host_ids": ["host-1"],
"port": 8080,
"connect_as": "mcp_user",
"config": {
"llm_enabled": true,
"llm_provider": "openai",
@@ -280,13 +303,22 @@ to use a self-hosted Ollama server for both the LLM and embeddings:
"nodes": [
{ "name": "n1", "host_ids": ["host-1"] }
],
"database_users": [
{
"username": "mcp_user",
"password": "changeme",
"db_owner": true,
"attributes": ["LOGIN"]
}
],
"services": [
{
"service_id": "mcp-server",
"service_type": "mcp",
"version": "latest",
"host_ids": ["host-1"],
"port": 8080,
"connect_as": "mcp_user",
"config": {
"llm_enabled": true,
"llm_provider": "ollama",