From 3aab4e6c285292902aedccf7d375315d29fe2a19 Mon Sep 17 00:00:00 2001 From: Benita Volkmann Date: Wed, 4 Feb 2026 19:18:03 +0200 Subject: [PATCH 01/11] First set up updates --- architecture/client-architecture.mdx | 1 + architecture/powersync-protocol.mdx | 1 + architecture/powersync-service.mdx | 44 ++++--- .../advanced/custom-types-arrays-and-json.mdx | 2 +- client-sdks/advanced/gis-data-postgis.mdx | 2 +- docs.json | 24 ++-- intro/examples.mdx | 1 + intro/powersync-overview.mdx | 1 + intro/setup-guide.mdx | 124 +++++++++++------- sync/overview.mdx | 51 +++---- sync/rules/data-queries.mdx | 2 +- sync/rules/organize-data-into-buckets.mdx | 2 +- sync/rules/overview.mdx | 21 +-- sync/rules/parameter-queries.mdx | 6 +- sync/{rules => }/supported-sql.mdx | 20 ++- sync/types.mdx | 10 +- 16 files changed, 185 insertions(+), 127 deletions(-) rename sync/{rules => }/supported-sql.mdx (92%) diff --git a/architecture/client-architecture.mdx b/architecture/client-architecture.mdx index 36588461..bad47514 100644 --- a/architecture/client-architecture.mdx +++ b/architecture/client-architecture.mdx @@ -1,5 +1,6 @@ --- title: "Client Architecture" +description: Learn how the PowerSync Client SDK manages connections, authentication, and the local SQLite database. --- The [PowerSync Client SDK](/client-sdks/overview) is embedded into a software application. diff --git a/architecture/powersync-protocol.mdx b/architecture/powersync-protocol.mdx index 8408fe73..81e7f5cb 100644 --- a/architecture/powersync-protocol.mdx +++ b/architecture/powersync-protocol.mdx @@ -1,5 +1,6 @@ --- title: "PowerSync Protocol" +description: Overview of the sync protocol used between PowerSync clients and the PowerSync Service for efficient delta syncing. --- This contains a broad overview of the sync protocol used between PowerSync clients and the [PowerSync Service](/architecture/powersync-service). diff --git a/architecture/powersync-service.mdx b/architecture/powersync-service.mdx index 5a89e146..66b58d04 100644 --- a/architecture/powersync-service.mdx +++ b/architecture/powersync-service.mdx @@ -1,28 +1,43 @@ --- title: "PowerSync Service" +description: Understand the PowerSync Service architecture, including the bucket system, data replication, and real-time streaming sync. --- -When we say "PowerSync instance" we are referring to an instance of the [PowerSync Service](https://github.com/powersync-ja/powersync-service), which is the server-side component of the sync engine responsible for the _read path_ from the source database to client-side SQLite databases: The primary purposes of the PowerSync Service are (1) replicating data from your source database (Postgres, MongoDB, MySQL, SQL Server), and (2) streaming data to clients. Both of these happen based on your _Sync Rules_ or _Sync Streams_ configuration. +When we say "PowerSync instance" we are referring to an instance of the [PowerSync Service](https://github.com/powersync-ja/powersync-service), which is the server-side component of the sync engine responsible for the _read path_ from the source database to client-side SQLite databases: The primary purposes of the PowerSync Service are (1) replicating data from your source database (Postgres, MongoDB, MySQL, SQL Server), and (2) streaming data to clients. Both of these happen based on your [Sync Streams](/sync/streams/overview) configuration (or legacy [Sync Rules](/sync/rules/overview)). ## Bucket System The concept of _buckets_ is core to PowerSync and its scalability. 
-_Buckets_ are basically partitions of data that allows the PowerSync Service to efficiently query the correct data that a specific client needs to sync. +_Buckets_ are basically partitions of data that allow the PowerSync Service to efficiently query the correct data that a specific client needs to sync. -When you define [Sync Rules](/sync/rules/overview), you define the different buckets that exist, and you define which [parameters](/sync/rules/parameter-queries) are used for each bucket. + + + With [Sync Streams](/sync/streams/overview), buckets are created **implicitly** based on your stream definitions, their queries, and subqueries. You don't need to understand or manage buckets directly — the PowerSync Service handles this automatically. -**Sync Streams: Implicit Buckets**: In our new [Sync Streams](/sync/streams) system which is in [early alpha](/sync/overview), buckets and parameters are not explicitly defined, and are instead implicit based on the streams, their queries and subqueries. + For example, if you define a stream like: + ```yaml + streams: + user_lists: + query: SELECT * FROM lists WHERE owner_id = auth.user_id() + ``` + + PowerSync automatically creates the appropriate buckets internally based on the query parameters. + + + With legacy [Sync Rules](/sync/rules/overview), you explicitly define the buckets using `bucket_definitions` and specify which [parameters](/sync/rules/parameter-queries) are used for each bucket. + + -For example, let's say you have a bucket named `user_todo_lists` that contains the to-do lists for a user, and that bucket utilizes a `user_id` parameter (which will be embedded in the JWT) to scope those to-do lists. +### How Buckets Work -Now let's say users with IDs `1`, `2` and `3` exist in the source database. PowerSync will then replicate data from the source database and create individual buckets with bucket IDs of `user_todo_lists["1"]`, `user_todo_lists["2"]` and `user_todo_lists["3"]`. +To understand how buckets enable efficient syncing, consider this example: Let's say you have data scoped to users — the to-do lists for each user. PowerSync will create individual buckets for each user, such as buckets with IDs `user_todo_lists["1"]`, `user_todo_lists["2"]`, and `user_todo_lists["3"]` for users with IDs `1`, `2`, and `3`. -If a user with `user_id=1` in its JWT connects to the PowerSync Service and syncs data, PowerSync can very efficiently look up the appropriate bucket to sync, i.e. `user_todo_lists["1"]`. +When a user with `user_id=1` in their JWT connects to the PowerSync Service, PowerSync can very efficiently look up the appropriate bucket to sync, i.e. `user_todo_lists["1"]`. -As you can see above, a bucket's definition name and set of parameter values together form its _bucket ID_, for example `user_todo_lists["1"]`. If a bucket makes use of multiple parameters, they are comma-separated in the bucket ID, for example `user_todos["user1","admin"]` +Internally, a bucket ID is formed from the bucket definition name and its parameter values, for example `user_todo_lists["1"]`. With Sync Streams, these bucket IDs are generated automatically based on your stream queries — you don't need to manage them directly. @@ -41,7 +56,7 @@ Each bucket stores the _recent history_ of operations on each @@ -61,13 +76,13 @@ As mentioned above, one of the primary purposes of the PowerSync Service is repl When the PowerSync Service replicates data from the source database, it: -1. 
Pre-processes the data according to the [Sync Rules](/sync/rules/overview) or [Sync Streams](/sync/streams/overview), splitting data into _buckets_ (as explained above) and transforming the data if required. +1. Pre-processes the data according to your [Sync Streams](/sync/streams/overview) (or [Sync Rules](/sync/rules/overview)), splitting data into _buckets_ (as explained above) and transforming the data if required. 2. Persists each operation into the relevant buckets, ready to be streamed to clients. ### Initial Replication vs. Incremental Replication -Whenever a new version of Sync Rules or Sync Streams are deployed, initial replication takes place by means of taking a snapshot of all tables/collections referenced in the Sync Rules / Streams. +Whenever a new version of Sync Streams (or Sync Rules) is deployed, initial replication takes place by means of taking a snapshot of all tables/collections referenced in the configuration. After that, data is incrementally replicated using a change data capture stream (the specific mechanism depends on the source database type: Postgres logical replication, MongoDB change streams, the MySQL binlog, or SQL Server Change Data Capture). @@ -78,7 +93,7 @@ As mentioned above, the other primary purpose of the PowerSync Service is stream The PowerSync Service authenticates clients/users using [JWTs](/configuration/auth/overview). Once a client/user is authenticated: -1. The PowerSync Service calculates a list of buckets for the user to sync using [Parameter Queries](/sync/rules/parameter-queries). +1. The PowerSync Service calculates a list of buckets for the user to sync based on their Sync Stream subscriptions (or Parameter Queries in legacy Sync Rules). 2. The Service streams any operations added to those buckets since the last time the client/user connected. The Service then continuously monitors for buckets that are added or removed, as well as for new operations within those buckets, and streams those changes. @@ -92,6 +107,5 @@ For more details on exactly how streaming sync works, see [PowerSync Protocol](/ The repo for the PowerSync Service can be found here: - - + diff --git a/client-sdks/advanced/custom-types-arrays-and-json.mdx b/client-sdks/advanced/custom-types-arrays-and-json.mdx index 0751c138..d4354524 100644 --- a/client-sdks/advanced/custom-types-arrays-and-json.mdx +++ b/client-sdks/advanced/custom-types-arrays-and-json.mdx @@ -207,7 +207,7 @@ bucket_definitions: ``` -See these additional details when using the `IN` operator: [Operators](/sync/rules/supported-sql#operators) +See these additional details when using the `IN` operator: [Operators](/sync/supported-sql#operators) ### Client SDK diff --git a/client-sdks/advanced/gis-data-postgis.mdx b/client-sdks/advanced/gis-data-postgis.mdx index e39852f5..77a214a6 100644 --- a/client-sdks/advanced/gis-data-postgis.mdx +++ b/client-sdks/advanced/gis-data-postgis.mdx @@ -115,7 +115,7 @@ The data looks exactly how it’s stored in the Postgres database i.e. Example use case: Extract x (long) and y (lat) values from a PostGIS type, to use these values independently in an application. -Currently, PowerSync supports the following functions that can be used when selecting data in your Sync Rules: [Operators and Functions](/sync/rules/supported-sql#functions) +Currently, PowerSync supports the following functions that can be used when selecting data in your Sync Rules: [Operators and Functions](/sync/supported-sql#functions) 1. `ST_AsGeoJSON` 2. 
`ST_AsText` diff --git a/docs.json b/docs.json index c59340fa..d786e0a1 100644 --- a/docs.json +++ b/docs.json @@ -163,29 +163,29 @@ ] }, { - "group": "Sync Rules & Streams", + "group": "Sync Streams & Sync Rules", "icon": "arrows-rotate", "pages": [ "sync/overview", { - "group": "Sync Rules (GA)", + "group": "Sync Streams (Beta)", + "pages": [ + "sync/streams/overview" + ] + }, + { + "group": "Sync Rules (Legacy)", "pages": [ "sync/rules/overview", "sync/rules/organize-data-into-buckets", "sync/rules/global-buckets", "sync/rules/parameter-queries", "sync/rules/data-queries", - "sync/rules/supported-sql", "sync/rules/client-parameters" ] }, - { - "group": "Sync Streams (Early Alpha)", - "pages": [ - "sync/streams/overview" - ] - }, "sync/types", + "sync/supported-sql", { "group": "Advanced", "pages": [ @@ -728,7 +728,11 @@ }, { "source": "/usage/sync-rules/operators-and-functions", - "destination": "/sync/rules/supported-sql" + "destination": "/sync/supported-sql" + }, + { + "source": "/sync/rules/supported-sql", + "destination": "/sync/supported-sql" }, { "source": "/usage/sync-rules/advanced-topics", diff --git a/intro/examples.mdx b/intro/examples.mdx index 75533b89..1106cd17 100644 --- a/intro/examples.mdx +++ b/intro/examples.mdx @@ -1,6 +1,7 @@ --- title: "Demo Apps & Example Projects" sidebarTitle: "Examples" +description: Explore demo apps and example projects to see PowerSync in action across different platforms and backends. --- The best way to understand how PowerSync works is to explore it hands-on. Browse our collection of demo apps and example projects to see PowerSync in action, experiment with different features, or use as a reference for your own app. diff --git a/intro/powersync-overview.mdx b/intro/powersync-overview.mdx index 24678c59..6d5eead7 100644 --- a/intro/powersync-overview.mdx +++ b/intro/powersync-overview.mdx @@ -1,6 +1,7 @@ --- title: PowerSync Docs sidebarTitle: Introduction +description: PowerSync is a sync engine that keeps backend databases in sync with client-side SQLite for real-time, offline-first apps. --- import ClientSdks from '/snippets/client-sdks.mdx'; diff --git a/intro/setup-guide.mdx b/intro/setup-guide.mdx index e64b6727..9485c765 100644 --- a/intro/setup-guide.mdx +++ b/intro/setup-guide.mdx @@ -186,7 +186,7 @@ PowerSync is available as a cloud-hosted service (PowerSync Cloud) or can be sel Self-hosted PowerSync runs via Docker. - Below is a minimal example of setting up the PowerSync Service with Postgres as the bucket storage database and example Sync Rules. MongoDB is also supported as a bucket storage database (docs are linked at the end of this step), and you will learn more about Sync Rules in a next step. + Below is a minimal example of setting up the PowerSync Service with Postgres as the bucket storage database and example Sync Streams. MongoDB is also supported as a bucket storage database (docs are linked at the end of this step), and you will learn more about Sync Streams in a later step. ```bash # 1. 
Create a directory for your config @@ -235,14 +235,18 @@ PowerSync is available as a cloud-hosted service (PowerSync Cloud) or can be sel uri: postgresql://powersync_storage_user:my_secure_user_password@powersync-postgres-storage:5432/powersync_storage sslmode: disable # Use 'disable' only for local/private networks - # Sync Rules (defined in a later step) - sync_rules: + # Sync Streams (defined in a later step) + sync_config: content: | - bucket_definitions: - global: - data: - - SELECT * FROM lists - - SELECT * FROM todos + config: + edition: 2 + streams: + all_lists: + query: SELECT * FROM lists + auto_subscribe: true + all_todos: + query: SELECT * FROM todos + auto_subscribe: true ``` @@ -345,85 +349,105 @@ The next step is to connect your PowerSync Service instance to your source datab -# 4. Define Basic Sync Rules +# 4. Define Sync Streams -Sync Rules control which data gets synced to which users/devices. They consist of SQL-like queries organized into "buckets" (groupings of data). Each PowerSync Service instance has a Sync Rules definition in YAML format. +Sync Streams control which data gets synced to which users/devices. They use SQL-like queries to define what data to sync. Each PowerSync Service instance has a Sync Streams definition in YAML format. -We recommend starting with a simple **global bucket** that syncs data to all users. This is the simplest way to get started. +We recommend starting with simple **auto-subscribed streams** that sync data to all users by default. This is the simplest way to get started. ```yaml Postgres Example - bucket_definitions: - global: - data: - - SELECT * FROM todos - - SELECT * FROM lists WHERE archived = false + config: + edition: 2 + streams: + all_todos: + query: SELECT * FROM todos + auto_subscribe: true + unarchived_lists: + query: SELECT * FROM lists WHERE archived = false + auto_subscribe: true ``` ```yaml MongoDB Example - bucket_definitions: - global: - data: + config: + edition: 2 + streams: + all_lists: # Note that MongoDB uses “_id” as the name of the ID field in collections whereas # PowerSync uses “id” in its client-side database. This is why the below syntax - # should always be used in the data queries when pairing PowerSync with MongoDB. - - SELECT _id as id, * FROM lists - - SELECT _id as id, * FROM todos WHERE archived = false + # should always be used in queries when pairing PowerSync with MongoDB. + query: SELECT _id as id, * FROM lists + auto_subscribe: true + unarchived_todos: + query: SELECT _id as id, * FROM todos WHERE archived = false + auto_subscribe: true ``` ```yaml MySQL Example - bucket_definitions: - global: - data: - - SELECT * FROM todos - - SELECT * FROM lists WHERE archived = 0 + config: + edition: 2 + streams: + all_todos: + query: SELECT * FROM todos + auto_subscribe: true + unarchived_lists: + query: SELECT * FROM lists WHERE archived = 0 + auto_subscribe: true ``` ```yaml SQL Server Example - bucket_definitions: - global: - data: - - SELECT * FROM todos - - SELECT * FROM lists WHERE archived = 0 + config: + edition: 2 + streams: + all_todos: + query: SELECT * FROM todos + auto_subscribe: true + unarchived_lists: + query: SELECT * FROM lists WHERE archived = 0 + auto_subscribe: true ``` -### Deploy Sync Rules +### Deploy Sync Streams In the [PowerSync Dashboard](https://dashboard.powersync.com/): 1. Select your project and instance - 2. Go to the **Sync Rules** view + 2. Go to the **Sync Streams** view 3. Edit the YAML directly in the dashboard - 4. 
Click **Deploy** to validate and deploy your Sync Rules + 4. Click **Deploy** to validate and deploy your Sync Streams Add to your `config.yaml`: ```yaml - sync_rules: + sync_config: content: | - bucket_definitions: - global: - data: - - SELECT * FROM todos - - SELECT * FROM lists WHERE archived = false + config: + edition: 2 + streams: + all_todos: + query: SELECT * FROM todos + auto_subscribe: true + unarchived_lists: + query: SELECT * FROM lists WHERE archived = false + auto_subscribe: true ``` - **Note**: Table/collection names within your Sync Rules must match the table names defined in your client-side schema (defined in a later step below). + **Note**: Table/collection names within your Sync Streams must match the table names defined in your client-side schema (defined in a later step below). **Learn More** - For more details on Sync Rules usage, see the [Sync Rules documentation](/sync/rules/overview). + For more details on Sync Streams usage, see the [Sync Streams documentation](/sync/streams/overview). @@ -432,7 +456,7 @@ We recommend starting with a simple **global bucket** that syncs data to all use For quick development and testing, you can generate a temporary development token instead of implementing full authentication. You'll use this token for two purposes: -- **Testing with the _Sync Diagnostics Client_** (in the next step) to verify your setup and Sync Rules +- **Testing with the _Sync Diagnostics Client_** (in the next step) to verify your setup and Sync Streams - **Connecting your app** (in a later step) to test the client SDK integration @@ -441,7 +465,7 @@ You'll use this token for two purposes: 2. Go to the **Client Auth** view 3. Check the **Development tokens** setting and save your changes 4. Click the **Connect** button in the top bar - 5. **Enter token subject**: Since you're starting with just a simple global bucket in your Sync Rules that syncs all data to all users (as we recommended in the previous step), you can just put something like `test-user` as the token subject (which would normally be the user ID you want to test with). + 5. **Enter token subject**: Since you're starting with simple auto-subscribed Sync Streams that sync all data to all users (as we recommended in the previous step), you can just put something like `test-user` as the token subject (which would normally be the user ID you want to test with). 6. Click **Generate token** and copy the token @@ -493,7 +517,7 @@ The Sync Diagnostics Client will connect to your PowerSync Service instance and **Checkpoint:** - Inspect your global bucket and synced tables in the Sync Diagnostics Client — these should match the Sync Rules you [defined previously](#4-define-basic-sync-rules). This confirms your setup is working correctly before integrating the client SDK into your app. + Inspect your synced tables in the Sync Diagnostics Client — these should match the Sync Streams you [defined previously](#4-define-sync-streams). This confirms your setup is working correctly before integrating the client SDK into your app. # 7. Use the Client SDK @@ -544,7 +568,7 @@ import SdkClientSideSchema from '/snippets/sdk-client-side-schema.mdx'; -_PowerSync Cloud:_ The easiest way to generate your schema is using the [PowerSync Dashboard](https://dashboard.powersync.com/). Click the **Connect** button in the top bar to generate the client-side schema based on your Sync Rules in your preferred language. 
+_PowerSync Cloud:_ The easiest way to generate your schema is using the [PowerSync Dashboard](https://dashboard.powersync.com/). Click the **Connect** button in the top bar to generate the client-side schema based on your Sync Streams in your preferred language. Here's an example schema for a simple `todos` table: @@ -559,12 +583,12 @@ import SdkSchemaExamples from '/snippets/sdk-schema-examples.mdx'; **Learn More** - The client-side schema uses three column types: `text`, `integer`, and `real`. These map directly to values from your Sync Rules and are automatically cast if needed. For details on how backend database types map to SQLite types, see [Types](/sync/types). + The client-side schema uses three column types: `text`, `integer`, and `real`. These map directly to values from your Sync Streams and are automatically cast if needed. For details on how backend database types map to SQLite types, see [Types](/sync/types). ### Instantiate the PowerSync Database -Now that you have your client-side schema defined, instantiate the PowerSync database in your app. This creates the client-side SQLite database that will be kept in sync with your source database based on your Sync Rules configuration. +Now that you have your client-side schema defined, instantiate the PowerSync database in your app. This creates the client-side SQLite database that will be kept in sync with your source database based on your Sync Streams configuration. import SdkInstantiateDbExamples from '/snippets/sdk-instantiate-db-examples.mdx'; @@ -1050,7 +1074,7 @@ For production deployments, you'll need to: ### Additional Resources -- Learn more about [Sync Rules](/sync/rules/overview) for advanced data filtering +- Learn more about [Sync Streams](/sync/streams/overview) for advanced data filtering and on-demand syncing - Explore [Live Queries / Watch Queries](/client-sdks/watch-queries) for reactive UI updates - Check out [Example Projects](/intro/examples) for complete implementations - Review the [Client SDK References](/client-sdks/overview) for client-side platform-specific details diff --git a/sync/overview.mdx b/sync/overview.mdx index 7146cff5..405f7163 100644 --- a/sync/overview.mdx +++ b/sync/overview.mdx @@ -1,59 +1,60 @@ --- -title: "Sync Rules & Sync Streams" +title: "Sync Streams & Sync Rules" sidebarTitle: "Overview" +description: Learn how Sync Streams and Sync Rules enable partial sync to control which data gets synchronized to each client. --- -PowerSync Sync Rules and Sync Streams allow developers to control which data gets synchronized to which clients/devices (i.e. they enable dynamic partial replication). +PowerSync Sync Streams and the legacy Sync Rules allow developers to control which data gets synchronized to which clients/devices (i.e. they enable partial sync). -## Sync Rules (GA/Stable) +## Sync Streams (Beta) — Recommended -Sync Rules is the current generally-available / stable approach to use, that is production-ready: - - - - - -## Sync Streams (Early Alpha) - -[Sync Streams](/sync/streams/overview) are now available in early alpha! Sync Streams will eventually replace Sync Rules and are designed to allow for more dynamic on-demand syncing, while not compromising on the "sync data upfront" strengths of PowerSync for offline-first architecture use cases. +[Sync Streams](/sync/streams/overview) are now in beta and production-ready! We recommend Sync Streams for all new projects. 
Sync Streams will eventually replace Sync Rules and are designed to allow for more dynamic on-demand syncing, while not compromising on the "sync data upfront" strengths of PowerSync for offline-first architecture use cases. Key improvements in Sync Streams over Sync Rules include: - **On-demand syncing**: You define Sync Streams on the PowerSync Service, and a client can then subscribe to them one or more times with different parameters, on-demand. You still have the option of auto-subscribing streams when a client connects, for "sync data upfront" behavior. - **Temporary caching-like behavior**: Each subscription includes a configurable TTL that keeps data active after the client unsubscribes, acting as a warm cache for re-subscribing. - **Simpler developer experience**: Simplified syntax and mental model, and capabilities such as your UI components automatically managing subscriptions (for example, React hooks). -We encourage you to explore Sync Streams, and once they're in Beta, migrating existing projects: + + - - +## Sync Rules (Legacy) + +Sync Rules is the legacy approach for controlling data sync. It remains available and supported for existing projects: + + + + +If you're currently using Sync Rules and want to migrate to Sync Streams, see our migration guide (coming soon). + ## How It Works You may also find it useful to look at the [PowerSync Service architecture](/architecture/powersync-service) for background. -Each [PowerSync Service](/architecture/powersync-service) instance has a deployed _Sync Rules_ or _Sync Streams_ configuration. This takes the form of a YAML file which contains: -- **In the case of Sync Rules:** Definitions of the different [buckets](/architecture/powersync-service#bucket-system) that exist, with SQL-like queries to specify the parameters used by each bucket (if any), as well as the data contained in each bucket. +Each [PowerSync Service](/architecture/powersync-service) instance has a deployed _Sync Streams_ or _Sync Rules_ configuration. This takes the form of a YAML file which contains: - **In the case of Sync Streams:** Definitions of the streams that exist, with a SQL-like query (which can also contain limited subqueries), which defines the data in the stream, and references the necessary parameters. +- **In the case of Sync Rules:** Definitions of the different [buckets](/architecture/powersync-service#bucket-system) that exist, with SQL-like queries to specify the parameters used by each bucket (if any), as well as the data contained in each bucket. -A _parameter_ is a value that can be used in Sync Rules or Streams to create dynamic sync behavior for each user/client. Each client syncs only the relevant [_buckets_](/architecture/powersync-service#bucket-system) based on the parameters for that client. -* Sync Rules can make use of _authentication parameters_ from the JWT token (such as the user ID or other JWT claims), as well [_client parameters_](/sync/rules/client-parameters) (passed directly from the client when it connects to the PowerSync Service). -* Sync Streams can similarly make use of _authentication parameters_ from the JWT token, _connection parameters_ (the equivalent of _client parameters_, specified at connection), and _subscription parameters_ (specified by the client when it subscribes to a stream at any time). See details [here](/sync/streams/overview#accessing-parameters). +A _parameter_ is a value that can be used in Sync Streams or Sync Rules to create dynamic sync behavior for each user/client. 
Each client syncs only the relevant [_buckets_](/architecture/powersync-service#bucket-system) based on the parameters for that client. +* Sync Streams can make use of _authentication parameters_ from the JWT token (such as the user ID or other JWT claims), _connection parameters_ (specified at connection), and _subscription parameters_ (specified by the client when it subscribes to a stream at any time). See details [here](/sync/streams/overview#accessing-parameters). +* Sync Rules can make use of _authentication parameters_ from the JWT token, as well as [_client parameters_](/sync/rules/client-parameters) (passed directly from the client when it connects to the PowerSync Service). -It is also possible to have buckets with no parameters. These sync to all users/clients and we refer to them as "Global Buckets". +It is also possible to have buckets/streams with no parameters. These sync to all users/clients. The concept of _buckets_ is core to PowerSync and key to its performance and scalability. The [PowerSync Service architecture overview](/architecture/powersync-service) provides more background on this. -* In the _Sync Rules_ system, buckets and their parameters are [explicitly defined](/sync/rules/overview#bucket-definition). -* In our new _Sync Streams_ system which is in early alpha, buckets and parameters are not explicitly defined, and are instead implicit based on the streams, their queries and subqueries. +* In _Sync Streams_, buckets and parameters are implicit — they are automatically created based on the streams, their queries and subqueries. You don't need to manage buckets directly. +* In legacy _Sync Rules_, buckets and their parameters are [explicitly defined](/sync/rules/overview#bucket-definition). -There are limitations on the SQL syntax and functionality that is supported in the Sync Rules and Sync Streams. For Sync Rules, details and limitations are documented at [Supported SQL](/sync/rules/supported-sql). +There are limitations on the SQL syntax and functionality that is supported in Sync Streams and Sync Rules. See [Supported SQL](/sync/supported-sql) for details and limitations. -In addition to filtering data based on parameters, Sync Rules and Sync Streams also enable: +In addition to filtering data based on parameters, Sync Streams and Sync Rules also enable: * Selecting only specific tables/collections and columns/fields to sync. * Filtering data based on static conditions. diff --git a/sync/rules/data-queries.mdx b/sync/rules/data-queries.mdx index e436386f..c673cd93 100644 --- a/sync/rules/data-queries.mdx +++ b/sync/rules/data-queries.mdx @@ -18,7 +18,7 @@ Data Queries are used to group data into buckets, so each Data Query must use ev ## Supported SQL -The supported SQL in Data Queries is based on a small subset of the SQL standard syntax. Not all SQL constructs are supported. See [Supported SQL](/sync/rules/supported-sql) for full details. +The supported SQL in Data Queries is based on a small subset of the SQL standard syntax. Not all SQL constructs are supported. See [Supported SQL](/sync/supported-sql) for full details. ## Examples diff --git a/sync/rules/organize-data-into-buckets.mdx b/sync/rules/organize-data-into-buckets.mdx index e5feba62..eb112342 100644 --- a/sync/rules/organize-data-into-buckets.mdx +++ b/sync/rules/organize-data-into-buckets.mdx @@ -43,7 +43,7 @@ bucket_definitions: - The supported SQL in _Parameter Queries_ and _Data Queries_ is based on a small subset of the SQL standard syntax. Not all SQL constructs are supported. 
See [Supported SQL](/sync/rules/supported-sql). + The supported SQL in _Parameter Queries_ and _Data Queries_ is based on a small subset of the SQL standard syntax. Not all SQL constructs are supported. See [Supported SQL](/sync/supported-sql). diff --git a/sync/rules/overview.mdx b/sync/rules/overview.mdx index 73db1f24..fc5f88ed 100644 --- a/sync/rules/overview.mdx +++ b/sync/rules/overview.mdx @@ -1,15 +1,18 @@ --- -title: "Sync Rules" +title: "Sync Rules (Legacy)" sidebarTitle: "Overview & Key Concepts" +description: Understand Sync Rules, the legacy mechanism for controlling data sync with explicit bucket definitions and parameter queries. --- -PowerSync Sync Rules is the current generally-available/stable/production-ready mechanism to control which data gets synchronized to which clients/devices (i.e. they enable _dynamic partial replication_). +PowerSync Sync Rules is the legacy mechanism to control which data gets synchronized to which clients/devices (i.e. they enable _partial sync_). - -**Sync Streams Available in Early Alpha** + +**Sync Streams Recommended** -[Sync Streams](/sync/streams/overview) are now available in early alpha! Sync Streams will eventually replace Sync Rules and are designed to allow for more dynamic syncing, while not compromising on existing offline-first capabilities. See the [Overview](/sync/overview) page for more details. - +[Sync Streams](/sync/streams/overview) are now in beta and production-ready. We recommend Sync Streams for all new projects — they offer a simpler developer experience, on-demand syncing with subscription parameters, and caching-like behavior with TTL. Sync Rules remain supported for existing projects. + +See the [Sync Streams documentation](/sync/streams/overview) to get started. + Sync Rules are defined in a YAML file. For PowerSync Cloud, they are edited and deployed to a specific PowerSync instance in the [PowerSync Dashboard](/tools/powersync-dashboard#project-&-instance-level). For self-hosting setups, they are defined as part of your [instance configuration](/configuration/powersync-service/self-hosted-instances). @@ -49,7 +52,7 @@ The following values can be selected in Parameter Queries: - **Client Parameters** (see below) - **Values From a Table/Collection** (see below) -See [Parameter Queries](/sync/rules/parameter-queries) for more details and examples. Also see [Supported SQL](/sync/rules/supported-sql) for limitations. +See [Parameter Queries](/sync/rules/parameter-queries) for more details and examples. Also see [Supported SQL](/sync/supported-sql) for limitations. ### Authentication Parameters @@ -69,7 +72,7 @@ Clients can specify **Client Parameters** when connecting to PowerSync (i.e. whe ```yaml Example of selecting a Client Parameter in a Parameter Query parameters: SELECT (request.parameters() ->> 'current_project') as current_project ``` -The `->>` operator in the above example extracts a value from a string containing JSON (which is the format provided by ``request.parameters()``). See [Operators and Functions](/sync/rules/supported-sql#operators) +The `->>` operator in the above example extracts a value from a string containing JSON (which is the format provided by ``request.parameters()``). See [Operators and Functions](/sync/supported-sql#operators) A client can pass any value for a Client Parameter. Hence, Client Parameters should always be treated with care, and should [not be used](/sync/rules/client-parameters#security-consideration) for access control purposes. 
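For context, the client supplies these values when it connects — a minimal sketch, assuming the JavaScript SDK's `params` option on `connect()` (the option name may differ per SDK; `db` and `MyConnector` are placeholders for your database instance and backend connector):

```js
// Illustrative only: pass a client parameter when connecting.
// The value is then available in parameter queries via request.parameters() ->> 'current_project'.
await db.connect(new MyConnector(), {
  params: { current_project: 'project-123' },
});
```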
@@ -100,7 +103,7 @@ data: - SELECT * FROM lists WHERE owner_id = bucket.user_id ``` -See [Data Queries](/sync/rules/data-queries) for more details and examples. Also see [Supported SQL](/sync/rules/supported-sql) for limitations. +See [Data Queries](/sync/rules/data-queries) for more details and examples. Also see [Supported SQL](/sync/supported-sql) for limitations. ### Global Buckets diff --git a/sync/rules/parameter-queries.mdx b/sync/rules/parameter-queries.mdx index 97eae31d..bf723370 100644 --- a/sync/rules/parameter-queries.mdx +++ b/sync/rules/parameter-queries.mdx @@ -26,7 +26,7 @@ The following functions allow you to select Authentication Parameters in your Pa | `request.user_id()` | Returns the JWT subject (`sub`). Same as `request.jwt() ->> 'sub'` (see below) | | `request.jwt()` | Returns the entire (signed) JWT payload as a JSON string. If there are other _claims_ in your JWT (in addition to the user ID), you can select them from this JSON string. | -Since `request.jwt()` is a string containing JSON, use the `->>` [operator](/sync/rules/supported-sql#operators) to select values from it: +Since `request.jwt()` is a string containing JSON, use the `->>` [operator](/sync/supported-sql#operators) to select values from it: ```sql request.jwt() ->> 'sub' -- the 'subject' of the JWT - same as `request.user_id() @@ -118,7 +118,7 @@ bucket_definitions: ## Supported SQL -The supported SQL in Parameter Queries is based on a small subset of the SQL standard syntax. Not all SQL constructs are supported. See [Supported SQL](/sync/rules/supported-sql) for full details. +The supported SQL in Parameter Queries is based on a small subset of the SQL standard syntax. Not all SQL constructs are supported. See [Supported SQL](/sync/supported-sql) for full details. ## Usage Examples @@ -202,7 +202,7 @@ For more advanced details on many-to-many relationships and join tables, see [th ### Expanding JSON Array Into Multiple Parameters -Using the `json_each()` [function](/sync/rules/supported-sql#functions) and `->` [operator](/sync/rules/supported-sql#operators), we can expand a parameter that is a JSON array into multiple rows, thereby filtering by multiple parameter values: +Using the `json_each()` [function](/sync/supported-sql#functions) and `->` [operator](/sync/supported-sql#operators), we can expand a parameter that is a JSON array into multiple rows, thereby filtering by multiple parameter values: ```yaml bucket_definitions: diff --git a/sync/rules/supported-sql.mdx b/sync/supported-sql.mdx similarity index 92% rename from sync/rules/supported-sql.mdx rename to sync/supported-sql.mdx index ad2609f6..dbed1ed9 100644 --- a/sync/rules/supported-sql.mdx +++ b/sync/supported-sql.mdx @@ -1,24 +1,32 @@ --- title: "Supported SQL" +description: SQL syntax, operators, and functions supported in Sync Streams and Sync Rules queries. --- -## Parameter Queries +This page documents the SQL syntax and functions supported in both Sync Streams and Sync Rules (legacy). + + +**Sync Streams** have some additional capabilities not available in Sync Rules, such as limited subqueries and `IN (SELECT ...)` syntax. See the [Sync Streams documentation](/sync/streams/overview) for details. + + +## Query Syntax The supported SQL is based on a small subset of the SQL standard syntax. Notable features and restrictions: 1. Only simple `SELECT` statements are supported. -2. No `JOIN`, `GROUP BY` or other aggregation, `ORDER BY`, `LIMIT`, or subqueries are supported. -3. 
For token parameters, only `=` operators are supported, and `IN` to a limited extent. -4. A limited set of operators and functions are supported — see below. +2. No `JOIN`, `GROUP BY` or other aggregation, `ORDER BY`, or `LIMIT` are supported in basic queries. +3. **Sync Streams**: Limited subqueries with `IN (SELECT ...)` are supported. +4. **Sync Rules**: No subqueries are supported. For token parameters, only `=` operators are supported, and `IN` to a limited extent. +5. A limited set of operators and functions are supported — see below. ## Operators and Functions -Operators and functions can be used to transform columns/fields before being synced to a client. +Operators and functions can be used to transform columns/fields before being synced to a client. These work the same in both Sync Streams and Sync Rules. -When filtering on parameters (token or [client parameters](/sync/rules/client-parameters) in the case of [parameter queries](/sync/rules/parameter-queries), and bucket parameters in the case of [data queries](/sync/rules/data-queries)), operators can only be used in a limited way. Typically only `=` , `IN` and `IS NULL` are allowed on the parameters, and special limits apply when combining clauses with `AND`, `OR` or `NOT`. +When filtering on parameters, operators can only be used in a limited way. Typically only `=`, `IN` and `IS NULL` are allowed on the parameters, and special limits apply when combining clauses with `AND`, `OR` or `NOT`. When transforming output columns/fields, or filtering on row/document values, those restrictions do not apply. diff --git a/sync/types.mdx b/sync/types.mdx index 8d415a37..21406c5b 100644 --- a/sync/types.mdx +++ b/sync/types.mdx @@ -33,8 +33,8 @@ Postgres types are mapped to SQLite types as follows: | interval | text | | | macaddr | text | | | inet | text | | -| bytea | blob | Cannot sync directly to client — convert to hex or base64 first. See [Operators & Functions](/sync/rules/supported-sql). | -| geometry (PostGIS) | text | hex string of the binary data Use the [ST functions](/sync/rules/supported-sql#functions) to convert to other formats | +| bytea | blob | Cannot sync directly to client — convert to hex or base64 first. See [Operators & Functions](/sync/supported-sql). | +| geometry (PostGIS) | text | hex string of the binary data Use the [ST functions](/sync/supported-sql#functions) to convert to other formats | | Arrays | text | JSON array. | | `DOMAIN` types | text / depends | Depending on [compatibility options](/sync/advanced/compatibility#custom-postgres-types), inner type or raw wire representation (legacy). | | Custom types | text | Dependig on [compatibility options](/sync/advanced/compatibility#custom-postgres-types), JSON object or raw wire representation (legacy). | @@ -65,7 +65,7 @@ MongoDB types are mapped to SQLite types as follows: | Boolean | integer | 1 for true, 0 for false | | Date | text | Format: `YYYY-MM-DD hh:mm:ss.sssZ` | | Null | null | | -| Binary | blob | Cannot sync directly to client — convert to hex or base64 first. See [Operators & Functions](/sync/rules/supported-sql). | +| Binary | blob | Cannot sync directly to client — convert to hex or base64 first. See [Operators & Functions](/sync/supported-sql). 
| | Regular Expression | text | JSON text in the format `{"pattern":"...","options":"..."}` | | Timestamp | integer | Converted to a 64-bit integer | | Undefined | null | | @@ -110,7 +110,7 @@ MySQL types are mapped to SQLite types as follows: Binary data can be accessed in the Sync Rules, but cannot be used as bucket parameters. Before it can be synced directly to clients it needs to be converted to hex or base64 first. - See [Operators & Functions](/sync/rules/supported-sql) + See [Operators & Functions](/sync/supported-sql) @@ -140,5 +140,5 @@ SQL Server types are mapped to SQLite types as follows: | User Defined Types: hiearchyid | blob | * See note below regarding binary types | - Binary data can be accessed in the Sync Rules, but cannot be used as bucket parameters. Before it can be synced directly to clients it needs to be converted to hex or Base64 first. See [Operators & Functions](/sync/rules/supported-sql) + Binary data can be accessed in the Sync Rules, but cannot be used as bucket parameters. Before it can be synced directly to clients it needs to be converted to hex or Base64 first. See [Operators & Functions](/sync/supported-sql) From 5da1c6945db5d40402086711f2301cb4c182239d Mon Sep 17 00:00:00 2001 From: Benita Volkmann Date: Wed, 4 Feb 2026 19:20:45 +0200 Subject: [PATCH 02/11] Redundant description --- intro/powersync-overview.mdx | 1 - 1 file changed, 1 deletion(-) diff --git a/intro/powersync-overview.mdx b/intro/powersync-overview.mdx index 6d5eead7..24678c59 100644 --- a/intro/powersync-overview.mdx +++ b/intro/powersync-overview.mdx @@ -1,7 +1,6 @@ --- title: PowerSync Docs sidebarTitle: Introduction -description: PowerSync is a sync engine that keeps backend databases in sync with client-side SQLite for real-time, offline-first apps. --- import ClientSdks from '/snippets/client-sdks.mdx'; From f269dd37981c809aff229abb6da969fe954b5afc Mon Sep 17 00:00:00 2001 From: Benita Volkmann Date: Wed, 4 Feb 2026 19:22:17 +0200 Subject: [PATCH 03/11] Sync Streams only in section heading --- docs.json | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs.json b/docs.json index d786e0a1..121116b4 100644 --- a/docs.json +++ b/docs.json @@ -163,7 +163,7 @@ ] }, { - "group": "Sync Streams & Sync Rules", + "group": "Sync Streams", "icon": "arrows-rotate", "pages": [ "sync/overview", From 608bee8897e1ef5463e4eb2b346780a9804994ae Mon Sep 17 00:00:00 2001 From: Benita Volkmann Date: Wed, 4 Feb 2026 19:29:03 +0200 Subject: [PATCH 04/11] Update Early alpha -> beta references --- resources/feature-status.mdx | 2 +- sync/rules/client-parameters.mdx | 8 ++++---- sync/streams/overview.mdx | 14 +++++++------- 3 files changed, 12 insertions(+), 12 deletions(-) diff --git a/resources/feature-status.mdx b/resources/feature-status.mdx index 0fed5a20..820efd09 100644 --- a/resources/feature-status.mdx +++ b/resources/feature-status.mdx @@ -52,7 +52,7 @@ Below is a summary of the current main PowerSync features and their release stat | | | | **PowerSync Service** | | | Enterprise Self-Hosted | Closed Alpha | -| Open Edition | Beta | +| Sync Streams | Beta | | Postgres Bucket Storage | V1 | | | | | **Client SDKs** | | diff --git a/sync/rules/client-parameters.mdx b/sync/rules/client-parameters.mdx index 2d72655a..3987799c 100644 --- a/sync/rules/client-parameters.mdx +++ b/sync/rules/client-parameters.mdx @@ -13,11 +13,11 @@ PowerSync already supports using **token parameters** in parameter queries. An e **Client parameters** are specified directly by the client (i.e. 
not through the JWT authentication token). The advantage of client parameters is that they give client-side control over what data to sync, and can therefore be used to further filter or limit synced data. A common use case is [lazy-loading](/client-sdks/infinite-scrolling#2-control-data-sync-using-client-parameters), where data is split into pages and a client parameter can be used to specify which page(s) to sync to a user, and this can update dynamically as the user paginates (or reaches the end of an infinite-scrolling feed). - - [Sync Streams](/sync/streams/overview) make it easier to use client parameters, especially for apps where parameters are managed across different UI components and tabs. + + [Sync Streams](/sync/streams/overview) make it easier to manage dynamic parameters, especially for apps where parameters are managed across different UI components and tabs. Sync Streams offer _subscription parameters_ (specified when subscribing to a stream) and _connection parameters_ (the equivalent of client parameters). - For new apps that require client parameters, we recommend using [Sync Streams](/sync/streams/overview) (Early Alpha). - + For new apps, we recommend using Sync Streams instead. + ### Usage diff --git a/sync/streams/overview.mdx b/sync/streams/overview.mdx index 5473950e..1c61870b 100644 --- a/sync/streams/overview.mdx +++ b/sync/streams/overview.mdx @@ -1,5 +1,5 @@ --- -title: "Sync Streams (Early Alpha)" +title: "Sync Streams (Beta)" description: Sync Streams will replace Sync Rules and are designed to allow for more dynamic syncing, while not compromising on existing offline-first capabilities. sidebarTitle: "Overview" --- @@ -25,15 +25,15 @@ Key improvements in Sync Streams over Sync Rules include: If you want “sync everything upfront” behavior (like the current Sync Rules system), that’s easy too: you can configure any of your Sync Streams to be auto-subscribed by the client on connecting. - -**Early Alpha Release** + +**Beta Release — Production-Ready** -Sync Streams will ultimately replace the current Sync Rules system. They are currently in an early alpha release, which of course means they're not yet suitable for production use, and the APIs and DX likely still need refinement. +Sync Streams are now in beta and production-ready. We recommend Sync Streams for all new projects. -They are open for anyone to test: we are actively seeking your feedback on their performance for your use cases, the developer experience, missing capabilities, and potential optimizations. Please share your feedback with us in Discord 🫡 +Sync Streams will ultimately replace the current Sync Rules system. Sync Rules will continue to be supported for existing projects, but we recommend migrating to Sync Streams. -Sync Streams will be supported alongside Sync Rules for the foreseeable future, although we recommend migrating to Sync Streams once in Beta. - +We welcome your feedback on Sync Streams — please share with us in [Discord](https://discord.gg/powersync). 
+ ## Requirements for Using Sync Streams From cff35dd6967efce0e10e2648aa9863ef7c9f5685 Mon Sep 17 00:00:00 2001 From: Benita Volkmann Date: Thu, 5 Feb 2026 13:32:19 +0200 Subject: [PATCH 05/11] Sync streams section --- docs.json | 7 +- intro/setup-guide.mdx | 234 ++++++---- sync/advanced/sharded-databases.mdx | 2 +- sync/overview.mdx | 10 +- sync/rules/client-parameters.mdx | 2 +- sync/rules/overview.mdx | 6 +- sync/streams/client-usage.mdx | 378 ++++++++++++++++ sync/streams/ctes.mdx | 6 + sync/streams/examples.mdx | 144 +++++++ sync/streams/migration.mdx | 258 +++++++++++ sync/streams/overview.mdx | 644 +++++++++++----------------- sync/streams/queries.mdx | 409 ++++++++++++++++++ 12 files changed, 1605 insertions(+), 495 deletions(-) create mode 100644 sync/streams/client-usage.mdx create mode 100644 sync/streams/ctes.mdx create mode 100644 sync/streams/examples.mdx create mode 100644 sync/streams/migration.mdx create mode 100644 sync/streams/queries.mdx diff --git a/docs.json b/docs.json index 121116b4..bff8946a 100644 --- a/docs.json +++ b/docs.json @@ -170,7 +170,12 @@ { "group": "Sync Streams (Beta)", "pages": [ - "sync/streams/overview" + "sync/streams/overview", + "sync/streams/queries", + "sync/streams/ctes", + "sync/streams/client-usage", + "sync/streams/examples", + "sync/streams/migration" ] }, { diff --git a/intro/setup-guide.mdx b/intro/setup-guide.mdx index 9485c765..7c117fff 100644 --- a/intro/setup-guide.mdx +++ b/intro/setup-guide.mdx @@ -312,7 +312,7 @@ The next step is to connect your PowerSync Service instance to your source datab # Note: 'disable' is only suitable for local/private networks, not for public networks ``` - ```yaml MongoDB +```yaml MongoDB replication: connections: - type: mongodb @@ -320,14 +320,14 @@ The next step is to connect your PowerSync Service instance to your source datab post_images: auto_configure ``` - ```yaml MySQL +```yaml MySQL replication: connections: - type: mysql uri: mysql://repl_user:password@host:3306/database ``` - ```yaml SQL Server +```yaml SQL Server replication: connections: - type: mssql @@ -338,7 +338,7 @@ The next step is to connect your PowerSync Service instance to your source datab pollingIntervalMs: 1000 pollingBatchSize: 20 ``` - + **Learn More** @@ -349,107 +349,163 @@ The next step is to connect your PowerSync Service instance to your source datab -# 4. Define Sync Streams +# 4. Define Sync Streams or Sync Rules + +PowerSync uses either **Sync Streams** or **Sync Rules** to control which data gets synced to which users/devices. Both use SQL-like queries defined in YAML format. -Sync Streams control which data gets synced to which users/devices. They use SQL-like queries to define what data to sync. Each PowerSync Service instance has a Sync Streams definition in YAML format. + + -We recommend starting with simple **auto-subscribed streams** that sync data to all users by default. This is the simplest way to get started. +Sync Streams are now in beta and production-ready. We recommend Sync Streams for new projects — they offer a simpler syntax and support on-demand syncing for web apps. 
+ +Start with simple **auto-subscribed streams** that sync data to all users by default: - ```yaml Postgres Example - config: - edition: 2 - streams: - all_todos: - query: SELECT * FROM todos - auto_subscribe: true - unarchived_lists: - query: SELECT * FROM lists WHERE archived = false - auto_subscribe: true - ``` +```yaml Postgres Example +config: + edition: 2 +streams: + all_todos: + query: SELECT * FROM todos + auto_subscribe: true + unarchived_lists: + query: SELECT * FROM lists WHERE archived = false + auto_subscribe: true +``` + +```yaml MongoDB Example +config: + edition: 2 +streams: + all_lists: + # MongoDB uses "_id" but PowerSync uses "id" on the client + query: SELECT _id as id, * FROM lists + auto_subscribe: true + unarchived_todos: + query: SELECT _id as id, * FROM todos WHERE archived = false + auto_subscribe: true +``` + +```yaml MySQL Example +config: + edition: 2 +streams: + all_todos: + query: SELECT * FROM todos + auto_subscribe: true + unarchived_lists: + query: SELECT * FROM lists WHERE archived = 0 + auto_subscribe: true +``` + +```yaml SQL Server Example +config: + edition: 2 +streams: + all_todos: + query: SELECT * FROM todos + auto_subscribe: true + unarchived_lists: + query: SELECT * FROM lists WHERE archived = 0 + auto_subscribe: true +``` + - ```yaml MongoDB Example - config: - edition: 2 - streams: - all_lists: - # Note that MongoDB uses “_id” as the name of the ID field in collections whereas - # PowerSync uses “id” in its client-side database. This is why the below syntax - # should always be used in queries when pairing PowerSync with MongoDB. - query: SELECT _id as id, * FROM lists - auto_subscribe: true - unarchived_todos: - query: SELECT _id as id, * FROM todos WHERE archived = false - auto_subscribe: true - ``` +**Learn more:** [Sync Streams documentation](/sync/streams/overview) - ```yaml MySQL Example - config: - edition: 2 - streams: - all_todos: - query: SELECT * FROM todos - auto_subscribe: true - unarchived_lists: - query: SELECT * FROM lists WHERE archived = 0 - auto_subscribe: true - ``` + - ```yaml SQL Server Example - config: - edition: 2 - streams: - all_todos: - query: SELECT * FROM todos - auto_subscribe: true - unarchived_lists: - query: SELECT * FROM lists WHERE archived = 0 - auto_subscribe: true - ``` - + +Sync Rules is the original, stable system for controlling data sync. Use this if you prefer a fully released (non-beta) solution. -### Deploy Sync Streams + +```yaml Postgres Example +bucket_definitions: + global: + data: + - SELECT * FROM todos + - SELECT * FROM lists WHERE archived = false +``` + +```yaml MongoDB Example +bucket_definitions: + global: + data: + # MongoDB uses "_id" but PowerSync uses "id" on the client + - SELECT _id as id, * FROM lists + - SELECT _id as id, * FROM todos WHERE archived = false +``` + +```yaml MySQL Example +bucket_definitions: + global: + data: + - SELECT * FROM todos + - SELECT * FROM lists WHERE archived = 0 +``` + +```yaml SQL Server Example +bucket_definitions: + global: + data: + - SELECT * FROM todos + - SELECT * FROM lists WHERE archived = 0 +``` + - - - In the [PowerSync Dashboard](https://dashboard.powersync.com/): +**Learn more:** [Sync Rules documentation](/sync/rules/overview) - 1. Select your project and instance - 2. Go to the **Sync Streams** view - 3. Edit the YAML directly in the dashboard - 4. 
Click **Deploy** to validate and deploy your Sync Streams - + + - - Add to your `config.yaml`: +### Deploy Your Configuration - ```yaml - sync_config: - content: | - config: - edition: 2 - streams: - all_todos: - query: SELECT * FROM todos - auto_subscribe: true - unarchived_lists: - query: SELECT * FROM lists WHERE archived = false - auto_subscribe: true - ``` - - + + +In the [PowerSync Dashboard](https://dashboard.powersync.com/): + +1. Select your project and instance +2. Go to the **Sync Streams** or **Sync Rules** view (depending on which you're using) +3. Edit the YAML directly in the dashboard +4. Click **Deploy** to validate and deploy + + + +Add a `sync_config` section to your `config.yaml`: + +**For Sync Streams:** +```yaml config.yaml +sync_config: + content: | + config: + edition: 2 + streams: + all_todos: + query: SELECT * FROM todos + auto_subscribe: true + unarchived_lists: + query: SELECT * FROM lists WHERE archived = false + auto_subscribe: true +``` + +**For Sync Rules:** +```yaml config.yaml +sync_config: + content: | + bucket_definitions: + global: + data: + - SELECT * FROM todos + - SELECT * FROM lists WHERE archived = false +``` + + - **Note**: Table/collection names within your Sync Streams must match the table names defined in your client-side schema (defined in a later step below). +Table/collection names in your configuration must match the table names defined in your client-side schema (defined in a later step below). - - **Learn More** - - For more details on Sync Streams usage, see the [Sync Streams documentation](/sync/streams/overview). - - # 5. Generate a Development Token @@ -944,7 +1000,7 @@ Read data using SQL queries. The data comes from your client-side SQLite databas // Call query.Dispose() to stop watching for updates query.Dispose(); ``` - + **Learn More** diff --git a/sync/advanced/sharded-databases.mdx b/sync/advanced/sharded-databases.mdx index 662f8f64..06f22967 100644 --- a/sync/advanced/sharded-databases.mdx +++ b/sync/advanced/sharded-databases.mdx @@ -22,7 +22,7 @@ Some specific scenarios: #### 1\. Different tables on different databases -This is common when separate "services" use separate databases, but multiple tables across those databases need to be synchronized to the same users. +This is common when separate "services" use separate databases, but multiple tables across those databases need to be synced to the same users. Use a single PowerSync Service instance, with a separate connection for each source database ([planned](https://roadmap.powersync.com/c/84-support-for-sharding-multiple-database-connections); this capability will be available in a future release). Use a unique [connection tag](/sync/advanced/schemas-and-connections) for each source database, allowing them to be distinguished in the Sync Rules. diff --git a/sync/overview.mdx b/sync/overview.mdx index 405f7163..45def1fc 100644 --- a/sync/overview.mdx +++ b/sync/overview.mdx @@ -1,15 +1,15 @@ --- title: "Sync Streams & Sync Rules" sidebarTitle: "Overview" -description: Learn how Sync Streams and Sync Rules enable partial sync to control which data gets synchronized to each client. +description: Learn how Sync Streams and Sync Rules enable partial sync to control which data syncs to each client. --- -PowerSync Sync Streams and the legacy Sync Rules allow developers to control which data gets synchronized to which clients/devices (i.e. they enable partial sync). 
+PowerSync Sync Streams and the legacy Sync Rules allow developers to control which data syncs to which clients/devices (i.e. they enable partial sync). ## Sync Streams (Beta) — Recommended -[Sync Streams](/sync/streams/overview) are now in beta and production-ready! We recommend Sync Streams for all new projects. Sync Streams will eventually replace Sync Rules and are designed to allow for more dynamic on-demand syncing, while not compromising on the "sync data upfront" strengths of PowerSync for offline-first architecture use cases. +[Sync Streams](/sync/streams/overview) are now in beta and production-ready. We recommend Sync Streams for all new projects, and encourage existing projects to [migrate](/sync/streams/migration). Sync Streams are designed to allow for more dynamic on-demand syncing, while not compromising on the "sync data upfront" strengths of PowerSync for offline-first architecture use cases. Key improvements in Sync Streams over Sync Rules include: - **On-demand syncing**: You define Sync Streams on the PowerSync Service, and a client can then subscribe to them one or more times with different parameters, on-demand. You still have the option of auto-subscribing streams when a client connects, for "sync data upfront" behavior. @@ -25,9 +25,7 @@ Sync Rules is the legacy approach for controlling data sync. It remains availabl - -If you're currently using Sync Rules and want to migrate to Sync Streams, see our migration guide (coming soon). - +If you're currently using Sync Rules and want to migrate to Sync Streams, see our [migration docs](/sync/streams/migration). ## How It Works diff --git a/sync/rules/client-parameters.mdx b/sync/rules/client-parameters.mdx index 3987799c..e001dc26 100644 --- a/sync/rules/client-parameters.mdx +++ b/sync/rules/client-parameters.mdx @@ -16,7 +16,7 @@ PowerSync already supports using **token parameters** in parameter queries. An e [Sync Streams](/sync/streams/overview) make it easier to manage dynamic parameters, especially for apps where parameters are managed across different UI components and tabs. Sync Streams offer _subscription parameters_ (specified when subscribing to a stream) and _connection parameters_ (the equivalent of client parameters). - For new apps, we recommend using Sync Streams instead. + We recommend Sync Streams for new projects, and [migrating](/sync/streams/migration) existing projects. ### Usage diff --git a/sync/rules/overview.mdx b/sync/rules/overview.mdx index fc5f88ed..29b69a5c 100644 --- a/sync/rules/overview.mdx +++ b/sync/rules/overview.mdx @@ -4,14 +4,14 @@ sidebarTitle: "Overview & Key Concepts" description: Understand Sync Rules, the legacy mechanism for controlling data sync with explicit bucket definitions and parameter queries. --- -PowerSync Sync Rules is the legacy mechanism to control which data gets synchronized to which clients/devices (i.e. they enable _partial sync_). +PowerSync Sync Rules is the legacy mechanism to control which data gets synced to which clients/devices (i.e. they enable _partial sync_). **Sync Streams Recommended** -[Sync Streams](/sync/streams/overview) are now in beta and production-ready. We recommend Sync Streams for all new projects — they offer a simpler developer experience, on-demand syncing with subscription parameters, and caching-like behavior with TTL. Sync Rules remain supported for existing projects. +[Sync Streams](/sync/streams/overview) are now in beta and production-ready. 
We recommend Sync Streams for all new projects — they offer a simpler developer experience, on-demand syncing with subscription parameters, and caching-like behavior with TTL. -See the [Sync Streams documentation](/sync/streams/overview) to get started. +Existing projects should [migrate to Sync Streams](/sync/streams/migration). Sync Rules remain supported but are considered legacy. Sync Rules are defined in a YAML file. For PowerSync Cloud, they are edited and deployed to a specific PowerSync instance in the [PowerSync Dashboard](/tools/powersync-dashboard#project-&-instance-level). For self-hosting setups, they are defined as part of your [instance configuration](/configuration/powersync-service/self-hosted-instances). diff --git a/sync/streams/client-usage.mdx b/sync/streams/client-usage.mdx new file mode 100644 index 00000000..010cb327 --- /dev/null +++ b/sync/streams/client-usage.mdx @@ -0,0 +1,378 @@ +--- +title: "Client-Side Usage" +description: Subscribe to Sync Streams from your client app, manage subscriptions, and track sync progress. +--- + +After defining your streams on the server, your client app subscribes to them to start syncing data. This page covers everything you need to use Sync Streams from your client code. + +## Quick Start + +The basic pattern is: **subscribe** to a stream, **wait** for data to sync, then **unsubscribe** when done. + + + +```js +// Subscribe to a stream with parameters +const sub = await db.syncStream('list_todos', { list_id: 'abc123' }).subscribe(); + +// Wait for initial data to sync +await sub.waitForFirstSync(); + +// Your data is now available - query it normally +const todos = await db.getAll('SELECT * FROM todos WHERE list_id = ?', ['abc123']); + +// When leaving the screen or component... +sub.unsubscribe(); +``` + + + +```dart +// Subscribe to a stream with parameters +final sub = await db.syncStream('list_todos', {'list_id': 'abc123'}).subscribe(); + +// Wait for initial data to sync +await sub.waitForFirstSync(); + +// Your data is now available - query it normally +final todos = await db.getAll('SELECT * FROM todos WHERE list_id = ?', ['abc123']); + +// When leaving the screen or component... +sub.unsubscribe(); +``` + + + +```kotlin +// Subscribe to a stream with parameters +val sub = database.syncStream("list_todos", mapOf("list_id" to JsonParam.String("abc123"))) + .subscribe() + +// Wait for initial data to sync +sub.waitForFirstSync() + +// Your data is now available - query it normally +val todos = database.getAll("SELECT * FROM todos WHERE list_id = ?", listOf("abc123")) + +// When leaving the screen or component... +sub.unsubscribe() +``` + + + +```swift +// Subscribe to a stream with parameters +let sub = try await db.syncStream("list_todos", ["list_id": "abc123"]).subscribe() + +// Wait for initial data to sync +try await sub.waitForFirstSync() + +// Your data is now available - query it normally +let todos = try await db.getAll("SELECT * FROM todos WHERE list_id = ?", ["abc123"]) + +// When leaving the screen or component... +sub.unsubscribe() +``` + + + +```csharp +// Subscribe to a stream with parameters +var sub = await db.SyncStream("list_todos", new() { ["list_id"] = "abc123" }).Subscribe(); + +// Wait for initial data to sync +await sub.WaitForFirstSync(); + +// Your data is now available - query it normally +var todos = await db.GetAll("SELECT * FROM todos WHERE list_id = ?", new[] { "abc123" }); + +// When leaving the screen or component... 
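+// Unsubscribed data stays cached locally for the stream's TTL (default 24 hours); see the TTL section below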
+sub.Unsubscribe(); +``` + + + +## Framework Integrations + +Most developers use framework-specific hooks that handle subscription lifecycle automatically. These are the recommended approach for React and Compose apps. + +### React Hooks + +The `useSyncStream` hook automatically subscribes when the component mounts and unsubscribes when it unmounts: + +```jsx +function TodoList({ listId }) { + // Automatically subscribes/unsubscribes based on component lifecycle + const stream = useSyncStream({ name: 'list_todos', parameters: { list_id: listId } }); + + // Check if data has synced + if (!stream?.subscription.hasSynced) { + return ; + } + + // Data is ready - query and render + const { data: todos } = useQuery('SELECT * FROM todos WHERE list_id = ?', [listId]); + return ; +} +``` + +You can also have `useQuery` wait for a stream before running: + +```jsx +// This query waits for the stream to sync before executing +const { data: todos } = useQuery( + 'SELECT * FROM todos WHERE list_id = ?', + [listId], + { streams: [{ name: 'list_todos', parameters: { list_id: listId }, waitForStream: true }] } +); +``` + +### Kotlin Compose + +Use `composeSyncStream` to tie subscription lifecycle to a composable: + +```kotlin +@Composable +fun TodoListScreen(db: PowerSyncDatabase, listId: String) { + // Automatically subscribes while this composable is active + val stream = db.composeSyncStream( + name = "list_todos", + parameters = mapOf("list_id" to JsonParam.String(listId)) + ) + + // Check sync state and render accordingly + if (stream.subscription.hasSynced) { + TodoList(listId) + } else { + LoadingIndicator() + } +} +``` + +## Checking Sync Status + +You can check whether a subscription has synced and monitor download progress: + + + +```js +const sub = await db.syncStream('list_todos', { list_id: 'abc123' }).subscribe(); + +// Check if this subscription has completed initial sync +const status = db.currentStatus.forStream(sub); +console.log(status?.subscription.hasSynced); // true/false +console.log(status?.progress); // download progress +``` + + + +```dart +final sub = await db.syncStream('list_todos', {'list_id': 'abc123'}).subscribe(); + +// Check if this subscription has completed initial sync +final status = db.currentStatus.forStream(sub); +print(status?.subscription.hasSynced); // true/false +print(status?.progress); // download progress +``` + + + +```kotlin +val sub = database.syncStream("list_todos", mapOf("list_id" to JsonParam.String("abc123"))) + .subscribe() + +// Check if this subscription has completed initial sync +val status = database.currentStatus.forStream(sub) +println(status?.subscription?.hasSynced) // true/false +println(status?.progress) // download progress +``` + + + +```swift +let sub = try await db.syncStream("list_todos", ["list_id": "abc123"]).subscribe() + +// Check if this subscription has completed initial sync +let status = db.currentStatus.forStream(sub) +print(status?.subscription.hasSynced ?? false) // true/false +print(status?.progress ?? 0) // download progress +``` + + + +```csharp +var sub = await db.SyncStream("list_todos", new() { ["list_id"] = "abc123" }).Subscribe(); + +// Check if this subscription has completed initial sync +var status = db.CurrentStatus.ForStream(sub); +Console.WriteLine(status?.Subscription.HasSynced); // true/false +Console.WriteLine(status?.Progress); // download progress +``` + + + +## TTL (Time-To-Live) + +TTL controls how long data remains cached after you unsubscribe. 
This enables "warm cache" behavior — when users navigate back to a screen, data may already be available without waiting for a sync. + +**Default behavior:** Data is cached for 24 hours after unsubscribing. For most apps, this default works well. + +### Setting a Custom TTL + + + +```js +// Cache for 1 hour after unsubscribe (TTL in seconds) +const sub = await db.syncStream('todos', { list_id: 'abc' }) + .subscribe({ ttl: 3600 }); + +// Cache indefinitely (data never expires) +const sub = await db.syncStream('todos', { list_id: 'abc' }) + .subscribe({ ttl: Infinity }); + +// No caching (remove data immediately on unsubscribe) +const sub = await db.syncStream('todos', { list_id: 'abc' }) + .subscribe({ ttl: 0 }); +``` + + + +```dart +// Cache for 1 hour after unsubscribe +final sub = await db.syncStream('todos', {'list_id': 'abc'}) + .subscribe(ttl: const Duration(hours: 1)); + +// Cache for 7 days +final sub = await db.syncStream('todos', {'list_id': 'abc'}) + .subscribe(ttl: const Duration(days: 7)); +``` + + + +```kotlin +// Cache for 1 hour after unsubscribe +val sub = database.syncStream("todos", mapOf("list_id" to JsonParam.String("abc"))) + .subscribe(ttl = 1.hours) + +// Cache for 7 days +val sub = database.syncStream("todos", mapOf("list_id" to JsonParam.String("abc"))) + .subscribe(ttl = 7.days) +``` + + + +```swift +// Cache for 1 hour after unsubscribe +let sub = try await db.syncStream("todos", ["list_id": "abc"]) + .subscribe(ttl: .hours(1)) + +// Cache for 7 days +let sub = try await db.syncStream("todos", ["list_id": "abc"]) + .subscribe(ttl: .days(7)) +``` + + + +```csharp +// Cache for 1 hour after unsubscribe +var sub = await db.SyncStream("todos", new() { ["list_id"] = "abc" }) + .Subscribe(new SyncStreamSubscribeOptions { Ttl = TimeSpan.FromHours(1) }); + +// Cache for 7 days +var sub = await db.SyncStream("todos", new() { ["list_id"] = "abc" }) + .Subscribe(new SyncStreamSubscribeOptions { Ttl = TimeSpan.FromDays(7) }); +``` + + + +### How TTL Works + +- **Per-subscription**: Each `(stream name, parameters)` pair has its own TTL +- **First subscription wins**: If you subscribe to the same stream with the same parameters multiple times, the TTL from the first subscription is used +- **After unsubscribe**: Data continues syncing for the TTL duration, then is removed from the local database + +```js +// Example: User opens two lists with different TTLs +const subA = await db.syncStream('todos', { list_id: 'A' }).subscribe({ ttl: 43200 }); // 12h +const subB = await db.syncStream('todos', { list_id: 'B' }).subscribe({ ttl: 86400 }); // 24h + +// Each subscription is independent +// List A data cached for 12h after unsubscribe +// List B data cached for 24h after unsubscribe +``` + +## Connection Parameters + +Connection parameters are a more advanced feature for values that apply to all streams in a session. They're the Sync Streams equivalent of [Client Parameters](/sync/rules/client-parameters) in Sync Rules. + + +For most use cases, **subscription parameters** (passed when subscribing) are more flexible and recommended. Use connection parameters only when you need a single global value across all streams, like an environment flag. 
+ + +Define streams that use connection parameters: + +```yaml +streams: + config: + query: SELECT * FROM config WHERE env = connection.parameter('environment') + auto_subscribe: true +``` + +Set connection parameters when connecting: + + + +```js +await db.connect(connector, { + params: { environment: 'production' } +}); +``` + + + +```dart +await db.connect( + connector: connector, + params: {'environment': 'production'}, +); +``` + + + +```kotlin +database.connect( + connector, + params = mapOf("environment" to JsonParam.String("production")) +) +``` + + + +```swift +try await db.connect( + connector: connector, + options: ConnectOptions(params: ["environment": "production"]) +) +``` + + + +```csharp +await db.Connect(connector, new ConnectOptions { + Params = new() { ["environment"] = "production" } +}); +``` + + + +## API Reference + +For quick reference, here are the key methods available in each SDK: + +| Method | Description | +|--------|-------------| +| `db.syncStream(name, params)` | Get a `SyncStream` instance for a stream with optional parameters | +| `stream.subscribe(options)` | Subscribe to the stream, returns a `SyncStreamSubscription` | +| `subscription.waitForFirstSync()` | Wait until the subscription has completed its initial sync | +| `subscription.unsubscribe()` | Unsubscribe from the stream (data remains cached for TTL duration) | +| `db.currentStatus.forStream(sub)` | Get sync status and progress for a subscription | diff --git a/sync/streams/ctes.mdx b/sync/streams/ctes.mdx new file mode 100644 index 00000000..d420da25 --- /dev/null +++ b/sync/streams/ctes.mdx @@ -0,0 +1,6 @@ +--- +title: "Using CTEs" +description: Reuse common query patterns across multiple Sync Streams using Common Table Expressions (CTEs). +--- + +todo diff --git a/sync/streams/examples.mdx b/sync/streams/examples.mdx new file mode 100644 index 00000000..cd1d76c8 --- /dev/null +++ b/sync/streams/examples.mdx @@ -0,0 +1,144 @@ +--- +title: "Examples & Demos" +description: Working demo apps and complete Sync Streams examples for common patterns. +--- + +Explore working demo apps that demonstrate Sync Streams in action. + +## Demo Apps + +These demo apps show how to combine auto-subscribe streams (for data that should always be available) with on-demand streams (for data loaded when needed). + + + +Try the [`react-supabase-todolist-sync-streams`](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-supabase-todolist-sync-streams) demo app by following the instructions in the README. + +In this demo: +- The app syncs `lists` by default, so they're available immediately and offline (demonstrating auto-subscribe behavior). +- The app syncs `todos` on demand when a user opens a list (demonstrating subscription parameters). +- When the user navigates back to the same list, they won't see a loading state, because the data is cached locally (demonstrating TTL caching behavior). + + +Try the [`supabase-todolist`](https://github.com/powersync-ja/powersync.dart/tree/main/demos/supabase-todolist) demo app, which supports Sync Streams. + +Deploy the following Sync Streams configuration: + +```yaml +config: + edition: 2 + +streams: + lists: + query: SELECT * FROM lists + auto_subscribe: true + todos: + query: SELECT * FROM todos WHERE list_id = subscription.parameter('list') +``` + +In this demo: +- The app syncs `lists` by default, so they're available immediately and offline (demonstrating auto-subscribe behavior). 
+- The app syncs `todos` on demand when a user opens a list (demonstrating subscription parameters). +- When the user navigates back to the same list, they won't see a loading state, because the data is cached locally (demonstrating TTL caching behavior). + + +Kotlin Sync Streams support is available. Demo app coming soon. + + +Swift Sync Streams support is available. Demo app coming soon. + + +.NET Sync Streams support is available. Demo app coming soon. + + + +## Common Patterns + +### Todo List with On-Demand Loading + +A classic pattern: sync the list of lists upfront, but only sync todos when the user opens a specific list. + +```yaml +config: + edition: 2 + +streams: + # Always available - user can see their lists offline + lists: + query: SELECT * FROM lists WHERE owner_id = auth.user_id() + auto_subscribe: true + + # Loaded on demand - only sync todos for the list being viewed + list_todos: + query: | + SELECT * FROM todos + WHERE list_id = subscription.parameter('list_id') + AND list_id IN (SELECT id FROM lists WHERE owner_id = auth.user_id()) +``` + +Client usage: + +```js +// Lists are already synced (auto_subscribe: true) +const lists = await db.getAll('SELECT * FROM lists'); + +// When user opens a list, subscribe to its todos +const sub = await db.syncStream('list_todos', { list_id: selectedListId }).subscribe(); +await sub.waitForFirstSync(); + +// Todos are now available locally +const todos = await db.getAll('SELECT * FROM todos WHERE list_id = ?', [selectedListId]); +``` + +### Project Workspace + +Sync project metadata upfront, but load project contents on demand. + +```yaml +config: + edition: 2 + +streams: + # User's projects - always available for navigation + my_projects: + query: SELECT * FROM projects WHERE owner_id = auth.user_id() + auto_subscribe: true + + # Project details - loaded when user opens a project + project_tasks: + query: | + SELECT * FROM tasks + WHERE project_id = subscription.parameter('project_id') + AND project_id IN (SELECT id FROM projects WHERE owner_id = auth.user_id()) + + project_files: + query: | + SELECT * FROM files + WHERE project_id = subscription.parameter('project_id') + AND project_id IN (SELECT id FROM projects WHERE owner_id = auth.user_id()) +``` + +### Chat Application + +Sync conversation list upfront, load messages on demand. + +```yaml +config: + edition: 2 + +streams: + # User's conversations - always show the conversation list + my_conversations: + query: | + SELECT * FROM conversations + WHERE id IN (SELECT conversation_id FROM participants WHERE user_id = auth.user_id()) + auto_subscribe: true + + # Messages - only load for the active conversation + conversation_messages: + query: | + SELECT * FROM messages + WHERE conversation_id = subscription.parameter('conversation_id') + AND conversation_id IN ( + SELECT conversation_id FROM participants WHERE user_id = auth.user_id() + ) +``` diff --git a/sync/streams/migration.mdx b/sync/streams/migration.mdx new file mode 100644 index 00000000..6e9861a6 --- /dev/null +++ b/sync/streams/migration.mdx @@ -0,0 +1,258 @@ +--- +title: "Migrating from Sync Rules" +description: How to migrate existing projects from Sync Rules to Sync Streams. +--- + +## Why Migrate? + +PowerSync's original Sync Rules system was optimized for offline-first use cases where you want to "sync everything upfront" when the client connects, so data is available locally if the user goes offline. 
+ +However, many developers are building apps where users are mostly online, and you don't want to make users wait to sync a lot of data upfront. This is especially true for **web apps**: users are mostly online, you often want to sync only the data needed for the current page, and users frequently have multiple browser tabs open — each needing different subsets of data. + +### The Problem with Client Parameters + +[Client Parameters](/sync/rules/client-parameters) in Sync Rules partially support on-demand syncing — for example, using a `project_ids` array to sync only specific projects. However, manually managing these arrays across different browser tabs becomes painful: + +- You need to aggregate IDs across all open tabs +- You need additional logic for different data types (tables) +- If you want to keep data around after a tab closes (caching), you need even more management + +### How Sync Streams Solve This + +Sync Streams address these limitations: + +1. **On-demand syncing**: Define streams once, then subscribe from your app one or more times with different parameters. No need to manage arrays of IDs — each subscription is independent. + +2. **Multi-tab support**: Each subscription manages its own lifecycle. Open the same list in two tabs? Each tab subscribes independently. Close one? The other keeps working. + +3. **Built-in caching**: Each subscription has a configurable `ttl` that keeps data cached after unsubscribing. When users return to a screen, data may already be available — no loading state needed. + +4. **Simpler syntax**: Just queries with subqueries. No separate parameter queries. The syntax is closer to plain SQL. + +5. **Framework integration**: React hooks and Kotlin Compose extensions let your UI components automatically manage subscriptions based on what's rendered. + +### Still Need Offline-First? + +If you want "sync everything upfront" behavior (like Sync Rules), set `auto_subscribe: true` on your streams and clients will subscribe automatically when they connect. 
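+
+For example, a minimal sketch (the `lists` table and `owner_id` column here are illustrative):
+
+```yaml
+streams:
+  my_lists:
+    query: SELECT * FROM lists WHERE owner_id = auth.user_id()
+    auto_subscribe: true # clients subscribe automatically on connect
+```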
+ +## Requirements + +- PowerSync Service v1.15.0+ (Cloud instances already meet this) +- Latest SDK versions with [Rust-based sync client](https://releases.powersync.com/announcements/improved-sync-performance-in-our-client-sdks) (enabled by default on latest SDKs) +- `config: edition: 2` in your sync config + + + +| SDK | Minimum Version | Rust Client Default | +|-----|-----------------|---------------------| +| JS Web | v1.27.0 | v1.32.0 | +| React Native | v1.25.0 | v1.29.0 | +| React hooks | v1.8.0 | — | +| Node.js | — | v0.16.0 | +| Capacitor | — | v0.3.0 | +| Dart/Flutter | v1.16.0 | v1.17.0 | +| Kotlin | v1.7.0 | v1.9.0 | +| Swift | [In progress](https://github.com/powersync-ja/powersync-swift/pull/86) | v1.8.0 | +| .NET | v0.0.8-alpha.1 | v0.0.5-alpha.1 | + + + +If you're on an SDK version below the "Rust Client Default" version, enable the Rust client manually: + +**JavaScript:** +```js +await db.connect(new MyConnector(), { + clientImplementation: SyncClientImplementation.RUST +}); +``` + +**Dart:** +```dart +database.connect( + connector: YourConnector(), + options: const SyncOptions( + syncImplementation: SyncClientImplementation.rust, + ), +); +``` + +**Kotlin:** +```kotlin +database.connect(MyConnector(), options = SyncOptions( + newClientImplementation = true, +)) +``` + +**Swift:** +```swift +@_spi(PowerSyncExperimental) import PowerSync + +try await db.connect(connector: connector, options: ConnectOptions( + newClientImplementation: true, +)) +``` + + + +## Migration Tool + +Use the [Sync Rules to Sync Streams converter](https://powersync-community.github.io/bucket-definitions-to-sync-streams/) to automatically convert your existing Sync Rules to Sync Streams. This tool handles most common patterns and gives you a starting point for your migration. + +## Stream Options + +All available stream options: + +```yaml +streams: + my_stream: + query: SELECT * FROM table WHERE ... + auto_subscribe: true + priority: 1 + accept_potentially_dangerous_queries: true +``` + +| Option | Default | Description | +|--------|---------|-------------| +| `query` | (required) | SQL-like query defining which data to sync | +| `auto_subscribe` | `false` | When `true`, clients automatically subscribe to this stream on connect | +| `priority` | — | Sync priority (lower value = higher priority). See [Prioritized Sync](/sync/advanced/prioritized-sync) | +| `accept_potentially_dangerous_queries` | `false` | Silences security warnings. PowerSync warns when queries use subscription or connection parameters without also including JWT-based authorization (e.g., `auth.user_id()`). Since clients can send any value for these parameters, relying on them alone for access control could be insecure. Set to `true` if you've verified the query is safe or authorization is handled elsewhere. | + +## Migration Examples + +### Global Data (No Parameters) + +In Sync Rules, a "global" bucket syncs the same data to all users. In Sync Streams, you achieve this with queries that have no parameters. Add `auto_subscribe: true` to maintain the Sync Rules behavior where data syncs automatically on connect. 
+ +**Sync Rules:** +```yaml +bucket_definitions: + global: + data: + - SELECT * FROM todos + - SELECT * FROM lists WHERE archived = false +``` + +**Sync Streams:** +```yaml +config: + edition: 2 + +streams: + all_todos: + query: SELECT * FROM todos + auto_subscribe: true # Sync automatically like Sync Rules + unarchived_lists: + query: SELECT * FROM lists WHERE archived = false + auto_subscribe: true # Sync automatically like Sync Rules +``` + + +Without `auto_subscribe: true`, clients would need to explicitly subscribe to these streams. This gives you flexibility to migrate incrementally or switch to on-demand syncing later. + + +### User-Scoped Data + +**Sync Rules:** +```yaml +bucket_definitions: + user_lists: + priority: 1 + parameters: SELECT request.user_id() as user_id + data: + - SELECT * FROM lists WHERE owner_id = bucket.user_id +``` + +**Sync Streams:** +```yaml +config: + edition: 2 + +streams: + user_lists: + priority: 1 + query: SELECT * FROM lists WHERE owner_id = auth.user_id() + auto_subscribe: true +``` + +### Data with Subqueries (Replaces Parameter Queries) + +**Sync Rules:** +```yaml +bucket_definitions: + owned_lists: + parameters: | + SELECT id as list_id FROM lists WHERE owner_id = request.user_id() + data: + - SELECT * FROM lists WHERE lists.id = bucket.list_id + - SELECT * FROM todos WHERE todos.list_id = bucket.list_id +``` + +**Sync Streams:** +```yaml +config: + edition: 2 + +streams: + owned_lists: + query: SELECT * FROM lists WHERE owner_id = auth.user_id() + auto_subscribe: true + list_todos: + query: | + SELECT * FROM todos + WHERE list_id = subscription.parameter('list_id') + AND list_id IN (SELECT id FROM lists WHERE owner_id = auth.user_id()) +``` + +### Client Parameters → Subscription Parameters + +**Sync Rules** used global Client Parameters: +```yaml +bucket_definitions: + posts: + parameters: SELECT (request.parameters() ->> 'current_page') as page_number + data: + - SELECT * FROM posts WHERE page_number = bucket.page_number +``` + +**Sync Streams** use Subscription Parameters, which are more flexible — you can subscribe multiple times with different values: +```yaml +config: + edition: 2 + +streams: + posts: + query: SELECT * FROM posts WHERE page_number = subscription.parameter('page_number') +``` + +```js +// Subscribe to multiple pages simultaneously +const page1 = await db.syncStream('posts', { page_number: 1 }).subscribe(); +const page2 = await db.syncStream('posts', { page_number: 2 }).subscribe(); +``` + +## Parameter Syntax Changes + +| Sync Rules | Sync Streams | +|------------|--------------| +| `request.user_id()` | `auth.user_id()` | +| `request.jwt() ->> 'claim'` | `auth.parameter('claim')` | +| `request.parameters() ->> 'key'` | `connection.parameter('key')` or `subscription.parameter('key')` | +| `bucket.param_name` | Use the parameter directly in the query | + +## Client-Side Changes + +After updating your sync config, update your client code to use subscriptions: + +```js +// Before (Sync Rules with Client Parameters) +await db.connect(connector, { + params: { current_project: projectId } +}); + +// After (Sync Streams with Subscriptions) +await db.connect(connector); +const sub = await db.syncStream('project_data', { project_id: projectId }).subscribe(); +``` + +See [Client-Side Usage](/sync/streams/client-usage) for detailed examples. 
diff --git a/sync/streams/overview.mdx b/sync/streams/overview.mdx index 1c61870b..238bc3b8 100644 --- a/sync/streams/overview.mdx +++ b/sync/streams/overview.mdx @@ -1,463 +1,319 @@ --- -title: "Sync Streams (Beta)" -description: Sync Streams will replace Sync Rules and are designed to allow for more dynamic syncing, while not compromising on existing offline-first capabilities. -sidebarTitle: "Overview" +title: "Sync Streams" +description: Sync Streams enable partial sync, letting you define exactly which data from your backend syncs to each client using simple SQL-like queries. +sidebarTitle: "Quickstart" --- -## Motivation +Sync Streams enable partial sync — instead of syncing entire tables, you tell PowerSync exactly which data each user should have on their device. You write simple SQL-like queries to define the data, and your client app subscribes to the streams it needs. PowerSync handles the rest, keeping data in sync in real-time and making it available offline. -PowerSync's original [Sync Rules](/sync/rules/overview) system was optimized for offline-first use cases where you want to “sync everything upfront” when the client connects, so that data is available locally if a user goes offline at any point. +For example, you might create a stream that syncs only the current user's todo items, another for shared projects they have access to, and another for reference data that everyone needs. Your app subscribes to these streams on demand, and only that data syncs to the client's local SQLite database. -However, many developers are building apps where users are mostly online, and you don't want to make users wait to sync a lot of data upfront. In these cases, it's more suited to sync data on-demand. This is especially true for web apps: users are mostly online and you often want to sync only the data needed for the current page. Users also frequently have multiple tabs open, each needing different subsets of data. - -Sync engines like PowerSync are still great for these online web app use cases, because they provide you with real-time updates, simplified state management, and ease of working with data locally. + +**Beta Release** -[Client Parameters](/sync/rules/client-parameters) in the current Sync Rules system support on-demand syncing across different browser tabs to some extent: For example, using a `project_ids` array as a Client Parameter to sync only specific projects. However, manually managing these arrays across different browser tabs becomes quite painful. +Sync Streams are now in beta and production-ready. We recommend Sync Streams for all new projects, and encourage existing projects to [migrate from Sync Rules](/sync/streams/migration). -We are introducing **Sync Streams** to provide the best of both worlds: support for dynamic on-demand syncing, as well as "syncing everything upfront". +We welcome your feedback — please share with us in [Discord](https://discord.gg/powersync). + -Key improvements in Sync Streams over Sync Rules include: +## Defining Streams -1. **On-demand syncing**: You define Sync Streams on the PowerSync Service, and a client can then subscribe to them one or more times with different parameters. -2. **Temporary caching-like behavior**: Each subscription includes a configurable `ttl` that keeps data active after your app unsubscribes, acting as a warm cache for recently accessed data. -3. 
**Simpler developer experience**: Simplified syntax and mental model, and capabilities such as your UI components automatically managing subscriptions (for example, React hooks). +Streams are defined in a YAML configuration file. Each stream has a **name** and a **query** that specifies which rows to sync using SQL-like syntax. The query can reference parameters like the authenticated user's ID to personalize what each user receives. -If you want “sync everything upfront” behavior (like the current Sync Rules system), that’s easy too: you can configure any of your Sync Streams to be auto-subscribed by the client on connecting. + + +In the [PowerSync Dashboard](https://dashboard.powersync.com/): +1. Select your project and instance +2. Go to the **Sync Streams** view +3. Edit the YAML directly in the dashboard +4. Click **Deploy** to validate and deploy - -**Beta Release — Production-Ready** +```yaml +config: + edition: 2 -Sync Streams are now in beta and production-ready. We recommend Sync Streams for all new projects. +streams: + todos: + query: SELECT * FROM todos WHERE owner_id = auth.user_id() +``` + -Sync Streams will ultimately replace the current Sync Rules system. Sync Rules will continue to be supported for existing projects, but we recommend migrating to Sync Streams. + +Add a `sync_config` section to your `config.yaml`: -We welcome your feedback on Sync Streams — please share with us in [Discord](https://discord.gg/powersync). - +```yaml config.yaml +sync_config: + content: | + config: + edition: 2 -## Requirements for Using Sync Streams - -* v1.15.0 of the PowerSync Service (Cloud instances are already on this version) -* Minimum SDK versions: - * JS: - * Web: v1.27.0 - * React Native: v1.25.0 - * React hooks: v1.8.0 - * Dart: v1.16.0 - * Kotlin: v1.7.0 - * .NET: v0.0.8-alpha.1 - * Swift: [In progress](https://github.com/powersync-ja/powersync-swift/pull/86). -* Use of the [Rust-based sync client](https://releases.powersync.com/announcements/improved-sync-performance-in-our-client-sdks). The Rust-based sync client is enabled by default on the latest version of all SDKs. If you are on a lower version, follow the instructions below to enable it. - - - - - The Rust client became the default in Web SDK v1.32.0, React Native SDK v1.29.0, Node.js SDK v0.16.0, and Capacitor SDK v0.3.0. For lower versions, pass the `clientImplementation` option when connecting: - - ```js - await db.connect(new MyConnector(), { - clientImplementation: SyncClientImplementation.RUST - }); - ``` - - You can migrate back to the JavaScript client later by removing the option. - - - The Rust client became the default in Flutter/Dart SDK v1.17.0. Pass the `syncImplementation` option when connecting: - - ```dart - database.connect( - connector: YourConnector(), - options: const SyncOptions( - syncImplementation: SyncClientImplementation.rust, - ), - ); - ``` - - You can migrate back to the Dart client later by removing the option. - - - The Rust client became the default in Kotlin SDK v1.9.0. For lower versions, pass the `newClientImplementation` option when connecting: - - ```kotlin - //@file:OptIn(ExperimentalPowerSyncAPI::class) - database.connect(MyConnector(), options = SyncOptions( - newClientImplementation = true, - )) - ``` - - You can migrate back to the Kotlin client later by removing the option. - - - The Rust client became the default in Swift SDK v1.8.0. 
For lower versions, pass the `newClientImplementation` option when connecting: - - ```swift - @_spi(PowerSyncExperimental) import PowerSync - - try await db.connect(connector: connector, options: ConnectOptions( - newClientImplementation: true, - )) - ``` - - You can migrate back to the Swift client later by removing the option. - - - The Rust client was introduced as the default in .NET SDK v0.0.5-alpha.1. No additional configuration is required. - - - -* Sync Stream definitions. They are currently defined in the same YAML file as Sync Rules: `sync_rules.yaml` (PowerSync Cloud) or `config.yaml` (Open Edition/self-hosted). To enable Sync Streams, add the following configuration: - - ```yaml sync_rules.yaml - config: - # see https://docs.powersync.com/sync/advanced/compatibility - # this edition also deploys several backwards-incompatible fixes - # see the docs for details - edition: 2 - - streams: - ... # see 'Stream Definition Syntax' section below - ``` - -## Stream Definition Syntax - -You specify **stream definitions** similar to bucket definitions in Sync Rules. Clients then subscribe to the defined streams one or more times, with different parameters. - -Syntax: -```yaml sync_rules.yaml -streams: - : - query: string # similar to Data Queries in Sync Rules, but also support limited subqueries. - auto_subscribe: boolean # true to subscribe to this stream by default (similar to how Sync Rules work), false (default) if clients should explicitly subscribe. - priority: number # sync priority, same as in Sync Rules: https://docs.powersync.com/sync/advanced/prioritized-sync - accept_potentially_dangerous_queries: boolean # silence warnings on dangerous queries, same as in Sync Rules. + streams: + todos: + query: SELECT * FROM todos WHERE owner_id = auth.user_id() ``` + + + +## Basic Examples + +There are two independent concepts to understand: + +- **Data scope**: What data the stream returns + - *Global data*: No parameters, same data for all users (e.g. reference tables like categories) + - *User-scoped data*: Uses `auth.user_id()` or JWT claims, different per user + - *Parameterized data*: Uses `subscription.parameter()`, varies based on what the client subscribes to -Basic example: -```yaml sync_rules.yaml +- **Subscription behavior**: When the client syncs the data + - *Auto-subscribe*: Client automatically subscribes on connect (`auto_subscribe: true`) + - *On-demand*: Client explicitly subscribes when needed (default behavior) + +### Global Data + +Data without parameters is "global" data, meaning the same data goes to all users. This is useful for reference tables: + +```yaml config: edition: 2 + streams: - issue: # Define a stream to a specific issue - query: select * from issues where id = subscription.parameters() ->> 'id' - issue_comments: # Define a stream to a specific issue's comments - query: select * from comments where issue_id = subscription.parameters() ->> 'id' + # Same categories for everyone + categories: + query: SELECT * FROM categories + # Same active products for everyone + products: + query: SELECT * FROM products WHERE active = true ``` + +Global data streams still require clients to subscribe explicitly unless you set `auto_subscribe: true`. + + +### User-Scoped Data -### Just Queries with Subqueries +Use `auth.user_id()` or JWT claims to return different data per user: -Whereas Sync Rules had separate [Parameter Queries](/sync/rules/parameter-queries) and [Data Queries](/sync/rules/data-queries), Sync Streams only have a `query`. 
Instead of Parameter Queries, Sync Streams can use parameters directly in the query, and support a limited form of subqueries. For example: +```yaml +config: + edition: 2 -```yaml sync_rules.yaml -# use parameters directly in the query (see below for details on accessing parameters) -select * from issues where id = subscription.parameters() ->> 'id' and owner_id = auth.user_id() +streams: + # Each user gets their own lists + my_lists: + query: SELECT * FROM lists WHERE owner_id = auth.user_id() -# "in (subquery)" replaces parameter queries: -select * from comments where issue_id in (select id from issues where owner_id = auth.user_id()) + # Each user gets their own orders + my_orders: + query: SELECT * FROM orders WHERE user_id = auth.user_id() ``` -Under the hood, Sync Streams use the same bucket system as Sync Rules, so you get the same functionality as before with Parameter Queries, however, the Sync Streams syntax is closer to plain SQL. +### Parameterized Data (On-Demand) + +Use `subscription.parameter()` for data that clients subscribe to explicitly: +```yaml +config: + edition: 2 -### Accessing Parameters +streams: + # Sync todos for a specific list when the client subscribes with a list_id + list_todos: + query: | + SELECT * FROM todos + WHERE list_id = subscription.parameter('list_id') + AND list_id IN (SELECT id FROM lists WHERE owner_id = auth.user_id()) +``` -We have streamlined how different kinds of parameters are accessed in Sync Streams [compared](/sync/rules/parameter-queries) to Sync Rules. +### Using Auto-Subscribe -**Subscription Parameters**: Passed from the client when it subscribes to a Sync Stream. See [Client-Side Syntax](#client-side-syntax) below. Clients can subscribe to the same stream multiple times with -different parameters: +Set `auto_subscribe: true` to sync data automatically when clients connect. This is useful for: +- Reference data that all screens need +- User data that should always be available offline +- Maintaining [Sync Rules](/sync/rules/overview) behavior (sync everything upfront) during migration ```yaml -subscription.parameters() # all parameters for the subscription, as JSON -subscription.parameter('key') # shorthand for getting a single specific parameter +config: + edition: 2 + +streams: + # Global data, synced automatically + categories: + query: SELECT * FROM categories + auto_subscribe: true + + # User-scoped data, synced automatically + my_orders: + query: SELECT * FROM orders WHERE user_id = auth.user_id() + auto_subscribe: true + + # Parameterized data, subscribed on-demand (no auto_subscribe) + order_items: + query: | + SELECT * FROM order_items + WHERE order_id = subscription.parameter('order_id') + AND order_id IN (SELECT id FROM orders WHERE user_id = auth.user_id()) ``` -**Auth Parameters**: Claims from the JWT: +## Using Parameters + +Parameters let you filter data dynamically. The two most common types are: + +**Auth parameters** filter by user identity. Use `auth.user_id()` to sync data belonging to the current user: ```yaml -auth.parameters() # JWT token payload, as JSON -auth.parameter('key') # short-hand for getting a single specific token payload parameter -auth.user_id() # same as auth.parameter('sub') +streams: + my_lists: + query: SELECT * FROM lists WHERE owner_id = auth.user_id() ``` - -**Connection Parameters**: Specified "globally" on the connection level. 
These are the equivalent of [Client Parameters](/sync/rules/client-parameters) in Sync Rules: +**Subscription parameters** are passed from the client when subscribing. Use these for on-demand data: ```yaml -connection.parameters() # all parameters for the connection, as JSON -connection.parameter('key') # shorthand for getting a single specific parameter +streams: + list_todos: + query: SELECT * FROM todos WHERE list_id = subscription.parameter('list_id') ``` -### Usage Examples: Sync Rules vs Sync Streams +```js +// Client subscribes with the list they want to view +const sub = await db.syncStream('list_todos', { list_id: 'abc123' }).subscribe(); +``` + +See [Writing Stream Queries](/sync/streams/queries) for the full parameter reference, subqueries, and more patterns. + +## Client-Side Usage - +Subscribe to streams from your client app: -### Global data -**Sync Rules:** -```yaml sync_rules.yaml - bucket_definitions: - global: - data: - # Sync all todos - - SELECT * FROM todos - # Sync all lists except archived ones - - SELECT * FROM lists WHERE archived = false + + +```js +const sub = await db.syncStream('list_todos', { list_id: 'abc123' }) + .subscribe({ ttl: 3600 }); + +// Wait for this subscription to have synced +await sub.waitForFirstSync(); + +// When the component needing the subscription is no longer active... +sub.unsubscribe(); ``` -**Sync Streams:** "Global" data — the data you want all of your users to have by default — is also defined as streams. Specify `auto_subscribe: true` so your users subscribe to them by default. -```yaml sync_rules.yaml - streams: - all_todos: - query: SELECT * FROM todos - auto_subscribe: true - unarchived_lists: - query: SELECT * FROM lists WHERE archived = false - auto_subscribe: true +**React hooks:** + +```jsx +const stream = useSyncStream({ name: 'list_todos', parameters: { list_id: 'abc123' } }); +// Check download progress or subscription information +stream?.progress; +stream?.subscription.hasSynced; ``` -### A user's owned lists, with a priority -**Sync Rules:** -```yaml sync_rules.yaml - bucket_definitions: - user_lists: - priority: 1 # See https://docs.powersync.com/sync/advanced/prioritized-sync - parameters: SELECT request.user_id() as user_id - data: - - SELECT * FROM lists WHERE owner_id = bucket.user_id +The `useQuery` hook can wait for Sync Streams before running queries: + +```jsx +const { data } = useQuery( + 'SELECT * FROM todos WHERE list_id = ?', + [listId], + { streams: [{ name: 'list_todos', parameters: { list_id: listId }, waitForStream: true }] } +); ``` + -**Sync Streams:** -```yaml sync_rules.yaml - streams: - user_lists: - priority: 1 # See https://docs.powersync.com/sync/advanced/prioritized-sync - query: SELECT * FROM lists WHERE owner_id = auth.user_id() + +```dart +final sub = await db + .syncStream('list_todos', {'list_id': 'abc123'}) + .subscribe(ttl: const Duration(hours: 1)); + +// Wait for this subscription to have synced +await sub.waitForFirstSync(); + +// When the component needing the subscription is no longer active... 
+sub.unsubscribe(); ``` + + + +```kotlin +val sub = database.syncStream("list_todos", mapOf("list_id" to JsonParam.String("abc123"))) + .subscribe(ttl = 1.0.hours) -### Grouping by `list_id` -**Sync Rules:** -```yaml sync_rules.yaml - bucket_definitions: - owned_lists: - parameters: | - SELECT id as list_id FROM lists WHERE - owner_id = request.user_id() - data: - - SELECT * FROM lists WHERE lists.id = bucket.list_id - - SELECT * FROM todos WHERE todos.list_id = bucket.list_id +// Wait for this subscription to have synced +sub.waitForFirstSync() + +// When the component needing the subscription is no longer active... +sub.unsubscribe() ``` -**Sync Streams:** -```yaml sync_rules.yaml - streams: - owned_lists: - query: SELECT * FROM lists WHERE owner_id = auth.user_id() - list_todos: - query: SELECT * FROM todos WHERE list_id = subscription.parameter('list_id') AND list_id IN (SELECT id FROM lists WHERE owner_id = auth.user_id()) - + +**Compose:** + +```kotlin +@Composable +fun TodoListPage(db: PowerSyncDatabase, listId: String) { + val stream = db.composeSyncStream( + name = "list_todos", + parameters = mapOf("list_id" to JsonParam.String(listId)) + ) + // Define component based on stream state +} ``` + -### Parameters usage -**Sync Rules:** -```yaml sync_rules.yaml - bucket_definitions: - posts: - parameters: SELECT (request.parameters() ->> 'current_page') as page_number - data: - - SELECT * FROM posts WHERE page_number = bucket.page_number + +```swift +let sub = try await db.syncStream("list_todos", ["list_id": "abc123"]) + .subscribe(ttl: .hours(1)) + +// Wait for this subscription to have synced +try await sub.waitForFirstSync() + +// When the component needing the subscription is no longer active... +sub.unsubscribe() ``` -**Sync Streams:** -```yaml sync_rules.yaml - streams: - posts: - query: SELECT * FROM posts WHERE page_number = subscription.parameter('page_number') + + + +```csharp +var sub = await db.SyncStream("list_todos", new() { ["list_id"] = "abc123" }) + .Subscribe(new SyncStreamSubscribeOptions { Ttl = TimeSpan.FromHours(1) }); + +// Wait for this subscription to have synced +await sub.WaitForFirstSync(); + +// When the component needing the subscription is no longer active... +sub.Unsubscribe(); ``` -Note that the behavior here is different to Sync Rules because `subscription.parameter('page_number')` is local to the subscription, so the Sync Stream can be subscribed to multiple times with different page numbers, whereas Sync Rules only allow a single global Client Parameter value at a time. Connection Parameters (`connection.parameter()`) are available in Sync Streams as the equivalent of the global Client Parameters in Sync Rules, but Subscription Parameters are recommended because they are much more flexible. 
- -### Specific columns/fields, renames and transformations - -Selecting, renaming or transforming specific columns/fields is identical between Sync Rules and Sync Streams: - -```yaml sync_rules.yaml - streams: - todos: - # Example 1: Select specific columns - query: SELECT id, name, owner_id FROM todos - - # Example 2: Rename columns - # query: SELECT id, name, created_timestamp AS created_at FROM todos - - # Example 3: Cast number to text - # query: SELECT id, item_number :: text AS item_number FROM todos - - # Example 4: Alternative syntax for the same cast - # query: id, CAST(item_number as TEXT) AS item_number FROM todos - - # Example 5: Convert binary data (bytea) to base64 - # query: id, base64(thumbnail) AS thumbnail_base64 FROM todos - - # Example 6: Extract field from JSON or JSONB column - # query: id, metadata_json ->> 'description' AS description FROM todos - - # Example 7: Convert time to epoch number - # query: id, unixepoch(created_at) AS created_at FROM todos + + + +### TTL (Time-To-Live) + +Each subscription has a `ttl` that keeps data cached after unsubscribing. This enables warm cache behavior — when users return to a screen, data is already available. Default TTL is 24 hours. See [Client-Side Usage](/sync/streams/client-usage) for details. + +```js +// Set TTL in seconds when subscribing +const sub = await db.syncStream('todos', { list_id: 'abc' }) + .subscribe({ ttl: 3600 }); // Cache for 1 hour after unsubscribe ``` - +## Examples & Demos -## Client-Side Syntax +See [Examples & Demos](/sync/streams/examples) for common examples and demo apps that can be used as a reference for your own project. -In general, each SDK lets you: +## Developer Notes -* Use `db.syncStream(name, [subscription-params])` to get a `SyncStream` instance. -* Call `subscribe()` on a `SyncStream` to get a `SyncStreamSubscription`. This gives you access to `waitForFirstSync()` and `unsubscribe()`. -* Inspect `SyncStatus` for a list of `SyncSubscriptionDefinition`s describing all Sync Streams your app is subscribed to (either due to an explicit subscription or because the Sync Stream has `auto_subscribe: true`). It also reports per-stream download progress. -* Each Sync Stream has a `ttl` (time-to-live). After you call `unsubscribe()`, or when the page/app closes, the stream keeps syncing for the `ttl` duration, enabling caching-like behavior. Each SDK lets you specify the `ttl`, or ignore the `ttl` and delete the data as soon as possible. If not specified, a default TTL of 24 hours applies. +- **SQL Syntax**: Stream queries use a SQL-like syntax, but only `SELECT` statements are supported. You can use `IN (SELECT ...)` subqueries for filtering, but `JOIN`, `GROUP BY`, `ORDER BY`, and `LIMIT` are not available. See [Supported SQL](/sync/supported-sql) for the full list of supported operators and functions. -Select your language for specific examples: - - - ```js - const sub = await powerSync.syncStream('issues', {id: 'issue-id'}).subscribe(ttl: 3600); - - // Resolve current status for subscription - const status = powerSync.currentStatus.forStream(sub); - const progress = status?.progress; - - // Wait for this subscription to have synced - await sub.waitForFirstSync(); - - // When the component needing the subscription is no longer active... 
- sub.unsubscribe(); - ``` - - If you're using React, you can also use hooks to automatically subscribe components to Sync Streams: - - ```js - const stream = useSyncStream({ name: 'todo_list', parameters: { list: 'foo' } }); - // Can then check for download progress or subscription information - stream?.progress; - stream?.subscription.hasSynced; - ``` - - This hook is useful when you want to explicitly ensure a stream is active (for example a root component) or when you need progress/hasSynced state; this makes data available for all child components without each query declaring the stream. - - Additionally, the `useQuery` hook for React can wait for Sync Streams to be complete before running - queries. Pass `streams` only when the component knows which specific stream subscription(s) it depends on and it should wait before querying. - - ```js - const results = useQuery( - 'SELECT ...', - queryParameters, - // This will wait for the stream to sync before running the query - { streams: [{ name: 'todo_list', parameters: { list: 'foo' }, waitForStream: true }] } - ); - ``` - - - - ```dart - final sub = await db - .syncStream('issues', {'id': 'issue-id'}) - .subscribe(ttl: const Duration(hours: 1)); - - // Resolve current status for subscription - final status = db.currentStatus.forStream(sub); - final progress = status?.progress; - - // Wait for this subscription to have synced - await sub.waitForFirstSync(); - - // When the component needing the subscription is no longer active... - sub.unsubscribe(); - ``` - - - - ```Kotlin - val sub = database.syncStream("issues", mapOf("id" to JsonParam.String("issue-id"))).subscribe(ttl = 1.0.hours); - - // Resolve current status for subscription - val status = database.currentStatus.forStream(sub) - val progress = status?.progress - - // Wait for this subscription to have synced - sub.waitForFirstSync() - - // When the component needing the subscription is no longer active... - sub.unsubscribe() - ``` - - If you're using Compose, you can use the `composeSyncStream` extension to subscribe to a stream while - a composition is active: - - ```Kotlin - @Composable - fun TodoListPage(db: PowerSyncDatabase, id: String) { - val syncStream = db.composeSyncStream(name = "list", parameters = mapOf("list_id" to JsonParam.String(id))) - // Define component based on stream state - } - ``` - - - - ```csharp - var sub = await db.SyncStream("issues", new() { ["id"] = "issue-id" }) - .Subscribe(new SyncStreamSubscribeOptions { Ttl = TimeSpan.FromHours(1) }); - - // Resolve current status for subscription - var status = db.CurrentStatus.ForStream(sub); - var progress = status?.Progress; - - // Wait for this subscription to have synced - await sub.WaitForFirstSync(); - - // When the component needing the subscription is no longer active... - sub.Unsubscribe(); - ``` - - - - Coming soon - - +- **Type Conversion**: Data types from your backend database (Postgres, MongoDB, MySQL, etc.) are converted when synced to the client's SQLite database. Most types become `text`, so you may need to parse or cast values in your app code. See [Type Mapping](/sync/types) for details on how each type is handled. -## Examples +- **Primary Key**: PowerSync requires every synced table to have a primary key column named `id` of type `text`. If your backend uses a different column name or type, you'll need to map it. For MongoDB, the `_id` field automatically maps to `id`. See [Client ID](/sync/advanced/client-id) for setup instructions. 
- - - Try the [`react-supabase-todolist-sync-streams`](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-supabase-todolist-sync-streams) demo app by following the instructions in the README. - - In this demo: - - - The app syncs `lists` by default (demonstrating equivalent behavior to Sync Rules, i.e. optimized for offline-first). - - The app syncs `todos` on demand when a user opens a list. - - When the user navigates back to the same list, they won't see a loading state — demonstrating caching behavior. - - - - Try the [`supabase-todolist`](https://github.com/powersync-ja/powersync.dart/tree/main/demos/supabase-todolist) demo app, which we updated to use Sync Streams (Sync Rules are still supported). - - Deploy the following Sync Streams: - - ```yaml sync_rules.yaml - config: - edition: 2 - streams: - lists: - query: SELECT * FROM lists - auto_subscribe: true - todos: - query: SELECT * FROM todos WHERE list_id = subscription.parameter('list') - ``` - - In this demo: - - - The app syncs `lists` by default (demonstrating equivalent behavior to Sync Rules, i.e. optimized for offline-first). - - The app syncs `todos` on demand when a user opens a list. - - When the user navigates back to the same list, they won't see a loading state — demonstrating caching behavior. - - - In progress, follow along: https://github.com/powersync-ja/powersync-kotlin/pull/270 - - \ No newline at end of file +- **Case Sensitivity**: To avoid issues across different databases and platforms, use **lowercase identifiers** for all table and column names in your Sync Streams. If your backend uses mixed case, see [Case Sensitivity](/sync/advanced/case-sensitivity) for how to handle it. + +- **Bucket Limits**: PowerSync uses internal partitions called "buckets" to efficiently sync data. There's a limit of 1,000 buckets per user. Each unique combination of a stream and its parameters creates one bucket, so keep this in mind when designing streams that use subscription parameters. See [Buckets](/architecture/powersync-service#bucket-system) for more on how this works. + +- **Troubleshooting**: If data isn't syncing as expected, the [Sync Diagnostics Client](/tools/diagnostics-client) helps you inspect what's happening for a specific user — you can see which buckets the user has and what data is being synced. + +## Migrating from Sync Rules + +If you have an existing project using Sync Rules, see the [Migration Guide](/sync/streams/migration) for step-by-step instructions, syntax changes, and examples. diff --git a/sync/streams/queries.mdx b/sync/streams/queries.mdx new file mode 100644 index 00000000..1bc7c3ab --- /dev/null +++ b/sync/streams/queries.mdx @@ -0,0 +1,409 @@ +--- +title: "Writing Stream Queries" +description: Learn how to filter data using parameters and subqueries, select specific columns, and transform data types in your stream queries. +--- + +Stream queries define what data syncs to each client. You write SQL-like queries that filter, select, and transform data based on who the user is and what they need to see. + +## Basic Queries + +The simplest stream query syncs all rows from a table: + +```yaml +streams: + categories: + query: SELECT * FROM categories +``` + +Add a `WHERE` clause to filter: + +```yaml +streams: + active_products: + query: SELECT * FROM products WHERE active = true +``` + +## Filtering by User + +Most apps need to sync different data to different users. 
Use `auth.user_id()` to filter by the authenticated user: + +```yaml +streams: + my_lists: + query: SELECT * FROM lists WHERE owner_id = auth.user_id() +``` + +This syncs only the lists owned by the current user. The user ID comes from the `sub` claim in their JWT token. + +## On-Demand Data with Parameters + +For data that should only sync when the user navigates to a specific screen, use subscription parameters. The client passes these when subscribing: + +```yaml +streams: + list_todos: + query: SELECT * FROM todos WHERE list_id = subscription.parameter('list_id') +``` + +```js +// When user opens a specific list, subscribe with that list's ID +const sub = await db.syncStream('list_todos', { list_id: 'abc123' }).subscribe(); +``` + +A client can subscribe to the same stream multiple times with different parameters: + +```js +// User has two lists open +const workSub = await db.syncStream('list_todos', { list_id: 'work' }).subscribe(); +const personalSub = await db.syncStream('list_todos', { list_id: 'personal' }).subscribe(); +// Both sync independently +``` + +## Selecting Columns + +Select specific columns instead of `*` to reduce data transfer: + +```yaml +streams: + users: + query: SELECT id, name, email, avatar_url FROM users WHERE org_id = auth.parameter('org_id') +``` + +### Renaming Columns + +Use `AS` to rename columns in the synced data: + +```yaml +streams: + todos: + query: SELECT id, name, created_timestamp AS created_at FROM todos +``` + +## Using Subqueries + +Subqueries let you filter based on related tables. Use `IN (SELECT ...)` to sync data where a foreign key matches rows in another table: + +```yaml +streams: + # Sync comments for issues owned by the current user + my_issue_comments: + query: | + SELECT * FROM comments + WHERE issue_id IN (SELECT id FROM issues WHERE owner_id = auth.user_id()) +``` + +Subqueries can be nested: + +```yaml +streams: + # Sync tasks for projects in organizations the user belongs to + org_tasks: + query: | + SELECT * FROM tasks + WHERE project_id IN ( + SELECT id FROM projects WHERE org_id IN ( + SELECT org_id FROM org_members WHERE user_id = auth.user_id() + ) + ) +``` + +### Combining Parameters with Subqueries + +A common pattern is using subscription parameters to select what data to sync, while using subqueries for authorization: + +```yaml +streams: + # User subscribes with a list_id, but can only see lists they own or that are shared with them + list_items: + query: | + SELECT * FROM items + WHERE list_id = subscription.parameter('list_id') + AND list_id IN ( + SELECT id FROM lists + WHERE owner_id = auth.user_id() + OR id IN (SELECT list_id FROM list_shares WHERE shared_with = auth.user_id()) + ) +``` + +## Type Transformations + +PowerSync syncs data to SQLite on the client. You may need to transform types for compatibility. 
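+
+These transformations run in the stream query on the PowerSync Service; on the client, the result is simply another SQLite column. For example, a timestamp converted with `unixepoch()` (shown below) can be turned back into a date in app code — a minimal sketch using the JS SDK's `getAll()` call from the client examples in these docs, with illustrative table and column names:
+
+```js
+// `created_at` was selected as `unixepoch(created_at) AS created_at` in the stream query,
+// so it arrives in SQLite as a number of seconds.
+const rows = await db.getAll('SELECT id, created_at FROM events');
+const firstCreated = new Date(rows[0].created_at * 1000); // seconds -> JS Date
+```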
+ +### Cast to Text + +```yaml +streams: + items: + # Using CAST syntax + query: SELECT id, CAST(item_number AS TEXT) AS item_number FROM items + + # Or using :: syntax + # query: "SELECT id, item_number :: text AS item_number FROM items" +``` + +### Extract from JSON/JSONB + +```yaml +streams: + items: + query: SELECT id, metadata_json ->> 'description' AS description FROM items +``` + +### Convert Binary to Base64 + +```yaml +streams: + documents: + query: SELECT id, base64(thumbnail) AS thumbnail_base64 FROM documents +``` + +### Convert DateTime to Unix Epoch + +```yaml +streams: + events: + query: SELECT id, unixepoch(created_at) AS created_at FROM events +``` + +## Parameter Types + +Sync Streams support three types of parameters, each serving a different purpose. + +### Subscription Parameters + +Passed from the client when it subscribes to a stream. This is the most common way to request specific data. + +For example, if a user opens two different todo lists, the client subscribes to the same `list_todos` stream twice, once for each list: + +```yaml +streams: + list_todos: + query: SELECT * FROM todos WHERE list_id = subscription.parameter('list_id') +``` + +```js +// User opens List A - subscribe with list_id = 'list-a' +const subA = await db.syncStream('list_todos', { list_id: 'list-a' }).subscribe(); + +// User also opens List B - subscribe again with list_id = 'list-b' +const subB = await db.syncStream('list_todos', { list_id: 'list-b' }).subscribe(); + +// Both lists' todos are now syncing independently +``` + +| Function | Description | +|----------|-------------| +| `subscription.parameter('key')` | Get a single parameter by name | +| `subscription.parameters()` | All parameters as JSON (for dynamic access) | + +### Auth Parameters + +Claims from the user's JWT token. Use these to filter data based on who the user is. These values are secure and tamper-proof since they come from your authentication system. + +```yaml +streams: + my_lists: + query: SELECT * FROM lists WHERE owner_id = auth.user_id() + + # Access custom JWT claims + org_data: + query: SELECT * FROM projects WHERE org_id = auth.parameter('org_id') +``` + +| Function | Description | +|----------|-------------| +| `auth.user_id()` | The user's ID (same as `auth.parameter('sub')`) | +| `auth.parameter('key')` | Get a specific JWT claim | +| `auth.parameters()` | Full JWT payload as JSON | + +### Connection Parameters + +Specified "globally" at the connection level, before any streams are subscribed. These are the equivalent of [Client Parameters](/sync/rules/client-parameters) in Sync Rules. Use them when you need a value that applies across all streams for the session. + +```yaml +streams: + config: + query: SELECT * FROM config WHERE environment = connection.parameter('environment') +``` + +| Function | Description | +|----------|-------------| +| `connection.parameter('key')` | Get a single connection parameter | +| `connection.parameters()` | All connection parameters as JSON | + + +Changing connection parameters requires reconnecting. For values that change during a session, use subscription parameters instead. + + +### When to Use Each + +**Subscription parameters** are the most flexible option. Use them when the client needs to choose what data to sync at runtime. Each subscription operates independently, so a user can have multiple subscriptions to the same stream with different parameters. + +**Auth parameters** are the most secure option. Use them when you need to filter data based on who the user is. 
Since these values come from the signed JWT, they can't be tampered with by the client. + +**Connection parameters** apply globally across all streams for the session. Use them for values that rarely change, like environment flags or feature toggles. Keep in mind that changing them requires reconnecting. + +For most use cases, subscription parameters are the best choice. They're more flexible and work well with modern app patterns like multiple tabs. + +## Advanced Patterns + +### Syncing Related Data + +When viewing an item, sync its related data (e.g. comments) using separate streams: + +```yaml +streams: + issue: + query: | + SELECT * FROM issues + WHERE id = subscription.parameter('issue_id') + AND project_id IN (SELECT project_id FROM project_members WHERE user_id = auth.user_id()) + + issue_comments: + query: | + SELECT * FROM comments + WHERE issue_id = subscription.parameter('issue_id') + AND issue_id IN ( + SELECT id FROM issues WHERE project_id IN ( + SELECT project_id FROM project_members WHERE user_id = auth.user_id() + ) + ) +``` + +Subscribe to all when the user opens an issue: + +```js +const issueSub = await db.syncStream('issue', { issue_id: issueId }).subscribe(); +const commentsSub = await db.syncStream('issue_comments', { issue_id: issueId }).subscribe(); + +await Promise.all([ + issueSub.waitForFirstSync(), + commentsSub.waitForFirstSync() +]); +``` + +### Multi-Tenant Applications + +For apps where users belong to organizations, use JWT claims to scope data to the tenant: + +```yaml +streams: + # All projects in the user's organization (auto-sync on connect) + org_projects: + query: SELECT * FROM projects WHERE org_id = auth.parameter('org_id') + auto_subscribe: true + + # Tasks for a specific project (on-demand) + project_tasks: + query: | + SELECT * FROM tasks + WHERE project_id = subscription.parameter('project_id') + AND project_id IN (SELECT id FROM projects WHERE org_id = auth.parameter('org_id')) +``` + +### Role-Based Access + +Filter data based on user roles from JWT claims: + +```yaml +streams: + # Admins see all articles, others see only published or their own + articles: + query: | + SELECT * FROM articles + WHERE org_id = auth.parameter('org_id') + AND ( + status = 'published' + OR author_id = auth.user_id() + OR auth.parameter('role') = 'admin' + ) + auto_subscribe: true +``` + +### Conditional Global Data + +Sync data only to users who meet certain criteria. 
Use a subquery to check user properties: + +```yaml +streams: + # Only sync admin settings to users who are admins + admin_settings: + query: | + SELECT * FROM admin_settings + WHERE EXISTS ( + SELECT 1 FROM users + WHERE id = auth.user_id() AND is_admin = true + ) + auto_subscribe: true +``` + +### User's Default or Primary Item + +Sync a user's default item based on a preference stored in another table: + +```yaml +streams: + # Sync todos from the user's primary list + primary_list_todos: + query: | + SELECT * FROM todos + WHERE list_id IN ( + SELECT primary_list_id FROM users WHERE id = auth.user_id() + ) + auto_subscribe: true +``` + +### Expanding JSON Arrays + +If your JWT contains an array of values (like project IDs), use `json_each()` to expand them: + +```yaml +streams: + # User's JWT contains: { "project_ids": ["proj-1", "proj-2", "proj-3"] } + my_projects: + query: | + SELECT * FROM projects + WHERE id IN ( + SELECT value FROM json_each(auth.parameters() -> 'project_ids') + ) + auto_subscribe: true +``` + +This syncs all projects whose IDs are listed in the user's JWT `project_ids` claim. + +## Complete Example + +A full configuration combining multiple techniques: + +```yaml +config: + edition: 2 + +streams: + # Global reference data (no parameters, auto-subscribed) + categories: + query: SELECT id, name, CAST(sort_order AS TEXT) AS sort_order FROM categories + auto_subscribe: true + + # User's own items with transformed fields (auth parameter, auto-subscribed) + my_items: + query: | + SELECT + id, + name, + metadata ->> 'status' AS status, + unixepoch(created_at) AS created_at, + base64(thumbnail) AS thumbnail + FROM items + WHERE owner_id = auth.user_id() + auto_subscribe: true + + # On-demand item details (subscription parameter with auth check) + item_comments: + query: | + SELECT * FROM comments + WHERE item_id = subscription.parameter('item_id') + AND item_id IN (SELECT id FROM items WHERE owner_id = auth.user_id()) +``` + +See [Supported SQL](/sync/supported-sql) for all available operators and functions. From 9ab1686c2f8c061eca8584b039efbac12d1c1f45 Mon Sep 17 00:00:00 2001 From: Benita Volkmann Date: Thu, 5 Feb 2026 14:33:03 +0200 Subject: [PATCH 06/11] Sync Streams section with new features --- docs.json | 3 +- snippets/stream-definition-reference.mdx | 34 +++ sync/streams/ctes.mdx | 218 ++++++++++++++- sync/streams/examples.mdx | 331 ++++++++++++++++++----- sync/streams/migration.mdx | 22 +- sync/streams/overview.mdx | 21 +- sync/streams/parameters.mdx | 145 ++++++++++ sync/streams/queries.mdx | 323 +++++++++------------- 8 files changed, 805 insertions(+), 292 deletions(-) create mode 100644 snippets/stream-definition-reference.mdx create mode 100644 sync/streams/parameters.mdx diff --git a/docs.json b/docs.json index bff8946a..11f47c1c 100644 --- a/docs.json +++ b/docs.json @@ -171,10 +171,11 @@ "group": "Sync Streams (Beta)", "pages": [ "sync/streams/overview", + "sync/streams/parameters", "sync/streams/queries", "sync/streams/ctes", - "sync/streams/client-usage", "sync/streams/examples", + "sync/streams/client-usage", "sync/streams/migration" ] }, diff --git a/snippets/stream-definition-reference.mdx b/snippets/stream-definition-reference.mdx new file mode 100644 index 00000000..a4e6f8d5 --- /dev/null +++ b/snippets/stream-definition-reference.mdx @@ -0,0 +1,34 @@ +```yaml +config: + edition: 2 + +with: + # Global CTEs (optional) - reusable subqueries available to all streams + cte_name: SELECT ... FROM ... 
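+  # For example (illustrative names only):
+  # user_orgs: SELECT org_id FROM org_members WHERE user_id = auth.user_id()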
+ +streams: + stream_name: + # Query options (use one) + query: SELECT * FROM table WHERE ... # Single query + queries: # Multiple queries (same bucket) + - SELECT * FROM table_a WHERE ... + - SELECT * FROM table_b WHERE ... + + # Stream-scoped CTEs (optional) + with: + local_cte: SELECT ... FROM ... + + # Behavior options + auto_subscribe: true # Auto-subscribe clients on connect (default: false) + priority: 1 # Sync priority, lower = higher priority (optional) + accept_potentially_dangerous_queries: true # Silence security warnings (default: false) +``` + +| Option | Default | Description | +|--------|---------|-------------| +| `query` | — | SQL-like query defining which data to sync. Use either `query` or `queries`, not both. | +| `queries` | — | Array of queries sharing the same bucket. See [Multiple Queries](/sync/streams/queries#multiple-queries-per-stream). | +| `with` | — | Stream-scoped [CTEs](/sync/streams/ctes) available only to this stream's queries. | +| `auto_subscribe` | `false` | When `true`, clients automatically subscribe on connect. | +| `priority` | — | Sync priority (lower value = higher priority). See [Prioritized Sync](/sync/advanced/prioritized-sync). | +| `accept_potentially_dangerous_queries` | `false` | Silences security warnings when queries use client-controlled parameters without JWT-based authorization. Set to `true` only if you've verified the query is safe. | diff --git a/sync/streams/ctes.mdx b/sync/streams/ctes.mdx index d420da25..f67b727c 100644 --- a/sync/streams/ctes.mdx +++ b/sync/streams/ctes.mdx @@ -1,6 +1,218 @@ --- -title: "Using CTEs" -description: Reuse common query patterns across multiple Sync Streams using Common Table Expressions (CTEs). +title: "Common Table Expressions (CTEs)" +description: Reuse common query patterns across multiple streams to simplify complex configurations and improve efficiency. --- -todo +When multiple streams need the same filtering logic, you can define it once using a Common Table Expression (CTE) and reuse it everywhere. This keeps your configuration DRY and makes it easier to maintain. + +## Why Use CTEs + +Consider an app where users belong to organizations. Many streams need to filter by the user's organization: + +```yaml +# Without CTEs - repetitive and hard to maintain +streams: + org_projects: + query: | + SELECT * FROM projects + WHERE org_id IN (SELECT org_id FROM org_members WHERE user_id = auth.user_id()) + + org_repositories: + query: | + SELECT * FROM repositories + WHERE org_id IN (SELECT org_id FROM org_members WHERE user_id = auth.user_id()) + + org_settings: + query: | + SELECT * FROM settings + WHERE org_id IN (SELECT org_id FROM org_members WHERE user_id = auth.user_id()) +``` + +The same subquery appears three times. If the membership logic changes, you'd need to update all three. CTEs solve this: + +```yaml +# With CTEs - define once, use everywhere +with: + user_orgs: SELECT org_id FROM org_members WHERE user_id = auth.user_id() + +streams: + org_projects: + query: SELECT * FROM projects WHERE org_id IN user_orgs + + org_repositories: + query: SELECT * FROM repositories WHERE org_id IN user_orgs + + org_settings: + query: SELECT * FROM settings WHERE org_id IN user_orgs +``` + +## Defining CTEs + +CTEs are defined in a `with` block. 
Each CTE has a name and a SELECT query: + +```yaml +with: + cte_name: SELECT columns FROM table WHERE conditions +``` + +The CTE query can include any filtering logic, including parameters: + +```yaml +with: + user_orgs: SELECT org_id FROM org_members WHERE user_id = auth.user_id() + active_projects: SELECT id FROM projects WHERE archived = false +``` + +## Using CTEs in Queries + +Once defined, use a CTE name anywhere you'd use a subquery. There are two syntaxes: + +**Short-hand syntax** (recommended for simple cases): + +```yaml +streams: + projects: + query: SELECT * FROM projects WHERE org_id IN user_orgs +``` + +**Explicit subquery syntax** (when you need to select specific columns): + +```yaml +streams: + projects: + query: SELECT * FROM projects WHERE org_id IN (SELECT org_id FROM user_orgs) +``` + +Both forms work the same way. The short-hand `IN cte_name` is equivalent to `IN (SELECT * FROM cte_name)`. + +## Global vs Stream-Scoped CTEs + +### Global CTEs + +Define CTEs at the top level to make them available to all streams: + +```yaml +with: + user_orgs: SELECT org_id FROM org_members WHERE user_id = auth.user_id() + +streams: + projects: + query: SELECT * FROM projects WHERE org_id IN user_orgs + + tasks: + query: SELECT * FROM tasks WHERE project_id IN (SELECT id FROM projects WHERE org_id IN user_orgs) +``` + +### Stream-Scoped CTEs + +Define CTEs inside a stream to limit their scope to that stream: + +```yaml +streams: + project_data: + with: + accessible_projects: | + SELECT id FROM projects + WHERE org_id IN (SELECT org_id FROM org_members WHERE user_id = auth.user_id()) + queries: + - SELECT * FROM projects WHERE id IN accessible_projects + - SELECT * FROM tasks WHERE project_id IN accessible_projects + - SELECT * FROM comments WHERE project_id IN accessible_projects +``` + +Stream-scoped CTEs are useful when: +- The CTE is only relevant to one stream +- You want to keep related logic together +- You're using [multiple queries per stream](#combining-with-multiple-queries) + +## Combining with Multiple Queries + +CTEs work well with the `queries` feature (multiple queries per stream). This lets you share both the CTE and the bucket: + +```yaml +streams: + user_data: + with: + my_org: SELECT org_id FROM org_members WHERE user_id = auth.user_id() + queries: + - SELECT * FROM projects WHERE org_id IN my_org + - SELECT * FROM repositories WHERE org_id IN my_org + - SELECT * FROM team_members WHERE org_id IN my_org +``` + +All three queries share: +1. The CTE definition (no repeated subquery logic) +2. The same bucket (efficient sync, no duplicate data) + +See [Multiple Queries per Stream](/sync/streams/queries#multiple-queries-per-stream) for more details. 
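+
+On the client, a multi-query stream is still a single stream: one subscription syncs the rows from all of its queries. A rough JavaScript sketch using the same `syncStream(...).subscribe()` API shown in [Client-Side Usage](/sync/streams/client-usage) — this assumes the parameters argument can be omitted because `user_data` takes no subscription parameters:
+
+```js
+// One subscription covers projects, repositories and team_members for the user's orgs.
+const sub = await db.syncStream('user_data').subscribe();
+
+// Wait until all queries in the stream have completed their first sync.
+await sub.waitForFirstSync();
+
+// The rows are now available in the local SQLite tables.
+const projects = await db.getAll('SELECT * FROM projects');
+
+// Unsubscribe when no longer needed; data stays cached for the subscription's TTL.
+sub.unsubscribe();
+```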
+ +## Complete Example + +A full configuration showing CTEs in practice: + +```yaml +config: + edition: 2 + +with: + # User's organizations (used in multiple streams) + user_orgs: | + SELECT org_id FROM org_memberships WHERE user_id = auth.user_id() + + # User's accessible projects (combines org membership with project access) + accessible_projects: | + SELECT id FROM projects + WHERE org_id IN user_orgs + OR id IN (SELECT project_id FROM project_shares WHERE shared_with = auth.user_id()) + +streams: + # Organization-level data (auto-sync) + organizations: + query: SELECT * FROM organizations WHERE id IN user_orgs + auto_subscribe: true + + projects: + query: SELECT * FROM projects WHERE id IN accessible_projects + auto_subscribe: true + + # Project details (on-demand with authorization) + project_tasks: + query: | + SELECT * FROM tasks + WHERE project_id = subscription.parameter('project_id') + AND project_id IN accessible_projects + + project_files: + query: | + SELECT * FROM files + WHERE project_id = subscription.parameter('project_id') + AND project_id IN accessible_projects +``` + +## Limitations + +**CTEs cannot reference other CTEs.** Each CTE must be self-contained: + +```yaml +# This won't work - cte2 cannot reference cte1 +with: + cte1: SELECT org_id FROM org_members WHERE user_id = auth.user_id() + cte2: SELECT id FROM projects WHERE org_id IN cte1 # Error! +``` + +If you need to chain filters, use nested subqueries in your stream query instead: + +```yaml +with: + user_orgs: SELECT org_id FROM org_members WHERE user_id = auth.user_id() + +streams: + tasks: + query: | + SELECT * FROM tasks + WHERE project_id IN ( + SELECT id FROM projects WHERE org_id IN user_orgs + ) +``` + +**CTE names take precedence over table names.** If you define a CTE with the same name as a database table, the CTE will be used. Choose distinct names to avoid confusion. diff --git a/sync/streams/examples.mdx b/sync/streams/examples.mdx index cd1d76c8..0e4886b0 100644 --- a/sync/streams/examples.mdx +++ b/sync/streams/examples.mdx @@ -1,61 +1,176 @@ --- -title: "Examples & Demos" -description: Working demo apps and complete Sync Streams examples for common patterns. +title: "Examples, Patterns & Demos" +description: Common patterns, use case examples, and working demo apps for Sync Streams. +sidebarTitle: "Examples & Demos" --- -Explore working demo apps that demonstrate Sync Streams in action. +## Common Patterns -## Demo Apps +These patterns show how to combine Sync Streams features to solve common real-world scenarios. -These demo apps show how to combine auto-subscribe streams (for data that should always be available) with on-demand streams (for data loaded when needed). +### Multi-Tenant Applications - - -Try the [`react-supabase-todolist-sync-streams`](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-supabase-todolist-sync-streams) demo app by following the instructions in the README. +For apps where users belong to organizations, use JWT claims to scope data to the tenant: -In this demo: -- The app syncs `lists` by default, so they're available immediately and offline (demonstrating auto-subscribe behavior). -- The app syncs `todos` on demand when a user opens a list (demonstrating subscription parameters). -- When the user navigates back to the same list, they won't see a loading state, because the data is cached locally (demonstrating TTL caching behavior). 
- - -Try the [`supabase-todolist`](https://github.com/powersync-ja/powersync.dart/tree/main/demos/supabase-todolist) demo app, which supports Sync Streams. +```yaml +streams: + # All projects in the user's organization (auto-sync on connect) + org_projects: + query: SELECT * FROM projects WHERE org_id = auth.parameter('org_id') + auto_subscribe: true -Deploy the following Sync Streams configuration: + # Tasks for a specific project (on-demand) + project_tasks: + query: | + SELECT * FROM tasks + WHERE project_id = subscription.parameter('project_id') + AND project_id IN (SELECT id FROM projects WHERE org_id = auth.parameter('org_id')) +``` + +For more complex organization structures where users can belong to multiple organizations, see [Expanding JSON Arrays](/sync/streams/parameters#expanding-json-arrays). + +### Role-Based Access + +Filter data based on user roles from JWT claims: ```yaml -config: - edition: 2 +streams: + # Admins see all articles, others see only published or their own + articles: + query: | + SELECT * FROM articles + WHERE org_id = auth.parameter('org_id') + AND ( + status = 'published' + OR author_id = auth.user_id() + OR auth.parameter('role') = 'admin' + ) + auto_subscribe: true +``` +### Shared Resources + +Sync items that are either owned by the user or explicitly shared with them: + +```yaml streams: - lists: - query: SELECT * FROM lists + my_documents: + query: | + SELECT * FROM documents + WHERE owner_id = auth.user_id() + OR id IN (SELECT document_id FROM document_shares WHERE shared_with = auth.user_id()) auto_subscribe: true - todos: - query: SELECT * FROM todos WHERE list_id = subscription.parameter('list') ``` -In this demo: -- The app syncs `lists` by default, so they're available immediately and offline (demonstrating auto-subscribe behavior). -- The app syncs `todos` on demand when a user opens a list (demonstrating subscription parameters). -- When the user navigates back to the same list, they won't see a loading state, because the data is cached locally (demonstrating TTL caching behavior). - - -Kotlin Sync Streams support is available. Demo app coming soon. - - -Swift Sync Streams support is available. Demo app coming soon. - - -.NET Sync Streams support is available. Demo app coming soon. - - +### Syncing Related Data -## Common Patterns +When viewing an item, sync its related data (e.g. comments) using separate streams that share a subscription parameter: + +```yaml +streams: + issue: + query: | + SELECT * FROM issues + WHERE id = subscription.parameter('issue_id') + AND project_id IN (SELECT project_id FROM project_members WHERE user_id = auth.user_id()) + + issue_comments: + query: | + SELECT * FROM comments + WHERE issue_id = subscription.parameter('issue_id') + AND issue_id IN ( + SELECT id FROM issues WHERE project_id IN ( + SELECT project_id FROM project_members WHERE user_id = auth.user_id() + ) + ) +``` + +Subscribe to both when the user opens an issue: + +```js +const issueSub = await db.syncStream('issue', { issue_id: issueId }).subscribe(); +const commentsSub = await db.syncStream('issue_comments', { issue_id: issueId }).subscribe(); + +await Promise.all([ + issueSub.waitForFirstSync(), + commentsSub.waitForFirstSync() +]); +``` + + +If multiple streams share the same filtering logic, consider using [CTEs](/sync/streams/ctes) to avoid repetition and [multiple queries per stream](/sync/streams/queries#multiple-queries-per-stream) to reduce bucket count. 
+ + +### Conditional Global Data + +Sync data only to users who meet certain criteria: + +```yaml +streams: + # Only sync admin settings to users who are admins + admin_settings: + query: | + SELECT * FROM admin_settings + WHERE EXISTS ( + SELECT 1 FROM users + WHERE id = auth.user_id() AND is_admin = true + ) + auto_subscribe: true +``` + +### User's Default or Primary Item + +Sync a user's default item based on a preference stored in another table: + +```yaml +streams: + # Sync todos from the user's primary list + primary_list_todos: + query: | + SELECT * FROM todos + WHERE list_id IN ( + SELECT primary_list_id FROM users WHERE id = auth.user_id() + ) + auto_subscribe: true +``` + +### Hierarchical Data + +Sync data across a parent-child hierarchy: -### Todo List with On-Demand Loading +```yaml +streams: + org_tasks: + query: | + SELECT * FROM tasks + WHERE project_id IN ( + SELECT id FROM projects WHERE org_id IN ( + SELECT org_id FROM org_members WHERE user_id = auth.user_id() + ) + ) + auto_subscribe: true +``` -A classic pattern: sync the list of lists upfront, but only sync todos when the user opens a specific list. +For deeply nested hierarchies, consider using [joins](/sync/streams/queries#using-joins) for better readability: + +```yaml +streams: + org_tasks: + query: | + SELECT t.* FROM tasks t + JOIN projects p ON t.project_id = p.id + JOIN org_members om ON p.org_id = om.org_id + WHERE om.user_id = auth.user_id() + auto_subscribe: true +``` + +## Use Case Examples + +Complete configurations for common application types. + +### Todo List App + +Sync the list of lists upfront, but only sync todos when the user opens a specific list: ```yaml config: @@ -89,56 +204,146 @@ await sub.waitForFirstSync(); const todos = await db.getAll('SELECT * FROM todos WHERE list_id = ?', [selectedListId]); ``` -### Project Workspace +### Chat Application + +Sync conversation list upfront, load messages on demand: + +```yaml +config: + edition: 2 + +streams: + # User's conversations - always show the conversation list + my_conversations: + query: | + SELECT * FROM conversations + WHERE id IN (SELECT conversation_id FROM participants WHERE user_id = auth.user_id()) + auto_subscribe: true + + # Messages - only load for the active conversation + conversation_messages: + query: | + SELECT * FROM messages + WHERE conversation_id = subscription.parameter('conversation_id') + AND conversation_id IN ( + SELECT conversation_id FROM participants WHERE user_id = auth.user_id() + ) +``` + +### Project Management App -Sync project metadata upfront, but load project contents on demand. 
+A full configuration for a multi-tenant project management app using [CTEs](/sync/streams/ctes): ```yaml config: edition: 2 +with: + # CTE for user's accessible projects + user_projects: | + SELECT id FROM projects + WHERE org_id = auth.parameter('org_id') + AND (is_public = true OR id IN ( + SELECT project_id FROM project_members WHERE user_id = auth.user_id() + )) + streams: - # User's projects - always available for navigation - my_projects: - query: SELECT * FROM projects WHERE owner_id = auth.user_id() + # Organization data - always available + org_info: + query: SELECT * FROM organizations WHERE id = auth.parameter('org_id') + auto_subscribe: true + + # All accessible projects - always available for navigation + projects: + query: SELECT * FROM projects WHERE id IN user_projects auto_subscribe: true - # Project details - loaded when user opens a project + # Project details - on demand when user opens a project project_tasks: query: | SELECT * FROM tasks WHERE project_id = subscription.parameter('project_id') - AND project_id IN (SELECT id FROM projects WHERE owner_id = auth.user_id()) + AND project_id IN user_projects project_files: query: | SELECT * FROM files WHERE project_id = subscription.parameter('project_id') - AND project_id IN (SELECT id FROM projects WHERE owner_id = auth.user_id()) + AND project_id IN user_projects ``` -### Chat Application +### Organization Workspace (Using Multiple Queries) -Sync conversation list upfront, load messages on demand. +Group related organization data into a single bucket using [multiple queries per stream](/sync/streams/queries#multiple-queries-per-stream): ```yaml config: edition: 2 +with: + user_orgs: SELECT org_id FROM org_members WHERE user_id = auth.user_id() + streams: - # User's conversations - always show the conversation list - my_conversations: - query: | - SELECT * FROM conversations - WHERE id IN (SELECT conversation_id FROM participants WHERE user_id = auth.user_id()) + # All org-level data syncs together in one bucket + org_data: + queries: + - SELECT * FROM organizations WHERE id IN user_orgs + - SELECT * FROM projects WHERE org_id IN user_orgs + - SELECT * FROM team_members WHERE org_id IN user_orgs auto_subscribe: true - # Messages - only load for the active conversation - conversation_messages: - query: | - SELECT * FROM messages - WHERE conversation_id = subscription.parameter('conversation_id') - AND conversation_id IN ( - SELECT conversation_id FROM participants WHERE user_id = auth.user_id() - ) + # Project details - on demand + project_details: + with: + accessible_projects: SELECT id FROM projects WHERE org_id IN user_orgs + queries: + - SELECT * FROM tasks WHERE project_id = subscription.parameter('project_id') AND project_id IN accessible_projects + - SELECT * FROM files WHERE project_id = subscription.parameter('project_id') AND project_id IN accessible_projects + - SELECT * FROM comments WHERE project_id = subscription.parameter('project_id') AND project_id IN accessible_projects +``` + +## Demo Apps + +Working demo apps that demonstrate Sync Streams in action. These show how to combine auto-subscribe streams (for data that should always be available) with on-demand streams (for data loaded when needed). + + + +Try the [`react-supabase-todolist-sync-streams`](https://github.com/powersync-ja/powersync-js/tree/main/demos/react-supabase-todolist-sync-streams) demo app by following the instructions in the README. 
+ +In this demo: +- The app syncs `lists` by default, so they're available immediately and offline (demonstrating auto-subscribe behavior). +- The app syncs `todos` on demand when a user opens a list (demonstrating subscription parameters). +- When the user navigates back to the same list, they won't see a loading state, because the data is cached locally (demonstrating TTL caching behavior). + + +Try the [`supabase-todolist`](https://github.com/powersync-ja/powersync.dart/tree/main/demos/supabase-todolist) demo app, which supports Sync Streams. + +Deploy the following Sync Streams configuration: + +```yaml +config: + edition: 2 + +streams: + lists: + query: SELECT * FROM lists + auto_subscribe: true + todos: + query: SELECT * FROM todos WHERE list_id = subscription.parameter('list') ``` + +In this demo: +- The app syncs `lists` by default, so they're available immediately and offline (demonstrating auto-subscribe behavior). +- The app syncs `todos` on demand when a user opens a list (demonstrating subscription parameters). +- When the user navigates back to the same list, they won't see a loading state, because the data is cached locally (demonstrating TTL caching behavior). + + +Kotlin Sync Streams support is available. Demo app coming soon. + + +Swift Sync Streams support is available. Demo app coming soon. + + +.NET Sync Streams support is available. Demo app coming soon. + + diff --git a/sync/streams/migration.mdx b/sync/streams/migration.mdx index 6e9861a6..5250ad05 100644 --- a/sync/streams/migration.mdx +++ b/sync/streams/migration.mdx @@ -3,6 +3,8 @@ title: "Migrating from Sync Rules" description: How to migrate existing projects from Sync Rules to Sync Streams. --- +import StreamDefinitionReference from '/snippets/stream-definition-reference.mdx'; + ## Why Migrate? PowerSync's original Sync Rules system was optimized for offline-first use cases where you want to "sync everything upfront" when the client connects, so data is available locally if the user goes offline. @@ -98,25 +100,9 @@ try await db.connect(connector: connector, options: ConnectOptions( Use the [Sync Rules to Sync Streams converter](https://powersync-community.github.io/bucket-definitions-to-sync-streams/) to automatically convert your existing Sync Rules to Sync Streams. This tool handles most common patterns and gives you a starting point for your migration. -## Stream Options - -All available stream options: - -```yaml -streams: - my_stream: - query: SELECT * FROM table WHERE ... - auto_subscribe: true - priority: 1 - accept_potentially_dangerous_queries: true -``` +## Stream Definition Reference -| Option | Default | Description | -|--------|---------|-------------| -| `query` | (required) | SQL-like query defining which data to sync | -| `auto_subscribe` | `false` | When `true`, clients automatically subscribe to this stream on connect | -| `priority` | — | Sync priority (lower value = higher priority). See [Prioritized Sync](/sync/advanced/prioritized-sync) | -| `accept_potentially_dangerous_queries` | `false` | Silences security warnings. PowerSync warns when queries use subscription or connection parameters without also including JWT-based authorization (e.g., `auth.user_id()`). Since clients can send any value for these parameters, relying on them alone for access control could be insecure. Set to `true` if you've verified the query is safe or authorization is handled elsewhere. 
| + ## Migration Examples diff --git a/sync/streams/overview.mdx b/sync/streams/overview.mdx index 238bc3b8..e18decd4 100644 --- a/sync/streams/overview.mdx +++ b/sync/streams/overview.mdx @@ -4,6 +4,8 @@ description: Sync Streams enable partial sync, letting you define exactly which sidebarTitle: "Quickstart" --- +import StreamDefinitionReference from '/snippets/stream-definition-reference.mdx'; + Sync Streams enable partial sync — instead of syncing entire tables, you tell PowerSync exactly which data each user should have on their device. You write simple SQL-like queries to define the data, and your client app subscribes to the streams it needs. PowerSync handles the rest, keeping data in sync in real-time and making it available offline. For example, you might create a stream that syncs only the current user's todo items, another for shared projects they have access to, and another for reference data that everyone needs. Your app subscribes to these streams on demand, and only that data syncs to the client's local SQLite database. @@ -55,6 +57,10 @@ sync_config: +Available stream options: + + + ## Basic Examples There are two independent concepts to understand: @@ -180,7 +186,7 @@ streams: const sub = await db.syncStream('list_todos', { list_id: 'abc123' }).subscribe(); ``` -See [Writing Stream Queries](/sync/streams/queries) for the full parameter reference, subqueries, and more patterns. +See [Using Parameters](/sync/streams/parameters) for the full reference on subscription, auth, and connection parameters. ## Client-Side Usage @@ -295,14 +301,9 @@ Each subscription has a `ttl` that keeps data cached after unsubscribing. This e const sub = await db.syncStream('todos', { list_id: 'abc' }) .subscribe({ ttl: 3600 }); // Cache for 1 hour after unsubscribe ``` - -## Examples & Demos - -See [Examples & Demos](/sync/streams/examples) for common examples and demo apps that can be used as a reference for your own project. - ## Developer Notes -- **SQL Syntax**: Stream queries use a SQL-like syntax, but only `SELECT` statements are supported. You can use `IN (SELECT ...)` subqueries for filtering, but `JOIN`, `GROUP BY`, `ORDER BY`, and `LIMIT` are not available. See [Supported SQL](/sync/supported-sql) for the full list of supported operators and functions. +- **SQL Syntax**: Stream queries use a SQL-like syntax with `SELECT` statements. You can use subqueries, `INNER JOIN`, and [CTEs](/sync/streams/ctes) for filtering. `GROUP BY`, `ORDER BY`, and `LIMIT` are not supported. See [Writing Stream Queries](/sync/streams/queries) for details on joins, multiple queries per stream, and other features. - **Type Conversion**: Data types from your backend database (Postgres, MongoDB, MySQL, etc.) are converted when synced to the client's SQLite database. Most types become `text`, so you may need to parse or cast values in your app code. See [Type Mapping](/sync/types) for details on how each type is handled. @@ -310,10 +311,14 @@ See [Examples & Demos](/sync/streams/examples) for common examples and demo apps - **Case Sensitivity**: To avoid issues across different databases and platforms, use **lowercase identifiers** for all table and column names in your Sync Streams. If your backend uses mixed case, see [Case Sensitivity](/sync/advanced/case-sensitivity) for how to handle it. -- **Bucket Limits**: PowerSync uses internal partitions called "buckets" to efficiently sync data. There's a limit of 1,000 buckets per user. 
Each unique combination of a stream and its parameters creates one bucket, so keep this in mind when designing streams that use subscription parameters. See [Buckets](/architecture/powersync-service#bucket-system) for more on how this works. +- **Bucket Limits**: PowerSync uses internal partitions called "buckets" to efficiently sync data. There's a limit of 1,000 buckets per user. Each unique combination of a stream and its parameters creates one bucket, so keep this in mind when designing streams that use subscription parameters. You can use [multiple queries per stream](/sync/streams/queries#multiple-queries-per-stream) to reduce bucket count. See [Buckets](/architecture/powersync-service#bucket-system) for background on this. - **Troubleshooting**: If data isn't syncing as expected, the [Sync Diagnostics Client](/tools/diagnostics-client) helps you inspect what's happening for a specific user — you can see which buckets the user has and what data is being synced. +## Examples & Demos + +See [Examples & Demos](/sync/streams/examples) for working demo apps and complete application patterns. + ## Migrating from Sync Rules If you have an existing project using Sync Rules, see the [Migration Guide](/sync/streams/migration) for step-by-step instructions, syntax changes, and examples. diff --git a/sync/streams/parameters.mdx b/sync/streams/parameters.mdx new file mode 100644 index 00000000..ece2efe0 --- /dev/null +++ b/sync/streams/parameters.mdx @@ -0,0 +1,145 @@ +--- +title: "Using Parameters" +description: Filter data dynamically using subscription, auth, and connection parameters in your stream queries. +--- + +Parameters let you filter data dynamically based on who the user is and what they need to see. Sync Streams support three types of parameters, each serving a different purpose. + +## Subscription Parameters + +Passed from the client when it subscribes to a stream. This is the most common way to request specific data on demand. + +For example, if a user opens two different todo lists, the client subscribes to the same `list_todos` stream twice, once for each list: + +```yaml +streams: + list_todos: + query: SELECT * FROM todos WHERE list_id = subscription.parameter('list_id') +``` + +```js +// User opens List A - subscribe with list_id = 'list-a' +const subA = await db.syncStream('list_todos', { list_id: 'list-a' }).subscribe(); + +// User also opens List B - subscribe again with list_id = 'list-b' +const subB = await db.syncStream('list_todos', { list_id: 'list-b' }).subscribe(); + +// Both lists' todos are now syncing independently +``` + +| Function | Description | +|----------|-------------| +| `subscription.parameter('key')` | Get a single parameter by name | +| `subscription.parameters()` | All parameters as JSON (for dynamic access) | + +## Auth Parameters + +Claims from the user's JWT token. Use these to filter data based on who the user is. These values are secure and tamper-proof since they come from your authentication system. 
+ +```yaml +streams: + my_lists: + query: SELECT * FROM lists WHERE owner_id = auth.user_id() + + # Access custom JWT claims + org_data: + query: SELECT * FROM projects WHERE org_id = auth.parameter('org_id') +``` + +| Function | Description | +|----------|-------------| +| `auth.user_id()` | The user's ID (same as `auth.parameter('sub')`) | +| `auth.parameter('key')` | Get a specific JWT claim | +| `auth.parameters()` | Full JWT payload as JSON | + +## Connection Parameters + +Specified "globally" at the connection level, before any streams are subscribed. These are the equivalent of [Client Parameters](/sync/rules/client-parameters) in Sync Rules. Use them when you need a value that applies across all streams for the session. + +```yaml +streams: + config: + query: SELECT * FROM config WHERE environment = connection.parameter('environment') +``` + +| Function | Description | +|----------|-------------| +| `connection.parameter('key')` | Get a single connection parameter | +| `connection.parameters()` | All connection parameters as JSON | + + +Changing connection parameters requires reconnecting. For values that change during a session, use subscription parameters instead. + + +## When to Use Each + +**Subscription parameters** are the most flexible option. Use them when the client needs to choose what data to sync at runtime. Each subscription operates independently, so a user can have multiple subscriptions to the same stream with different parameters. + +**Auth parameters** are the most secure option. Use them when you need to filter data based on who the user is. Since these values come from the signed JWT, they can't be tampered with by the client. + +**Connection parameters** apply globally across all streams for the session. Use them for values that rarely change, like environment flags or feature toggles. Keep in mind that changing them requires reconnecting. + +For most use cases, subscription parameters are the best choice. They're more flexible and work well with modern app patterns like multiple tabs. + +## Expanding JSON Arrays + +If your JWT or connection parameters contain an array of values (like project IDs), you can expand them to filter data. There are three equivalent ways to write this: + +**Shorthand syntax** (recommended): + +```yaml +streams: + # User's JWT contains: { "project_ids": ["proj-1", "proj-2", "proj-3"] } + my_projects: + query: SELECT * FROM projects WHERE id IN auth.parameter('project_ids') + auto_subscribe: true +``` + +**JOIN syntax** with table-valued function: + +```yaml +streams: + my_projects: + query: | + SELECT p.* FROM projects p + JOIN json_each(auth.parameter('project_ids')) AS allowed + WHERE p.id = allowed.value + auto_subscribe: true +``` + +**Subquery syntax**: + +```yaml +streams: + my_projects: + query: | + SELECT * FROM projects + WHERE id IN (SELECT value FROM json_each(auth.parameter('project_ids'))) + auto_subscribe: true +``` + +All three sync the same data: projects whose IDs are in the user's JWT `project_ids` claim. + + +`json_each()` only works with auth and connection parameters. You cannot use it on columns from joined tables. + + +## Combining Parameters + +You can combine different parameter types in a single query. 
A common pattern is using subscription parameters for on-demand data while using auth parameters for authorization: + +```yaml +streams: + # User subscribes with a list_id, but can only see lists they have access to + list_items: + query: | + SELECT * FROM items + WHERE list_id = subscription.parameter('list_id') + AND list_id IN ( + SELECT id FROM lists + WHERE owner_id = auth.user_id() + OR id IN (SELECT list_id FROM list_shares WHERE shared_with = auth.user_id()) + ) +``` + +See [Writing Queries](/sync/streams/queries) for more filtering techniques using subqueries and joins. diff --git a/sync/streams/queries.mdx b/sync/streams/queries.mdx index 1bc7c3ab..e5c5ba64 100644 --- a/sync/streams/queries.mdx +++ b/sync/streams/queries.mdx @@ -1,9 +1,10 @@ --- -title: "Writing Stream Queries" -description: Learn how to filter data using parameters and subqueries, select specific columns, and transform data types in your stream queries. +title: "Writing Queries" +description: Learn query syntax for filtering with subqueries and joins, selecting columns, and transforming data types. +sidebarTitle: "Writing Queries" --- -Stream queries define what data syncs to each client. You write SQL-like queries that filter, select, and transform data based on who the user is and what they need to see. +Stream queries define what data syncs to each client. This page covers query syntax: filtering, selecting columns, and transforming data. For parameter usage, see [Using Parameters](/sync/streams/parameters). For real-world patterns, see [Examples & Patterns](/sync/streams/examples). ## Basic Queries @@ -37,7 +38,7 @@ This syncs only the lists owned by the current user. The user ID comes from the ## On-Demand Data with Parameters -For data that should only sync when the user navigates to a specific screen, use subscription parameters. The client passes these when subscribing: +For data that should only sync when the user navigates to a specific screen, use subscription parameters. The client passes these when subscribing to a stream: ```yaml streams: @@ -50,14 +51,7 @@ streams: const sub = await db.syncStream('list_todos', { list_id: 'abc123' }).subscribe(); ``` -A client can subscribe to the same stream multiple times with different parameters: - -```js -// User has two lists open -const workSub = await db.syncStream('list_todos', { list_id: 'work' }).subscribe(); -const personalSub = await db.syncStream('list_todos', { list_id: 'personal' }).subscribe(); -// Both sync independently -``` +See [Using Parameters](/sync/streams/parameters) for the full reference on subscription, auth, and connection parameters. ## Selecting Columns @@ -79,6 +73,25 @@ streams: query: SELECT id, name, created_timestamp AS created_at FROM todos ``` +### Type Transformations + +PowerSync syncs data to SQLite on the client. You may need to transform types for compatibility: + +```yaml +streams: + items: + query: | + SELECT + id, + CAST(item_number AS TEXT) AS item_number, -- Cast to text + metadata_json ->> 'description' AS description, -- Extract from JSON + base64(thumbnail) AS thumbnail_base64, -- Binary to base64 + unixepoch(created_at) AS created_at -- DateTime to epoch + FROM items +``` + +See [Type Mapping](/sync/types) for details on how each database type is handled. + ## Using Subqueries Subqueries let you filter based on related tables. 
Use `IN (SELECT ...)` to sync data where a foreign key matches rows in another table: @@ -92,7 +105,9 @@ streams: WHERE issue_id IN (SELECT id FROM issues WHERE owner_id = auth.user_id()) ``` -Subqueries can be nested: +### Nested Subqueries + +Subqueries can be nested to traverse multiple levels of relationships. This is useful for normalized database schemas: ```yaml streams: @@ -125,252 +140,162 @@ streams: ) ``` -## Type Transformations - -PowerSync syncs data to SQLite on the client. You may need to transform types for compatibility. +## Using Joins -### Cast to Text +For complex queries that traverse multiple tables, join syntax is often easier to read than nested subqueries. You can use `JOIN` or `INNER JOIN` (they're equivalent): ```yaml streams: - items: - # Using CAST syntax - query: SELECT id, CAST(item_number AS TEXT) AS item_number FROM items - - # Or using :: syntax - # query: "SELECT id, item_number :: text AS item_number FROM items" -``` - -### Extract from JSON/JSONB - -```yaml -streams: - items: - query: SELECT id, metadata_json ->> 'description' AS description FROM items -``` - -### Convert Binary to Base64 - -```yaml -streams: - documents: - query: SELECT id, base64(thumbnail) AS thumbnail_base64 FROM documents -``` - -### Convert DateTime to Unix Epoch - -```yaml -streams: - events: - query: SELECT id, unixepoch(created_at) AS created_at FROM events + # Nested subquery version + user_comments: + query: | + SELECT * FROM comments WHERE issue_id IN ( + SELECT id FROM issues WHERE project_id IN ( + SELECT project_id FROM project_members WHERE user_id = auth.user_id() + ) + ) ``` -## Parameter Types - -Sync Streams support three types of parameters, each serving a different purpose. - -### Subscription Parameters - -Passed from the client when it subscribes to a stream. This is the most common way to request specific data. - -For example, if a user opens two different todo lists, the client subscribes to the same `list_todos` stream twice, once for each list: +The same query using joins: ```yaml streams: - list_todos: - query: SELECT * FROM todos WHERE list_id = subscription.parameter('list_id') -``` - -```js -// User opens List A - subscribe with list_id = 'list-a' -const subA = await db.syncStream('list_todos', { list_id: 'list-a' }).subscribe(); - -// User also opens List B - subscribe again with list_id = 'list-b' -const subB = await db.syncStream('list_todos', { list_id: 'list-b' }).subscribe(); - -// Both lists' todos are now syncing independently + # Join version - same result, easier to read + user_comments: + query: | + SELECT comments.* FROM comments + INNER JOIN issues ON comments.issue_id = issues.id + INNER JOIN project_members ON issues.project_id = project_members.project_id + WHERE project_members.user_id = auth.user_id() ``` -| Function | Description | -|----------|-------------| -| `subscription.parameter('key')` | Get a single parameter by name | -| `subscription.parameters()` | All parameters as JSON (for dynamic access) | +Both queries sync the same data. Choose whichever style is clearer for your use case. -### Auth Parameters +### Multiple Joins -Claims from the user's JWT token. Use these to filter data based on who the user is. These values are secure and tamper-proof since they come from your authentication system. +You can chain multiple joins to traverse complex relationships. 
This example joins four tables to sync checkpoints for assignments the user has access to: ```yaml streams: - my_lists: - query: SELECT * FROM lists WHERE owner_id = auth.user_id() - - # Access custom JWT claims - org_data: - query: SELECT * FROM projects WHERE org_id = auth.parameter('org_id') + my_checkpoints: + query: | + SELECT c.* FROM user_assignment_scope uas + JOIN assignment a ON a.id = uas.assignment_id + JOIN assignment_checkpoint ac ON ac.assignment_id = a.id + JOIN checkpoint c ON c.id = ac.checkpoint_id + WHERE uas.user_id = auth.user_id() + AND a.active = true ``` -| Function | Description | -|----------|-------------| -| `auth.user_id()` | The user's ID (same as `auth.parameter('sub')`) | -| `auth.parameter('key')` | Get a specific JWT claim | -| `auth.parameters()` | Full JWT payload as JSON | - -### Connection Parameters +### Self-Joins -Specified "globally" at the connection level, before any streams are subscribed. These are the equivalent of [Client Parameters](/sync/rules/client-parameters) in Sync Rules. Use them when you need a value that applies across all streams for the session. +You can join the same table multiple times using aliases. This is useful for finding related records through a shared relationship. For example, finding all users who share a group with the current user: ```yaml streams: - config: - query: SELECT * FROM config WHERE environment = connection.parameter('environment') + users_in_my_groups: + query: | + SELECT u.* FROM users u + JOIN group_memberships gm1 ON u.id = gm1.user_id + JOIN group_memberships gm2 ON gm1.group_id = gm2.group_id + WHERE gm2.user_id = auth.user_id() ``` -| Function | Description | -|----------|-------------| -| `connection.parameter('key')` | Get a single connection parameter | -| `connection.parameters()` | All connection parameters as JSON | - - -Changing connection parameters requires reconnecting. For values that change during a session, use subscription parameters instead. - - -### When to Use Each - -**Subscription parameters** are the most flexible option. Use them when the client needs to choose what data to sync at runtime. Each subscription operates independently, so a user can have multiple subscriptions to the same stream with different parameters. +### Join Limitations -**Auth parameters** are the most secure option. Use them when you need to filter data based on who the user is. Since these values come from the signed JWT, they can't be tampered with by the client. +Sync Streams support a subset of join functionality: -**Connection parameters** apply globally across all streams for the session. Use them for values that rarely change, like environment flags or feature toggles. Keep in mind that changing them requires reconnecting. - -For most use cases, subscription parameters are the best choice. They're more flexible and work well with modern app patterns like multiple tabs. - -## Advanced Patterns - -### Syncing Related Data - -When viewing an item, sync its related data (e.g. comments) using separate streams: +- **Only inner joins**: Use `JOIN` or `INNER JOIN`. LEFT, RIGHT, and OUTER joins are not supported. +- **Single output table**: All selected columns must come from one table (use `table.*` or `table.column`) +- **Simple join conditions**: Join conditions must be equality comparisons like `table1.column = table2.column` +- **No `json_each` on joined columns**: Table-valued functions like `json_each` only work with auth/connection parameters, not with columns from joined tables. 
```yaml -streams: - issue: - query: | - SELECT * FROM issues - WHERE id = subscription.parameter('issue_id') - AND project_id IN (SELECT project_id FROM project_members WHERE user_id = auth.user_id()) +# Valid - selecting from one table +query: SELECT comments.* FROM comments JOIN issues ON comments.issue_id = issues.id - issue_comments: - query: | - SELECT * FROM comments - WHERE issue_id = subscription.parameter('issue_id') - AND issue_id IN ( - SELECT id FROM issues WHERE project_id IN ( - SELECT project_id FROM project_members WHERE user_id = auth.user_id() - ) - ) -``` +# Invalid - selecting from multiple tables +query: SELECT comments.*, issues.title FROM comments JOIN issues ON comments.issue_id = issues.id -Subscribe to all when the user opens an issue: +# Invalid - complex join condition +query: SELECT * FROM a JOIN b ON a.x > b.y -```js -const issueSub = await db.syncStream('issue', { issue_id: issueId }).subscribe(); -const commentsSub = await db.syncStream('issue_comments', { issue_id: issueId }).subscribe(); - -await Promise.all([ - issueSub.waitForFirstSync(), - commentsSub.waitForFirstSync() -]); +# Invalid - json_each on joined column +query: | + SELECT p.* FROM profile p + JOIN project pr ON p.project_id = pr.id + WHERE auth.user_id() IN (SELECT value FROM json_each(pr.allowed_users)) ``` -### Multi-Tenant Applications +## Multiple Queries per Stream -For apps where users belong to organizations, use JWT claims to scope data to the tenant: +You can group multiple queries into a single stream using `queries` instead of `query`. This is useful when several tables share the same access pattern: ```yaml streams: - # All projects in the user's organization (auto-sync on connect) - org_projects: - query: SELECT * FROM projects WHERE org_id = auth.parameter('org_id') + user_data: + queries: + - SELECT * FROM notes WHERE owner_id = auth.user_id() + - SELECT * FROM settings WHERE user_id = auth.user_id() + - SELECT * FROM preferences WHERE user_id = auth.user_id() auto_subscribe: true - - # Tasks for a specific project (on-demand) - project_tasks: - query: | - SELECT * FROM tasks - WHERE project_id = subscription.parameter('project_id') - AND project_id IN (SELECT id FROM projects WHERE org_id = auth.parameter('org_id')) ``` -### Role-Based Access +All three queries sync into the same bucket, which is more efficient than defining separate streams. + +### When to Use Multiple Queries -Filter data based on user roles from JWT claims: +Use `queries` when: +- Multiple tables have the same filtering logic (e.g., all filtered by `user_id`) +- You want to reduce the number of buckets +- Related data should sync together ```yaml streams: - # Admins see all articles, others see only published or their own - articles: - query: | - SELECT * FROM articles - WHERE org_id = auth.parameter('org_id') - AND ( - status = 'published' - OR author_id = auth.user_id() - OR auth.parameter('role') = 'admin' - ) - auto_subscribe: true + # All project-related data syncs together + project_details: + queries: + - SELECT * FROM tasks WHERE project_id = subscription.parameter('project_id') + - SELECT * FROM files WHERE project_id = subscription.parameter('project_id') + - SELECT * FROM comments WHERE project_id = subscription.parameter('project_id') ``` -### Conditional Global Data +### Compatibility Requirements -Sync data only to users who meet certain criteria. Use a subquery to check user properties: +For queries to share a bucket, they must use compatible parameter inputs. 
In practice, this means they should filter on the same parameters in the same way: ```yaml +# Valid - all queries use the same parameter pattern streams: - # Only sync admin settings to users who are admins - admin_settings: - query: | - SELECT * FROM admin_settings - WHERE EXISTS ( - SELECT 1 FROM users - WHERE id = auth.user_id() AND is_admin = true - ) - auto_subscribe: true -``` - -### User's Default or Primary Item - -Sync a user's default item based on a preference stored in another table: + user_content: + queries: + - SELECT * FROM notes WHERE user_id = auth.user_id() + - SELECT * FROM bookmarks WHERE user_id = auth.user_id() -```yaml +# Valid - all queries use the same subscription parameter streams: - # Sync todos from the user's primary list - primary_list_todos: - query: | - SELECT * FROM todos - WHERE list_id IN ( - SELECT primary_list_id FROM users WHERE id = auth.user_id() - ) - auto_subscribe: true + project_data: + queries: + - SELECT * FROM tasks WHERE project_id = subscription.parameter('project_id') + - SELECT * FROM files WHERE project_id = subscription.parameter('project_id') ``` -### Expanding JSON Arrays +### Combining with CTEs -If your JWT contains an array of values (like project IDs), use `json_each()` to expand them: +Multiple queries work well with [Common Table Expressions (CTEs)](/sync/streams/ctes) to share both the filtering logic and the bucket: ```yaml streams: - # User's JWT contains: { "project_ids": ["proj-1", "proj-2", "proj-3"] } - my_projects: - query: | - SELECT * FROM projects - WHERE id IN ( - SELECT value FROM json_each(auth.parameters() -> 'project_ids') - ) + org_data: + with: + user_orgs: SELECT org_id FROM org_members WHERE user_id = auth.user_id() + queries: + - SELECT * FROM projects WHERE org_id IN user_orgs + - SELECT * FROM repositories WHERE org_id IN user_orgs + - SELECT * FROM team_members WHERE org_id IN user_orgs auto_subscribe: true ``` -This syncs all projects whose IDs are listed in the user's JWT `project_ids` claim. - ## Complete Example A full configuration combining multiple techniques: @@ -406,4 +331,4 @@ streams: AND item_id IN (SELECT id FROM items WHERE owner_id = auth.user_id()) ``` -See [Supported SQL](/sync/supported-sql) for all available operators and functions. +See [Examples & Patterns](/sync/streams/examples) for real-world examples like multi-tenant apps and role-based access, and [Supported SQL](/sync/supported-sql) for all available operators and functions. 
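On the client, a stream that uses multiple queries is subscribed exactly like a single-query stream; the grouped tables arrive together because they share a bucket. A minimal sketch in JavaScript, assuming the `project_details` stream defined above and a `projectId` value supplied by the app:

```js
// Subscribe to the multi-query stream; tasks, files and comments for this
// project sync together because they share the same bucket.
const projectSub = await db
  .syncStream('project_details', { project_id: projectId })
  .subscribe();

// Wait for the initial snapshot before rendering the project view.
await projectSub.waitForFirstSync();
```

Streams marked `auto_subscribe: true` (like `org_data` above) don't need an explicit subscription, since they start syncing as soon as the client connects.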
From e176dab1ac4763fb5f1f6576b06c6f727b95b4bc Mon Sep 17 00:00:00 2001 From: Benita Volkmann Date: Thu, 5 Feb 2026 15:14:55 +0200 Subject: [PATCH 07/11] Polish --- .../advanced/custom-types-arrays-and-json.mdx | 2 +- client-sdks/advanced/pre-seeded-sqlite.mdx | 4 +- client-sdks/advanced/raw-tables.mdx | 2 +- client-sdks/infinite-scrolling.mdx | 2 +- client-sdks/orms/kotlin/room.mdx | 2 +- docs.json | 8 +- .../custom-conflict-resolution.mdx | 8 +- integrations/neon.mdx | 2 +- integrations/supabase/attachments.mdx | 2 +- intro/powersync-philosophy.mdx | 2 +- .../implementing-schema-changes.mdx | 6 +- .../self-hosting/migrating-instances.mdx | 2 +- resources/performance-and-limits.mdx | 2 +- snippets/stream-definition-reference.mdx | 12 +- sync/advanced/compatibility.mdx | 21 ++- sync/advanced/overview.mdx | 2 +- sync/advanced/prioritized-sync.mdx | 134 ++++++++++++++---- sync/advanced/sync-data-by-time.mdx | 4 +- .../many-to-many-join-tables.mdx} | 65 +++++---- sync/rules/parameter-queries.mdx | 2 +- sync/streams/examples.mdx | 103 ++++++++++++-- tools/cli.mdx | 6 +- 22 files changed, 284 insertions(+), 109 deletions(-) rename sync/{advanced/many-to-many-and-join-tables.mdx => rules/many-to-many-join-tables.mdx} (80%) diff --git a/client-sdks/advanced/custom-types-arrays-and-json.mdx b/client-sdks/advanced/custom-types-arrays-and-json.mdx index d4354524..c8a0d825 100644 --- a/client-sdks/advanced/custom-types-arrays-and-json.mdx +++ b/client-sdks/advanced/custom-types-arrays-and-json.mdx @@ -353,7 +353,7 @@ You can write the entire updated column value as a string, or, with `trackPrevio ## Custom Types -PowerSync serializes custom types as text. For details, see [types in sync rules](/sync/types). +PowerSync serializes custom types as text. For details, see [types in Sync Rules](/sync/types). ### Postgres diff --git a/client-sdks/advanced/pre-seeded-sqlite.mdx b/client-sdks/advanced/pre-seeded-sqlite.mdx index a36fa8f5..c731cc0f 100644 --- a/client-sdks/advanced/pre-seeded-sqlite.mdx +++ b/client-sdks/advanced/pre-seeded-sqlite.mdx @@ -20,9 +20,9 @@ If you're interested in seeing an end-to-end example, we've prepared a demo repo # Main Concepts ## Generate a scoped JWT token -In most cases you'd want to pre-seed the SQLite database with user specific data and not all data from the source database, as you normally would when using PowerSync. For this you would need to generate a JWT tokens that include the necessary properties to satisfy the conditions of the parameter queries in your sync rules. +In most cases you'd want to pre-seed the SQLite database with user specific data and not all data from the source database, as you normally would when using PowerSync. For this you would need to generate a JWT tokens that include the necessary properties to satisfy the conditions of the parameter queries in your Sync Rules. -Let's say we have sync rules that look like this: +Let's say we have Sync Rules that look like this: ```yaml sync_rules: content: | diff --git a/client-sdks/advanced/raw-tables.mdx b/client-sdks/advanced/raw-tables.mdx index 6526e8a8..5897a981 100644 --- a/client-sdks/advanced/raw-tables.mdx +++ b/client-sdks/advanced/raw-tables.mdx @@ -274,7 +274,7 @@ In PowerSync's [JSON-based view system](/architecture/client-architecture#schema ### Adding raw tables as a new table -When you're adding new tables to your sync rules, clients will start to sync data on those tables - even if the tables aren't mentioned in the client's schema yet. 
So at the time you're introducing a new raw table to your app, it's possible that PowerSync has already synced some data for that table, which would be stored in `ps_untyped`. When adding regular tables, PowerSync will automatically extract rows from `ps_untyped`. With raw tables, that step is your responsibility. To copy data, run these statements in a transaction after creating the table: +When you're adding new tables to your Sync Rules, clients will start to sync data on those tables - even if the tables aren't mentioned in the client's schema yet. So at the time you're introducing a new raw table to your app, it's possible that PowerSync has already synced some data for that table, which would be stored in `ps_untyped`. When adding regular tables, PowerSync will automatically extract rows from `ps_untyped`. With raw tables, that step is your responsibility. To copy data, run these statements in a transaction after creating the table: ``` INSERT INTO my_table (id, my_column, ...) diff --git a/client-sdks/infinite-scrolling.mdx b/client-sdks/infinite-scrolling.mdx index c64c8a1d..8ca80020 100644 --- a/client-sdks/infinite-scrolling.mdx +++ b/client-sdks/infinite-scrolling.mdx @@ -19,7 +19,7 @@ This means that in many cases, you can sync a sufficient amount of data to let a ### 2) Control data sync using client parameters -PowerSync supports the use of [client parameters](/sync/rules/client-parameters) which are specified directly by the client (i.e. not only through the [authentication token](/configuration/auth/custom)). The app can dynamically change these parameters on the client-side and they can be accessed in sync rules on the server-side. The developer can use these parameters to limit/control which data is synced, but since they are not trusted (because they are not passed via the JWT authentication token) they should not be used for access control. You should still filter data by e.g. user ID for access control purposes (using [token parameters](/sync/rules/parameter-queries) from the JWT). +PowerSync supports the use of [client parameters](/sync/rules/client-parameters) which are specified directly by the client (i.e. not only through the [authentication token](/configuration/auth/custom)). The app can dynamically change these parameters on the client-side and they can be accessed in Sync Rules on the server-side. The developer can use these parameters to limit/control which data is synced, but since they are not trusted (because they are not passed via the JWT authentication token) they should not be used for access control. You should still filter data by e.g. user ID for access control purposes (using [token parameters](/sync/rules/parameter-queries) from the JWT). Usage example: To lazy-load/lazy-sync data for infinite scrolling, you could split your data into 'pages' and use a client parameter to specify which pages to sync to a user. diff --git a/client-sdks/orms/kotlin/room.mdx b/client-sdks/orms/kotlin/room.mdx index 962809d4..5b3e9624 100644 --- a/client-sdks/orms/kotlin/room.mdx +++ b/client-sdks/orms/kotlin/room.mdx @@ -92,7 +92,7 @@ Here: - The SQL statements must match the schema created by Room. - The `RawTable.name` and `PendingStatementParameter.Column` values must match the table and column names of the synced - table from the PowerSync Service, derived from your sync rules. + table from the PowerSync Service, derived from your Sync Rules. For more details, see [raw tables](/client-sdks/advanced/raw-tables). 
diff --git a/docs.json b/docs.json index 11f47c1c..8eff5b03 100644 --- a/docs.json +++ b/docs.json @@ -187,6 +187,7 @@ "sync/rules/global-buckets", "sync/rules/parameter-queries", "sync/rules/data-queries", + "sync/rules/many-to-many-join-tables", "sync/rules/client-parameters" ] }, @@ -200,7 +201,6 @@ "sync/advanced/client-id", "sync/advanced/case-sensitivity", "sync/advanced/compatibility", - "sync/advanced/many-to-many-and-join-tables", "sync/advanced/sync-data-by-time", "sync/advanced/schemas-and-connections", "sync/advanced/multiple-client-versions", @@ -762,7 +762,11 @@ }, { "source": "/usage/sync-rules/guide-many-to-many-and-join-tables", - "destination": "/sync/advanced/many-to-many-and-join-tables" + "destination": "/sync/rules/many-to-many-join-tables" + }, + { + "source": "/sync/advanced/many-to-many-and-join-tables", + "destination": "/sync/rules/many-to-many-join-tables" }, { "source": "/usage/sync-rules/guide-sync-data-by-time", diff --git a/handling-writes/custom-conflict-resolution.mdx b/handling-writes/custom-conflict-resolution.mdx index febc0c06..e115b1c1 100644 --- a/handling-writes/custom-conflict-resolution.mdx +++ b/handling-writes/custom-conflict-resolution.mdx @@ -36,7 +36,7 @@ When data changes on the server: 1. **Source database updates** - Direct writes or changes from other clients 2. **PowerSync Service detects changes** - Through replication stream -3. **Clients download updates** - Based on their sync rules +3. **Clients download updates** - Based on their Sync Rules 4. **Local SQLite updates** - Changes merge into the client's database **Conflicts arise when**: Multiple clients modify the same row (or fields) before syncing, or when a client's changes conflict with server-side rules. @@ -850,7 +850,7 @@ For scenarios where you just need to record changes without tracking their statu How it works: - Mark the table as `insertOnly: true` in your client schema -- Don't include the `field_changes` table in your sync rules +- Don't include the `field_changes` table in your Sync Rules - Changes are uploaded to the server but never downloaded back to clients **Client schema:** @@ -880,7 +880,7 @@ For scenarios where you want to show sync status temporarily but don't need a pe How it works: - Use a normal table on the client (not `insertOnly`) -- Don't include the `field_changes` table in your sync rules +- Don't include the `field_changes` table in your Sync Rules - Pending changes stay on the client until they're uploaded and the server processes them - Once the server processes a change and PowerSync syncs the next checkpoint, the change automatically disappears from the client @@ -919,7 +919,7 @@ function SyncIndicator({ taskId }: { taskId: string }) { **When to use:** Showing "syncing..." indicators, temporary status tracking without long-term storage overhead, cases where you want automatic cleanup after sync. -**Tradeoff:** Can't show detailed server-side error messages (unless the server writes to a separate errors table that *is* in sync rules). No long-term history on the client. +**Tradeoff:** Can't show detailed server-side error messages (unless the server writes to a separate errors table that *is* in Sync Rules). No long-term history on the client. ## Strategy 7: Cumulative Operations (Inventory) diff --git a/integrations/neon.mdx b/integrations/neon.mdx index c5ea7dcc..7a6e9420 100644 --- a/integrations/neon.mdx +++ b/integrations/neon.mdx @@ -214,7 +214,7 @@ During development, you can use the **Sync Test** feature in the PowerSync Dashb 1. 
Click on **"Sync Test"** in the PowerSync Dashboard. 2. Enter the UUID of a user in your Neon Auth database to generate a test JWT. -3. Click **"Launch Sync Diagnostics Client"** to test the sync rules. +3. Click **"Launch Sync Diagnostics Client"** to test the Sync Rules. For more information, explore the [PowerSync docs](/) or join us on [our community Discord](https://discord.gg/powersync) where our team is always available to answer questions. diff --git a/integrations/supabase/attachments.mdx b/integrations/supabase/attachments.mdx index a0ad4854..9b9db83f 100644 --- a/integrations/supabase/attachments.mdx +++ b/integrations/supabase/attachments.mdx @@ -29,7 +29,7 @@ Finally, link this storage bucket to your app by opening up the **AppConfig.ts** -This concludes the necessary configuration for handling attachments in the To-Do List demo app. When running the app now, a photo can be taken for a to-do list item, and PowerSync will ensure that the photo syncs to Supabase and other devices (if sync rules allow). +This concludes the necessary configuration for handling attachments in the To-Do List demo app. When running the app now, a photo can be taken for a to-do list item, and PowerSync will ensure that the photo syncs to Supabase and other devices (if Sync Rules allow). diff --git a/intro/powersync-philosophy.mdx b/intro/powersync-philosophy.mdx index 88b08e13..15384903 100644 --- a/intro/powersync-philosophy.mdx +++ b/intro/powersync-philosophy.mdx @@ -26,7 +26,7 @@ Once you have a local SQLite database that is always in sync, [state management] #### Flexibility -PowerSync allows you to fully customize what data is synced to the client. Syncing the entire database is extremely simple, but it is just as easy to use our [Sync Rules](/sync/rules/overview) to transform and filter data for each client (dynamic partial replication). +PowerSync allows you to fully customize what data is synced to the client. Syncing the entire database is extremely simple, but it is just as easy to use [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview) to transform and filter data for each client (partial sync). Writing back to the backend source database [is in full control of the developer](/handling-writes/writing-client-changes) — use your own authentication, validation, and constraints. diff --git a/maintenance-ops/implementing-schema-changes.mdx b/maintenance-ops/implementing-schema-changes.mdx index 36c8910d..aecbb0aa 100644 --- a/maintenance-ops/implementing-schema-changes.mdx +++ b/maintenance-ops/implementing-schema-changes.mdx @@ -100,7 +100,7 @@ The latter can happen if: When the replica identity changes, the entire table is re-replicated again. This may be a slow operation if the table is large, and all other replication will be blocked until the table is replicated again. -Sync rules affected by schema changes will fail "soft" — an alert would be generated, but the system will continue processing changes. +Sync Rules affected by schema changes will fail "soft" — an alert would be generated, but the system will continue processing changes. #### Column changes @@ -164,7 +164,7 @@ The binary log also provides DDL (Data Definition Language) query updates, which For MySQL, PowerSync detects schema changes by parsing the DDL queries in the binary log. It may not always be possible to parse the DDL queries correctly, especially if they are complex or use non-standard syntax. 
In such cases, PowerSync will ignore the schema change, but will log a warning with the schema change query. If required, the schema change would then need to be manually -handled by redeploying the sync rules. This triggers a re-replication. +handled by redeploying the Sync Rules. This triggers a re-replication. ### MySQL schema changes affecting Sync Rules @@ -205,7 +205,7 @@ The latter can happen if: When the replication identity changes, the entire table is replicated again. This may be a slow operation if the table is large, and all other replication will be blocked until the table is replicated again. -Sync rules affected by schema changes will fail "soft" — an alert would be generated, but the system will continue processing changes. +Sync Rules affected by schema changes will fail "soft" — an alert would be generated, but the system will continue processing changes. #### Column changes diff --git a/maintenance-ops/self-hosting/migrating-instances.mdx b/maintenance-ops/self-hosting/migrating-instances.mdx index e07e214c..d6cdccf8 100644 --- a/maintenance-ops/self-hosting/migrating-instances.mdx +++ b/maintenance-ops/self-hosting/migrating-instances.mdx @@ -7,7 +7,7 @@ description: "Migrating users between PowerSync instances" In some cases, you may want to migrate users between PowerSync instances. This may be between cloud and self-hosted instances, or even just to change the endpoint. -If the PowerSync instances use the same source database and have the same basic configuration and sync rules, you can migrate users by just changing the endpoint to the new instance. +If the PowerSync instances use the same source database and have the same basic configuration and Sync Rules, you can migrate users by just changing the endpoint to the new instance. To make this process easier, we recommend using an API to retrieve the PowerSync endpoint, instead of hardcoding the endpoint in the client application. If you're using custom authentication, this can be done in the same API call as getting the authentication token. diff --git a/resources/performance-and-limits.mdx b/resources/performance-and-limits.mdx index 138dbd8d..eda39b00 100644 --- a/resources/performance-and-limits.mdx +++ b/resources/performance-and-limits.mdx @@ -26,7 +26,7 @@ The PowerSync Cloud **Team** and **Enterprise** plans allow several of these lim - **Small rows**: 2,000-4,000 operations per second - **Large rows**: Up to 5MB per second - **Transaction processing**: ~60 transactions per second for smaller transactions -- **Reprocessing**: Same rates apply when reprocessing sync rules or adding new tables +- **Reprocessing**: Same rates apply when reprocessing Sync Rules or adding new tables ### Sync (PowerSync Service → Client) diff --git a/snippets/stream-definition-reference.mdx b/snippets/stream-definition-reference.mdx index a4e6f8d5..f50909cc 100644 --- a/snippets/stream-definition-reference.mdx +++ b/snippets/stream-definition-reference.mdx @@ -4,19 +4,19 @@ config: with: # Global CTEs (optional) - reusable subqueries available to all streams - cte_name: SELECT ... FROM ... + : SELECT ... FROM ... streams: - stream_name: + : # Query options (use one) - query: SELECT * FROM table WHERE ... # Single query + query: SELECT * FROM WHERE ... # Single query queries: # Multiple queries (same bucket) - - SELECT * FROM table_a WHERE ... - - SELECT * FROM table_b WHERE ... + - SELECT * FROM WHERE ... + - SELECT * FROM WHERE ... # Stream-scoped CTEs (optional) with: - local_cte: SELECT ... FROM ... + : SELECT ... FROM ... 
# Behavior options auto_subscribe: true # Auto-subscribe clients on connect (default: false) diff --git a/sync/advanced/compatibility.mdx b/sync/advanced/compatibility.mdx index 613f1d8f..6b134818 100644 --- a/sync/advanced/compatibility.mdx +++ b/sync/advanced/compatibility.mdx @@ -10,7 +10,7 @@ At the same time, we want to fix bugs or other inaccuracies that have accumulate To make this trade‑off explicit, you choose whether to keep the existing behavior or turn on newer fixes that slightly change how data is processed. -Use the `config` block in your Sync Rules YAML file to choose the behavior. There are two ways to turn fixes on: +Use the `config` block in your sync config YAML to choose the behavior. There are two ways to turn fixes on: 1. Set an `edition` to enable the full set of fixes for that edition. This is the recommended approach for new projects. 2. Toggle individual options for more fine‑grained control. @@ -21,7 +21,7 @@ For older projects, the previous behavior remains the default. New projects shou For new projects, it is recommended to enable all current fixes by setting `edition: `: -```yaml sync_rules.yaml +```yaml config: edition: 2 # Recommended to set to the latest available edition (see 'Supported fixes' table below) @@ -31,7 +31,7 @@ bucket_definitions: Or, specify options individually: -```yaml sync_rules.yaml +```yaml config: timestamps_iso8601: true versioned_bucket_ids: true @@ -39,6 +39,19 @@ config: custom_postgres_types: true ``` +## Sync Streams Requirement + +**Sync Streams require `edition: 2`**. All Sync Streams configurations must include this setting: + +```yaml +config: + edition: 2 + +streams: + my_stream: + query: SELECT * FROM my_table WHERE user_id = auth.user_id() +``` + ## Supported fixes This table lists all fixes currently supported: @@ -106,7 +119,7 @@ downloaded twice. ### `fixed_json_extract` -This fixes the `json_extract` functions as well as the `->` and `->>` operators in sync rules to behave similar +This fixes the `json_extract` functions as well as the `->` and `->>` operators in Sync Rules to behave similar to recent SQLite versions: We only split on `.` if the path starts with `$.`. For instance, `'json_extract({"foo.bar": "baz"}', 'foo.bar')` would evaluate to: diff --git a/sync/advanced/overview.mdx b/sync/advanced/overview.mdx index ad86d6ce..638276b7 100644 --- a/sync/advanced/overview.mdx +++ b/sync/advanced/overview.mdx @@ -9,7 +9,7 @@ sidebarTitle: Overview - + diff --git a/sync/advanced/prioritized-sync.mdx b/sync/advanced/prioritized-sync.mdx index e0267b1d..fdc247ab 100644 --- a/sync/advanced/prioritized-sync.mdx +++ b/sync/advanced/prioritized-sync.mdx @@ -5,7 +5,9 @@ description: "In some scenarios, you may want to sync tables using different pri ## Overview -PowerSync supports defining sync priorities, which allows you to control the sync order for different buckets of data. This is particularly useful when certain data should be available sooner than others. +PowerSync supports defining sync priorities, which allows you to control the sync order for different data. This is particularly useful when certain data should be available sooner than others. + +In Sync Rules, priorities are assigned to buckets explicitly. In Sync Streams, priorities are assigned to streams, and PowerSync manages the underlying buckets internally. **Availability** @@ -36,27 +38,84 @@ Each bucket is assigned a priority value between 0 and 3, where: - 3 is the default and lowest priority. - Lower numbers indicate higher priority. 
-Buckets with higher priorities sync first, and lower-priority buckets sync later. It's worth noting that if you only use a single priority, there is no difference between priorities 1-3. The difference only comes in if you use multiple different priorities. +Higher-priority data syncs first, and lower-priority data syncs later. If you only use a single priority, there is no difference between priorities 1-3. The difference only comes in when you use multiple different priorities. + + + +In Sync Streams, you assign priorities directly to streams. PowerSync manages buckets internally, so you don't need to think about bucket structure. Each stream with a given priority will have its data synced at that priority level. + +```yaml +streams: + lists: + query: SELECT * FROM lists WHERE owner_id = auth.user_id() + priority: 1 # Syncs first + auto_subscribe: true + + todos: + query: SELECT * FROM todos WHERE list_id IN (SELECT id FROM lists WHERE owner_id = auth.user_id()) + priority: 2 # Syncs after lists + auto_subscribe: true +``` + + +In Sync Rules, you assign priorities to bucket definitions. The priority determines when data in that bucket syncs relative to other buckets. + +```yaml +bucket_definitions: + user_lists: + priority: 1 # Syncs first + parameters: SELECT id AS list_id FROM lists WHERE user_id = request.user_id() + data: + - SELECT * FROM lists WHERE id = bucket.list_id + + user_todos: + priority: 2 # Syncs after lists + parameters: SELECT id AS list_id FROM lists WHERE user_id = request.user_id() + data: + - SELECT * FROM todos WHERE list_id = bucket.list_id +``` + + ## Syntax and Configuration -Priorities can be defined for a bucket using the `priority` YAML key, or with the `_priority` attribute inside parameter queries: + + +In Sync Streams, set the `priority` option on the stream definition: + +```yaml +streams: + high_priority_data: + query: SELECT * FROM important_table WHERE user_id = auth.user_id() + priority: 1 + auto_subscribe: true + + low_priority_data: + query: SELECT * FROM background_table WHERE user_id = auth.user_id() + priority: 2 + auto_subscribe: true +``` + + +In Sync Rules, priorities can be defined using the `priority` YAML key on bucket definitions, or with the `_priority` attribute inside parameter queries: ```yaml bucket_definitions: # Using the `priority` YAML key user_data: priority: 1 - parameters: SELECT request.user_id() as id where...; + parameters: SELECT request.user_id() AS id WHERE ... data: # ... - # Using the `_priority` attribute + # Using the `_priority` attribute (useful for multiple parameter queries with different priorities) project_data: - parameters: select id as project_id, 2 as _priority from projects where ...; # This approach is useful when you have multiple parameter queries with different priorities. + parameters: SELECT id AS project_id, 2 AS _priority FROM projects WHERE ... data: # ... -``` +``` + + Priorities must be static and cannot depend on row values within a parameter query. @@ -64,38 +123,57 @@ Priorities must be static and cannot depend on row values within a parameter que ## Example: Syncing Lists Before Todos -Consider a scenario where you want to display lists immediately while loading todos in the background. This approach allows users to view and interact with lists right away without waiting for todos to sync. Here's how to configure sync priorities in your Sync Rules to achieve this: +Consider a scenario where you want to display lists immediately while loading todos in the background. 
This approach allows users to view and interact with lists right away without waiting for todos to sync. + + +```yaml +config: + edition: 2 + +streams: + lists: + query: SELECT * FROM lists WHERE owner_id = auth.user_id() + priority: 1 # Syncs first + auto_subscribe: true + + todos: + query: | + SELECT * FROM todos + WHERE list_id IN (SELECT id FROM lists WHERE owner_id = auth.user_id()) + priority: 2 # Syncs after lists + auto_subscribe: true +``` + +The `lists` stream syncs first (priority 1), allowing users to see and interact with their lists immediately. The `todos` stream syncs afterward (priority 2), loading in the background. + + ```yaml bucket_definitions: user_lists: - # Sync the user's lists with a higher priority - priority: 1 - parameters: select id as list_id from lists where user_id = request.user_id() + priority: 1 # Syncs first + parameters: SELECT id AS list_id FROM lists WHERE user_id = request.user_id() data: - - select * from lists where id = bucket.list_id + - SELECT * FROM lists WHERE id = bucket.list_id user_todos: - # Sync the user's todos with a lower priority - priority: 2 - parameters: select id as list_id from lists where user_id = request.user_id() + priority: 2 # Syncs after lists + parameters: SELECT id AS list_id FROM lists WHERE user_id = request.user_id() data: - - select * from todos where list_id = bucket.list_id + - SELECT * FROM todos WHERE list_id = bucket.list_id ``` -In this configuration: - -The `lists` bucket has the default priority of 1, meaning it syncs first. - -The `todos` bucket is assigned a priority of 2, meaning it may sync only after the lists have been synced. +The `user_lists` bucket syncs first (priority 1), allowing users to see and interact with their lists immediately. The `user_todos` bucket syncs afterward (priority 2), loading in the background. + + ## Behavioral Considerations -- **Interruption for Higher Priority Data**: Syncing lower-priority buckets _may_ be interrupted if new data for higher-priority buckets arrives. -- **Local Changes & Consistency**: If local writes fail due to validation or permission issues, they are only reverted after _all_ buckets sync. -- **Deleted Data**: Deleted data may only be removed after _all_ buckets have synced. Future updates may improve this behavior. -- **Data Ordering**: Data in lower-priority buckets will never appear before higher-priority data. +- **Interruption for Higher Priority Data**: Syncing lower-priority data _may_ be interrupted if new data for higher-priority streams/buckets arrives. +- **Local Changes & Consistency**: If local writes fail due to validation or permission issues, they are only reverted after _all_ data has synced. +- **Deleted Data**: Deleted data may only be removed after _all_ priorities have completed syncing. Future updates may improve this behavior. +- **Data Ordering**: Lower-priority data will never appear before higher-priority data. ## Special Case: Priority 0 @@ -107,9 +185,9 @@ Caution: If misused, Priority 0 may cause flickering or inconsistencies, as upda ## Consistency Considerations -PowerSync's full consistency guarantees only apply once all buckets have completed syncing. +PowerSync's full consistency guarantees only apply once all priorities have completed syncing. -When higher-priority buckets are synced, all inserts and updates within the buckets for the specific priority will be consistent. However, deletes are only applied when the full sync completes, so you may still have some stale data within those buckets. 
+When higher-priority data is synced, all inserts and updates at that priority level will be consistent. However, deletes are only applied when the full sync completes, so you may still have some stale data at those priority levels. Consider the following example: @@ -132,7 +210,7 @@ PowerSync's client SDKs provide APIs to allow applications to track sync status Using the above we can render a lists component only once the user's lists (with priority 1) have completed syncing, else display a message indicating that the sync is still in progress: ```dart - // Define the priority level of the lists bucket + // Define the priority level for lists static final _listsPriority = BucketPriority(1); @override diff --git a/sync/advanced/sync-data-by-time.mdx b/sync/advanced/sync-data-by-time.mdx index df3ca74f..dcfe134b 100644 --- a/sync/advanced/sync-data-by-time.mdx +++ b/sync/advanced/sync-data-by-time.mdx @@ -18,7 +18,7 @@ However, this won't work. Here's why. # The Problem -Sync rules only support a limited set of [operators](https://docs.powersync.com/usage/sync-rules/operators-and-functions) when filtering on parameters. You can use `=`, `IN`, and `IS NULL`, but not range operators like `>`, `<`, `>=`, or `<=`. +Sync Rules only support a limited set of [operators](https://docs.powersync.com/usage/sync-rules/operators-and-functions) when filtering on parameters. You can use `=`, `IN`, and `IS NULL`, but not range operators like `>`, `<`, `>=`, or `<=`. Additionally, sync rule functions must be deterministic. Time-based functions like `now()` aren't allowed because the result changes depending on when the query runs. @@ -156,7 +156,7 @@ Each row belongs to multiple buckets (replication overhead). Re-sync overhead wh # Conclusion -Time-based sync is a common need, but current sync rules don't support range operators or time-based functions directly. +Time-based sync is a common need, but current Sync Rules don't support range operators or time-based functions directly. To recap the workarounds: - **Pre-defined time ranges** — Simplest option. Use when you have a fixed set of time ranges and don't mind schema changes. diff --git a/sync/advanced/many-to-many-and-join-tables.mdx b/sync/rules/many-to-many-join-tables.mdx similarity index 80% rename from sync/advanced/many-to-many-and-join-tables.mdx rename to sync/rules/many-to-many-join-tables.mdx index e03dada1..8a594c15 100644 --- a/sync/advanced/many-to-many-and-join-tables.mdx +++ b/sync/rules/many-to-many-join-tables.mdx @@ -1,10 +1,15 @@ --- title: "Guide: Many-to-Many and Join Tables" sidebarTitle: "Many-to-Many and Join Tables" +description: Strategies for handling many-to-many relationships in Sync Rules, which don't support JOINs directly. --- Join tables are often used to implement many-to-many relationships between tables. Join queries are not directly supported in PowerSync Sync Rules, and require some workarounds depending on the use case. This guide contains some recommended strategies. + +**Using Sync Streams?** Sync Streams support [JOINs](/sync/streams/queries#using-joins) and [nested subqueries](/sync/streams/queries#using-subqueries), which handle most many-to-many relationships directly without the workarounds described here. See [Many-to-Many with Sync Streams](/sync/streams/examples#many-to-many-relationships) for examples. 
+ + **Postgres users:** For Postgres source databases, you can use the [`pg_ivm` extension](https://www.powersync.com/blog/using-pg-ivm-to-enable-joins-in-powersync) to create incrementally maintained materialized views with JOINs that can be referenced directly in Sync Rules. This approach avoids the need to denormalize your schema. @@ -17,9 +22,7 @@ As an example, consider a social media application. The app has message boards. - + ```sql create table users ( id uuid not null default gen_random_uuid (), @@ -96,10 +99,11 @@ The relationship between users and boards is a many-to-many, specified via the ` To start with, in our PowerSync Sync Rules, we define a [bucket](/sync/rules/organize-data-into-buckets) and sync the posts. The [parameter query](/sync/rules/parameter-queries) is defined using the `board_subscriptions` table: ```yaml +bucket_definitions: board_data: - parameters: select board_id from board_subscriptions where user_id = request.user_id() + parameters: SELECT board_id FROM board_subscriptions WHERE user_id = request.user_id() data: - - select * from posts where board_id = bucket.board_id + - SELECT * FROM posts WHERE board_id = bucket.board_id ``` ### Avoiding joins in data queries: Denormalize relationships (comments) @@ -122,24 +126,26 @@ ALTER TABLE comments ADD CONSTRAINT comments_board_id_fkey FOREIGN KEY (board_id Now we can add it to the bucket definition in our Sync Rules: ```yaml +bucket_definitions: board_data: - parameters: select board_id from board_subscriptions where user_id = request.user_id() + parameters: SELECT board_id FROM board_subscriptions WHERE user_id = request.user_id() data: - - select * from posts where board_id = bucket.board_id + - SELECT * FROM posts WHERE board_id = bucket.board_id # Add comments: - - select * from comments where board_id = bucket.board_id + - SELECT * FROM comments WHERE board_id = bucket.board_id ``` Now we want to sync topics of posts. In this case we added `board_id` from the start, so `post_topics` is simple in our Sync Rules: ```yaml +bucket_definitions: board_data: - parameters: select board_id from board_subscriptions where user_id = request.user_id() + parameters: SELECT board_id FROM board_subscriptions WHERE user_id = request.user_id() data: - - select * from posts where board_id = bucket.board_id - - select * from comments where board_id = bucket.board_id + - SELECT * FROM posts WHERE board_id = bucket.board_id + - SELECT * FROM comments WHERE board_id = bucket.board_id # Add post_topics: - - select * from post_topics where board_id = bucket.board_id + - SELECT * FROM post_topics WHERE board_id = bucket.board_id ``` ### Many-to-many strategy: Sync everything (topics) @@ -149,9 +155,10 @@ Now we need access to sync the topics for all posts synced to the device. 
There If the topics table is limited in size (say 1,000 or less), the simplest solution is to just sync all topics in our Sync Rules: ```yaml +bucket_definitions: global_topics: data: - - select * from topics where board_id = bucket.board_id + - SELECT * FROM topics ``` ### Many-to-many strategy: Denormalize data (topics, user names) @@ -175,14 +182,15 @@ ALTER TABLE board_subscriptions ADD COLUMN user_name text; Sync Rules: ```yaml +bucket_definitions: board_data: - parameters: select board_id from board_subscriptions where user_id = request.user_id() + parameters: SELECT board_id FROM board_subscriptions WHERE user_id = request.user_id() data: - - select * from posts where board_id = bucket.board_id - - select * from comments where board_id = bucket.board_id - - select * from post_topics where board_id = bucket.board_id + - SELECT * FROM posts WHERE board_id = bucket.board_id + - SELECT * FROM comments WHERE board_id = bucket.board_id + - SELECT * FROM post_topics WHERE board_id = bucket.board_id # Add subscriptions which include the names: - - select * from board_subscriptions where board_id = bucket.board_id + - SELECT * FROM board_subscriptions WHERE board_id = bucket.board_id ``` ### Many-to-many strategy: Array of IDs (user profiles) @@ -198,21 +206,20 @@ ALTER TABLE users ADD COLUMN subscribed_board_ids uuid[]; By using an array instead of or in addition to a join table, we can use it directly in Sync Rules: ```yaml -board_data: - parameters: select board_id from board_subscriptions where user_id = request.user_id() - data: - - select * from posts where board_id = bucket.board_id - - select * from comments where board_id = bucket.board_id - - select * from post_topics where board_id = bucket.board_id - # Add participating users: - - select name, last_activity, profile_picture, bio from users where bucket.board_id in subscribed_board_ids +bucket_definitions: + board_data: + parameters: SELECT board_id FROM board_subscriptions WHERE user_id = request.user_id() + data: + - SELECT * FROM posts WHERE board_id = bucket.board_id + - SELECT * FROM comments WHERE board_id = bucket.board_id + - SELECT * FROM post_topics WHERE board_id = bucket.board_id + # Add participating users: + - SELECT name, last_activity, profile_picture, bio FROM users WHERE bucket.board_id IN subscribed_board_ids ``` This approach does require some extra effort to keep the array up to date. One option is to use a trigger in the case of Postgres: - + ```sql CREATE OR REPLACE FUNCTION recalculate_subscribed_boards() RETURNS TRIGGER AS $$ diff --git a/sync/rules/parameter-queries.mdx b/sync/rules/parameter-queries.mdx index bf723370..c9280343 100644 --- a/sync/rules/parameter-queries.mdx +++ b/sync/rules/parameter-queries.mdx @@ -197,7 +197,7 @@ bucket_definitions: Keep in mind that the total number of buckets per user should [remain limited](/sync/rules/organize-data-into-buckets#limit-on-number-of-buckets-per-client) (\<= 1,000 [by default](/resources/performance-and-limits)), so buckets should not be too granular. -For more advanced details on many-to-many relationships and join tables, see [this guide](/sync/advanced/many-to-many-and-join-tables). +For more advanced details on many-to-many relationships and join tables, see [this guide](/sync/rules/many-to-many-join-tables). 
### Expanding JSON Array Into Multiple Parameters diff --git a/sync/streams/examples.mdx b/sync/streams/examples.mdx index 0e4886b0..fc5ee5c7 100644 --- a/sync/streams/examples.mdx +++ b/sync/streams/examples.mdx @@ -10,7 +10,7 @@ These patterns show how to combine Sync Streams features to solve common real-wo ### Multi-Tenant Applications -For apps where users belong to organizations, use JWT claims to scope data to the tenant: +For apps where users belong to organizations, use JWT claims to scope all data to the user's tenant. The `org_id` in the JWT ensures users only see data from their organization, without needing to pass it from the client. ```yaml streams: @@ -27,11 +27,13 @@ streams: AND project_id IN (SELECT id FROM projects WHERE org_id = auth.parameter('org_id')) ``` +The `org_projects` stream syncs automatically on connect, giving users immediate access to their project list. The `project_tasks` stream loads on-demand when the user opens a specific project, and the subquery ensures they can only access tasks from projects in their organization. + For more complex organization structures where users can belong to multiple organizations, see [Expanding JSON Arrays](/sync/streams/parameters#expanding-json-arrays). ### Role-Based Access -Filter data based on user roles from JWT claims: +When different users should see different data based on their role, use JWT claims to apply visibility rules. This keeps authorization logic on the server side where it's secure. ```yaml streams: @@ -48,9 +50,11 @@ streams: auto_subscribe: true ``` +This query syncs articles that match any of three conditions: the article is published (visible to everyone), the user is the author (can see their own drafts), or the user is an admin (can see everything). The `role` claim comes from the JWT, so users can't escalate their own privileges. + ### Shared Resources -Sync items that are either owned by the user or explicitly shared with them: +For apps where users can share items with each other (like documents or folders), combine ownership checks with a shares table lookup. This syncs both items the user owns and items others have shared with them. ```yaml streams: @@ -62,9 +66,11 @@ streams: auto_subscribe: true ``` +The `OR` clause checks two conditions: either the user owns the document, or the document appears in the `document_shares` table with the user as the recipient. Both sets of documents sync together in one stream. + ### Syncing Related Data -When viewing an item, sync its related data (e.g. comments) using separate streams that share a subscription parameter: +When a detail view needs data from multiple tables (like an issue and its comments), create separate streams that use the same subscription parameter. This lets you subscribe to all related data at once when the user opens the view. ```yaml streams: @@ -85,7 +91,7 @@ streams: ) ``` -Subscribe to both when the user opens an issue: +Both streams filter by `issue_id` and include authorization checks to ensure the user has access. Subscribe to both when the user opens an issue: ```js const issueSub = await db.syncStream('issue', { issue_id: issueId }).subscribe(); @@ -103,7 +109,7 @@ If multiple streams share the same filtering logic, consider using [CTEs](/sync/ ### Conditional Global Data -Sync data only to users who meet certain criteria: +Sometimes you want to sync data to all users who meet certain criteria, but not to others. Use `EXISTS` to check a condition before syncing any rows. 
```yaml streams: @@ -118,9 +124,11 @@ streams: auto_subscribe: true ``` +The `EXISTS` clause acts as a gate: if the user is an admin, all rows from `admin_settings` sync. If not, no rows sync. This is useful for feature flags, admin panels, or premium content. + ### User's Default or Primary Item -Sync a user's default item based on a preference stored in another table: +When users have a "default" or "primary" item stored in their profile, you can sync related data automatically without the client needing to know the ID upfront. ```yaml streams: @@ -134,9 +142,21 @@ streams: auto_subscribe: true ``` +The subquery looks up the user's `primary_list_id` from the `users` table, then syncs all todos from that list. When the user changes their primary list in the database, the synced data updates automatically. + ### Hierarchical Data -Sync data across a parent-child hierarchy: +When your data has parent-child relationships across multiple levels, you can traverse the hierarchy using nested subqueries or joins. This is common in apps where access to child records is determined by membership at a higher level. + +For example, consider an app with organizations, projects, and tasks. Users belong to organizations, and should see all tasks in projects that belong to their organizations: + +``` +Organization → Projects → Tasks + ↑ +User membership +``` + +**Using nested subqueries:** ```yaml streams: @@ -151,7 +171,9 @@ streams: auto_subscribe: true ``` -For deeply nested hierarchies, consider using [joins](/sync/streams/queries#using-joins) for better readability: +The query reads from inside out: find the user's organizations, then find projects in those organizations, then find tasks in those projects. + +**Using joins** (often easier to read for deeply nested hierarchies): ```yaml streams: @@ -164,6 +186,51 @@ streams: auto_subscribe: true ``` +Both queries produce the same result. PowerSync handles these nested relationships efficiently, so you don't need to denormalize your database or add redundant foreign keys. + +### Many-to-Many Relationships + +Many-to-many relationships (like users subscribing to boards) typically use a join table. Sync Streams support JOINs, so you can traverse these relationships directly without denormalizing your schema. + +Consider a social app where users subscribe to message boards: + +``` +Users ←→ board_subscriptions ←→ Boards → Posts → Comments +``` + +```yaml +streams: + # Posts from boards the user subscribes to + board_posts: + query: | + SELECT p.* FROM posts p + JOIN board_subscriptions bs ON p.board_id = bs.board_id + WHERE bs.user_id = auth.user_id() + auto_subscribe: true + + # Comments on those posts (no denormalization needed) + board_comments: + query: | + SELECT c.* FROM comments c + JOIN posts p ON c.post_id = p.id + JOIN board_subscriptions bs ON p.board_id = bs.board_id + WHERE bs.user_id = auth.user_id() + auto_subscribe: true + + # User profiles for board subscribers + board_users: + query: | + SELECT u.* FROM users u + JOIN board_subscriptions bs ON u.id = bs.user_id + JOIN board_subscriptions my_boards ON bs.board_id = my_boards.board_id + WHERE my_boards.user_id = auth.user_id() + auto_subscribe: true +``` + +Each query joins through the `board_subscriptions` table to find relevant data. The `board_comments` query chains two joins (comments → posts → subscriptions), and the `board_users` query finds other users who subscribe to the same boards. 
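Once synced, the app reads this data with ordinary local SQL, since the join-table logic lives entirely in the stream definitions. For example (a sketch, using the table and column names from the streams above; `boardId` is whatever board the user is viewing):

```js
// Plain local SQLite query over the synced posts table; no client-side
// knowledge of board_subscriptions is needed.
const posts = await db.getAll('SELECT * FROM posts WHERE board_id = ?', [boardId]);
```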
+ +Unlike [Sync Rules](/sync/rules/many-to-many-join-tables), you don't need to denormalize your schema or maintain array columns to handle these relationships. + ## Use Case Examples Complete configurations for common application types. @@ -206,7 +273,7 @@ const todos = await db.getAll('SELECT * FROM todos WHERE list_id = ?', [selected ### Chat Application -Sync conversation list upfront, load messages on demand: +Chat apps typically have many conversations but users only view one at a time. Sync the conversation list upfront so users can see all their chats immediately, but load messages on-demand to avoid syncing potentially thousands of messages across all conversations. ```yaml config: @@ -230,9 +297,11 @@ streams: ) ``` +The `my_conversations` stream finds conversations through the `participants` join table. The `conversation_messages` stream requires both a subscription parameter (which conversation to load) and an authorization check (user must be a participant). + ### Project Management App -A full configuration for a multi-tenant project management app using [CTEs](/sync/streams/ctes): +This example shows a multi-tenant project management app where users can access public projects or projects they're members of. A CTE defines the "accessible projects" logic once, then reuses it across multiple streams. ```yaml config: @@ -272,9 +341,11 @@ streams: AND project_id IN user_projects ``` +The `user_projects` CTE combines two access rules: public projects in the org, and projects where the user is a member. The auto-subscribed streams sync navigation data immediately, while task and file details load when the user opens a specific project. + ### Organization Workspace (Using Multiple Queries) -Group related organization data into a single bucket using [multiple queries per stream](/sync/streams/queries#multiple-queries-per-stream): +When several tables share the same access pattern, you can group them into a single stream using multiple queries. This reduces the number of buckets and keeps related data together. ```yaml config: @@ -302,6 +373,8 @@ streams: - SELECT * FROM comments WHERE project_id = subscription.parameter('project_id') AND project_id IN accessible_projects ``` +The `org_data` stream combines three queries that all filter by the user's organizations. They sync together as one unit. The `project_details` stream uses a stream-scoped CTE and groups tasks, files, and comments for a specific project into a single subscription. + ## Demo Apps Working demo apps that demonstrate Sync Streams in action. These show how to combine auto-subscribe streams (for data that should always be available) with on-demand streams (for data loaded when needed). @@ -338,12 +411,12 @@ In this demo: - When the user navigates back to the same list, they won't see a loading state, because the data is cached locally (demonstrating TTL caching behavior). -Kotlin Sync Streams support is available. Demo app coming soon. +Sync Streams support is available. Demo app coming soon. -Swift Sync Streams support is available. Demo app coming soon. +Sync Streams support is available. Demo app coming soon. -.NET Sync Streams support is available. Demo app coming soon. +Sync Streams support is available. Demo app coming soon. diff --git a/tools/cli.mdx b/tools/cli.mdx index c4c83982..391c275e 100644 --- a/tools/cli.mdx +++ b/tools/cli.mdx @@ -47,7 +47,7 @@ npm ### Deploying Sync Rules with GitHub Actions -You can automate sync rule deployments using the PowerSync CLI in your CI/CD pipeline. 
This is useful for ensuring your sync rules are automatically deployed whenever changes are pushed to a repository. +You can automate Sync Rule deployments using the PowerSync CLI in your CI/CD pipeline. This is useful for ensuring your Sync Rules are automatically deployed whenever changes are pushed to a repository. -See a complete example of deploying sync rules with GitHub Actions +See a complete example of deploying Sync Rules with GitHub Actions The example repository demonstrates how to: -* Set up a GitHub Actions workflow to deploy sync rules on push to the `main` branch +* Set up a GitHub Actions workflow to deploy Sync Rules on push to the `main` branch * Configure required repository secrets (`POWERSYNC_AUTH_TOKEN`, `POWERSYNC_INSTANCE_ID`, `POWERSYNC_PROJECT_ID`, `POWERSYNC_ORG_ID`) * Automatically deploy `sync-rules.yaml` changes From 226ba5e0eff66dc85cf1a4589d8d8b26bf039000 Mon Sep 17 00:00:00 2001 From: Benita Volkmann Date: Thu, 5 Feb 2026 15:21:34 +0200 Subject: [PATCH 08/11] sync_rules.yaml to sync_config.yaml --- client-sdks/advanced/sequential-id-mapping.mdx | 2 +- .../powersync-service/self-hosted-instances.mdx | 4 ++-- integrations/coolify.mdx | 12 ++++++------ sync/advanced/compatibility.mdx | 2 +- 4 files changed, 10 insertions(+), 10 deletions(-) diff --git a/client-sdks/advanced/sequential-id-mapping.mdx b/client-sdks/advanced/sequential-id-mapping.mdx index a27fc462..eb2b4325 100644 --- a/client-sdks/advanced/sequential-id-mapping.mdx +++ b/client-sdks/advanced/sequential-id-mapping.mdx @@ -191,7 +191,7 @@ As sequential IDs can only be created on the backend source database, we need to The `parameters` query is updated by removing the `list_id` alias (this is removed to avoid any confusion between the `list_id` column in the `todos` table), and the `data` query is updated to use the `uuid` column as the `id` column for the `lists` and `todos` tables. We also explicitly define which columns to select, as `list_id` is no longer required in the client. -```yaml sync_rules.yaml {4, 7-8} +```yaml sync_config.yaml {4, 7-8} bucket_definitions: user_lists: # Separate bucket per todo list diff --git a/configuration/powersync-service/self-hosted-instances.mdx b/configuration/powersync-service/self-hosted-instances.mdx index f6f8b208..a17f5637 100644 --- a/configuration/powersync-service/self-hosted-instances.mdx +++ b/configuration/powersync-service/self-hosted-instances.mdx @@ -236,9 +236,9 @@ sync_rules: - SELECT * FROM lists - SELECT * FROM todos -# Alternatively, reference a sync rules file +# Alternatively, reference a sync config file # sync_rules: - # path: sync_rules.yaml + # path: sync_config.yaml ``` For more information, see [Sync Rules](/sync/rules/overview). diff --git a/integrations/coolify.mdx b/integrations/coolify.mdx index 872b3a84..53686266 100644 --- a/integrations/coolify.mdx +++ b/integrations/coolify.mdx @@ -59,7 +59,7 @@ The easiest way to get started is to use **Supabase** as it provides all three. 
The following configuration options should be updated: - Environment variables -- `sync_rules.yaml` file (according to your data requirements) +- `sync_config.yaml` file (according to your data requirements) - `powersync.yaml` file @@ -222,8 +222,8 @@ The following Compose file serves as a universal starting point for deploying th volumes: - ./volumes/config:/home/config - type: bind - source: ./volumes/config/sync_rules.yaml - target: /home/config/sync_rules.yaml + source: ./volumes/config/sync_config.yaml + target: /home/config/sync_config.yaml content: | bucket_definitions: user_lists: @@ -304,7 +304,7 @@ The following Compose file serves as a universal starting point for deploying th # Specify sync rules sync_rules: - path: /home/config/sync_rules.yaml + path: /home/config/sync_config.yaml # Client (application end user) authentication settings client_auth: @@ -361,7 +361,7 @@ The following Compose file serves as a universal starting point for deploying th - Navigate to the `Storages` tab and update the `sync_rules.yaml` and `powersync.yaml` files as needed. + Navigate to the `Storages` tab and update the `sync_config.yaml` and `powersync.yaml` files as needed. For more information see [Sync Rules](/sync/rules/overview) and the skeleton config file in [Service Configuration](/configuration/powersync-service/self-hosted-instances). @@ -375,7 +375,7 @@ The following Compose file serves as a universal starting point for deploying th - + diff --git a/sync/advanced/compatibility.mdx b/sync/advanced/compatibility.mdx index 6b134818..f1f12920 100644 --- a/sync/advanced/compatibility.mdx +++ b/sync/advanced/compatibility.mdx @@ -85,7 +85,7 @@ You can use the `timestamp_max_precision` option to configure the actual precisi For instance, a Postgres timestamp value would sync as `2025-09-22T14:29:30.000000` by default. If you don't want that level of precision, you can use the following options to make it sync as `2025-09-22T14:29:30.000`: -```yaml sync_rules.yaml +```yaml sync_config.yaml config: edition: 2 timestamp_max_precision: milliseconds From 1c19d3b72f0c12edcd43c6b019a72ba54d75015a Mon Sep 17 00:00:00 2001 From: Benita Volkmann Date: Thu, 5 Feb 2026 16:03:55 +0200 Subject: [PATCH 09/11] Polish --- architecture/powersync-service.mdx | 2 +- sync/advanced/prioritized-sync.mdx | 9 +++++++++ sync/streams/client-usage.mdx | 11 +++++++++++ sync/streams/migration.mdx | 6 +++--- sync/streams/overview.mdx | 2 +- 5 files changed, 25 insertions(+), 5 deletions(-) diff --git a/architecture/powersync-service.mdx b/architecture/powersync-service.mdx index 66b58d04..6cda1571 100644 --- a/architecture/powersync-service.mdx +++ b/architecture/powersync-service.mdx @@ -54,7 +54,7 @@ This also means that the PowerSync Service has to keep track of less state per-u Each bucket stores the _recent history_ of operations on each row, not just the latest state of the row. -This is another core part of the PowerSync architecure — the PowerSync Service can efficiently query the _operations_ that each client needs to receive in order to be up to date. Tracking of operation history is also key to the data integrity and [consistency](/architecture/consistency) properties of PowerSync. +This is another core part of the PowerSync architecture — the PowerSync Service can efficiently query the _operations_ that each client needs to receive in order to be up to date. Tracking of operation history is also key to the data integrity and [consistency](/architecture/consistency) properties of PowerSync. 
When a change occurs in the source database that affects a certain bucket (based on your Sync Streams or Sync Rules configuration), that change will be appended to the operation history in that bucket. Buckets are therefore treated as "append-only" data structures. That being said, to avoid an ever-growing operation history, the buckets can be [compacted](/maintenance-ops/compacting-buckets) (this is automatically done on PowerSync Cloud). diff --git a/sync/advanced/prioritized-sync.mdx b/sync/advanced/prioritized-sync.mdx index fdc247ab..3c798239 100644 --- a/sync/advanced/prioritized-sync.mdx +++ b/sync/advanced/prioritized-sync.mdx @@ -56,6 +56,15 @@ streams: priority: 2 # Syncs after lists auto_subscribe: true ``` + +Clients can also override the priority when subscribing: + +```js +// Override the stream's default priority for this subscription +const sub = await db.syncStream('todos', { list_id: 'abc' }).subscribe({ priority: 1 }); +``` + +This allows the same stream to be subscribed with different priorities. When multiple subscriptions resolve to the same underlying bucket, the highest priority among them is used. In Sync Rules, you assign priorities to bucket definitions. The priority determines when data in that bucket syncs relative to other buckets. diff --git a/sync/streams/client-usage.mdx b/sync/streams/client-usage.mdx index 010cb327..4f6bbbe9 100644 --- a/sync/streams/client-usage.mdx +++ b/sync/streams/client-usage.mdx @@ -301,6 +301,17 @@ const subB = await db.syncStream('todos', { list_id: 'B' }).subscribe({ ttl: 864 // List B data cached for 24h after unsubscribe ``` +## Priority Override + +Streams can have a default priority set in the YAML sync configuration (see [Prioritized Sync](/sync/advanced/prioritized-sync)). When subscribing, you can override this priority for a specific subscription: + +```js +// Override the stream's default priority +const sub = await db.syncStream('todos', { list_id: 'abc' }).subscribe({ priority: 1 }); +``` + +This allows the same stream to be subscribed with different priorities for different use cases. When multiple subscriptions resolve to the same underlying data, the highest priority among them is used for syncing. + ## Connection Parameters Connection parameters are a more advanced feature for values that apply to all streams in a session. They're the Sync Streams equivalent of [Client Parameters](/sync/rules/client-parameters) in Sync Rules. diff --git a/sync/streams/migration.mdx b/sync/streams/migration.mdx index 5250ad05..61f14bcd 100644 --- a/sync/streams/migration.mdx +++ b/sync/streams/migration.mdx @@ -29,7 +29,7 @@ Sync Streams address these limitations: 3. **Built-in caching**: Each subscription has a configurable `ttl` that keeps data cached after unsubscribing. When users return to a screen, data may already be available — no loading state needed. -4. **Simpler syntax**: Just queries with subqueries. No separate parameter queries. The syntax is closer to plain SQL. +4. **Simpler, more powerful syntax**: Queries with subqueries, JOINs, and CTEs. No separate parameter queries. The syntax is closer to plain SQL and supports more SQL features than Sync Rules. 5. **Framework integration**: React hooks and Kotlin Compose extensions let your UI components automatically manage subscriptions based on what's rendered. 
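To make point 4 above concrete, here is what a single stream with a subquery can look like. This is an illustrative sketch; the `user_todos` stream name and the `todos`/`lists` tables and columns are examples, not part of any specific schema:

```yaml
streams:
  user_todos:
    # One query with a subquery replaces the separate parameter and data
    # queries that a Sync Rules bucket definition needs for the same scoping.
    query: |
      SELECT * FROM todos
      WHERE list_id IN (SELECT id FROM lists WHERE owner_id = auth.user_id())
```

With Sync Rules, the equivalent bucket definition would require a `parameters` query plus a `data` query to achieve the same result.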
@@ -50,8 +50,8 @@ If you want "sync everything upfront" behavior (like Sync Rules), set `auto_subs | JS Web | v1.27.0 | v1.32.0 | | React Native | v1.25.0 | v1.29.0 | | React hooks | v1.8.0 | — | -| Node.js | — | v0.16.0 | -| Capacitor | — | v0.3.0 | +| Node.js | v0.11.0 | v0.16.0 | +| Capacitor | v0.0.1 | v0.3.0 | | Dart/Flutter | v1.16.0 | v1.17.0 | | Kotlin | v1.7.0 | v1.9.0 | | Swift | [In progress](https://github.com/powersync-ja/powersync-swift/pull/86) | v1.8.0 | diff --git a/sync/streams/overview.mdx b/sync/streams/overview.mdx index e18decd4..a15d9282 100644 --- a/sync/streams/overview.mdx +++ b/sync/streams/overview.mdx @@ -305,7 +305,7 @@ const sub = await db.syncStream('todos', { list_id: 'abc' }) - **SQL Syntax**: Stream queries use a SQL-like syntax with `SELECT` statements. You can use subqueries, `INNER JOIN`, and [CTEs](/sync/streams/ctes) for filtering. `GROUP BY`, `ORDER BY`, and `LIMIT` are not supported. See [Writing Stream Queries](/sync/streams/queries) for details on joins, multiple queries per stream, and other features. -- **Type Conversion**: Data types from your backend database (Postgres, MongoDB, MySQL, etc.) are converted when synced to the client's SQLite database. Most types become `text`, so you may need to parse or cast values in your app code. See [Type Mapping](/sync/types) for details on how each type is handled. +- **Type Conversion**: Data types from your source database (Postgres, MongoDB, MySQL, SQL Server) are converted when synced to the client's SQLite database. Most types become `text`, so you may need to parse or cast values in your app code. See [Type Mapping](/sync/types) for details on how each type is handled. - **Primary Key**: PowerSync requires every synced table to have a primary key column named `id` of type `text`. If your backend uses a different column name or type, you'll need to map it. For MongoDB, the `_id` field automatically maps to `id`. See [Client ID](/sync/advanced/client-id) for setup instructions. From 8cf81a49783f9ded18d5056bf1545a75c49af27a Mon Sep 17 00:00:00 2001 From: Benita Volkmann Date: Thu, 5 Feb 2026 16:07:20 +0200 Subject: [PATCH 10/11] Bring Sync Rules back into Setup guide --- intro/setup-guide.mdx | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/intro/setup-guide.mdx b/intro/setup-guide.mdx index 7c117fff..86197f01 100644 --- a/intro/setup-guide.mdx +++ b/intro/setup-guide.mdx @@ -512,7 +512,7 @@ Table/collection names in your configuration must match the table names defined For quick development and testing, you can generate a temporary development token instead of implementing full authentication. You'll use this token for two purposes: -- **Testing with the _Sync Diagnostics Client_** (in the next step) to verify your setup and Sync Streams +- **Testing with the _Sync Diagnostics Client_** (in the next step) to verify your setup and Sync Streams or Sync Rules - **Connecting your app** (in a later step) to test the client SDK integration @@ -521,7 +521,7 @@ You'll use this token for two purposes: 2. Go to the **Client Auth** view 3. Check the **Development tokens** setting and save your changes 4. Click the **Connect** button in the top bar - 5. **Enter token subject**: Since you're starting with simple auto-subscribed Sync Streams that sync all data to all users (as we recommended in the previous step), you can just put something like `test-user` as the token subject (which would normally be the user ID you want to test with). + 5. 
**Enter token subject**: Since you're starting with simple streams or buckets that sync all data to all users (as we recommended in the previous step), you can just put something like `test-user` as the token subject (which would normally be the user ID you want to test with). 6. Click **Generate token** and copy the token @@ -573,7 +573,7 @@ The Sync Diagnostics Client will connect to your PowerSync Service instance and **Checkpoint:** - Inspect your synced tables in the Sync Diagnostics Client — these should match the Sync Streams you [defined previously](#4-define-sync-streams). This confirms your setup is working correctly before integrating the client SDK into your app. + Inspect your synced tables in the Sync Diagnostics Client — these should match the Sync Streams or Sync Rules you [defined previously](#4-define-sync-streams-or-sync-rules). This confirms your setup is working correctly before integrating the client SDK into your app. # 7. Use the Client SDK @@ -624,7 +624,7 @@ import SdkClientSideSchema from '/snippets/sdk-client-side-schema.mdx'; -_PowerSync Cloud:_ The easiest way to generate your schema is using the [PowerSync Dashboard](https://dashboard.powersync.com/). Click the **Connect** button in the top bar to generate the client-side schema based on your Sync Streams in your preferred language. +_PowerSync Cloud:_ The easiest way to generate your schema is using the [PowerSync Dashboard](https://dashboard.powersync.com/). Click the **Connect** button in the top bar to generate the client-side schema based on your Sync Streams or Sync Rules in your preferred language. Here's an example schema for a simple `todos` table: @@ -639,12 +639,12 @@ import SdkSchemaExamples from '/snippets/sdk-schema-examples.mdx'; **Learn More** - The client-side schema uses three column types: `text`, `integer`, and `real`. These map directly to values from your Sync Streams and are automatically cast if needed. For details on how backend database types map to SQLite types, see [Types](/sync/types). + The client-side schema uses three column types: `text`, `integer`, and `real`. These map directly to values from your Sync Streams or Sync Rules and are automatically cast if needed. For details on how backend database types map to SQLite types, see [Types](/sync/types). ### Instantiate the PowerSync Database -Now that you have your client-side schema defined, instantiate the PowerSync database in your app. This creates the client-side SQLite database that will be kept in sync with your source database based on your Sync Streams configuration. +Now that you have your client-side schema defined, instantiate the PowerSync database in your app. This creates the client-side SQLite database that will be kept in sync with your source database based on your Sync Streams or Sync Rules configuration. 
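For example, with the JavaScript web SDK this typically looks like the sketch below. It assumes the `@powersync/web` package and an illustrative `todos` table; see the per-SDK examples that follow for the exact API in your SDK of choice:

```js
import { PowerSyncDatabase, Schema, Table, column } from '@powersync/web';

// Illustrative client-side schema for a simple todos table
// (column names here are examples, not a required structure).
const AppSchema = new Schema({
  todos: new Table({
    list_id: column.text,
    description: column.text,
    completed: column.integer
  })
});

// Create the local SQLite database that PowerSync keeps in sync
const db = new PowerSyncDatabase({
  schema: AppSchema,
  database: { dbFilename: 'powersync.db' }
});
```

Note that data only starts syncing once you also connect this database to the PowerSync Service using a backend connector, which is covered in a later step of this guide.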
import SdkInstantiateDbExamples from '/snippets/sdk-instantiate-db-examples.mdx'; @@ -1130,7 +1130,7 @@ For production deployments, you'll need to: ### Additional Resources -- Learn more about [Sync Streams](/sync/streams/overview) for advanced data filtering and on-demand syncing +- Learn more about [Sync Streams](/sync/streams/overview) or [Sync Rules](/sync/rules/overview) for advanced data filtering - Explore [Live Queries / Watch Queries](/client-sdks/watch-queries) for reactive UI updates - Check out [Example Projects](/intro/examples) for complete implementations - Review the [Client SDK References](/client-sdks/overview) for client-side platform-specific details From 86382f0fb92352673e0518e42b7afb3ea94c26fd Mon Sep 17 00:00:00 2001 From: Benita Volkmann Date: Thu, 5 Feb 2026 16:57:16 +0200 Subject: [PATCH 11/11] Polish --- sync/streams/examples.mdx | 23 ++-------------------- sync/streams/parameters.mdx | 2 +- sync/supported-sql.mdx | 38 ++++++++++++++++++++++++------------- 3 files changed, 28 insertions(+), 35 deletions(-) diff --git a/sync/streams/examples.mdx b/sync/streams/examples.mdx index fc5ee5c7..2ac9376d 100644 --- a/sync/streams/examples.mdx +++ b/sync/streams/examples.mdx @@ -8,9 +8,9 @@ sidebarTitle: "Examples & Demos" These patterns show how to combine Sync Streams features to solve common real-world scenarios. -### Multi-Tenant Applications +### Organization-Scoped Data -For apps where users belong to organizations, use JWT claims to scope all data to the user's tenant. The `org_id` in the JWT ensures users only see data from their organization, without needing to pass it from the client. +For apps where users belong to an organization (or company, team, workspace, etc.), use JWT claims to scope data. The `org_id` in the JWT ensures users only see data from their organization, without needing to pass it from the client. ```yaml streams: @@ -107,25 +107,6 @@ await Promise.all([ If multiple streams share the same filtering logic, consider using [CTEs](/sync/streams/ctes) to avoid repetition and [multiple queries per stream](/sync/streams/queries#multiple-queries-per-stream) to reduce bucket count. -### Conditional Global Data - -Sometimes you want to sync data to all users who meet certain criteria, but not to others. Use `EXISTS` to check a condition before syncing any rows. - -```yaml -streams: - # Only sync admin settings to users who are admins - admin_settings: - query: | - SELECT * FROM admin_settings - WHERE EXISTS ( - SELECT 1 FROM users - WHERE id = auth.user_id() AND is_admin = true - ) - auto_subscribe: true -``` - -The `EXISTS` clause acts as a gate: if the user is an admin, all rows from `admin_settings` sync. If not, no rows sync. This is useful for feature flags, admin panels, or premium content. - ### User's Default or Primary Item When users have a "default" or "primary" item stored in their profile, you can sync related data automatically without the client needing to know the ID upfront. diff --git a/sync/streams/parameters.mdx b/sync/streams/parameters.mdx index ece2efe0..076d7f0e 100644 --- a/sync/streams/parameters.mdx +++ b/sync/streams/parameters.mdx @@ -83,7 +83,7 @@ For most use cases, subscription parameters are the best choice. They're more fl ## Expanding JSON Arrays -If your JWT or connection parameters contain an array of values (like project IDs), you can expand them to filter data. 
There are three equivalent ways to write this: +If a user's JWT contains an array of IDs (e.g., `{ "project_ids": ["proj-1", "proj-2", "proj-3"] }`), you can expand it to sync all matching records. The example below syncs all three projects to the user's device. **Shorthand syntax** (recommended): diff --git a/sync/supported-sql.mdx b/sync/supported-sql.mdx index dbed1ed9..4d73788a 100644 --- a/sync/supported-sql.mdx +++ b/sync/supported-sql.mdx @@ -3,23 +3,35 @@ title: "Supported SQL" description: SQL syntax, operators, and functions supported in Sync Streams and Sync Rules queries. --- -This page documents the SQL syntax and functions supported in both Sync Streams and Sync Rules (legacy). - - -**Sync Streams** have some additional capabilities not available in Sync Rules, such as limited subqueries and `IN (SELECT ...)` syntax. See the [Sync Streams documentation](/sync/streams/overview) for details. - +This page documents the SQL syntax and functions supported in Sync Streams and Sync Rules (legacy). ## Query Syntax -The supported SQL is based on a small subset of the SQL standard syntax. +The supported SQL is based on a subset of the SQL standard syntax. Sync Streams support more SQL features than Sync Rules. + + + +1. `SELECT` statements with column selection and `WHERE` filtering +2. Subqueries with `IN (SELECT ...)`, including nested subqueries +3. `JOIN` / `INNER JOIN` for traversing relationships (selected columns must come from a single table) +4. Common Table Expressions (CTEs) via the `with:` block +5. Table-valued functions like `json_each()` for expanding arrays +6. A limited set of operators and functions — see below + +No aggregation, sorting, or set operations (`GROUP BY`, `ORDER BY`, `LIMIT`, `UNION`, etc.). + +See [Writing Queries](/sync/streams/queries) for details and examples. + + +1. Simple `SELECT` statements +2. `WHERE` filtering with `=`, `IN`, and `IS NULL` on parameters +3. A limited set of operators and functions — see below -Notable features and restrictions: +No subqueries, JOINs, CTEs, aggregation, or sorting. -1. Only simple `SELECT` statements are supported. -2. No `JOIN`, `GROUP BY` or other aggregation, `ORDER BY`, or `LIMIT` are supported in basic queries. -3. **Sync Streams**: Limited subqueries with `IN (SELECT ...)` are supported. -4. **Sync Rules**: No subqueries are supported. For token parameters, only `=` operators are supported, and `IN` to a limited extent. -5. A limited set of operators and functions are supported — see below. +See [Sync Rules documentation](/sync/rules/overview) for details. + + ## Operators and Functions @@ -63,7 +75,7 @@ Some fundamental restrictions on these operators and functions are: | base64(data) | Convert blob or text data to base64 text. | | [length(data)](https://www.sqlite.org/lang_corefunc.html#length) | For text, return the number of characters. For blob, return the number of bytes. For null, return null. For integer and real, convert to text and return the number of characters. | | [typeof(data)](https://www.sqlite.org/lang_corefunc.html#typeof) | text, integer, real, blob or null | -| [json\_each(data)](https://www.sqlite.org/json1.html#jeach) | Expands a JSON array or object from a request or token parameter into a set of parameter rows. Example: `SELECT value as project_id FROM json_each(request.jwt() -> 'project_ids'` | +| [json\_each(data)](https://www.sqlite.org/json1.html#jeach) | Expands a JSON array into rows. 
**Sync Streams**: Can be used as a table-valued function with JOIN syntax (e.g., `JOIN json_each(auth.parameter('ids')) AS t`). **Sync Rules**: Used in parameter queries to expand arrays. Only works with auth/connection parameters, not on columns from joined tables. | | [json\_extract(data, path)](https://www.sqlite.org/json1.html#jex) | Same as `->>` operator, but the path must start with `$.` | | [json\_array\_length(data)](https://www.sqlite.org/json1.html#jarraylen) | Given a JSON array (as text), returns the length of the array. If data is null, returns null. If the value is not a JSON array, returns 0. | | [json\_valid(data)](https://www.sqlite.org/json1.html#jvalid) | Returns 1 if the data can be parsed as JSON, 0 otherwise. |