diff --git a/docs/workbench/axonops-integration.md b/docs/workbench/axonops-integration.md new file mode 100644 index 000000000..58978aff1 --- /dev/null +++ b/docs/workbench/axonops-integration.md @@ -0,0 +1,128 @@ +--- +title: "AxonOps Integration" +description: "Link AxonOps Workbench to AxonOps monitoring dashboards. Deep links to cluster, keyspace, and table views." +meta: + - name: keywords + content: "AxonOps integration, monitoring, dashboards, cluster monitoring, AxonOps Workbench" +--- + +# AxonOps Integration + +AxonOps Workbench can link directly to [AxonOps](https://axonops.com){:target="_blank"} monitoring dashboards, giving you one-click access to cluster, keyspace, and table-level metrics from within the workbench. When enabled, an AxonOps tab appears in each connection's work area, embedding the relevant dashboard view right alongside your CQL console. + +## What is AxonOps? + +AxonOps is a monitoring and management platform purpose-built for Apache Cassandra. It provides real-time metrics, alerting, backup management, and operational insights for your clusters. For full documentation, see the [AxonOps Documentation](https://docs.axonops.com){:target="_blank"}. + +The integration in AxonOps Workbench creates deep links between your development environment and your monitoring dashboards, so you can investigate performance characteristics and validate the impact of schema changes without leaving the workbench. + +## Enabling the Integration + +The AxonOps integration is controlled at two levels: a global toggle in application settings, and per-workspace or per-connection overrides. + +### Global Setting + +1. Open **Settings** from the navigation sidebar. +2. Under the **Features** section, locate the **AxonOps Integration** checkbox. +3. Enable the checkbox to activate the integration across all workspaces. + + + +!!! info + The AxonOps integration is enabled by default. You can disable it globally if your environment does not use AxonOps monitoring. 
+ +### Per-Workspace and Per-Connection Control + +Even with the global setting enabled, the integration can be toggled on or off for individual workspaces and connections. This allows you to enable monitoring links only for the clusters that are registered with AxonOps. + +## Configuring a Connection for AxonOps + +Each connection has an **AxonOps** tab in the connection dialog where you provide the details needed to build dashboard links. + + + +### Configuration Fields + +| Field | Description | +|-------|-------------| +| **AxonOps Organization** | Your organization name in AxonOps. This corresponds to the organization segment in your AxonOps dashboard URL. | +| **AxonOps Cluster Name** | The name of the cluster as registered in AxonOps. This must match exactly. | +| **AxonOps URL** | Choose between **AxonOps Cloud** (the managed SaaS platform) or **AxonOps Self-Host** (your own AxonOps installation). | + +### AxonOps URL Options + +- **AxonOps Cloud** -- Uses the default AxonOps Cloud endpoint (`https://dash.axonops.cloud`). Select this if you are using the managed AxonOps Cloud platform. The URL is set automatically and cannot be changed. +- **AxonOps Self-Host** -- Allows you to specify a custom URL for your self-hosted AxonOps installation. Enter the protocol (e.g., `https`) and hostname (e.g., `axonops.internal.company.com`) in the provided fields. 
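These three fields are the building blocks of the dashboard links that Workbench generates (the URL patterns are listed under Deep Links on this page). As a purely illustrative sketch -- the organization, cluster, and hostname values below are made-up placeholders -- the cluster-overview link for a self-hosted installation is assembled like this:

```bash
# Illustration only: how the configuration fields combine into a deep link.
# All values below are hypothetical examples, not real endpoints.
AXONOPS_URL="https://axonops.internal.company.com"   # Self-Host protocol + hostname
ORG="mycompany"                                      # AxonOps Organization
CLUSTER_NAME="production-cluster"                    # AxonOps Cluster Name

# Cluster-overview deep link
echo "${AXONOPS_URL}/${ORG}/cassandra/${CLUSTER_NAME}/deeplink/dashboard/cluster"
```

Keyspace- and table-level links follow the same pattern with `keyspace` and `scope` query parameters appended, which is why the cluster name must match the AxonOps registration exactly.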
+ +### Example Configuration + +For a cluster named `production-cluster` in the organization `mycompany` on AxonOps Cloud: + +| Field | Value | +|-------|-------| +| AxonOps Organization | `mycompany` | +| AxonOps Cluster Name | `production-cluster` | +| AxonOps URL | AxonOps Cloud | + +For a self-hosted AxonOps instance: + +| Field | Value | +|-------|-------| +| AxonOps Organization | `mycompany` | +| AxonOps Cluster Name | `staging-cluster` | +| AxonOps URL | AxonOps Self-Host: `https://axonops.internal.company.com` | + +## Deep Links + +Once configured, AxonOps Workbench generates context-aware deep links that open the appropriate dashboard view. These links are available through the AxonOps integration tab in the connection work area, and through context actions in the schema browser. + +### Cluster Overview + +Navigate to the overall cluster dashboard showing health, node status, and aggregate metrics. + +**URL pattern:** + +``` +{AxonOps URL}/{ORG}/cassandra/{CLUSTER_NAME}/deeplink/dashboard/cluster +``` + +### Keyspace-Level Metrics + +View metrics scoped to a specific keyspace, including read/write latencies, partition sizes, and table-level breakdowns. + +**URL pattern:** + +``` +{AxonOps URL}/{ORG}/cassandra/{CLUSTER_NAME}/deeplink/dashboard/keyspace?keyspace={KEYSPACE_NAME}&scope=.* +``` + +### Table-Level Metrics + +Drill down to metrics for a specific table within a keyspace, such as SSTable counts, compaction statistics, and per-table latencies. + +**URL pattern:** + +``` +{AxonOps URL}/{ORG}/cassandra/{CLUSTER_NAME}/deeplink/dashboard/keyspace?keyspace={KEYSPACE_NAME}&scope={TABLE_NAME} +``` + +!!! tip + Right-click a keyspace or table in the schema browser to access AxonOps deep links directly from the context menu. + +## Use Cases + +### Investigating Slow Queries + +When a query takes longer than expected in the CQL console, switch to the AxonOps integration tab to check read and write latencies for the relevant table. 
The table-level deep link takes you directly to the metrics dashboard for that specific table, where you can correlate latency spikes with compaction activity or cluster-wide events. + +### Verifying Schema Change Impact + +After running an `ALTER TABLE` or `CREATE INDEX` statement, use the keyspace-level deep link to monitor the impact on read/write performance and SSTable counts. This helps confirm that the schema change is behaving as expected before rolling it out to other environments. + +### Development Clusters with Monitoring + +For teams that run AxonOps on their development or staging clusters, the integration provides a seamless workflow: write and test queries in the CQL console, then immediately check how those queries affect cluster metrics -- all within the same application window. + +### Comparing Environments + +If you maintain connections to multiple clusters (development, staging, production), each with its own AxonOps configuration, you can quickly switch between dashboard views to compare metrics across environments without leaving the workbench. diff --git a/docs/workbench/cassandra/cassandra.md b/docs/workbench/cassandra/cassandra.md deleted file mode 100644 index 1dbea026d..000000000 --- a/docs/workbench/cassandra/cassandra.md +++ /dev/null @@ -1,28 +0,0 @@ ---- -title: "Cassandra Workbench" -description: "AxonOps Workbench for Cassandra. Desktop application for database management." -meta: - - name: keywords - content: "AxonOps Workbench, Cassandra GUI, desktop application, database management" ---- - -# Cassandra Workbench - -AxonOps Workbench is a desktop client for Cassandra that provides a visual interface for browsing schemas, running CQL, and managing connections. - -## Supported Platforms - -- Windows -- macOS -- Linux - -## Getting Started - -1. Install AxonOps Workbench for your OS. -2. Launch the application and create a new connection. -3. Provide the cluster contact points and authentication details. -4. 
Save the connection and open a CQL editor. - -## Licensing - -See [AxonOps Workbench License](license.md) for licensing details. diff --git a/docs/workbench/cassandra/license.md b/docs/workbench/cassandra/license.md deleted file mode 100644 index 0049620a2..000000000 --- a/docs/workbench/cassandra/license.md +++ /dev/null @@ -1,15 +0,0 @@ ---- -title: "AxonOps Workbench License" -description: "AxonOps Workbench licensing. Source code license and terms." -meta: - - name: keywords - content: "Workbench license, AxonOps licensing, activation" ---- - -# AxonOps Workbench License - -AxonOps Workbench releases are made available under the [Apache License 2.0](https://github.com/axonops/axonops-workbench-cassandra/blob/main/LICENSE){:target="_blank"}. - -## Source Code License - -Source code for AxonOps Workbench is available at [https://github.com/axonops/axonops-workbench-cassandra](https://github.com/axonops/axonops-workbench-cassandra){:target="_blank"} under the [Apache License 2.0](https://github.com/axonops/axonops-workbench-cassandra/blob/main/LICENSE){:target="_blank"}. diff --git a/docs/workbench/cli.md b/docs/workbench/cli.md new file mode 100644 index 000000000..bbe8a73de --- /dev/null +++ b/docs/workbench/cli.md @@ -0,0 +1,352 @@ +--- +title: "CLI Reference" +description: "AxonOps Workbench command-line interface reference. Manage workspaces and connections, automate setup, and run CQL sessions from the terminal." +meta: + - name: keywords + content: "CLI, command line, automation, headless, import workspace, import connection, cqlsh, AxonOps Workbench" +--- + +# CLI Reference + +AxonOps Workbench includes a built-in command-line interface (CLI) for managing workspaces and connections, automating environment setup, and launching interactive CQL sessions -- all without opening the graphical interface. + +When you pass a supported argument to the workbench executable, the application automatically switches to CLI mode. 
Without any arguments, the regular GUI starts as usual. + +```bash +./axonops-workbench -v # CLI mode +./axonops-workbench # Regular GUI mode +``` + +!!! note "Argument value syntax" + When passing a value to an argument, you **must** use the equals sign (`=`). For example: + + ```bash + ./axonops-workbench --list-connections workspace-0b5d20cb08 # Incorrect + ./axonops-workbench --list-connections=workspace-0b5d20cb08 # Correct + ``` + +!!! note "Windows users" + On Windows, the shell may show the prompt immediately without waiting for Workbench to finish. Run it as follows to ensure the shell waits until completion: + + ```bash + start /wait "" "AxonOps Workbench.exe" + ``` + +## Arguments Reference + +| Argument | Value | Description | +|----------|-------|-------------| +| `--help`, `-h` | -- | Print all supported arguments. | +| `--version`, `-v` | -- | Print the current version of AxonOps Workbench. | +| `--list-workspaces` | -- | List all saved workspaces without their connections. | +| `--list-connections` | Workspace ID | List all saved connections in a specific workspace. | +| `--import-workspace` | JSON string, file path, or folder path | Import a workspace from inline JSON, a JSON file, or a workspace folder. | +| `--import-connection` | JSON file path | Import a connection from a JSON file. Supports SSH tunnel info and cqlsh.rc file paths. | +| `--connect` | Connection ID | Connect to a saved connection and start an interactive CQL session. | +| `--json` | -- | Output results as JSON instead of formatted text. Works with `--list-workspaces`, `--list-connections`, `--import-workspace`, and `--import-connection`. | +| `--delete-file` | -- | Delete the source file after a successful import. Ignored when `--import-workspace` receives a folder path. | +| `--test-connection` | `true` or `false` | Test a connection before importing it. With `true`, a failed test stops the import. With `false` or no value, the import continues regardless. 
| +| `--copy-to-default` | -- | When `--import-workspace` receives a folder path, copy the workspace to the default data directory instead of leaving it in its original location. | + +## Workspace Operations + +### Listing Workspaces + +Retrieve a table of all saved workspaces along with their IDs and connection counts: + +```bash +./axonops-workbench --list-workspaces +``` + +### Importing a Workspace + +The `--import-workspace` argument accepts three types of input: + +- **Inline JSON** -- pass a JSON string directly. +- **File path** -- pass an absolute path to a file containing valid JSON. +- **Folder path** -- pass an absolute path to a single workspace folder, or a parent folder containing multiple workspace folders (one depth level). When importing from a folder, all connections within that workspace are imported automatically. + +**Workspace JSON structure:** + +```json +{ + "name": "", + "color": "", + "defaultPath": "", + "path": "" +} +``` + +| Field | Required | Description | +|-------|----------|-------------| +| `name` | Yes | The workspace's unique name. Duplicate or invalid names cause an error. | +| `color` | No | The workspace's accent color, in any CSS color format (HEX, RGB, HSL, etc.). | +| `defaultPath` | No | Set to `true` to store workspace data in the default location, or `false` to use a custom path. Defaults to `false`. | +| `path` | No | An absolute path where the workspace data folder will be created. Only used when `defaultPath` is `false`. 
| + +**Examples:** + +```bash +# Import from inline JSON +./axonops-workbench --import-workspace='{"name":"Production", "color":"#FF5733"}' + +# Import from a JSON file +./axonops-workbench --import-workspace=/path/to/workspace.json + +# Import from a folder and copy to the default data directory +./axonops-workbench --import-workspace=/path/to/workspaces/ --copy-to-default + +# Import from a file and delete it after success +./axonops-workbench --import-workspace=/path/to/workspace.json --delete-file +``` + +!!! warning + If a workspace with the same name already exists, the import process is terminated. Ensure workspace names are unique before importing. + +## Connection Operations + +### Listing Connections + +Retrieve all connections in a specific workspace by passing the workspace ID: + +```bash +./axonops-workbench --list-connections=workspace-0b5d20cb08 +``` + +!!! tip + Run `--list-workspaces` first to obtain workspace IDs. + +### Importing a Connection + +Import a connection by passing the absolute path to a JSON file. The file must contain valid JSON in one of the structures shown below. + +```bash +./axonops-workbench --import-connection=/path/to/connection.json +``` + +**Apache Cassandra connection JSON structure:** + +```json +{ + "basic": { + "workspace_id": "", + "name": "", + "datacenter": "", + "hostname": "", + "port": "", + "timestamp_generator": "", + "cqlshrc": "" + }, + "auth": { + "username": "", + "password": "" + }, + "ssl": { + "ssl": "", + "certfile": "", + "userkey": "", + "usercert": "", + "validate": "" + }, + "ssh": { + "host": "", + "port": "", + "username": "", + "password": "", + "privatekey": "", + "passphrase": "", + "destaddr": "", + "destport": "" + } +} +``` + +| Section | Field | Required | Description | +|---------|-------|----------|-------------| +| `basic` | `workspace_id` | Yes | ID of the target workspace. | +| `basic` | `name` | Yes | Unique connection name within the workspace. 
| +| `basic` | `hostname` | Yes | Hostname or IP address of the Cassandra node. | +| `basic` | `datacenter` | No | Datacenter to set when activating the connection. | +| `basic` | `port` | No | Connection port. Defaults to `9042`. | +| `basic` | `timestamp_generator` | No | Timestamp generator class for the connection. Leave empty for the default. | +| `basic` | `cqlshrc` | No | Absolute path to a `cqlsh.rc` configuration file. | +| `auth` | `username` | No | Cassandra authentication username. | +| `auth` | `password` | No | Cassandra authentication password. | +| `ssl` | `ssl` | No | Enable SSL/TLS for the connection. | +| `ssl` | `certfile` | No | Path to the CA certificate file. | +| `ssl` | `userkey` | No | Path to the user private key file. | +| `ssl` | `usercert` | No | Path to the user certificate file. | +| `ssl` | `validate` | No | Enable certificate validation. | +| `ssh` | `host` | No | SSH tunnel hostname. | +| `ssh` | `port` | No | SSH tunnel port. | +| `ssh` | `username` | No | SSH username. | +| `ssh` | `password` | No | SSH password. | +| `ssh` | `privatekey` | No | Path to the SSH private key file. | +| `ssh` | `passphrase` | No | Passphrase for the SSH private key. | +| `ssh` | `destaddr` | No | Destination address for the tunnel. | +| `ssh` | `destport` | No | Destination port for the tunnel. | + +!!! info + If authentication credentials exist in a referenced `cqlsh.rc` file, they are ignored. Always provide `username` and `password` in the JSON structure. + +**DataStax Astra DB connection JSON structure:** + +```json +{ + "workspace_id": "", + "name": "", + "username": "clientId in AstraDB", + "password": "secret in AstraDB", + "scb_path": "" +} +``` + +| Field | Required | Description | +|-------|----------|-------------| +| `workspace_id` | Yes | ID of the target workspace. | +| `name` | Yes | Unique connection name within the workspace. | +| `username` | Yes | The Astra DB Client ID. | +| `password` | Yes | The Astra DB Client Secret. 
| +| `scb_path` | Yes | Absolute path to the Secure Connect Bundle (`.zip` file). | + +### Testing a Connection Before Import + +Add `--test-connection` to validate connectivity before finalizing the import: + +```bash +# Stop the import if the connection test fails +./axonops-workbench --import-connection=/path/to/connection.json --test-connection=true + +# Continue importing even if the test fails (feedback is still printed) +./axonops-workbench --import-connection=/path/to/connection.json --test-connection=false +``` + +When `--test-connection` is passed without a value, it defaults to `false` -- the import continues regardless of test outcome, but the result is printed to the terminal. + +## Interactive CQL Sessions + +The `--connect` argument launches a full interactive CQLsh session directly from your terminal, with no GUI required. Pass the connection ID to connect immediately: + +```bash +./axonops-workbench --connect=connection-abc123 +``` + +The workbench handles all connection complexity -- authentication, SSL/TLS, and SSH tunnels -- automatically before dropping you into the CQLsh prompt. Progress is displayed in the terminal, and you have full access to execute CQL commands interactively. + +!!! tip + To find connection IDs, run `--list-connections` with the target workspace ID. + +## JSON Output Mode + +Add the `--json` flag to any listing or import command to receive machine-readable JSON output instead of formatted tables. 
This is particularly useful for scripting and automation: + +```bash +# List workspaces as JSON +./axonops-workbench --list-workspaces --json + +# List connections as JSON +./axonops-workbench --list-connections=workspace-0b5d20cb08 --json + +# Import a workspace with JSON output +./axonops-workbench --import-workspace='{"name":"Staging"}' --json +``` + +## Headless Linux + +To use AxonOps Workbench on a headless Linux host (no display server), install and run `xvfb` to provide a virtual framebuffer: + +| Distribution | Package Name | Install Command | +|---|---|---| +| Ubuntu / Debian | `xvfb` | `sudo apt install xvfb` | +| RHEL / CentOS | `xorg-x11-server-Xvfb` | `sudo yum install xorg-x11-server-Xvfb` | +| Arch Linux | `xorg-server-xvfb` | `sudo pacman -S xorg-server-xvfb` | +| Alpine | `xvfb` | `apk add xvfb` | + +Before running the workbench, start the virtual display: + +```bash +Xvfb :99 -screen 0 1280x720x24 & export DISPLAY=:99 +``` + +Then run any CLI command as usual: + +```bash +./axonops-workbench --list-workspaces +``` + +## Automation Examples + +### Batch Workspace Setup + +The following script creates a workspace, retrieves its ID, imports a connection, and validates it -- all in one pass: + +```bash +#!/bin/bash +set -e + +WORKBENCH="./axonops-workbench" + +# Step 1: Import a workspace +echo "Creating workspace..." +$WORKBENCH --import-workspace='{"name":"Production", "color":"#2E86AB"}' + +# Step 2: Retrieve the workspace ID +echo "Retrieving workspace ID..." +WORKSPACE_ID=$($WORKBENCH --list-workspaces --json | \ + python3 -c "import sys,json; ws=json.load(sys.stdin); print([w['id'] for w in ws if w['name']=='Production'][0])") +echo "Workspace ID: $WORKSPACE_ID" + +# Step 3: Write the connection JSON file +cat > /tmp/connection.json < + +4. Fill in the connection fields: + + | Field | Description | + | --- | --- | + | **Connection Name** | A descriptive label for this connection (e.g., `Production - Astra DB`). 
| + | **Username (Client ID)** | The Client ID from your Astra DB application token. | + | **Password (Client Secret)** | The Client Secret from your Astra DB application token. | + | **Secure Connection Bundle** | The path to your downloaded SCB ZIP file. Click the field to open a file browser, or drag and drop the ZIP file onto it. | + + + +5. Click **Test Connection** to verify that the Workbench can authenticate and reach your Astra DB database. A successful test confirms that the bundle, Client ID, and Client Secret are all correct. +6. Once the test passes, click **Save** to store the connection. + +!!! note + All four fields are required. If any field is left empty, the test connection button will highlight the missing fields and display an error message. + +--- + +## Editing an Existing Connection + +To modify an Astra DB connection after it has been saved: + +1. Right-click on the connection in the sidebar and select **Edit**. +2. The connection dialog opens with the existing values pre-filled, including the path to the Secure Connect Bundle. +3. Update any fields as needed -- for example, to point to a new SCB file after rotating your database credentials. +4. Click **Test Connection** to verify the updated settings, then click **Save**. + +--- + +## Credential Security + +Astra DB credentials are protected using the same security mechanisms as standard Cassandra connections: + +- **OS Keychain Storage** -- Your Client ID and Client Secret are stored in the operating system's native credential manager (macOS Keychain, Windows Credential Manager, or Linux libsecret). They are never written to plain-text configuration files on disk. +- **Per-Installation RSA Encryption** -- Each Workbench installation generates a unique RSA key pair. Credentials are encrypted with this key before being passed to the connection layer, providing an additional layer of protection at rest. 
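Because Workbench reads the Secure Connect Bundle from its path on disk, a bundle that has been accidentally unzipped or corrupted is a common cause of TLS failures at connection time. A quick, hypothetical pre-flight check you can run yourself (the path is a placeholder; Workbench does not run this for you):

```bash
# Sanity-check that the Secure Connect Bundle is still a valid ZIP archive.
# Hypothetical path -- substitute the location of your own bundle.
SCB="/path/to/secure-connect-mydb.zip"

if python3 -m zipfile -l "$SCB" > /dev/null 2>&1; then
  echo "bundle OK"
else
  echo "bundle missing or corrupted -- re-download it from the Astra DB console"
fi
```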
+ +The Secure Connect Bundle path is stored in the connection's metadata, but the bundle itself remains at its original location on your file system. AxonOps Workbench reads the bundle at connection time and does not copy or modify it. + +--- + +## Working with Astra DB + +Once connected, Astra DB databases behave like any other Cassandra connection within AxonOps Workbench. The following features are fully supported: + +- **CQL Console** -- Write and execute CQL statements with syntax highlighting and auto-completion. +- **Schema Browser** -- Navigate keyspaces, tables, columns, indexes, materialized views, and user-defined types. +- **Query Execution** -- Run SELECT, INSERT, UPDATE, and DELETE statements against your Astra DB tables. +- **Query Tracing** -- Enable tracing on individual queries to analyze execution performance and identify bottlenecks. +- **Result Export** -- Export query results to CSV, JSON, or other supported formats. +- **Schema Snapshots** -- Save and compare point-in-time snapshots of your database schema. +- **CQL Descriptions** -- Generate DDL statements for keyspaces and tables. + +!!! info + Astra DB connections do not display a host and port in the connection list because the endpoint information is embedded within the Secure Connect Bundle. + +--- + +## Limitations + +Astra DB is a managed service, and DataStax enforces certain restrictions at the platform level. These are not limitations of AxonOps Workbench itself, but they affect what operations you can perform through the Workbench: + +- **DDL Restrictions** -- Some schema operations (such as creating or dropping keyspaces) may be restricted depending on your Astra DB plan and permissions. Table-level DDL (CREATE TABLE, ALTER TABLE, DROP TABLE) is generally available within your assigned keyspaces. +- **No SSH Tunneling** -- Astra DB connections use the Secure Connect Bundle for encrypted transport. The SSH Tunnel tab in the connection dialog does not apply to Astra DB connections. 
+- **No Custom SSL Configuration** -- TLS encryption is handled entirely by the Secure Connect Bundle. There is no need (or ability) to configure separate CA certificates or client certificates for Astra DB connections. +- **Serverless Availability** -- Astra DB serverless databases may pause after a period of inactivity. If your database is paused, you may need to resume it from the Astra DB console before connecting. +- **Permission-Dependent Features** -- The features available to you depend on the role assigned to your application token. A token with limited permissions may not be able to perform certain schema or data operations. + +--- + +## Troubleshooting + +| Symptom | Possible Cause | Resolution | +| --- | --- | --- | +| Connection test fails immediately | Missing or incorrect Client ID / Client Secret | Verify your token credentials in the Astra DB console. Regenerate the token if the Client Secret was lost. | +| Connection test fails with a TLS error | Corrupted or extracted SCB file | Re-download the Secure Connect Bundle from the Astra DB console. Ensure the file is a `.zip` archive and has not been unzipped. | +| Connection test times out | Database is paused or network restrictions | Check the Astra DB console to confirm your database is active. Verify that outbound HTTPS traffic is allowed from your workstation. | +| Schema operations return permission errors | Insufficient token role | Generate a new application token with the appropriate role (e.g., **Database Administrator**) for the operations you need to perform. | +| "All fields are required" error | One or more form fields are empty | Ensure that Connection Name, Client ID, Client Secret, and the SCB file path are all provided. 
| diff --git a/docs/workbench/connections/cassandra.md b/docs/workbench/connections/cassandra.md new file mode 100644 index 000000000..c871d3d27 --- /dev/null +++ b/docs/workbench/connections/cassandra.md @@ -0,0 +1,228 @@ +--- +title: "Apache Cassandra Connections" +description: "Connect to Apache Cassandra and DataStax Enterprise clusters. Configure authentication, consistency levels, cqlsh.rc, and connection variables." +meta: + - name: keywords + content: "Cassandra connection, DSE, authentication, consistency level, cqlsh.rc, variables, AxonOps Workbench" +--- + +# Apache Cassandra Connections + +AxonOps Workbench connects directly to Apache Cassandra and DataStax Enterprise clusters using the CQL native protocol. The connection dialog provides a structured interface for configuring your cluster endpoint, authentication credentials, consistency levels, and the underlying `cqlsh.rc` configuration -- all from a single window. + +## Basic Connection + +The **Basic** tab of the connection dialog contains the core parameters needed to reach your cluster. + +| Field | Description | Required | +| --- | --- | --- | +| **Connection Name** | A human-readable label for this connection (e.g., `Production Node 1`). Must be unique within the workspace. | Yes | +| **Datacenter** | The Cassandra datacenter name to connect to (e.g., `datacenter1`). | No | +| **Hostname** | The IP address or hostname of a Cassandra node (e.g., `192.168.0.10`). | Yes | +| **Port** | The CQL native transport port. Defaults to `9042`. | Yes | +| **Connection Time** | Controls the timestamp generator used during insert operations. Options are **Client-Side Time** (desktop time, using `MonotonicTimestampGenerator`), **Server-Side Time** (Cassandra server time, using `None`), or **Not Set**. | No | +| **Page Size** | The default number of rows returned per page for query results. Defaults to `100`. 
| No | + + + +### Testing a Connection + +Before saving, click the **Test Connection** button in the bottom-right corner of the dialog. Workbench will attempt to establish a CQL session using the current settings -- including any authentication, SSH tunnel, or SSL configuration you have provided. + +- A spinning indicator appears while the connection attempt is in progress. +- If the test succeeds, a confirmation message is displayed. +- If the test fails, an error message describes what went wrong (unreachable host, authentication failure, SSL handshake error, etc.). +- You can click the **Terminate** button to cancel a test that is taking too long. + +!!! tip + Always test your connection before saving. This catches configuration mistakes early and avoids troubleshooting failed connections after the fact. + +## Authentication + +The **Authentication** tab lets you provide credentials for clusters that have Cassandra native authentication enabled (the `PasswordAuthenticator`). + +| Field | Description | +| --- | --- | +| **Username** | The Cassandra username (e.g., `cassandra`). | +| **Password** | The corresponding password. A toggle button lets you reveal or hide the password text. | +| **Save authentication credentials locally** | When checked (the default), credentials are persisted securely on your machine so you do not have to re-enter them each time you connect. | + +### Credential Security + +AxonOps Workbench never stores credentials in plain-text configuration files. Instead, it uses a two-layer security model: + +- **OS Keychain Storage** -- Credentials are stored in your operating system's native credential manager using the [keytar](https://github.com/atom/node-keytar){:target="_blank"} library. +- **RSA Encryption** -- Each Workbench installation generates a unique RSA key pair. Sensitive connection data is encrypted at rest with this key before being passed to the keychain. + +!!! 
info "Keychain backends by operating system" + The underlying keychain backend depends on your platform: + + - **macOS** -- Keychain Services (the system Keychain) + - **Windows** -- Windows Credential Manager + - **Linux** -- libsecret (used by GNOME Keyring, KWallet, and other Secret Service implementations) + + On Linux, ensure that `libsecret` and a Secret Service provider (such as GNOME Keyring) are installed and running. Without one, credential storage will not function. + +## Consistency Levels + +When executing CQL statements in the CQL Console, you can set the consistency level for both regular and serial (lightweight transaction) operations. These settings appear in the query toolbar and control how many replicas must acknowledge a read or write before the operation is considered successful. + +### Regular Consistency Levels + +The following regular consistency levels are available: + +| Level | Description | +| --- | --- | +| `ANY` | A write must be written to at least one node (including hinted handoff). Not valid for reads. | +| `ONE` | A single replica must respond. | +| `TWO` | Two replicas must respond. | +| `THREE` | Three replicas must respond. | +| `QUORUM` | A majority of replicas across all datacenters must respond. | +| `LOCAL_QUORUM` | A majority of replicas in the local datacenter must respond. | +| `EACH_QUORUM` | A majority of replicas in each datacenter must respond. Only valid for writes. | +| `ALL` | All replicas must respond. | +| `LOCAL_ONE` | A single replica in the local datacenter must respond. | + +### Serial Consistency Levels + +Serial consistency levels apply only to lightweight transactions (`IF NOT EXISTS`, `IF` conditions): + +| Level | Description | +| --- | --- | +| `SERIAL` | Linearizable consistency across all datacenters. | +| `LOCAL_SERIAL` | Linearizable consistency within the local datacenter only. | + +!!! 
tip "When to change consistency levels" + The defaults (`LOCAL_ONE` for reads, `LOCAL_ONE` for writes, `LOCAL_SERIAL` for serial) are suitable for most development workflows. Consider changing them when: + + - You need **stronger consistency guarantees** for testing -- use `LOCAL_QUORUM` or `QUORUM` to simulate production read/write behavior. + - You are running **lightweight transactions** and need to control whether the serial phase spans all datacenters (`SERIAL`) or just the local one (`LOCAL_SERIAL`). + - You want to **verify replication** by reading at `ALL` to confirm data has reached every replica. + + Avoid using `ALL` for routine work, as a single unavailable replica will cause the operation to fail. + +## cqlsh.rc Configuration + +Every Cassandra connection in AxonOps Workbench is backed by a `cqlsh.rc` configuration file. This file controls low-level connection behavior including timeouts, SSL settings, COPY options, and pre/post-connect scripts. + +### Built-in Editor + +The connection dialog includes a built-in Monaco editor (the same editor that powers VS Code) for directly editing the `cqlsh.rc` content. You can toggle between the form view and the editor view using the **Switch Editor** button in the dialog footer. + + + +The editor provides: + +- Syntax highlighting for INI-style configuration +- The ability to toggle between the structured form and raw editor using the **Switch Editor** button +- An **Expand Editor** button to view and edit the configuration in a larger window + +### Sensitive Data Detection + +Workbench actively monitors the `cqlsh.rc` content for sensitive data. If it detects uncommented `username`, `password`, or `credentials` fields in the file, it highlights the offending lines in red and displays a warning glyph in the editor margin. + +!!! warning + Do not place usernames, passwords, or credentials directly in the `cqlsh.rc` file. 
Use the **Authentication** tab instead, which stores credentials securely in your OS keychain. If Workbench detects sensitive data in the editor, the affected lines are flagged to alert you. + +### Default cqlsh.rc Template + +When you create a new connection, Workbench populates the editor with a default `cqlsh.rc` template. This template includes all standard sections with their options commented out: + +| Section | Purpose | +| --- | --- | +| `[authentication]` | Credentials file path and default keyspace | +| `[auth_provider]` | Custom authentication provider class | +| `[ui]` | Display settings (colors, time format, timezone, float precision, encoding) | +| `[cql]` | CQL version and default page size | +| `[connection]` | Hostname, port, SSL toggle, timeouts, and timestamp generator | +| `[csv]` | Field size limit for CSV operations | +| `[tracing]` | Maximum trace wait time | +| `[ssl]` | Certificate file paths and TLS version | +| `[certfiles]` | Per-host certificate overrides | +| `[preconnect]` | Scripts to execute before connecting | +| `[postconnect]` | Scripts to execute after connecting | +| `[copy]` | Shared COPY TO / COPY FROM options | +| `[copy-to]` | COPY TO specific options | +| `[copy-from]` | COPY FROM specific options | + +The `[connection]` section is pre-configured with `hostname`, `port = 9042`, and `timestamp_generator = None` as active (uncommented) values. All other options are commented out by default and can be enabled as needed. + +### Custom cqlsh.rc Path + +Each connection stores its `cqlsh.rc` file within the connection's own folder structure inside the workspace directory, at the path: + +``` +//config/cqlsh.rc +``` + +When a connection is used to launch a CQL session, Workbench passes this file to cqlsh via the `--cqlshrc` flag, ensuring that each connection uses its own isolated configuration. + +## Connection Variables + +Variables allow you to parameterize values across your connections and `cqlsh.rc` files. 
Instead of hard-coding hostnames, ports, or other values that change between environments, you can define a variable once and reference it anywhere using the `${variable_name}` syntax. + +### What Variables Are + +A variable is a named placeholder with a value and a scope. When Workbench processes your `cqlsh.rc` files and SSH tunnel configurations, it replaces `${variable_name}` references with the corresponding values. This makes it straightforward to: + +- Share a single set of connection templates across development, staging, and production environments +- Change a hostname or port in one place and have it propagate to all connections that reference it +- Keep environment-specific values out of your `cqlsh.rc` files + +### Managing Variables + +Variables are managed in the **Settings** dialog under the **Variables** section. Each variable has three attributes: + +| Attribute | Description | +| --- | --- | +| **Name** | The variable identifier. Must contain only letters, digits, and underscores. Must start with a letter or underscore. | +| **Value** | The value that will be substituted wherever the variable is referenced. | +| **Scope** | Which workspaces the variable applies to. You can select **All Workspaces** or limit it to specific workspaces. | + +### Manifest and Values Files + +Variables are persisted in two separate stores within the OS keychain: + +- **Manifest** (`AxonOpsWorkbenchVarsManifest`) -- Contains variable names and their scope assignments. This is the "shape" of your variable set. +- **Values** (`AxonOpsWorkbenchVarsValues`) -- Contains the full variable objects including their actual values. + +This separation ensures that the structure of your variables can be inspected without exposing sensitive values. + +### Nested Variables + +Variable values can reference other variables. 
For example, if you define: + +- `cluster_host` = `192.168.1.100` +- `cluster_endpoint` = `${cluster_host}:9042` + +Workbench recursively resolves nested references, so `${cluster_endpoint}` evaluates to `192.168.1.100:9042`. The editor provides an eye toggle button next to variable values that contain nested references, allowing you to see the resolved value. + +### Collision Detection + +When you save variables, Workbench checks for collisions to prevent ambiguity: + +- **Name collisions** -- Two variables with the same name and overlapping scope are not allowed. +- **Value collisions** -- Two variables with the same value and overlapping scope are flagged, because Workbench would not be able to determine which variable name to substitute during reverse mapping. + +If a collision is detected, the conflicting fields are highlighted and a descriptive error message explains how to resolve the issue (either rename the variable or adjust its scope). + +### Variable Scoping + +Each variable can be scoped to: + +- **All Workspaces** -- The variable is available in every workspace. Selecting this option deselects all individual workspace selections. +- **Specific Workspaces** -- The variable is only available in the selected workspaces. If you subsequently select all workspaces individually, Workbench automatically switches back to the "All Workspaces" scope. + +When Workbench resolves variables for a given connection, it only considers variables whose scope includes the active workspace (or whose scope is set to "All Workspaces"). + +## DataStax Enterprise + +DataStax Enterprise (DSE) clusters use the same Apache Cassandra connection dialog. To connect to a DSE cluster: + +1. Open the connection dialog and select the **Apache Cassandra** connection type. +2. Enter the hostname and port of a DSE node in the **Basic** tab. +3. Provide authentication credentials in the **Authentication** tab if your DSE cluster has authentication enabled. +4. 
Configure SSH tunneling or SSL/TLS as needed. + +!!! note + DSE clusters expose the CQL native transport on the same default port (`9042`) as open-source Cassandra. AxonOps Workbench communicates with DSE through the standard CQL protocol, so no additional configuration is required beyond what you would provide for an Apache Cassandra connection. DSE-specific features such as DSE Graph or DSE Search are outside the scope of CQL connections. diff --git a/docs/workbench/connections/index.md b/docs/workbench/connections/index.md new file mode 100644 index 000000000..3e3c87da0 --- /dev/null +++ b/docs/workbench/connections/index.md @@ -0,0 +1,51 @@ +--- +title: "Connections" +description: "Connect AxonOps Workbench to Apache Cassandra clusters and DataStax Astra DB with SSH tunneling and SSL/TLS encryption support." +meta: + - name: keywords + content: "AxonOps Workbench connections, Cassandra connection, Astra DB connection, SSH tunnel, SSL TLS, database connection" +--- + +# Connections + +AxonOps Workbench supports multiple connection types for Apache Cassandra, including direct connections to on-premise or self-hosted clusters and managed cloud databases through DataStax Astra DB. Every connection method is backed by enterprise-grade security features such as SSH tunneling and full SSL/TLS certificate chain support. 
+ +## Connection Types + +| Type | Use Case | Authentication | Encryption | +| --- | --- | --- | --- | +| Apache Cassandra | Direct connection to on-premise or self-hosted clusters | Username / Password | Optional SSL/TLS | +| DataStax Astra DB | DataStax managed cloud Cassandra | Client ID / Secret | Built-in via Secure Connect Bundle | + +## Security Features + +AxonOps Workbench takes a defense-in-depth approach to securing your database credentials and network traffic: + +- **OS Keychain Storage** -- Credentials are stored in the operating system's native credential manager (macOS Keychain, Windows Credential Manager, or Linux libsecret) rather than in plain-text configuration files. +- **Per-Installation RSA Key Encryption** -- Each Workbench installation generates a unique RSA key pair used to encrypt sensitive connection data at rest. +- **SSH Tunneling** -- Connect to clusters that are not directly reachable from your workstation by routing traffic through an SSH bastion host. This is essential for network-restricted or private-subnet deployments. +- **SSL/TLS Certificate Chain Support** -- Configure trusted CA certificates, client certificates, and private keys for mutual TLS authentication with your cluster. + +## Connection Dialog + +When you create or edit a connection, the connection dialog presents five tabs: + +1. **Basic** -- Configure the core connection parameters including hostname, port, and datacenter name. +2. **Authentication** -- Provide your username and password for Cassandra native authentication, or your Client ID and Client Secret for Astra DB. +3. **SSH Tunnel** -- Set up an SSH tunnel by specifying the bastion host, SSH port, SSH username, and authentication method (password or private key). +4. **SSL** -- Enable SSL/TLS and configure your CA certificate, client certificate, and client private key for encrypted and mutually authenticated connections. +5. 
**AxonOps** -- Link this connection to an AxonOps Dashboard instance for integrated monitoring and management alongside your CQL workflow. + + + +!!! tip + Use the **Test Connection** button before saving to verify that your hostname, credentials, and any tunnel or SSL settings are configured correctly. This avoids troubleshooting failed connections after the fact. + +## Detailed Guides + +For step-by-step instructions on each connection type and security feature, see the following pages: + +- [Apache Cassandra](cassandra.md) -- Connect to self-hosted or on-premise Cassandra clusters. +- [DataStax Astra DB](astra-db.md) -- Connect to DataStax Astra DB using a Secure Connect Bundle. +- [SSH Tunneling](ssh-tunneling.md) -- Route connections through an SSH bastion host for network-restricted clusters. +- [SSL/TLS](ssl-tls.md) -- Configure SSL/TLS encryption and mutual certificate authentication. diff --git a/docs/workbench/connections/ssh-tunneling.md b/docs/workbench/connections/ssh-tunneling.md new file mode 100644 index 000000000..2d3948558 --- /dev/null +++ b/docs/workbench/connections/ssh-tunneling.md @@ -0,0 +1,169 @@ +--- +title: "SSH Tunneling" +description: "Connect to Cassandra clusters through SSH tunnels using AxonOps Workbench. Configure bastion hosts with password or key-based authentication." +meta: + - name: keywords + content: "SSH tunnel, bastion host, secure connection, private key, Cassandra SSH, AxonOps Workbench" +--- + +# SSH Tunneling + +Many production Cassandra clusters are deployed in private networks that are not directly reachable from a developer workstation. Firewalls, VPNs, and cloud VPC boundaries all create legitimate barriers between your machine and the database nodes. AxonOps Workbench solves this with built-in SSH tunneling, allowing you to route your CQL connection through an intermediary host -- commonly called a bastion host or jump server -- that has network access to both your workstation and the Cassandra cluster. 
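Conceptually, the tunnel Workbench establishes is the same as manual SSH local port forwarding. A sketch for understanding and debugging only -- the bastion host `bastion.example.com`, node address `10.0.1.5`, and user `admin` are hypothetical placeholders:

```bash
# Forward local port 9042 through the bastion to the Cassandra node's CQL port.
# -N opens the tunnel without running a remote command; -L sets up the forward.
ssh -N -L 9042:10.0.1.5:9042 admin@bastion.example.com

# While the tunnel is up, cqlsh in another terminal connects via the local end:
# cqlsh 127.0.0.1 9042
```

Workbench automates this flow, including allocating a free local port, so you never need to run such a command yourself.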
+ +## When to Use SSH Tunneling + +SSH tunneling is the right choice when: + +- Your Cassandra nodes reside in a private subnet with no public IP addresses. +- A firewall or security group blocks direct access to port 9042 from your network. +- Your organization mandates that all database access passes through a hardened bastion host for auditing and access control. +- You are connecting across cloud VPCs, on-premise data centers, or hybrid environments where direct routing is unavailable. + +## How It Works + +When you enable SSH tunneling, AxonOps Workbench establishes the connection in two stages: + +```plantuml +@startuml +skinparam backgroundColor transparent +skinparam defaultFontName Arial +skinparam defaultFontSize 12 +skinparam componentStyle rectangle +skinparam shadowing false + +skinparam component { + BackgroundColor #F5F5F5 + BorderColor #333333 + FontColor #333333 +} + +skinparam arrow { + Color #333333 + FontColor #333333 +} + +component "AxonOps Workbench\n(your machine)" as workbench +component "Bastion Host\n(jump server)" as bastion +component "Cassandra Cluster\n(private network)" as cassandra + +workbench -right-> bastion : "SSH :22" +bastion -right-> cassandra : "CQL :9042" +@enduml +``` + +1. **SSH connection** -- Workbench opens an encrypted SSH session to the bastion host using the credentials you provide (password or private key). +2. **Port forwarding** -- Through that SSH session, Workbench requests a forwarded connection from the bastion host to the Cassandra node on the CQL port. A local port is allocated automatically on your machine so that all CQL traffic is transparently routed through the tunnel. + +The result is a secure, encrypted channel between your workstation and the Cassandra cluster, without exposing any database ports to the public internet. + +## Configuring an SSH Tunnel + +Open the connection dialog by creating a new connection or editing an existing one, then select the **SSH Tunnel** tab. 
+ + + +### SSH Tunnel Fields + +| Field | Description | Default | +| --- | --- | --- | +| **SSH Host** | The hostname or IP address of the bastion / jump server to SSH into. | -- | +| **SSH Port** | The port on which the SSH service is listening on the bastion host. | `22` | +| **Username** | The SSH username for authenticating with the bastion host. | -- | +| **Password** | The SSH password for password-based authentication. Leave blank if using a private key. | -- | +| **Private Key File** | Path to a PEM-encoded private key file for key-based authentication. Use the file selector to browse to the key on disk. | -- | +| **Passphrase** | The passphrase that protects your private key file. Leave blank if the key is unencrypted. | -- | + +!!! tip + You can use password authentication, private key authentication, or both simultaneously. If both a password and a private key are provided, Workbench will use whichever the SSH server accepts. + +### Destination Address and Port + +AxonOps Workbench automatically derives the destination address and port from the host and port you configured on the **Basic** tab of the connection dialog. The SSH tunnel forwards traffic to that Cassandra node as seen from the bastion host. + +If the Cassandra node's address differs when viewed from the bastion host (for example, an internal DNS name or a private IP), the destination address defaults to `127.0.0.1` on the bastion host when no explicit override is present. In most deployments, the connection host you enter on the Basic tab is already the internal address, and no additional configuration is needed. + +### Saving Credentials + +At the bottom of the SSH Tunnel tab, the **Save SSH credentials locally** checkbox controls whether your SSH username, password, and passphrase are persisted with the connection. When enabled, credentials are encrypted using the installation's RSA key pair and stored via the operating system's native credential manager. 
When disabled, you will be prompted to enter your SSH credentials each time you connect. + + + +## Timeout Settings + +SSH tunnel timeouts are configured globally in the application settings, not per connection. Two timeout values control tunnel establishment: + +| Setting | Description | Default | +| --- | --- | --- | +| **Ready Timeout** | Maximum time (in milliseconds) to wait for the SSH connection to be established with the bastion host. | `60000` (60 seconds) | +| **Forward Timeout** | Maximum time (in milliseconds) to wait for the port-forwarding channel to be opened through the SSH connection. | `60000` (60 seconds) | + +These values are stored in the application configuration file under the `[sshtunnel]` section: + +```ini +[sshtunnel] +readyTimeout=60000 +forwardTimeout=60000 +``` + +!!! info + If you frequently connect over high-latency networks or through multiple network hops, consider increasing these timeout values to avoid premature connection failures. + +## Troubleshooting + +### Connection Timeout + +**Symptom:** The connection attempt hangs and eventually fails with a timeout error. + +**Possible causes and solutions:** + +- The bastion host is unreachable from your network. Verify that you can reach the SSH host and port from your machine (for example, using `ssh` from a terminal). +- A firewall is blocking outbound connections on the SSH port. Confirm that your network allows outbound traffic on port 22 (or your custom SSH port). +- The timeout values are too low for your network conditions. Increase the **Ready Timeout** and **Forward Timeout** values in the application settings. + +### Authentication Failed + +**Symptom:** The connection fails immediately with an "Authentication failed" or "All configured authentication methods failed" error. + +**Possible causes and solutions:** + +- The SSH username or password is incorrect. Double-check the credentials with your system administrator. 
+- The private key file does not match the public key installed on the bastion host. Verify that the correct key pair is in use. +- The passphrase for the private key is incorrect or missing. If your key is passphrase-protected, ensure the **Passphrase** field is filled in. +- The SSH server does not accept the authentication method you are using. Some servers disable password authentication entirely and require key-based authentication, or vice versa. + +### Port Forwarding Failed + +**Symptom:** The SSH connection succeeds, but the tunnel fails with "Timed out while waiting for forwardOut" or a "Connection refused" error. + +**Possible causes and solutions:** + +- The Cassandra node is not reachable from the bastion host. SSH into the bastion host manually and verify that you can connect to the Cassandra host and CQL port (for example, using `cqlsh` or `nc`). +- Cassandra is not running or is not listening on the expected port. Confirm that the Cassandra process is active and bound to the correct address and port. +- The **Forward Timeout** is too short. Increase the value in the application settings if the Cassandra node takes time to respond through the bastion host. + +!!! note + When a "Connection refused" error occurs during port forwarding, AxonOps Workbench appends a reminder to ensure that Cassandra is up and running. This message indicates that the SSH tunnel itself was established successfully, but the bastion host could not reach the Cassandra node. + +### Private Key Format Issues + +**Symptom:** Authentication fails even though the private key path and passphrase are correct. + +**Possible causes and solutions:** + +- The key file is not in PEM format. AxonOps Workbench expects a PEM-encoded private key (the file typically begins with `-----BEGIN RSA PRIVATE KEY-----` or `-----BEGIN OPENSSH PRIVATE KEY-----`). 
If your key is in a different format, convert it using `ssh-keygen`: + + ```bash + ssh-keygen -p -m PEM -f /path/to/your/key + ``` + +- The key uses an unsupported algorithm. Workbench relies on the `ssh2` library for tunnel creation. While most key types are supported (RSA, DSA, ECDSA), **ed25519 keys with a passphrase** may not be fully decrypted by the built-in key parser. If you encounter issues with an ed25519 key, try using the key without a passphrase or switch to an RSA or ECDSA key. + +!!! warning + Ensure that your private key file has appropriate file-system permissions. On Linux and macOS, the key file should be readable only by your user (`chmod 600`). Overly permissive key files may be rejected by the SSH client library. + +## Best Practices + +- **Use key-based authentication** whenever possible. It is more secure than password authentication and avoids the need to store or transmit passwords. +- **Keep timeout values reasonable.** The 60-second defaults work well for most environments. Only increase them if you experience intermittent timeouts over slow or congested networks. +- **Test the connection** after configuring SSH tunnel settings. The **Test Connection** button validates the full path -- SSH tunnel establishment, port forwarding, and CQL handshake -- in a single step. +- **Rotate SSH keys regularly** in accordance with your organization's security policies. Update the private key file path in the connection dialog after each rotation. diff --git a/docs/workbench/connections/ssl-tls.md b/docs/workbench/connections/ssl-tls.md new file mode 100644 index 000000000..98fa8436c --- /dev/null +++ b/docs/workbench/connections/ssl-tls.md @@ -0,0 +1,246 @@ +--- +title: "SSL/TLS Connections" +description: "Configure SSL/TLS encrypted connections to Cassandra clusters in AxonOps Workbench. CA certificates, client certificates, and validation options." 
meta:
+  - name: keywords
+    content: "SSL, TLS, encrypted connection, CA certificate, client certificate, Cassandra security, AxonOps Workbench"
+---
+
+# SSL/TLS Connections
+
+Encrypting client-to-node communication with SSL/TLS protects data in transit between AxonOps Workbench and your Cassandra cluster. This is essential for production deployments, environments subject to compliance requirements such as PCI DSS or HIPAA, and any cluster accessible over an untrusted network.
+
+When SSL/TLS is not enabled, AxonOps Workbench displays an open padlock icon next to the connection name in the work area and shows a warning in the interactive terminal:
+
+> SSL is not enabled, the connection is not encrypted and is being transmitted in the clear.
+
+Enabling SSL/TLS upgrades the icon to a closed padlock, confirming that traffic between the Workbench and the cluster is encrypted.
+
+---
+
+## SSL Tab Overview
+
+The SSL tab in the connection dialog provides five settings for configuring encrypted connections -- three certificate file paths and two toggles:
+
+
+
+| Field | cqlsh.rc Key | Description |
+|-------|-------------|-------------|
+| **CA Certificate File** | `certfile` | Path to the Certificate Authority (CA) certificate used to verify the server's identity |
+| **Client Key File** | `userkey` | Path to the client's private key file, required for mutual TLS |
+| **Client Certificate File** | `usercert` | Path to the client certificate file, required for mutual TLS |
+| **Enable connection with SSL** | `ssl` | Toggle that enables or disables SSL/TLS for the connection |
+| **Validate files** | `validate` | Toggle that enables or disables server certificate verification (enabled by default) |
+
+Each certificate path field opens a native file picker when clicked, allowing you to browse to the file on your local filesystem.
+
+!!! tip
+    Use the **Test Connection** button after configuring SSL to verify that your certificates are correct and the cluster accepts the connection before saving.
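The SSL tab expects PEM-encoded files, as described under Certificate Formats below. As a quick sanity check outside Workbench, you can look for the standard PEM delimiter lines; a minimal sketch, not part of Workbench itself:

```python
# Heuristic pre-check that a certificate or key file is PEM-encoded.
def looks_like_pem(path: str) -> bool:
    """True when the file contains a PEM BEGIN/END delimiter pair."""
    with open(path, encoding="ascii", errors="ignore") as f:
        text = f.read()
    return "-----BEGIN " in text and "-----END " in text
```

A DER or PKCS#12 file fails this check and should be converted to PEM first.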
+ +--- + +## Certificate Formats + +AxonOps Workbench expects certificates and keys in **PEM format**. PEM files are Base64-encoded and typically have the following extensions: + +- `.pem` -- generic PEM-encoded file +- `.crt` or `.cert` -- certificate file +- `.key` -- private key file + +A PEM certificate file begins and ends with markers like: + +``` +-----BEGIN CERTIFICATE----- +MIIDdzCCAl+gAwIBAgIEbL... +-----END CERTIFICATE----- +``` + +A PEM private key file begins and ends with: + +``` +-----BEGIN PRIVATE KEY----- +MIIEvQIBADANBgkqhkiG9w... +-----END PRIVATE KEY----- +``` + +!!! info "Converting from other formats" + If your certificates are in a different format (such as PKCS#12 or DER), convert them to PEM using OpenSSL: + + ```bash + # Convert DER to PEM + openssl x509 -inform DER -in certificate.der -out certificate.pem + + # Extract certificate and key from PKCS#12 (.p12 / .pfx) + openssl pkcs12 -in keystore.p12 -out ca-cert.pem -cacerts -nokeys + openssl pkcs12 -in keystore.p12 -out client-cert.pem -clcerts -nokeys + openssl pkcs12 -in keystore.p12 -out client-key.pem -nocerts -nodes + ``` + +--- + +## Common Configurations + +### Server-Side SSL Only + +This is the simplest SSL configuration. The client verifies the server's identity using the CA certificate, but the server does not require a client certificate. Use this when your cluster has `client_encryption_options.require_client_auth` set to `false` in `cassandra.yaml`. 
+ +**Required fields:** + +- **CA Certificate File** -- path to the CA certificate that signed the server's certificate +- **Enable connection with SSL** -- toggled on +- **Validate files** -- toggled on + +**Leave blank:** + +- Client Key File +- Client Certificate File + + + +This corresponds to the following entries in the underlying `cqlsh.rc` configuration: + +```ini +[connection] +ssl = true + +[ssl] +certfile = /path/to/ca-certificate.pem +validate = true +``` + +### Mutual TLS (mTLS) + +Mutual TLS provides two-way authentication: the client verifies the server, and the server verifies the client. Use this when your cluster has `client_encryption_options.require_client_auth` set to `true` in `cassandra.yaml`. + +**Required fields:** + +- **CA Certificate File** -- path to the CA certificate that signed the server's certificate +- **Client Key File** -- path to your client private key +- **Client Certificate File** -- path to your client certificate (signed by a CA trusted by the server) +- **Enable connection with SSL** -- toggled on +- **Validate files** -- toggled on + + + +This corresponds to the following entries in the underlying `cqlsh.rc` configuration: + +```ini +[connection] +ssl = true + +[ssl] +certfile = /path/to/ca-certificate.pem +userkey = /path/to/client-key.pem +usercert = /path/to/client-certificate.pem +validate = true +``` + +### Self-Signed Certificates + +When connecting to a development or test cluster that uses self-signed certificates, the default certificate validation will fail because the certificate is not signed by a recognized CA. You can disable validation by toggling **Validate files** off. + +!!! warning "Security implications" + Disabling certificate validation means the Workbench will not verify the server's identity. This makes the connection vulnerable to man-in-the-middle attacks. Only disable validation in trusted development or test environments -- never in production. 
+ +**Required fields:** + +- **Enable connection with SSL** -- toggled on +- **Validate files** -- toggled off + +You may optionally still provide a CA certificate file. Even with validation disabled, the SSL/TLS encryption itself remains active -- traffic is still encrypted, but the server's identity is not verified. + +This corresponds to the following entries in the underlying `cqlsh.rc` configuration: + +```ini +[connection] +ssl = true + +[ssl] +validate = false +``` + +--- + +## Per-Host CA Certificates + +If your cluster nodes use different CA certificates (for example, during a certificate rotation), you can configure per-host CA certificates in the `cqlsh.rc` file directly using the `[certfiles]` section: + +```ini +[certfiles] +192.168.1.3 = /path/to/keys/node1-ca.pem +192.168.1.4 = /path/to/keys/node2-ca.pem +``` + +When a per-host certificate is defined, it overrides the default `certfile` in the `[ssl]` section for that specific node. + +!!! note + Per-host CA certificates must be configured by editing the `cqlsh.rc` file in the Workbench editor. This option is not available through the SSL tab in the connection dialog. + +--- + +## Troubleshooting + +### Certificate File Not Found + +**Symptom:** Connection fails with a file-not-found error after saving or testing. + +**Possible causes and solutions:** + +- Verify the file path is correct and the file exists at the specified location. +- Ensure the file is readable by your user account. +- If you moved or renamed the certificate files after configuring the connection, update the paths in the SSL tab. + +### Certificate Format Errors + +**Symptom:** Connection fails with an error indicating the certificate cannot be parsed or is in an unsupported format. + +**Possible causes and solutions:** + +- Confirm the file is in PEM format (not DER, PKCS#12, or JKS). +- Open the file in a text editor and verify it begins with `-----BEGIN CERTIFICATE-----` or `-----BEGIN PRIVATE KEY-----`. 
+- If the file is in another format, convert it to PEM using the OpenSSL commands listed in the [Certificate Formats](#certificate-formats) section above. + +### Hostname Verification Failures + +**Symptom:** Connection fails with a hostname mismatch or certificate verification error even though the CA certificate is correct. + +**Possible causes and solutions:** + +- The Common Name (CN) or Subject Alternative Name (SAN) in the server certificate must match the hostname or IP address you are connecting to. +- If you are connecting through an SSH tunnel to `127.0.0.1`, the server certificate likely does not list `127.0.0.1` as a valid name. In this case, you may need to disable validation for the tunneled connection. +- Ask your cluster administrator to reissue the server certificate with the correct hostname or IP in the SAN field. + +### Expired Certificates + +**Symptom:** Connection fails with a certificate expiration error. + +**Possible causes and solutions:** + +- Check the certificate expiration date using OpenSSL: + + ```bash + openssl x509 -in /path/to/certificate.pem -noout -dates + ``` + +- If the certificate has expired, obtain a renewed certificate from your CA or cluster administrator and update the path in the SSL tab. +- As a temporary workaround in non-production environments, you can disable validation by toggling **Validate files** off. + +### SSL Handshake Failures + +**Symptom:** Connection fails during the SSL handshake with a protocol or cipher error. + +**Possible causes and solutions:** + +- Ensure the Cassandra cluster has `client_encryption_options.enabled` set to `true` in `cassandra.yaml`. +- Verify that the TLS protocol version supported by the cluster is compatible. Cassandra 4.1 and later auto-negotiate the TLS protocol version. +- If the cluster requires client authentication (`require_client_auth: true`), make sure you have provided both the Client Key File and Client Certificate File. 
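To isolate handshake failures outside Workbench, you can drive the TLS handshake directly with OpenSSL. A sketch, where the node address `10.0.1.5` and the certificate paths are hypothetical placeholders:

```bash
# Attempt a TLS handshake against the node's CQL port and print the chain.
# A clean handshake ends with "Verify return code: 0 (ok)".
openssl s_client -connect 10.0.1.5:9042 -CAfile /path/to/ca-certificate.pem </dev/null

# For mutual TLS, present the client certificate and key as well:
openssl s_client -connect 10.0.1.5:9042 -CAfile /path/to/ca-certificate.pem \
    -cert /path/to/client-certificate.pem -key /path/to/client-key.pem </dev/null
```

If this command succeeds but Workbench still fails, the problem lies in the connection configuration rather than in the certificates themselves.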
+ +### Connection Works Without SSL but Fails With SSL + +**Symptom:** You can connect with the **Enable connection with SSL** toggle off but the connection fails when you turn it on. + +**Possible causes and solutions:** + +- Confirm that `client_encryption_options.enabled` is set to `true` on the Cassandra cluster. If the server does not have SSL enabled, the SSL handshake will fail. +- Double-check that the CA certificate you provided matches the one used to sign the server certificate. +- Try disabling **Validate files** temporarily to isolate whether the issue is with certificate validation or with the SSL setup itself. diff --git a/docs/workbench/cql-console/index.md b/docs/workbench/cql-console/index.md new file mode 100644 index 000000000..8f47dcada --- /dev/null +++ b/docs/workbench/cql-console/index.md @@ -0,0 +1,99 @@ +--- +title: "CQL Console" +description: "AxonOps Workbench CQL Console — a Monaco Editor-powered query editor with syntax highlighting, auto-completion, multi-tab support, and destructive command safety for Apache Cassandra." +meta: + - name: keywords + content: "CQL Console, CQL editor, Monaco Editor, syntax highlighting, auto-completion, query editor, Cassandra query, AxonOps Workbench" +--- + +# CQL Console + +The CQL Console is the primary workspace in AxonOps Workbench — a Monaco Editor-powered query editor purpose-built for Cassandra Query Language. Built on the same editor technology as VS Code, the CQL Console provides a rich editing experience with syntax highlighting, context-aware auto-completion, and multi-tab support for working with your Cassandra clusters. 
+ + + +## Editor Features + +### Syntax Highlighting + +The CQL Console provides CQL-specific syntax highlighting with color coding for: + +- **Keywords** — CQL commands such as `SELECT`, `INSERT`, `CREATE TABLE`, and `ALTER` +- **Data types** — Type identifiers including `text`, `int`, `uuid`, `timestamp`, and collection types +- **Functions** — Built-in functions such as `now()`, `toTimestamp()`, and aggregate functions +- **String literals** — Quoted values and constants +- **Comments** — Single-line and multi-line comment blocks + +The editor also visually distinguishes between CQL statements and CQLSH shell commands, making it easy to identify the type of command you are writing. + +### Auto-Completion + +The CQL Console provides context-aware suggestions as you type, drawing from both the CQL language specification and your connected cluster's live schema: + +- **CQL keywords and commands** — `SELECT`, `INSERT`, `CREATE TABLE`, `ALTER KEYSPACE`, and other standard CQL statements +- **CQLSH commands** — `DESCRIBE`, `SOURCE`, `CONSISTENCY`, `TRACING`, and other shell-level commands +- **Keyspace names** — All keyspaces available on the connected cluster +- **Table names** — Tables scoped to the active keyspace +- **Column names** — Columns scoped to the table referenced in the current query context + +Auto-completion accelerates query writing and helps prevent typos in schema object names. + +### Multi-Tab Support + +The CQL Console supports multiple editor tabs per connection, allowing you to work on several queries at once: + +- Open multiple editor tabs within a single connection +- Each tab maintains its own query state and result set +- Switch between tabs without losing your work + +This is particularly useful when you need to cross-reference data across tables or iterate on related queries simultaneously. 
+ +### Destructive Command Safety + +AxonOps Workbench detects potentially destructive CQL commands before they are executed and prompts you for confirmation. The following commands are protected: + +- `DELETE` +- `DROP` +- `TRUNCATE` +- `INSERT` +- `UPDATE` +- `ALTER` +- `BATCH` +- `CREATE` +- `GRANT` +- `REVOKE` + +!!! warning + Destructive command detection is configurable. Review your Workbench settings to adjust which commands require confirmation and to enable or disable this safety feature. + +### Keyboard Shortcuts + +The CQL Console supports keyboard shortcuts for common operations, allowing you to work more efficiently without reaching for the mouse: + +| Shortcut | Action | +| --- | --- | +| Execute query | Run the current CQL statement | +| Execute all | Run all statements in the editor | +| New tab | Open a new editor tab | +| Close tab | Close the current editor tab | +| Next tab / Previous tab | Switch between open editor tabs | + +Keyboard shortcuts are configurable in **Settings**. For the full shortcut reference, see the [Query Execution](query-execution.md) guide. + +## Notifications + +The CQL Console includes an in-app notification center that keeps you informed about query activity and system events: + +- **Query status** — Notifications when queries complete successfully or encounter errors +- **Warning messages** — Alerts about potential issues such as large result sets or tombstone warnings +- **Connection events** — Status changes for the active database connection + +The notification indicator in the toolbar shows the count of unseen notifications. Click the indicator to open the notification panel and review recent messages. 
+ +## Detailed Guides + +For in-depth coverage of specific CQL Console capabilities, see the following pages: + +- **[Query Execution](query-execution.md)** — Running queries, the SOURCE command, pagination, and query history +- **[Query Tracing](query-tracing.md)** — Performance analysis with interactive trace visualization +- **[Results & Export](results-export.md)** — Exporting data to CSV and PDF, clipboard copy as JSON, and working with BLOB columns diff --git a/docs/workbench/cql-console/query-execution.md b/docs/workbench/cql-console/query-execution.md new file mode 100644 index 000000000..6aa2d0126 --- /dev/null +++ b/docs/workbench/cql-console/query-execution.md @@ -0,0 +1,258 @@ +--- +title: "Query Execution" +description: "Execute CQL queries in AxonOps Workbench. Run single or multiple statements, use the SOURCE command, paginate results, and query history." +meta: + - name: keywords + content: "CQL query, execute, SOURCE command, pagination, query history, destructive warning, AxonOps Workbench" +--- + +# Query Execution + +AxonOps Workbench provides a full-featured CQL execution environment that goes well beyond a basic query runner. You can execute single statements or batches, run CQL scripts from external files, page through large result sets, and recall previously executed queries from a searchable history -- all within the CQL Console. + +--- + +## Running Queries + +### Single Statement + +To execute a single CQL statement, type it into the editor and press the **Execute** button or use the keyboard shortcut: + +| Platform | Shortcut | +| --- | --- | +| **macOS** | Cmd + Enter | +| **Windows / Linux** | Ctrl + Enter | + +The result is displayed directly beneath the statement in the interactive terminal area. For `SELECT` queries, results are rendered in a tabular format. For data-modification and schema-change statements, Workbench confirms that the CQL statement was executed successfully. 
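For example, a quick sanity check against the connected node:

```sql
-- Returns the cluster name and Cassandra version of the coordinator node
SELECT cluster_name, release_version FROM system.local;
```

The result appears as a single-row table directly beneath the statement.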
+ + + +### Multiple Statements + +You can write several CQL statements in a single editor block, separated by semicolons. When you execute, Workbench sends all statements to the server and processes them sequentially. Each statement's output is displayed in order within the results area. + +```sql +CREATE KEYSPACE IF NOT EXISTS demo + WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}; + +USE demo; + +CREATE TABLE IF NOT EXISTS users ( + user_id uuid PRIMARY KEY, + name text, + email text +); + +INSERT INTO users (user_id, name, email) + VALUES (uuid(), 'Alice', 'alice@example.com'); + +SELECT * FROM users; +``` + +If an error occurs during execution, Workbench stops processing at the failed statement and reports the error, leaving subsequent statements unexecuted. + +!!! tip + Keep related statements together in a single block for scripting workflows such as schema migrations or seed data inserts. + +--- + +## SOURCE Command + +The `SOURCE` command lets you execute CQL statements stored in an external file, directly from the Workbench console. This is useful for running migration scripts, seed data files, or any reusable CQL that you maintain outside the editor. + +### Syntax + +```sql +SOURCE '/path/to/file.cql'; +``` + +The file path must be an absolute path enclosed in single quotes. The file should contain valid CQL statements separated by semicolons, just as you would write them in the editor. + +### Executing CQL Files + +You can also execute CQL files through the **Execute File** button in the session action bar. This opens a file picker dialog where you can select one or more `.cql` or `.sql` files to run. + + + +### Progress Tracking + +When executing a file, Workbench displays real-time progress as each statement in the file is processed. The results area shows which statements have completed and whether they succeeded or failed. + +### Stop on Error + +By default, file execution halts when a statement produces an error. 
This **stop-on-error** behavior prevents cascading failures -- for example, if a `CREATE TABLE` statement fails, subsequent `INSERT` statements that depend on that table are not attempted. + +You can toggle stop-on-error from the execution dialog when running files. + +!!! note + The `SOURCE` command is a CQLSH shell command, not a CQL statement. It is processed by the Workbench session layer rather than sent directly to the Cassandra server. + +--- + +## Destructive Command Safety + +AxonOps Workbench identifies potentially destructive CQL commands and displays a warning before they are executed. This safety mechanism helps prevent accidental data loss or unintended schema changes, particularly in production environments. + +!!! warning + Always double-check the target keyspace and table before confirming a destructive operation. Destructive commands may cause irreversible data loss. + +### Protected Commands + +The following CQL commands are classified as destructive and trigger a confirmation prompt: + +| Command | Risk | +| --- | --- | +| `DELETE` | Removes rows or columns from a table | +| `DROP` | Removes keyspaces, tables, indexes, or other schema objects | +| `TRUNCATE` | Removes all data from a table | +| `INSERT` | Adds or overwrites rows (upsert behavior in Cassandra) | +| `UPDATE` | Modifies existing row data | +| `ALTER` | Changes keyspace or table schema | +| `BATCH` | Groups multiple modification statements | +| `CREATE` | Creates new schema objects (protected because it can alter cluster state) | +| `GRANT` | Assigns permissions to roles | +| `REVOKE` | Removes permissions from roles | + +When a destructive command is detected, Workbench displays a confirmation dialog. You must explicitly confirm the action before the statement is sent to the server. + +The destructive command warning also appears inline in the CQL Snippets editor when a snippet contains any of the protected commands, alerting you before you save or execute the snippet. 
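For example, each of the following statements would trigger the confirmation dialog before being sent to the server (the keyspace and table names are illustrative):

```sql
-- Removes all rows from the table
TRUNCATE demo.users;

-- Removes a single row by primary key
DELETE FROM demo.users WHERE user_id = 123e4567-e89b-12d3-a456-426614174000;

-- Removes the table and all its data
DROP TABLE demo.users;
```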
+ +--- + +## Query History + +Workbench automatically records every CQL statement you execute, building a per-connection history that you can search and re-execute at any time. + +### Browsing History + +You can navigate through your query history directly from the editor using keyboard shortcuts: + +| Action | Platform | Shortcut | +| --- | --- | --- | +| **Previous statement** | macOS | Cmd + Up Arrow | +| | Windows / Linux | Ctrl + Up Arrow | +| **Next statement** | macOS | Cmd + Down Arrow | +| | Windows / Linux | Ctrl + Down Arrow | + +Each press of the shortcut replaces the current editor content with the next or previous statement from your history, allowing you to cycle through past queries rapidly. + +### History Panel + +Click the **History** button in the session action bar to open the full history panel. This panel displays a numbered list of previously executed statements for the current connection, with the most recent entries at the top. + +From the history panel you can: + +- Browse the complete list of past statements +- Click any entry to load it into the editor for re-execution +- Clear the entire history for the current connection + + + +!!! info + Workbench stores up to **30 statements** per connection. When the limit is reached, the oldest entry is removed to make room for new ones. Duplicate statements are automatically deduplicated. + +--- + +## Pagination + +Cassandra queries can return very large result sets. Workbench uses cursor-based pagination to retrieve results in manageable pages, preventing excessive memory usage and keeping the interface responsive. + +### How Pagination Works + +When you execute a `SELECT` query, Workbench fetches the first page of results based on the configured page size. If additional rows are available beyond the current page, a **Fetch next page** button appears below the results table. 
Each click retrieves the next page from the server using the Cassandra driver's cursor, and appends the new rows to the existing result set. + +Pagination continues until all matching rows have been retrieved or you choose to stop fetching. + + + +### Configuring Page Size + +The page size determines how many rows are returned per page. You can view and change the current page size from the **Page Size** indicator in the session action bar. + +To change the page size: + +1. Click the **Page Size** button in the session action bar. +2. Enter the desired number of rows per page in the input field. +3. Click **Change paging size** to apply. + +Alternatively, you can set the page size directly using the CQLSH `PAGING` command in the editor: + +```sql +PAGING 200; +``` + +To check the current paging status: + +```sql +PAGING; +``` + +To disable paging entirely: + +```sql +PAGING OFF; +``` + +!!! tip + A page size of **100** is the default. For tables with wide rows or large column values, consider using a smaller page size to keep response times fast. For narrow result sets, a larger page size reduces the number of fetches needed. + +### Last Page Navigation + +When all pages have been fetched, a **Last** button appears next to the pagination controls, allowing you to jump directly to the final page of results in the table view. + +--- + +## Consistency Level + +Workbench allows you to set the consistency level for your queries, controlling how many replicas must respond before a result is returned to the client. 
+ +### Setting the Consistency Level + +Click the **Consistency Level** indicator in the session action bar to select from the available levels: + +**Regular consistency levels:** + +- `ANY` +- `LOCAL_ONE` +- `ONE` +- `TWO` +- `THREE` +- `QUORUM` +- `LOCAL_QUORUM` +- `EACH_QUORUM` +- `ALL` + +**Serial consistency levels (for lightweight transactions):** + +- `SERIAL` +- `LOCAL_SERIAL` + +You can also set the consistency level using the CQLSH `CONSISTENCY` command: + +```sql +CONSISTENCY QUORUM; +``` + +And for serial consistency: + +```sql +SERIAL CONSISTENCY LOCAL_SERIAL; +``` + +The active consistency level is displayed in the session action bar so you always know which level is in effect. + +!!! note + The consistency level applies to all queries executed in the current session. For details on configuring the default consistency level at connection time, see [Connection Configuration](../connections/cassandra.md). + +--- + +## Keyboard Shortcuts Reference + +The following shortcuts are available in the CQL Console for query execution and history navigation: + +| Action | macOS | Windows / Linux | +| --- | --- | --- | +| Execute statement | Cmd + Enter | Ctrl + Enter | +| Previous history statement | Cmd + Up Arrow | Ctrl + Up Arrow | +| Next history statement | Cmd + Down Arrow | Ctrl + Down Arrow | +| Clear console | Cmd + L | Ctrl + L | diff --git a/docs/workbench/cql-console/query-tracing.md b/docs/workbench/cql-console/query-tracing.md new file mode 100644 index 000000000..da96aafaa --- /dev/null +++ b/docs/workbench/cql-console/query-tracing.md @@ -0,0 +1,177 @@ +--- +title: "Query Tracing" +description: "Trace CQL query execution in AxonOps Workbench. Visualize performance bottlenecks with interactive charts." 
+meta: + - name: keywords + content: "query tracing, performance analysis, execution plan, bottleneck, trace visualization, AxonOps Workbench" +--- + +# Query Tracing + +Query tracing reveals exactly how Cassandra executes a query — which node coordinated the request, which replicas were contacted, and how much time was spent at each step. AxonOps Workbench makes trace data accessible through an interactive visual interface, turning raw trace events into a clear picture of query performance. + + + +## Enabling Tracing + +To trace a query, enable tracing in your CQL session before executing the statement. In the CQL Console, run: + +```sql +TRACING ON; +``` + +Then execute your query as usual: + +```sql +SELECT * FROM my_keyspace.users WHERE user_id = 123; +``` + +When tracing is enabled, Cassandra records detailed execution events for every subsequent query. Each traced query produces a **session ID** that uniquely identifies its trace data. + +To disable tracing when you are finished: + +```sql +TRACING OFF; +``` + +!!! note + Tracing adds overhead to every query executed while it is active. Cassandra writes trace events to the `system_traces` keyspace on each participating node. Use tracing for targeted debugging and performance analysis, not for production traffic. + +### Viewing a Trace + +After running a traced query, the query output includes a tracing session ID. AxonOps Workbench provides a **tracing button** on each query result block that has an associated trace session. Click this button to fetch and display the trace data in the **Query Tracing** tab. + + + +You can also search for a specific session by pasting its session ID into the search field at the top of the Query Tracing tab. The search field accepts session IDs and partial query text to help you locate traces across your session history. 
+ +## Understanding Trace Output + +When you open a trace, AxonOps Workbench displays the session ID along with a timestamp badge, a set of source node filter buttons, interactive charts, and a detailed activities table. + +### Trace Events + +Each trace event represents a discrete step in the query execution pipeline. Cassandra records these events on both the coordinator node and the replica nodes involved in the query. The activities table displays the following fields for each event: + +| Field | Description | +|-------|-------------| +| **Activity** | A description of what Cassandra did at this step (e.g., "Parsing SELECT statement", "Reading data from memtable", "Sending message to replica") | +| **Source** | The IP address of the node that recorded this event | +| **Source Data Center** | The data center to which the source node belongs | +| **Source Elapsed** | The time elapsed for this event relative to the previous event, displayed in milliseconds | +| **Source Port** | The port on the source node | +| **Thread** | The Cassandra thread that processed this event | +| **Event ID** | A time-based UUID identifying this specific event | +| **Session ID** | The tracing session ID that groups all events for this query | + +### Reading the Timeline + +Trace events are ordered chronologically, showing the full sequence of operations from the moment the coordinator receives the query to when the final response is assembled. A typical read query follows this pattern: + +1. The **coordinator** parses the CQL statement and determines which replicas hold the requested data +2. The coordinator sends read requests to the appropriate **replica nodes** +3. Each replica reads data from its local storage (memtables and SSTables) +4. Replicas send responses back to the coordinator +5. The coordinator assembles the result and returns it to the client + +The **Source Elapsed** column shows the incremental time between consecutive events. 
This makes it straightforward to identify which specific step consumed the most time during execution. + +## Trace Visualization + +AxonOps Workbench renders trace data using two interactive Chart.js charts that make it easy to spot where time is being spent. + + + +### Timeline Chart + +The timeline chart is a horizontal bar chart where each bar represents a trace event. The horizontal axis shows elapsed time in milliseconds, and each bar spans from the start to the end of that event relative to the trace session. Bars are color-coded to visually distinguish individual activities. + +The timeline chart supports interactive exploration: + +- **Zoom** — Hold `Ctrl` and scroll the mouse wheel to zoom in on a specific time range +- **Pan** — Hold `Ctrl` and drag to pan across the timeline +- **Reset zoom** — Click the zoom reset button to return to the full view +- **Click an event** — Click any bar in the chart to filter the activities table to that specific event + +### Doughnut Chart + +The doughnut chart displays the proportional time distribution across trace events. Each slice represents a single activity, sized according to the time it consumed relative to the total trace duration. Hover over a slice to see the elapsed time in milliseconds. Clicking a slice filters the activities table to that event. + +### Filtering by Source Node + +Above the charts, a row of **source filter buttons** shows each node that participated in the query. Each button displays the node IP address and, when available, its data center name as a badge. Click a source button to filter both the charts and the activities table to show only events from that node. + +The **All** button resets the view to show events from all nodes. This filtering is useful for isolating coordinator activity from replica activity, or for comparing performance across replicas. + +## Common Trace Patterns + +Understanding what normal and abnormal traces look like helps you quickly diagnose performance issues. 
+ +### Healthy Trace + +A well-performing query trace shows: + +- Low and consistent elapsed times between events (sub-millisecond to low single-digit milliseconds) +- Coordinator and replica events interleaved in a tight sequence +- Total trace duration proportional to the consistency level (more replicas contacted means slightly longer traces) + +### High Coordinator Latency + +If the coordinator node shows large elapsed times before contacting replicas, look for: + +- Excessive garbage collection pauses on the coordinator +- High coordinator load causing thread pool contention +- Network delays between the client and the coordinator + +### Slow Replica Responses + +When replicas show disproportionately large elapsed times compared to the coordinator, investigate: + +- Disk I/O bottlenecks on the replica — particularly if the trace shows time spent reading from SSTables +- Compaction backlog causing excessive SSTable reads +- Uneven data distribution placing too much load on specific replicas + +### Tombstone Warnings + +Traces that include tombstone-scanning activity indicate that Cassandra is reading through deleted data markers. High tombstone counts degrade read performance and can appear as: + +- Events mentioning tombstone scans or tombstone thresholds +- Unexpectedly long elapsed times during the read phase + +!!! warning + If you see tombstone-related events in your traces, review your data model and deletion patterns. Consider adjusting `gc_grace_seconds` or restructuring your tables to minimize tombstone accumulation. + +### Large Partition Warnings + +Traces involving large partitions show elevated read times because Cassandra must scan more data within a single partition. Indicators include: + +- High elapsed times during memtable or SSTable reads for a single partition key +- A disproportionate number of read events relative to the amount of data returned + +!!! tip + Use query tracing to validate data model decisions. 
If traces consistently show large partition reads, consider refining your partition key strategy to distribute data more evenly. + +## Trace Data Source + +Behind the scenes, Cassandra stores trace data in two system tables: + +- **`system_traces.sessions`** — Contains one row per traced query with the session ID, coordinator address, duration, and query parameters +- **`system_traces.events`** — Contains the individual trace events for each session, including the activity description, source node, and elapsed time + +AxonOps Workbench retrieves trace data by querying these tables using the session ID that Cassandra returns when tracing is enabled. The `getQueryTrace` method fetches both the session metadata and all associated events, then presents them in the visual interface. + +### Session ID Correlation + +Each traced query produces a unique session ID (a UUID). This session ID is the key for correlating trace data: + +- The session ID appears in the query result output when tracing is active +- It is displayed as a badge in the Query Tracing tab for each traced query +- You can use it to search for a specific trace in the Query Tracing tab +- Multiple traces are preserved in the tab during your session, allowing you to compare traces side by side + +### Copying Trace Data + +Each trace result includes a **copy button** that copies the full trace data to your clipboard in JSON format. The toast notification confirms the copy and shows the data size. This is useful for sharing trace data with team members or including it in performance reports. + +!!! tip + Trace data in `system_traces` has a default TTL (time to live) of 24 hours. If you need to reference a trace later, copy the data or take note of the session ID while the trace is still available. 
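Because trace data lives in ordinary tables, you can also inspect it with plain CQL. The session ID below is illustrative -- substitute the UUID returned by your own traced query:

```sql
-- Session-level metadata for one traced query
SELECT coordinator, duration, request, started_at
FROM system_traces.sessions
WHERE session_id = 96ac9400-a3a5-11ee-b9b1-6d2c86545d91;

-- The individual events recorded for that session
SELECT activity, source, source_elapsed, thread
FROM system_traces.events
WHERE session_id = 96ac9400-a3a5-11ee-b9b1-6d2c86545d91;
```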
diff --git a/docs/workbench/cql-console/results-export.md b/docs/workbench/cql-console/results-export.md new file mode 100644 index 000000000..84c4159e9 --- /dev/null +++ b/docs/workbench/cql-console/results-export.md @@ -0,0 +1,160 @@ +--- +title: "Results & Export" +description: "View query results, export to CSV or PDF, copy as JSON, and work with BLOB data in AxonOps Workbench." +meta: + - name: keywords + content: "query results, export CSV, export PDF, copy JSON, BLOB, binary data, AxonOps Workbench" +--- + +# Results & Export + +AxonOps Workbench displays query results in an interactive, feature-rich table and provides straightforward options for exporting data and working with binary (BLOB) columns. This page covers how to navigate result sets, export data in different formats, and handle BLOB content. + +## Viewing Results + +### Result Blocks + +Every CQL statement you execute produces its own **result block** in the output area below the editor. When you run multiple statements at once, each statement maps to a separate sub-output block within the parent block, keeping results cleanly organized and independently scrollable. + + + +Each result block displays: + +- **Statement badge** — When a block contains more than one statement, a badge shows the total statement count +- **Statement preview** — For multi-statement executions, each sub-output includes the CQL statement that produced it +- **Action buttons** — Download, copy, and query tracing controls appear alongside each result + +### Result Grid + +Query results from `SELECT` statements are rendered in an interactive table powered by the Tabulator library. 
The result grid provides a spreadsheet-like experience with the following capabilities: + +- **Column sorting** — Click any column header to sort results in ascending or descending order +- **Column resizing** — Drag column borders to adjust widths; columns resize to fit content by default +- **Column reordering** — Drag and drop column headers to rearrange their order +- **Column filtering** — Each column header includes a search input for filtering rows by value +- **Pagination** — Results are paginated with a configurable page size (default: 50 rows per page). Use the pagination controls at the bottom of the table to navigate between pages +- **Row selection** — Click a row's checkbox to select it. Hold Shift and click to select a range of rows. Hold Ctrl and click to toggle individual row selection. Use the header checkbox to select or deselect all visible rows on the current page +- **Cell inspection** — Click on a cell to highlight it. Complex data types such as maps, sets, and user-defined types are displayed with a collapsible JSON viewer + + + +### Server-Side Paging + +For large result sets, the Workbench fetches data incrementally from the Cassandra cluster rather than loading everything at once. When additional rows are available beyond the current page: + +1. A **Next** button appears in the paginator area of the result table +2. Clicking **Next** fetches the next batch of rows from the server and appends them to the table +3. A spinner indicator displays while the next page is being retrieved +4. Once all rows have been fetched, the standard **Last** page button reappears and normal local pagination resumes + +This approach keeps the application responsive even when querying tables with large numbers of rows. + +### Copying Results + +Click the **Copy** button on any result block to copy its contents to the clipboard in JSON format. The copied output includes both the original CQL statement and the result data, formatted as a readable JSON object. 
+ +## Exporting Results + +Each result block that contains tabular data provides export options accessible through the **Download** button. Click the download icon to reveal the available export formats. + + + +### CSV Export + +Click the **CSV** icon to export the result table as a comma-separated values file. The exported file is named `statement_block.csv` by default and includes: + +- A header row with all column names +- All rows currently loaded in the result table +- Standard CSV formatting compatible with spreadsheet applications such as Excel, Google Sheets, and LibreOffice Calc + +CSV export is ideal for further data analysis, reporting, or importing results into other tools. + +### PDF Export + +Click the **PDF** icon to export the result table as a PDF document. The exported file is named `statement_block.pdf` and includes: + +- A title derived from the CQL statement that produced the results +- The full result table rendered in portrait orientation + +PDF export is suitable for sharing results as a formatted, printable document. + +### Copying as JSON + +In addition to file exports, the **Copy** button on the block-level action bar copies the entire block contents to the clipboard as a formatted JSON object. This includes the CQL statement text and all output data, making it convenient for pasting into documentation, issue reports, or other tools that accept JSON. + +## Working with BLOBs + +AxonOps Workbench provides dedicated support for working with Cassandra `blob` columns, including previewing, importing, and exporting binary data directly from the application. + +### BLOB Preview + +BLOB preview allows you to inspect the contents of a `blob` field by opening it in your system's default application for the detected file type. + +**Enabling BLOB Preview:** + +BLOB preview is enabled by default. To toggle it: + +1. Open **Settings** +2. Navigate to **Features** +3. 
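Blob literals in CQL are written as `0x`-prefixed hex strings -- the same encoding the Workbench uses when importing and previewing binary data. A minimal sketch, with illustrative keyspace and table names:

```sql
CREATE TABLE IF NOT EXISTS demo.attachments (
    id uuid PRIMARY KEY,
    filename text,
    content blob
);

-- 0x89504E47... are the magic bytes that identify a PNG file
INSERT INTO demo.attachments (id, filename, content)
VALUES (uuid(), 'tiny.png', 0x89504E470D0A1A0A);
```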
Set **Preview BLOB** to `true` (enabled) or `false` (disabled) + +When enabled, a **Preview Item** option appears in the context menu for `blob` fields in the data editor. Clicking it triggers the following process: + +1. The hex-encoded BLOB value is read from the field +2. The Workbench detects the file type by examining the file signature (magic bytes) of the binary data +3. The binary content is written to a temporary file with the appropriate file extension +4. The file opens in your operating system's default application for that file type + +!!! note + BLOB preview works with file types that have recognizable file signatures, such as images (PNG, JPEG, GIF), audio files, and other common binary formats. Files with `application/*` MIME types (such as generic binary data) are excluded from preview. + +!!! info + BLOB conversion and file detection run in a background process to keep the main application responsive. + +### BLOB Import + +You can insert binary data into `blob` columns by uploading a file directly from the data editor: + +1. When editing a row that contains a `blob` column, click the **Upload Item** button next to the BLOB field +2. A file dialog opens where you can select any file from your filesystem +3. The selected file is read and converted to a hex-encoded string (prefixed with `0x`) +4. The hex value is populated into the input field, ready to be saved with your INSERT or UPDATE statement + +**Size Limits:** + +BLOB uploads are subject to a configurable size limit to prevent accidental insertion of excessively large binary objects: + +- **Default limit:** 1 MB +- **Configuration path:** Settings > Limits > `insertBlobSize` +- **Accepted formats:** The limit value supports human-readable size notation (e.g., `1MB`, `2MB`, `500KB`) + +If the selected file exceeds the configured limit, the Workbench displays an error notification showing the file size and the current maximum, and the upload is cancelled. + +!!! 
tip + If you regularly work with larger binary objects, increase the `insertBlobSize` value in your configuration. Keep in mind that very large BLOBs can impact Cassandra performance and should be used judiciously. + +### BLOB Export + +To export a BLOB value from a result set to a file on disk: + +1. Click the **Preview Item** option in the context menu for the `blob` field +2. The Workbench converts the hex-encoded data back to binary and writes it to a temporary file +3. The file opens in the system's default application, from where you can use "Save As" to store it at a permanent location + +This workflow makes it straightforward to extract images, documents, or other binary content stored in Cassandra `blob` columns. + +### BLOB Conversion + +All BLOB operations -- reading files into hex strings for import, and converting hex strings back to binary for preview and export -- are performed in a **background renderer process**. This design ensures that: + +- The main application UI remains responsive during conversion of large binary files +- File I/O operations do not block query execution or other user interactions +- A loading spinner appears on the BLOB field while the conversion is in progress + +The conversion pipeline works as follows: + +- **Import (file to hex):** The file is read into a byte buffer using Node.js file system APIs, then converted to a hex string using byte-to-hex encoding, prefixed with `0x` +- **Export/Preview (hex to file):** The hex string is decoded back to a byte buffer, written to a temporary file in the system's temp directory with a sanitized filename and the detected file extension, then opened with the system's default handler + +!!! warning + Temporary files created during BLOB preview are stored in your operating system's temp directory. These files are not automatically cleaned up by the Workbench. Periodically review and clear your temp directory if you frequently preview large BLOB values. 
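CQL also ships with conversion functions that mirror this pipeline at the query level, letting you move between `blob` values and other types without leaving the console (the keyspace and table names are illustrative):

```sql
CREATE TABLE IF NOT EXISTS demo.notes (
    id uuid PRIMARY KEY,
    content blob
);

-- Encode text as a blob on write...
INSERT INTO demo.notes (id, content) VALUES (uuid(), textAsBlob('hello, blob'));

-- ...and decode it back to text on read
SELECT blobAsText(content) FROM demo.notes;
```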
diff --git a/docs/workbench/getting-started/first-steps.md b/docs/workbench/getting-started/first-steps.md new file mode 100644 index 000000000..cfb585006 --- /dev/null +++ b/docs/workbench/getting-started/first-steps.md @@ -0,0 +1,151 @@ +--- +title: "First Steps" +description: "Create your first workspace, connect to a Cassandra cluster, and run your first CQL query in AxonOps Workbench." +meta: + - name: keywords + content: "getting started, first steps, create workspace, connect Cassandra, first query, AxonOps Workbench" +--- + +# First Steps + +This guide walks you through your first session with AxonOps Workbench -- from creating a workspace and connecting to a Cassandra cluster, to browsing your schema and running your first CQL query. + +--- + +## 1. Create a Workspace + +Workspaces are organizational containers that group related connections, much like projects in an IDE. You might create separate workspaces for different environments (Development, Staging, Production) or different teams and applications. + +Each workspace is stored as a folder on disk containing a `connections.json` manifest and individual connection configuration folders. This makes workspaces portable and easy to share through version control. + +### Steps + +1. From the home screen, click the **+** button in the bottom-right corner of the workspaces panel, or click the prompt in the center of an empty workspaces list. +2. In the **Add Workspace** dialog, enter a **Workspace Name** -- for example, `Development`. +3. Choose a **Workspace Color** to visually distinguish this workspace in the sidebar. Click the color field to open the color picker. +4. Optionally, set a custom **Workspace Path** by clicking the folder icon next to the path field. To revert to the default storage location, click the reset icon. +5. Click **Add Workspace** to save. + + + +Your new workspace appears on the home screen. Click it to open it and begin adding connections. + +!!! 
info + By default, workspace data is stored in the application's internal data directory. Setting a custom path is useful when you want to keep workspace configurations alongside a project repository or share them with your team. + +--- + +## 2. Add a Connection + +With your workspace open, you can add connections to Apache Cassandra, DataStax Enterprise, or DataStax Astra DB clusters. + +### Apache Cassandra / DataStax Enterprise + +1. Click the **+** button in the bottom-right corner of the connections panel, or click the prompt in the center of an empty connections list. +2. In the **Add Connection** dialog, ensure **Apache Cassandra** is selected at the top. +3. In the **Basic** section, fill in the following fields: + + | Field | Description | Required | + | --- | --- | --- | + | **Connection Name** | A display name for this connection (e.g., `Local Dev Node`) | Yes | + | **Cassandra Data Center** | The datacenter name (e.g., `datacenter1`) | No | + | **Cassandra Hostname** | The IP address or hostname of a Cassandra node (e.g., `192.168.0.10` or `localhost`) | Yes | + | **Port** | The CQL native transport port (default: `9042`) | Yes | + +4. If your cluster requires authentication, click the **Authentication** section on the left side of the dialog, then enter your **Username** and **Password**. The *Save authentication credentials locally* option stores credentials securely in your operating system's keychain. +5. Click **Test Connection** to verify that the Workbench can reach your cluster with the provided settings. +6. Once the test succeeds, click **Add Connection** to save. + + + +### DataStax Astra DB + +1. Click the **+** button to open the **Add Connection** dialog. +2. Select **DataStax AstraDB** at the top of the dialog. +3. 
Fill in the following fields: + + | Field | Description | + | --- | --- | + | **Connection Name** | A display name for this Astra DB connection | + | **Username (Client ID)** | The Client ID from your Astra DB application token | + | **Password (Client Secret)** | The Client Secret from your Astra DB application token | + | **Secure Connection Bundle** | The Secure Connect Bundle (SCB) ZIP file downloaded from the Astra DB console | + +4. To provide the SCB file, click the file selector field and browse to the ZIP file on disk. You can also drag and drop the bundle file directly onto the field. +5. Click **Test Connection** to verify, then click **Add Connection** to save. + + + +!!! tip + The connection dialog also provides sections for **SSH Tunnel**, **SSL Configuration**, and **AxonOps Integration**. For detailed instructions on these advanced options, see the [Connections](../connections/index.md) guide and the [SSH Tunneling](../connections/ssh-tunneling.md) page. + +--- + +## 3. Browse Your Schema + +Once connected, AxonOps Workbench loads your cluster's metadata and displays it in an interactive tree view. + +### The Metadata Tree + +The metadata tree appears in the left panel of the work area after you open a connection. It provides a hierarchical view of your cluster's schema: + +- **Keyspaces** -- Top-level nodes representing each keyspace in the cluster + - **Tables** -- The tables within each keyspace + - **Columns** -- Individual column definitions with data types + - **Materialized Views** -- Any materialized views defined in the keyspace + - **User-Defined Types (UDTs)** -- Custom types created in the keyspace + - **Indexes** -- Secondary indexes defined on tables + +### Navigating the Tree + +- **Expand** a node by clicking the arrow icon next to it, or by double-clicking the node name. +- **Search** the tree using the search bar at the top of the metadata panel. 
Results are highlighted as you type, and you can navigate between matches using the up/down arrows. +- **Refresh** the metadata tree by clicking the refresh button at the bottom of the panel. This reloads the schema from the cluster, picking up any changes made outside the Workbench. + + + +!!! tip + Right-clicking on a keyspace, table, or other schema object opens a context menu with actions such as creating new objects within that keyspace. You can also use the `DESCRIBE` command in the CQL Console to retrieve the full DDL statement for any schema object. + +--- + +## 4. Run Your First Query + +With a connection open, you are ready to execute CQL statements against your cluster. + +### Open the CQL Editor + +The CQL editor is the main work area that appears when you open a connection. It provides syntax highlighting, auto-completion, and multi-tab support so you can work with several queries simultaneously. + +### Execute a Query + +1. Click in the editor area and type a simple CQL query, for example: + + ```sql + SELECT * FROM system_schema.keyspaces; + ``` + +2. Click the **Execute** button, or use the keyboard shortcut displayed next to the button. +3. The results appear in a table below the editor, showing column names as headers and rows of data. + + + +### Working with Results + +- **Pagination** -- For queries that return large result sets, the Workbench paginates the output automatically. The default page size is 100 rows, which you can configure in the connection settings. Use the navigation controls below the results table to move between pages. +- **Export** -- Query results can be exported to CSV, JSON, or other formats for further analysis. +- **Multi-tab** -- Open additional editor tabs to work on multiple queries without losing your place. Each tab maintains its own query text, execution history, and results. + +!!! info + The page size for query results is configured per connection. 
You can adjust it in the **Basic** section of the connection dialog under the **Page Size** field (default: `100`). + +--- + +## Next Steps + +Now that you have a workspace, a connection, and your first query results, explore the rest of what AxonOps Workbench has to offer: + +- **[CQL Console](../cql-console/index.md)** -- Discover advanced editor features including auto-completion, query history, and query tracing. +- **[Local Clusters](local-clusters.md)** -- Spin up local Cassandra clusters with Docker or Podman for development and testing. +- **[SSH Tunneling](../connections/ssh-tunneling.md)** -- Connect to clusters behind firewalls or in private subnets through an SSH bastion host. +- **[CQL Snippets](../snippets.md)** -- Save, organize, and reuse frequently used CQL statements across sessions and workspaces. diff --git a/docs/workbench/getting-started/installation.md b/docs/workbench/getting-started/installation.md new file mode 100644 index 000000000..7024571c3 --- /dev/null +++ b/docs/workbench/getting-started/installation.md @@ -0,0 +1,239 @@ +--- +title: "Installation" +description: "Download and install AxonOps Workbench on macOS, Windows, or Linux. System requirements and installation instructions." +meta: + - name: keywords + content: "install AxonOps Workbench, download, macOS, Windows, Linux, Homebrew, system requirements" +--- + +# Installation + +AxonOps Workbench is available for macOS, Windows, and Linux. This page covers system requirements, download options, and step-by-step installation instructions for each platform. + +--- + +## System Requirements + +| Requirement | Details | +|-------------|---------| +| **macOS** | macOS 12 (Monterey) or later | +| **Windows** | Windows 10 or later | +| **Linux** | Ubuntu 20.04+, Fedora 36+, RHEL 8+, or equivalent | +| **Architecture** | x64 (Intel/AMD) on all platforms; arm64 (Apple Silicon) on macOS and Linux | +| **Disk Space** | ~400 MB | +| **RAM** | 4 GB minimum, 8 GB recommended | + +!!! 
info "Node.js is bundled" + AxonOps Workbench ships with Node.js built in. You do not need to install Node.js separately. + +--- + +## Download + +You can download AxonOps Workbench from either of these locations: + +- [AxonOps Workbench download page](https://axonops.com/workbench/download/){:target="_blank"} -- recommended for stable releases +- [GitHub Releases](https://github.com/axonops/axonops-workbench-cassandra/releases){:target="_blank"} -- all releases including pre-release builds + +### Available Formats + +| Platform | Formats | +|----------|---------| +| **macOS** | DMG, ZIP | +| **Windows** | NSIS installer (.exe), MSI installer | +| **Linux** | tar.gz, DEB, RPM, AppImage, Snap, Flatpak | + +--- + +## macOS + +### Option 1: Direct Download (DMG) + +1. Download the `.dmg` file for your architecture from the [download page](https://axonops.com/workbench/download/){:target="_blank"}: + + - **Apple Silicon** (M1/M2/M3/M4): `AxonOps.Workbench--mac-arm64.dmg` + - **Intel**: `AxonOps.Workbench--mac-x64.dmg` + +2. Open the downloaded `.dmg` file. +3. Drag **AxonOps Workbench** into your **Applications** folder. +4. Eject the disk image. +5. Launch AxonOps Workbench from your Applications folder or Spotlight. + +!!! note "macOS Gatekeeper" + On first launch, macOS may display a security prompt because the application was downloaded from the internet. Click **Open** to proceed. AxonOps Workbench is signed and notarized by Apple. + +### Option 2: Homebrew + +If you use [Homebrew](https://brew.sh/){:target="_blank"}, install AxonOps Workbench with: + +```bash +brew tap axonops/homebrew-repository +brew install --cask axonopsworkbench +``` + +To install a beta release instead: + +```bash +brew install --cask axonopsworkbench-beta +``` + +!!! 
tip "Custom Applications directory" + To install into your home Applications folder instead of the system-wide one, set the `HOMEBREW_CASK_OPTS` environment variable before installing: + + ```bash + export HOMEBREW_CASK_OPTS="--appdir=~/Applications" + ``` + +--- + +## Windows + +### Option 1: NSIS Installer (Recommended) + +1. Download the `.exe` installer from the [download page](https://axonops.com/workbench/download/){:target="_blank"}: + + - `AxonOps.Workbench--win-x64.exe` + +2. Run the installer and follow the on-screen prompts. +3. Choose the installation directory (the default is recommended). +4. Once installation completes, launch AxonOps Workbench from the Start menu or desktop shortcut. + +!!! note "Windows SmartScreen" + Windows may display a SmartScreen warning on first run. Click **More info** and then **Run anyway** to proceed. + +### Option 2: MSI Installer + +The MSI installer is available for environments that require MSI-based deployment (such as Group Policy or SCCM): + +1. Download `AxonOps.Workbench--win-x64.msi` from the [download page](https://axonops.com/workbench/download/){:target="_blank"}. +2. Run the `.msi` file and follow the installation wizard. + +--- + +## Linux + +AxonOps Workbench supports multiple Linux packaging formats. Choose the one that best suits your distribution. + +### Debian / Ubuntu (DEB) + +Download the `.deb` package and install it with `apt`: + +```bash +sudo apt install ./AxonOps.Workbench--linux-amd64.deb +``` + +This automatically resolves dependencies including `libnss3` and `libsecret-1-0`. 
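If you want to confirm the installation registered correctly, you can query dpkg for the package. The package name here matches the one used by the uninstall command below:

```shell
# Check whether the axonops-workbench package is registered with dpkg.
if dpkg -s axonops-workbench >/dev/null 2>&1; then
  dpkg -s axonops-workbench | grep -E '^(Package|Status|Version)'
else
  echo "axonops-workbench is not installed"
fi
```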
+ +To uninstall: + +```bash +sudo apt remove axonops-workbench +``` + +### Red Hat / Fedora (RPM) + +Download the `.rpm` package and install it with `dnf`: + +```bash +sudo dnf install ./AxonOps.Workbench--linux-x86_64.rpm +``` + +On older systems using `yum`: + +```bash +sudo yum install ./AxonOps.Workbench--linux-x86_64.rpm +``` + +To uninstall: + +```bash +sudo dnf remove axonops-workbench +``` + +### Manual Install (tar.gz) + +The portable tar.gz archive works on any Linux distribution: + +1. Download the `.tar.gz` archive for your architecture: + + - **x64**: `AxonOps.Workbench--linux-x64.tar.gz` + - **arm64**: `AxonOps.Workbench--linux-arm64.tar.gz` + +2. Extract the archive: + + ```bash + tar -xzf AxonOps.Workbench--linux-x64.tar.gz + ``` + +3. Run the application: + + ```bash + ./axonops-workbench/axonops-workbench --no-sandbox + ``` + +!!! tip "Desktop integration" + For a more integrated experience, move the extracted folder to `/opt` and create a `.desktop` file in `~/.local/share/applications/`. + +### Snap Store + +Install AxonOps Workbench from the Snap Store: + +```bash +sudo snap install axonops-workbench +``` + +### Flatpak + +Install AxonOps Workbench via Flatpak: + +```bash +flatpak install axonops-workbench +``` + +### AppImage + +The AppImage format provides a portable, single-file executable: + +1. Download the `.AppImage` file from the [GitHub Releases](https://github.com/axonops/axonops-workbench-cassandra/releases){:target="_blank"} page. +2. Make it executable: + + ```bash + chmod +x AxonOps.Workbench--linux-x86_64.AppImage + ``` + +3. Run the application: + + ```bash + ./AxonOps.Workbench--linux-x86_64.AppImage + ``` + +--- + +## Verifying the Download + +Each release on [GitHub Releases](https://github.com/axonops/axonops-workbench-cassandra/releases){:target="_blank"} includes SHA256 checksums. Verify your download to ensure it has not been tampered with or corrupted during transfer. 
+ +**macOS / Linux:** + +```bash +sha256sum AxonOps.Workbench---. +``` + +Compare the output against the checksum published on the release page. + +**Windows (PowerShell):** + +```powershell +Get-FileHash AxonOps.Workbench--win-x64.exe -Algorithm SHA256 +``` + +!!! info "Software Bill of Materials" + Each release also includes SBOM files in CycloneDX and SPDX formats for supply chain transparency. These are available as release artifacts on GitHub. + +--- + +## First Launch + +After installation, launch AxonOps Workbench and you will be greeted with the welcome screen. From there you can create your first workspace and set up a connection to a Cassandra cluster. + +For a guided walkthrough of the initial setup, see the [First Steps](first-steps.md) guide. diff --git a/docs/workbench/getting-started/local-clusters.md b/docs/workbench/getting-started/local-clusters.md new file mode 100644 index 000000000..1f2ef1361 --- /dev/null +++ b/docs/workbench/getting-started/local-clusters.md @@ -0,0 +1,259 @@ +--- +title: "Local Clusters" +description: "Create local Cassandra clusters with Docker or Podman directly from AxonOps Workbench. One-click development environments." +meta: + - name: keywords + content: "local clusters, Docker, Podman, Cassandra development, sandbox, AxonOps Workbench" +--- + +# Local Clusters + +AxonOps Workbench lets you create fully functional Apache Cassandra clusters on your local machine with a single click. These local clusters run as Docker or Podman containers, giving you an isolated development and testing environment without the overhead of provisioning remote infrastructure. + +Each local cluster is a self-contained sandbox project powered by Docker Compose. You can spin up multi-node Cassandra clusters, optionally include AxonOps monitoring, and connect to them directly from the Workbench query editor -- all without leaving the application.
+ +--- + +## Prerequisites + +Before creating a local cluster, you need a container runtime installed on your machine. + +| Requirement | Details | +|-------------|---------| +| **Container Runtime** | [Docker Desktop](https://docs.docker.com/get-docker/){:target="_blank"} or [Podman](https://podman.io/getting-started/installation){:target="_blank"} | +| **Docker Compose** | Included with Docker Desktop. For Podman, install [podman-compose](https://github.com/containers/podman-compose){:target="_blank"} separately. | +| **Disk Space** | At least 2 GB free (more for multi-node clusters with monitoring) | +| **RAM** | 4 GB minimum available for containers; 8 GB or more recommended for multi-node clusters | + +!!! warning "Local development only" + Local clusters are intended for development and testing purposes. They are not suitable for production workloads. + +!!! note "Linux users" + When using Docker on Linux, your user account must belong to the `docker` group. You can add yourself with: + + ```bash + sudo usermod -aG docker $USER + ``` + + Log out and back in for the change to take effect. This requirement does not apply when using Podman. + +--- + +## Container Tool Setup + +AxonOps Workbench supports both Docker and Podman as container management tools. You select which tool to use when creating your first local cluster, and you can change it at any time in the application settings. + +### Selecting a Tool During Cluster Creation + +When you open the **Create Local Cluster** dialog, you are prompted to select either **Docker** or **Podman** as the container management tool. This selection is saved and used for all subsequent operations. + +### Changing the Tool in Settings + +To change your container management tool after initial setup: + +1. Open the application **Settings**. +2. Navigate to the **Features** section. +3. Under **Container Management Tool**, select **Docker** or **Podman**. 
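After switching tools, it is worth confirming from a terminal that the selected runtime actually responds. A quick sanity check using the standard Docker and Podman CLIs:

```shell
# Report which container tool is available on PATH, if any.
if command -v docker >/dev/null 2>&1; then
  docker compose version 2>/dev/null || docker --version
elif command -v podman >/dev/null 2>&1; then
  podman --version
else
  echo "no container tool found on PATH"
fi
```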
+ +### Custom Binary Paths + +If your container tool is installed in a non-standard location, you can specify the path manually: + +1. Open the application **Settings**. +2. Look for the **Container Management Tool Paths** section. +3. Enter the full path to the `docker` or `podman` binary. + +AxonOps Workbench searches common installation paths automatically on all platforms. Custom paths are only needed if the tool is not found through standard system locations. + +!!! tip "Podman on Ubuntu" + Ubuntu-based Linux distributions may experience compatibility issues with Podman. If you encounter problems, switching to Docker is recommended. + +--- + +## Creating a Local Cluster + +To create a new local cluster: + +1. From the sidebar, navigate to the **Local Clusters** workspace. +2. Click **Create Local Cluster** to open the creation dialog. + + + +3. Configure the cluster settings: + +### Cluster Name (Optional) + +Enter a descriptive name for your cluster. If left blank, a unique identifier is generated automatically. + +### Apache Cassandra Version + +Select the Cassandra version to run from the dropdown: + +| Version | Notes | +|---------|-------| +| **5.0** | Default. Latest major release. | +| **4.1** | Current LTS-style release. | +| **4.0** | Previous stable release. | + +### Number of Nodes + +Use the slider to set how many Cassandra nodes to include in the cluster. The range is **1 to 20 nodes**, with a default of **3**. + +Each node runs as a separate container. More nodes require more system resources (CPU, RAM, and disk space). + +!!! info "Resource considerations" + Each Cassandra node is configured with a maximum heap size of 256 MB. A 3-node cluster with AxonOps monitoring typically uses around 2-3 GB of RAM. Plan accordingly for larger clusters. + +### AxonOps Monitoring + +Check **Install AxonOps within the local cluster** to include full monitoring capabilities. This option is enabled by default. 
See [With AxonOps Monitoring](#with-axonops-monitoring) below for details on what gets deployed. + +### Start Immediately + +Check **Run the local cluster once created** to start the cluster automatically after creation. If unchecked, the cluster is created but remains stopped until you start it manually. + +4. Click **Create Project** to build the cluster. + +AxonOps Workbench generates a Docker Compose configuration, allocates random available ports for Cassandra and monitoring services, and saves the project. If you selected the immediate start option, the containers begin pulling images and starting up. + +--- + +## With AxonOps Monitoring + +When you enable the AxonOps monitoring option, the following containers are created alongside your Cassandra nodes: + +| Container | Image | Purpose | +|-----------|-------|---------| +| **OpenSearch** | `opensearchproject/opensearch:2.18.0` | Metrics and log storage backend | +| **AxonOps Server** | `registry.axonops.com/.../axon-server:latest` | Metrics collection and processing | +| **AxonOps Dashboard** | `registry.axonops.com/.../axon-dash:latest` | Web-based monitoring dashboard | +| **Cassandra Nodes** | `registry.axonops.com/.../cassandra:` | Cassandra with AxonOps agent built in | + +The Cassandra container images include the AxonOps agent pre-installed. Each node is configured to report to the AxonOps Server container automatically. + +Once all containers are healthy, the AxonOps Dashboard is accessible through a randomly assigned port on `localhost`. The Workbench displays this port in the cluster details and provides a direct link to open the dashboard in your browser. + +!!! note "Without AxonOps Monitoring" + If you uncheck the AxonOps monitoring option, only the Cassandra node containers are created. The OpenSearch, AxonOps Server, and AxonOps Dashboard containers are omitted from the Docker Compose configuration. 
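If you want to see the individual containers behind a running cluster, you can list them with Compose from the cluster's generated project directory. This is a sketch -- run it from wherever the generated `docker-compose.yml` lives:

```shell
# List the containers created for a local cluster project
# (run from the directory containing the generated docker-compose.yml).
if command -v docker >/dev/null 2>&1; then
  docker compose ps || echo "no compose project in this directory"
elif command -v podman >/dev/null 2>&1; then
  podman compose ps || echo "no compose project in this directory"
else
  echo "no container tool found on PATH"
fi
```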
+ +--- + +## Managing Running Clusters + +Local clusters appear in the **Local Clusters** workspace in the sidebar. Each cluster card shows the cluster name, Cassandra version, number of nodes, and current status. + +### Starting a Cluster + +Click the **Start** button on a stopped cluster. AxonOps Workbench runs `docker compose up -d` (or `podman compose up -d`) in the background. Progress is displayed in a live notification that streams the container runtime output. + +After the containers start, Workbench waits for Cassandra to become ready by testing the CQL native transport port. This readiness check retries every 5 seconds for up to 6 minutes. + +### Stopping a Cluster + +Click the **Stop** button on a running cluster. This runs `docker compose down`, which stops and removes the containers while preserving the data volumes. + +### Connecting to a Cluster + +Once a cluster is running, click **Connect** to open a CQL console connected to the first Cassandra node. The connection uses the randomly assigned native transport port on `localhost`. + +### Viewing the Docker Compose File + +Each local cluster has a generated `docker-compose.yml` file stored in the application's data directory under `localclusters//`. You can inspect this file to understand the full container configuration, including port mappings, environment variables, and volume mounts. + +### Deleting a Cluster + +To remove a local cluster, stop it first if it is running, then delete it from the cluster list. Deleting a cluster removes the Docker Compose file and associated project data from disk. + +--- + +## Docker Compose Migration + +AxonOps Workbench automatically migrates legacy Docker Compose files to a newer format when starting a cluster. This migration replaces older `sed`-based command configurations with environment variables for the AxonOps Dashboard service. + +The migration process: + +- Runs automatically when you start a cluster -- no manual action is required. 
+- Creates a timestamped backup of the original file (e.g., `docker-compose.yml.bak.2025012314`) before making changes. +- Logs the migration status for reference. + +If the migration is not needed (the file already uses the current format), no changes are made. + +!!! info "Backward compatibility" + Clusters created with older versions of AxonOps Workbench are migrated transparently. The backup file is preserved so you can review what changed if needed. + +--- + +## Sandbox Limits + +By default, AxonOps Workbench allows only **1** local cluster to run simultaneously. This limit helps prevent excessive resource consumption on your development machine. + +To change this limit: + +1. Open the application **Settings**. +2. Navigate to the **Limits** section. +3. Adjust the **Sandbox** value to the desired maximum number of concurrent running clusters. + +If you attempt to start a cluster when the limit is reached, Workbench displays a notification explaining the restriction and directing you to the settings to adjust it. + +--- + +## Troubleshooting + +### Docker or Podman Not Found + +**Symptoms:** Error message stating that `docker compose` or `podman compose` is not installed or not accessible. + +**Solutions:** + +- Verify that Docker Desktop or Podman is installed and running. +- For Docker, confirm that `docker compose version` returns a valid result in your terminal. +- For Podman, confirm that `podman compose --version` returns a valid result. +- If the tool is installed in a non-standard location, set the custom path in **Settings > Container Management Tool Paths**. +- On macOS, ensure the Docker or Podman CLI tools are linked to a location on your `PATH` (e.g., `/usr/local/bin`). + +### Linux User Not in Docker Group + +**Symptoms:** Permission denied errors when starting a local cluster on Linux with Docker. + +**Solution:** + +```bash +sudo usermod -aG docker $USER +``` + +Log out and log back in, then restart AxonOps Workbench. 
This is not required when using Podman. + +### Port Conflicts + +**Symptoms:** Container fails to start with a "port is already in use" error. + +**Solutions:** + +- AxonOps Workbench assigns random available ports when creating a cluster. If a port conflict occurs, stop the conflicting service or delete and recreate the local cluster to generate new port assignments. +- Check for other running containers or services that may be occupying the required ports. + +### Insufficient Disk Space + +**Symptoms:** Container images fail to pull, or containers crash shortly after starting. + +**Solutions:** + +- Ensure you have at least 2 GB of free disk space for a basic cluster, and more for multi-node clusters with monitoring. +- Run `docker system prune` to reclaim space from unused images, containers, and volumes. +- For Podman, use `podman system prune` to achieve the same result. + +### Containers Start But Cassandra Is Not Ready + +**Symptoms:** The progress notification shows "Cassandra is not ready yet, recheck again in 5 seconds" and eventually times out after 6 minutes. + +**Solutions:** + +- Verify that your system has sufficient RAM available. Each Cassandra node requires at least 256 MB of heap space plus overhead. +- Check the Docker/Podman logs for the Cassandra containers to identify startup errors. +- Reduce the number of nodes if your system is resource-constrained. + +### Podman Compatibility on Ubuntu + +**Symptoms:** Local clusters fail to start or behave unexpectedly when using Podman on Ubuntu-based distributions. + +**Solution:** Switch to Docker as the container management tool. Ubuntu-based distributions have known compatibility issues with Podman. You can change the tool in **Settings > Features > Container Management Tool**. 
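For the port conflicts described above, a quick way to check whether a given port is already taken from a terminal -- `9042`, the default CQL port, is used as an example here; local clusters use randomly assigned ports shown in the cluster details:

```shell
# Show any listener already bound to the port (example: 9042).
PORT=9042
if command -v ss >/dev/null 2>&1; then
  ss -ltn "( sport = :$PORT )"
elif command -v lsof >/dev/null 2>&1; then
  lsof -nP -iTCP:"$PORT" -sTCP:LISTEN || echo "port $PORT is free"
else
  echo "install ss or lsof to inspect listening ports"
fi
```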
diff --git a/docs/workbench/index.md b/docs/workbench/index.md new file mode 100644 index 000000000..c2313fcae --- /dev/null +++ b/docs/workbench/index.md @@ -0,0 +1,144 @@ +--- +title: "AxonOps Workbench" +description: "AxonOps Workbench is a free, open-source desktop application for Apache Cassandra. A Cassandra-native query editor, schema browser, and cluster manager for developers and DBAs on macOS, Windows, and Linux." +meta: + - name: keywords + content: "AxonOps Workbench, Cassandra GUI, Cassandra desktop client, CQL editor, Cassandra schema browser, open source Cassandra tool, Cassandra IDE" +--- + +# AxonOps Workbench + +AxonOps Workbench is a free, open-source desktop application purpose-built for Apache Cassandra. It provides developers and database administrators with a Cassandra-native query editor, schema browser, and connection manager -- all in a single, secure, cross-platform application. + +Unlike generic database tools, AxonOps Workbench understands Cassandra's data model, CQL syntax, and cluster topology. Connections stay local to your machine, queries execute directly against your clusters, and there is no cloud dependency or telemetry. + +--- + +## Key Features + +
+ +- :material-console-line:{ .lg .middle } **CQL Console** + + --- + + Full-featured query editor with CQL syntax highlighting, auto-completion, multi-tab support, and result export. + + [:octicons-arrow-right-24: Open CQL Console docs](cql-console/index.md) + +- :material-connection:{ .lg .middle } **Connection Management** + + --- + + Connect to local, remote, and cloud Cassandra clusters with support for SSL/TLS, authentication, and SSH tunneling. + + [:octicons-arrow-right-24: Manage connections](connections/index.md) + +- :material-database-search:{ .lg .middle } **Schema Browser** + + --- + + Explore keyspaces, tables, columns, indexes, materialized views, and user-defined types in a navigable tree view. + + [:octicons-arrow-right-24: Browse schema](schema.md) + +- :material-docker:{ .lg .middle } **Local Clusters** + + --- + + Spin up local Cassandra clusters with Docker directly from the Workbench for development and testing. + + [:octicons-arrow-right-24: Run local clusters](getting-started/local-clusters.md) + +- :material-folder-multiple:{ .lg .middle } **Workspaces** + + --- + + Organize connections, queries, and snippets into workspaces for different projects or environments. + + [:octicons-arrow-right-24: Set up workspaces](workspaces.md) + +- :material-chart-line:{ .lg .middle } **Query Tracing** + + --- + + Trace CQL query execution to understand performance characteristics and identify bottlenecks. + + [:octicons-arrow-right-24: Trace queries](cql-console/query-tracing.md) + +- :material-content-save:{ .lg .middle } **CQL Snippets** + + --- + + Save, organize, and reuse frequently used CQL statements across sessions and workspaces. + + [:octicons-arrow-right-24: Manage snippets](snippets.md) + +- :material-powershell:{ .lg .middle } **CLI** + + --- + + Launch and control AxonOps Workbench from the command line for scripting and automation workflows. 
+ + [:octicons-arrow-right-24: CLI reference](cli.md) + +- :material-cog:{ .lg .middle } **Settings** + + --- + + Configure editor preferences, keyboard shortcuts, themes, and application behavior. + + [:octicons-arrow-right-24: Configure settings](settings.md) + +- :material-link-variant:{ .lg .middle } **AxonOps Integration** + + --- + + Connect to AxonOps Cloud or self-hosted AxonOps for cluster monitoring, alerting, and operational insights. + + [:octicons-arrow-right-24: Integrate with AxonOps](axonops-integration.md) + +
+ +--- + +## Supported Databases + +AxonOps Workbench works with the following Apache Cassandra-compatible databases: + +- **Apache Cassandra** -- versions 4.0, 4.1, and 5.0 +- **DataStax Enterprise (DSE)** -- DSE clusters with Cassandra workloads +- **DataStax Astra DB** -- cloud-native Cassandra-as-a-Service + +--- + +## Supported Platforms + +AxonOps Workbench runs natively on all major desktop operating systems: + +- **macOS** -- Intel (x64) and Apple Silicon (arm64) +- **Windows** -- x64 +- **Linux** -- x64 and arm64 (.deb and .AppImage packages) + +--- + +## Open Source + +AxonOps Workbench is released under the [Apache License 2.0](license.md), giving you the freedom to use, modify, and distribute it without restriction. + +Source code, issue tracking, and contribution guidelines are available on [GitHub](https://github.com/axonops/axonops-workbench-cassandra){:target="_blank"}. + +--- + +## Quick Start + +Get up and running with AxonOps Workbench in three steps: + +1. **Install** -- Download and install the application for your platform. + [:octicons-arrow-right-24: Installation guide](getting-started/installation.md) + +2. **Connect** -- Add your first Cassandra cluster connection. + [:octicons-arrow-right-24: First steps](getting-started/first-steps.md) + +3. **Query** -- Open the CQL Console and run your first query. + [:octicons-arrow-right-24: Run your first query](cql-console/index.md) diff --git a/docs/workbench/license.md b/docs/workbench/license.md new file mode 100644 index 000000000..8c35794c7 --- /dev/null +++ b/docs/workbench/license.md @@ -0,0 +1,23 @@ +--- +title: "AxonOps Workbench License" +description: "AxonOps Workbench licensing. Apache License 2.0 open-source license." 
+meta: + - name: keywords + content: "Workbench license, AxonOps licensing, Apache License 2.0, open source" +--- + +# License + +AxonOps Workbench is released under the [Apache License 2.0](https://github.com/axonops/axonops-workbench-cassandra/blob/main/LICENSE){:target="_blank"}. + +## Source Code + +Source code is available at [github.com/axonops/axonops-workbench-cassandra](https://github.com/axonops/axonops-workbench-cassandra){:target="_blank"}. + +## Third-Party Licenses + +AxonOps Workbench includes third-party software. SBOM (Software Bill of Materials) files in CycloneDX and SPDX formats are included with each release. + +## Contributing + +Contributions are welcome under the [Contributor License Agreement](https://github.com/axonops/axonops-workbench-cassandra/blob/main/CLA.md){:target="_blank"}. See the [Contributing Guide](https://github.com/axonops/axonops-workbench-cassandra/blob/main/CONTRIBUTING.md){:target="_blank"} for details. diff --git a/docs/workbench/schema.md b/docs/workbench/schema.md new file mode 100644 index 000000000..1c1094ed8 --- /dev/null +++ b/docs/workbench/schema.md @@ -0,0 +1,313 @@ +--- +title: "Schema Management" +description: "Browse, create, alter, and compare database schemas in AxonOps Workbench. Visual metadata tree with DDL generation." +meta: + - name: keywords + content: "schema browser, DDL, metadata, keyspace, table, UDT, schema diff, AxonOps Workbench" +--- + +# Schema Management + +AxonOps Workbench provides a comprehensive set of tools for exploring, creating, modifying, and comparing Cassandra schemas. The metadata browser gives you a navigable tree view of your entire cluster schema, while visual dialogs let you build and alter schema objects without writing CQL by hand. A built-in schema diff editor rounds out the toolset by letting you compare schema versions side by side. 
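Outside the Workbench, the effect of the schema diff editor can be approximated by diffing two DDL dumps -- for example, captured with `DESCRIBE` -- from a shell. The files below are illustrative stand-ins for two schema versions:

```shell
# Two hypothetical schema dumps representing "before" and "after".
cat > /tmp/schema_before.cql <<'EOF'
CREATE TABLE app.users (id uuid PRIMARY KEY, name text);
EOF
cat > /tmp/schema_after.cql <<'EOF'
CREATE TABLE app.users (id uuid PRIMARY KEY, name text, email text);
EOF
# A plain unified diff highlights the added column.
diff -u /tmp/schema_before.cql /tmp/schema_after.cql || true
```

The built-in diff editor presents this same kind of comparison side by side, without leaving the application.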
+ + + +--- + +## Metadata Browser + +The metadata browser presents your cluster's schema as an interactive tree view in the left panel of each connection's work area. It organizes all schema objects into a hierarchical structure that mirrors the logical layout of your Cassandra data model. + +### Tree View Hierarchy + +Each connected cluster displays its schema in the following hierarchy: + +- **Keyspaces** -- the top-level containers for all schema objects + - **Replication Strategy** -- the keyspace's replication configuration (strategy class, replication factor, datacenter mapping) + - **Tables** -- all standard tables within the keyspace, with a count indicator + - **Counter Tables** -- tables containing `counter` columns are grouped separately for easy identification + - For each table: + - **Primary Key** -- the full primary key with partition and clustering columns identified + - **Partition Key** -- the partition key columns + - **Clustering Keys** -- the clustering key columns + - **Static Columns** -- columns declared as `STATIC` + - **Columns** -- all columns with their CQL data types, including partition key, clustering key, and regular columns + - **Compaction** -- the compaction strategy and related settings such as `bloom_filter_fp_chance` + - **Options** -- table-level options including compression, caching, TTL, and gc_grace_seconds + - **Triggers** -- any triggers attached to the table + - **Views** -- materialized views based on the table, each with its own partition key, clustering keys, and column breakdown + - **Indexes** -- secondary indexes defined in the keyspace, showing index kind (e.g., composites, keys, custom) and the associated table + - **User Defined Types** -- UDTs with their field names and types + - **User Defined Functions** -- UDFs with attributes for determinism, monotonicity, null-input handling, language, return type, and arguments + - **User Defined Aggregates** -- UDAs with their state function, final function, state type, 
and arguments + + + +### System Keyspaces + +AxonOps Workbench automatically identifies and groups Cassandra's internal system keyspaces into a dedicated **System Keyspaces** node at the top of the tree. The recognized system keyspaces are: + +- `system` +- `system_auth` +- `system_distributed` +- `system_schema` +- `system_traces` + +System keyspaces are visually separated from your application keyspaces, keeping the tree focused on the schema you work with day to day. Certain operations -- such as dropping a keyspace -- are disabled for system keyspaces to prevent accidental damage. + +### Right-Click Context Menu + +Right-clicking on a node in the metadata tree opens a context menu with actions appropriate to that node type. The available actions are organized into DDL (schema) and DML (data) categories. + +**Keyspaces node (root):** + +| Action | Description | +|--------|-------------| +| Create Keyspace | Open the keyspace creation dialog | + +**Keyspace node:** + +| Action | Description | +|--------|-------------| +| Create UDT | Open the UDT creation dialog for this keyspace | +| Create Table | Open the table creation dialog for this keyspace | +| Create Counter Table | Open the counter table creation dialog | +| Alter Keyspace | Open the keyspace alteration dialog | +| Drop Keyspace | Drop the keyspace (with confirmation) | + +**Table node:** + +| Action | Description | +|--------|-------------| +| Alter Table | Open the table alteration dialog | +| Drop Table | Drop the table (with confirmation) | +| Truncate Table | Remove all data from the table (with confirmation) | +| Insert Row | Open the row insertion dialog | +| Insert Row as JSON | Open the JSON row insertion dialog | +| Select Row | Generate a SELECT query for this table | +| Select Row as JSON | Generate a SELECT JSON query for this table | +| Delete Row/Column | Open the row or column deletion dialog | + +**Counter table node:** + +| Action | Description | +|--------|-------------| +| 
Increment/Decrement Counter(s) | Open the counter update dialog | + +**UDT node:** + +| Action | Description | +|--------|-------------| +| Alter UDT | Open the UDT alteration dialog | +| Drop UDT | Drop the UDT (with confirmation) | + +!!! tip + Right-click context menus are also available on container nodes such as the **Tables** and **User Defined Types** groups, giving you quick access to create new objects within a keyspace without first selecting the keyspace itself. + +### Refreshing Metadata + +After making schema changes -- whether through the CQL Console or the visual dialogs -- you can refresh the metadata tree to reflect the latest state of your cluster. This re-fetches the full cluster metadata and rebuilds the tree view. + +--- + +## DDL Generation + +AxonOps Workbench can generate the `CREATE` statement (DDL) for any schema object in your cluster. This is the same output you would get from running `DESCRIBE` or `DESC` commands in CQLSH, but available directly from the UI. + +### Generating DDL + +Right-click on any keyspace, table, or the cluster root node in the metadata tree and select the CQL description option. The Workbench retrieves the DDL through its internal `getDDL` method, which supports scoping to: + +- **Cluster** -- generates `CREATE KEYSPACE` and `CREATE TABLE` statements for the entire cluster +- **Keyspace** -- generates all DDL within a single keyspace +- **Table** -- generates the `CREATE TABLE` statement for a specific table + +The generated DDL is displayed in a syntax-highlighted editor panel where you can review, copy, or export it. 
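
For instance, the table-scoped output is a complete `CREATE TABLE` statement, ready to copy or version-control. The keyspace, table, and column names below are illustrative -- the exact options emitted depend on your table's actual configuration:

```sql
CREATE TABLE my_keyspace.users (
    user_id uuid,
    created_at timestamp,
    email text,
    PRIMARY KEY (user_id, created_at)
) WITH CLUSTERING ORDER BY (created_at DESC)
    AND gc_grace_seconds = 864000
    AND compaction = {'class': 'SizeTieredCompactionStrategy'};
```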
+ +### DESCRIBE Commands + +You can also generate DDL by running `DESCRIBE` (or `DESC`) commands directly in the CQL Console: + +```sql +-- Describe the entire cluster +DESCRIBE CLUSTER; + +-- Describe a specific keyspace +DESCRIBE KEYSPACE my_keyspace; + +-- Describe a specific table +DESC TABLE my_keyspace.my_table; +``` + +The output is syntax-highlighted in the results panel, making it easy to read and copy. + +!!! info + The `DESCRIBE` and `DESC` keywords are interchangeable. Both are recognized by the CQL Console's auto-completion and syntax highlighting. + +--- + +## Schema Diff + +The schema diff tab provides a Monaco-powered side-by-side diff editor for comparing schema definitions. It is accessible from the **Schema Diff** tab in the connection work area. + + + +### How It Works + +The schema diff editor uses the same Monaco diff view found in VS Code, highlighting additions, deletions, and modifications between two schema texts. Paste or load the DDL for two versions of a schema object, and the diff editor renders an inline comparison with: + +- **Green highlights** for added lines +- **Red highlights** for removed lines +- **Inline character-level diffs** for modified lines + +### Use Cases + +Schema diff is valuable in several scenarios: + +- **Comparing environments** -- Paste the DDL from a staging keyspace on the left and the production keyspace on the right to identify discrepancies before a deployment. +- **Tracking schema changes** -- Compare the current DDL of a table against a previously saved version to review what has changed over time. +- **Reviewing migrations** -- Validate that a migration script produces the expected schema by comparing the before and after states. +- **Auditing drift** -- Detect unintended schema differences across clusters that should be identical. + +!!! tip + Use the DDL generation feature to quickly obtain the `CREATE` statements you want to compare, then paste them into the schema diff editor. 
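
As an illustration, diffing the following two hypothetical versions of a table definition (staging on the left, production on the right) would highlight the added `created_at` column and the changed compaction strategy:

```sql
-- Version A (staging)
CREATE TABLE app.events (
    id uuid PRIMARY KEY,
    payload text
) WITH compaction = {'class': 'SizeTieredCompactionStrategy'};

-- Version B (production)
CREATE TABLE app.events (
    id uuid PRIMARY KEY,
    payload text,
    created_at timestamp
) WITH compaction = {'class': 'LeveledCompactionStrategy'};
```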
+ +--- + +## Creating Schema + +AxonOps Workbench provides visual creation dialogs for keyspaces, tables, counter tables, and user-defined types. Each dialog generates the corresponding CQL statement and places it in the CQL Console editor, where you can review it before executing. + +### Keyspaces + +To create a new keyspace, right-click on the **Keyspaces** root node in the metadata tree and select **Create Keyspace**. + +The keyspace creation dialog includes: + +- **Keyspace name** -- must be unique within the cluster +- **Replication strategy** -- choose between `SimpleStrategy` and `NetworkTopologyStrategy` +- **Replication factor** -- for `SimpleStrategy`, a single replication factor; for `NetworkTopologyStrategy`, a replication factor per datacenter + +The dialog is pre-populated with the datacenters detected from your connected cluster, making `NetworkTopologyStrategy` configuration straightforward. + +!!! warning + `SimpleStrategy` is intended for development purposes only. For production deployments and multi-datacenter clusters, always use `NetworkTopologyStrategy`. Avoid switching from `SimpleStrategy` to `NetworkTopologyStrategy` in multi-DC clusters without careful planning. + +### Tables + +To create a table, right-click on a keyspace or its **Tables** container node and select **Create Table**. + +The table creation dialog provides a visual interface for defining: + +- **Table name** -- must be unique within the keyspace +- **Columns** -- add columns with names, CQL data types, and optional UDT types from the keyspace +- **Primary key** -- designate partition key and clustering key columns +- **Clustering order** -- set ascending or descending sort order for each clustering column + +#### Version-Specific Metadata Defaults + +When creating a table, AxonOps Workbench applies sensible default metadata values that correspond to your connected Cassandra version. 
The following defaults are applied for Cassandra 4.0, 4.1, and 5.0: + +| Property | Default Value | +|----------|---------------| +| `additional_write_policy` | `99p` | +| `bloom_filter_fp_chance` | `0.01` | +| `caching` | `{"keys": "ALL", "rows_per_partition": "NONE"}` | +| `compaction` | `SizeTieredCompactionStrategy` (min threshold: 4, max: 32) | +| `compression` | `LZ4Compressor` (chunk length: 16 KB) | +| `crc_check_chance` | `1.0` | +| `default_time_to_live` | `0` | +| `gc_grace_seconds` | `864000` (10 days) | +| `max_index_interval` | `2048` | +| `memtable_flush_period_in_ms` | `0` | +| `min_index_interval` | `128` | +| `read_repair` | `BLOCKING` | +| `speculative_retry` | `99p` | + +These defaults match the Apache Cassandra defaults for each respective version and can be customized in the creation dialog before generating the CQL statement. + +### Counter Tables + +Counter tables have special constraints in Cassandra -- non-key columns must use the `counter` data type, and counter columns cannot be mixed with non-counter columns. AxonOps Workbench provides a dedicated **Create Counter Table** option (available from the keyspace, Tables container, or Counter Tables container context menu) that enforces these constraints in the visual dialog. + +### User-Defined Types (UDTs) + +To create a UDT, right-click on a keyspace or its **User Defined Types** container node and select **Create UDT**. + +The UDT creation dialog lets you: + +- **Name the type** -- must be unique within the keyspace +- **Define fields** -- add fields with names and CQL data types +- **Reference other UDTs** -- fields can use other UDTs already defined in the same keyspace as their type + +The dialog lists all existing UDTs in the keyspace so you can reference them when defining nested types. + +!!! note + If the keyspace has no existing UDTs, the dialog will indicate that no UDT types are available for field references. You can still create a UDT using only native CQL data types. 
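
The CQL generated by the dialog follows the standard `CREATE TYPE` syntax. As a sketch (type and field names here are illustrative), a UDT that references another UDT in the same keyspace might look like:

```sql
CREATE TYPE my_keyspace.address (
    street text,
    city text,
    postal_code text
);

CREATE TYPE my_keyspace.customer_profile (
    display_name text,
    -- nested UDT fields must be wrapped in frozen<>
    home_address frozen<address>,
    tags set<text>
);
```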
+ +--- + +## Altering Schema + +AxonOps Workbench provides visual dialogs for modifying existing schema objects. Like the creation dialogs, each alteration dialog generates the appropriate CQL statement and places it in the CQL Console for review before execution. + +### Altering Keyspaces + +Right-click on a keyspace node and select **Alter Keyspace** to open the alteration dialog. You can modify: + +- **Replication strategy** -- switch between `SimpleStrategy` and `NetworkTopologyStrategy` +- **Replication factor** -- adjust the replication factor for the selected strategy + +!!! warning + Changing replication strategy or factor on a production keyspace requires running `nodetool repair` on all nodes to ensure data consistency. Plan these changes carefully, especially in multi-datacenter environments. + +The keyspace name cannot be changed through the alter dialog. To rename a keyspace, you must create a new keyspace and migrate the data. + +### Altering Tables + +Right-click on a table node and select **Alter Table** to open the alteration dialog. Supported modifications include: + +- **Adding columns** -- define new columns with name and data type +- **Dropping columns** -- remove existing non-key columns +- **Modifying table properties** -- change options such as compaction strategy, compression, caching, TTL defaults, and `gc_grace_seconds` + +!!! danger + Dropping a column removes all data stored in that column across all rows. This operation is irreversible. Always verify your intent before executing the generated `ALTER TABLE` statement. + +The table name and primary key columns cannot be changed after creation. These are fundamental to Cassandra's data distribution and storage model. + +### Altering User-Defined Types + +Right-click on a UDT node and select **Alter UDT** to open the alteration dialog. 
You can: + +- **Add new fields** -- extend the type with additional fields +- **Rename fields** -- change the name of existing fields + +The UDT name itself cannot be altered. If you need to rename a UDT, you must create a new type, migrate your data, and drop the old type. + +!!! note + When altering a UDT, the dialog shows all existing UDTs in the keyspace (excluding the one being altered) for use as field types. If the keyspace contains only the UDT being modified, no UDT field types will be available for reference. + +### Dropping Schema Objects + +The following schema objects can be dropped through the right-click context menu: + +- **Keyspaces** -- `DROP KEYSPACE` (not available for system keyspaces) +- **Tables** -- `DROP TABLE` +- **UDTs** -- `DROP TYPE` + +Each drop action displays a confirmation dialog that clearly names the object being dropped and warns that the action is irreversible. The generated CQL statement is placed in the console for review before execution. + +### Truncating Tables + +The **Truncate Table** option removes all rows from a table while preserving the table structure. This is available from the table right-click context menu and also requires confirmation before execution. + +--- + +## Best Practices + +- **Refresh metadata after changes** -- Always refresh the metadata tree after creating, altering, or dropping schema objects to ensure the tree reflects the current cluster state. +- **Review generated CQL** -- The visual dialogs place generated statements in the CQL Console rather than executing them directly. Take advantage of this to review and adjust the CQL before running it. +- **Use schema diff before deploying** -- Compare staging and production schemas before applying migrations to catch unintended differences. +- **Back up DDL regularly** -- Use the DDL generation feature to export and version-control your schema definitions. 
+- **Prefer NetworkTopologyStrategy** -- For any cluster that may grow beyond a single datacenter, start with `NetworkTopologyStrategy` to avoid a disruptive migration later. diff --git a/docs/workbench/settings.md b/docs/workbench/settings.md new file mode 100644 index 000000000..012a97a92 --- /dev/null +++ b/docs/workbench/settings.md @@ -0,0 +1,213 @@ +--- +title: "Settings" +description: "Configure AxonOps Workbench settings. Theme, language, security, feature toggles, limits, and update preferences." +meta: + - name: keywords + content: "settings, configuration, theme, language, dark mode, updates, AxonOps Workbench" +--- + +# Settings + +The Settings dialog in AxonOps Workbench provides centralized control over the application's appearance, security, resource limits, feature toggles, update behavior, and keyboard shortcuts. Open it from the main interface to customize the workbench to your preferences. + +## Appearance + +### Theme + +AxonOps Workbench supports **Light** and **Dark** themes. The application can also detect your operating system's theme preference and adjust automatically. + +- **Light** -- A clean, bright interface suitable for well-lit environments. +- **Dark** -- A dark interface that reduces eye strain in low-light conditions. + +The workbench listens for OS-level theme changes in real time. If your system switches between light and dark mode, the application follows suit. + + + +### Language + +The interface is available in the following languages: + +| Language | Code | RTL Support | +|----------|------|-------------| +| English | `en` | No | +| Arabic - العربية | `ar` | Yes | +| Spanish | `es` | No | +| French | `fr` | No | +| Galician | `gl` | No | +| Hebrew - עִברִית | `iw` | Yes | +| Simplified Chinese - 简体中文 | `zh` | No | + +Changing the language applies immediately across the entire interface without requiring a restart. 
For right-to-left (RTL) languages such as Arabic and Hebrew, the application automatically mirrors the layout direction. + +## Security + +### Logging + +The logging feature records application events, processes, and errors to a log file on disk. This is valuable for troubleshooting issues or understanding application behavior. + +| Setting | Default | Description | +|---------|---------|-------------| +| Logging Enabled | Disabled | When enabled, the workbench writes detailed event logs to a session log file. | + +When logging is enabled, the Settings dialog displays: + +- **Log file name** -- The name of the current session's log file. +- **Log folder path** -- The directory where log files are stored. In production builds, this is the system's standard application log directory. In development mode, logs are written to `data/logging/` within the application directory. + +!!! tip + Enable logging when diagnosing connection issues or unexpected behavior. You can open the log file or its containing folder directly from the Settings dialog. + +### Content Protection + +Content protection is an OS-level security feature that prevents the application window from being captured in screenshots or screen recordings. + +| Setting | Default | Description | +|---------|---------|-------------| +| Content Protection | Disabled | When enabled, prevents screen capture tools from recording the workbench window contents. | + +This feature uses the operating system's native content protection APIs. It is available on platforms that support window-level capture prevention. + +!!! note + Content protection is hidden in the Settings dialog on platforms where it is not supported. + +## Limits + +Resource limits control the maximum allowances for specific workbench features. Adjusting these values helps manage system resource consumption. 
+ +| Setting | Default | Description | +|---------|---------|-------------| +| Sandbox Instances | 1 | Maximum number of local Cassandra sandbox instances that can run simultaneously. | +| CQLSH Sessions | 10 | Maximum number of concurrent CQLSH sessions across all connections. | +| BLOB Insert Size | 1 MB | Maximum size of binary large objects (BLOBs) that can be inserted through the workbench interface. | + +## Features + +Feature toggles enable or disable specific workbench capabilities. Additional configuration options for certain features are also available in this section. + +| Setting | Default | Description | +|---------|---------|-------------| +| Local Clusters | Enabled | Manage local Apache Cassandra clusters running in containers. | +| Basic CQLSH | Enabled | Access the basic CQLSH terminal interface for raw command-line interaction. | +| Preview BLOB | Enabled | Preview binary large objects (images, documents) directly within the query results. | +| AxonOps Integration | Enabled | Connect to AxonOps monitoring and management services from within the workbench. | +| Container Tool | None | Select the container management tool for local clusters: `docker`, `podman`, or `none`. | +| CQL Snippets Author Name | (empty) | Default author name embedded in the metadata of new CQL snippets. | + +### Container Tool Paths + +If the workbench does not automatically detect your container management tool, you can specify custom paths: + +| Setting | Default | Description | +|---------|---------|-------------| +| Podman Path | (auto-detect) | Custom path to the `podman` executable. | +| Docker Path | (auto-detect) | Custom path to the `docker` executable. | + +## Updates + +Update settings control how the workbench checks for and installs new versions. + +| Setting | Default | Description | +|---------|---------|-------------| +| Check for Updates | Enabled | Periodically check whether a new version of AxonOps Workbench is available. 
| +| Auto Update | Enabled | Automatically download and install updates when a new version is detected. | + +The update mechanism detects your installation format and uses the appropriate update channel: + +- **GitHub Releases** -- Standard builds downloaded from GitHub. +- **Mac App Store** -- macOS App Store installations. +- **Snap** -- Linux Snap package installations. +- **Flatpak** -- Linux Flatpak installations. + +!!! info + Update checking is available only in production builds. In development mode, the check-for-updates feature is disabled. + +## Keyboard Shortcuts + +AxonOps Workbench provides configurable keyboard shortcuts for common actions. Shortcuts that are marked as editable can be customized to your preferred key combinations. Platform-specific defaults are provided for Windows, macOS, and Linux. + +| Action | Windows / Linux | macOS | Editable | +|--------|----------------|-------|----------| +| Zoom In | `Ctrl+Shift+=` | `Cmd+Shift+]` | Yes | +| Zoom Out | `Ctrl+Shift+-` | `Cmd+Shift+/` | Yes | +| Zoom Reset | `Ctrl+Shift+9` | `Cmd+Shift+9` | Yes | +| Connections Search | `Ctrl+K` | `Cmd+K` | Yes | +| Clear Enhanced Console | `Ctrl+L` | `Cmd+L` | Yes | +| Increase Basic Console Font | `Ctrl+=` | `Cmd+=` | No | +| Decrease Basic Console Font | `Ctrl+-` | `Cmd+-` | No | +| Reset Basic Console Font | `Ctrl+0` | `Cmd+0` | No | +| History Statements Forward | `Ctrl+Up` | `Cmd+Up` | Yes | +| History Statements Backward | `Ctrl+Down` | `Cmd+Down` | Yes | +| Execute CQL Statement | `Ctrl+Enter` | `Cmd+Enter` | No | +| Toggle Full Screen | `F11` | `F11` | No | +| Select Rows in Range | `Shift+Click` | `Shift+Click` | No | +| Deselect Row | `Ctrl+Click` | `Ctrl+Click` | No | + +To customize an editable shortcut: + +1. Open the Settings dialog and navigate to the keyboard shortcuts section. +2. Click on the shortcut you want to change. +3. Press your desired key combination. +4. The new shortcut is saved immediately. 
+ +To reset a shortcut to its default value, use the reset option next to the shortcut entry. + +!!! note + Custom shortcuts are stored locally and persist across application restarts. Shortcuts marked as non-editable are fixed and cannot be changed. + +## Configuration File + +All settings are stored in the `app-config.cfg` file, located in the application's `config/` directory. This file uses the INI format and can be edited directly with a text editor for advanced configuration. + +**Example `app-config.cfg`:** + +```ini +[security] +loggingEnabled=false +cassandraCopyrightAcknowledged=false + +[ui] +theme=light +language=en + +[limit] +sandbox=1 +cqlsh=10 +insertBlobSize=1MB + +[sshtunnel] +readyTimeout=60000 +forwardTimeout=60000 + +[features] +localClusters=true +containersManagementTool=none +basicCQLSH=true +previewBlob=true +axonOpsIntegration=true +cqlSnippetsAuthorName= + +[updates] +checkForUpdates=true +autoUpdate=true + +[containersManagementToolsPaths] +podman= +docker= +``` + +!!! warning + Editing `app-config.cfg` manually is intended for advanced users. Incorrect values may cause unexpected behavior. Always close the application before editing the file directly, and consider creating a backup before making changes. + +### Configuration Sections + +| Section | Purpose | +|---------|---------| +| `[security]` | Logging toggle and acknowledgment flags | +| `[ui]` | Theme and language preferences | +| `[limit]` | Resource limits for sandbox instances, CQLSH sessions, and BLOB size | +| `[sshtunnel]` | Timeout values for SSH tunnel connections (in milliseconds) | +| `[features]` | Feature toggles and container tool selection | +| `[updates]` | Update checking and auto-update preferences | +| `[containersManagementToolsPaths]` | Custom paths for container management executables | + +When the application updates, configuration files are merged automatically. 
New settings introduced in an update are added with their default values, while your existing preferences are preserved. diff --git a/docs/workbench/snippets.md b/docs/workbench/snippets.md new file mode 100644 index 000000000..045b626fb --- /dev/null +++ b/docs/workbench/snippets.md @@ -0,0 +1,148 @@ +--- +title: "CQL Snippets" +description: "Save, organize, and reuse CQL queries as snippets in AxonOps Workbench. Scope snippets to workspaces, connections, keyspaces, or tables." +meta: + - name: keywords + content: "CQL snippets, save queries, reuse, templates, AxonOps Workbench" +--- + +# CQL Snippets + +CQL Snippets let you save, organize, and reuse CQL queries directly within AxonOps Workbench. Each snippet is stored as a `.cql` file with YAML frontmatter metadata, making snippets both human-readable and easy to manage outside the application. + +## What are Snippets? + +A snippet is a saved CQL query paired with metadata such as a title, author, creation date, and optional scope. Snippets serve as reusable templates for queries you run frequently -- schema inspection statements, common data lookups, administrative commands, or any CQL you want to keep at hand. + +Key characteristics of snippets: + +- Stored as individual `.cql` files on disk +- Include YAML frontmatter with metadata (title, author, date, associations) +- Can be scoped to different levels of your workspace hierarchy +- Accessible from a dedicated sidebar panel + +## Creating a Snippet + +To create a new snippet: + +1. Write or paste your CQL query in the CQL Console. +2. Save the query as a snippet using the save action. +3. Provide a title for the snippet. The title is used to generate the filename on disk. +4. Select the scope level (workspace, connection, keyspace, or table) to control where the snippet appears. + +The snippet is saved as a `.cql` file in a `_snippets` folder within the appropriate workspace directory. 
A random identifier is appended to the filename to prevent naming conflicts. + +### Author Name + +Each snippet records an author name in its metadata. You can configure the default author name in **Settings > Features** under the **CQL Snippets Author Name** field. When set, this name is automatically included in every new snippet you create. + +## Snippet Scoping + +Snippets can be associated with different levels of your workspace hierarchy. The scope determines where a snippet appears in the sidebar tree and which context it is relevant to. + +| Scope | Description | Visible From | +|-------|-------------|--------------| +| **Workspace** | Available across all connections in the workspace | Workspace node in the snippets tree | +| **Connection** | Tied to a specific cluster connection | Connection node and below | +| **Keyspace** | Associated with a particular keyspace | Keyspace node and its tables | +| **Table** | Scoped to a specific table | Table node only | + +The scoping mechanism uses an `associated_with` array in the snippet's frontmatter: + +- An empty or absent `associated_with` field means the snippet is scoped to the workspace level. +- A single-element array `[connectionID]` scopes the snippet to a specific connection. +- A two-element array `[connectionID, keyspaceName]` scopes it to a keyspace. +- A three-element array `[connectionID, keyspaceName, tableName]` scopes it to a table. + +This hierarchical scoping keeps your snippet library organized as it grows, ensuring that relevant queries appear in the right context. + +## File Format + +Snippets are plain-text `.cql` files with YAML frontmatter at the top. Here is an example: + +```sql +--- +title: Find large partitions +author: Jane Smith +created_date: 2025-01-15T10:30:00Z +associated_with: + - connection-a1b2c3d4 + - my_keyspace +--- +SELECT token(id), id, partition_size +FROM my_keyspace.my_table +WHERE token(id) > ? 
+LIMIT 100; +``` + +The frontmatter section (between the `---` markers) contains: + +| Field | Description | +|-------|-------------| +| `title` | Display name for the snippet | +| `author` | Name of the snippet creator | +| `created_date` | ISO 8601 timestamp of when the snippet was created | +| `associated_with` | Array defining the snippet's scope (see Snippet Scoping above) | + +Everything after the closing `---` marker is the CQL query body. + +## Managing Snippets + +### Browsing Snippets + +The snippets sidebar presents your saved snippets in a tree view that mirrors your workspace structure: + +- **Workspaces** -- Top-level node listing all workspaces +- **Connections** -- Each connection within a workspace, expandable to show keyspaces and tables +- **Orphaned Snippets** -- A dedicated section for snippets whose associated connection no longer exists + +Expanding a node in the tree loads the connection's metadata (keyspaces and tables) so you can browse snippets at every level. + + + +### Editing a Snippet + +To update an existing snippet: + +1. Select the snippet from the sidebar. +2. Modify the CQL query or metadata as needed. +3. Save the changes. The updated content is written back to the same `.cql` file on disk. + +### Deleting a Snippet + +To remove a snippet, select it and choose the delete action. This permanently removes the `.cql` file from the filesystem. + +!!! warning + Deleting a snippet is irreversible. The file is removed from disk and cannot be recovered through the application. + +### Orphaned Snippets + +When a connection is deleted but its associated snippets remain on disk, those snippets become orphaned. AxonOps Workbench automatically detects orphaned snippets and displays them under the **Orphaned Snippets** section in the sidebar tree. 
+ +Orphaned snippets occur when: + +- A connection has been removed from the workspace +- The snippet's `associated_with` references a connection ID that no longer exists + +You can review orphaned snippets to decide whether to delete them or reassign them by editing their frontmatter. + +## Storage Location + +Snippets are stored in `_snippets` folders within the workspace directory structure: + +``` +data/ + workspaces/ + _snippets/ # Global snippets + my-query-a1b2.cql + my-workspace/ + _snippets/ # Workspace-level snippets + find-large-partitions-c3d4.cql + check-compaction-e5f6.cql + connections.json +``` + +Workspace-level and connection-scoped snippets reside in the `_snippets` folder inside the workspace directory. The scope is determined by the `associated_with` metadata within each file, not by the file's physical location. + +!!! tip + Because snippets are plain `.cql` files with readable frontmatter, you can create, edit, or organize them using any text editor. This also makes it straightforward to share snippets with your team through version control or file sharing. diff --git a/docs/workbench/troubleshooting.md b/docs/workbench/troubleshooting.md new file mode 100644 index 000000000..b1c9c8cae --- /dev/null +++ b/docs/workbench/troubleshooting.md @@ -0,0 +1,259 @@ +--- +title: "Troubleshooting" +description: "Troubleshoot common AxonOps Workbench issues. Connection failures, authentication errors, SSL problems, Docker issues, and query errors." +meta: + - name: keywords + content: "troubleshooting, connection failure, authentication error, SSL error, Docker, AxonOps Workbench" +--- + +# Troubleshooting + +This page covers common issues you may encounter when using AxonOps Workbench and provides steps to resolve them. 
+
+## Connection Issues
+
+### Cannot Connect to Cluster
+
+If the workbench fails to establish a connection to your Cassandra cluster, work through the following checklist:
+
+- **Hostname and port** -- Verify that the hostname (or IP address) and port are correct. The default Cassandra native transport port is `9042`.
+- **Cassandra is running** -- Confirm the Cassandra process is running on the target host. Use `nodetool status` or check the system service status.
+- **Firewall rules** -- Ensure that your firewall allows traffic on the configured port between your machine and the Cassandra node.
+- **Datacenter setting** -- If you specified a datacenter in the connection dialog, confirm it matches a datacenter name reported by `nodetool status`. An incorrect datacenter name can prevent the driver from discovering nodes.
+- **Network connectivity** -- Run a basic connectivity test from your machine:
+
+    ```bash
+    telnet <hostname> 9042
+    ```
+
+    or:
+
+    ```bash
+    nc -zv <hostname> 9042
+    ```
+
+!!! tip
+    Use the `--test-connection` CLI argument to validate connectivity before importing a connection. See the [CLI Reference](cli.md) for details.
+
+### Authentication Failed
+
+If you receive an authentication error:
+
+- **Credentials** -- Double-check the username and password in the connection dialog. Cassandra credentials are case-sensitive.
+- **Authenticator configuration** -- Confirm that the Cassandra cluster has authentication enabled (`authenticator: PasswordAuthenticator` in `cassandra.yaml`). If authentication is disabled on the server but credentials are provided in the workbench, the connection may still fail.
+- **Role permissions** -- Verify that the user role has `LOGIN` permission and the necessary grants on the target keyspaces.
+
+### Connection Timeout
+
+If connections are timing out:
+
+- **Network latency** -- High latency or unstable network conditions can cause timeouts. Test connectivity with `ping` or `traceroute`.
+- **SSH tunnel timeouts** -- If using an SSH tunnel, the workbench has configurable timeout values for tunnel establishment (`readyTimeout`) and port forwarding (`forwardTimeout`), both defaulting to 60 seconds. For slow networks, consider increasing these values in the application configuration.
+- **Overloaded cluster** -- A Cassandra node under heavy load may be slow to accept new connections. Check node health with `nodetool status` and `nodetool tpstats`.
+
+## SSL/TLS Issues
+
+### Certificate Errors
+
+SSL/TLS connection failures are typically caused by certificate configuration problems.
+
+**Wrong CA certificate:**
+Ensure the CA certificate file (`certfile`) matches the certificate authority that signed the Cassandra node's certificate. If the server uses a certificate chain, the CA file must include the full chain.
+
+**Expired certificate:**
+Check the certificate expiry date:
+
+```bash
+openssl x509 -in /path/to/cert.pem -noout -enddate
+```
+
+If the certificate has expired, obtain a renewed certificate from your certificate authority.
+
+**Hostname mismatch:**
+The certificate's Common Name (CN) or Subject Alternative Name (SAN) must match the hostname you are connecting to. Verify with:
+
+```bash
+openssl x509 -in /path/to/cert.pem -noout -text | grep -A1 "Subject Alternative Name"
+```
+
+**Self-signed certificates:**
+
+!!! danger
+    Disabling certificate validation reduces security and should only be used in development or testing environments. Never disable validation for production connections.
+
+If you are using self-signed certificates and encounter validation errors, you can disable certificate validation in the connection's SSL settings by unchecking the **Validate** option. This bypasses hostname and CA verification.
+
+### Key Format Issues
+
+- Ensure private key files are in PEM format.
If your key is in a different format (e.g., PKCS#12), convert it first: + + ```bash + openssl pkcs12 -in keystore.p12 -out userkey.pem -nodes -nocerts + ``` + +- Confirm that the private key file is not password-protected unless the workbench supports passphrase entry for client certificates. +- Verify file permissions on key files. On Linux and macOS, private keys should be readable only by the owner: + + ```bash + chmod 600 /path/to/userkey.pem + ``` + +## SSH Tunnel Issues + +### Tunnel Establishment Failed + +If the SSH tunnel cannot be established: + +- **SSH host reachability** -- Verify that the SSH host is reachable from your machine on the specified SSH port (default `22`). +- **Credentials** -- Check the SSH username and password, or ensure the private key file path is correct and the key has appropriate permissions (`chmod 600`). +- **Host key verification** -- If the SSH server's host key has changed (e.g., after a server rebuild), you may need to update your `known_hosts` file. +- **SSH server configuration** -- Confirm that the SSH server allows TCP forwarding. Check for `AllowTcpForwarding yes` in the server's `sshd_config`. + +### Port Forwarding Failed + +If the tunnel is established but port forwarding fails: + +- **Destination address and port** -- Verify that the `destaddr` and `destport` values in the SSH configuration point to the correct Cassandra host and port as reachable from the SSH server. +- **Port conflicts** -- Ensure the local port used for tunneling is not already in use by another process. +- **Firewall on the SSH host** -- The SSH server must be able to reach the Cassandra node on the specified destination address and port. + +## Local Cluster Issues + +### Docker or Podman Not Found + +AxonOps Workbench uses Docker or Podman to manage local (sandbox) Cassandra clusters. 
If the workbench cannot find your container runtime: + +- **Verify installation** -- Confirm Docker or Podman is installed and accessible from the command line: + + ```bash + docker --version + # or + podman --version + ``` + +- **Container management tool setting** -- In **Settings**, check that the **Containers Management Tool** is set to the correct runtime (`docker` or `podman`). By default, this is set to `none`. +- **Custom paths** -- If Docker or Podman is installed in a non-standard location, configure the path in the application settings under **Containers Management Tool Paths**. + +### Cluster Won't Start + +If a local cluster fails to start: + +- **Port conflicts** -- Cassandra's default ports (`9042`, `7000`, `7001`, `7199`) may be in use by another process. Check for conflicts: + + ```bash + lsof -i :9042 + # or on Linux + ss -tlnp | grep 9042 + ``` + +- **Insufficient disk space** -- Docker containers need adequate disk space. Check available space with `df -h` and ensure Docker has sufficient storage. +- **Docker daemon not running** -- Verify the Docker daemon is active: + + ```bash + sudo systemctl status docker + ``` + +- **Resource limits** -- Cassandra requires a minimum of 2 GB of memory. Ensure your Docker environment has sufficient resources allocated (check Docker Desktop settings on macOS and Windows). + +### Permission Errors on Linux + +If you encounter permission errors when managing local clusters on Linux: + +- **Docker group membership** -- Add your user to the `docker` group to avoid needing `sudo`: + + ```bash + sudo usermod -aG docker $USER + ``` + + Log out and log back in for the change to take effect. + +- **Podman rootless mode** -- If using Podman in rootless mode, ensure your user has the necessary sub-UID and sub-GID ranges configured in `/etc/subuid` and `/etc/subgid`. + +## Query Issues + +### Execution Errors + +**Syntax errors:** +CQL syntax errors are reported with the line and position of the error. 
Common causes include missing semicolons, mismatched quotes, and incorrect keyword usage. Refer to the [Apache Cassandra CQL documentation](https://cassandra.apache.org/doc/latest/cassandra/cql/){:target="_blank"} for correct syntax. + +**No keyspace selected:** +If you see `No keyspace has been specified`, run a `USE` statement to select a keyspace before executing table-level queries: + +```sql +USE my_keyspace; +SELECT * FROM my_table LIMIT 10; +``` + +Alternatively, qualify table names with the keyspace: + +```sql +SELECT * FROM my_keyspace.my_table LIMIT 10; +``` + +**Table not found:** +Verify the table exists in the current keyspace: + +```sql +DESCRIBE TABLES; +``` + +Table and keyspace names are case-sensitive when quoted. If the table was created with double-quoted names, you must use the same casing. + +### Slow Query Performance + +If queries are running slowly: + +- **Enable Query Tracing** -- Use the tracing feature in the CQL console to see how the query is executed across nodes. This reveals the time spent on each operation and helps identify bottlenecks. + + ```sql + TRACING ON; + SELECT * FROM my_keyspace.my_table WHERE id = 'abc123'; + TRACING OFF; + ``` + +- **Check query patterns** -- Avoid `SELECT *` on large tables without a `WHERE` clause. Full table scans are expensive in Cassandra. +- **Review the data model** -- Ensure your queries match the table's partition key and clustering columns. Queries that require `ALLOW FILTERING` are typically inefficient and should be avoided in production. + +## Application Issues + +### App Won't Start + +If AxonOps Workbench fails to launch: + +- **Check system requirements** -- Ensure your operating system and hardware meet the minimum requirements. AxonOps Workbench runs on Windows, macOS, and Linux. +- **Clear application configuration** -- A corrupted configuration file can prevent startup. 
Locate and remove the configuration directory to reset to defaults: + + - **Windows:** `%APPDATA%/AxonOps Workbench/` + - **macOS:** `~/Library/Application Support/AxonOps Workbench/` + - **Linux:** `~/.config/AxonOps Workbench/` + + !!! warning + Clearing the configuration directory removes all saved workspaces and connections. Export your data first if possible. + +- **Reinstall the application** -- Download the latest version from the [AxonOps Workbench releases page](https://github.com/axonops/axonops-workbench-cassandra/releases){:target="_blank"} and perform a clean installation. +- **Check log files** -- Application logs can reveal startup errors. Log files are located in: + + - **Windows:** `%APPDATA%/AxonOps Workbench/logs/` + - **macOS:** `~/Library/Logs/AxonOps Workbench/` + - **Linux:** `~/.config/AxonOps Workbench/logs/` + +### Update Issues + +If the application fails to update: + +- **Manual update** -- Download the latest version directly from the [releases page](https://github.com/axonops/axonops-workbench-cassandra/releases){:target="_blank"} and install it over the existing version. +- **Check network access** -- Automatic updates require access to GitHub. Verify that your network allows outbound connections to `github.com`. +- **Permissions** -- On Linux and macOS, ensure the application directory is writable by your user. + +## Getting Help + +If the steps above do not resolve your issue: + +- **GitHub Issues** -- Search for existing issues or open a new one at [github.com/axonops/axonops-workbench-cassandra/issues](https://github.com/axonops/axonops-workbench-cassandra/issues){:target="_blank"}. Include the following information: + + - AxonOps Workbench version (`--version`) + - Operating system and version + - Steps to reproduce the issue + - Relevant log file excerpts + +- **Log files** -- Always attach relevant log files when reporting issues. Logs contain timestamps, error messages, and stack traces that help diagnose problems quickly. 
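The environment details requested above can be gathered with a short script before filing an issue. A sketch, assuming the `axonops-workbench` binary is on your `PATH` and using the platform-specific log directory listed earlier as the optional argument:

```python
import pathlib
import platform
import subprocess

def collect_report_info(log_dir=None):
    """Gather the details requested when filing a GitHub issue."""
    info = {
        "os": f"{platform.system()} {platform.release()}",
    }
    # Workbench version via the CLI (assumes the binary is on PATH).
    try:
        out = subprocess.run(["axonops-workbench", "--version"],
                             capture_output=True, text=True, timeout=10)
        info["workbench_version"] = out.stdout.strip() or "unknown"
    except (FileNotFoundError, subprocess.TimeoutExpired):
        info["workbench_version"] = "unknown (binary not on PATH)"
    # Tail of the most recent log file, if a log directory was given.
    if log_dir:
        logs = sorted(pathlib.Path(log_dir).glob("*.log"))
        if logs:
            info["log_tail"] = logs[-1].read_text().splitlines()[-50:]
    return info
```

Paste the resulting values, along with your reproduction steps, into the issue template.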
diff --git a/docs/workbench/workspaces.md b/docs/workbench/workspaces.md new file mode 100644 index 000000000..8ed28bfe6 --- /dev/null +++ b/docs/workbench/workspaces.md @@ -0,0 +1,166 @@ +--- +title: "Workspaces" +description: "Organize your Cassandra connections with workspaces in AxonOps Workbench. Create, customize, and share workspace configurations." +meta: + - name: keywords + content: "workspaces, organize connections, share configuration, AxonOps Workbench" +--- + +# Workspaces + +Workspaces are organizational containers in AxonOps Workbench that let you group related database connections together. Think of them as projects -- each workspace can hold multiple connections, making it straightforward to separate development, staging, and production environments or to organize connections by team or application. + +## What is a Workspace? + +Every connection in AxonOps Workbench belongs to a workspace. When you first launch the application, a default sandbox workspace is available for quick experimentation. Beyond that, you can create as many workspaces as you need, each with its own name, color, storage path, and set of connections. + +Workspaces provide: + +- **Logical grouping** of connections by project, environment, or team +- **Visual identification** through customizable colors +- **Flexible storage** with default or custom directory paths +- **Portability** through import and export capabilities + +## Creating a Workspace + +To create a new workspace: + +1. Open the workspace management area from the left sidebar. +2. Click the button to add a new workspace. +3. Fill in the workspace details: + + - **Name** -- A unique name for the workspace. This name is also used to generate the workspace folder name on disk. + - **Color** -- Choose a color to visually distinguish this workspace from others. A color picker is provided for precise selection. + - **Path** -- Select where workspace data should be stored. 
You can use the default data directory or specify a custom path on your filesystem. + +4. Confirm creation. The workbench creates a dedicated folder for the workspace and initializes a `connections.json` manifest inside it. + +!!! note + Workspace names must be unique. If you attempt to create a workspace with a name that already exists, the operation will be rejected. + +## Workspace Colors + +Each workspace can be assigned a color that appears throughout the interface, making it easy to identify which workspace you are working in at a glance. This is particularly useful when managing many workspaces simultaneously. + +The color picker supports any standard color format, including HEX (e.g., `#FF5733`), RGB, and HSL values. + + + +## Managing Workspaces + +### Switching Workspaces + +The workspace switcher in the left sidebar displays all available workspaces. Click on any workspace to switch to it and view its connections. Workspaces are sorted alphabetically for easy navigation. + +### Editing a Workspace + +You can update a workspace's name, color, or storage path at any time: + +1. Select the workspace you want to edit. +2. Open the edit dialog. +3. Modify the desired fields. +4. Save your changes. + +When you rename a workspace or change its storage path, the workbench automatically moves the workspace folder and all its contents to the new location. + +### Deleting a Workspace + +When you delete a workspace, the workbench removes: + +- The workspace entry from the master `workspaces.json` manifest +- The workspace folder and all files within it, including connection data + +!!! warning + Deleting a workspace permanently removes all connections and associated data stored within it. This action cannot be undone. + +## Storage + +Workspace data is stored as JSON files on your local filesystem. 
The overall structure is: + +``` +data/ + workspaces/ + workspaces.json # Master manifest listing all workspaces + my-project/ # Individual workspace folder + connections.json # Connections manifest for this workspace + _snippets/ # CQL snippets scoped to this workspace + connection-folder-1/ # Data for a specific connection + connection-folder-2/ + another-workspace/ + connections.json + ... +``` + +### Default Path vs. Custom Path + +- **Default path** -- Workspace folders are created inside the application's built-in `data/workspaces/` directory. This is the simplest option and works well for most users. +- **Custom path** -- You can specify any accessible directory on your filesystem. This is useful when you want to store workspace data on a shared drive, a specific partition, or alongside your project source code. + +The `workspaces.json` master manifest tracks the location of each workspace, recording whether it uses the default path or a custom one. + +## Sharing Workspaces + +Workspaces are portable. Because all workspace data is stored as standard JSON files in a well-defined folder structure, you can share workspaces with team members or transfer them between machines. + +### Exporting a Workspace + +To share a workspace, copy its folder from the data directory. The folder contains everything needed to recreate the workspace: connection definitions, snippets, and configuration. 
+ +### Importing a Workspace via the CLI + +AxonOps Workbench provides command-line arguments for importing workspaces programmatically: + +**Import from a JSON string:** + +```bash +./axonops-workbench --import-workspace='{"name":"Production", "color":"#FF5733"}' +``` + +**Import from a JSON file:** + +```bash +./axonops-workbench --import-workspace=/path/to/workspace.json +``` + +**Import from an existing workspace folder:** + +```bash +./axonops-workbench --import-workspace=/path/to/workspace-folder/ +``` + +When importing from a folder path, the workbench detects workspaces (including nested workspace folders one level deep) and imports them along with their connections. + +### Additional CLI Flags + +| Flag | Description | +|------|-------------| +| `--copy-to-default` | Copies the imported workspace folder into the default data directory. Without this flag, folder-based imports reference the workspace at its original path. | +| `--delete-file` | Deletes the source JSON file after a successful import. Ignored when the source is a folder. | +| `--json` | Outputs import results as a JSON string instead of formatted text, useful for scripting and automation. | + +**Example -- automated import with cleanup:** + +```bash +./axonops-workbench --import-workspace=/path/to/workspace.json --copy-to-default --delete-file +``` + +### Source Control + +Because workspace data is stored as plain JSON files, you can commit workspace configurations to version control. This enables teams to share a consistent set of connection definitions across environments. + +!!! tip + When sharing workspace configurations through source control, be mindful of sensitive data such as credentials. Connection files may contain authentication details that should not be committed to a repository. Consider using SSH key-based authentication or environment-specific configuration to avoid exposing secrets. 
+ +### Listing Workspaces via the CLI + +To view all saved workspaces from the command line: + +```bash +./axonops-workbench --list-workspaces +``` + +Add `--json` for machine-readable output: + +```bash +./axonops-workbench --list-workspaces --json +``` diff --git a/mkdocs.yml b/mkdocs.yml index 0e6a342bf..b606455b2 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -287,9 +287,30 @@ nav: - Re-use an existing host ID: how-to/reuse-host-id.md - AxonOps Workbench: - - Cassandra: - # - 'Overview' : workbench/cassandra/cassandra.md - - License: workbench/cassandra/license.md + - Overview: workbench/index.md + - Getting Started: + - Installation: workbench/getting-started/installation.md + - First Steps: workbench/getting-started/first-steps.md + - Local Clusters: workbench/getting-started/local-clusters.md + - Connections: + - Overview: workbench/connections/index.md + - Apache Cassandra: workbench/connections/cassandra.md + - DataStax Astra DB: workbench/connections/astra-db.md + - SSH Tunneling: workbench/connections/ssh-tunneling.md + - SSL/TLS: workbench/connections/ssl-tls.md + - CQL Console: + - Overview: workbench/cql-console/index.md + - Query Execution: workbench/cql-console/query-execution.md + - Query Tracing: workbench/cql-console/query-tracing.md + - Results & Export: workbench/cql-console/results-export.md + - Schema Management: workbench/schema.md + - Workspaces: workbench/workspaces.md + - CQL Snippets: workbench/snippets.md + - Settings: workbench/settings.md + - CLI Reference: workbench/cli.md + - AxonOps Integration: workbench/axonops-integration.md + - Troubleshooting: workbench/troubleshooting.md + - License: workbench/license.md - Data Platforms: - Overview: data-platforms/index.md