... Filter file resources by one of the following
- filter names and a corresponding value: project,
- regex.
- ```
-
-## dataset resource inspect
-
-Display all metadata of a file resource.
-
-```shell-session title="Usage"
-$ cmemc dataset resource inspect [OPTIONS] RESOURCE_ID
-```
-
-
-
-
-
-??? info "Options"
- ```text
-
- --raw Outputs raw JSON.
- ```
-
-## dataset resource usage
-
-Display all usage data of a file resource.
-
-```shell-session title="Usage"
-$ cmemc dataset resource usage [OPTIONS] RESOURCE_ID
-```
-
-
-
-
-
-??? info "Options"
- ```text
-
- --raw Outputs raw JSON.
- ```
-
diff --git a/docs/automate/cmemc-command-line-interface/command-reference/graph/imports/index.md b/docs/automate/cmemc-command-line-interface/command-reference/graph/imports/index.md
index 19409ee67..b6a270f84 100644
--- a/docs/automate/cmemc-command-line-interface/command-reference/graph/imports/index.md
+++ b/docs/automate/cmemc-command-line-interface/command-reference/graph/imports/index.md
@@ -6,7 +6,9 @@ tags:
- KnowledgeGraph
- cmemc
---
+
# graph imports Command Group
+
List, create, delete and show graph imports.
@@ -16,25 +18,18 @@ Graphs are identified by an IRI. Statement imports are managed by creating owl:i
!!! note
    To get a list of existing graphs, execute the `graph list` command or use tab-completion.
-
-
## graph imports tree
Show graph tree(s) of the imports statement hierarchy.
```shell-session title="Usage"
-$ cmemc graph imports tree [OPTIONS] [IRIS]...
+cmemc graph imports tree [OPTIONS] [IRIS]...
```
-
-
-
You can output one or more trees of the import hierarchy.
Imported graphs which do not exist are shown as `[missing: IRI]`. Imported graphs which will result in an import cycle are shown as `[ignored: IRI]`. Each graph is shown with label and IRI.
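As a sketch (the graph IRI is a placeholder), printing the import tree of a single graph:

```shell-session title="Example"
cmemc graph imports tree https://mygraph.de/xyz/
```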
-
-
??? info "Options"
```text
@@ -51,16 +46,11 @@ Imported graphs which do not exist are shown as `[missing: IRI]`. Imported graph
List accessible graph imports statements.
```shell-session title="Usage"
-$ cmemc graph imports list [OPTIONS]
+cmemc graph imports list [OPTIONS]
```
-
-
-
Graphs are identified by an IRI. Statement imports are managed by creating owl:imports statements such as "`FROM_GRAPH` owl:imports `TO_GRAPH`" in the `FROM_GRAPH`. All statements in the `TO_GRAPH` are then available in the `FROM_GRAPH`.
-
-
??? info "Options"
```text
@@ -74,36 +64,24 @@ Graphs are identified by an IRI. Statement imports are managed by creating owl:i
Add statement to import a TO_GRAPH into a FROM_GRAPH.
```shell-session title="Usage"
-$ cmemc graph imports create FROM_GRAPH TO_GRAPH
+cmemc graph imports create FROM_GRAPH TO_GRAPH
```
-
-
-
Graphs are identified by an IRI. Statement imports are managed by creating owl:imports statements such as "`FROM_GRAPH` owl:imports `TO_GRAPH`" in the `FROM_GRAPH`. All statements in the `TO_GRAPH` are then available in the `FROM_GRAPH`.
!!! note
    To get a list of existing graphs, execute the `graph list` command or use tab-completion.
-
-
-
## graph imports delete
Delete statement to import a TO_GRAPH into a FROM_GRAPH.
```shell-session title="Usage"
-$ cmemc graph imports delete FROM_GRAPH TO_GRAPH
+cmemc graph imports delete FROM_GRAPH TO_GRAPH
```
-
-
-
Graphs are identified by an IRI. Statement imports are managed by creating owl:imports statements such as "`FROM_GRAPH` owl:imports `TO_GRAPH`" in the `FROM_GRAPH`. All statements in the `TO_GRAPH` are then available in the `FROM_GRAPH`.
!!! note
    To get a list of existing graph imports, execute the `graph imports list` command or use tab-completion.
-
-
-
diff --git a/docs/automate/cmemc-command-line-interface/command-reference/graph/index.md b/docs/automate/cmemc-command-line-interface/command-reference/graph/index.md
index c2ed68917..5b74752da 100644
--- a/docs/automate/cmemc-command-line-interface/command-reference/graph/index.md
+++ b/docs/automate/cmemc-command-line-interface/command-reference/graph/index.md
@@ -6,7 +6,9 @@ tags:
- KnowledgeGraph
- cmemc
---
+
# graph Command Group
+
List, import, export, delete, count, tree or open graphs.
@@ -16,23 +18,16 @@ Graphs are identified by an IRI.
!!! note
    To get a list of existing graphs, execute the `graph list` command or use tab-completion.
-
-
## graph count
Count triples in graph(s).
```shell-session title="Usage"
-$ cmemc graph count [OPTIONS] [IRIS]...
+cmemc graph count [OPTIONS] [IRIS]...
```
-
-
-
This command lists graphs with their triple count. Counts do not include imported graphs.
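For example, combining the documented `--summarize` flag with specific graphs prints only the total over all selected graphs (the IRIs are placeholders):

```shell-session title="Example"
cmemc graph count --summarize https://mygraph.de/xyz/ https://mygraph.de/abc/
```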
-
-
??? info "Options"
```text
@@ -40,41 +35,14 @@ This command lists graphs with their triple count. Counts do not include importe
-s, --summarize Display only a sum of all counted graphs together
```
-## graph tree
-
-(Hidden) Deprecated: use 'graph imports tree' instead.
-
-```shell-session title="Usage"
-$ cmemc graph tree [OPTIONS] [IRIS]...
-```
-
-
-
-
-
-??? info "Options"
- ```text
-
- -a, --all Show tree of all (readable) graphs.
- --raw Outputs raw JSON of the graph importTree API response.
- --id-only Lists only graph identifier (IRIs) and no labels or other
- metadata. This is useful for piping the IRIs into other
- commands. The output with this option is a sorted, flat, de-
- duplicated list of existing graphs.
- ```
-
## graph list
List accessible graphs.
```shell-session title="Usage"
-$ cmemc graph list [OPTIONS]
+cmemc graph list [OPTIONS]
```
-
-
-
-
??? info "Options"
```text
@@ -94,13 +62,8 @@ Export graph(s) as NTriples to stdout (-), file or directory.
$ cmemc graph export [OPTIONS] [IRIS]...
```
-
-
-
In case of file export, data from all selected graphs will be concatenated in one file. In case of directory export, .graph and .ttl files will be created for each graph.
-
-
??? info "Options"
```text
@@ -137,13 +100,9 @@ In case of file export, data from all selected graphs will be concatenated in on
Delete graph(s) from the store.
```shell-session title="Usage"
-$ cmemc graph delete [OPTIONS] [IRIS]...
+cmemc graph delete [OPTIONS] [IRIS]...
```
-
-
-
-
??? info "Options"
```text
@@ -161,12 +120,9 @@ $ cmemc graph delete [OPTIONS] [IRIS]...
Import graph(s) to the store.
```shell-session title="Usage"
-$ cmemc graph import [OPTIONS] INPUT_PATH [IRI]
+cmemc graph import [OPTIONS] INPUT_PATH [IRI]
```
-
-
-
If input is a file, content will be uploaded to the graph identified with the IRI.
If input is a directory and NO IRI is given, it scans for file-pairs such as `xyz.ttl` and `xyz.ttl.graph`, where `xyz.ttl` is the actual triples file and `xyz.ttl.graph` contains the graph IRI in the first line: `https://mygraph.de/xyz/`.
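As a minimal sketch of that file-pair convention (the directory name and the triple are illustrative; the IRI follows the example above):

```shell-session title="Example"
mkdir -p graphs
printf '<urn:a> <urn:b> <urn:c> .\n' > graphs/xyz.ttl
printf 'https://mygraph.de/xyz/\n' > graphs/xyz.ttl.graph
# import every pair found on the first directory level:
# cmemc graph import graphs
```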
@@ -178,9 +134,6 @@ If the ``--replace`` flag is set, the data in the graphs will be overwritten, if
!!! note
Directories are scanned on the first level only (not recursively).
-
-
-
??? info "Options"
```text
@@ -201,10 +154,6 @@ If the ``--replace`` flag is set, the data in the graphs will be overwritten, if
Open / explore a graph in the browser.
```shell-session title="Usage"
-$ cmemc graph open IRI
+cmemc graph open IRI
```
-
-
-
-
diff --git a/docs/automate/cmemc-command-line-interface/command-reference/graph/insights/index.md b/docs/automate/cmemc-command-line-interface/command-reference/graph/insights/index.md
index 7452ef1e3..3303492ba 100644
--- a/docs/automate/cmemc-command-line-interface/command-reference/graph/insights/index.md
+++ b/docs/automate/cmemc-command-line-interface/command-reference/graph/insights/index.md
@@ -5,29 +5,25 @@ icon: eccenca/graph-insights
tags:
- cmemc
---
+
# graph insights Command Group
+
List, create, delete and inspect graph insight snapshots.
Graph Insight Snapshots are identified by an ID. To get a list of existing snapshots, execute the `graph insights list` command or use tab-completion.
-
## graph insights list
List graph insight snapshots.
```shell-session title="Usage"
-$ cmemc graph insights list [OPTIONS]
+cmemc graph insights list [OPTIONS]
```
-
-
-
Graph Insight Snapshots are identified by an ID.
-
-
??? info "Options"
```text
@@ -44,24 +40,17 @@ Graph Insights Snapshots are identified by an ID.
Delete graph insight snapshots.
```shell-session title="Usage"
-$ cmemc graph insights delete [OPTIONS] [SNAPSHOT_IDS]...
+cmemc graph insights delete [OPTIONS] [SNAPSHOT_IDS]...
```
-
-
-
Graph Insight Snapshots are identified by an ID.
!!! warning
Snapshots will be deleted without prompting.
-
!!! note
Snapshots can be listed by using the `graph insights list` command.
-
-
-
??? info "Options"
```text
@@ -76,16 +65,11 @@ Graph Insight Snapshots are identified by an ID.
Create or update a graph insight snapshot.
```shell-session title="Usage"
-$ cmemc graph insights create [OPTIONS] IRI
+cmemc graph insights create [OPTIONS] IRI
```
-
-
-
Create a graph insight snapshot for a given graph. If the snapshot already exists, it is hot-swapped after re-creation. The snapshot contains only the (imported) graphs the requesting user can read.
-
-
??? info "Options"
```text
@@ -102,16 +86,11 @@ Create a graph insight snapshot for a given graph. If the snapshot already exist
Update a graph insight snapshot.
```shell-session title="Usage"
-$ cmemc graph insights update [OPTIONS] [SNAPSHOT_ID]
+cmemc graph insights update [OPTIONS] [SNAPSHOT_ID]
```
-
-
-
After the update, the snapshot is hot-swapped.
-
-
??? info "Options"
```text
@@ -133,16 +112,11 @@ After the update, the snapshot is hot-swapped.
Inspect the metadata of a graph insight snapshot.
```shell-session title="Usage"
-$ cmemc graph insights inspect [OPTIONS] SNAPSHOT_ID
+cmemc graph insights inspect [OPTIONS] SNAPSHOT_ID
```
-
-
-
-
??? info "Options"
```text
--raw Outputs raw JSON.
```
-
diff --git a/docs/automate/cmemc-command-line-interface/command-reference/graph/validation/index.md b/docs/automate/cmemc-command-line-interface/command-reference/graph/validation/index.md
index 2932c48b8..c0fea074f 100644
--- a/docs/automate/cmemc-command-line-interface/command-reference/graph/validation/index.md
+++ b/docs/automate/cmemc-command-line-interface/command-reference/graph/validation/index.md
@@ -7,7 +7,9 @@ tags:
- Validation
- cmemc
---
+
# graph validation Command Group
+
Validate resources in a graph.
@@ -17,23 +19,16 @@ This command group is dedicated to the management of resource validation process
!!! note
Validation processes are identified with a random ID and can be listed with the `graph validation list` command. To start or cancel validation processes, use the `graph validation execute` and `graph validation cancel` command. To inspect the found violations of a validation process, use the `graph validation inspect` command.
-
-
## graph validation execute
Start a new validation process.
```shell-session title="Usage"
-$ cmemc graph validation execute [OPTIONS] IRI
+cmemc graph validation execute [OPTIONS] IRI
```
-
-
-
Validation is performed on all typed resources of the data / context graph (and its sub-graphs). Each resource is validated against all applicable node shapes from the shape catalog.
-
-
??? info "Options"
```text
@@ -75,20 +70,14 @@ Validation is performed on all typed resources of the data / context graph (and
List running and finished validation processes.
```shell-session title="Usage"
-$ cmemc graph validation list [OPTIONS]
+cmemc graph validation list [OPTIONS]
```
-
-
-
This command provides a filterable table or identifier list of validation processes. The command operates on the process summary and provides some statistics.
!!! note
Detailed information on the found violations can be listed with the `graph validation inspect` command.
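For instance, to narrow the table down to processes that are still running (the `--filter status running` combination is also referenced by the `graph validation cancel` command):

```shell-session title="Example"
cmemc graph validation list --filter status running
```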
-
-
-
??? info "Options"
```text
@@ -106,12 +95,9 @@ This command provides a filterable table or identifier list of validation proces
List and inspect errors found with a validation process.
```shell-session title="Usage"
-$ cmemc graph validation inspect [OPTIONS] PROCESS_ID
+cmemc graph validation inspect [OPTIONS] PROCESS_ID
```
-
-
-
This command provides detailed information on the found violations of a validation process.
Use the ``--filter`` option to limit the output based on different criteria such as constraint name (`constraint`), origin node shape of the rule (`node-shape`), or the validated resource (`resource`).
@@ -119,9 +105,6 @@ Use the ``--filter`` option to limit the output based on different criteria such
!!! note
Validation processes IDs can be listed with the `graph validation list` command, or by utilizing the tab completion of this command.
-
-
-
??? info "Options"
```text
@@ -140,18 +123,12 @@ Use the ``--filter`` option to limit the output based on different criteria such
Cancel a running validation process.
```shell-session title="Usage"
-$ cmemc graph validation cancel PROCESS_ID
+cmemc graph validation cancel PROCESS_ID
```
-
-
-
!!! note
In order to get the process IDs of all currently running validation processes, use the `graph validation list` command with the option `--filter status running`, or utilize the tab completion of this command.
-
-
-
## graph validation export
Export a report of finished validations.
@@ -160,9 +137,6 @@ Export a report of finished validations.
$ cmemc graph validation export [OPTIONS] [PROCESS_IDS]...
```
-
-
-
This command exports a jUnit XML or JSON report in order to process it somewhere else (e.g. a CI pipeline).
You can export a single report of multiple validation processes.
@@ -172,9 +146,6 @@ For jUnit XML: Each validation process result will be transformed to a single te
!!! note
Validation processes IDs can be listed with the `graph validation list` command, or by utilizing the tab completion of this command.
-
-
-
??? info "Options"
```text
@@ -188,4 +159,3 @@ For jUnit XML: Each validation process result will be transformed to a single te
--format [JSON|XML] Export either the plain JSON report or a distilled
jUnit XML report. [default: XML]
```
-
diff --git a/docs/automate/cmemc-command-line-interface/command-reference/index.md b/docs/automate/cmemc-command-line-interface/command-reference/index.md
index bcdc57568..f524dd368 100644
--- a/docs/automate/cmemc-command-line-interface/command-reference/index.md
+++ b/docs/automate/cmemc-command-line-interface/command-reference/index.md
@@ -6,7 +6,9 @@ tags:
- Reference
- cmemc
---
+
# Command Reference
+
!!! info
@@ -40,7 +42,6 @@ tags:
| [admin store](admin/store/index.md) | [bootstrap](admin/store/index.md#admin-store-bootstrap) | Update/Import or remove bootstrap data. |
| [admin store](admin/store/index.md) | [export](admin/store/index.md#admin-store-export) | Backup all knowledge graphs to a ZIP archive. |
| [admin store](admin/store/index.md) | [import](admin/store/index.md#admin-store-import) | Restore graphs from a ZIP archive. |
-| [admin store](admin/store/index.md) | [migrate](admin/store/index.md#admin-store-migrate) | Migrate configuration resources to the current version. |
| [admin user](admin/user/index.md) | [list](admin/user/index.md#admin-user-list) | List user accounts. |
| [admin user](admin/user/index.md) | [create](admin/user/index.md#admin-user-create) | Create a user account. |
| [admin user](admin/user/index.md) | [update](admin/user/index.md#admin-user-update) | Update a user account. |
@@ -68,12 +69,7 @@ tags:
| [dataset](dataset/index.md) | [create](dataset/index.md#dataset-create) | Create a dataset. |
| [dataset](dataset/index.md) | [open](dataset/index.md#dataset-open) | Open datasets in the browser. |
| [dataset](dataset/index.md) | [update](dataset/index.md#dataset-update) | Update a dataset. |
-| [dataset resource](dataset/resource/index.md) | [list](dataset/resource/index.md#dataset-resource-list) | List available file resources. |
-| [dataset resource](dataset/resource/index.md) | [delete](dataset/resource/index.md#dataset-resource-delete) | Delete file resources. |
-| [dataset resource](dataset/resource/index.md) | [inspect](dataset/resource/index.md#dataset-resource-inspect) | Display all metadata of a file resource. |
-| [dataset resource](dataset/resource/index.md) | [usage](dataset/resource/index.md#dataset-resource-usage) | Display all usage data of a file resource. |
| [graph](graph/index.md) | [count](graph/index.md#graph-count) | Count triples in graph(s). |
-| [graph](graph/index.md) | [tree](graph/index.md#graph-tree) | (Hidden) Deprecated: use 'graph imports tree' instead. |
| [graph](graph/index.md) | [list](graph/index.md#graph-list) | List accessible graphs. |
| [graph](graph/index.md) | [export](graph/index.md#graph-export) | Export graph(s) as NTriples to stdout (-), file or directory. |
| [graph](graph/index.md) | [delete](graph/index.md#graph-delete) | Delete graph(s) from the store. |
@@ -142,4 +138,3 @@ tags:
| [workflow scheduler](workflow/scheduler/index.md) | [inspect](workflow/scheduler/index.md#workflow-scheduler-inspect) | Display all metadata of a scheduler. |
| [workflow scheduler](workflow/scheduler/index.md) | [disable](workflow/scheduler/index.md#workflow-scheduler-disable) | Disable scheduler(s). |
| [workflow scheduler](workflow/scheduler/index.md) | [enable](workflow/scheduler/index.md#workflow-scheduler-enable) | Enable scheduler(s). |
-
diff --git a/docs/automate/cmemc-command-line-interface/command-reference/package/index.md b/docs/automate/cmemc-command-line-interface/command-reference/package/index.md
index 90c8a2d3f..552ebd613 100644
--- a/docs/automate/cmemc-command-line-interface/command-reference/package/index.md
+++ b/docs/automate/cmemc-command-line-interface/command-reference/package/index.md
@@ -6,24 +6,21 @@ tags:
- cmemc
- Package
---
+
# package Command Group
+
List, (un)install, export, create, or inspect packages.
-
## package inspect
Inspect the manifest of a package.
```shell-session title="Usage"
-$ cmemc package inspect [OPTIONS] PACKAGE_PATH
+cmemc package inspect [OPTIONS] PACKAGE_PATH
```
-
-
-
-
??? info "Options"
```text
@@ -36,13 +33,9 @@ $ cmemc package inspect [OPTIONS] PACKAGE_PATH
List installed packages.
```shell-session title="Usage"
-$ cmemc package list [OPTIONS]
+cmemc package list [OPTIONS]
```
-
-
-
-
??? info "Options"
```text
@@ -59,16 +52,11 @@ $ cmemc package list [OPTIONS]
Install packages.
```shell-session title="Usage"
-$ cmemc package install [OPTIONS] [PACKAGE_ID]
+cmemc package install [OPTIONS] [PACKAGE_ID]
```
-
-
-
This command installs a package either from the marketplace or from local package archives (.cpa) or directories.
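Both sources use the same command form; the package ID and the archive file name here are placeholders:

```shell-session title="Example"
cmemc package install my-package
```

```shell-session title="Example"
cmemc package install my-package-v1.0.0.cpa
```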
-
-
??? info "Options"
```text
@@ -82,13 +70,9 @@ This command installs a package either from the marketplace or from local packag
Uninstall installed packages.
```shell-session title="Usage"
-$ cmemc package uninstall [OPTIONS] [PACKAGE_ID]
+cmemc package uninstall [OPTIONS] [PACKAGE_ID]
```
-
-
-
-
??? info "Options"
```text
@@ -107,10 +91,6 @@ Export installed packages to package directories.
$ cmemc package export [OPTIONS] [PACKAGE_ID]
```
-
-
-
-
??? info "Options"
```text
@@ -126,18 +106,13 @@ $ cmemc package export [OPTIONS] [PACKAGE_ID]
Build a package archive from a package directory.
```shell-session title="Usage"
-$ cmemc package build [OPTIONS] PACKAGE_DIRECTORY
+cmemc package build [OPTIONS] PACKAGE_DIRECTORY
```
-
-
-
This command processes a package directory, validates its content including the manifest, and creates a versioned Corporate Memory package archive (.cpa) with the following naming convention: {package_id}-v{version}.cpa
Package archives can be published to the marketplace using the `package publish` command.
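A sketch with a placeholder directory name; per the naming convention above, building a directory whose manifest declares package `my-package` at version `1.0.0` yields `my-package-v1.0.0.cpa`:

```shell-session title="Example"
cmemc package build my-package-directory
```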
-
-
??? info "Options"
```text
@@ -152,13 +127,9 @@ Package archives can be published to the marketplace using the `package publish`
Publish a package archive to the marketplace server.
```shell-session title="Usage"
-$ cmemc package publish [OPTIONS] PACKAGE_ARCHIVE
+cmemc package publish [OPTIONS] PACKAGE_ARCHIVE
```
-
-
-
-
??? info "Options"
```text
@@ -170,16 +141,11 @@ $ cmemc package publish [OPTIONS] PACKAGE_ARCHIVE
Search for available packages with a given search text.
```shell-session title="Usage"
-$ cmemc package search [OPTIONS] [SEARCH_TERMS]...
+cmemc package search [OPTIONS] [SEARCH_TERMS]...
```
-
-
-
-
??? info "Options"
```text
--raw Outputs raw JSON.
```
-
diff --git a/docs/automate/cmemc-command-line-interface/command-reference/project/file/index.md b/docs/automate/cmemc-command-line-interface/command-reference/project/file/index.md
index efc4dd301..5b4c9d724 100644
--- a/docs/automate/cmemc-command-line-interface/command-reference/project/file/index.md
+++ b/docs/automate/cmemc-command-line-interface/command-reference/project/file/index.md
@@ -6,7 +6,9 @@ tags:
- Files
- cmemc
---
+
# project file Command Group
+
List, inspect, up-/download or delete project file resources.
@@ -16,23 +18,16 @@ File resources are identified with a `RESOURCE_ID` which is a concatenation of i
!!! note
To get a list of existing file resources, execute the `project file list` command or use tab-completion.
-
-
## project file list
List available file resources.
```shell-session title="Usage"
-$ cmemc project file list [OPTIONS]
+cmemc project file list [OPTIONS]
```
-
-
-
Outputs a table or a list of file resources.
-
-
??? info "Options"
```text
@@ -49,16 +44,11 @@ Outputs a table or a list of file resources.
Delete file resources.
```shell-session title="Usage"
-$ cmemc project file delete [OPTIONS] [RESOURCE_IDS]...
+cmemc project file delete [OPTIONS] [RESOURCE_IDS]...
```
-
-
-
There are three selection mechanisms: with specific IDs - only those specified resources will be deleted; by using `--filter` - resources based on the filter type and value will be deleted; by using `--all`, which will delete all resources.
-
-
??? info "Options"
```text
@@ -75,28 +65,21 @@ There are three selection mechanisms: with specific IDs - only those specified r
Download file resources to the local file system.
```shell-session title="Usage"
-$ cmemc project file download [OPTIONS] [RESOURCE_IDS]...
+cmemc project file download [OPTIONS] [RESOURCE_IDS]...
```
-
-
-
This command downloads one or more file resources from projects to your local file system. Files are saved with their resource names in the output directory.
Resources are identified by their IDs in the format `PROJECT_ID`:`RESOURCE_NAME`.
```shell-session title="Example"
-$ cmemc project file download my-proj:my-file.csv
+cmemc project file download my-proj:my-file.csv
```
-
```shell-session title="Example"
-$ cmemc project file download my-proj:file1.csv my-proj:file2.csv --output-dir /tmp
+cmemc project file download my-proj:file1.csv my-proj:file2.csv --output-dir /tmp
```
-
-
-
??? info "Options"
```text
@@ -112,25 +95,18 @@ $ cmemc project file download my-proj:file1.csv my-proj:file2.csv --output-dir /
Upload a file to a project.
```shell-session title="Usage"
-$ cmemc project file upload [OPTIONS] INPUT_PATH
+cmemc project file upload [OPTIONS] INPUT_PATH
```
-
-
-
This command uploads a file to a project as a file resource.
!!! note
    If you want to create a dataset from your file, the `dataset create` command may be the better option.
-
```shell-session title="Example"
-$ cmemc project file upload my-file.csv --project my-project
+cmemc project file upload my-file.csv --project my-project
```
-
-
-
??? info "Options"
```text
@@ -149,13 +125,9 @@ $ cmemc project file upload my-file.csv --project my-project
Display all metadata of a file resource.
```shell-session title="Usage"
-$ cmemc project file inspect [OPTIONS] RESOURCE_ID
+cmemc project file inspect [OPTIONS] RESOURCE_ID
```
-
-
-
-
??? info "Options"
```text
@@ -167,16 +139,11 @@ $ cmemc project file inspect [OPTIONS] RESOURCE_ID
Display all usage data of a file resource.
```shell-session title="Usage"
-$ cmemc project file usage [OPTIONS] RESOURCE_ID
+cmemc project file usage [OPTIONS] RESOURCE_ID
```
-
-
-
-
??? info "Options"
```text
--raw Outputs raw JSON.
```
-
diff --git a/docs/automate/cmemc-command-line-interface/command-reference/project/index.md b/docs/automate/cmemc-command-line-interface/command-reference/project/index.md
index e13611088..fdf4532b2 100644
--- a/docs/automate/cmemc-command-line-interface/command-reference/project/index.md
+++ b/docs/automate/cmemc-command-line-interface/command-reference/project/index.md
@@ -6,7 +6,9 @@ tags:
- Project
- cmemc
---
+
# project Command Group
+
List, import, export, create, delete or open projects.
@@ -16,40 +18,28 @@ Projects are identified by a `PROJECT_ID`.
!!! note
To get a list of existing projects, execute the `project list` command or use tab-completion.
-
-
## project open
Open projects in the browser.
```shell-session title="Usage"
-$ cmemc project open PROJECT_IDS...
+cmemc project open PROJECT_IDS...
```
-
-
-
With this command, you can open projects in the workspace in your browser to change them.
The command accepts multiple project IDs which results in opening multiple browser tabs.
-
-
## project list
List available projects.
```shell-session title="Usage"
-$ cmemc project list [OPTIONS]
+cmemc project list [OPTIONS]
```
-
-
-
Outputs a list of project IDs which can be used as reference for the project create, delete, export and import commands.
-
-
??? info "Options"
```text
@@ -69,24 +59,17 @@ Export projects to files.
$ cmemc project export [OPTIONS] [PROJECT_IDS]...
```
-
-
-
Projects can be exported with different export formats. The default type is a zip archive which includes metadata as well as dataset resources. If more than one project is exported, a file is created for each project. By default, these files are created in the current directory with a descriptive name (see `--template` option default).
!!! note
Projects can be listed by using the `project list` command.
-
You can use the template string to create subdirectories.
```shell-session title="Example"
$ cmemc config list | parallel -I% cmemc -c % project export --all -t "dump/{{connection}}/{{date}}-{{id}}.project"
```
-
-
-
??? info "Options"
```text
@@ -122,19 +105,13 @@ $ cmemc config list | parallel -I% cmemc -c % project export --all -t "dump/{{co
Import a project from a file or directory.
```shell-session title="Usage"
-$ cmemc project import [OPTIONS] PATH [PROJECT_ID]
+cmemc project import [OPTIONS] PATH [PROJECT_ID]
```
-
-
-
```shell-session title="Example"
-$ cmemc project import my_project.zip my_project
+cmemc project import my_project.zip my_project
```
-
-
-
??? info "Options"
```text
@@ -147,24 +124,17 @@ $ cmemc project import my_project.zip my_project
Delete projects.
```shell-session title="Usage"
-$ cmemc project delete [OPTIONS] [PROJECT_IDS]...
+cmemc project delete [OPTIONS] [PROJECT_IDS]...
```
-
-
-
This command deletes existing data integration projects from Corporate Memory.
!!! warning
Projects will be deleted without prompting!
-
!!! note
Projects can be listed with the `project list` command.
-
-
-
??? info "Options"
```text
@@ -179,20 +149,14 @@ This command deletes existing data integration projects from Corporate Memory.
Create projects.
```shell-session title="Usage"
-$ cmemc project create [OPTIONS] PROJECT_IDS...
+cmemc project create [OPTIONS] PROJECT_IDS...
```
-
-
-
This command creates one or more new projects. Existing projects will not be overwritten.
!!! note
Projects can be listed by using the `project list` command.
-
-
-
??? info "Options"
```text
@@ -214,27 +178,19 @@ This command creates one or more new projects. Existing projects will not be ove
Reload projects from the workspace provider.
```shell-session title="Usage"
-$ cmemc project reload [OPTIONS] [PROJECT_IDS]...
+cmemc project reload [OPTIONS] [PROJECT_IDS]...
```
-
-
-
This command reloads all tasks of a project from the workspace provider. This is similar to the `workspace reload` command, but for a single project only.
!!! note
    You need this in case you changed project data externally or loaded a project that uses plugins that are not installed yet. In this case, install the plugin(s) and reload the project afterward.
-
!!! warning
    Depending on the size of your datasets, especially your Knowledge Graphs, reloading a project can take a long time to re-create the path caches.
-
-
-
??? info "Options"
```text
-a, --all Reload all projects
```
-
diff --git a/docs/automate/cmemc-command-line-interface/command-reference/project/variable/index.md b/docs/automate/cmemc-command-line-interface/command-reference/project/variable/index.md
index 3c8700251..669ce58cb 100644
--- a/docs/automate/cmemc-command-line-interface/command-reference/project/variable/index.md
+++ b/docs/automate/cmemc-command-line-interface/command-reference/project/variable/index.md
@@ -6,7 +6,9 @@ tags:
- Variables
- cmemc
---
+
# project variable Command Group
+
List, create, delete or get data from project variables.
@@ -15,22 +17,16 @@ Project variables can be used in dataset and task parameters, and in the templat
Variables are identified by a `VARIABLE_ID`. To get a list of existing variables, execute the list command or use tab-completion. The `VARIABLE_ID` is a concatenation of a `PROJECT_ID` and a `VARIABLE_NAME`, such as `my-project:my-variable`.
-
## project variable list
List available project variables.
```shell-session title="Usage"
-$ cmemc project variable list [OPTIONS]
+cmemc project variable list [OPTIONS]
```
-
-
-
Outputs a table or a list of project variables.
-
-
??? info "Options"
```text
@@ -48,20 +44,14 @@ Outputs a table or a list of project variables.
Get the value or other data of a project variable.
```shell-session title="Usage"
-$ cmemc project variable get [OPTIONS] VARIABLE_ID
+cmemc project variable get [OPTIONS] VARIABLE_ID
```
-
-
-
Use the ``--key`` option to specify which information you want to get.
!!! note
Only the `value` key is always available on a project variable. Static value variables have no `template` key, and the `description` key is optional for both types of variables.
-
-
-
??? info "Options"
```text
@@ -76,18 +66,13 @@ Use the ``--key`` option to specify which information you want to get.
Delete project variables.
```shell-session title="Usage"
-$ cmemc project variable delete [OPTIONS] [VARIABLE_IDS]...
+cmemc project variable delete [OPTIONS] [VARIABLE_IDS]...
```
-
-
-
There are three selection mechanisms: with specific IDs - only those specified variables will be deleted; by using `--filter` - variables based on the filter type and value will be deleted; by using `--all`, which will delete all variables.
Variables are automatically sorted by their dependencies and deleted in the correct order (template-based variables that depend on others are deleted first, then their dependencies).
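Because the dependencies are resolved automatically, the order of the IDs on the command line does not matter; the variable IDs here are placeholders:

```shell-session title="Example"
cmemc project variable delete my-project:my-value-var my-project:my-template-var
```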
-
-
??? info "Options"
```text
@@ -103,25 +88,18 @@ Variables are automatically sorted by their dependencies and deleted in the corr
Create a new project variable.
```shell-session title="Usage"
-$ cmemc project variable create [OPTIONS] VARIABLE_NAME
+cmemc project variable create [OPTIONS] VARIABLE_NAME
```
-
-
-
Variables need to be created with a value or a template (not both). In addition to that, a project ID and a name are mandatory.
```shell-session title="Example"
-$ cmemc project variable create my_var --project my_project --value abc
+cmemc project variable create my_var --project my_project --value abc
```
-
!!! note
    cmemc is currently not able to manage the order of the variables in a project. This means you have to create plain value variables first, before you can create template-based variables that access these values.
-
-
-
??? info "Options"
```text
@@ -141,20 +119,14 @@ $ cmemc project variable create my_var --project my_project --value abc
Update data of an existing project variable.
```shell-session title="Usage"
-$ cmemc project variable update [OPTIONS] VARIABLE_ID
+cmemc project variable update [OPTIONS] VARIABLE_ID
```
-
-
-
With this command you can update the value or the template, as well as the description of a project variable.
!!! note
    If you update the template of a static variable, it will be transformed to a template-based variable. If you want to change the value of a template-based variable, an error will be shown.
-
-
-
??? info "Options"
```text
@@ -165,4 +137,3 @@ With this command you can update the value or the template, as well as the descr
accessing variables from the same project.
--description TEXT The new description of the project variable.
```
-
diff --git a/docs/automate/cmemc-command-line-interface/command-reference/query/index.md b/docs/automate/cmemc-command-line-interface/command-reference/query/index.md
index c7c43ac8b..8d2a24a2a 100644
--- a/docs/automate/cmemc-command-line-interface/command-reference/query/index.md
+++ b/docs/automate/cmemc-command-line-interface/command-reference/query/index.md
@@ -6,7 +6,9 @@ tags:
- SPARQL
- cmemc
---
+
# query Command Group
+
List, execute, get status or open SPARQL queries.
@@ -20,27 +22,20 @@ Queries can use a mustache like syntax to specify placeholder for parameter valu
!!! note
In order to get a list of queries from the query catalog, execute the `query list` command or use tab-completion.
-
-
## query execute
Execute queries which are loaded from files or the query catalog.
```shell-session title="Usage"
-$ cmemc query execute [OPTIONS] QUERIES...
+cmemc query execute [OPTIONS] QUERIES...
```
-
-
-
Queries are identified either by a file path, a URI from the query catalog, or a shortened URI (qname, using a default namespace).
If multiple queries are executed one after the other, the first failing query stops the whole execution chain.
Limitations: All optional parameters (e.g. accept, base64, ...) are provided for ALL queries in an execution chain. If you need different parameters for each query in a chain, run cmemc multiple times and use the logical operators && and || of your shell instead.
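If each query in a chain needs different options, the shell operators mentioned above can rebuild the chain from separate cmemc calls. A minimal sketch, using a stand-in `run_query` function in place of the real `cmemc query execute` invocation:

```shell
# Stand-in for `cmemc query execute <query>`; replace the body with the
# real cmemc call (each call may then use its own options).
run_query() { echo "executing: $1"; }

# && stops the chain on the first failing query; || reports the failure.
run_query report-1.sparql \
  && run_query report-2.sparql \
  || echo "a query in the chain failed"
```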
-
-
??? info "Options"
```text
@@ -84,18 +79,13 @@ Limitations: All optional parameters (e.g. accept, base64, ...) are provided for
List available queries from the catalog.
```shell-session title="Usage"
-$ cmemc query list [OPTIONS]
+cmemc query list [OPTIONS]
```
-
-
-
Outputs a list of query URIs which can be used as reference for the query execute command.
You can filter queries based on ID, type, placeholder, or regex pattern.
-
-
??? info "Options"
```text
@@ -114,18 +104,13 @@ You can filter queries based on ID, type, placeholder, or regex pattern.
Open queries in the editor of the query catalog in your browser.
```shell-session title="Usage"
-$ cmemc query open [OPTIONS] QUERIES...
+cmemc query open [OPTIONS] QUERIES...
```
-
-
-
With this command, you can open (remote) queries from the query catalog in the query editor in your browser (e.g. in order to change them). You can also load local query files into the query editor, in order to import them into the query catalog.
The command accepts multiple query URIs or files which results in opening multiple browser tabs.
-
-
??? info "Options"
```text
@@ -138,18 +123,13 @@ The command accepts multiple query URIs or files which results in opening multip
Get status information of executed and running queries.
```shell-session title="Usage"
-$ cmemc query status [OPTIONS] [QUERY_ID]
+cmemc query status [OPTIONS] [QUERY_ID]
```
-
-
-
With this command, you can access the latest executed SPARQL queries on the Explore backend (DataPlatform). These queries are identified by UUIDs and listed in order of their start timestamp.
You can filter queries based on status and runtime in order to investigate slow queries. In addition to that, you can get the details of a specific query by using the ID as a parameter.
-
-
??? info "Options"
```text
@@ -167,27 +147,21 @@ You can filter queries based on status and runtime in order to investigate slow
Re-execute queries from a replay file.
```shell-session title="Usage"
-$ cmemc query replay [OPTIONS] REPLAY_FILE
+cmemc query replay [OPTIONS] REPLAY_FILE
```
-
-
-
This command reads a `REPLAY_FILE` and re-executes the logged queries. A `REPLAY_FILE` is a JSON document which is an array of JSON objects with at least a key `queryString` holding the query text OR a key `iri` holding the IRI of the query in the query catalog. It can be created with the `query status` command.
```shell-session title="Example"
-$ query status --raw > replay.json
+query status --raw > replay.json
```
-
The output of this command shows basic query execution statistics.
The queries are executed one after another in the order given in the input `REPLAY_FILE`. Query placeholders / parameters are ignored. If a query results in an error, the duration is not counted.
The optional output file is the same JSON document which is used as input, but each query object is annotated with an additional `replays` object, which is an array of JSON objects which hold values for the replay|loop|run IDs, start and end time as well as duration and other data.
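A minimal hand-written `REPLAY_FILE` matching the structure described above might look like this (the query text and the catalog IRI are illustrative):

```shell
# Build a minimal replay file: a JSON array of objects, each with either
# a "queryString" or an "iri" key (values here are illustrative).
cat > replay.json <<'EOF'
[
  {"queryString": "SELECT ?s WHERE { ?s ?p ?o } LIMIT 10"},
  {"iri": "https://example.org/queries/graph-count"}
]
EOF
# Re-execute the logged queries with: cmemc query replay replay.json
```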
-
-
??? info "Options"
```text
@@ -208,13 +182,8 @@ The optional output file is the same JSON document which is used as input, but e
Cancel a running query.
```shell-session title="Usage"
-$ cmemc query cancel QUERY_ID
+cmemc query cancel QUERY_ID
```
-
-
-
With this command, you can cancel a running query. Depending on the backend triple store, this will result in a broken result stream (Stardog, Neptune and Virtuoso) or a valid result stream with incomplete results (GraphDB).
-
-
diff --git a/docs/automate/cmemc-command-line-interface/command-reference/vocabulary/cache/index.md b/docs/automate/cmemc-command-line-interface/command-reference/vocabulary/cache/index.md
index fa503b08d..8a2269062 100644
--- a/docs/automate/cmemc-command-line-interface/command-reference/vocabulary/cache/index.md
+++ b/docs/automate/cmemc-command-line-interface/command-reference/vocabulary/cache/index.md
@@ -6,24 +6,21 @@ tags:
- Vocabulary
- cmemc
---
+
# vocabulary cache Command Group
+
List and update the vocabulary cache.
-
## vocabulary cache update
Reload / update the data integration cache for a vocabulary.
```shell-session title="Usage"
-$ cmemc vocabulary cache update [OPTIONS] [IRIS]...
+cmemc vocabulary cache update [OPTIONS] [IRIS]...
```
-
-
-
-
??? info "Options"
```text
@@ -35,13 +32,9 @@ $ cmemc vocabulary cache update [OPTIONS] [IRIS]...
Output the content of the global vocabulary cache.
```shell-session title="Usage"
-$ cmemc vocabulary cache list [OPTIONS]
+cmemc vocabulary cache list [OPTIONS]
```
-
-
-
-
??? info "Options"
```text
@@ -50,4 +43,3 @@ $ cmemc vocabulary cache list [OPTIONS]
cmemc commands.
--raw Outputs raw JSON.
```
-
diff --git a/docs/automate/cmemc-command-line-interface/command-reference/vocabulary/index.md b/docs/automate/cmemc-command-line-interface/command-reference/vocabulary/index.md
index e09e522e5..4591d16c4 100644
--- a/docs/automate/cmemc-command-line-interface/command-reference/vocabulary/index.md
+++ b/docs/automate/cmemc-command-line-interface/command-reference/vocabulary/index.md
@@ -6,42 +6,33 @@ tags:
- Vocabulary
- cmemc
---
+
# vocabulary Command Group
+
List, (un-)install, import or open vocabs / manage cache.
-
## vocabulary open
Open / explore a vocabulary graph in the browser.
```shell-session title="Usage"
-$ cmemc vocabulary open IRI
+cmemc vocabulary open IRI
```
-
-
-
Vocabularies are identified by their graph IRI. Installed vocabularies can be listed with the `vocabulary list` command.
-
-
## vocabulary list
Output a list of vocabularies.
```shell-session title="Usage"
-$ cmemc vocabulary list [OPTIONS]
+cmemc vocabulary list [OPTIONS]
```
-
-
-
Vocabularies are graphs (see `graph` command group) which consist of class and property descriptions.
-
-
??? info "Options"
```text
@@ -60,16 +51,11 @@ Vocabularies are graphs (see `graph` command group) which consists of class and
Install one or more vocabularies from the catalog.
```shell-session title="Usage"
-$ cmemc vocabulary install [OPTIONS] [IRIS]...
+cmemc vocabulary install [OPTIONS] [IRIS]...
```
-
-
-
Vocabularies are identified by their graph IRI. Installable vocabularies can be listed with the `vocabulary list` command.
-
-
??? info "Options"
```text
@@ -82,16 +68,11 @@ Vocabularies are identified by their graph IRI. Installable vocabularies can be
Uninstall one or more vocabularies.
```shell-session title="Usage"
-$ cmemc vocabulary uninstall [OPTIONS] [IRIS]...
+cmemc vocabulary uninstall [OPTIONS] [IRIS]...
```
-
-
-
Vocabularies are identified by their graph IRI. Already installed vocabularies can be listed with the `vocabulary list` command.
-
-
??? info "Options"
```text
@@ -103,18 +84,13 @@ Vocabularies are identified by their graph IRI. Already installed vocabularies c
Import a turtle file as a vocabulary.
```shell-session title="Usage"
-$ cmemc vocabulary import [OPTIONS] FILE
+cmemc vocabulary import [OPTIONS] FILE
```
-
-
-
With this command, you can import a local ontology file as a named graph and create a corresponding vocabulary catalog entry.
The uploaded ontology file is analysed locally in order to discover the named graph and the prefix declaration. This requires an OWL ontology description which correctly uses the `vann:preferredNamespacePrefix` and `vann:preferredNamespaceUri` properties.
-
-
??? info "Options"
```text
@@ -126,4 +102,3 @@ The uploaded ontology file is analysed locally in order to discover the named gr
--replace Replace (overwrite) existing vocabulary, if
present.
```
-
diff --git a/docs/automate/cmemc-command-line-interface/command-reference/workflow/index.md b/docs/automate/cmemc-command-line-interface/command-reference/workflow/index.md
index 139d7d6ab..0839d35e1 100644
--- a/docs/automate/cmemc-command-line-interface/command-reference/workflow/index.md
+++ b/docs/automate/cmemc-command-line-interface/command-reference/workflow/index.md
@@ -6,33 +6,29 @@ tags:
- Workflow
- cmemc
---
+
# workflow Command Group
+
List, execute, status or open (io) workflows.
Workflows are identified by a `WORKFLOW_ID`. To get a list of existing workflows, execute the list command or use tab-completion. The `WORKFLOW_ID` is a concatenation of a `PROJECT_ID` and a `TASK_ID`, such as `my-project:my-workflow`.
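For scripting, the `WORKFLOW_ID` can be assembled from its two parts (the IDs shown are illustrative):

```shell
# Concatenate project and task ID with a colon to form the workflow ID.
PROJECT_ID="my-project"
TASK_ID="my-workflow"
WORKFLOW_ID="${PROJECT_ID}:${TASK_ID}"
echo "$WORKFLOW_ID"   # -> my-project:my-workflow
```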
-
## workflow execute
Execute workflow(s).
```shell-session title="Usage"
-$ cmemc workflow execute [OPTIONS] [WORKFLOW_IDS]...
+cmemc workflow execute [OPTIONS] [WORKFLOW_IDS]...
```
-
-
-
With this command, you can start one or more workflows at the same time or in a sequence, depending on the result of the predecessor.
Executing a workflow can be done in two ways: without `--wait`, cmemc just sends the start signal and does not track the workflow or its result (fire and forget). Starting workflows in this way starts all given workflows at the same time.
The optional `--wait` option starts the workflows in the same way, but also polls the status of a workflow until it is finished. In case of an error of a workflow, the next workflow is not started.
-
-
??? info "Options"
```text
@@ -52,20 +48,14 @@ The optional `--wait` option starts the workflows in the same way, but also poll
Execute a workflow with file input/output.
```shell-session title="Usage"
-$ cmemc workflow io [OPTIONS] WORKFLOW_ID
+cmemc workflow io [OPTIONS] WORKFLOW_ID
```
-
-
-
With this command, you can execute a workflow that uses replaceable datasets as input, output or for configuration. Use the input parameter to feed data into the workflow. Likewise, use output for retrieval of the workflow result. Workflows without a replaceable dataset will throw an error.
!!! note
Regarding the input dataset configuration - the following rules apply: If autoconfig is enabled (`--autoconfig`, the default), the dataset configuration is guessed. If autoconfig is disabled (`--no-autoconfig`) and the type of the dataset file is the same as the replaceable dataset in the workflow, the configuration from this dataset is copied. If autoconfig is disabled and the type of the dataset file is different from the replaceable dataset in the workflow, the default config is used.
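The three rules from the note can be restated as a small decision sketch (the `choose_config` function and its argument names are illustrative, not part of cmemc):

```shell
# Decision sketch for the input dataset configuration of `workflow io`.
choose_config() {
  autoconfig="$1"   # "autoconfig" or "no-autoconfig"
  file_type="$2"    # "same-type" if the file type matches the replaceable dataset
  if [ "$autoconfig" = "autoconfig" ]; then
    echo "guessed"
  elif [ "$file_type" = "same-type" ]; then
    echo "copied from replaceable dataset"
  else
    echo "default config"
  fi
}

choose_config autoconfig same-type      # -> guessed
choose_config no-autoconfig same-type   # -> copied from replaceable dataset
choose_config no-autoconfig other      # -> default config
```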
-
-
-
??? info "Options"
```text
@@ -101,13 +91,9 @@ With this command, you can execute a workflow that uses replaceable datasets as
List available workflows.
```shell-session title="Usage"
-$ cmemc workflow list [OPTIONS]
+cmemc workflow list [OPTIONS]
```
-
-
-
-
??? info "Options"
```text
@@ -127,13 +113,9 @@ $ cmemc workflow list [OPTIONS]
Get status information of workflow(s).
```shell-session title="Usage"
-$ cmemc workflow status [OPTIONS] [WORKFLOW_IDS]...
+cmemc workflow status [OPTIONS] [WORKFLOW_IDS]...
```
-
-
-
-
??? info "Options"
```text
@@ -150,10 +132,6 @@ $ cmemc workflow status [OPTIONS] [WORKFLOW_IDS]...
Open a workflow in your browser.
```shell-session title="Usage"
-$ cmemc workflow open WORKFLOW_ID
+cmemc workflow open WORKFLOW_ID
```
-
-
-
-
diff --git a/docs/automate/cmemc-command-line-interface/command-reference/workflow/scheduler/index.md b/docs/automate/cmemc-command-line-interface/command-reference/workflow/scheduler/index.md
index f868a14fe..be55a806d 100644
--- a/docs/automate/cmemc-command-line-interface/command-reference/workflow/scheduler/index.md
+++ b/docs/automate/cmemc-command-line-interface/command-reference/workflow/scheduler/index.md
@@ -6,31 +6,27 @@ tags:
- Automate
- cmemc
---
+
# workflow scheduler Command Group
+
List, inspect, enable/disable or open scheduler.
Schedulers execute workflows at specified intervals. They are identified by a `SCHEDULER_ID`. To get a list of existing schedulers, execute the list command or use tab-completion.
-
## workflow scheduler open
Open scheduler(s) in the browser.
```shell-session title="Usage"
-$ cmemc workflow scheduler open [OPTIONS] SCHEDULER_IDS...
+cmemc workflow scheduler open [OPTIONS] SCHEDULER_IDS...
```
-
-
-
With this command, you can open a scheduler in the workspace in your browser to change it.
The command accepts multiple scheduler IDs which results in opening multiple browser tabs.
-
-
??? info "Options"
```text
@@ -43,16 +39,11 @@ The command accepts multiple scheduler IDs which results in opening multiple bro
List available schedulers.
```shell-session title="Usage"
-$ cmemc workflow scheduler list [OPTIONS]
+cmemc workflow scheduler list [OPTIONS]
```
-
-
-
Outputs a table or a list of scheduler IDs which can be used as reference for the scheduler commands.
-
-
??? info "Options"
```text
@@ -66,13 +57,9 @@ Outputs a table or a list of scheduler IDs which can be used as reference for th
Display all metadata of a scheduler.
```shell-session title="Usage"
-$ cmemc workflow scheduler inspect [OPTIONS] SCHEDULER_ID
+cmemc workflow scheduler inspect [OPTIONS] SCHEDULER_ID
```
-
-
-
-
??? info "Options"
```text
@@ -84,16 +71,11 @@ $ cmemc workflow scheduler inspect [OPTIONS] SCHEDULER_ID
Disable scheduler(s).
```shell-session title="Usage"
-$ cmemc workflow scheduler disable [OPTIONS] [SCHEDULER_IDS]...
+cmemc workflow scheduler disable [OPTIONS] [SCHEDULER_IDS]...
```
-
-
-
The command accepts multiple scheduler IDs which results in disabling them one after the other.
-
-
??? info "Options"
```text
@@ -105,19 +87,13 @@ The command accepts multiple scheduler IDs which results in disabling them one a
Enable scheduler(s).
```shell-session title="Usage"
-$ cmemc workflow scheduler enable [OPTIONS] [SCHEDULER_IDS]...
+cmemc workflow scheduler enable [OPTIONS] [SCHEDULER_IDS]...
```
-
-
-
The command accepts multiple scheduler IDs which results in enabling them one after the other.
-
-
??? info "Options"
```text
-a, --all Enable all schedulers.
```
-
diff --git a/docs/automate/cmemc-command-line-interface/configuration/certificate-handling-and-ssl-verification/index.md b/docs/automate/cmemc-command-line-interface/configuration/certificate-handling-and-ssl-verification/index.md
index cc2a3becf..ae361974b 100644
--- a/docs/automate/cmemc-command-line-interface/configuration/certificate-handling-and-ssl-verification/index.md
+++ b/docs/automate/cmemc-command-line-interface/configuration/certificate-handling-and-ssl-verification/index.md
@@ -64,9 +64,9 @@ miGId7jMXd24bpfYZSiniC0+SHiCwEmzN818Ss9aIMChymAnV3RRB/UqKLlOMnA=
You can also disable SSL Verification completely by setting the `SSL_VERIFY` key in the config or environment to `false`.
However, this will lead to warnings:
+
``` shell-session
$ cmemc -c ssltest.eccenca.com graph list
SSL verification is disabled (SSL_VERIFY=False).
...
```
-
diff --git a/docs/automate/cmemc-command-line-interface/configuration/completion-setup/index.md b/docs/automate/cmemc-command-line-interface/configuration/completion-setup/index.md
index 155655373..127f566f5 100644
--- a/docs/automate/cmemc-command-line-interface/configuration/completion-setup/index.md
+++ b/docs/automate/cmemc-command-line-interface/configuration/completion-setup/index.md
@@ -26,25 +26,22 @@ We suggest using [zsh](https://en.wikipedia.org/wiki/Z_shell) so you can take ad
Use the following lines for the completion setup of cmemc >= 23.3.
If using an older version, look at the [old documentation](https://documentation.eccenca.com/23.1/automate/cmemc-command-line-interface/configuration/completion-setup/).
-
In order to enable tab completion with **zsh** run the following command:
``` shell-session title="completion setup for zsh"
-$ eval "$(_CMEMC_COMPLETE=zsh_source cmemc)"
+eval "$(_CMEMC_COMPLETE=zsh_source cmemc)"
```
To enable the interactive menu as seen above in **zsh** run the following command:
``` shell-session title="interactive menu for zsh"
-$ zstyle ':completion:*' menu select
+zstyle ':completion:*' menu select
```
In order to enable tab completion with **bash** run the following command:
``` shell-session title="completion setup for bash"
-$ eval "$(_CMEMC_COMPLETE=bash_source cmemc)"
+eval "$(_CMEMC_COMPLETE=bash_source cmemc)"
```
You may want to add this line to your `.bashrc` or `.zshrc`.
-
-
diff --git a/docs/automate/cmemc-command-line-interface/configuration/environment-based-configuration/index.md b/docs/automate/cmemc-command-line-interface/configuration/environment-based-configuration/index.md
index 4d1fa12d5..b70ee965c 100644
--- a/docs/automate/cmemc-command-line-interface/configuration/environment-based-configuration/index.md
+++ b/docs/automate/cmemc-command-line-interface/configuration/environment-based-configuration/index.md
@@ -27,10 +27,10 @@ For these variables the rules are simple: You can use any variable from the [con
The following commands provide the same result as given in the [basic example for a config file](../file-based-configuration/index.md):
``` shell-session
-$ export CMEM_BASE_URI=http://localhost/
-$ export OAUTH_GRANT_TYPE=client_credentials
-$ export OAUTH_CLIENT_ID=cmem-service-account
-$ export OAUTH_CLIENT_SECRET=...
+export CMEM_BASE_URI=http://localhost/
+export OAUTH_GRANT_TYPE=client_credentials
+export OAUTH_CLIENT_ID=cmem-service-account
+export OAUTH_CLIENT_SECRET=...
```
!!! info
@@ -63,8 +63,8 @@ $ cmemc --config-file cmemc.ini --connection mycmem graph list --raw
As a next step, we replace all connection parameters with environment variables:
``` shell-session
-$ export CMEMC_CONFIG_FILE=cmemc.ini
-$ export CMEMC_CONNECTION=mycmem
+export CMEMC_CONFIG_FILE=cmemc.ini
+export CMEMC_CONNECTION=mycmem
```
This alone allows us to save a lot of typing for a series of commands on the same Corporate Memory instance.
@@ -77,7 +77,7 @@ $ cmemc graph list --raw
However, you can also pre-define command options in the same way:
``` shell-session
-$ export CMEMC_GRAPH_LIST_RAW=true
+export CMEMC_GRAPH_LIST_RAW=true
```
Again, the same command but `--raw` is set per default.
@@ -92,7 +92,7 @@ $ cmemc graph list
Since there is a top level `--debug` option, the corresponding variable name is `CMEMC_DEBUG`:
``` shell-session
-$ export CMEMC_DEBUG=true
+export CMEMC_DEBUG=true
```
## Configuration environment export from the config file
@@ -118,12 +118,12 @@ export SSL_VERIFY="True"
This can be used to export a full `config.env` or to `eval` it in an environment for other processes:
``` shell-session
-$ cmemc -c my-cmem.example.org config eval > config.env
-$ eval $(cmemc -c my-cmem.example.org config eval)
+cmemc -c my-cmem.example.org config eval > config.env
+eval $(cmemc -c my-cmem.example.org config eval)
```
Please note that the following command has the same effect but needs the `cmemc.ini` for evaluating the `config` values for the config section `my-cmem.example.org`:
``` shell-session
-$ export CMEMC_CONNECTION="my-cmem.example.org"
+export CMEMC_CONNECTION="my-cmem.example.org"
```
diff --git a/docs/automate/cmemc-command-line-interface/configuration/file-based-configuration/index.md b/docs/automate/cmemc-command-line-interface/configuration/file-based-configuration/index.md
index a152b4575..6f4eb5339 100644
--- a/docs/automate/cmemc-command-line-interface/configuration/file-based-configuration/index.md
+++ b/docs/automate/cmemc-command-line-interface/configuration/file-based-configuration/index.md
@@ -26,7 +26,6 @@ If you need to change this location and want to use another config file, you hav
However, once you start cmemc the first time without any command or option, it will create an empty configuration file at this location and will output a general introduction.
-
??? example "First cmemc run ..."
``` shell-session
$ cmemc
@@ -270,4 +269,3 @@ Setting this to a PEM file allows for using private Certificate Authorities for
Please refer to [Certificate handling and SSL verification](../certificate-handling-and-ssl-verification/index.md) for more information.
This variable defaults to `$PYTHON_HOME/site-packages/certifi/cacert.pem`.
-
diff --git a/docs/automate/cmemc-command-line-interface/configuration/getting-credentials-from-external-processes/index.md b/docs/automate/cmemc-command-line-interface/configuration/getting-credentials-from-external-processes/index.md
index 6ae2a4ccf..1995b8b91 100644
--- a/docs/automate/cmemc-command-line-interface/configuration/getting-credentials-from-external-processes/index.md
+++ b/docs/automate/cmemc-command-line-interface/configuration/getting-credentials-from-external-processes/index.md
@@ -20,11 +20,11 @@ As described in the [Configuration with Environment Variables](../environment-ba
The following code snippet demonstrates the behaviour:
``` shell-session
-$ export CMEM_BASE_URI="https://your-cmem.eccenca.dev/"
-$ export OAUTH_GRANT_TYPE="client_credentials"
-$ export OAUTH_CLIENT_ID="cmem-service-account"
-$ export OAUTH_CLIENT_SECRET="...secret..."
-$ cmemc graph list
+export CMEM_BASE_URI="https://your-cmem.eccenca.dev/"
+export OAUTH_GRANT_TYPE="client_credentials"
+export OAUTH_CLIENT_ID="cmem-service-account"
+export OAUTH_CLIENT_SECRET="...secret..."
+cmemc graph list
```
In the context of a CI/CD pipeline, e.g., on github, these credentials can be taken from the repository secrets:
@@ -48,7 +48,7 @@ jobs:
In shell context, you can fetch the secret from an external process to the variable:
``` shell-session
-$ export OAUTH_CLIENT_SECRET=$(get-my-secret.sh)
+export OAUTH_CLIENT_SECRET=$(get-my-secret.sh)
```
## External Processes
@@ -116,4 +116,3 @@ if [ "${OAUTH_GRANT_TYPE}" = "password" ]; then
fi
exit 1
```
-
diff --git a/docs/automate/cmemc-command-line-interface/configuration/index.md b/docs/automate/cmemc-command-line-interface/configuration/index.md
index b6b25c395..8dce0e911 100644
--- a/docs/automate/cmemc-command-line-interface/configuration/index.md
+++ b/docs/automate/cmemc-command-line-interface/configuration/index.md
@@ -10,28 +10,27 @@ hide:
In order to work with cmemc, you have to configure it according to your needs.
-
-- :material-file-cog-outline: File-based Configuration
+- :material-file-cog-outline: File-based Configuration
---
The most common way to configure cmemc is with a central [configuration file](file-based-configuration/index.md).
-- :material-cog-outline: Environment-based Configuration
+- :material-cog-outline: Environment-based Configuration
---
In addition to configuration files, cmemc can be widely configured and parameterized with [environment variables](environment-based-configuration/index.md).
-- :material-rocket-launch: Completion Setup
+- :material-rocket-launch: Completion Setup
---
Setting up [command completion](completion-setup/index.md) is optional but highly recommended and will greatly speed up your cmemc terminal sessions.
-- :material-key-link: Security Considerations
+- :material-key-link: Security Considerations
---
diff --git a/docs/automate/cmemc-command-line-interface/index.md b/docs/automate/cmemc-command-line-interface/index.md
index f54bb65f1..b39468a9a 100644
--- a/docs/automate/cmemc-command-line-interface/index.md
+++ b/docs/automate/cmemc-command-line-interface/index.md
@@ -13,7 +13,7 @@ tags:
-- :octicons-terminal-16: **Command Line** interface for **eccenca Corporate Memory**
+- :octicons-terminal-16: **Command Line** interface for **eccenca Corporate Memory**
---
@@ -33,7 +33,7 @@ tags:
[{ .off-glb }](https://pypi.python.org/pypi/cmem-cmemc/)
[{ .off-glb }](./invocation/docker-image/index.md)
-- :octicons-people-24: Intended for **Administrators** and **Linked Data Expert**
+- :octicons-people-24: Intended for **Administrators** and **Linked Data Expert**
---
@@ -46,13 +46,12 @@ tags:
--filter tag velocity-daily
```
- 1. :person_raising_hand:
+ 1. :person_raising_hand:
- The option `-c` is short for `--connection` and references to a remote Corporate Memory instance.
- The `list` command in the `dataset` command group shows all datasets of an instance.
- In order to manipulate output dataset list, the `--filter` option takes two parameter, a filter type (`tag`, `project`, ...) and a value.
-
-- :octicons-rocket-16: Fast ad-hoc Execution with **Command Completion**
+- :octicons-rocket-16: Fast ad-hoc Execution with **Command Completion**
---
@@ -61,8 +60,7 @@ tags:
Create Build Project and Dataset
-
-- :material-feature-search-outline: **Main Features**:
+- :material-feature-search-outline: **Main Features**:
---
@@ -76,4 +74,3 @@ tags:
```
-
diff --git a/docs/automate/cmemc-command-line-interface/installation/index.md b/docs/automate/cmemc-command-line-interface/installation/index.md
index d36233d25..51cc27a0a 100644
--- a/docs/automate/cmemc-command-line-interface/installation/index.md
+++ b/docs/automate/cmemc-command-line-interface/installation/index.md
@@ -13,16 +13,13 @@ cmemc can be installed using the python package from pypi.org, the release packa
cmemc is available as an [official pypi package](https://pypi.org/project/cmem-cmemc/) so installation can be done with pip or pipx (preferred):
``` shell-session
-$ pipx install cmem-cmemc
+pipx install cmem-cmemc
```
-
## ... via docker image
This topic is described on a [stand-alone page](../invocation/docker-image/index.md).
-
!!! Note
Once you have installed cmemc, you need to configure a connection with a [config file](../configuration/file-based-configuration/index.md) or learn how to [use environment variables](../configuration/environment-based-configuration/index.md) to control cmemc.
-
diff --git a/docs/automate/cmemc-command-line-interface/invocation/docker-image/index.md b/docs/automate/cmemc-command-line-interface/invocation/docker-image/index.md
index d309bafa7..8a6ae7625 100644
--- a/docs/automate/cmemc-command-line-interface/invocation/docker-image/index.md
+++ b/docs/automate/cmemc-command-line-interface/invocation/docker-image/index.md
@@ -67,4 +67,3 @@ http://schema.org/,8809
https://vocab.eccenca.com/shacl/,1752
[...]
```
-
diff --git a/docs/automate/cmemc-command-line-interface/invocation/github-action/index.md b/docs/automate/cmemc-command-line-interface/invocation/github-action/index.md
index 54c3264f0..0982159a4 100644
--- a/docs/automate/cmemc-command-line-interface/invocation/github-action/index.md
+++ b/docs/automate/cmemc-command-line-interface/invocation/github-action/index.md
@@ -60,4 +60,3 @@ The Github project [eccenca/cmemc-workflow](https://github.com/eccenca/cmemc-wor
Here is an example output:

-
diff --git a/docs/automate/cmemc-command-line-interface/invocation/gitlab-pipeline/index.md b/docs/automate/cmemc-command-line-interface/invocation/gitlab-pipeline/index.md
index a40a0199d..cadc5bd90 100644
--- a/docs/automate/cmemc-command-line-interface/invocation/gitlab-pipeline/index.md
+++ b/docs/automate/cmemc-command-line-interface/invocation/gitlab-pipeline/index.md
@@ -61,4 +61,3 @@ The Github project [eccenca/cmemc-workflow](https://github.com/eccenca/cmemc-wor
Here is an example output:

-
diff --git a/docs/automate/cmemc-command-line-interface/invocation/index.md b/docs/automate/cmemc-command-line-interface/invocation/index.md
index 418dd4a90..bdf6f2e96 100644
--- a/docs/automate/cmemc-command-line-interface/invocation/index.md
+++ b/docs/automate/cmemc-command-line-interface/invocation/index.md
@@ -12,12 +12,12 @@ Besides the plain ad-hoc invocation from a users terminal, the following recipes
-- :material-docker: Executing cmemc as a [Docker Container](docker-image/index.md).
+- :material-docker: Executing cmemc as a [Docker Container](docker-image/index.md).
-- :material-github: Running cmemc jobs as part of [Github Actions](github-action/index.md).
+- :material-github: Running cmemc jobs as part of [Github Actions](github-action/index.md).
-- :material-gitlab: Running cmemc jobs as part of [Gitlab Pipelines](gitlab-pipeline/index.md).
+- :material-gitlab: Running cmemc jobs as part of [Gitlab Pipelines](gitlab-pipeline/index.md).
-- :eccenca-application-queries: Preparing [SPARQL Scripts](sparql-scripts/index.md) to fetch data from your Knowledge Graphs.
-
+- :eccenca-application-queries: Preparing [SPARQL Scripts](sparql-scripts/index.md) to fetch data from your Knowledge Graphs.
+
diff --git a/docs/automate/cmemc-command-line-interface/invocation/sparql-scripts/index.md b/docs/automate/cmemc-command-line-interface/invocation/sparql-scripts/index.md
index dc31287cc..f4315b000 100644
--- a/docs/automate/cmemc-command-line-interface/invocation/sparql-scripts/index.md
+++ b/docs/automate/cmemc-command-line-interface/invocation/sparql-scripts/index.md
@@ -34,7 +34,7 @@ This will set cmemc as an interpreter for the rest of the file, and by using the
Now you need to define your SPARQL file as executable and run it:
``` shell-session
-$ chmod a+x ./count-graphs.sh
+chmod a+x ./count-graphs.sh
```
``` shell-session
@@ -48,4 +48,3 @@ https://ns.eccenca.com/data/queries/,39
https://ns.eccenca.com/data/config/,4
https://ns.eccenca.com/data/userinfo/,4
```
-
diff --git a/docs/automate/cmemc-command-line-interface/troubleshooting-and-caveats/index.md b/docs/automate/cmemc-command-line-interface/troubleshooting-and-caveats/index.md
index 4b1037890..75c3f6aa5 100644
--- a/docs/automate/cmemc-command-line-interface/troubleshooting-and-caveats/index.md
+++ b/docs/automate/cmemc-command-line-interface/troubleshooting-and-caveats/index.md
@@ -36,4 +36,3 @@ This can have multiple reasons - please check in the following order:
- `application.yaml` of DataIntegration
- reverse proxy configuration
-
diff --git a/docs/automate/cmemc-command-line-interface/workflow-execution-and-orchestration/index.md b/docs/automate/cmemc-command-line-interface/workflow-execution-and-orchestration/index.md
index 95c3a61ec..c271f4118 100644
--- a/docs/automate/cmemc-command-line-interface/workflow-execution-and-orchestration/index.md
+++ b/docs/automate/cmemc-command-line-interface/workflow-execution-and-orchestration/index.md
@@ -147,4 +147,3 @@ else
exit 0
fi
```
-
diff --git a/docs/automate/continuous-integration/index.md b/docs/automate/continuous-integration/index.md
index ed967e2ca..c7780dc01 100644
--- a/docs/automate/continuous-integration/index.md
+++ b/docs/automate/continuous-integration/index.md
@@ -33,9 +33,8 @@ The following pages provide recipes for different CI/CD solutions:
-- :material-github: [Github Actions](../cmemc-command-line-interface/invocation/github-action/index.md)
+- :material-github: [Github Actions](../cmemc-command-line-interface/invocation/github-action/index.md)
-- :material-gitlab: [Gitlab Pipelines](../cmemc-command-line-interface/invocation/gitlab-pipeline/index.md)
+- :material-gitlab: [Gitlab Pipelines](../cmemc-command-line-interface/invocation/gitlab-pipeline/index.md)
-
diff --git a/docs/automate/index.md b/docs/automate/index.md
index 48dc8623a..f7ff7c045 100644
--- a/docs/automate/index.md
+++ b/docs/automate/index.md
@@ -11,29 +11,28 @@ Setup processes and automate activities based on and towards your Knowledge Grap
-- :octicons-terminal-16: [cmemc - Command Line Interface](cmemc-command-line-interface/index.md)
+- :octicons-terminal-16: [cmemc - Command Line Interface](cmemc-command-line-interface/index.md)
---
cmemc is intended for system administrators and Linked Data experts, who want to automate and control activities on eccenca Corporate Memory remotely.
-- :eccenca-artefact-workflow: [Processing data with variable input workflows](processing-data-with-variable-input-workflows/index.md)
+- :eccenca-artefact-workflow: [Processing data with variable input workflows](processing-data-with-variable-input-workflows/index.md)
---
This tutorial shows how you can create and use data integration workflows to process data coming from outside Corporate Memory (i.e., without registering datasets).
-- :material-clock-start: [Scheduling Workflows](scheduling-workflows/index.md)
+- :material-clock-start: [Scheduling Workflows](scheduling-workflows/index.md)
---
For a time-based execution of a workflow, Corporate Memory provides the Scheduler operator.
-- :material-github: [Continuous Integration and Delivery](continuous-integration/index.md)
+- :material-github: [Continuous Integration and Delivery](continuous-integration/index.md)
---
Setup processes which continuously integrate data artifacts such as vocabularies and shapes with your Corporate Memory instances.
-
diff --git a/docs/automate/processing-data-with-variable-input-workflows/index.md b/docs/automate/processing-data-with-variable-input-workflows/index.md
index 26e952a1e..c3721a523 100644
--- a/docs/automate/processing-data-with-variable-input-workflows/index.md
+++ b/docs/automate/processing-data-with-variable-input-workflows/index.md
@@ -26,7 +26,7 @@ This allows for solving all kinds of [☆ Automation](../index.md) tasks when yo
- by using the [command line interface](../cmemc-command-line-interface/index.md)
``` shell-session
- $ cmemc -c my-cmem project import tutorial-varinput.project.zip varinput
+ cmemc -c my-cmem project import tutorial-varinput.project.zip varinput
```
## 1 Install the required vocabularies
@@ -81,7 +81,7 @@ For this, you need to use the `workflow io` command:
``` shell-session
# process one specific feed xml document
-$ cmemc workflow io varinput:process-feed -i feed.xml
+cmemc workflow io varinput:process-feed -i feed.xml
```
You can easily automate this for a [list of feeds](feeds.txt) like this:
diff --git a/docs/automate/scheduling-workflows/index.md b/docs/automate/scheduling-workflows/index.md
index 52444713d..7c35df02d 100644
--- a/docs/automate/scheduling-workflows/index.md
+++ b/docs/automate/scheduling-workflows/index.md
@@ -30,7 +30,6 @@ Once you are ready with the configurations, click **Create** button. Now, the sc

-
## Modify, enable or disable a scheduler
1. Navigate to **Build → Projects** section in the workspace.
@@ -62,4 +61,3 @@ More common examples:
- `PT30M` - every half hour
- `PT1H` - every hour
- `P1D` - every day
-
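The interval values above use ISO 8601 duration syntax. As a rough sanity check, the simple one-unit forms shown here can be converted to seconds with a small helper (a sketch that covers only these forms, not full ISO 8601; the function name is made up for illustration):

```shell
# iso_to_seconds: convert simple one-unit ISO 8601 durations to seconds.
# Handles only the PT<n>S / PT<n>M / PT<n>H / P<n>D forms listed above.
iso_to_seconds() {
    d="$1"
    case "$d" in
        PT*S) n="${d#PT}"; echo "${n%S}" ;;
        PT*M) n="${d#PT}"; echo $(( ${n%M} * 60 )) ;;
        PT*H) n="${d#PT}"; echo $(( ${n%H} * 3600 )) ;;
        P*D)  n="${d#P}";  echo $(( ${n%D} * 86400 )) ;;
        *)    echo "unsupported duration: $d" >&2; return 1 ;;
    esac
}

iso_to_seconds PT30M   # prints 1800
```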
diff --git a/docs/build/active-learning/index.md b/docs/build/active-learning/index.md
index 8a501a24c..a08d62f9e 100644
--- a/docs/build/active-learning/index.md
+++ b/docs/build/active-learning/index.md
@@ -37,7 +37,7 @@ The examples process below uses the **movies** example project which can be adde
## Creating an automatic link rule
-- Choose properties to compare.
+- Choose properties to compare.
Select from the suggestions or search them by specifying property paths for both entities.
{ class="bordered" }
@@ -48,15 +48,15 @@ The examples process below uses the **movies** example project which can be adde
## Add property paths for both entities
-- Click on the Source path and select a path.
+- Click on the Source path and select a path.
{ class="bordered" }
-- Click on the Target path and select a corresponding path.
+- Click on the Target path and select a corresponding path.
{ class="bordered" }
-- Click on the :eccenca-item-add-artefact: icon to add the path pair to be examined in the learning algorithm.
+- Click on the :eccenca-item-add-artefact: icon to add the path pair to be examined in the learning algorithm.
{ class="bordered" }
@@ -66,11 +66,11 @@ The examples process below uses the **movies** example project which can be adde
{ class="bordered" }
-- Click on :eccenca-item-remove: icon to remove the paths.
+- Click on :eccenca-item-remove: icon to remove the paths.
{ class="bordered" }
-- Click on Start learning.
+- Click on Start learning.
{ class="bordered" }
@@ -105,7 +105,7 @@ The examples process below uses the **movies** example project which can be adde
{ class="bordered" }
-- On the right side of the page click on the 3 dots, then click on show entity’s URI.
+- On the right side of the page, click on the three dots, then click on show entity’s URI.
{ class="bordered" }
@@ -115,11 +115,11 @@ The examples process below uses the **movies** example project which can be adde
{ class="bordered" }
-- Click on Save based on our input confirm, uncertain and decline the link rule will get generated automatically and the score changes for these entities in the score bar.
+- Click on Save. Based on our input (confirm, uncertain, and decline), the link rule will be generated automatically and the score changes for these entities in the score bar.
{ class="bordered" }
-- Switch on the save best learned rule, then click on save.
+- Switch on the save best learned rule option, then click on Save.
{ class="bordered" }
diff --git a/docs/build/define-prefixes-namespaces/index.md b/docs/build/define-prefixes-namespaces/index.md
index aa20eca17..1ccfdcf2b 100644
--- a/docs/build/define-prefixes-namespaces/index.md
+++ b/docs/build/define-prefixes-namespaces/index.md
@@ -13,8 +13,8 @@ Namespace declarations allow for the abbreviation of IRIs by using a prefixed re
For example, after defining a namespace with the values
-- **prefix name** = `cohw`, and the
-- **namespace IRI** = `https://data.company.org/hardware/`
+- **prefix name** = `cohw`, and the
+- **namespace IRI** = `https://data.company.org/hardware/`
you can use the term `cohw:test` as an abbreviation for the full IRI `https://data.company.org/hardware/test`.
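The expansion is purely textual: the prefix label and colon are replaced by the namespace IRI. A minimal illustration of the mechanics (a sketch using bash parameter substitution; the variable names are made up):

```shell
# Expand a prefixed name into a full IRI by textual substitution
# (uses bash's ${var/pattern/replacement} substitution).
prefix="cohw"
namespace="https://data.company.org/hardware/"
term="cohw:test"
full_iri="${term/${prefix}:/${namespace}}"
echo "$full_iri"   # prints https://data.company.org/hardware/test
```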
@@ -28,8 +28,8 @@ After installing a vocabulary from the [Vocabulary Catalog](../../explore-and-a
In order to get the **prefix name** and the **namespace IRI** from the vocabulary graph, the following terms from the [VANN vocabulary](https://vocab.org/vann/) need to be used on the Ontology resource.
-- [vann:preferredNamespacePrefix](https://vocab.org/vann/#preferredNamespacePrefix) - to specify the **prefix name**
-- [vann:preferredNamespaceUri](https://vocab.org/vann/#preferredNamespaceUri) - to specify the **namespace IRI**
+- [vann:preferredNamespacePrefix](https://vocab.org/vann/#preferredNamespacePrefix) - to specify the **prefix name**
+- [vann:preferredNamespaceUri](https://vocab.org/vann/#preferredNamespaceUri) - to specify the **namespace IRI**
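Applied to the example from above, such a declaration in the vocabulary graph could look like this (a sketch; the literal datatypes may differ from what installed vocabularies actually use):

```Turtle
@prefix vann: <http://purl.org/vocab/vann/> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .

<https://data.company.org/hardware/>
    a owl:Ontology ;
    vann:preferredNamespacePrefix "cohw" ;
    vann:preferredNamespaceUri "https://data.company.org/hardware/" .
```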
In the Explore area, an Ontology with a correct namespace declaration looks like this:
@@ -51,10 +51,10 @@ In addition to the used vocabulary namespace declarations, you may want to add w
Such organization use cases include:
-- Namespaces per class / resource type:
- - **prefix name** = `persons`, **namespace IRI** = `https://example.org/data/persons/`
-- Namespaces per data owner or origin:
- - **prefix name** = `sales`, **namespace IRI** = `https://example.org/data/sales/`
+- Namespaces per class / resource type:
+ - **prefix name** = `persons`, **namespace IRI** = `https://example.org/data/persons/`
+- Namespaces per data owner or origin:
+ - **prefix name** = `sales`, **namespace IRI** = `https://example.org/data/sales/`
Prefixes in Data Integration are defined on a project basis. When creating a new project, a list of well-know prefixes is already declared.
@@ -68,8 +68,8 @@ By using the **Edit Prefix Settings** button in this Configuration area, you wil
In this dialog, you are able to
-- Delete a namespace declaration → **Delete Prefix**
-- Add a new namespace declaration → **Add**
+- Delete a namespace declaration → **Delete Prefix**
+- Add a new namespace declaration → **Add**
## Validating Namespace Declarations
diff --git a/docs/build/evaluate-template/index.md b/docs/build/evaluate-template/index.md
index fdd76502b..c0a22ca92 100644
--- a/docs/build/evaluate-template/index.md
+++ b/docs/build/evaluate-template/index.md
@@ -30,7 +30,7 @@ The graph dataset is attached to the email as an N-triples file.
The following material is used in this tutorial:
-- RDF graph containing company information regarding employees, products and services: [company.ttl](company.ttl)
+- RDF graph containing company information regarding employees, products and services: [company.ttl](company.ttl)
```Turtle
a prod:Hardware ;
@@ -294,11 +294,11 @@ The tutorial consists of the following steps, which are described in detail belo
3. Fill in the required details, such as **Label**, your email credentials for sending, and the recipient email address(es).
When finished, click **Create**.
- - Host: The SMTP host, e.g, mail.myProvider.com
- - Port: The SMTP port
- - User: The username for the email account
- - Password: The password for the email account
- - To: The recipient email address(es)
+ - Host: The SMTP host, e.g., mail.myProvider.com
+ - Port: The SMTP port
+ - User: The username for the email account
+ - Password: The password for the email account
+ - To: The recipient email address(es)
@@ -320,17 +320,17 @@ The tutorial consists of the following steps, which are described in detail belo
Items can be dragged from the list of items on the left side onto the canvas.
To connect the outputs and inputs, click and hold the output on the right side of an item and drag it to the input on the left side of another item.
- - The **Knowledge Graph dataset** connects to the **Request RDF triples task** and the **SPARQL Select query task**.
- - The **Request RDF triples task** connects to the **RDF dataset**.
+ - The **Knowledge Graph dataset** connects to the **Request RDF triples task** and the **SPARQL Select query task**.
+ - The **Request RDF triples task** connects to the **RDF dataset**.
It requests all triples from the products graph and sends them to the dataset.
- - The **RDF dataset** connects to the **Send eMail task**.
+ - The **RDF dataset** connects to the **Send eMail task**.
It holds the NTriples file that will be attached to the email.
- - The **SPARQL Select query task** connects to the **Evaluate template task**.
+ - The **SPARQL Select query task** connects to the **Evaluate template task**.
Note that the graph to be queried is specified in the SPARQL query itself with the FROM clause, while the input only triggers its execution.
The query results are sent to its output.
- - The **Evaluate template task** connects to the **Text dataset**.
+ - The **Evaluate template task** connects to the **Text dataset**.
It receives the SPARQL query results and sends the evaluated Jinja template to its output.
- - The **Text dataset** connects to the **Transform**.
+ - The **Text dataset** connects to the **Transform**.
It holds the text file with the evaluated Jinja template and acts as input for the Transform.
{ class="bordered" }
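The note above about the FROM clause can be illustrated with a minimal query shape (a sketch; the graph IRI and the queried property are placeholders, not the ones used in this project):

```sparql
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?product ?label
FROM <https://example.org/products/>   # hypothetical graph IRI
WHERE {
  ?product rdfs:label ?label .
}
```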
@@ -340,11 +340,11 @@ The tutorial consists of the following steps, which are described in detail belo
The **Evaluate template** operator can also be connected directly to the **Transform**.
In this case, skip [§6](#6-create-a-text-dataset) and enter *output* instead of *text* for the **Value path** of the value mapping in the **Transform** (see [§7.6](#7-create-a-transform)).
-1. Click on three dots of the **Send eMail** task, select **Config** and tick the check box to enable the config port.
+5. Click on the three dots of the **Send eMail** task, select **Config** and tick the check box to enable the config port.
{ class="bordered" width="55%" }
-2. Connect the output of the **Transform** to the config port located on the top of the **Send eMail** task.
+6. Connect the output of the **Transform** to the config port located on the top of the **Send eMail** task.
When finished, click **Save**.
The complete workflow now looks as shown below.
diff --git a/docs/build/extracting-data-from-a-web-api/index.md b/docs/build/extracting-data-from-a-web-api/index.md
index e0b47b4a2..975525b72 100644
--- a/docs/build/extracting-data-from-a-web-api/index.md
+++ b/docs/build/extracting-data-from-a-web-api/index.md
@@ -18,13 +18,13 @@ The tutorial is based on the [GitHub API (v3)](https://developer.github.com/v3/)
- by using the [command line interface](../../automate/cmemc-command-line-interface/index.md)
``` shell-session
- $ cmemc -c my-cmem project import tutorial-webapi.project.zip web-api
+ cmemc -c my-cmem project import tutorial-webapi.project.zip web-api
```
In order to get familiar with the API, simply fetch an example response with this command:
``` shell-session
-$ curl https://api.github.com/orgs/vocol/repos
+curl https://api.github.com/orgs/vocol/repos
```
The HTTP GET request retrieves all repositories of a GitHub organization named vocol.
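To pick just the repository names out of such a response, the JSON can be post-processed on the command line; here is a sketch using a tiny inline sample instead of the live API (python3 is assumed to be available):

```shell
# Extract the "name" field from each repository object in the JSON array.
# A small inline sample is used so the command works without network access.
response='[{"name":"mobivoc"},{"name":"vocol"}]'
names=$(echo "$response" | python3 -c 'import json, sys
for repo in json.load(sys.stdin):
    print(repo["name"])')
echo "$names"
```

Against the live API, the same pipeline would read from `curl -s https://api.github.com/orgs/vocol/repos` instead of the inline sample.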
@@ -65,7 +65,6 @@ The JSON response includes the data for all repositories (**mobivoc**, **vocol**
…
-->
-
## 1 Register a Web API
@@ -159,4 +158,3 @@ To build a workflow that combines all the elements we previously built, we now d
5. Validate the result by clicking on the **Workflow** **Report** tab and see the result of your execution. In this example, 15 repositories were found from the GitHub API request.

-
diff --git a/docs/build/index.md b/docs/build/index.md
index a6e4133f9..a7ab41c40 100644
--- a/docs/build/index.md
+++ b/docs/build/index.md
@@ -15,7 +15,7 @@ The Build stage turns your source data—across files, databases, APIs, and stre
-- :eccenca-application-dataintegration: Foundations: Introduction and Best Practices
+- :eccenca-application-dataintegration: Foundations: Introduction and Best Practices
---
@@ -25,7 +25,7 @@ The Build stage turns your source data—across files, databases, APIs, and stre
- [Define Prefixes / Namespaces](define-prefixes-namespaces/index.md) --- Namespace declarations allow for abbreviation of IRIs by using a prefixed name instead of an IRI, in particular when writing SPARQL queries or Turtle.
- [Spark](spark/index.md) --- Explainer of Apache Spark and its integration within the BUILD platform.
-- :material-list-status: Tutorials
+- :material-list-status: Tutorials
---
@@ -39,14 +39,14 @@ The Build stage turns your source data—across files, databases, APIs, and stre
- [Evaluate Jinja Template and Send an Email Message](evaluate-template/index.md) --- Template and send an email after a workflow execution.
- [Link Intrusion Detection Systems to Open-Source INTelligence](tutorial-how-to-link-ids-to-osint/index.md) --- Link IDS data to OSINT sources.
-- :fontawesome-regular-snowflake: Patterns
+- :fontawesome-regular-snowflake: Patterns
---
- [Reconfigure Workflow Tasks](workflow-reconfiguration/index.md) --- During its execution, new parameters can be loaded from any source, which overwrites originally set parameters.
- [Project and Global Variables](variables/index.md) --- Define and reuse variables across tasks and projects.
-- :material-book-open-variant-outline: Reference
+- :material-book-open-variant-outline: Reference
---
@@ -55,4 +55,3 @@ The Build stage turns your source data—across files, databases, APIs, and stre
- [Task and Operator Reference](reference/index.md) --- Reference documentation for tasks and operators in the Build workspace.
-
diff --git a/docs/build/integrations/index.md b/docs/build/integrations/index.md
index 7ad316132..fa84f7a20 100644
--- a/docs/build/integrations/index.md
+++ b/docs/build/integrations/index.md
@@ -7,386 +7,331 @@ tags:
- Build
- Reference
---
+
# Integrations
+
The following services and applications can be easily integrated in Corporate Memory workflows:
-- :simple-anthropic:{ .lg .middle } Anthropic / Claude
+- :simple-anthropic:{ .lg .middle } Anthropic / Claude
---
- Use the [Execute Instructions](../../build/reference/customtask/cmem_plugin_llm-ExecuteInstructions.md) or [Create Embeddings](../../build/reference/customtask/cmem_plugin_llm-CreateEmbeddings.md) task
-to interact with any
-[Anthropic / Claude provided Large Language Models](https://docs.claude.com/en/docs/about-claude/models/overview)
-(LLMs).
-
+ Use the [Execute Instructions](../../build/reference/customtask/cmem_plugin_llm-ExecuteInstructions.md) or [Create Embeddings](../../build/reference/customtask/cmem_plugin_llm-CreateEmbeddings.md) task to interact with any [Anthropic / Claude provided Large Language Models](https://docs.claude.com/en/docs/about-claude/models/overview) (LLMs).
-- :other-apacheavro:{ .lg .middle } Avro
+- :other-apacheavro:{ .lg .middle } Avro
---
Use the [Avro](../../build/reference/dataset/avro.md) dataset to read and write files in the [Avro format](https://avro.apache.org/).
-
-- :material-microsoft-azure:{ .lg .middle } Azure AI Foundry
+- :material-microsoft-azure:{ .lg .middle } Azure AI Foundry
---
- Use the [Execute Instructions](../../build/reference/customtask/cmem_plugin_llm-ExecuteInstructions.md) or [Create Embeddings](../../build/reference/customtask/cmem_plugin_llm-CreateEmbeddings.md) task
-to interact with any [Azure AI Foundry provided Large Language Models](https://ai.azure.com/catalog) (LLMs).
-
+ Use the [Execute Instructions](../../build/reference/customtask/cmem_plugin_llm-ExecuteInstructions.md) or [Create Embeddings](../../build/reference/customtask/cmem_plugin_llm-CreateEmbeddings.md) task to interact with any [Azure AI Foundry provided Large Language Models](https://ai.azure.com/catalog) (LLMs).
-- :fontawesome-solid-file-csv:{ .lg .middle } CSV
+- :fontawesome-solid-file-csv:{ .lg .middle } CSV
---
- Comma-separated values (CSV) is a text data format which can be processed
-(read and write) with the [CSV Dataset](../../build/reference/dataset/csv.md).
+ Comma-separated values (CSV) is a text data format which can be processed (read and write) with the [CSV Dataset](../../build/reference/dataset/csv.md).
-
-- :material-email-outline:{ .lg .middle } eMail / SMTP
+- :material-email-outline:{ .lg .middle } eMail / SMTP
---
Send plain text or HTML formatted [eMail messages](../../build/reference/customtask/SendEMail.md) using an SMTP server.
-
-- :material-file-excel:{ .lg .middle } Excel
+- :material-file-excel:{ .lg .middle } Excel
---
Use the [Excel](../../build/reference/dataset/excel.md) task to read and write to Excel workbooks in the Open XML format (XLSX).
-
-- :material-google-drive:{ .lg .middle } Google Drive
+- :material-google-drive:{ .lg .middle } Google Drive
---
    Use the [Excel (Google Drive)](../../build/reference/dataset/googlespreadsheet.md) dataset to read and write to Excel workbooks in Google Drive.
-
-- :other-graphdb:{ .lg .middle } GraphDB
+- :other-graphdb:{ .lg .middle } GraphDB
---
Load and write Knowledge Graphs to an external GraphDB store by using the [SPARQL endpoint](../../build/reference/dataset/sparqlEndpoint.md) dataset.
-Query data from GraphDB by using the SPARQL
-[Construct](../../build/reference/customtask/sparqlCopyOperator.md),
-[Select](../../build/reference/customtask/sparqlSelectOperator.md) and
-[Update](../../build/reference/customtask/sparqlUpdateOperator.md) tasks.
-GraphDB can be used as the integrated Quad Store as well.
+ Query data from GraphDB by using the SPARQL
+ [Construct](../../build/reference/customtask/sparqlCopyOperator.md),
+ [Select](../../build/reference/customtask/sparqlSelectOperator.md) and
+ [Update](../../build/reference/customtask/sparqlUpdateOperator.md) tasks.
+ GraphDB can be used as the integrated Quad Store as well.
-- :simple-graphql:{ .lg .middle } GraphQL
+- :simple-graphql:{ .lg .middle } GraphQL
---
You can execute a [GraphQL query](../../build/reference/customtask/cmem_plugin_graphql-workflow-graphql-GraphQLPlugin.md) and process the result in a workflow.
-
-- :simple-apachehive:{ .lg .middle } Hive
+- :simple-apachehive:{ .lg .middle } Hive
---
Read from or write to an embedded Apache [Hive database](../../build/reference/dataset/Hive.md) endpoint.
-
-- :simple-jira:{ .lg .middle } Jira
+- :simple-jira:{ .lg .middle } Jira
---
Execute a [JQL query](../../build/reference/customtask/cmem_plugin_jira-JqlQuery.md) on a Jira instance to fetch and integrate issue data.
-
-- :material-code-json:{ .lg .middle } JSON
+- :material-code-json:{ .lg .middle } JSON
---
Use the [JSON](../../build/reference/dataset/json.md) dataset to read and write JSON files (JavaScript Object Notation).
-
-- :material-code-json:{ .lg .middle } JSON Lines
+- :material-code-json:{ .lg .middle } JSON Lines
---
Use the [JSON](../../build/reference/dataset/json.md) dataset to read and write files in the [JSON Lines](https://jsonlines.org/) text file format.
-
-- :simple-apachekafka:{ .lg .middle } Kafka
+- :simple-apachekafka:{ .lg .middle } Kafka
---
- You can [send](../../build/reference/customtask/cmem_plugin_kafka-SendMessages.md) and
-[receive messages](../../build/reference/customtask/cmem_plugin_kafka-ReceiveMessages.md) to and from a Kafka topic.
+ You can [send](../../build/reference/customtask/cmem_plugin_kafka-SendMessages.md) and [receive messages](../../build/reference/customtask/cmem_plugin_kafka-ReceiveMessages.md) to and from a Kafka topic.
-
-- :simple-kubernetes:{ .lg .middle } Kubernetes
+- :simple-kubernetes:{ .lg .middle } Kubernetes
---
    You can [Execute a command in a kubernetes pod](../../build/reference/customtask/cmem_plugin_kubernetes-Execute.md) and capture its output for processing.
-
-- :simple-mariadb:{ .lg .middle } MariaDB
+- :simple-mariadb:{ .lg .middle } MariaDB
---
- MariaDB can be accessed with the [Remote SQL endpoint](../../build/reference/dataset/Jdbc.md) dataset and a
-[JDBC driver](https://central.sonatype.com/artifact/org.mariadb.jdbc/mariadb-java-client/overview).
-
+ MariaDB can be accessed with the [Remote SQL endpoint](../../build/reference/dataset/Jdbc.md) dataset and a [JDBC driver](https://central.sonatype.com/artifact/org.mariadb.jdbc/mariadb-java-client/overview).
-- :simple-mattermost:{ .lg .middle } Mattermost
+- :simple-mattermost:{ .lg .middle } Mattermost
---
- Send workflow reports or any other message to user and groups in you Mattermost with
-the [Send Mattermost messages](../../build/reference/customtask/cmem_plugin_mattermost.md) task.
+ Send workflow reports or any other message to users and groups in your Mattermost with the [Send Mattermost messages](../../build/reference/customtask/cmem_plugin_mattermost.md) task.
-
-- :material-microsoft:{ .lg .middle } Microsoft SQL
+- :material-microsoft:{ .lg .middle } Microsoft SQL
---
- The Microsoft SQL Server can be accessed with the [Remote SQL endpoint](../../build/reference/dataset/Jdbc.md) dataset and a
-[JDBC driver](https://central.sonatype.com/artifact/com.microsoft.sqlserver/mssql-jdbc).
-
+ The Microsoft SQL Server can be accessed with the [Remote SQL endpoint](../../build/reference/dataset/Jdbc.md) dataset and a [JDBC driver](https://central.sonatype.com/artifact/com.microsoft.sqlserver/mssql-jdbc).
-- :simple-mysql:{ .lg .middle } MySQL
+- :simple-mysql:{ .lg .middle } MySQL
---
- MySQL can be accessed with the [Remote SQL endpoint](../../build/reference/dataset/Jdbc.md) dataset and a
-[JDBC driver](https://central.sonatype.com/artifact/org.mariadb.jdbc/mariadb-java-client/overview).
-
+ MySQL can be accessed with the [Remote SQL endpoint](../../build/reference/dataset/Jdbc.md) dataset and a [JDBC driver](https://central.sonatype.com/artifact/org.mariadb.jdbc/mariadb-java-client/overview).
-- :simple-neo4j:{ .lg .middle } Neo4J
+- :simple-neo4j:{ .lg .middle } Neo4j
---
Use the [Neo4j](../../build/reference/dataset/neo4j.md) dataset for reading and writing [Neo4j graphs](https://neo4j.com/).
-
-- :other-neptune:{ .lg .middle } Neptune
+- :other-neptune:{ .lg .middle } Neptune
---
Load and write Knowledge Graphs to Amazon Neptune by using the [SPARQL endpoint](../../build/reference/dataset/sparqlEndpoint.md) dataset.
-Query data from Amazon Neptune by using the SPARQL
-[Construct](../../build/reference/customtask/sparqlCopyOperator.md),
-[Select](../../build/reference/customtask/sparqlSelectOperator.md) and
-[Update](../../build/reference/customtask/sparqlUpdateOperator.md) tasks.
-Amazon Neptune can be used as the integrated Quad Store as well (beta).
+ Query data from Amazon Neptune by using the SPARQL
+ [Construct](../../build/reference/customtask/sparqlCopyOperator.md),
+ [Select](../../build/reference/customtask/sparqlSelectOperator.md) and
+ [Update](../../build/reference/customtask/sparqlUpdateOperator.md) tasks.
+ Amazon Neptune can be used as the integrated Quad Store as well (beta).
-- :simple-nextcloud:{ .lg .middle } Nextcloud
+- :simple-nextcloud:{ .lg .middle } Nextcloud
---
- Use a Nextcloud instance to [download files](../../build/reference/customtask/cmem_plugin_nextcloud-Download.md) to process
-them or [upload files](../../build/reference/customtask/cmem_plugin_nextcloud-Upload.md) you created with Corporate Memory.
-
+ Use a Nextcloud instance to [download files](../../build/reference/customtask/cmem_plugin_nextcloud-Download.md) to process them or [upload files](../../build/reference/customtask/cmem_plugin_nextcloud-Upload.md) you created with Corporate Memory.
-- :material-microsoft-office:{ .lg .middle } Office 365
+- :material-microsoft-office:{ .lg .middle } Office 365
---
    Use the [Excel (OneDrive, Office365)](../../build/reference/dataset/office365preadsheet.md) dataset to read and write to Excel workbooks in Office 365.
-
-- :simple-ollama:{ .lg .middle } Ollama
+- :simple-ollama:{ .lg .middle } Ollama
---
- Use the [Execute Instructions](../../build/reference/customtask/cmem_plugin_llm-ExecuteInstructions.md) or [Create Embeddings](../../build/reference/customtask/cmem_plugin_llm-CreateEmbeddings.md) task
-to interact with Ollama provided Large Language Models (LLMs).
-
+ Use the [Execute Instructions](../../build/reference/customtask/cmem_plugin_llm-ExecuteInstructions.md) or [Create Embeddings](../../build/reference/customtask/cmem_plugin_llm-CreateEmbeddings.md) task to interact with Ollama provided Large Language Models (LLMs).
-- :simple-openai:{ .lg .middle } OpenAI
+- :simple-openai:{ .lg .middle } OpenAI
---
- Use the [Execute Instructions](../../build/reference/customtask/cmem_plugin_llm-ExecuteInstructions.md) or [Create Embeddings](../../build/reference/customtask/cmem_plugin_llm-CreateEmbeddings.md) task
-to interact with any [OpenAI provided Large Language Models](https://platform.openai.com/docs/models) (LLMs).
-
+ Use the [Execute Instructions](../../build/reference/customtask/cmem_plugin_llm-ExecuteInstructions.md) or [Create Embeddings](../../build/reference/customtask/cmem_plugin_llm-CreateEmbeddings.md) task to interact with any [OpenAI provided Large Language Models](https://platform.openai.com/docs/models) (LLMs).
-- :octicons-ai-model-24:{ .lg .middle } OpenRouter
+- :octicons-ai-model-24:{ .lg .middle } OpenRouter
---
Use the [Execute Instructions](../../build/reference/customtask/cmem_plugin_llm-ExecuteInstructions.md) or [Create Embeddings](../../build/reference/customtask/cmem_plugin_llm-CreateEmbeddings.md) task
to interact with any [OpenRouter provided Large Language Models](https://openrouter.ai/models) (LLMs).
-
-- :other-apacheorc:{ .lg .middle } ORC
+- :other-apacheorc:{ .lg .middle } ORC
---
Use the [ORC](../../build/reference/dataset/orc.md) dataset to read and write files in the [ORC](https://orc.apache.org/) format.
-
-- :simple-apacheparquet:{ .lg .middle } Parquet
+- :simple-apacheparquet:{ .lg .middle } Parquet
---
Use the [Parquet](../../build/reference/dataset/parquet.md) dataset to read and write files in the [Parquet](https://parquet.apache.org/) format.
-
-- :black_large_square:{ .lg .middle } pgvector
+- :black_large_square:{ .lg .middle } pgvector
---
Store vector embeddings into [pgvector](https://github.com/pgvector/pgvector)
-using the [Search Vector Embeddings](../../build/reference/customtask/cmem_plugin_pgvector-Search.md).
-
+ using the [Search Vector Embeddings](../../build/reference/customtask/cmem_plugin_pgvector-Search.md).
-- :simple-postgresql:{ .lg .middle } PostgreSQL
+- :simple-postgresql:{ .lg .middle } PostgreSQL
---
PostgreSQL can be accessed with the [Remote SQL endpoint](../../build/reference/dataset/Jdbc.md) dataset and a
-[JDBC driver](https://central.sonatype.com/artifact/org.postgresql/postgresql/versions).
-
+ [JDBC driver](https://central.sonatype.com/artifact/org.postgresql/postgresql/versions).
-- :other-powerbi:{ .lg .middle } PowerBI
+- :other-powerbi:{ .lg .middle } PowerBI
---
Leverage your Knowledge Graphs in PowerBI by using our
-[Corporate Memory Power-BI-Connector](../../consume/consuming-graphs-in-power-bi/index.md).
+ [Corporate Memory Power-BI-Connector](../../consume/consuming-graphs-in-power-bi/index.md).
-
-- :other-qlever:{ .lg .middle } Qlever
+- :other-qlever:{ .lg .middle } Qlever
---
Load and write Knowledge Graphs to an external Qlever store by using the [SPARQL endpoint](../../build/reference/dataset/sparqlEndpoint.md) dataset.
-Query data from Qlever by using the SPARQL
-[Construct](../../build/reference/customtask/sparqlCopyOperator.md),
-[Select](../../build/reference/customtask/sparqlSelectOperator.md) and
-[Update](../../build/reference/customtask/sparqlUpdateOperator.md) tasks.
-Qlever can be used as the integrated Quad Store as well (beta).
+ Query data from Qlever by using the SPARQL
+ [Construct](../../build/reference/customtask/sparqlCopyOperator.md),
+ [Select](../../build/reference/customtask/sparqlSelectOperator.md) and
+ [Update](../../build/reference/customtask/sparqlUpdateOperator.md) tasks.
+ Qlever can be used as the integrated Quad Store as well (beta).
-- :simple-semanticweb:{ .lg .middle } RDF
+- :simple-semanticweb:{ .lg .middle } RDF
---
Use the [RDF file](../../build/reference/dataset/file.md) dataset to read and write files in the RDF formats
-([N-Quads](https://www.w3.org/TR/n-quads/), [N-Triples](https://www.w3.org/TR/n-triples/),
-[Turtle](https://www.w3.org/TR/turtle/), [RDF/XML](https://www.w3.org/TR/rdf-syntax-grammar/) or
-[RDF/JSON](https://www.w3.org/TR/rdf-json/)).
-
+ ([N-Quads](https://www.w3.org/TR/n-quads/), [N-Triples](https://www.w3.org/TR/n-triples/),
+ [Turtle](https://www.w3.org/TR/turtle/), [RDF/XML](https://www.w3.org/TR/rdf-syntax-grammar/) or
+ [RDF/JSON](https://www.w3.org/TR/rdf-json/)).
-- :other-redash:{ .lg .middle } Redash
+- :other-redash:{ .lg .middle } Redash
---
Leverage your Knowledge Graphs in Redash using the integrated
-[Corporate Memory Redash-Connector](../../consume/consuming-graphs-with-redash/index.md).
-
+ [Corporate Memory Redash-Connector](../../consume/consuming-graphs-with-redash/index.md).
-- :material-application-braces-outline:{ .lg .middle } REST
+- :material-application-braces-outline:{ .lg .middle } REST
---
Execute REST requests using [Execute REST requests](../../build/reference/customtask/eccencaRestOperator.md).
-
-- :fontawesome-brands-salesforce:{ .lg .middle } Salesforce
+- :fontawesome-brands-salesforce:{ .lg .middle } Salesforce
---
- Interact with your Salesforce data, such as [Create/Update Salesforce Objects](../../build/reference/customtask/cmem_plugin_salesforce-workflow-operations-SobjectCreate.md) or
-execute a [SOQL query (Salesforce)](../../build/reference/customtask/cmem_plugin_salesforce-SoqlQuery.md).
-
+ Interact with your Salesforce data, such as [Create/Update Salesforce Objects](../../build/reference/customtask/cmem_plugin_salesforce-workflow-operations-SobjectCreate.md) or execute a [SOQL query (Salesforce)](../../build/reference/customtask/cmem_plugin_salesforce-SoqlQuery.md).
-- :simple-snowflake:{ .lg .middle } Snowflake
+- :simple-snowflake:{ .lg .middle } Snowflake
---
Snowflake can be accessed with the [Snowflake SQL endpoint](../../build/reference/dataset/SnowflakeJdbc.md) dataset and a
-[JDBC driver](https://central.sonatype.com/artifact/net.snowflake/snowflake-jdbc).
-
+ [JDBC driver](https://central.sonatype.com/artifact/net.snowflake/snowflake-jdbc).
-- :simple-apachespark:{ .lg .middle } Spark
+- :simple-apachespark:{ .lg .middle } Spark
---
Apply a [Spark](https://spark.apache.org/) function to a specified field using [Execute Spark function](../../build/reference/customtask/SparkFunction.md).
-
-- :simple-sqlite:{ .lg .middle } SQLite
+- :simple-sqlite:{ .lg .middle } SQLite
---
SQLite can be accessed with the [Remote SQL endpoint](../../build/reference/dataset/Jdbc.md) dataset and a
-[JDBC driver](https://central.sonatype.com/artifact/org.xerial/sqlite-jdbc).
-
+ [JDBC driver](https://central.sonatype.com/artifact/org.xerial/sqlite-jdbc).
-- :material-ssh:{ .lg .middle } SSH
+- :material-ssh:{ .lg .middle } SSH
---
Interact with SSH servers to [Download SSH files](../../build/reference/customtask/cmem_plugin_ssh-Download.md) or [Execute commands via SSH](../../build/reference/customtask/cmem_plugin_ssh-Execute.md).
-
-- :other-tentris:{ .lg .middle } Tentris
+- :other-tentris:{ .lg .middle } Tentris
---
Load and write Knowledge Graphs to an external Tentris store by using the [SPARQL endpoint](../../build/reference/dataset/sparqlEndpoint.md) dataset.
-Query data from Tentris by using the SPARQL
-[Construct](../../build/reference/customtask/sparqlCopyOperator.md),
-[Select](../../build/reference/customtask/sparqlSelectOperator.md) and
-[Update](../../build/reference/customtask/sparqlUpdateOperator.md) tasks.
-Tentris can be used as the integrated Quad Store as well (beta).
+ Query data from Tentris by using the SPARQL
+ [Construct](../../build/reference/customtask/sparqlCopyOperator.md),
+ [Select](../../build/reference/customtask/sparqlSelectOperator.md) and
+ [Update](../../build/reference/customtask/sparqlUpdateOperator.md) tasks.
+ Tentris can be used as the integrated Quad Store as well (beta).
-- :simple-trino:{ .lg .middle } Trino
+- :simple-trino:{ .lg .middle } Trino
---
[Trino](https://github.com/trinodb/trino) can be accessed with the
-[Remote SQL endpoint](../../build/reference/dataset/Jdbc.md) dataset and a [JDBC driver](https://trino.io/docs/current/client/jdbc.html).
+ [Remote SQL endpoint](../../build/reference/dataset/Jdbc.md) dataset and a [JDBC driver](https://trino.io/docs/current/client/jdbc.html).
-
-- :black_large_square:{ .lg .middle } Virtuoso
+- :black_large_square:{ .lg .middle } Virtuoso
---
Load and write Knowledge Graphs to an external Openlink Virtuoso store by using the [SPARQL endpoint](../../build/reference/dataset/sparqlEndpoint.md) dataset.
-Query data from Virtuoso by using the SPARQL
-[Construct](../../build/reference/customtask/sparqlCopyOperator.md),
-[Select](../../build/reference/customtask/sparqlSelectOperator.md) and
-[Update](../../build/reference/customtask/sparqlUpdateOperator.md) tasks.
-Virtuoso can be used as the integrated Quad Store as well (beta).
+ Query data from Virtuoso by using the SPARQL
+ [Construct](../../build/reference/customtask/sparqlCopyOperator.md),
+ [Select](../../build/reference/customtask/sparqlSelectOperator.md) and
+ [Update](../../build/reference/customtask/sparqlUpdateOperator.md) tasks.
+ Virtuoso can be used as the integrated Quad Store as well (beta).
-- :material-xml:{ .lg .middle } XML
+- :material-xml:{ .lg .middle } XML
---
Load and write data to XML files with the [XML](../../build/reference/dataset/xml.md) dataset as well as
-[Parse XML](../../build/reference/customtask/XmlParserOperator.md) from external services.
-
+ [Parse XML](../../build/reference/customtask/XmlParserOperator.md) from external services.
-- :simple-yaml:{ .lg .middle } YAML
+- :simple-yaml:{ .lg .middle } YAML
---
Load and integrate data from YAML files with the [Parse YAML](../../build/reference/customtask/cmem_plugin_yaml-parse.md) task.
-
-- :material-code-json:{ .lg .middle } Zipped JSON
+- :material-code-json:{ .lg .middle } Zipped JSON
---
Use the [JSON](../../build/reference/dataset/json.md) dataset to read and write JSON files in a ZIP Archive.
-
-
-
-
\ No newline at end of file
+
diff --git a/docs/build/kafka-consumer/index.md b/docs/build/kafka-consumer/index.md
index 3d2f7fb98..907a6abe5 100644
--- a/docs/build/kafka-consumer/index.md
+++ b/docs/build/kafka-consumer/index.md
@@ -53,18 +53,18 @@ In Create new item window, select Kafka Consumer (Receive Messages) and click Ad
Configure the Kafka Consumer according to the topic that shall be consumed:
-- **Bootstrap Server** - URL of the Kafka broker including the port number (commonly port ´9092)
-- **Security Protocol** - Security mechanism used for authentication
-- **Topic** - Name / ID of the topic where messages are published
-- **Advanced Section**
- - **Messages Dataset** - A dataset (XML/JSON) where messages can be written to. Leave this field empty to output the messages as entities (see below).
- - **SASL** authentication settings as provided by your Kafka broker
- - **Auto Offset Reset** - Consumption starts either at the earliest offset or the latest offset.
- - **Consumer Group Name** - Consumer groups can be used to distribute the load of messages (partitions) between multiple consumers of the same group (c.f. [Kafka Concepts](https://docs.confluent.io/platform/current/clients/consumer.html#concepts)).
- - **Client Id** - An optional identifier of the client which is communicated to the server. When this field is empty, the plugin defaults to `DNS:PROJECT_ID:TASK_ID`.
- - **Local Consumer Queue Size** - Maximum total message size in kilobytes that the consumer can buffer for a specific partition. The consumer will stop fetching from the partition if it hits this limit. This helps prevent consumers from running out of memory.
- - **Message Limit** - The maximum number of messages to fetch and process in each run. If `0` or less, all messages will be fetched.
- - **Disable Commit** Setting this to `true` will disable committing messages after retrival. This means you will receive the same messages on the next execution (for testing, development, or debugging).
+- **Bootstrap Server** - URL of the Kafka broker including the port number (commonly port `9092`)
+- **Security Protocol** - Security mechanism used for authentication
+- **Topic** - Name / ID of the topic where messages are published
+- **Advanced Section**
+ - **Messages Dataset** - A dataset (XML/JSON) where messages can be written to. Leave this field empty to output the messages as entities (see below).
+ - **SASL** authentication settings as provided by your Kafka broker
+ - **Auto Offset Reset** - Consumption starts either at the earliest offset or the latest offset.
+    - **Consumer Group Name** - Consumer groups can be used to distribute the load of messages (partitions) between multiple consumers of the same group (cf. [Kafka Concepts](https://docs.confluent.io/platform/current/clients/consumer.html#concepts)).
+ - **Client Id** - An optional identifier of the client which is communicated to the server. When this field is empty, the plugin defaults to `DNS:PROJECT_ID:TASK_ID`.
+ - **Local Consumer Queue Size** - Maximum total message size in kilobytes that the consumer can buffer for a specific partition. The consumer will stop fetching from the partition if it hits this limit. This helps prevent consumers from running out of memory.
+ - **Message Limit** - The maximum number of messages to fetch and process in each run. If `0` or less, all messages will be fetched.
+    - **Disable Commit** - Setting this to `true` will disable committing messages after retrieval. This means you will receive the same messages on the next execution (useful for testing, development, or debugging).
{ class="bordered" }
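The parameters above can be summarized as a plain configuration mapping. A minimal sketch (the broker address, topic, and all values are illustrative assumptions, not defaults of the plugin):

```python
# Hypothetical example values mirroring the Kafka Consumer form fields above.
kafka_consumer_config = {
    "bootstrap_server": "kafka.example.org:9092",  # broker URL including the port
    "security_protocol": "SASL_SSL",               # security mechanism for authentication
    "topic": "orders",                             # topic where messages are published
    # Advanced section
    "auto_offset_reset": "earliest",               # or "latest"
    "consumer_group_name": "cmem-consumers",       # distributes partitions among consumers
    "client_id": "",                               # empty -> defaults to DNS:PROJECT_ID:TASK_ID
    "local_consumer_queue_size": 5000,             # per-partition buffer limit in kilobytes
    "message_limit": 0,                            # 0 or less -> fetch all messages
    "disable_commit": False,                       # True -> re-read the same messages next run
}
```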
@@ -86,11 +86,11 @@ To execute the Kafka Consumer it needs to be placed inside a Workflow. The messa
In the "message streaming mode" (**Messages Dataset** is not set) the received messages will be generated as entities and forwarded to the subsequent operator in the workflow. This mode is not limited to any message format. The generated message entities will have the following flat schema:
-- **key** — the optional key of the message,
-- **content** — the message itself as plain text,
-- **offset** — the given offset of the message in the topic,
-- **ts-production** — the timestamp when the message was written to the topic,
-- **ts-consumption** — the timestamp when the message was consumed from the topic.
+- **key** — the optional key of the message,
+- **content** — the message itself as plain text,
+- **offset** — the given offset of the message in the topic,
+- **ts-production** — the timestamp when the message was written to the topic,
+- **ts-consumption** — the timestamp when the message was consumed from the topic.
Connect the output of Kafka Consumer inside a Workflow to a tabular dataset (e.g. a [CSV Dataset](../reference/dataset/csv.md)) or directly to a transformation task.
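A received message entity in this mode might look like the following sketch (all values are made up for illustration):

```python
# Illustrative message entity following the flat schema above (values are invented).
message_entity = {
    "key": "order-4711",                       # optional key of the message
    "content": '{"id": 4711, "total": 42.0}',  # the message itself as plain text
    "offset": 1337,                            # offset of the message in the topic
    "ts-production": "2024-05-01T12:00:00Z",   # when the message was written to the topic
    "ts-consumption": "2024-05-01T12:00:05Z",  # when the message was consumed from the topic
}
```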
diff --git a/docs/build/lift-data-from-json-and-xml-sources/index.md b/docs/build/lift-data-from-json-and-xml-sources/index.md
index eed445373..ea2d30af8 100644
--- a/docs/build/lift-data-from-json-and-xml-sources/index.md
+++ b/docs/build/lift-data-from-json-and-xml-sources/index.md
@@ -30,11 +30,11 @@ The documentation consists of the following steps, which are described in detail
The following material is used in this tutorial:
-- Sample vocabulary describing the data in the JSON and XML files: [products_vocabulary.nt](products_vocabulary.nt)
+- Sample vocabulary describing the data in the JSON and XML files: [products_vocabulary.nt](products_vocabulary.nt)
{ class="bordered" }
-- Sample JSON file: [services.json](services.json)
+- Sample JSON file: [services.json](services.json)
```json
[
@@ -56,7 +56,7 @@ The following material is used in this tutorial:
]
```
-- Sample XML file: [orgmap.xml](orgmap.xml)
+- Sample XML file: [orgmap.xml](orgmap.xml)
```xml
@@ -119,9 +119,9 @@ The vocabulary contains the classes and properties needed to map the source data
3. Define a **Name**, a **Graph URI** and a **Description** of the vocabulary. _In this example we will use:_
- - Name: _**Product Vocabulary**_
- - Graph URI: _****_
- - Description: _**Example vocabulary modeled to describe relations between products and services.**_
+ - Name: _**Product Vocabulary**_
+ - Graph URI: _****_
+ - Description: _**Example vocabulary modeled to describe relations between products and services.**_
{ class="bordered" width="50%" }
@@ -339,8 +339,8 @@ Click **Transform evaluation** to evaluate the transformed entities.
2. Press the  button and validate the results. In this example, 9x Service entities were created in our Knowledge Graph based on the mapping.
3. You can click **Knowledge Graphs** under **EXPLORE** to (re-)view the created Knowledge Graphs.
4. Enter the following URIs into the **Enter search term** field for JSON and XML, respectively.
- - JSON / Service: _****_
- - XML / Department: _****_
+ - JSON / Service: _****_
+ - XML / Department: _****_
=== "JSON"
diff --git a/docs/build/lift-data-from-tabular-data-such-as-csv-xslx-or-database-tables/index.md b/docs/build/lift-data-from-tabular-data-such-as-csv-xslx-or-database-tables/index.md
index e6788d6ba..e7fe2d933 100644
--- a/docs/build/lift-data-from-tabular-data-such-as-csv-xslx-or-database-tables/index.md
+++ b/docs/build/lift-data-from-tabular-data-such-as-csv-xslx-or-database-tables/index.md
@@ -19,7 +19,7 @@ This beginner-level tutorial shows how you can build a Knowledge Graph based on
- by using the [command line interface](../../automate/cmemc-command-line-interface/index.md)
``` shell-session
- $ cmemc -c my-cmem project import tutorial-csv.project.zip tutorial-csv
+ cmemc -c my-cmem project import tutorial-csv.project.zip tutorial-csv
```
This step is optional and makes some of the following steps of the tutorial superfluous.
@@ -33,16 +33,15 @@ The documentation consists of the following steps, which are described in detail
5. Evaluate a Transformation
6. Build the Knowledge Graph
-
## Sample Material
The following material is used in this tutorial; you should download the files and have them at hand throughout the tutorial:
-- Sample vocabulary which describes the data in the CSV files: [products_vocabulary.nt](products_vocabulary.nt)
+- Sample vocabulary which describes the data in the CSV files: [products_vocabulary.nt](products_vocabulary.nt)
{ class="bordered" }
-- Sample CSV file: [services.csv](services.csv)
+- Sample CSV file: [services.csv](services.csv)
!!! info
@@ -52,7 +51,7 @@ The following material is used in this tutorial, you should download the files a
| I241-8776317 | Component Confabulation | Z249-1364492, L557-1467804, C721-7900144, ... | Corinna.Ludwig@company.org | 1082,00 EUR |
| … | … | … | … | … |
-- Sample Excel file: [products.xlsx](products.xlsx)
+- Sample Excel file: [products.xlsx](products.xlsx)
!!! info
@@ -87,11 +86,10 @@ The vocabulary contains the classes and properties needed to map the data into t
{ class="bordered" width="50%" }
-
=== "cmemc"
``` shell-session
- $ cmemc vocabulary import products_vocabulary.nt
+ cmemc vocabulary import products_vocabulary.nt
```
---
@@ -102,7 +100,7 @@ The vocabulary contains the classes and properties needed to map the data into t
{ class="bordered" width="50%" }
-2. Click **Create :octicons-plus-circle-24:** at the top right of the page.
+2. Click **Create :octicons-plus-circle-24:** at the top right of the page.
3. In the **Create new item** window, select **Project** and click **Add**. The Create new item of type Project window appears.
@@ -110,7 +108,6 @@ The vocabulary contains the classes and properties needed to map the data into t
5. Click **Create**. Your project is created.
-
---
=== "Workflow view"
@@ -161,7 +158,7 @@ The vocabulary contains the classes and properties needed to map the data into t
The general form of the JDBC connection string is:
- ```
+ ```text
jdbc:<protocol>://<host>:<port>/<database>
```
@@ -198,7 +195,6 @@ The transformation defines how an input dataset (e.g. CSV) will be transformed i
{ class="bordered" width="50%" }
-
3. Scroll down to **Target vocabularies** and choose **Products vocabulary**.
{ class="bordered" width="50%" }
@@ -219,13 +215,13 @@ The transformation defines how an input dataset (e.g. CSV) will be transformed i
4. Define the **Target entity type** from the vocabulary, the **URI pattern** and a **label** for the mapping. _In this example we will use:_
- - Target entity type: _**Service**_
- - URI pattern:
+ - Target entity type: _**Service**_
+ - URI pattern:
- - Click **Create custom pattern**
- - Insert `http://ld.company.org/prod-inst/{ServiceID}`, where `http://ld.company.org/prod-inst/` is a common prefix for the instances in this use case, and `{ServiceID}` is a placeholder that will resolve to the column of that name.
+ - Click **Create custom pattern**
+ - Insert `http://ld.company.org/prod-inst/{ServiceID}`, where `http://ld.company.org/prod-inst/` is a common prefix for the instances in this use case, and `{ServiceID}` is a placeholder that will resolve to the column of that name.
- - An optional Label: `Service`
+ - An optional Label: `Service`
{ class="bordered" width="50%" }
@@ -237,26 +233,26 @@ _Example RDF triple in our Knowledge Graph based on the mapping definition:_
```
-6. Evaluate your mapping by clicking the Expand :material-greater-than: button in the **Examples of target data** property to see at most three generated base URIs.
+1. Evaluate your mapping by clicking the Expand :material-greater-than: button in the **Examples of target data** property to see at most three generated base URIs.
{ class="bordered" width="50%" }
We have now created the Service entities in the Knowledge Graph. As a next step, we will add the name of the Service entity.
-7. Press the circular **Blue + button** on the lower right and select **Add value mapping**.
+2. Press the circular **Blue + button** on the lower right and select **Add value mapping**.
{ class="bordered" width="50%" }
-8. Define the **Target property**, the **Data type**, the **Value path** (column name) and a **Label** for your value mapping. _In this example we will use:_
+3. Define the **Target property**, the **Data type**, the **Value path** (column name) and a **Label** for your value mapping. _In this example we will use:_
- - Target Property: `name`
- - Data type: _**String**_
- - Value path: `ServiceName` (which corresponds to the column of that name)
- - An optional Label: `service name`
+ - Target Property: `name`
+ - Data type: _**String**_
+ - Value path: `ServiceName` (which corresponds to the column of that name)
+ - An optional Label: `service name`
{ class="bordered" width="50%" }
-9. Click **Save**.
+4. Click **Save**.
---
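The URI pattern and value path mechanics of the steps above can be sketched in a few lines of Python, assuming one source row with the columns used in this tutorial:

```python
# Sketch: how the URI pattern placeholder and the value path resolve for one CSV row.
row = {"ServiceID": "I241-8776317", "ServiceName": "Component Confabulation"}

uri_pattern = "http://ld.company.org/prod-inst/{ServiceID}"
subject_uri = uri_pattern.format(**row)  # {ServiceID} resolves to the column value
name_value = row["ServiceName"]          # value path 'ServiceName' -> target property 'name'

print(subject_uri)  # http://ld.company.org/prod-inst/I241-8776317
```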
@@ -266,7 +262,6 @@ Go the **Transform evaluation** tab of your transformation to view a list of gen
{ class="bordered" width="50%" }
-
---
## 6 Build the Knowledge Graph
@@ -279,8 +274,8 @@ Go the **Transform evaluation** tab of your transformation to view a list of gen
3. Define a **Label** for the Knowledge Graph and provide a **Graph** URI. Leave all the other parameters at their default values. _In this example we will use:_
- - Label: `Service Knowledge Graph`
- - Graph: `http://ld.company.org/prod-instances/`
+ - Label: `Service Knowledge Graph`
+ - Graph: `http://ld.company.org/prod-instances/`
{ class="bordered" width="50%" }
@@ -292,7 +287,6 @@ Go the **Transform evaluation** tab of your transformation to view a list of gen
{ class="bordered" width="50%" }
-
7. Click Knowledge Graph under **Explore** in the navigation on the left side of the page.
{ class="bordered" width="50%" }
@@ -308,4 +302,3 @@ Go the **Transform evaluation** tab of your transformation to view a list of gen
10. Finally, you can use the Explore **Knowledge Graphs** module to (re-)view the created Knowledge Graph: `http://ld.company.org/prod-instances/`
{ class="bordered" width="50%" }
-
diff --git a/docs/build/loading-jdbc-datasets-incrementally/index.md b/docs/build/loading-jdbc-datasets-incrementally/index.md
index fd35cc182..7e6e2c9d1 100644
--- a/docs/build/loading-jdbc-datasets-incrementally/index.md
+++ b/docs/build/loading-jdbc-datasets-incrementally/index.md
@@ -45,15 +45,15 @@ To extract data from a relational database, you need to first register a **JDBC
{ class="bordered" }
5. Provide the required configuration details for the JDBC endpoint:
- - **Label**: Provide a table name.
- - **Description:** Optionally describe your table.
- - **JDBC Driver Connection URL:** Provide the JDBC connection. In this tutorial we use a MySQL database. The database server is named _mysql_ and the database is named _serviceDB_.
- - **Table:** Provide the name of the table in the database.
- - **Source query**: Provide a default source query. In this tutorial, the source query will be modified later as the OFFSET changes.
- - **Limit:** Provide a LIMIT for the SQL query. In this tutorial, we choose 5 for demonstrating the functionality. You may select any value which works for your use case.
- - **Query strategy**: Select: _Execute the given source query. No paging or virtual Query._ In this tutorial, this needs to be changed so that when this JDBC endpoint is being used, Corporate Memory will always check for the _Source Query_ that was provided earlier.
- - **User**: Provide the user name which is allowed to access the database.
- - **Password**: Provide the user password that is allowed to access the database.
+ - **Label**: Provide a table name.
+ - **Description:** Optionally describe your table.
+ - **JDBC Driver Connection URL:** Provide the JDBC connection. In this tutorial we use a MySQL database. The database server is named _mysql_ and the database is named _serviceDB_.
+ - **Table:** Provide the name of the table in the database.
+ - **Source query**: Provide a default source query. In this tutorial, the source query will be modified later as the OFFSET changes.
+ - **Limit:** Provide a LIMIT for the SQL query. In this tutorial, we choose 5 for demonstrating the functionality. You may select any value which works for your use case.
+ - **Query strategy**: Select: _Execute the given source query. No paging or virtual Query._ In this tutorial, this needs to be changed so that when this JDBC endpoint is being used, Corporate Memory will always check for the _Source Query_ that was provided earlier.
+    - **User**: Provide the name of a user who is allowed to access the database.
+    - **Password**: Provide the password of that user.
{ class="bordered" }
@@ -71,7 +71,7 @@ To incrementally extract data in Corporate Memory, we need to store the informat
4. Select the previously created JDBC endpoint (in our example: "Services Table (JDBC)")
5. Press the **Turtle** tab inside your JDBC endpoint view (right)
-In our example, the JDBC Endpoint IRI looks like this: __IncrementalJDBCdatasetload/8d0e4895-1d45-442f-8fd8-b1459ec3dbde_ServicesTableJDBC_
+In our example, the JDBC Endpoint IRI looks like this: ``
See screenshot below for example:
@@ -85,7 +85,14 @@ The following three RDF triples hold the (minimal) necessary information we need
2. The second triple defines a label for the Graph.
3. The third triple defines the <...**lastOffset**> property we need for this tutorial. As a default, we set it to 0 to start with the first row in the table.
-**services_metadata_graph**
+For your project:
+
+1. adjust the CMEM DI Project IRI and
+2. the JDBC endpoint IRI.
+
+**Import the Graph** in the Exploration tab → Graph (menu) → Add new Graph → Provide Graph IRI + Select file.
+
+`services_metadata_graph.nt`:
```nt
@@ -99,14 +106,7 @@ The following three RDF triples hold the (minimal) necessary information we need
"0" . # set the initial offset to zero to start with the first row in the table
```
-For your project, please:
-
-1. adjust the CMEM DI Project IRI and
-2. the JDBC endpoint IRI.
-
-**Import the Graph** in the Exploration tab → Graph (menu) → Add new Graph → Provide Graph IRI + Select file
-
-In our example, we used the following Graph IRI for the Metadata Graph: __
+In our example, we used the following Graph IRI for the Metadata Graph: ``
## 3 Create a Transformation to dynamically compose a SQL Query
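The idea of this step is that each run plugs the persisted `lastOffset` and the configured LIMIT into the source query. A minimal sketch of the composition logic (the table name is the tutorial's; the helper function itself is an assumption, not part of Corporate Memory):

```python
def compose_source_query(last_offset: int, limit: int = 5) -> str:
    """Build the incremental source query from the persisted offset."""
    return f"SELECT * FROM services LIMIT {limit} OFFSET {last_offset}"

compose_source_query(0)  # first run starts with the first row
compose_source_query(5)  # the next run continues where the previous one stopped
```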
diff --git a/docs/build/mapping-creator/index.md b/docs/build/mapping-creator/index.md
index 0f0193246..a15409dcb 100644
--- a/docs/build/mapping-creator/index.md
+++ b/docs/build/mapping-creator/index.md
@@ -34,9 +34,9 @@ Using visual tools, drag-and-drop, and suggestions, you can create mappings betw
The Mapping Creator consists of three parts:
-- Source schema shown on the left side
-- Target Schema shown on the right side
-- Mappings between elements in the source schema and in the target schema
+- Source schema shown on the left side
+- Target Schema shown on the right side
+- Mappings between elements in the source schema and in the target schema
You can move, connect or disconnect, and inspect each element visually.
@@ -65,8 +65,8 @@ To complete a mapping, properties need to be added to complete your desired targ
There are two options to add properties:
-- during class selection
-- from vocabularies
+- during class selection
+- from vocabularies
##### During class selection
@@ -74,9 +74,9 @@ There are two options to add properties:
In the _add target class_ dialog you may select different kinds of properties:
-- class properties - properties defined in the domain of the selected class or its super-classes
-- default properties - typical well-known properties like `rdfs:label` or `rdfs:comment`
-- generic properties - properties defined with no explicit domain (or in domain of `owl:Thing`)
+- class properties - properties defined in the domain of the selected class or its super-classes
+- default properties - typical well-known properties like `rdfs:label` or `rdfs:comment`
+- generic properties - properties defined with no explicit domain (or in domain of `owl:Thing`)
The property preview helps to confirm your choice.
@@ -86,8 +86,8 @@ The property preview helps to confirm your choice.
The _add property from vocabularies_ dialog allows you to search and select a property and to configure it in the desired way:
-- redefine the role of a property, to use a DatatypeProperty in the role of an ObjectProperty, or vice versa
-- define the _direction_ an ObjectProperty should be used in
+- redefine the role of a property, to use a DatatypeProperty in the role of an ObjectProperty, or vice versa
+- define the _direction_ an ObjectProperty should be used in
#### Create direct mappings
diff --git a/docs/build/reference/aggregator/average.md b/docs/build/reference/aggregator/average.md
index 3ca9efb0c..3fe305506 100644
--- a/docs/build/reference/aggregator/average.md
+++ b/docs/build/reference/aggregator/average.md
@@ -2,12 +2,12 @@
title: "Average"
description: "Computes the weighted average."
icon: octicons/cross-reference-24
-tags:
+tags:
---
-# Average
-
+# Average
+
Computes the weighted average.
@@ -21,7 +21,6 @@ Computes the weighted average.
* Input values: `[0.4, 0.5, 0.9]`
* Returns: `0.6`
-
---
**Multiplies individual similarity scores with their weight before averaging:**
@@ -29,20 +28,16 @@ Computes the weighted average.
* Input values: `[0.3, 0.5, 0.6]`
* Returns: `0.5`
-
---
**Missing scores always lead to an output of none:**
* Input values: `[-1.0, null, 1.0]`
* Returns: `null`
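The examples above can be reproduced with a short sketch (the weights of the second example are not stated on this page, so only the unweighted case is shown):

```python
def weighted_average(scores, weights=None):
    """Weighted average of similarity scores; any missing score yields None."""
    if any(s is None for s in scores):
        return None  # missing scores always lead to an output of none
    weights = weights or [1.0] * len(scores)
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

weighted_average([0.4, 0.5, 0.9])    # ≈ 0.6
weighted_average([-1.0, None, 1.0])  # None
```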
-
-
-
## Parameter
`None`
## Advanced Parameter
-`None`
\ No newline at end of file
+`None`
diff --git a/docs/build/reference/aggregator/firstNonEmpty.md b/docs/build/reference/aggregator/firstNonEmpty.md
index 3b6165fcb..a6b11e3ee 100644
--- a/docs/build/reference/aggregator/firstNonEmpty.md
+++ b/docs/build/reference/aggregator/firstNonEmpty.md
@@ -2,12 +2,12 @@
title: "First non-empty score"
description: "Forwards the first input that provides a non-empty similarity score."
icon: octicons/cross-reference-24
-tags:
+tags:
---
-# First non-empty score
-
+# First non-empty score
+
Forwards the first input that provides a non-empty similarity score.
@@ -21,13 +21,10 @@ Forwards the first input that provides a non-empty similarity score.
* Input values: `[null, 0.2, 0.5]`
* Returns: `0.2`
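A one-line sketch of this aggregator's behavior:

```python
def first_non_empty(scores):
    """Forward the first non-empty similarity score, or None if all are empty."""
    return next((s for s in scores if s is not None), None)

first_non_empty([None, 0.2, 0.5])  # 0.2
```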
-
-
-
## Parameter
`None`
## Advanced Parameter
-`None`
\ No newline at end of file
+`None`
diff --git a/docs/build/reference/aggregator/geometricMean.md b/docs/build/reference/aggregator/geometricMean.md
index 11bbed8a3..07d4b5445 100644
--- a/docs/build/reference/aggregator/geometricMean.md
+++ b/docs/build/reference/aggregator/geometricMean.md
@@ -2,12 +2,12 @@
title: "Geometric mean"
description: "Compute the (weighted) geometric mean."
icon: octicons/cross-reference-24
-tags:
+tags:
---
-# Geometric mean
-
+# Geometric mean
+
Compute the (weighted) geometric mean.
@@ -22,7 +22,6 @@ Compute the (weighted) geometric mean.
* Input values: `[0.0, 0.0, 0.0]`
* Returns: `0.0`
-
---
**Example 2:**
@@ -30,7 +29,6 @@ Compute the (weighted) geometric mean.
* Input values: `[1.0, 1.0, 1.0]`
* Returns: `1.0`
-
---
**Example 3:**
@@ -38,7 +36,6 @@ Compute the (weighted) geometric mean.
* Input values: `[0.5, 1.0]`
* Returns: `0.629961`
-
---
**Example 4:**
@@ -46,7 +43,6 @@ Compute the (weighted) geometric mean.
* Input values: `[0.5, 1.0, 0.7]`
* Returns: `0.672866`
-
---
**Example 5:**
@@ -54,20 +50,16 @@ Compute the (weighted) geometric mean.
* Input values: `[0.1, 0.9, 0.2]`
* Returns: `0.153971`
-
---
**Missing scores always lead to an output of none:**
* Input values: `[-1.0, null, 1.0]`
* Returns: `null`
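For the unweighted case, the behavior can be sketched as follows (the weighted examples above use weights that are not stated on this page and are therefore not reproduced):

```python
import math

def geometric_mean(scores):
    """Unweighted geometric mean; any missing score yields None."""
    if any(s is None for s in scores):
        return None  # missing scores always lead to an output of none
    return math.prod(scores) ** (1.0 / len(scores))

geometric_mean([1.0, 1.0, 1.0])  # 1.0
geometric_mean([0.0, 0.0, 0.0])  # 0.0
```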
-
-
-
## Parameter
`None`
## Advanced Parameter
-`None`
\ No newline at end of file
+`None`
diff --git a/docs/build/reference/aggregator/handleMissingValues.md b/docs/build/reference/aggregator/handleMissingValues.md
index 28ecf268c..ad5c35796 100644
--- a/docs/build/reference/aggregator/handleMissingValues.md
+++ b/docs/build/reference/aggregator/handleMissingValues.md
@@ -2,12 +2,12 @@
title: "Handle missing values"
description: "Generates a default similarity score, if no similarity score is provided (e.g., due to missing values). Using this operator can have a performance impact, since it lowers the efficiency of the underlying computation."
icon: octicons/cross-reference-24
-tags:
+tags:
---
-# Handle missing values
-
+# Handle missing values
+
Generates a default similarity score, if no similarity score is provided (e.g., due to missing values). Using this operator can have a performance impact, since it lowers the efficiency of the underlying computation.
@@ -21,7 +21,6 @@ Generates a default similarity score, if no similarity score is provided (e.g.,
* Input values: `[0.1]`
* Returns: `0.1`
-
---
**Outputs the default score, if no input score is provided:**
@@ -31,23 +30,16 @@ Generates a default similarity score, if no similarity score is provided (e.g.,
* Input values: `[null]`
* Returns: `1.0`
-
-
-
## Parameter
### Default value
The default value to be generated, if no similarity score is provided. Must be a value between -1 (inclusive) and 1 (inclusive). '1' represents boolean true and '-1' represents boolean false.
-- ID: `defaultValue`
-- Datatype: `double`
-- Default Value: `-1.0`
-
-
-
-
+* ID: `defaultValue`
+* Datatype: `double`
+* Default Value: `-1.0`
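A minimal sketch of this operator:

```python
def handle_missing_values(score, default_value=-1.0):
    """Return the input score, or the configured default if none is provided."""
    return default_value if score is None else score

handle_missing_values(0.1)        # 0.1 (forwards any existing score)
handle_missing_values(None, 1.0)  # 1.0 (default score if no input score is provided)
```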
## Advanced Parameter
-`None`
\ No newline at end of file
+`None`
diff --git a/docs/build/reference/aggregator/index.md b/docs/build/reference/aggregator/index.md
index f3a309847..8b5789c3a 100644
--- a/docs/build/reference/aggregator/index.md
+++ b/docs/build/reference/aggregator/index.md
@@ -5,7 +5,9 @@ tags:
- Build
- Reference
---
+
# Aggregators
+
This kind of task aggregates multiple similarity scores.
diff --git a/docs/build/reference/aggregator/max.md b/docs/build/reference/aggregator/max.md
index 9d3a55ef5..ad5ba2dbb 100644
--- a/docs/build/reference/aggregator/max.md
+++ b/docs/build/reference/aggregator/max.md
@@ -2,12 +2,12 @@
title: "Or"
description: "At least one input score must be within the threshold. Selects the maximum score."
icon: octicons/cross-reference-24
-tags:
+tags:
---
-# Or
-
+# Or
+
At least one input score must be within the threshold. Selects the maximum score.
@@ -21,21 +21,18 @@ At least one input score must be within the threshold. Selects the maximum score
* Input values: `[0.5, 0.0]`
* Returns: `0.5`
-
---
**Selects the maximum similarity score:**
* Input values: `[-1.0, -0.5, -0.3]`
* Returns: `-0.3`
-
---
**Missing scores default to a similarity score of -1:**
* Input values: `[null]`
* Returns: `-1.0`
-
---
**Weights are ignored:**
@@ -43,13 +40,10 @@ At least one input score must be within the threshold. Selects the maximum score
* Input values: `[1.0, 0.0]`
* Returns: `1.0`
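A sketch of the selection logic:

```python
def or_aggregate(scores):
    """'Or' aggregation: select the maximum score; missing scores default to -1."""
    return max(-1.0 if s is None else s for s in scores)

or_aggregate([0.5, 0.0])  # 0.5
or_aggregate([None])      # -1.0
```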
-
-
-
## Parameter
`None`
## Advanced Parameter
-`None`
\ No newline at end of file
+`None`
diff --git a/docs/build/reference/aggregator/min.md b/docs/build/reference/aggregator/min.md
index e323f785b..7b0b5ba2d 100644
--- a/docs/build/reference/aggregator/min.md
+++ b/docs/build/reference/aggregator/min.md
@@ -2,12 +2,12 @@
title: "And"
description: "All input scores must be within the threshold. Selects the minimum score."
icon: octicons/cross-reference-24
-tags:
+tags:
---
-# And
-
+# And
+
All input scores must be within the threshold. Selects the minimum score.
@@ -21,21 +21,18 @@ All input scores must be within the threshold. Selects the minimum score.
* Input values: `[1.0, 0.0]`
* Returns: `0.0`
-
---
**Selects the minimum similarity score:**
* Input values: `[-1.0, 0.0, 0.5, 1.0]`
* Returns: `-1.0`
-
---
**Missing scores default to a similarity score of -1:**
* Input values: `[1.0, null, -0.5]`
* Returns: `-1.0`
-
---
**Weights are ignored:**
@@ -43,13 +40,10 @@ All input scores must be within the threshold. Selects the minimum score.
* Input values: `[1.0, 0.0]`
* Returns: `0.0`
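A sketch of the selection logic:

```python
def and_aggregate(scores):
    """'And' aggregation: select the minimum score; missing scores default to -1."""
    return min(-1.0 if s is None else s for s in scores)

and_aggregate([1.0, 0.0])         # 0.0
and_aggregate([1.0, None, -0.5])  # -1.0
```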
-
-
-
## Parameter
`None`
## Advanced Parameter
-`None`
\ No newline at end of file
+`None`
diff --git a/docs/build/reference/aggregator/negate.md b/docs/build/reference/aggregator/negate.md
index f1c4974eb..208d1cd16 100644
--- a/docs/build/reference/aggregator/negate.md
+++ b/docs/build/reference/aggregator/negate.md
@@ -2,20 +2,19 @@
title: "Negate"
description: "Negates the result of the input comparison. A single input is expected. Using this operator can have a performance impact, since it lowers the efficiency of the underlying computation."
icon: octicons/cross-reference-24
-tags:
+tags:
---
-# Negate
-
+# Negate
+
Negates the result of the input comparison. A single input is expected. Using this operator can have a performance impact, since it lowers the efficiency of the underlying computation.
-
## Parameter
`None`
## Advanced Parameter
-`None`
\ No newline at end of file
+`None`
diff --git a/docs/build/reference/aggregator/quadraticMean.md b/docs/build/reference/aggregator/quadraticMean.md
index 84e54453e..e13f6d1c2 100644
--- a/docs/build/reference/aggregator/quadraticMean.md
+++ b/docs/build/reference/aggregator/quadraticMean.md
@@ -2,12 +2,12 @@
title: "Euclidian distance"
description: "Calculates the Euclidian distance."
icon: octicons/cross-reference-24
-tags:
+tags:
---
-# Euclidian distance
-
+# Euclidian distance
+
Calculates the Euclidian distance.
@@ -22,7 +22,6 @@ Calculates the Euclidian distance.
* Input values: `[1.0, 1.0, 1.0]`
* Returns: `1.0`
-
---
**Example 2:**
@@ -30,7 +29,6 @@ Calculates the Euclidian distance.
* Input values: `[1.0, 0.0]`
* Returns: `0.707107`
-
---
**Example 3:**
@@ -38,7 +36,6 @@ Calculates the Euclidian distance.
* Input values: `[0.4, 0.5, 0.6]`
* Returns: `0.506623`
-
---
**Example 4:**
@@ -46,7 +43,6 @@ Calculates the Euclidian distance.
* Input values: `[0.0, 0.0]`
* Returns: `0.0`
-
---
**Example 5:**
@@ -54,7 +50,6 @@ Calculates the Euclidian distance.
* Input values: `[1.0, 0.0, 0.0]`
* Returns: `0.707107`
-
---
**Example 6:**
@@ -62,20 +57,16 @@ Calculates the Euclidian distance.
* Input values: `[0.4, 0.5, 0.6]`
* Returns: `0.538516`
-
---
**Missing scores always lead to an output of none:**
* Input values: `[-1.0, null, 1.0]`
* Returns: `null`
-
-
-
## Parameter
`None`
## Advanced Parameter
-`None`
\ No newline at end of file
+`None`
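The unweighted examples above follow the quadratic mean (root mean square): the square root of the mean of the squared input scores, with any missing score turning the whole result into none. The sketch below reproduces only that unweighted case (some of the examples above appear to use weights, which are not modeled here), and is an illustration rather than the product implementation:

```python
import math

def quadratic_mean(scores):
    """Sketch of the unweighted quadratic-mean (RMS) aggregation:
    sqrt of the mean of squared scores. A missing score (None)
    makes the whole result None, as described above."""
    if any(s is None for s in scores):
        return None
    return math.sqrt(sum(s * s for s in scores) / len(scores))

round(quadratic_mean([1.0, 0.0]), 6)       # -> 0.707107
round(quadratic_mean([0.4, 0.5, 0.6]), 6)  # -> 0.506623
quadratic_mean([-1.0, None, 1.0])          # -> None
```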
diff --git a/docs/build/reference/aggregator/scale.md b/docs/build/reference/aggregator/scale.md
index 6dbc6efb0..d28969d8e 100644
--- a/docs/build/reference/aggregator/scale.md
+++ b/docs/build/reference/aggregator/scale.md
@@ -2,12 +2,12 @@
title: "Scale"
description: "Scales a similarity score by a factor."
icon: octicons/cross-reference-24
-tags:
+tags:
---
-# Scale
-
+# Scale
+
Scales a similarity score by a factor.
@@ -24,14 +24,12 @@ Scales a similarity score by a factor.
* Input values: `[1.0]`
* Returns: `0.5`
-
---
**Ignores missing values:**
* Input values: `[null]`
* Returns: `null`
-
---
**Throws a validation error if more than one input is provided:**
@@ -39,23 +37,16 @@ Scales a similarity score by a factor.
* Returns: `null`
* **Throws error:** `IllegalArgumentException`
-
-
-
## Parameter
### Factor
All input similarity values are multiplied with this factor.
-- ID: `factor`
-- Datatype: `double`
-- Default Value: `1.0`
-
-
-
-
+* ID: `factor`
+* Datatype: `double`
+* Default Value: `1.0`
## Advanced Parameter
-`None`
\ No newline at end of file
+`None`
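The Scale aggregator behavior documented above — one input score multiplied by the configured factor, missing values passed through, and a validation error for more than one input — can be sketched like this. The function and the `ValueError` stand in for the product's internals (the docs name an `IllegalArgumentException`), so treat this as an illustrative analogue:

```python
def scale(scores, factor=1.0):
    """Sketch of the Scale aggregator described above: multiplies a
    single similarity score by `factor`, passes missing values (None)
    through, and rejects more than one input."""
    if len(scores) > 1:
        raise ValueError("Scale expects at most one input score")
    if not scores or scores[0] is None:
        return None
    return scores[0] * factor

scale([1.0], factor=0.5)  # -> 0.5
scale([None])             # -> None
```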
diff --git a/docs/build/reference/customtask/.pages b/docs/build/reference/customtask/.pages
index 6b4a2b5e8..557a35df7 100644
--- a/docs/build/reference/customtask/.pages
+++ b/docs/build/reference/customtask/.pages
@@ -2,6 +2,7 @@ nav:
- index.md
- "Add project files": addProjectFiles.md
- "Cancel Workflow": CancelWorkflow.md
+ - "Clear dataset": clearDataset.md
- "Combine CSV files": combine-csv.md
- "Concatenate to file": ConcatenateToFile.md
- "Create Embeddings": cmem_plugin_llm-CreateEmbeddings.md
@@ -41,6 +42,7 @@ nav:
- "Parse XML": XmlParserOperator.md
- "Parse YAML": cmem_plugin_yaml-parse.md
- "Pivot": Pivot.md
+ - "Reason": cmem_plugin_reason-plugin_reason-ReasonPlugin.md
- "Request RDF triples": tripleRequestOperator.md
- "Scheduler": Scheduler.md
- "Search addresses": SearchAddresses.md
@@ -61,11 +63,13 @@ nav:
- "Store Vector Embeddings": cmem_plugin_pgvector-Store.md
- "Unpivot": Unpivot.md
- "Update Graph Insights Snapshots": cmem_plugin_graph_insights-Update.md
+ - "Update SemSpect": cmem_plugin_semspect-task-Update.md
- "Upload File to Knowledge Graph": eccencaDataPlatformGraphStoreFileUploadOperator.md
- "Upload files to Nextcloud": cmem_plugin_nextcloud-Upload.md
- "Upload local files": cmem_plugin_project_resources-UploadLocalFiles.md
- "Upload SSH files": cmem_plugin_ssh-Upload.md
- "Validate Entities": cmem_plugin_validation-validate-ValidateEntities.md
- "Validate Knowledge Graph": cmem_plugin_validation-validate-ValidateGraph.md
+ - "Validate OWL consistency": cmem_plugin_reason-plugin_validate-ValidatePlugin.md
- "Validate XML": validateXsdOperator.md
- "XSLT": xsltOperator.md
\ No newline at end of file
diff --git a/docs/build/reference/customtask/CancelWorkflow.md b/docs/build/reference/customtask/CancelWorkflow.md
index 79bd3ab60..e771d2365 100644
--- a/docs/build/reference/customtask/CancelWorkflow.md
+++ b/docs/build/reference/customtask/CancelWorkflow.md
@@ -2,17 +2,16 @@
title: "Cancel Workflow"
description: "Cancels a workflow if a specified condition is fulfilled. A typical use case for this operator is to cancel the workflow execution if the input data is empty."
icon: octicons/cross-reference-24
-tags:
+tags:
- WorkflowTask
---
-# Cancel Workflow
-
+# Cancel Workflow
+
Cancels a workflow if a specified condition is fulfilled. A typical use case for this operator is to cancel the workflow execution if the input data is empty.
-
## Parameter
### Type URI
@@ -23,8 +22,6 @@ The entity type to check the condition on.
- Datatype: `uri`
- Default Value: `None`
-
-
### Condition
The cancellation condition
@@ -33,8 +30,6 @@ The cancellation condition
- Datatype: `enumeration`
- Default Value: `empty`
-
-
### Invert condition
If true, the specified condition will be inverted, i.e., the workflow execution will be cancelled if the condition is not fulfilled.
@@ -43,8 +38,6 @@ If true, the specified condition will be inverted, i.e., the workflow execution
- Datatype: `boolean`
- Default Value: `false`
-
-
### Fail workflow
If true, the workflow execution will fail if the condition is met. If false, the workflow execution would be stopped, but shown as successful.
@@ -53,10 +46,6 @@ If true, the workflow execution will fail if the condition is met. If false, the
- Datatype: `boolean`
- Default Value: `false`
-
-
-
-
## Advanced Parameter
-`None`
\ No newline at end of file
+`None`
diff --git a/docs/build/reference/customtask/ConcatenateToFile.md b/docs/build/reference/customtask/ConcatenateToFile.md
index e2ea70dbf..f63699df1 100644
--- a/docs/build/reference/customtask/ConcatenateToFile.md
+++ b/docs/build/reference/customtask/ConcatenateToFile.md
@@ -2,17 +2,16 @@
title: "Concatenate to file"
description: "Concatenates values into a file."
icon: octicons/cross-reference-24
-tags:
+tags:
- WorkflowTask
---
-# Concatenate to file
-
+# Concatenate to file
+
Concatenates values into a file.
-
## Parameter
### Path
@@ -23,8 +22,6 @@ Values from this path will be concatenated.
- Datatype: `string`
- Default Value: `None`
-
-
### Mime type
MIME type of the output file.
@@ -33,8 +30,6 @@ MIME type of the output file.
- Datatype: `string`
- Default Value: `None`
-
-
### Prefix
Prefix to be written before the first value.
@@ -43,8 +38,6 @@ Prefix to be written before the first value.
- Datatype: `multiline string`
- Default Value: `None`
-
-
### Glue
Separator to be inserted between concatenated values.
@@ -53,8 +46,6 @@ Separator to be inserted between concatenated values.
- Datatype: `multiline string`
- Default Value: `None`
-
-
### Suffix
Suffix to be written after the last value.
@@ -63,10 +54,6 @@ Suffix to be written after the last value.
- Datatype: `multiline string`
- Default Value: `None`
-
-
-
-
## Advanced Parameter
### Charset
@@ -77,8 +64,6 @@ The file encoding.
- Datatype: `string`
- Default Value: `UTF-8`
-
-
### File extension
File extension of the output file.
@@ -87,5 +72,3 @@ File extension of the output file.
- Datatype: `string`
- Default Value: `.tmp`
-
-
diff --git a/docs/build/reference/customtask/CustomSQLExecution.md b/docs/build/reference/customtask/CustomSQLExecution.md
index 4ba021ce8..af69ed2da 100644
--- a/docs/build/reference/customtask/CustomSQLExecution.md
+++ b/docs/build/reference/customtask/CustomSQLExecution.md
@@ -2,17 +2,16 @@
title: "Spark SQL query"
description: "Executes a custom SQL query on the first input Spark dataframe and returns the result as its output."
icon: octicons/cross-reference-24
-tags:
+tags:
- WorkflowTask
---
-# Spark SQL query
-
+# Spark SQL query
+
Executes a custom SQL query on the first input Spark dataframe and returns the result as its output.
-
## Parameter
### Command
@@ -23,10 +22,6 @@ SQL command. The name of the table in the statement must be 'dataset', regardles
- Datatype: `code-sql`
- Default Value: `None`
-
-
-
-
## Advanced Parameter
-`None`
\ No newline at end of file
+`None`
diff --git a/docs/build/reference/customtask/DistinctBy.md b/docs/build/reference/customtask/DistinctBy.md
index 695a0b1ad..b5d6fa1df 100644
--- a/docs/build/reference/customtask/DistinctBy.md
+++ b/docs/build/reference/customtask/DistinctBy.md
@@ -2,17 +2,16 @@
title: "Distinct by"
description: "Removes duplicated entities based on a user-defined path. Note that this operator does not retain the order of the entities."
icon: octicons/cross-reference-24
-tags:
+tags:
- WorkflowTask
---
-# Distinct by
-
+# Distinct by
+
Removes duplicated entities based on a user-defined path. Note that this operator does not retain the order of the entities.
-
## Parameter
### Distinct path
@@ -23,8 +22,6 @@ Entities that share this path will be deduplicated.
- Datatype: `string`
- Default Value: `None`
-
-
### Resolve duplicates
Strategy to resolve duplicates.
@@ -33,10 +30,6 @@ Strategy to resolve duplicates.
- Datatype: `enumeration`
- Default Value: `keepLast`
-
-
-
-
## Advanced Parameter
-`None`
\ No newline at end of file
+`None`
diff --git a/docs/build/reference/customtask/JsonParserOperator.md b/docs/build/reference/customtask/JsonParserOperator.md
index 2c13f3fe7..12f0da925 100644
--- a/docs/build/reference/customtask/JsonParserOperator.md
+++ b/docs/build/reference/customtask/JsonParserOperator.md
@@ -2,17 +2,16 @@
title: "Parse JSON"
description: "Parses an incoming entity as a JSON dataset. Typically, it is used before a transformation task. Takes exactly one input of which only the first entity is processed."
icon: octicons/cross-reference-24
-tags:
+tags:
- WorkflowTask
---
-# Parse JSON
-
+# Parse JSON
+
Parses an incoming entity as a JSON dataset. Typically, it is used before a transformation task. Takes exactly one input of which only the first entity is processed.
-
## Parameter
### Input path
@@ -23,8 +22,6 @@ The Silk path expression of the input entity that contains the JSON document. If
- Datatype: `string`
- Default Value: `None`
-
-
### Base path
The path to the elements to be read, starting from the root element, e.g., `/Persons/Person`. If left empty, all direct children of the root element will be read.
@@ -33,8 +30,6 @@ The path to the elements to be read, starting from the root element, e.g., `/Per
- Datatype: `string`
- Default Value: `None`
-
-
### URI suffix pattern
A URI pattern that is relative to the base URI of the input entity, e.g., `/{ID}`, where `{path}` may contain relative paths to elements. This relative part is appended to the input entity URI to construct the full URI pattern.
@@ -43,8 +38,6 @@ A URI pattern that is relative to the base URI of the input entity, e.g., `/{ID}
- Datatype: `string`
- Default Value: `None`
-
-
### Navigate into arrays
Navigate into arrays automatically. If set to false, the `#array` path operator must be used to navigate into arrays.
@@ -53,10 +46,6 @@ Navigate into arrays automatically. If set to false, the `#array` path operator
- Datatype: `boolean`
- Default Value: `true`
-
-
-
-
## Advanced Parameter
-`None`
\ No newline at end of file
+`None`
diff --git a/docs/build/reference/customtask/Merge.md b/docs/build/reference/customtask/Merge.md
index a2088c746..1bec86c94 100644
--- a/docs/build/reference/customtask/Merge.md
+++ b/docs/build/reference/customtask/Merge.md
@@ -2,21 +2,20 @@
title: "Join tables"
description: "Joins a set of inputs into a single table. Expects a list of entity tables and links. All entity tables are joined into the first entity table using the provided links."
icon: octicons/cross-reference-24
-tags:
+tags:
- WorkflowTask
---
-# Join tables
-
+# Join tables
+
Joins a set of inputs into a single table. Expects a list of entity tables and links. All entity tables are joined into the first entity table using the provided links.
-
## Parameter
`None`
## Advanced Parameter
-`None`
\ No newline at end of file
+`None`
diff --git a/docs/build/reference/customtask/MultiTableMerge.md b/docs/build/reference/customtask/MultiTableMerge.md
index ec668adf6..8234a4120 100644
--- a/docs/build/reference/customtask/MultiTableMerge.md
+++ b/docs/build/reference/customtask/MultiTableMerge.md
@@ -2,17 +2,16 @@
title: "Merge tables"
description: "Stores sets of instance and mapping inputs as relational tables with the mapping as an n:m relation. Expects a list of entity tables and links. All entity tables have a relation to the first entity table using the provided links."
icon: octicons/cross-reference-24
-tags:
+tags:
- WorkflowTask
---
-# Merge tables
-
+# Merge tables
+
Stores sets of instance and mapping inputs as relational tables with the mapping as an n:m relation. Expects a list of entity tables and links. All entity tables have a relation to the first entity table using the provided links.
-
## Parameter
### Multi table output
@@ -23,8 +22,6 @@ test
- Datatype: `boolean`
- Default Value: `true`
-
-
### Pivot table name
Name of the pivot table.
@@ -33,8 +30,6 @@ Name of the pivot table.
- Datatype: `string`
- Default Value: `None`
-
-
### Mapping names
Name of the mapping tables. Comma separated list.
@@ -43,8 +38,6 @@ Name of the mapping tables. Comma separated list.
- Datatype: `string`
- Default Value: `None`
-
-
### Instance set names
Name of the tables joined to the pivot. Comma separated list.
@@ -53,10 +46,6 @@ Name of the tables joined to the pivot. Comma separated list.
- Datatype: `string`
- Default Value: `None`
-
-
-
-
## Advanced Parameter
-`None`
\ No newline at end of file
+`None`
diff --git a/docs/build/reference/customtask/Pivot.md b/docs/build/reference/customtask/Pivot.md
index a64d029f5..b04936a87 100644
--- a/docs/build/reference/customtask/Pivot.md
+++ b/docs/build/reference/customtask/Pivot.md
@@ -2,13 +2,13 @@
title: "Pivot"
description: "The pivot operator takes data in separate rows, aggregates it and converts it into columns."
icon: octicons/cross-reference-24
-tags:
+tags:
- WorkflowTask
---
-# Pivot
-
+# Pivot
+
The pivot operator takes data in separate rows, aggregates it and converts it into columns.
@@ -26,7 +26,6 @@ The following aggregation (summary) functions are available:
- **sum** - Adds up the values (works with numbers only)
- **average** - Finds the average of the values (works with numbers only)
-
## Parameter
### Pivot property
@@ -37,8 +36,6 @@ The pivot column refers to the column in the input data that is used to organize
- Datatype: `string`
- Default Value: `None`
-
-
### First group property
The name of the first group column in the range. All columns starting with this will be grouped.
@@ -47,8 +44,6 @@ The name of the first group column in the range. All columns starting with this
- Datatype: `string`
- Default Value: `None`
-
-
### Last group property
The name of the last group column in the range. If left empty, only the first column is grouped.
@@ -57,8 +52,6 @@ The name of the last group column in the range. If left empty, only the first co
- Datatype: `string`
- Default Value: `None`
-
-
### Value property
The property that contains the grouped values that will be aggregated.
@@ -67,8 +60,6 @@ The property that contains the grouped values that will be aggregated.
- Datatype: `string`
- Default Value: `None`
-
-
### Aggregation function
The aggregation function used to aggregate values.
@@ -77,8 +68,6 @@ The aggregation function used to aggregate values.
- Datatype: `enumeration`
- Default Value: `sum`
-
-
### URI prefix
Prefix to prepend to all generated pivot columns.
@@ -87,10 +76,6 @@ Prefix to prepend to all generated pivot columns.
- Datatype: `string`
- Default Value: `None`
-
-
-
-
## Advanced Parameter
-`None`
\ No newline at end of file
+`None`
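The core pivot idea described above — rows sharing a pivot-column value are grouped and their values aggregated into one output column — can be sketched as follows. Column names and the row layout are illustrative assumptions, and only the `sum` and `average` summary functions are shown:

```python
from collections import defaultdict

def pivot(rows, pivot_key, value_key, agg="sum"):
    """Sketch of the pivot described above: group rows by the pivot
    column, then aggregate the grouped values into one column each."""
    groups = defaultdict(list)
    for row in rows:
        groups[row[pivot_key]].append(row[value_key])
    if agg == "sum":
        return {k: sum(v) for k, v in groups.items()}
    if agg == "average":
        return {k: sum(v) / len(v) for k, v in groups.items()}
    raise ValueError(f"unsupported aggregation: {agg}")

rows = [
    {"quarter": "Q1", "amount": 10},
    {"quarter": "Q1", "amount": 20},
    {"quarter": "Q2", "amount": 5},
]
pivot(rows, "quarter", "amount")  # -> {"Q1": 30, "Q2": 5}
```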
diff --git a/docs/build/reference/customtask/Scheduler.md b/docs/build/reference/customtask/Scheduler.md
index ef7de7666..80523b26b 100644
--- a/docs/build/reference/customtask/Scheduler.md
+++ b/docs/build/reference/customtask/Scheduler.md
@@ -2,13 +2,13 @@
title: "Scheduler"
description: "Executes a workflow at specified intervals."
icon: octicons/cross-reference-24
-tags:
+tags:
- WorkflowTask
---
-# Scheduler
-
+# Scheduler
+
The eccenca Build plugin `Scheduler` executes a given workflow at specified intervals.
@@ -46,7 +46,6 @@ _next_ period occurs. If the start time lies in the _future_, then this is simpl
As mentioned, the `CancelWorkflow` plugin can be used alongside the scheduler to _cancel_ the otherwise never-ending execution
of a workflow.
-
## Parameter
### Workflow
@@ -57,8 +56,6 @@ The name of the workflow to be executed
- Datatype: `task`
- Default Value: `None`
-
-
### Interval
The interval at which the scheduler should run the referenced task. It must be in ISO-8601 duration format PnDTnHnMn.nS.
@@ -67,8 +64,6 @@ The interval at which the scheduler should run the referenced task. It must be i
- Datatype: `duration`
- Default Value: `PT15M`
-
-
### Start time
The time when the scheduled task is run for the first time, e.g., 2017-12-03T10:15:30. If no start time is set, midnight on the day the scheduler is started is assumed.
@@ -77,8 +72,6 @@ The time when the scheduled task is run for the first time, e.g., 2017-12-03T10:
- Datatype: `string`
- Default Value: `None`
-
-
### Enabled
Enables or disables the scheduler. It's enabled by default.
@@ -87,8 +80,6 @@ Enables or disables the scheduler. It's enabled by default.
- Datatype: `boolean`
- Default Value: `true`
-
-
### Stop on error
If set to true, this will stop the scheduler, so the failed task is not scheduled again for execution.
@@ -97,10 +88,6 @@ If set to true, this will stop the scheduler, so the failed task is not schedule
- Datatype: `boolean`
- Default Value: `false`
-
-
-
-
## Advanced Parameter
-`None`
\ No newline at end of file
+`None`
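The interval behavior described above (ISO-8601 durations such as the `PT15M` default, with a start time that may lie in the past) can be sketched with standard datetime arithmetic. The dates and the skip-elapsed-periods logic are illustrative assumptions about how the next run time is derived, not the product's actual scheduling code:

```python
from datetime import datetime, timedelta

# The Scheduler's PT15M default corresponds to a 15-minute interval.
interval = timedelta(minutes=15)          # ISO-8601 "PT15M"
start = datetime(2017, 12, 3, 10, 15, 30)
now = datetime(2017, 12, 3, 11, 0, 0)

# Skip the periods that have already elapsed since the start time,
# then schedule the next run at the following period boundary.
elapsed = (now - start) // interval
next_run = start + (elapsed + 1) * interval
# next_run -> 2017-12-03 11:00:30
```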
diff --git a/docs/build/reference/customtask/SearchAddresses.md b/docs/build/reference/customtask/SearchAddresses.md
index eb2f5b7b4..85e8436b0 100644
--- a/docs/build/reference/customtask/SearchAddresses.md
+++ b/docs/build/reference/customtask/SearchAddresses.md
@@ -2,18 +2,17 @@
title: "Search addresses"
description: "Looks up locations from textual descriptions using the configured geocoding API. Outputs results as RDF."
icon: octicons/cross-reference-24
-tags:
+tags:
- WorkflowTask
---
-# Search addresses
-
-
+# Search addresses
+
-**Configuration**
+## Configuration
-The geocoding service to be queried for searches can be set up in the configuration.
+The geocoding service to be queried for searches can be set up in the configuration.
The default configuration is as follows:
com.eccenca.di.geo = {
@@ -21,22 +20,22 @@ The default configuration is as follows:
# url = "https://nominatim.eccenca.com/search"
url = "https://photon.komoot.de/api"
# url = https://api-adresse.data.gouv.fr/search
-
+
# Additional URL parameters to be attached to all HTTP search requests. Example: '&countrycodes=de&addressdetails=1'.
# Will be attached in addition to the parameters set on each search operator directly.
searchParameters = ""
-
+
# The minimum pause time between subsequent queries
pauseTime = 1s
-
+
# Number of coordinates to be cached in-memory
cacheSize = 10
}
-
+
In general, all services adhering to the [Nominatim search API](https://nominatim.org/release-docs/develop/api/Search/) should be usable.
Please note that when using public services, the pause time should be set to avoid overloading.
-**Logging**
+## Logging
By default, individual requests to the geocoding service are not logged. To enable logging each request, the following configuration option can be set:
@@ -44,7 +43,6 @@ By default, individual requests to the geocoding service are not logged. To enab
com.eccenca.di.geo=DEBUG
}
-
## Parameter
### Search attributes
@@ -55,8 +53,6 @@ List of attributes that contain search terms. Multiple attributes (comma-separat
- Datatype: `traversable[string]`
- Default Value: `None`
-
-
### Limit
Optionally limits the number of results for each search.
@@ -65,10 +61,6 @@ Optionally limits the number of results for each search.
- Datatype: `option[int]`
- Default Value: `None`
-
-
-
-
## Advanced Parameter
### JSON-LD context
@@ -79,8 +71,6 @@ Optional JSON-LD context to be used for converting the returned JSON to RDF. If
- Datatype: `resource`
- Default Value: `None`
-
-
### Additional parameters
Additional URL parameters to be attached to each HTTP search request. Example: '&countrycodes=de&addressdetails=1'. Consult the API documentation for a list of available parameters.
@@ -89,5 +79,3 @@ Additional URL parameters to be attached to each HTTP search request. Example: '
- Datatype: `string`
- Default Value: `None`
-
-
diff --git a/docs/build/reference/customtask/SendEMail.md b/docs/build/reference/customtask/SendEMail.md
index d4c5f5fd1..ec3e6417c 100644
--- a/docs/build/reference/customtask/SendEMail.md
+++ b/docs/build/reference/customtask/SendEMail.md
@@ -2,13 +2,13 @@
title: "Send email"
description: "Sends an email using an SMTP server."
icon: octicons/cross-reference-24
-tags:
+tags:
- WorkflowTask
---
-# Send email
-
+# Send email
+
Sends an email using an SMTP server with support for both plain text and HTML formatted messages.
@@ -57,7 +57,6 @@ Enable HTML formatting and use standard HTML markup in your message: