4 changes: 4 additions & 0 deletions .env.example
@@ -0,0 +1,4 @@
# Copy this file to .env (or export env vars) before running.
BRAVE_SEARCH_API_KEY=REPLACE_ME
MISTRAL_AI_API_KEY=REPLACE_ME
OPENAI_API_KEY=REPLACE_ME
160 changes: 160 additions & 0 deletions .github/workflows/baseline-ci.yml
@@ -0,0 +1,160 @@
name: Baseline CI

on:
  push:
  pull_request:
  workflow_dispatch:

permissions:
  contents: read

jobs:
  secret-scan:
    name: Secret Scan
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4
        with:
          fetch-depth: 0

      - name: Gitleaks
        uses: gitleaks/gitleaks-action@v2
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

  quality:
    name: Lint / Build / Test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Setup Node
        if: ${{ hashFiles('**/package.json') != '' }}
        uses: actions/setup-node@v4
        with:
          node-version: '20'

      - name: Setup Python
        if: ${{ hashFiles('**/requirements.txt', '**/pyproject.toml') != '' }}
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'

      - name: Setup Java
        if: ${{ hashFiles('**/pom.xml', '**/build.gradle', '**/build.gradle.kts') != '' }}
        uses: actions/setup-java@v4
        with:
          distribution: temurin
          java-version: '17'

      - name: Setup Go
        if: ${{ hashFiles('**/go.mod') != '' }}
        uses: actions/setup-go@v5
        with:
          go-version: '1.22'

      - name: Lint
        shell: bash
        run: |
          set -euo pipefail
          ran=0

          if [ -f package.json ]; then
            npm ci || npm install
            npm run lint --if-present
            ran=1
          fi

          if [ -f requirements.txt ] || [ -f pyproject.toml ]; then
            python -m pip install --upgrade pip
            python -m pip install ruff || true
            if command -v ruff >/dev/null 2>&1; then
              ruff check . || true
            fi
            ran=1
          fi

          if [ -f go.mod ]; then
            gofmt -l . | tee /tmp/gofmt.out
            if [ -s /tmp/gofmt.out ]; then
              echo 'gofmt reported unformatted files'
              exit 1
            fi
            ran=1
          fi

          if [ -f pom.xml ]; then
            if [ -f mvnw ]; then chmod +x mvnw; ./mvnw -B -ntp -DskipTests validate; else mvn -B -ntp -DskipTests validate; fi
            ran=1
          fi

          if [ "$ran" -eq 0 ]; then
            echo 'No lint target detected, skipping.'
          fi

      - name: Build
        shell: bash
        run: |
          set -euo pipefail
          ran=0

          if [ -f package.json ]; then
            npm run build --if-present
            ran=1
          fi

          if [ -f requirements.txt ] || [ -f pyproject.toml ]; then
            python -m compileall -q .
            ran=1
          fi

          if [ -f go.mod ]; then
            go build ./...
            ran=1
          fi

          if [ -f pom.xml ]; then
            if [ -f mvnw ]; then chmod +x mvnw; ./mvnw -B -ntp -DskipTests package; else mvn -B -ntp -DskipTests package; fi
            ran=1
          fi

          if [ "$ran" -eq 0 ]; then
            echo 'No build target detected, skipping.'
          fi

      - name: Test
        shell: bash
        run: |
          set -euo pipefail
          ran=0

          if [ -f package.json ]; then
            npm test --if-present
            ran=1
          fi

          if [ -f requirements.txt ] || [ -f pyproject.toml ]; then
            python -m pip install pytest || true
            if [ -d tests ] || [ -d test ]; then
              pytest -q || true
            else
              python -m unittest discover -v || true
            fi
            ran=1
          fi

          if [ -f go.mod ]; then
            go test ./...
            ran=1
          fi

          if [ -f pom.xml ]; then
            if [ -f mvnw ]; then chmod +x mvnw; ./mvnw -B -ntp test; else mvn -B -ntp test; fi
            ran=1
          fi

          if [ "$ran" -eq 0 ]; then
            echo 'No test target detected, skipping.'
          fi
29 changes: 29 additions & 0 deletions README.md
@@ -223,3 +223,32 @@ _Coming soon_

* [Spring AI - Zero to Hero (Adib Saikali, Christian Tzolov)](https://github.com/asaikali/spring-ai-zero-to-hero/tree/main)
* [AI Applications with Java and Spring AI (Thomas Vitale)](https://github.com/ThomasVitale/java-ai-workshop)

## Baseline Maintenance

### Environment

- Put runtime credentials in environment variables.
- Use `.env.example` as the configuration template.
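
To turn a local `.env` file into exported environment variables, a minimal sketch (assuming simple `KEY=VALUE` lines with no quoting or spaces, as in `.env.example`) looks like this; the `/tmp/demo.env` file and the `sk-demo` value are purely illustrative:

```shell
# Create a scratch .env for demonstration (in a real setup, copy .env.example).
cat > /tmp/demo.env <<'EOF'
OPENAI_API_KEY=sk-demo
MISTRAL_AI_API_KEY=REPLACE_ME
EOF

# Export every assignment in the file into the current shell.
set -a
. /tmp/demo.env
set +a

# Flag keys that were left empty or at their placeholder value.
missing=""
for key in OPENAI_API_KEY MISTRAL_AI_API_KEY; do
  eval "val=\${$key:-}"
  if [ -z "$val" ] || [ "$val" = "REPLACE_ME" ]; then
    missing="$missing $key"
  fi
done
echo "placeholders:$missing"
```

The `set -a` / `set +a` pair makes every assignment in the sourced file exported without repeating `export` on each line.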

### CI

- `baseline-ci.yml` provides a unified pipeline with `lint + build + test + secret scan`.
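
The pipeline keys each toolchain's steps off marker files in the checkout. A rough local sketch of that detection logic (the `detect_targets` helper is hypothetical, not part of the workflow itself):

```shell
# Mirror baseline-ci.yml's ecosystem detection: a toolchain is active
# only if its marker file exists in the given directory.
detect_targets() {
  dir="$1"
  targets=""
  [ -f "$dir/package.json" ] && targets="$targets node"
  { [ -f "$dir/requirements.txt" ] || [ -f "$dir/pyproject.toml" ]; } && targets="$targets python"
  [ -f "$dir/go.mod" ] && targets="$targets go"
  [ -f "$dir/pom.xml" ] && targets="$targets maven"
  echo "${targets# }"
}

# Example: a checkout containing only a go.mod is detected as a Go target.
demo=$(mktemp -d)
touch "$demo/go.mod"
detect_targets "$demo"   # prints: go
```

This is the same convention the workflow's `hashFiles(...)` conditions and `[ -f ... ]` guards use, so running it at the repo root predicts which CI steps will actually do work.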

### Repo Hygiene

- Keep generated files (`dist/`, `build/`, `__pycache__/`, `.idea/`, `.DS_Store`) out of version control.
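
A minimal `.gitignore` sketch covering the artifacts listed above; trim it to the toolchains actually present in the repository (the `target/` and `.env` entries are additional suggestions, not requirements from this list):

```gitignore
# Build output
dist/
build/
target/

# Python bytecode
__pycache__/
*.pyc

# IDE and OS noise
.idea/
.DS_Store

# Local credentials (see .env.example)
.env
```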

## Audit Baseline Notes

### Requirements

- Environment requirements are defined by this module and parent project documentation.
- Configure credentials via environment variables before startup.
- Use `.env.example` (or equivalent sample config) for local setup.

### Run

- Install dependencies for this module before execution.
- Use the standard project command to build and run (for example Maven, Gradle, npm, or Python entrypoint scripts in this repository).

6 changes: 3 additions & 3 deletions use-cases/chatbot/README.md
Expand Up @@ -2,13 +2,13 @@

Chat with LLMs via Ollama.

## Ollama
## Runtime prerequisites

The application consumes models from an [Ollama](https://ollama.ai) inference server. You can either run Ollama locally on your machine or let Arconia provide a Dev Service that runs Ollama as a container automatically.

Either way, Spring AI pulls the required Ollama models at startup if they are not yet available on your machine.

## Running the application
## Run the application

Run the application as follows:

@@ -20,7 +20,7 @@ Under the hood, in case no native Ollama connection is detected on your machine,

The application will be accessible at http://localhost:8080.

## Calling the application
## Try the application

> [!NOTE]
> These examples use the [httpie](https://httpie.io) CLI to send HTTP requests.
6 changes: 3 additions & 3 deletions use-cases/question-answering/README.md
Expand Up @@ -2,13 +2,13 @@

Ask questions about documents with LLMs via Ollama and PGVector.

## Ollama
## Runtime prerequisites

The application consumes models from an [Ollama](https://ollama.ai) inference server. You can either run Ollama locally on your machine or let Arconia provide a Dev Service that runs Ollama as a container automatically.

Either way, Spring AI pulls the required Ollama models at startup if they are not yet available on your machine.

## Running the application
## Run the application

Run the application as follows:

@@ -20,7 +20,7 @@ Under the hood, in case no native Ollama connection is detected on your machine,

The application will be accessible at http://localhost:8080.

## Calling the application
## Try the application

> [!NOTE]
> These examples use the [httpie](https://httpie.io) CLI to send HTTP requests.
6 changes: 3 additions & 3 deletions use-cases/semantic-search/README.md
Expand Up @@ -2,13 +2,13 @@

Semantic search with LLMs via Ollama and PGVector.

## Ollama
## Runtime prerequisites

The application consumes models from an [Ollama](https://ollama.ai) inference server. You can either run Ollama locally on your machine or let Arconia provide a Dev Service that runs Ollama as a container automatically.

Either way, Spring AI pulls the required Ollama models at startup if they are not yet available on your machine.

## Running the application
## Run the application

Run the application as follows:

@@ -20,7 +20,7 @@ Under the hood, in case no native Ollama connection is detected on your machine,

The application will be accessible at http://localhost:8080.

## Calling the application
## Try the application

> [!NOTE]
> These examples use the [httpie](https://httpie.io) CLI to send HTTP requests.
6 changes: 3 additions & 3 deletions use-cases/structured-data-extraction/README.md
Expand Up @@ -2,13 +2,13 @@

Structured data extraction with LLMs via Ollama.

## Ollama
## Runtime prerequisites

The application consumes models from an [Ollama](https://ollama.ai) inference server. You can either run Ollama locally on your machine or let Arconia provide a Dev Service that runs Ollama as a container automatically.

Either way, Spring AI pulls the required Ollama models at startup if they are not yet available on your machine.

## Running the application
## Run the application

Run the application as follows:

@@ -20,7 +20,7 @@ Under the hood, in case no native Ollama connection is detected on your machine,

The application will be accessible at http://localhost:8080.

## Calling the application
## Try the application

> [!NOTE]
> These examples use the [httpie](https://httpie.io) CLI to send HTTP requests.
6 changes: 3 additions & 3 deletions use-cases/text-classification/README.md
Expand Up @@ -2,13 +2,13 @@

Text classification with LLMs via Ollama.

## Ollama
## Runtime prerequisites

The application consumes models from an [Ollama](https://ollama.ai) inference server. You can either run Ollama locally on your machine or let Arconia provide a Dev Service that runs Ollama as a container automatically.

Either way, Spring AI pulls the required Ollama models at startup if they are not yet available on your machine.

## Running the application
## Run the application

Run the application as follows:

@@ -20,7 +20,7 @@ Under the hood, in case no native Ollama connection is detected on your machine,

The application will be accessible at http://localhost:8080.

## Calling the application
## Try the application

> [!NOTE]
> These examples use the [httpie](https://httpie.io) CLI to send HTTP requests.