An app to check if your dog's friends are at their favorite park
Find local dog parks, make friends with the other dogs and their owners, and let one another know when you're going so your pups can always have playmates. Or use it to avoid dogs that don't get along. Either way, it ensures you and your canine companions have the best time at the park.
As this is an MVP, it is limited in scope to include only the public dog parks in Helsinki, and it currently has no geolocation functionality.
- Docker & Docker Compose
- OpenSSL (for certificate generation)
- Node.js (for JWT secret generation, optional - auto-generated if not provided)
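If you prefer to supply your own secret rather than rely on auto-generation, a plain OpenSSL call works just as well as the Node one-liner referenced later in this README; a minimal sketch:

```shell
# Generate a 256-bit secret (64 hex characters) suitable for JWT_SECRET
JWT_SECRET=$(openssl rand -hex 32)
echo "JWT_SECRET=${JWT_SECRET}"
```

Paste the printed line into `docker-secrets` before running the stack script.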
The easiest way to deploy DogParkPals is using the stack control script:
# Deploy everything with automatic configuration
./scripts/stack.sh --fresh

This single command will:
- ✓ Copy `docker-secrets-example` to `docker-secrets` if it doesn't exist
- ✓ Auto-generate secure `JWT_SECRET` and `ELASTIC_PASSWORD` if needed
- ✓ Fix Elasticsearch authentication configuration automatically
- ✓ Generate SSL certificates with proper Subject Alternative Names
- ✓ Build all Docker images
- ✓ Start all services (backend, frontend, observability stack)
- ✓ Seed the database with test data
- ✓ Initialize Kibana and Elasticsearch
What you get:
- Frontend: https://localhost:5173
- Backend API: https://localhost:3000
- Kibana: http://localhost:5602
- Grafana: http://localhost:3001
- Prometheus: http://localhost:9090
Note: You'll see browser warnings for self-signed certificates (safe to ignore locally)
If you prefer to configure everything manually:
- Copy environment configuration:
  cp docker-secrets-example docker-secrets
- Edit docker-secrets (optional - auto-generation handles most settings):
  nano docker-secrets
  Optional manual configuration:
  - `JWT_SECRET`: Auto-generated if empty (or use: `node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"`)
  - `ELASTIC_PASSWORD`: Auto-generated if empty/placeholder
  - `GOOGLE_CLIENT_ID`: (Optional) For Google OAuth
  - `GOOGLE_CLIENT_SECRET`: (Optional) For Google OAuth
- Deploy using the stack script:
  ./scripts/stack.sh --fresh
# Deploy full stack with all observability
./scripts/stack.sh --fresh
# Deploy core app only (fast, low-resource - no monitoring/logging)
./scripts/stack.sh --core-only
# Stop observability services only (keeps backend/frontend running)
./scripts/stack.sh --obs-down
# Full cleanup (remove all containers, volumes, and images)
./scripts/stack.sh --clean
# Check service status
docker compose ps
# View logs
docker compose logs -f
# View specific service logs
docker compose logs -f backend
docker compose logs -f frontend

Elasticsearch Connectivity:
- The local observability stack exposes Elasticsearch on http://localhost:9200
- The current branch runs Elasticsearch without X-Pack auth in local Docker compose
- Kibana connects to Elasticsearch over plain HTTP in the default local setup
- If you re-enable Elasticsearch security later, update the compose env and curl examples accordingly
Kibana Startup:
- First startup can take 2-5 minutes
- Status `health: starting` is normal during initialization
- If Kibana times out, core services continue - setup can be run manually later
- Access at http://localhost:5602 once healthy
Deployment Reliability:
- ✅ Automatic volume cleanup prevents stale Elasticsearch state
- ✅ Container health checks verify services before proceeding
- ✅ Kibana initialization is non-fatal (warns but continues)
- ✅ Faster timeouts with better error detection
- ✅ Progress shown every 10 seconds (less noise)
- ✅ Core-only mode available for quick testing without observability
Observability Security Model:
- The current local branch configuration keeps Kibana and Elasticsearch unauthenticated
- This reduces local setup friction while HTTPS remains enabled for the app and RabbitMQ management
- Production hardening still requires re-enabling auth and transport security for observability services
- rabbitmq-exporter TLS settings:
  - `CAFILE=/etc/rabbitmq-exporter/ca.pem`
  - `SKIPVERIFY=false`
  - volume mount: `./certs/rabbitmq.crt:/etc/rabbitmq-exporter/ca.pem:ro`
- Expect a transient rabbitmq-exporter 504 after fresh cert regeneration:
  - After `./scripts/stack.sh --fresh`, certificates are regenerated.
  - If `rabbitmq` and `rabbitmq-exporter` are not restarted in sync, exporter health can show `Error checking url: Unexpected http code 504` and briefly become `unhealthy`.
  - Recovery: `docker compose restart rabbitmq rabbitmq-exporter`
  - This is usually transient and resolves once both services are on the same cert/runtime state.
Access the application
- Frontend: https://localhost:5173
- Backend API: https://localhost:3000
- Prometheus: http://localhost:9090
- Grafana: http://localhost:3001 (admin/admin)
- Kibana: http://localhost:5602
- RabbitMQ Management: https://localhost:15671 (guest/guest)
- Elasticsearch: http://localhost:9200 (local Docker setup uses plain HTTP with no auth)
Note: You'll see a certificate warning in your browser when accessing HTTPS URLs with the self-signed certificate. This is expected for local development and can be safely bypassed.
Management UI:
- HTTPS URL: https://localhost:15671
- Default credentials: guest/guest
Queue Transport Security:
The project supports both plaintext and encrypted RabbitMQ connections:
| Protocol | Port | Default | Use Case |
|---|---|---|---|
| amqp:// | 5672 | ❌ No | Local debugging only |
| amqps:// | 5671 | ✅ Yes | Default and recommended (encrypted queue traffic) |
Default: AMQPS (amqps://rabbitmq:5671) is enabled by default.
🔒 Enabling AMQPS (Recommended for Production):
To enable encrypted queue transport, update docker-secrets:
RABBITMQ_URL=amqps://rabbitmq:5671
RABBITMQ_CA_PATH=/app/certs/rabbitmq.crt
RABBIT_SKIP_VERIFY=false

Prerequisites:
- SSL certificates must be generated first (see setup step 1 above)
- Backend will fail to start if `RABBITMQ_CA_PATH` doesn't exist when using `amqps://`
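The failure mode above can be illustrated with a toy pre-flight check — a sketch of the rule, not the backend's actual startup code:

```shell
# Hypothetical pre-flight check: when the URL scheme is amqps://,
# require the CA file to exist before allowing startup to proceed.
check_rabbitmq_ca() {
  url="$1"; ca="$2"
  case "$url" in
    amqps://*)
      if [ -f "$ca" ]; then echo "ok"; else echo "missing CA: $ca"; fi ;;
    *)
      echo "plaintext, CA not required" ;;
  esac
}

ca_file=$(mktemp)   # stand-in for ./certs/rabbitmq.crt
check_rabbitmq_ca "amqps://rabbitmq:5671" "$ca_file"
check_rabbitmq_ca "amqp://rabbitmq:5672"  "/does/not/exist"
rm -f "$ca_file"
```

Running `./scripts/generate-certs` (or the stack script's `--fresh` mode) is what actually puts the CA file in place.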
Event Queue Behavior:
- Failed event messages retry up to `EVENT_QUEUE_MAX_RETRIES` (default: 5)
- After max retries, messages move to the Dead Letter Queue: `EVENT_QUEUE_DLQ_NAME`
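The retry-then-DLQ flow can be sketched as a toy simulation (the `EVENT_QUEUE_DLQ_NAME` value below is hypothetical; the real one comes from `docker-secrets`):

```shell
# Toy simulation of the retry policy described above.
EVENT_QUEUE_MAX_RETRIES=5
EVENT_QUEUE_DLQ_NAME="events.dlq"   # hypothetical queue name

deliver() { return 1; }   # stand-in handler that always fails

attempts=0
processed=false
while [ "$attempts" -lt "$EVENT_QUEUE_MAX_RETRIES" ]; do
  attempts=$((attempts + 1))
  if deliver; then
    processed=true
    break
  fi
done

if [ "$processed" = true ]; then
  echo "processed after $attempts attempt(s)"
else
  echo "exhausted $attempts retries, routing message to $EVENT_QUEUE_DLQ_NAME"
fi
```

In the real system this logic lives in the event consumer; the DLQ size is visible in the RabbitMQ management UI.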
Primary Stack Control (Recommended):
# Full deployment with auto-configuration
./scripts/stack.sh --fresh # Complete setup + seeding + observability init
# Stop observability services only (keeps app running)
./scripts/stack.sh --obs-down # Stop Elasticsearch, Kibana, Grafana, Prometheus, Logstash
# Full cleanup
./scripts/stack.sh --clean # Stop all + remove volumes/images (fresh slate)
# Deployment verification
bash scripts/verify-deploy.sh  # Run pitfall checks (ports, healthchecks, TLS wiring, etc.)

Manual Docker Compose Commands:
# Start all services (after stack.sh has configured everything)
docker compose --env-file docker-secrets up -d
# Start minimal services only (core app without observability)
docker compose --env-file docker-secrets up -d backend frontend rabbitmq db-init
# Stop observability services
docker compose stop elasticsearch logstash kibana prometheus grafana rabbitmq-exporter
# View all logs
docker compose logs -f
# View specific service logs
docker compose logs -f backend
# Stop all services
docker compose down
# Restart a specific service
docker compose restart backend

Service Profiles:
- Minimal Stack (core app only): backend, frontend, rabbitmq, db-init
- ✓ Fully functional app with events/messaging
- ✓ Lower resource usage (~1-2GB RAM)
- ✓ Fast startup (~30-60 seconds)
- ✓ Ideal for development and testing
- Full Stack (with observability): all minimal services plus elasticsearch, logstash, kibana, prometheus, grafana, rabbitmq-exporter
- ✓ Centralized logging and search (ELK stack)
- ✓ Metrics and monitoring dashboards
- ✓ Higher resource usage (~4-6GB RAM)
- ✓ Longer startup (~2-5 minutes for Kibana)
- Startup time: 3-5 minutes (Kibana alone takes 2-5 min first run)
# Minimal evaluation setup (core app only + seed database)
./scripts/stack.sh --core-only
# Full evaluation setup (all services + seed + ELK/Prometheus setup)
./scripts/stack.sh --fresh
# ⏱️ Note: Kibana can take 2-5 minutes to fully initialize on first run; the script will wait
# Reset everything (WARNING: deletes all data)
./scripts/stack.sh --clean
# Generate test logs (verify Elasticsearch is working)
bash elasticsearch/generate-test-logs.sh
# Access backend shell
docker exec -it dogparkpals-backend sh
# Run migrations
docker exec -it dogparkpals-backend npx prisma migrate deploy
# Open Prisma Studio
docker exec -it dogparkpals-backend npx prisma studio

Quick reference for running DogParkPals locally with Docker and validating the event-driven pipeline.
- Docker + Docker Compose
- `docker-secrets` created from `docker-secrets-example`
- Start: `docker compose --env-file docker-secrets up -d` (starts backend, frontend, db-init, rabbitmq, prometheus, grafana, rabbitmq-exporter)
- Logs: `docker compose logs -f backend`, `docker compose logs -f rabbitmq`
- Backend health: https://localhost:3000/health
- Status: https://localhost:3000/status
- Frontend: https://localhost:5173
- RabbitMQ UI: https://localhost:15671 (default guest/guest)
DogParkPals includes Prometheus metrics and Grafana dashboards for observability.
Access:
- Prometheus: http://localhost:9090
- Grafana: http://localhost:3001 (username: admin, password: admin)
- Backend Metrics: https://localhost:3000/metrics
- RabbitMQ Exporter: http://rabbitmq-exporter:9419/metrics (internal Docker network)
- Elasticsearch: http://localhost:9200 (local Docker setup uses plain HTTP with no auth)
- Kibana: http://localhost:5602
Available Metrics:
- Node.js runtime (memory, event loop lag, GC)
- Event handler executions (success/failure counts, duration by event type)
- Background job executions (outboxPublisher, autoCheckoutJob, eventConsumer)
- Outbox event publishing (success/failure by event type)
- Auto park checkout operations
- RabbitMQ queue metrics (queue depth, message rates, connections)
Dashboards:
- "DogParkPals - System Overview" is auto-provisioned on Grafana startup
- View 10 panels covering system health, operations, performance, and event bus monitoring
- Auto-refreshes every 10 seconds, shows last hour by default
Querying Prometheus:
- Example: `dogparkpals_event_handler_executions_total{status="success"}`
- Example: `rate(dogparkpals_outbox_events_published_total[5m])`
- Example: `histogram_quantile(0.95, rate(dogparkpals_job_duration_seconds_bucket[5m]))`
DogParkPals includes an ELK stack (Elasticsearch, Logstash, Kibana) for centralized log aggregation, search, and audit trail capabilities.
Access:
- Kibana: http://localhost:5602
- Elasticsearch: http://localhost:9200 (plain HTTP in the default local Docker stack)
- Logstash: Receives logs on TCP/UDP port 5000 (not user-facing)
Verifying Logs (Fresh Deployments):
In a fresh deployment, logs are generated automatically but may be minimal initially. To verify Elasticsearch is working:
- Wait for automatic logs (1-2 minutes after startup):
  - Server startup: `"Server listening"` log
  - Auto-checkout job: Runs every 15 minutes
  - Event consumer & outbox publisher startup logs
- Generate test logs immediately (recommended for evaluations):
bash elasticsearch/generate-test-logs.sh
This hits health/status endpoints 20 times to populate Elasticsearch with verifiable logs.
- Verify logs exist in Elasticsearch:

  # Check total log count
  curl 'http://localhost:9200/dogparkpals-logs-*/_count'

  # View 5 most recent logs
  curl -s 'http://localhost:9200/dogparkpals-logs-*/_search?size=5&sort=@timestamp:desc' \
    | jq '.hits.hits[]._source | {timestamp: .["@timestamp"], severity, log_message}'
- Open Kibana to browse logs visually at http://localhost:5602
Note: Logs take 5-10 seconds to flow from backend → Logstash → Elasticsearch → Kibana indexing.
Setup: After starting Docker services, initialize Kibana with dashboards and saved searches:
bash kibana/setup-kibana.sh

For a 42 evaluation-ready setup (starts all services + seeds DB + configures Kibana + generates test logs):
./scripts/stack.sh --fresh

This will:
- Start all services (backend, frontend, Elasticsearch, Logstash, Kibana, Prometheus, Grafana, RabbitMQ)
- Wait for services to be healthy
- Seed the database with sample data
- Configure Kibana dashboards and index patterns
- Generate test logs for verification
This creates:
- Elasticsearch ILM (Index Lifecycle Management) policy for automatic log retention
- Index pattern `dogparkpals-logs-*` (queries all log indices automatically)
- 8 pre-configured saved searches (all logs, errors, events, failed jobs, etc.)
- 5 sample dashboards (event timeline, error analysis, user activity, system health, audit trail)
Log Retention (Automatic):
- Hot Phase (0-1 day): Active indexing, rollover at 50GB or 1 day
- Warm Phase (1-7 days): Read-only access
- Delete Phase (30+ days): Automatic deletion
Logs older than 30 days are automatically deleted to prevent disk space issues. Customize retention in elasticsearch/ilm-policy.json.
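The referenced `elasticsearch/ilm-policy.json` follows the standard Elasticsearch ILM policy shape; a hedged sketch consistent with the phases described above (the actual file in this repo may differ in detail):

```json
{
  "policy": {
    "phases": {
      "hot":    { "actions": { "rollover": { "max_size": "50gb", "max_age": "1d" } } },
      "warm":   { "min_age": "1d",  "actions": { "readonly": {} } },
      "delete": { "min_age": "30d", "actions": { "delete": {} } }
    }
  }
}
```

Editing `min_age` in the delete phase is the usual way to lengthen or shorten retention.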
Key Features:
Event-Driven Audit Trail:
- All 34 domain event types are automatically logged to Elasticsearch
- Immutable record of every system change for compliance
- Tagged with event_id, actor_id, user_id, dog_id, park_id, organization_id
- Searchable by time, user, event type, resource ID, or outcome
Structured JSON Logging:
- Backend logs sent to Logstash via UDP (fire-and-forget, no blocking)
- Automatically parsed and enriched with contextual fields
- Error stack traces captured and searchable
- Handler performance metrics (duration_ms) tracked
- Request tracing with trace_id for correlation
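The shape of such a log line can be sketched as follows (field names come from the log fields reference in this README; the values are illustrative):

```shell
# Build a structured JSON log line of the kind the backend ships to
# Logstash over UDP port 5000.
log_line=$(printf '{"severity":"%s","context_type":"%s","trace_id":"%s","log_message":"%s","duration_ms":%d}' \
  "info" "request" "req-abc123" "GET /health 200" 4)
echo "$log_line"

# Real usage is fire-and-forget UDP (bash-specific /dev/udp syntax):
#   echo "$log_line" > /dev/udp/localhost/5000
```

Because the transport is UDP, a slow or absent Logstash never blocks the backend request path.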
Available Dashboards:
- Event Timeline - Real-time volume of all domain events (24h)
- Error Analysis - Error distribution by severity and trends (7d)
- User Activity Breakdown - Top users and event types (7d)
- System Health & Performance - Failed jobs and handler latency (24h)
- Complete Audit Trail - Searchable record of all 34 event types (30d)
Quick Searches:
Access these pre-built saved searches in Kibana → Discover:
- All Logs - View complete system logs
- Errors & Warnings - Filter for severity: error, fatal, warn
- Domain Events - Pure event-driven audit trail (context_type: event)
- Failed Background Jobs - job.failed events from outboxPublisher, autoCheckoutJob, eventConsumer
- Event Handler Performance - Handler execution logs with timing metrics
Common Queries (KQL - Kibana Query Language):
# All errors in last hour
severity: error OR severity: fatal
# Events by specific user
actor_id: 123
# Failed operations affecting a dog
dog_id: 456 AND severity: error
# Trace event workflow
event_type: (friend.request.sent OR friend.request.accepted)
# Handler performance over threshold
duration_ms > 500
# All system changes in a park
park_id: 789
Log Fields Reference:
Key searchable fields automatically extracted by Logstash:
- `@timestamp` - Event timestamp (UTC, indexed for fast queries)
- `severity` - Log level: debug, info, warn, error, fatal
- `context_type` - Log category: event (domain events), request, error
- `event_id` - UUID of the domain event
- `event_type` - Type of domain event (e.g., friend.request.sent)
- `actor_id` - User who triggered the event
- `user_id`, `dog_id`, `park_id`, `organization_id` - Affected resources
- `duration_ms` - Execution time in milliseconds (for handlers/jobs)
- `error.message`, `error.stack` - Exception details
- `trace_id` - Request correlation ID for tracing
Full field reference: See kibana/README.md
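For orientation, a single indexed log document built from these fields might look like this (values are illustrative, not real data):

```json
{
  "@timestamp": "2026-01-15T09:30:00.000Z",
  "severity": "info",
  "context_type": "event",
  "event_id": "3f2a9c1e-7b4d-4e8a-9c21-6d5f0a1b2c3d",
  "event_type": "friend.request.sent",
  "actor_id": 123,
  "user_id": 456,
  "duration_ms": 42,
  "trace_id": "req-8a7b6c"
}
```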
Monitoring Best Practices:
Daily:
- Open System Health & Performance dashboard
- Check for `job.failed` events (indicates stuck background jobs)
- Review handler performance (watch for > 500ms outliers)
Weekly:
- Open Complete Audit Trail dashboard
- Sample events to verify audit trail accuracy
- Check Error Analysis for recurring patterns
Incident Response:
- Use Complete Audit Trail → Filter by timestamp/user/resource
- Use Error Analysis → Identify when errors started/stopped
- Export relevant logs (Kibana → inspect panel → download CSV)
Troubleshooting:
No logs appearing in Kibana?
- Verify Logstash is running: `docker compose logs logstash`
- Check Elasticsearch has data: `curl 'http://localhost:9200/dogparkpals-logs-*/_count'`
- Generate test logs: `bash elasticsearch/generate-test-logs.sh`
- Run setup script: `bash kibana/setup-kibana.sh`
- Ensure backend is logging: `docker compose logs backend`
"No matching indices" error?
- Wait 30 seconds for first log to arrive
- Refresh index pattern: Kibana → Stack Management → Index Patterns → dogparkpals-logs-* → Refresh fields
Kibana is slow?
- Use shorter time range (Last 24h instead of Last 30d)
- Add more specific filters
- Archive indices older than 30 days (advanced: see Elasticsearch docs)
Disk space filling up?
- Check the retention policy is active: `curl http://localhost:9200/_ilm/policy/dogparkpals-logs-ilm`
- Manually delete old indices: `curl -X DELETE 'http://localhost:9200/dogparkpals-logs-2026.01.*'`
- Reduce the retention period: edit elasticsearch/ilm-policy.json
Complete Documentation:
- Kibana setup and log retention: kibana/README.md
- Dashboard details and examples: kibana/DASHBOARDS.md
- Logstash pipeline: logstash/pipeline/dogparkpals.conf
- Elasticsearch index template: elasticsearch/index-template.json
- Elasticsearch ILM policy: elasticsearch/ilm-policy.json
- Ensure `EVENT_BUS_ENABLED` is not `false` in docker-secrets.
- Check outbox publisher logs for publish success.
- Verify DLQ size in RabbitMQ UI if retries are exhausted.
- Run migrations (db-init should do this):
docker exec -it dogparkpals-backend npx prisma migrate deploy
- Full stack startup with seeding:
./scripts/stack.sh --fresh
- Core services only:
./scripts/stack.sh --core-only
- Reset all data:
./scripts/stack.sh --clean
- Verify ELK stack (logs are flowing to Elasticsearch):
  bash elasticsearch/generate-test-logs.sh
  curl 'http://localhost:9200/dogparkpals-logs-*/_count'
- Emit backup started:
docker compose run --rm backup-events started --backupId=backup-local --target=db --storage=local
- Emit backup succeeded:
docker compose run --rm backup-events succeeded --backupId=backup-local --sizeBytes=123456 --durationMs=120000
- Emit backup failed:
docker compose run --rm backup-events failed --backupId=backup-local --error="backup failed"
- Outbox publish failures are logged as `job.failed` events (see backend logs).
- Event consumer start failures are logged as `job.failed` events.
- Auto-checkout job failures are logged as `job.failed` events.
docker compose down
- Ensure Node 20+ is installed: `node --version`
- Backend uses a local SQLite database at backend/dev.db (created automatically).
- Setup:
  cd backend
  npm install
  npx prisma generate                  # generates Prisma client
  npx prisma migrate dev --name init   # first time only
  npx prisma db seed                   # optional: seeds test data; configured via package.json Prisma hook
- Start dev server: `npm run dev` (TypeScript watch mode) or `npm run build && node dist/server.js` (production)
- Server listens on https://localhost:3000
- Health check: GET /health or GET /status
- Install dependencies: `npm install`
- (Optional) Create a .env file if you need a custom API URL (defaults to https://localhost:3000 if not specified):
  echo 'VITE_API_URL=https://localhost:3000' > .env
- Start the frontend dev server: `npm run dev`
- Frontend app runs on https://localhost:5174
  - http://localhost:5173 returns a 308 redirect to https://localhost:5174
- Open in browser: https://localhost:5174
- If backend OAuth callbacks are used locally, set FRONTEND_URL=https://localhost:5174 in backend/.env
- Do not commit backend/prisma/generated/client/ (generated Prisma client; run `npx prisma generate` after pulling).
- Do not commit backend/dev.db (local SQLite database).
- Do not commit docker-secrets (Docker environment variables).
- Prisma migrations in backend/prisma/migrations are versioned; run `npx prisma migrate deploy` after pulling to sync.
- Environment: `DATABASE_URL=file:./dev.db` is set in backend/.env for local SQLite development.
- The Prisma config file prisma.config.ts has been removed; Prisma reads schema.prisma and the seed hook from package.json.
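The seed hook mentioned above follows Prisma's standard package.json convention; a plausible sketch (the exact command in this repo may differ):

```json
{
  "prisma": {
    "seed": "ts-node prisma/seed.ts"
  }
}
```

With this in backend/package.json, `npx prisma db seed` (and `migrate reset`) will run the seed script automatically.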
- Copilot: Code Reviews, tests
- Youtube: ORM and database schema basics
- Pineapple Pizza: Morale Boosting
- Gregory Pellechi: Product Owner
- Laura Guillen: Product Manager
- Renato de Moraes Bonilha: Tech Lead
- Jules Pierce: Developer & Team Mascot
- Mark Byrne: Developer
- Tasks: Kanban Board on Github Projects
- Repository: Github https://github.com/OneGameDad/DogParkPals
- Diagrams, Meeting Notes, etc: Miro
- Discord: Communication
- Meetings: In-person at the Hive (averaged 1 a week), online (see Discord)
- Documentation: Markdown files, code comments
- Vite
- React
- NodeJS
- Express
- Typescript
- SQLite (Database)
- Prisma 6 (ORM)
- Jest (Backend unit tests)
- Supertest (Backend integration tests)
- RabbitMQ (Queue & Messaging)
- Prometheus (Metrics & Monitoring)
- Grafana (Dashboards & Visualization)
- Elasticsearch (Centralized Logging)
- Logstash (Log Processing & Enrichment)
- Kibana (Log Search & Dashboard)
Reasoning: all of these are commonly used technologies, requested or required in many job advertisements, and they are well documented and supported. The ELK stack provides an immutable event-driven audit trail, centralized logging, and compliance-ready dashboards.
The database is built with SQLite and managed by Prisma ORM. Below is an overview of the core models and their relationships:
User - User accounts with profile information, authentication, and relationships
- Authentication: email, password_hash, username
- Profile: first_name, last_name, profilePictureUrl, latitude, longitude
- Roles: CLIENT, DEVELOPER, ADMIN
- Relationships: dog ownerships, check-ins, organizations, friendships, enemies, messages, notifications, events, achievements, levels
Dog - Dog profiles with breed, size, and play style information
- Attributes: name, breed (extensive enum of 500+ breeds), gender, size (TOY, SMALL, MEDIUM, LARGE, GIANT, KAIJU), playstyle (SOCIAL, SHY, AGGRESSIVE, ENERGETIC, CALM)
- Health/Care: dateOfBirth, fixed, vaccinationRecordUrl
- Relationships: owner records, check-ins, friendships, enemies
Park - Dog park locations with amenities and descriptions
- Location: name, latitude, longitude
- Features: description, separateSmallDogArea, amenities (JSON), profilePictureUrl
- Relationships: events, comments, check-ins, users (favorites)
Event - Park events with organizers and attendees
- Details: title, description, date, startTime, endTime
- Settings: privacy (PUBLIC, PRIVATE)
- Relationships: park, organization (optional), organizer, attendees, comments
Organization - Groups for coordinating events and managing members
- Information: name, profilePictureUrl, websiteUrl, description
- Membership: owner, members with roles (INVITEE, MEMBER, MODERATOR, OWNER, BANNED)
- Relationships: events, members
Friendship - Connections between users/dogs with status tracking
- Status: PENDING, ACCEPTED, REJECTED, BLOCKED
- Supports both user-to-user and dog-to-dog friendships
Enemies - Blocked/avoid list for users and their dogs
- Owner-managed list of users/dogs to avoid
Messages - Direct messaging between users with delivery status and real-time WebSocket support
- Status: SENT, DELIVERED, READ, ARCHIVED, DELETED
- Real-time message delivery via WebSocket (Socket.io)
- Typing indicators and read receipts
- Indexed for efficient queries
- See WEBSOCKET_MESSAGING.md for details
CheckIn - Track when users and dogs visit parks
- Records: userId, dogId (optional), parkId, checkedInAt, checkedOutAt
- Enables real-time presence tracking at parks
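From the fields listed above, the CheckIn model can be sketched in Prisma schema form (attribute and relation details here are assumptions, not the actual schema):

```prisma
// Sketch of the CheckIn model; checkedOutAt stays null while the visit is active.
model CheckIn {
  id           Int       @id @default(autoincrement())
  userId       Int
  dogId        Int?      // optional: a user can check in without a dog
  parkId       Int
  checkedInAt  DateTime  @default(now())
  checkedOutAt DateTime?

  user User @relation(fields: [userId], references: [id])
  dog  Dog? @relation(fields: [dogId], references: [id])
  park Park @relation(fields: [parkId], references: [id])
}
```

A null `checkedOutAt` is what lets the app (and the auto-checkout job) identify who is currently at a park.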
Achievements - Badges and trophies earned by users
- Types: BADGE, TROPHY, CERTIFICATE
- Linked to UserAchievement join table for tracking earned achievements
Levels - User progression levels with points thresholds
- Attributes: name, minPoints, maxPoints, badgeUrl
- Users earn experience points (ExpPoints) and progress through levels
Notifications - User alerts for various activities with real-time WebSocket delivery
- Types: FRIENDSHIP_REQUEST, FRIENDSHIP_ACCEPTED, MESSAGE_RECEIVED, EVENT_INVITATION, EVENT_REMINDER, ACHIEVEMENT_EARNED, LEVEL_UP, COMMENT_REPLY, PARK_REVIEW, ORGANIZATION_INVITE
- Real-time push notifications via WebSocket (Socket.io)
- See WEBSOCKET_NOTIFICATIONS.md for details
DogOwner - Join table linking users to their dogs
UserFavoritePark - Join table for users' favorite parks
OrganizationMember - Join table with membership roles
EventAttendance - Join table tracking event attendees
Comment - Comments on parks and events
UserLevel - Join table for user progression
UserAchievement - Join table for earned achievements
- User Profiles
- Dog Profiles
- Friends List
- Enemies List
- Messages
- Notifications
- Favorite Park
- Organizations
- Events
- Parks
- Checkins
- Achievements, Levels & Badges
- Advanced Search
- Localization (English, Finnish, Spanish)
- Remote Auth (Google Login)
- Multibrowser Support
- Metrics
- Dashboards & Visualizations
| Module | Points |
|---|---|
| Web Framework (Frontend: React, Backend: Express) | 2 |
| User Interaction (Profile, Chat, Friends) | 2 |
| ORM (Prisma) | 1 |
| Notifications | 1 |
| File Upload System (jpg, png, pdf) | 1 |
| Custom Design System | 1 |
| User Management System | 2 |
| Advanced Permissions System | 2 |
| Organizations System | 2 |
| Achievements, Levels & Badges (Gamification) | 1 |
| Advanced Search | 1 |
| Localization (English, Finnish, Spanish) | 1 |
| Remote Auth (Google Login) | 1 |
| Multibrowser Support | 1 |
| Health check & status page system w/ backups, etc | 1 |
| Monitoring System w/ Prometheus + Grafana | 2 |
| Centralized Logging & Audit Trail w/ ELK Stack | 2 |
| Total: | 24 |
- Notifications
- File Uploads
- Localization
- Messaging
- Custom Design System
- Frontend Design
- Frontend Functionality
- Frontend Structure
- Advanced Search Frontend
- Achievements, Levels, Badges
- Authorization & Authentication
- Remote Authentication
- Users
- Docker
- Advanced Search Backend
- Database Schema
- Dogs
- Parks
- Organizations
- Events
- Friends
- Enemies
- Testing Framework
- Backend Refactor (Event-Driven Architecture)
- Setup RabbitMQ
- Setup Prometheus + Grafana
- Setup ELK