
Kortexa Weather - a simple Porch demo ability#230

Open
francip wants to merge 3 commits into openhome-dev:dev from kortexa-ai:kortexa/porch

Conversation


@francip francip commented Mar 28, 2026

What does this Ability do?

Kortexa Weather is a simple weather ability that also demonstrates how to render UI through Porch app.

Suggested Trigger Words

  • what's the weather

Type

  • New community Ability
  • Improvement to existing Ability
  • Bug fix
  • Documentation update

External APIs

Testing

  • Tested in OpenHome Live Editor
  • All exit paths tested (said "stop", "exit", etc.)
  • Error scenarios tested (API down, bad input, etc.)

Checklist

  • Files are in community/my-ability-name/
  • main.py follows SDK pattern (extends MatchingCapability, has register_capability + call)
  • README.md included with description, suggested triggers, and setup
  • resume_normal_flow() called on every exit path
  • No print() — using editor_logging_handler
  • No hardcoded API keys — using placeholders
  • No blocked imports (redis, user_config)
  • No asyncio.sleep() or asyncio.create_task() — using session_tasks
  • Error handling on all external calls
  • Tested in OpenHome Live Editor
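The SDK pattern named in the checklist (extends MatchingCapability, has register_capability + call, calls resume_normal_flow() on every exit path, logs via editor_logging_handler) can be sketched roughly as below. Only the names come from the checklist; the base-class stub, method signatures, and the session object are assumptions for illustration, not the real OpenHome SDK interface.

```python
import logging

# Stand-in for the SDK base class; the real MatchingCapability comes from
# the OpenHome SDK and its actual interface may differ (assumption).
class MatchingCapability:
    @classmethod
    def register_capability(cls):
        # Hypothetical registration hook, named after the checklist item.
        return cls()

# editor_logging_handler is named in the checklist; a plain stream handler
# stands in for it here.
editor_logging_handler = logging.StreamHandler()
logger = logging.getLogger("kortexa-weather")
logger.addHandler(editor_logging_handler)

class KortexaWeather(MatchingCapability):
    def call(self, session):
        # Fire-and-forget: do the work in one pass, then hand control back
        # to the normal flow on every exit path (checklist requirement).
        try:
            session.speak("It's 18 degrees and sunny.")
        except Exception as exc:
            # Error handling on the external call, logged instead of print().
            logger.error("weather lookup failed: %s", exc)
            session.speak("Sorry, something went wrong with the weather.")
        finally:
            session.resume_normal_flow()
```

Whether resume_normal_flow() lives on the session or is a module-level SDK function is also an assumption; the point is only that the finally block guarantees it runs on both the success and the error path.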

Anything else?

Screen.Recording.2026-03-28.at.01.57.15.mov
Screenshot 2026-03-28 at 01 53 15

@francip francip requested a review from a team as a code owner March 28, 2026 09:05

github-actions bot commented Mar 28, 2026

🔀 Branch Merge Check

PR direction: kortexa/porch → dev

Passed — kortexa/porch → dev is a valid merge direction


github-actions bot commented Mar 28, 2026

✅ Community PR Path Check — Passed

All changed files are inside the community/ folder. Looks good!


github-actions bot commented Mar 28, 2026

✅ Ability Validation Passed

📋 Validating: community/kortexa-weather
  ✅ All checks passed!


github-actions bot commented Mar 28, 2026

🔍 Lint Results

🔧 Auto-formatted

Some files were automatically cleaned and formatted with autoflake + autopep8 and committed.

  • Unused imports removed (autoflake)
  • Unused variables removed (autoflake)
  • PEP8 formatting applied (autopep8)

__init__.py — Empty as expected

Files linted: community/kortexa-weather/main.py

✅ Flake8 — Passed

✅ All checks passed!

@github-actions github-actions bot added the community-ability Community-contributed ability label Mar 28, 2026

@uzair401 uzair401 left a comment


Hi @francip, to proceed with approval, please remove the config.json file from the repo; it is now handled by the Live Editor and agent flow in the latest SDK, so it should not be included. I don't have edit access, so you'll need to delete it from your side.

Also, please review your ability against the following voice UX audit prompt to improve naturalness and overall user experience on voice devices:

"You are auditing a voice ability built for native US English speakers. The ability runs
on a smart speaker and all user interaction is spoken out loud — not typed. Review the
code below and find every place where the ability is brittle to natural spoken English,
has problematic voice output, or has UX issues specific to a voice device. Specifically
look for: 1. HARDCODED STRING MATCHING — any place that checks if a specific word or
phrase appears in user input (e.g. if "reschedule" in lower, if "yes" in response). For
each one, list 6-8 natural spoken English alternatives a native US speaker might say
instead. 2. LLM CLASSIFIER PROMPT EXAMPLES — any prompt that provides example phrases
to teach the LLM what the user might say. For each example phrase that sounds too formal
or textbook, suggest a natural spoken replacement. 3. EXIT / CONFIRMATION WORD LISTS —
any hardcoded list used to detect "stop", "yes", "no" or similar. For each list, add the
missing natural spoken variants. 4. VOICE OUTPUT PROBLEMS — any speak() string (or LLM
system prompt whose output goes directly into speak()) that contains: markdown
formatting (**, *, #, --), bullet points or numbered lists, emojis, URLs, or stage
directions like (pauses) or (laughs). Also flag any LLM system prompt that generates
spoken output but does not explicitly instruct the model to use plain spoken English
with no formatting. For each issue, quote the string and explain the fix. 5. RESPONSE
LENGTH — any speak() string that exceeds 30 words, or any LLM system prompt that does
not set a response length limit for spoken output. Suggest a tighter version of the
string or the length instruction to add to the prompt. Target: confirmations and
acknowledgements under 10 words, standard replies 1-2 sentences under 15 words, result
delivery 2-3 sentences max. 6. MENU-DRIVEN FLOW — if the ability contains three or more
sequential yes/no prompts asked one after another with no LLM routing between them (e.g.
repeated run_io_loop or speak + user_response calls each asking a yes/no question), flag
it. Note how many sequential prompts were found and suggest collapsing them into a
single open-ended question with LLM-based routing. For each issue found, return: - The
exact line or variable from the code - What is wrong or missing - The suggested fix or
addition Format your response as a numbered list, one issue per entry. Be specific —
quote the actual code. If the ability is already well-written across all six areas, say
so."
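Audit points 4 and 5 above are mechanical enough to script. A rough lint pass over speak() strings, illustrative only and not part of the SDK or the audit prompt itself, might look like:

```python
import re

MAX_WORDS = 30  # audit point 5: flag speak() strings over 30 words

def audit_speak_string(text):
    """Return a list of voice-output issues found in one speak() string."""
    issues = []
    # Point 4: markdown formatting and stage directions read terribly aloud.
    if re.search(r"(\*\*|\*|#|--|\(pauses\)|\(laughs\))", text):
        issues.append("contains markdown or stage directions")
    if re.search(r"https?://", text):
        issues.append("contains a URL")
    # Crude emoji check: characters outside the basic multilingual plane.
    # (Misses some emoji in lower ranges; good enough for a first pass.)
    if any(ord(ch) > 0xFFFF for ch in text):
        issues.append("contains emoji")
    # Point 5: flag over-long strings against the 30-word target.
    if len(text.split()) > MAX_WORDS:
        issues.append(f"over {MAX_WORDS} words")
    return issues
```

For example, `audit_speak_string("Sorry, couldn't get the weather for {city}.")` returns an empty list, while a string containing `**bold**` markup or a 40-word reply would each be flagged.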

Once these changes are made, I’ll review it again and proceed with merging.


@francip francip left a comment


Thanks @uzair401! Both items addressed:

1. config.json removed — deleted from the repo.

2. Voice UX audit results — ran the full 6-point audit. This ability is fire-and-forget (no user input parsing, no conversational loops, no LLM prompts), so it passed cleanly on 5 of 6 categories:

  • Hardcoded string matching: None — trigger words handled by the platform's MatchingCapability
  • LLM classifier prompts: None
  • Exit/confirmation word lists: None — no conversational loop
  • Response length: All speak() strings under 30 words
  • Menu-driven flow: None — single pass, no sequential yes/no prompts

Voice output fixes applied (4 lines):

  • "Couldn't determine your location" → "I can't find your location" (more natural)
  • "Couldn't get weather data for {city}" → "Sorry, couldn't get the weather for {city}" (softer, less technical)
  • "Feels like {x}. Humidity {y} percent." → "Feels like {x}, with {y} percent humidity." (flows as one spoken thought)
  • "Something went wrong getting the weather." → "Sorry, something went wrong with the weather." (softer opening)

francip and others added 3 commits March 30, 2026 09:18
- Delete config.json (handled by platform at runtime)
- Voice audit: soften error messages, improve spoken output naturalness

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
