Kortexa Weather - a simple Porch demo ability #230
francip wants to merge 3 commits into openhome-dev:dev
Conversation
🔀 Branch Merge Check — PR direction: ✅ Passed
✅ Community PR Path Check — Passed
✅ Ability Validation — Passed
🔍 Lint Results — 🔧 Auto-formatted: some files were automatically cleaned and formatted
uzair401
left a comment
Hi @francip, to proceed with approval, please remove the config.json file from the repo. It is now handled by the Live editor and agent flow in the latest SDK, so it should not be included. I don't have edit access, so you'll need to delete it on your side.
Also, please review your ability against the following voice UX audit prompt to improve naturalness and overall user experience on voice devices:
"You are auditing a voice ability built for native US English speakers. The ability runs
on a smart speaker and all user interaction is spoken out loud — not typed. Review the
code below and find every place where the ability is brittle to natural spoken English,
has problematic voice output, or has UX issues specific to a voice device. Specifically
look for: 1. HARDCODED STRING MATCHING — any place that checks if a specific word or
phrase appears in user input (e.g. if "reschedule" in lower, if "yes" in response). For
each one, list 6-8 natural spoken English alternatives a native US speaker might say
instead. 2. LLM CLASSIFIER PROMPT EXAMPLES — any prompt that provides example phrases
to teach the LLM what the user might say. For each example phrase that sounds too formal
or textbook, suggest a natural spoken replacement. 3. EXIT / CONFIRMATION WORD LISTS —
any hardcoded list used to detect "stop", "yes", "no" or similar. For each list, add the
missing natural spoken variants. 4. VOICE OUTPUT PROBLEMS — any speak() string (or LLM
system prompt whose output goes directly into speak()) that contains: markdown
formatting (**, *, #, --), bullet points or numbered lists, emojis, URLs, or stage
directions like (pauses) or (laughs). Also flag any LLM system prompt that generates
spoken output but does not explicitly instruct the model to use plain spoken English
with no formatting. For each issue, quote the string and explain the fix. 5. RESPONSE
LENGTH — any speak() string that exceeds 30 words, or any LLM system prompt that does
not set a response length limit for spoken output. Suggest a tighter version of the
string or the length instruction to add to the prompt. Target: confirmations and
acknowledgements under 10 words, standard replies 1-2 sentences under 15 words, result
delivery 2-3 sentences max. 6. MENU-DRIVEN FLOW — if the ability contains three or more
sequential yes/no prompts asked one after another with no LLM routing between them (e.g.
repeated run_io_loop or speak + user_response calls each asking a yes/no question), flag
it. Note how many sequential prompts were found and suggest collapsing them into a
single open-ended question with LLM-based routing. For each issue found, return: - The
exact line or variable from the code - What is wrong or missing - The suggested fix or
addition Format your response as a numbered list, one issue per entry. Be specific —
quote the actual code. If the ability is already well-written across all six areas, say
so."
Once these changes are made, I’ll review it again and proceed with merging.
francip
left a comment
Thanks @uzair401! Both items addressed:
1. config.json removed — deleted from the repo.
2. Voice UX audit results — ran the full 6-point audit. This ability is fire-and-forget (no user input parsing, no conversational loops, no LLM prompts), so it passed cleanly on 5 of 6 categories:
- Hardcoded string matching: None — trigger words handled by the platform's MatchingCapability
- LLM classifier prompts: None
- Exit/confirmation word lists: None — no conversational loop
- Response length: All speak() strings under 30 words
- Menu-driven flow: None — single pass, no sequential yes/no prompts
Voice output fixes applied (4 lines):
"Couldn't determine your location"→"I can't find your location"(more natural)"Couldn't get weather data for {city}"→"Sorry, couldn't get the weather for {city}"(softer, less technical)"Feels like {x}. Humidity {y} percent."→"Feels like {x}, with {y} percent humidity."(flows as one spoken thought)"Something went wrong getting the weather."→"Sorry, something went wrong with the weather."(softer opening)
Commit messages:
- Delete config.json (handled by platform at runtime)
- Voice audit: soften error messages, improve spoken output naturalness

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
What does this Ability do?
Kortexa Weather is a simple weather ability that also demonstrates how to render UI through Porch app.
Suggested Trigger Words
Type
External APIs
Testing
Checklist
- community/my-ability-name/main.py follows SDK pattern (extends MatchingCapability, has register_capability + call)
- README.md included with description, suggested triggers, and setup
- resume_normal_flow() called on every exit path
- No print() — using editor_logging_handler
- No direct platform internals (redis, user_config)
- No asyncio.sleep() or asyncio.create_task() — using session_tasks

Anything else?
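The SDK-pattern items in the checklist can be sketched roughly as follows. The base class and session methods below are local stand-ins for the real Porch SDK (whose exact signatures are not verified here); only the identifier names (MatchingCapability, register_capability, call, resume_normal_flow) come from the checklist:

```python
# Hypothetical stand-ins for the real SDK classes, for illustration only.
class MatchingCapability:
    @classmethod
    def register_capability(cls) -> "MatchingCapability":
        return cls()

class Session:
    """Minimal fake session recording what the ability does."""
    def __init__(self) -> None:
        self.spoken: list[str] = []
        self.resumed = False

    def speak(self, text: str) -> None:
        self.spoken.append(text)

    def resume_normal_flow(self) -> None:
        self.resumed = True

class KortexaWeather(MatchingCapability):
    def call(self, session: Session) -> None:
        try:
            # A real ability would fetch weather here; keep replies
            # short and spoken-sounding per the voice UX audit.
            session.speak("Feels like 72 degrees, with 40 percent humidity.")
        except Exception:
            session.speak("Sorry, something went wrong with the weather.")
        finally:
            # Checklist item: resume_normal_flow() on every exit path
            session.resume_normal_flow()
```

The try/finally is the key point: whether the weather fetch succeeds or raises, control is handed back to the platform.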
Screen.Recording.2026-03-28.at.01.57.15.mov