[FEAT]: Real-Time Severity Keyword Flagging #299

@Cubix33


📝 Description

Right now, the AI reads the audio transcript only to find answers for the PDF fields. This feature additionally turns it into an "alarm system": while reading the transcript, it will scan for high-severity keywords like "gun," "heart attack," "trapped," or "hazmat." If any of these appear, it will automatically flag the entire generated report as a high-priority emergency.

💡 Rationale

In emergency services, speed and triage are everything. After a massive incident, supervisors might have 50 different forms waiting for their signature. They need to know which ones require immediate attention. A report about a twisted ankle can wait; a report about an active fire or a trapped civilian needs to jump to the front of the line. By flagging these instantly, FireForm moves from a simple "paperwork helper" to an active safety tool.

🛠️ Proposed Solution

We will create a list of "Trigger Words" and do a quick scan of the transcript before we even start asking the LLM questions.

  • Logic change in src/llm.py: Add a list of high-severity keywords (e.g., ["gun", "cardiac", "trapped"]) and write a simple function to search self._transcript_text for them.
  • JSON Update: If a word is found, inject a new tag like "SEVERITY": "HIGH" directly into the self._json dictionary so the rest of the system knows it's an emergency.
  • Logic change in src/filler.py: If the high-severity flag is present in the data, prefix the output file name with URGENT_ (e.g., URGENT_file_20260320_filled.pdf) and write a warning at the top of the Audit Trail text file.
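The src/llm.py side could look something like the sketch below. The keyword list and the `flag_severity` helper name are illustrative, not existing code; only `self._transcript_text` and `self._json` come from the proposal, so the sketch is written as a free function over those values.

```python
import re

# Illustrative trigger list; the real list would live in src/llm.py
# and be extended as the department sees fit.
HIGH_SEVERITY_KEYWORDS = ["gun", "cardiac", "heart attack", "trapped", "hazmat"]

def flag_severity(transcript_text: str, report_json: dict) -> dict:
    """Tag the report dict with a severity flag if any trigger word appears.

    Word-boundary matching keeps e.g. "entrapped" from tripping "trapped".
    """
    text = transcript_text.lower()
    for keyword in HIGH_SEVERITY_KEYWORDS:
        if re.search(r"\b" + re.escape(keyword) + r"\b", text):
            report_json["SEVERITY"] = "HIGH"
            report_json["SEVERITY_TRIGGER"] = keyword  # useful for the audit trail
            break
    return report_json
```

In src/llm.py this would run on `self._transcript_text` right after transcription, mutating `self._json` before any LLM requests are made.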

✅ Acceptance Criteria

  • Trigger Test: If the input audio contains the phrase "heart attack," the final PDF file name starts with URGENT_ and the JSON contains the high-severity flag.
  • Normal Test: If the input audio is just a routine report, it processes normally without any urgent tags.
  • Performance: The scan is a plain in-process string search and adds no extra, time-consuming requests to the Ollama server.
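On the src/filler.py side, the trigger and normal cases above could be exercised against a small hypothetical helper (`output_name` is not an existing function; it just illustrates the intended renaming rule):

```python
from pathlib import Path

def output_name(filled_pdf: Path, report_json: dict) -> Path:
    """Prefix the filled PDF's name with URGENT_ when the severity flag is set."""
    if report_json.get("SEVERITY") == "HIGH":
        return filled_pdf.with_name("URGENT_" + filled_pdf.name)
    return filled_pdf

# Trigger case: the flag renames the file.
urgent = output_name(Path("file_20260320_filled.pdf"), {"SEVERITY": "HIGH"})
# Normal case: a routine report keeps its name.
normal = output_name(Path("file_20260320_filled.pdf"), {})
```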

📌 Additional Context

We are intentionally using a simple keyword search in Python rather than asking the LLM to rate the severity. This keeps the check deterministic, adds effectively zero latency, and avoids spending time and memory on another heavy AI prompt. It is a lightweight solution with meaningful impact.
