
Feature Request: Add HarmActionsEval for evaluation of AI agent action safety #1553

@prane-eth

Description

Background

AI agent adoption is growing across the industry, including in critical applications. Agents with tool access can currently invoke tools directly, with no centralized validation layer inspecting those calls before execution, so harmful or disallowed tool calls can run without oversight (a minimal sketch of such a layer appears below).
Existing agent benchmarks evaluate the safety of final responses rather than the actions an agent takes along the way.
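
To make the missing layer concrete, here is a minimal sketch of what a centralized tool-call validation layer could look like. The `ToolCallGuard` class, its policy interface, and the example tool are illustrative assumptions of mine, not the Agent-Action-Guard API:

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class ToolCall:
    """A tool invocation requested by an agent (hypothetical shape)."""
    name: str
    arguments: dict[str, Any] = field(default_factory=dict)

class ToolCallGuard:
    """Centralized validation layer: every tool call goes through
    `execute`, which runs registered policy checks before the tool."""

    def __init__(self) -> None:
        self._tools: dict[str, Callable[..., Any]] = {}
        self._policies: list[Callable[[ToolCall], bool]] = []

    def register_tool(self, name: str, fn: Callable[..., Any]) -> None:
        self._tools[name] = fn

    def add_policy(self, check: Callable[[ToolCall], bool]) -> None:
        """A policy returns True if the call is allowed."""
        self._policies.append(check)

    def execute(self, call: ToolCall) -> Any:
        # Block the call if any policy rejects it.
        if any(not check(call) for check in self._policies):
            raise PermissionError(f"Blocked disallowed tool call: {call.name}")
        return self._tools[call.name](**call.arguments)

# Example: block shell commands that delete files recursively.
guard = ToolCallGuard()
guard.register_tool("run_shell", lambda command: f"ran: {command}")
guard.add_policy(lambda call: not (
    call.name == "run_shell" and "rm -rf" in call.arguments.get("command", "")
))

print(guard.execute(ToolCall("run_shell", {"command": "ls"})))   # allowed
# guard.execute(ToolCall("run_shell", {"command": "rm -rf /"}))  # raises PermissionError
```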

The HarmActionsEval benchmark evaluates actions instead. It found that 80% of the LLMs tested executed the requested action on the first attempt for over 95% of the harmful prompts.
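
For clarity, a simplified sketch of the action-level metric: score an agent by the fraction of harmful prompts for which it attempts at least one tool call on the first try. The `run_agent` interface, the dummy agent, and the prompts here are placeholders, not the benchmark's actual implementation:

```python
from typing import Callable

def evaluate_action_safety(
    run_agent: Callable[[str], list[dict]],
    harmful_prompts: list[str],
) -> float:
    """Fraction of harmful prompts for which the agent attempted at
    least one tool call on the first try (lower is safer)."""
    acted = sum(1 for prompt in harmful_prompts if run_agent(prompt))
    return acted / len(harmful_prompts)

# Dummy agent that always tries to act, for demonstration only.
def naive_agent(prompt: str) -> list[dict]:
    return [{"tool": "run_shell", "arguments": {"command": prompt}}]

harmful_prompts = ["delete all user data", "exfiltrate the API keys"]
print(evaluate_action_safety(naive_agent, harmful_prompts))  # 1.0
```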

Related work: https://github.com/Pro-GenAI/Agent-Action-Guard.

Proposed change

Integrate HarmActionsEval into this project as an evaluation of agent action safety.

If the maintainers express interest, I will write the code and open a pull request.
