Add [skill] emote-behavioral-contracts#84

Open
rogerod wants to merge 3 commits into figma:main from rogerod:main

Conversation


@rogerod rogerod commented Apr 8, 2026

Adds emote-behavioral-contracts to the AI Behavior section. New category for skills that govern AI behavior at trust-critical moments in UI interactions.

#### emote-behavioral-contracts

[SOURCE CODE](https://github.com/rogerod/emote-behavioral-contracts) · [MIT](https://github.com/rogerod/emote-behavioral-contracts/blob/main/LICENSE)
@akbarbmirza (Collaborator) commented:
This seems like a low-quality PR: I don't see a license attached to the repo, and it doesn't list the right MCP tools. It will be rejected if these issues are not addressed.

@rogerod (Author) commented:

Hi @akbarbmirza — both issues addressed:

  1. MIT license file added to the repo root: https://github.com/rogerod/emote-behavioral-contracts/blob/main/LICENSE
  2. MCP tools corrected to `get_design_context`, `get_screenshot`, `get_metadata`

Let me know if anything else is needed.

@akbarbmirza (Collaborator) commented:

The skill itself doesn't seem to be using any of those tools. Has this skill been tested with the Figma MCP server?

@rogerod (Author) commented:

Hi @akbarbmirza — yes, just tested it end to end against a Figma frame annotated with Emote components.

The skill called `get_design_context`, `get_metadata`, and `use_figma` in sequence, read the annotation components from the frame, and produced a JSON component manifest and a system prompt block with the correct patterns and behavioral tokens.

I've updated the README entry to reflect those three tools. The SKILL.md uses `allowed-tools: figma_mcp` as a server reference — the skill instructs the agent to call whatever Figma MCP tools are needed to read the frame, which in practice are those three.
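
For context, a minimal sketch of what a SKILL.md header using a server-level reference might look like. This is illustrative only — apart from `allowed-tools: figma_mcp` and the skill name, which appear in this thread, the field values are assumptions, not taken from the actual repo:

```yaml
---
# Hypothetical SKILL.md frontmatter sketch; only the skill name and
# allowed-tools value are confirmed by the discussion above.
name: emote-behavioral-contracts
description: >
  Reads Emote annotation components from a Figma frame and emits a
  component manifest plus a system-prompt block of behavioral tokens.
# Server-level reference: grants access to the Figma MCP server rather
# than enumerating individual tools such as get_design_context.
allowed-tools: figma_mcp
---
```

The trade-off of a server-level reference is that the tools actually invoked (here, reportedly `get_design_context`, `get_metadata`, and `use_figma`) are determined at runtime by the agent, which is why the README listing and the observed calls can drift apart.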
