yr-96/meme-library


表情包.skill

简体中文 · English


Demo

Real chat in action — the AI automatically attaches a burger-munching kitten meme mid-conversation

You type nothing more than "yum", and the AI picks the burger-munching kitten out of your meme library and sends it back. The meme isn't generated — it's selected from your own library as the best fit for the current conversational context.


Simplified Chinese

A hierarchical, vision-auto-tagged personal meme/sticker library — packaged as an Agent Skill that any skill-capable LLM agent (hermes, openclaw, Claude Code, Cursor, …) can drive directly.

Organise your memes the way you'd organise a music library — auto-tagged, browsable, and ready to surface the best match the second a conversation calls for one.

What it does (and doesn't)

This skill manages memes; it does not send them.

  • Manages: ingest images (single or in bulk) → a vision model auto-tags them → file them under semantic filenames → organise them into collections → pick the best fit for the current conversation context.
  • Doesn't send: delivery is tightly coupled to the channel (WeChat / Telegram / Slack / Discord / your own bot). That part belongs in your own "sender skill", which simply consumes the image path this library returns.

This split is deliberate — the management logic is generic and worth sharing; the sending logic varies per person and stays yours.


🤖 30-second install — let your AI do it

Copy the block below and send it to your agent (hermes / openclaw / Claude Code / Cursor all work):

Please install the meme-library skill for me:

  1. git clone https://github.com/yr-96/meme-library.git ~/.local/share/meme-library (or git pull if the directory already exists).
  2. Run bash ~/.local/share/meme-library/install.sh — it auto-detects my current agent platform (hermes / openclaw / claude / cursor, in priority order) and symlinks the skill into the matching skills/ directory.
  3. Run python3 ~/.local/share/meme-library/scripts/verify_install.py to check that dependencies and API keys are ready.
  4. If verify reports a missing ANTHROPIC_API_KEY (or another provider's key), tell me which shell rc file to add it to, then have me re-run verify.
  5. Save the suggested instruction block from the README's "Proactive mode" section verbatim into wherever this platform keeps long-term memory (user rules / ~/.claude/CLAUDE.md / ~/.hermes/instructions.md / ~/.openclaw/instructions.md / the equivalent). This step is what makes you ask yourself later, on your own, "does this reply call for a meme?" — without it, the skill only fires when I explicitly ask.
  6. Once everything is ✓, tell me I can try these three prompts:
    • "Save this image to the orange_cat collection" (then send an image)
    • "Here's a chiikawa sticker pack — build me a new collection" (then send a zip)
    • "Find me a meme — they still haven't replied to my message"

When the agent finishes, it will tell you everything is ready. After that you drive the skill purely through conversation — no more command line.

💡 No agent, or prefer to install by hand? See Manual install below.


How to use it once installed (conversation examples)

With the skill installed, just say the following to your agent — SKILL.md already tells it how to handle each request.

Ingest a single image

"Save this image to the orange_cat collection" (attach the image)

The agent calls a vision model to analyse the image, picks a semantic filename, writes meta.json, and reports back: "Saved cat_shock_03.jpg, tags: shock/dazed/orange cat, fits: a stunned reaction to unexpected news."

Build a new collection in bulk

"I just downloaded a chiikawa sticker pack (chiikawa.zip on my Desktop) — build me a new collection called chiikawa"

The agent unzips it, tags each image, writes meta.json, and finally distils the whole pack's "vibe fingerprint" (vibe / best_for / avoid_when) into index.json.

Have the agent send a meme proactively

"My friend just told me he got promoted — send a celebratory meme"

The agent reads index.json to find the best-matching collection, then reads that collection's meta.json to pick the closest fit, and hands the absolute path to your "sender skill".

Routine maintenance

"Run a health check on the library"

The agent runs health_check.py, reports orphan files, missing files, count drift, stale vibe fingerprints, and the like, and suggests regen_vibe.py --stale-only if a repair is warranted.


📦 Bundled sample library: 圆头耄耋 (one grumpy little wreck of a cat)

"宝 相 庄 严" ("the solemn majesty of a Buddha")

— the Chinese internet's unanimous reaction to Maodie pictures

This repo ships a real, working sample library at examples/sample_lib/ containing a single collection: maodie_cats (圆头耄耋). Its 15 images cover shock, showing off, cadre-giving-a-speech, imperial majesty, a thick-browed bespectacled Sherlock, acting cute, nagging for a reply, condescending dismissal… essentially one cat performing every facial drama your social chats will ever need.

First, to be clear — "耄耋" here does not mean an eighty-year-old

This is 圆头耄耋 ("Round-Headed Maodie"), a grumpy stray orange tomcat born in Hefei, Anhui in 2020, originally nicknamed "the yellow-furred cat burglar". "耄耋" (màodié) is a homophone pun on "猫爹" ("cat daddy"); "圆头" ("round head") comes from its airplane-ears mode — ears flattened so tightly that, head-on, its skull looks as round as a watermelon. The blogger @白手套和马犬旺财 filmed it breaking in to steal cat food, hissing in triplicate and going full Spinosaurus on camera, and it thereby became a living fossil of the Chinese internet's "hajimi" subculture, the face of Joyoung's "Hajimi north-and-south mung bean milk", and the eternal heartbreak of 爱猫 TV. It ascended to the Hajimi planet in March 2025, but the memes live on forever.

That blend of absurdism, contrast, and passive-aggressive snark is exactly the best stress test for meme-library's "pick by vibe" pipeline — a collection with a flat, lifeless vibe always surfaces images that feel generic and off-key, while the more specifically Maodie's vibe is written, the more often retrieval lands on the "yes, exactly that one" response.

30 seconds to try it

# No need to copy anything into ~/.meme-library — just point straight at the sample
export MEME_LIB_PATH="$(pwd)/examples/sample_lib"

# Retrieval — keyword-overlap fallback; Chinese queries are segmented too
python scripts/search.py --context "对方半天不回消息我都快气死了" --top-k 1
# ("they've ignored my message for ages and I'm fuming")
# → cat_boss_01.jpeg (the icy "answer me now" one)

python scripts/search.py --context "听到一个让我大受震撼的消息"
# ("I just heard deeply shocking news")
# → cat_shock_01.jpg / cat_o_01.jpg / cat_pointing_shock_01.jpg

python scripts/search.py --context "想嘲讽一下对方资历不够还在跟我争"
# ("mocking someone under-qualified who keeps arguing with me")
# → cat_leader_01.jpg (the suit-and-flag-pin cadre one)

python scripts/search.py --context "深藏功与名地装一下"
# ("quietly showing off while pretending not to")
# → cat_scheming_01.jpg (the sly, smug half-smile one)

Or, more directly — once installed, tell your agent:

"I'm in a great mood, I just landed a project — find me a meme to celebrate"

If you've written the habit into your agent's long-term memory per Proactive mode above, you don't even need to ask — the agent will pick one and throw it in your face on its own.

💡 Want your own images instead? Point MEME_LIB_PATH back at ~/.meme-library and ingest with add_meme.py / add_collection.py — sample_lib is just a demo and won't touch your real library.


🪄 Proactive mode (make the agent remember on its own)

With the skill merely installed, the agent only uses it when you explicitly ask for a meme. If you want it to evaluate every reply for "does this moment call for a picture?", save the block below into wherever your platform stores long-term preferences:

| Agent platform | Where to store it |
| --- | --- |
| Cursor | Settings → Rules → User Rules |
| Claude Code | ~/.claude/CLAUDE.md (global) or a project-level CLAUDE.md |
| hermes | ~/.hermes/instructions.md (or whichever global instruction file its docs designate) |
| openclaw | ~/.openclaw/instructions.md (likewise) |
| Other | wherever your agent's "system prompt / persistent memory" docs say |

Suggested instruction block (copy verbatim; feel free to tweak thresholds/tone):

I have the meme-library skill installed. From now on, follow these habits when talking to me:

【When to send proactively】
Before each reply, silently judge whether the moment calls for a meme/sticker:
  ✓ Fits: I shared an emotion (joy / frustration / shock / resignation / angry laughter), told a funny story, an obvious celebration/comfort/roast moment, the mood is light, nobody is in a rush
  ✗ Doesn't fit: serious topics, critical technical/work discussion, I'm clearly focused, the mood is heavy, I just said "stop sending memes" or similar, or the last one you sent got no reaction

【How to pick】
When you decide to send, follow Workflow 3 in meme-library's SKILL.md:
  1. Read $MEME_LIB_PATH/index.json
  2. Use vibe + best_for + avoid_when to pick the 1–2 best-matching collections
  3. Read those collections' meta.json and pick the image whose occasion fits best
  4. Append the absolute path to the end of your reply (or hand it to my sender skill)

【Self-restraint】
- At most one meme per turn
- Don't repeat the same collection within 3 turns
- After 3 consecutive "doesn't fit" judgments, skip the evaluation entirely for the next 5 turns to keep the noise down
- When in doubt, don't send — not sending always beats sending the wrong one
- If I say "stop sending", "knock it off", or anything similar, no more memes for the rest of the session

Once saved, the agent will follow this rhythm from its next conversation with you — no more shouting "find me a meme" every time.

💡 Want proactive mode off? Delete the block, or just tell the agent "only send memes when I explicitly ask".


Why this design

Community "meme skills" today mostly take one of two approaches:

  1. Generate memes on the fly (via a public API such as memegen.link).
  2. Hard-code a flat list in markdown.

Neither scales past ~20 images. meme-library borrows the progressive disclosure pattern that Anthropic's Agent Skills spec itself uses:

  • index.json stays lightweight and is loaded every time — it lists only each collection's "vibe fingerprint" (a one-sentence personality summary plus best_for / avoid_when lists).
  • Each collection's meta.json is loaded only when the index suggests that collection is relevant.

Result: you can grow to dozens of collections and hundreds of images without blowing up the agent's context window.

Library layout

$MEME_LIB_PATH/                  (default: ~/.meme-library/)
├── index.json                   # collection-level vibe fingerprints
├── orange_cat/
│   ├── meta.json
│   ├── cat_boss_01.jpeg
│   └── cat_shock_01.jpg
└── chiikawa/
    ├── meta.json
    └── cute_pleading_01.png

See references/schema.md for the full JSON schema.


Manual install

If you'd rather skip the conversational install, or need a non-standard location:

# 1. Clone
git clone https://github.com/yr-96/meme-library.git ~/.local/share/meme-library

# 2. Symlink into your agent's skills directory (pick the one you use)
ln -s ~/.local/share/meme-library  ~/.hermes/skills/meme-library      # hermes
ln -s ~/.local/share/meme-library  ~/.openclaw/skills/meme-library    # openclaw
ln -s ~/.local/share/meme-library  ~/.claude/skills/meme-library      # Claude Code
ln -s ~/.local/share/meme-library  ~/.cursor/skills/meme-library      # Cursor

# 3. Install your vision provider's SDK (pick one)
pip install anthropic          # default provider is claude
# or openai / google-generativeai

# 4. Configure env vars (add to ~/.zshrc or ~/.bashrc)
export ANTHROPIC_API_KEY="sk-..."
export MEME_VISION_PROVIDER=claude          # the default; may be omitted
export MEME_LIB_PATH="$HOME/.meme-library"  # the default; may be omitted

# 5. Run the verifier
python3 ~/.local/share/meme-library/scripts/verify_install.py

Or run the install script, which automates step 2 for you:

bash ~/.local/share/meme-library/install.sh

Use from the CLI (no agent required)

# Add a single image
python scripts/add_meme.py \
  --image ~/Downloads/funny_cat.jpg \
  --collection orange_cat

# Bulk-ingest a whole folder
python scripts/add_collection.py \
  --source ~/stickers/chiikawa/ \
  --collection chiikawa \
  --name "吉伊卡哇"

# Keyword-overlap fallback retrieval (an agent reading the JSON and matching semantically is more accurate)
python scripts/search.py --context "对方半天不回消息" --top-k 1

# Refresh a collection's vibe fingerprint (run periodically, or when health_check suggests it)
python scripts/regen_vibe.py --stale-only

# Library health check
python scripts/health_check.py

Plug in a custom vision provider

Implement the VisionProvider protocol — two methods, both returning structured dataclasses:

class MyProvider:
    name = "myprovider"

    def describe_image(self, image_path: Path) -> ImageTags: ...
    def summarise_collection(
        self, descriptions: list[dict]
    ) -> CollectionFingerprint: ...

Then register it in the _PROVIDERS dict in scripts/lib/vision.py, or switch to it via the MEME_VISION_PROVIDER=myprovider environment variable.

Pair it with a sender skill

meme-library only returns paths; sending is up to you. A minimal sender skill is about 30 lines:

import subprocess, json
import telegram_bot  # your own delivery module; chat_id comes from your bot's context

def react(context: str, recently_sent: set[str]):
    # Ask meme-library for the best match, skipping anything sent recently
    out = subprocess.check_output([
        "python", "scripts/search.py",
        "--context", context,
        "--exclude", ",".join(recently_sent),
    ])
    matches = json.loads(out)["matches"]
    if matches:
        telegram_bot.send_photo(chat_id, matches[0]["path"])
        recently_sent.add(matches[0]["path"])

The full behaviour contract lives in SKILL.md (when to send upfront, when to skip, how to handle opt-out, etc.) — those rules are generic, and any sender skill should follow them.

Roadmap

  • Replace keyword overlap with embeddings (as an optional upgrade)
  • Duplicate detection on ingest (perceptual hash)
  • A web UI for browsing the library
  • Publish to the LobeHub Skills Marketplace / openclaw index

Contributions welcome.

License

MIT — see LICENSE.


English

A hierarchical, vision-tagged personal meme/sticker library — packaged as an Agent Skill that any LLM coding agent (hermes, openclaw, Claude Code, Cursor, …) can drive.

Build a personal sticker pack the way you'd build a music library — tagged, browsable, and ready to be picked the moment a conversation calls for one.

What it is (and isn't)

This skill manages memes. It does not send them.

  • Manages: ingest images (single or in bulk), auto-tag them with a vision model, file them under semantic filenames, organise them into collections, and surface the right one for a given conversation context.
  • Doesn't send: delivery is channel-specific (WeChat, Telegram, Slack, Discord, your own bot). Wire meme-library into a thin "sender" skill that consumes the path it returns.

This separation is deliberate — the management logic is generic and worth sharing; the sending logic is yours.

🤖 30-second install — let your agent do it

Copy the block below and send it to your agent (works with hermes, openclaw, Claude Code, Cursor, …):

Please install the meme-library skill for me:

  1. git clone https://github.com/yr-96/meme-library.git ~/.local/share/meme-library (or git pull if it already exists).
  2. Run bash ~/.local/share/meme-library/install.sh — it auto-detects which agent platform I'm using (hermes > openclaw > claude > cursor priority) and symlinks the skill into the right skills/ directory.
  3. Run python3 ~/.local/share/meme-library/scripts/verify_install.py to check that dependencies and API keys are wired up.
  4. If verify reports a missing ANTHROPIC_API_KEY (or whichever provider key), tell me which shell rc file to add it to, then ask me to re-run verify.
  5. Copy the suggested instruction block from the README's "Proactive mode" section into wherever this platform stores persistent preferences (user rules / ~/.claude/CLAUDE.md / ~/.hermes/instructions.md / ~/.openclaw/instructions.md / equivalent). Without this step the skill stays passive — it only fires when I explicitly ask for a meme.
  6. Once everything is ✓, tell me I can try these three prompts to use it:
    • "Save this image to my orange_cat collection" (then paste an image)
    • "Here's a chiikawa sticker pack zip — build me a new collection from it"
    • "Find me a meme for when someone is being ignored"

Once your agent finishes, you're done. From then on you interact with the library purely via conversation — no CLI required.

💡 No agent, or want to install manually? See Manual install.

How to use it (conversation examples)

After install, just talk to your agent. The skill's SKILL.md tells it how to handle each request.

Add a single meme

"Save this image to the orange_cat collection" (attach the image)

The agent calls a vision model, picks a semantic filename, writes the metadata, and confirms: "Saved as cat_shock_03.jpg, tags: shock/cat/disbelief, fits when someone shares unexpected news".

Build a brand-new collection

"I just downloaded a chiikawa sticker pack (chiikawa.zip on my Desktop), build me a collection called chiikawa"

The agent unzips, tags every image one at a time, writes meta.json, then summarises the collection's vibe fingerprint into index.json.

Have the agent send a meme proactively

"My friend just told me he got promoted, send a celebratory meme"

The agent reads index.json to find the best-fitting collection, drills into that collection's meta.json, picks the best meme, and hands the absolute path to your sender skill.

Routine maintenance

"Run a health check on my meme library"

The agent runs health_check.py and reports orphan files, missing files, count drift, and stale vibe fingerprints — and suggests regen_vibe.py --stale-only if anything needs refreshing.
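As an illustration of the orphan-file check, here's a hypothetical simplification — the meta.json field names (`memes`, `file`) are assumptions for this sketch, not the library's documented schema, and the real health_check.py does more than this:

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

def find_orphans(collection_dir: Path) -> list[str]:
    """Images on disk that meta.json doesn't know about — one of the
    inconsistencies a health check reports."""
    # NOTE: the {"memes": [{"file": ...}]} shape is an illustrative guess
    meta = json.loads((collection_dir / "meta.json").read_text())
    known = {m["file"] for m in meta["memes"]}
    on_disk = {p.name for p in collection_dir.iterdir()
               if p.suffix in {".jpg", ".jpeg", ".png"}}
    return sorted(on_disk - known)

# Tiny self-contained demo
with TemporaryDirectory() as d:
    col = Path(d)
    (col / "meta.json").write_text(json.dumps({"memes": [{"file": "cat_shock_01.jpg"}]}))
    (col / "cat_shock_01.jpg").touch()
    (col / "cat_mystery_99.png").touch()   # on disk but never registered
    orphans = find_orphans(col)
    print(orphans)  # → ['cat_mystery_99.png']
```

The inverse check (entries in meta.json whose files are gone) is just `known - on_disk`.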

📦 Bundled sample library: Homelander (a sociopath in a cape)

"I am the f***ing prophet of God."

— Homelander, every other episode

This repo ships with a real working sample library at examples/sample_lib/, and the English-flavoured collection inside it is homelander: 7 hand-picked stills of The Boys' caped sociopath — Amazon's love letter to "what if Superman was raised in a corporate laboratory and is barely holding it together".

We picked Homelander instead of generic reaction memes for a very specific reason: the joke isn't the face, it's the gap between the face and what the face is hiding. PR-grade smile with murder underneath. Tear of gratitude when you know he's calculating your funeral. Sociable laugh that's exactly 0.3 seconds too long. That kind of layered "polite on the surface, deeply not okay underneath" energy is exactly the thing a flat tag list (#shocked, #happy, #sad) cannot capture — but a well-written occasion field absolutely can. So this collection doubles as a stress test for the "vibe-driven retrieval" claim this whole project rests on.

The 7 frames cover:

| Vibe | When to use it |
| --- | --- |
| 😊 PR laugh in a sea of fans | Boss told an unfunny joke, must visibly laugh |
| 🥹 Tearful golden-hour glow | Performative gratitude / fake "moved to tears" |
| 😏 Self-satisfied basking | Long-running "I told you so" finally pays off |
| 😬 Polite measured smile | Passive-aggressive "noted, thanks!" |
| 😈 Maniac side-grin on a plane | "I have decided you will not survive this flight" |
| 🔥 Side-profile laser eyes | One more Slack ping and your face will be vapor |
| 😵 Grotesquely swollen face | Reading news so absurd your face physically deforms |

30 seconds to try it

# Don't bother copying anything — just point MEME_LIB_PATH at the sample
export MEME_LIB_PATH="$(pwd)/examples/sample_lib"

python scripts/search.py --context "boss told a terrible joke and I have to laugh politely" --top-k 1
# → pr_laugh_01.png

python scripts/search.py --context "deploy went wrong in a way I have never seen before"
# → swollen_face_01.png

python scripts/search.py --context "responding to a passive aggressive email"
# → polite_smile_01.png

python scripts/search.py --context "competitor's product just imploded in real time"
# → maniac_grin_01.png

Or — if you set up proactive mode above — just say something like:

"My PM just signed off on the launch date I asked for two weeks ago. Took them long enough."

…and let the agent pick smug_basking_01.png on its own.

💡 Want to use your own pictures? Point MEME_LIB_PATH back at ~/.meme-library and ingest your stuff with add_meme.py / add_collection.py. The sample library is just a demo and doesn't interfere with your real one.

🧧 The sample library also contains a Chinese-internet companion collection, maodie_cats (圆头耄耋) — a feral orange tomcat who became a meme phenomenon in 2024–25. See the Chinese version of this README for the lore, or just open examples/sample_lib/README.md for both.

🪄 Proactive mode (let the agent decide on its own)

By default the agent only uses the skill when you explicitly ask for a meme. If you want it to evaluate every reply and decide on its own whether the moment calls for a meme, paste the block below into wherever your agent stores persistent preferences:

| Agent | Where to store it |
| --- | --- |
| Cursor | Settings → Rules → User Rules |
| Claude Code | ~/.claude/CLAUDE.md (global) or a project-level CLAUDE.md |
| hermes | ~/.hermes/instructions.md (or whatever its docs designate as the global instruction file) |
| openclaw | ~/.openclaw/instructions.md (same idea) |
| Other | Whatever your agent's "system prompt / persistent memory" docs recommend |

Suggested instruction block (copy verbatim, tweak thresholds/tone to taste):

I have the meme-library skill installed. From now on, in every conversation:

【When to consider sending】
Before each reply, silently judge whether the current moment calls for a
meme/sticker:
  ✓ Good fits: I shared an emotion (joy, frustration, shock, exhaustion,
              dark amusement), said something funny, an obvious
              celebration/comfort/roast moment, light banter, no rush
  ✗ Bad fits: serious topics, technical/work-critical discussion, I'm
              clearly focused, the mood is heavy, I just said "stop
              sending memes", or the previous meme you sent got no reaction

【How to pick】
When you decide to send, follow Workflow 3 in meme-library's SKILL.md:
  1. Read $MEME_LIB_PATH/index.json
  2. Score collections by vibe + best_for + avoid_when, pick the top 1–2
  3. Read those collections' meta.json, pick the meme whose `occasion`
     fits best
  4. Append the absolute path to your reply (or hand it to my sender skill)

【Self-restraint】
- At most one meme per turn
- Don't repeat the same collection within 3 turns
- After 3 consecutive "not a fit" judgments, skip the evaluation entirely
  for the next 5 turns to keep the noise down
- When in doubt, don't send — silence beats a wrong send
- If I say "stop", "knock it off", or anything similar, no more memes for
  the rest of this session

Once it's saved, your agent will pick up the rhythm on the next turn — you no longer have to prompt for a meme each time.

💡 Want to disable proactive mode? Delete the block, or just tell the agent "only send memes when I explicitly ask".
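The self-restraint rules amount to a small state machine. A hypothetical sketch of how a sender skill could track them — the counts (3 misses, 5-turn pause) come from the suggested block above; the class and method names are made up for illustration:

```python
class MemeThrottle:
    """Tracks the self-restraint counters from the instruction block."""

    def __init__(self, misses_before_pause: int = 3, pause_turns: int = 5):
        self.misses = 0          # consecutive "not a fit" judgments
        self.paused_until = 0    # turn index before which evaluation is skipped
        self.turn = 0
        self.muted = False       # user said "stop"
        self.misses_before_pause = misses_before_pause
        self.pause_turns = pause_turns

    def should_evaluate(self) -> bool:
        self.turn += 1
        return not self.muted and self.turn > self.paused_until

    def record(self, sent: bool) -> None:
        if sent:
            self.misses = 0
        else:
            self.misses += 1
            if self.misses >= self.misses_before_pause:
                # Back off: skip evaluation entirely for the next few turns
                self.paused_until = self.turn + self.pause_turns
                self.misses = 0

t = MemeThrottle()
for _ in range(3):          # three "not a fit" turns in a row…
    t.should_evaluate()
    t.record(sent=False)
print(t.should_evaluate())  # → False (evaluation paused for 5 turns)
```

A per-collection cooldown ("don't repeat within 3 turns") would be one more dict mapping collection name to last-used turn.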

Why this design

Most "meme skill" projects do one of two things:

  1. Generate memes on the fly via a public API (memegen.link, etc.).
  2. Hard-code a flat list of stickers in markdown.

Neither scales beyond ~20 items. meme-library borrows the progressive disclosure pattern from Anthropic's Agent Skills spec itself:

  • A small index.json is always loaded — it lists each collection plus a "vibe fingerprint" (one-sentence personality + best_for / avoid_when lists).
  • Per-collection meta.json is loaded only when the index suggests that collection is relevant.

Result: you can grow to hundreds of memes across dozens of collections without ballooning the agent's context window.
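As a rough sketch of the two-stage lookup — the field names (vibe, best_for, avoid_when) are the README's, but the data values and the naive scoring function are made up for illustration; in practice the agent judges relevance semantically rather than by set overlap:

```python
# Stage 1: the always-loaded index — one lightweight fingerprint per collection
index = {
    "orange_cat": {
        "vibe": "feral orange menace, passive-aggressive",
        "best_for": ["shock", "roast", "nagging"],
        "avoid_when": ["comfort", "serious"],
    },
    "chiikawa": {
        "vibe": "soft, pleading, wholesome",
        "best_for": ["comfort", "celebration", "cute"],
        "avoid_when": ["roast"],
    },
}

def pick_collection(context_tags: set[str]) -> str:
    """Score each fingerprint by overlap with the conversation's tags."""
    def score(fp: dict) -> int:
        return (len(context_tags & set(fp["best_for"]))
                - len(context_tags & set(fp["avoid_when"])))
    return max(index, key=lambda name: score(index[name]))

# Stage 2 would only now read that single collection's meta.json —
# the other collections' per-image metadata never enters the context
best = pick_collection({"comfort", "celebration"})
print(best)  # → chiikawa
```

The point of the split: stage 1 costs a few lines of context per collection no matter how many images each one holds.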

Library layout

$MEME_LIB_PATH/                  (default: ~/.meme-library/)
├── index.json                   # collection-level fingerprints
├── orange_cat/
│   ├── meta.json
│   ├── cat_boss_01.jpeg
│   └── cat_shock_01.jpg
└── chiikawa/
    ├── meta.json
    └── cute_pleading_01.png

See references/schema.md for the full JSON schema.

Manual install

If you'd rather not go through the agent prompt, or want a non-standard install location:

# 1. Clone
git clone https://github.com/yr-96/meme-library.git ~/.local/share/meme-library

# 2. Symlink into your agent's skills/ directory (pick one)
ln -s ~/.local/share/meme-library  ~/.hermes/skills/meme-library      # hermes
ln -s ~/.local/share/meme-library  ~/.openclaw/skills/meme-library    # openclaw
ln -s ~/.local/share/meme-library  ~/.claude/skills/meme-library      # Claude Code
ln -s ~/.local/share/meme-library  ~/.cursor/skills/meme-library      # Cursor

# 3. Install the SDK for your chosen vision provider
pip install anthropic          # default provider is claude
# or openai / google-generativeai

# 4. Configure env vars (in ~/.zshrc or ~/.bashrc)
export ANTHROPIC_API_KEY="sk-..."
export MEME_VISION_PROVIDER=claude          # default, can be omitted
export MEME_LIB_PATH="$HOME/.meme-library"  # default, can be omitted

# 5. Run the verifier
python3 ~/.local/share/meme-library/scripts/verify_install.py

Or just run the install script — it does step 2 for you and detects which agent platform you're on:

bash ~/.local/share/meme-library/install.sh

Use from the CLI

# Add a single image
python scripts/add_meme.py --image ~/Downloads/funny_cat.jpg --collection orange_cat

# Bulk-ingest a folder
python scripts/add_collection.py --source ~/stickers/chiikawa/ --collection chiikawa --name "Chiikawa"

# Heuristic retrieval (LLM-driven retrieval is more accurate)
python scripts/search.py --context "they've been ignoring my message" --top-k 1

# Refresh a collection's vibe fingerprint
python scripts/regen_vibe.py --stale-only

# Sanity-check the library
python scripts/health_check.py

Plug in a custom vision provider

Implement the VisionProvider protocol — two methods, both returning structured dataclasses — then register your class in the _PROVIDERS dict in scripts/lib/vision.py, or select it via the MEME_VISION_PROVIDER=myprovider environment variable:

class MyProvider:
    name = "myprovider"

    def describe_image(self, image_path: Path) -> ImageTags: ...
    def summarise_collection(
        self, descriptions: list[dict]
    ) -> CollectionFingerprint: ...
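For a concrete (if deliberately trivial) sketch, here is a stub satisfying the two-method shape above — note that the ImageTags and CollectionFingerprint dataclasses below are illustrative stand-ins, not the library's real definitions, and a real provider would call a vision API instead of echoing filenames:

```python
from dataclasses import dataclass, field
from pathlib import Path

# Illustrative stand-ins — the real ImageTags / CollectionFingerprint
# live in the library and may carry different fields.
@dataclass
class ImageTags:
    tags: list[str]
    occasion: str

@dataclass
class CollectionFingerprint:
    vibe: str
    best_for: list[str] = field(default_factory=list)
    avoid_when: list[str] = field(default_factory=list)

class EchoProvider:
    """Do-nothing provider: tags every image with its own filename."""
    name = "echo"

    def describe_image(self, image_path: Path) -> ImageTags:
        # A real provider would send the image to a vision model here
        return ImageTags(tags=[image_path.stem], occasion="unknown")

    def summarise_collection(self, descriptions: list[dict]) -> CollectionFingerprint:
        return CollectionFingerprint(vibe=f"{len(descriptions)} untagged images")

p = EchoProvider()
print(p.describe_image(Path("cat_shock_01.jpg")).tags)  # → ['cat_shock_01']
```

Because registration is by name, switching providers at runtime is just an environment variable change; no code outside the provider module needs to know which backend is active.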

Pair with a sender skill

meme-library returns paths; sending is up to you. A minimal sender skill is about 30 lines:

import subprocess, json
import telegram_bot  # your own delivery module; chat_id comes from your bot's context

def react(context: str, recently_sent: set[str]):
    # Ask meme-library for the best match, skipping anything sent recently
    out = subprocess.check_output([
        "python", "scripts/search.py",
        "--context", context,
        "--exclude", ",".join(recently_sent),
    ])
    matches = json.loads(out)["matches"]
    if matches:
        telegram_bot.send_photo(chat_id, matches[0]["path"])
        recently_sent.add(matches[0]["path"])

See SKILL.md for the full behaviour contract (when to send upfront, when to skip, how to handle opt-out, etc.) — those rules are generic and worth respecting in any sender.

Roadmap

  • Embedding-based retrieval as an opt-in upgrade over keyword overlap
  • Auto-detect duplicates on ingest (perceptual hash)
  • Web UI for browsing the library
  • Publish to LobeHub Skills Marketplace / openclaw index

Contributions welcome.

License

MIT — see LICENSE.
