diff --git a/CHANGELOG.md b/CHANGELOG.md
index 01bdde5f3c..a90d38918d 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,6 +5,49 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+## [0.8.3] - 2026-02-17
+
+### Added
+
+- ✏️ **Model edit shortcut.** Users can now edit models directly from the model selector dropdown menu, making it faster to modify model settings without navigating to separate admin or workspace pages. [Commit](https://github.com/open-webui/open-webui/commit/519ff40cb69cdc1d215cee369e9db70ff7438153)
+- 🎨 **Image edit API background support.** The image edit API now supports the background parameter for OpenAI's gpt-image-1 model, enabling background transparency control ("transparent", "opaque", "auto") when the feature is exposed in the UI. [#21459](https://github.com/open-webui/open-webui/pull/21459)
+- ⚡ **Faster model filtering.** Model access control filtering no longer makes a redundant database query to re-fetch model info that is already available in memory, reducing latency when loading model lists for non-admin users. [Commit](https://github.com/open-webui/open-webui/commit/34cd3d79e8688f589e3dd2f03415f8a8f9a13115)
+- 🔧 **Tool call display improvements.** Tool call results now display arguments in a cleaner key-value format instead of raw JSON, with a responsive layout that shows only the tool name on narrow screens and the full label on wider screens, preventing text wrapping to multiple lines. [Commit](https://github.com/open-webui/open-webui/commit/2ce935bdb10d2b26b230cd54cb649f5c667ed96a)
+- 🔄 **General improvements.** Various improvements were implemented across the application to enhance performance, stability, and security.
+- 🌐 **Translation updates.** Translations for Portuguese (Brazil), Simplified Chinese, and Traditional Chinese were enhanced and expanded.
+
+### Fixed
+
+- 📧 **USER_EMAIL variable fix.** The {{USER_EMAIL}} template variable now correctly returns the user's email address instead of "Unknown" in prompts. [#21479](https://github.com/open-webui/open-webui/pull/21479), [#21465](https://github.com/open-webui/open-webui/issues/21465)
+- 🖼️ **Image and file attachment handling fixes.** Uploaded images are now correctly sent to vision-enabled models, and file attachments now work even when no user text is entered alongside a system prompt. This fixes two issues where the backend was not properly processing file attachments: images weren't converted to the expected format for API requests, and file context was dropped when the user sent only a file without accompanying text. [Commit](https://github.com/open-webui/open-webui/commit/f1053d94c7ef7b8b78682dd73586b65a84d202a1), [#21477](https://github.com/open-webui/open-webui/issues/21477), [#21457](https://github.com/open-webui/open-webui/issues/21457)
+- 🛡️ **Missing function error handling.** Models that reference deleted functions no longer cause the entire /api/models endpoint to crash; instead, the missing functions are skipped and logged, allowing the rest of the models to load successfully. [#21476](https://github.com/open-webui/open-webui/pull/21476), [#21464](https://github.com/open-webui/open-webui/issues/21464)
+- 🚀 **Startup model pre-fetch error handling.** If model pre-fetching fails during app startup, the application now logs a warning and continues instead of crashing entirely. [Commit](https://github.com/open-webui/open-webui/commit/337109e99ce390f55a9085d0a301853637923779)
+- ⚙️ **Function module loading error handling.** Function modules that fail to load during startup or model processing are now caught and logged, preventing crashes when models reference functions with loading errors. [Commit](https://github.com/open-webui/open-webui/commit/15b893e651de71b033408e1b713e0b51f6829ab8)
+- 🗄️ **PostgreSQL group query fix.** The '/api/v1/groups/' endpoint no longer fails with a GROUP BY error when using PostgreSQL; member counts are now calculated using correlated subqueries for better database compatibility. [#21458](https://github.com/open-webui/open-webui/pull/21458), [#21467](https://github.com/open-webui/open-webui/issues/21467)
+
+## [0.8.2] - 2026-02-16
+
+### Added
+
+- 🧠 **Skill content handling.** User-selected skills now have their full content injected into the chat, while model-attached skills only display name and description in the available skills list. This allows users to override skill behavior while model-attached skills remain flexible. [Commit](https://github.com/open-webui/open-webui/commit/393c0071dc612c5ac982fb37dfc0288cb9911439)
+- ⚙️ **Chat toggles now control built-in tools.** Users can now disable web search, image generation, and code execution on a per-conversation basis, even when those tools are enabled as built-in tools on the model. [#20641](https://github.com/open-webui/open-webui/issues/20641), [#21318](https://github.com/open-webui/open-webui/discussions/21318), [Commit](https://github.com/open-webui/open-webui/commit/c46ef3b63bcc1e2e9adbdd18fab82c4bbe33ff6c), [Commit](https://github.com/open-webui/open-webui/commit/f1a1e64d2e9ad953b2bc2a9543e9a308b7c669c8)
+- 🖼️ **Image preview in file modal.** Images uploaded to chats can now be previewed directly in the file management modal, making it easier to identify and manage image files. [#21413](https://github.com/open-webui/open-webui/issues/21413), [Commit](https://github.com/open-webui/open-webui/commit/e1b3e7252c1896c04d498547908f0fce111434e1)
+- 🏷️ **Batch tag operations.** Tag creation, deletion, and orphan cleanup for chats now use batch database queries instead of per-tag loops, significantly reducing database round trips when updating, archiving, or deleting chats with multiple tags. [Commit](https://github.com/open-webui/open-webui/commit/c748c3ede)
+- 💨 **Faster group list loading.** Group lists and search results now load with a single database query that joins member counts, replacing the previous pattern of fetching groups first and then counting members in a separate batch query. [Commit](https://github.com/open-webui/open-webui/commit/33308022f)
+- 🔐 **Skills sharing permissions.** Administrators can now control skills sharing and public sharing permissions per-group, matching the existing capabilities for tools, knowledge, and prompts. [Commit](https://github.com/open-webui/open-webui/commit/88401e91c)
+- ⚡ **Long content truncation in preview modals.** Citation and file content modals now truncate markdown-rendered content at 10,000 characters with a "Show all" expansion button, preventing UI jank when previewing very large documents.
+- 🌐 **Translation updates.** Translations for Spanish and German were enhanced and expanded.
+
+### Fixed
+
+- 🔐 **OAuth session error handling.** Corrupted OAuth sessions are now gracefully handled and automatically cleaned up instead of causing errors. [Commit](https://github.com/open-webui/open-webui/commit/7e224e4a536b07ec008613f06592e34050e7067c)
+- 🐛 **Task model selector validation.** The task model selector in admin settings now correctly accepts models based on the new access grants system instead of rejecting all models with an incorrect error. [Commit](https://github.com/open-webui/open-webui/commit/9a2595f0706d0c9d809ae7746001cf799f98db1d)
+- 🔗 **Tool call message preservation.** Models no longer hallucinate tool outputs in multi-turn conversations because tool call history is now properly preserved instead of being merged into assistant messages. [#21098](https://github.com/open-webui/open-webui/discussions/21098), [#20600](https://github.com/open-webui/open-webui/issues/20600), [Commit](https://github.com/open-webui/open-webui/commit/f2aca781c87244cffc130aa2722e700c19a81d66)
+- 🔧 **Tool server startup initialization.** External tool servers configured via the "TOOL_SERVER_CONNECTIONS" environment variable now initialize automatically on startup, eliminating the need to manually visit the Admin Panel and save for tools to become available. This enables proper GitOps and containerized deployments. [#18140](https://github.com/open-webui/open-webui/issues/18140), [#20914](https://github.com/open-webui/open-webui/pull/20914), [Commit](https://github.com/open-webui/open-webui/commit/f20cc6d7e6da493eb75ca1618f5cbd068fa57684)
+- ♻️ **Resource handle cleanup.** File handles are now properly closed during audio transcription and pipeline uploads, preventing resource leaks that could cause system instability over time. [#21411](https://github.com/open-webui/open-webui/issues/21411)
+- ⌨️ **Strikethrough shortcut conflict fix.** Pressing Ctrl+Shift+S to toggle the sidebar no longer causes text to become struck through in the chat input, by disabling the TipTap Strike extension's default keyboard shortcut when rich text mode is off. [Commit](https://github.com/open-webui/open-webui/commit/38ae91ae2)
+- 🔧 **Tool call finish_reason fix.** API responses now correctly set finish_reason to "tool_calls" instead of "stop" when tool calls are present, fixing an issue where external API clients (such as OpenCode) would halt prematurely after tool execution when routing Ollama models through the Open WebUI API. [#20896](https://github.com/open-webui/open-webui/issues/20896)
+
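The finish_reason fix above is described only at a high level; a minimal sketch of the rule it implies, with a hypothetical message shape rather than the project's actual handler:

```python
def resolve_finish_reason(message: dict, upstream_reason: str) -> str:
    """Report "tool_calls" whenever the assistant message carries tool
    calls; otherwise pass the upstream finish_reason through unchanged."""
    if message.get("tool_calls"):
        return "tool_calls"
    return upstream_reason

# A turn that requested a tool must not report "stop", or OpenAI-style
# clients halt the loop instead of executing the tool.
assert resolve_finish_reason({"tool_calls": [{"id": "call_1"}]}, "stop") == "tool_calls"
assert resolve_finish_reason({"content": "done"}, "stop") == "stop"
```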
## [0.8.1] - 2026-02-14
### Added
diff --git a/CHANGELOG_EXTRA.md b/CHANGELOG_EXTRA.md
index dbae12dcec..1b717c5876 100644
--- a/CHANGELOG_EXTRA.md
+++ b/CHANGELOG_EXTRA.md
@@ -5,6 +5,12 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+## [0.8.3.1] - 2026.02.19
+
+### Changed
+
+- Merged upstream 0.8.3 changes
+
## [0.8.1.1] - 2026.02.14
### Changed
diff --git a/backend/open_webui/config.py b/backend/open_webui/config.py
index 8bf052a763..27b6e7090d 100644
--- a/backend/open_webui/config.py
+++ b/backend/open_webui/config.py
@@ -1353,6 +1353,18 @@ def reachable(host: str, port: int) -> bool:
== "true"
)
+USER_PERMISSIONS_WORKSPACE_SKILLS_ALLOW_SHARING = (
+ os.environ.get("USER_PERMISSIONS_WORKSPACE_SKILLS_ALLOW_SHARING", "False").lower()
+ == "true"
+)
+
+USER_PERMISSIONS_WORKSPACE_SKILLS_ALLOW_PUBLIC_SHARING = (
+ os.environ.get(
+ "USER_PERMISSIONS_WORKSPACE_SKILLS_ALLOW_PUBLIC_SHARING", "False"
+ ).lower()
+ == "true"
+)
+
USER_PERMISSIONS_NOTES_ALLOW_SHARING = (
os.environ.get("USER_PERMISSIONS_NOTES_ALLOW_SHARING", "False").lower() == "true"
)
@@ -1506,6 +1518,8 @@ def reachable(host: str, port: int) -> bool:
"public_prompts": USER_PERMISSIONS_WORKSPACE_PROMPTS_ALLOW_PUBLIC_SHARING,
"tools": USER_PERMISSIONS_WORKSPACE_TOOLS_ALLOW_SHARING,
"public_tools": USER_PERMISSIONS_WORKSPACE_TOOLS_ALLOW_PUBLIC_SHARING,
+ "skills": USER_PERMISSIONS_WORKSPACE_SKILLS_ALLOW_SHARING,
+ "public_skills": USER_PERMISSIONS_WORKSPACE_SKILLS_ALLOW_PUBLIC_SHARING,
"notes": USER_PERMISSIONS_NOTES_ALLOW_SHARING,
"public_notes": USER_PERMISSIONS_NOTES_ALLOW_PUBLIC_SHARING,
},
diff --git a/backend/open_webui/main.py b/backend/open_webui/main.py
index e6523b3db9..7c2b100ae3 100644
--- a/backend/open_webui/main.py
+++ b/backend/open_webui/main.py
@@ -57,6 +57,7 @@
MODELS,
app as socket_app,
periodic_usage_pool_cleanup,
+ periodic_session_pool_cleanup,
get_event_emitter,
get_models_in_use,
)
@@ -540,6 +541,7 @@
process_chat_payload,
process_chat_response,
)
+from open_webui.utils.tools import set_tool_servers
from open_webui.utils.auth import (
get_license_data,
@@ -657,11 +659,37 @@ async def lifespan(app: FastAPI):
limiter.total_tokens = THREAD_POOL_SIZE
asyncio.create_task(periodic_usage_pool_cleanup())
+ asyncio.create_task(periodic_session_pool_cleanup())
if app.state.config.ENABLE_BASE_MODELS_CACHE:
- await get_all_models(
- Request(
- # Creating a mock request object to pass to get_all_models
+ try:
+ await get_all_models(
+ Request(
+ # Creating a mock request object to pass to get_all_models
+ {
+ "type": "http",
+ "asgi.version": "3.0",
+ "asgi.spec_version": "2.0",
+ "method": "GET",
+ "path": "/internal",
+ "query_string": b"",
+ "headers": Headers({}).raw,
+ "client": ("127.0.0.1", 12345),
+ "server": ("127.0.0.1", 80),
+ "scheme": "http",
+ "app": app,
+ }
+ ),
+ None,
+ )
+ except Exception as e:
+ log.warning(f"Failed to pre-fetch models at startup: {e}")
+
+ # Pre-fetch tool server specs so the first request doesn't pay the latency cost
+ if len(app.state.config.TOOL_SERVER_CONNECTIONS) > 0:
+ log.info("Initializing tool servers...")
+ try:
+ mock_request = Request(
{
"type": "http",
"asgi.version": "3.0",
@@ -675,9 +703,11 @@ async def lifespan(app: FastAPI):
"scheme": "http",
"app": app,
}
- ),
- None,
- )
+ )
+ await set_tool_servers(mock_request)
+ log.info(f"Initialized {len(app.state.TOOL_SERVERS)} tool server(s)")
+ except Exception as e:
+ log.warning(f"Failed to initialize tool servers at startup: {e}")
yield
diff --git a/backend/open_webui/models/chats.py b/backend/open_webui/models/chats.py
index 7ae9f7a38b..1418abd62d 100644
--- a/backend/open_webui/models/chats.py
+++ b/backend/open_webui/models/chats.py
@@ -431,22 +431,29 @@ def update_chat_title_by_id(self, id: str, title: str) -> Optional[ChatModel]:
def update_chat_tags_by_id(
self, id: str, tags: list[str], user
) -> Optional[ChatModel]:
- chat = self.get_chat_by_id(id)
- if chat is None:
- return None
+ with get_db_context() as db:
+ chat = db.get(Chat, id)
+ if chat is None:
+ return None
+
+ old_tags = chat.meta.get("tags", [])
+ new_tags = [t for t in tags if t.replace(" ", "_").lower() != "none"]
+ new_tag_ids = [t.replace(" ", "_").lower() for t in new_tags]
- self.delete_all_tags_by_id_and_user_id(id, user.id)
+ # Single meta update
+ chat.meta = {**chat.meta, "tags": new_tag_ids}
+ db.commit()
+ db.refresh(chat)
- for tag in chat.meta.get("tags", []):
- if self.count_chats_by_tag_name_and_user_id(tag, user.id) == 0:
- Tags.delete_tag_by_name_and_user_id(tag, user.id)
+ # Batch-create any missing tag rows
+ Tags.ensure_tags_exist(new_tags, user.id, db=db)
- for tag_name in tags:
- if tag_name.lower() == "none":
- continue
+ # Clean up orphaned old tags in one query
+ removed = set(old_tags) - set(new_tag_ids)
+ if removed:
+ self.delete_orphan_tags_for_user(list(removed), user.id, db=db)
- self.add_chat_tag_by_id_and_user_id_and_tag_name(id, user.id, tag_name)
- return self.get_chat_by_id(id)
+ return ChatModel.model_validate(chat)
def get_chat_title_by_id(self, id: str) -> Optional[str]:
chat = self.get_chat_by_id(id)
@@ -1267,8 +1274,8 @@ def get_chat_tags_by_id_and_user_id(
) -> list[TagModel]:
with get_db_context(db) as db:
chat = db.get(Chat, id)
- tags = chat.meta.get("tags", [])
- return [Tags.get_tag_by_name_and_user_id(tag, user_id) for tag in tags]
+ tag_ids = chat.meta.get("tags", [])
+ return Tags.get_tags_by_ids_and_user_id(tag_ids, user_id, db=db)
def get_chat_list_by_user_id_and_tag_name(
self,
@@ -1309,20 +1316,16 @@ def get_chat_list_by_user_id_and_tag_name(
def add_chat_tag_by_id_and_user_id_and_tag_name(
self, id: str, user_id: str, tag_name: str, db: Optional[Session] = None
) -> Optional[ChatModel]:
- tag = Tags.get_tag_by_name_and_user_id(tag_name, user_id)
- if tag is None:
- tag = Tags.insert_new_tag(tag_name, user_id)
+ tag_id = tag_name.replace(" ", "_").lower()
+ Tags.ensure_tags_exist([tag_name], user_id, db=db)
try:
with get_db_context(db) as db:
chat = db.get(Chat, id)
-
- tag_id = tag.id
if tag_id not in chat.meta.get("tags", []):
chat.meta = {
**chat.meta,
"tags": list(set(chat.meta.get("tags", []) + [tag_id])),
}
-
db.commit()
db.refresh(chat)
return ChatModel.model_validate(chat)
@@ -1332,40 +1335,53 @@ def add_chat_tag_by_id_and_user_id_and_tag_name(
def count_chats_by_tag_name_and_user_id(
self, tag_name: str, user_id: str, db: Optional[Session] = None
) -> int:
- with get_db_context(db) as db: # Assuming `get_db()` returns a session object
+ with get_db_context(db) as db:
query = db.query(Chat).filter_by(user_id=user_id, archived=False)
-
- # Normalize the tag_name for consistency
tag_id = tag_name.replace(" ", "_").lower()
if db.bind.dialect.name == "sqlite":
- # SQLite JSON1 support for querying the tags inside the `meta` JSON field
query = query.filter(
text(
- f"EXISTS (SELECT 1 FROM json_each(Chat.meta, '$.tags') WHERE json_each.value = :tag_id)"
+ "EXISTS (SELECT 1 FROM json_each(Chat.meta, '$.tags') WHERE json_each.value = :tag_id)"
)
).params(tag_id=tag_id)
-
elif db.bind.dialect.name == "postgresql":
- # PostgreSQL JSONB support for querying the tags inside the `meta` JSON field
query = query.filter(
text(
"EXISTS (SELECT 1 FROM json_array_elements_text(Chat.meta->'tags') elem WHERE elem = :tag_id)"
)
).params(tag_id=tag_id)
-
else:
raise NotImplementedError(
f"Unsupported dialect: {db.bind.dialect.name}"
)
- # Get the count of matching records
- count = query.count()
-
- # Debugging output for inspection
- log.info(f"Count of chats for tag '{tag_name}': {count}")
+ return query.count()
- return count
+ def delete_orphan_tags_for_user(
+ self,
+ tag_ids: list[str],
+ user_id: str,
+ threshold: int = 0,
+ db: Optional[Session] = None,
+ ) -> None:
+ """Delete tag rows from *tag_ids* that appear in at most *threshold*
+ non-archived chats for *user_id*. One query to find orphans, one to
+ delete them.
+
+ Use threshold=0 after a tag is already removed from a chat's meta.
+ Use threshold=1 when the chat itself is about to be deleted (the
+ referencing chat still exists at query time).
+ """
+ if not tag_ids:
+ return
+ with get_db_context(db) as db:
+ orphans = []
+ for tag_id in tag_ids:
+ count = self.count_chats_by_tag_name_and_user_id(tag_id, user_id, db=db)
+ if count <= threshold:
+ orphans.append(tag_id)
+ Tags.delete_tags_by_ids_and_user_id(orphans, user_id, db=db)
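The counting query that `delete_orphan_tags_for_user` relies on uses SQLite's `json_each` (or PostgreSQL's `json_array_elements_text`) over the chat's JSON `meta.tags` array. A runnable stdlib sketch of the SQLite branch and the threshold semantics, with a hypothetical minimal chat table:

```python
import json
import sqlite3

# In-memory stand-in for the chat table: `meta` is a JSON document whose
# "tags" array holds normalized tag ids, as the json_each filter expects.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE chat (id TEXT, user_id TEXT, archived INT, meta TEXT)")
con.executemany(
    "INSERT INTO chat VALUES (?, ?, ?, ?)",
    [
        ("c1", "u1", 0, json.dumps({"tags": ["work", "todo"]})),
        ("c2", "u1", 0, json.dumps({"tags": ["work"]})),
        ("c3", "u1", 1, json.dumps({"tags": ["todo"]})),  # archived: excluded
    ],
)

def count_chats_with_tag(user_id: str, tag_id: str) -> int:
    # Same EXISTS/json_each shape as the SQLite dialect branch in the diff.
    row = con.execute(
        """
        SELECT COUNT(*) FROM chat
        WHERE user_id = ? AND archived = 0
          AND EXISTS (SELECT 1 FROM json_each(chat.meta, '$.tags')
                      WHERE json_each.value = ?)
        """,
        (user_id, tag_id),
    ).fetchone()
    return row[0]

def orphaned(tag_ids, user_id, threshold=0):
    # threshold=0: the tag was already removed from the chat's meta.
    # threshold=1: the single referencing chat is about to be deleted.
    return [t for t in tag_ids if count_chats_with_tag(user_id, t) <= threshold]

assert orphaned(["work", "todo"], "u1") == []             # both still referenced
assert orphaned(["todo"], "u1", threshold=1) == ["todo"]  # last reference going away
```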
def count_chats_by_folder_id_and_user_id(
self, folder_id: str, user_id: str, db: Optional[Session] = None
diff --git a/backend/open_webui/models/groups.py b/backend/open_webui/models/groups.py
index 8fe720ecc6..c9a38f1ede 100644
--- a/backend/open_webui/models/groups.py
+++ b/backend/open_webui/models/groups.py
@@ -164,7 +164,14 @@ def get_all_groups(self, db: Optional[Session] = None) -> list[GroupModel]:
def get_groups(self, filter, db: Optional[Session] = None) -> list[GroupResponse]:
with get_db_context(db) as db:
- query = db.query(Group)
+ member_count = (
+ select(func.count(GroupMember.user_id))
+ .where(GroupMember.group_id == Group.id)
+ .correlate(Group)
+ .scalar_subquery()
+ .label("member_count")
+ )
+ query = db.query(Group, member_count)
if filter:
if "query" in filter:
@@ -179,9 +186,6 @@ def get_groups(self, filter, db: Optional[Session] = None) -> list[GroupResponse
json_share_lower = func.lower(json_share_str)
if share_value:
- # Groups open to anyone: data is null, config.share is null, or share is true
- # Use case-insensitive string comparison to handle variations like "True", "TRUE"
- # Handle potential JSON boolean to string casting issues by checking for both string 'true' and boolean equivalence if possible,
anyone_can_share = or_(
Group.data.is_(None),
json_share_str.is_(None),
@@ -190,7 +194,6 @@ def get_groups(self, filter, db: Optional[Session] = None) -> list[GroupResponse
)
if member_id:
- # Also include member-only groups where user is a member
member_groups_select = select(GroupMember.group_id).where(
GroupMember.user_id == member_id
)
@@ -211,21 +214,24 @@ def get_groups(self, filter, db: Optional[Session] = None) -> list[GroupResponse
else:
# Only apply member_id filter when share filter is NOT present
if "member_id" in filter:
- query = query.join(
- GroupMember, GroupMember.group_id == Group.id
- ).filter(GroupMember.user_id == filter["member_id"])
+ query = query.filter(
+ Group.id.in_(
+ select(GroupMember.group_id).where(
+ GroupMember.user_id == filter["member_id"]
+ )
+ )
+ )
+
+ results = query.order_by(Group.updated_at.desc()).all()
- groups = query.order_by(Group.updated_at.desc()).all()
- group_ids = [group.id for group in groups]
- member_counts = self.get_group_member_counts_by_ids(group_ids, db=db)
return [
GroupResponse.model_validate(
{
**GroupModel.model_validate(group).model_dump(),
- "member_count": member_counts.get(group.id, 0),
+ "member_count": count or 0,
}
)
- for group in groups
+ for group, count in results
]
def search_groups(
@@ -242,31 +248,46 @@ def search_groups(
if "query" in filter:
query = query.filter(Group.name.ilike(f"%{filter['query']}%"))
if "member_id" in filter:
- query = query.join(
- GroupMember, GroupMember.group_id == Group.id
- ).filter(GroupMember.user_id == filter["member_id"])
+ query = query.filter(
+ Group.id.in_(
+ select(GroupMember.group_id).where(
+ GroupMember.user_id == filter["member_id"]
+ )
+ )
+ )
if "share" in filter:
- # 'share' is stored in data JSON, support both sqlite and postgres
share_value = filter["share"]
- print("Filtering by share:", share_value)
query = query.filter(
Group.data.op("->>")("share") == str(share_value)
)
total = query.count()
- query = query.order_by(Group.updated_at.desc())
- groups = query.offset(skip).limit(limit).all()
- group_ids = [group.id for group in groups]
- member_counts = self.get_group_member_counts_by_ids(group_ids, db=db)
+
+ member_count = (
+ select(func.count(GroupMember.user_id))
+ .where(GroupMember.group_id == Group.id)
+ .correlate(Group)
+ .scalar_subquery()
+ .label("member_count")
+ )
+ results = (
+ query.add_columns(member_count)
+ .order_by(Group.updated_at.desc())
+ .offset(skip)
+ .limit(limit)
+ .all()
+ )
return {
"items": [
GroupResponse.model_validate(
- **GroupModel.model_validate(group).model_dump(),
- member_count=member_counts.get(group.id, 0),
+ {
+ **GroupModel.model_validate(group).model_dump(),
+ "member_count": count or 0,
+ }
)
- for group in groups
+ for group, count in results
],
"total": total,
}
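The member-count change above replaces a JOIN plus a separate batch count with a correlated scalar subquery, so each group row carries its own count in one round trip. The diff builds this with SQLAlchemy's `scalar_subquery()`; the same shape in plain SQL over stdlib `sqlite3`, with a hypothetical minimal schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript(
    """
    CREATE TABLE grp (id TEXT PRIMARY KEY, name TEXT, updated_at INT);
    CREATE TABLE group_member (group_id TEXT, user_id TEXT);
    INSERT INTO grp VALUES ('g1', 'admins', 2), ('g2', 'empty', 1);
    INSERT INTO group_member VALUES ('g1', 'u1'), ('g1', 'u2');
    """
)

# One query: the subquery is re-evaluated per group row. Groups with no
# members still appear with count 0, where an inner JOIN would drop them.
rows = con.execute(
    """
    SELECT g.id,
           (SELECT COUNT(*) FROM group_member m WHERE m.group_id = g.id)
               AS member_count
    FROM grp g
    ORDER BY g.updated_at DESC
    """
).fetchall()

assert rows == [("g1", 2), ("g2", 0)]
```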
diff --git a/backend/open_webui/models/oauth_sessions.py b/backend/open_webui/models/oauth_sessions.py
index f7ee5cceb8..538937483f 100644
--- a/backend/open_webui/models/oauth_sessions.py
+++ b/backend/open_webui/models/oauth_sessions.py
@@ -102,7 +102,7 @@ def _decrypt_token(self, token: str):
decrypted = self.fernet.decrypt(token.encode()).decode()
return json.loads(decrypted)
except Exception as e:
- log.error(f"Error decrypting tokens: {e}")
+ log.error(f"Error decrypting tokens: {type(e).__name__}: {e}")
raise
def create_session(
@@ -209,8 +209,15 @@ def get_sessions_by_user_id(
results = []
for session in sessions:
- session.token = self._decrypt_token(session.token)
- results.append(OAuthSessionModel.model_validate(session))
+ try:
+ session.token = self._decrypt_token(session.token)
+ results.append(OAuthSessionModel.model_validate(session))
+ except Exception as e:
+ log.warning(
+ f"Skipping OAuth session {session.id} due to decryption failure, deleting corrupted session: {type(e).__name__}: {e}"
+ )
+ db.query(OAuthSession).filter_by(id=session.id).delete()
+ db.commit()
return results
diff --git a/backend/open_webui/models/tags.py b/backend/open_webui/models/tags.py
index 64cb559547..147bb394d5 100644
--- a/backend/open_webui/models/tags.py
+++ b/backend/open_webui/models/tags.py
@@ -115,5 +115,45 @@ def delete_tag_by_name_and_user_id(
log.error(f"delete_tag: {e}")
return False
+ def delete_tags_by_ids_and_user_id(
+ self, ids: list[str], user_id: str, db: Optional[Session] = None
+ ) -> bool:
+ """Delete all tags whose id is in *ids* for the given user, in one query."""
+ if not ids:
+ return True
+ try:
+ with get_db_context(db) as db:
+ db.query(Tag).filter(Tag.id.in_(ids), Tag.user_id == user_id).delete(
+ synchronize_session=False
+ )
+ db.commit()
+ return True
+ except Exception as e:
+ log.error(f"delete_tags_by_ids: {e}")
+ return False
+
+ def ensure_tags_exist(
+ self, names: list[str], user_id: str, db: Optional[Session] = None
+ ) -> None:
+ """Create tag rows for any *names* that don't already exist for *user_id*."""
+ if not names:
+ return
+ ids = [n.replace(" ", "_").lower() for n in names]
+ with get_db_context(db) as db:
+ existing = {
+ t.id
+ for t in db.query(Tag.id)
+ .filter(Tag.id.in_(ids), Tag.user_id == user_id)
+ .all()
+ }
+ new_tags = [
+ Tag(id=tag_id, name=name, user_id=user_id)
+ for tag_id, name in zip(ids, names)
+ if tag_id not in existing
+ ]
+ if new_tags:
+ db.add_all(new_tags)
+ db.commit()
+
Tags = TagTable()
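`ensure_tags_exist` above follows a batch "ensure exists" pattern: one SELECT for the ids already present, one bulk insert for the rest, instead of a SELECT-then-INSERT per tag. A stdlib sketch of the same pattern over a hypothetical minimal tag table:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE tag (id TEXT, name TEXT, user_id TEXT, PRIMARY KEY (id, user_id))"
)
con.execute("INSERT INTO tag VALUES ('work', 'Work', 'u1')")

def ensure_tags_exist(names, user_id):
    if not names:
        return
    # Normalize names to ids the same way the diff does.
    ids = [n.replace(" ", "_").lower() for n in names]
    placeholders = ",".join("?" for _ in ids)
    # One query for what already exists...
    existing = {
        row[0]
        for row in con.execute(
            f"SELECT id FROM tag WHERE id IN ({placeholders}) AND user_id = ?",
            (*ids, user_id),
        )
    }
    # ...one bulk insert for everything missing.
    con.executemany(
        "INSERT INTO tag VALUES (?, ?, ?)",
        [(i, n, user_id) for i, n in zip(ids, names) if i not in existing],
    )

ensure_tags_exist(["Work", "To Do"], "u1")  # 'work' exists, 'to_do' is created
```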
diff --git a/backend/open_webui/routers/audio.py b/backend/open_webui/routers/audio.py
index 139b64f7cf..01a857a2b6 100644
--- a/backend/open_webui/routers/audio.py
+++ b/backend/open_webui/routers/audio.py
@@ -639,13 +639,14 @@ def transcription_handler(request, file_path, metadata, user=None):
if user and ENABLE_FORWARD_USER_INFO_HEADERS:
headers = include_user_info_headers(headers, user)
- r = requests.post(
- url=f"{request.app.state.config.STT_OPENAI_API_BASE_URL}/audio/transcriptions",
- headers=headers,
- files={"file": (filename, open(file_path, "rb"))},
- data=payload,
- timeout=AIOHTTP_CLIENT_TIMEOUT,
- )
+ with open(file_path, "rb") as audio_file:
+ r = requests.post(
+ url=f"{request.app.state.config.STT_OPENAI_API_BASE_URL}/audio/transcriptions",
+ headers=headers,
+ files={"file": (filename, audio_file)},
+ data=payload,
+ timeout=AIOHTTP_CLIENT_TIMEOUT,
+ )
if r.status_code == 200:
# Successful transcription
diff --git a/backend/open_webui/routers/chats.py b/backend/open_webui/routers/chats.py
index e03cdc7ba9..69e47123f0 100644
--- a/backend/open_webui/routers/chats.py
+++ b/backend/open_webui/routers/chats.py
@@ -1131,9 +1131,9 @@ async def delete_chat_by_id(
status_code=status.HTTP_404_NOT_FOUND,
detail=ERROR_MESSAGES.NOT_FOUND,
)
- for tag in chat.meta.get("tags", []):
- if Chats.count_chats_by_tag_name_and_user_id(tag, user.id, db=db) == 1:
- Tags.delete_tag_by_name_and_user_id(tag, user.id, db=db)
+ Chats.delete_orphan_tags_for_user(
+ chat.meta.get("tags", []), user.id, threshold=1, db=db
+ )
result = Chats.delete_chat_by_id(id, db=db)
@@ -1153,9 +1153,9 @@ async def delete_chat_by_id(
status_code=status.HTTP_404_NOT_FOUND,
detail=ERROR_MESSAGES.NOT_FOUND,
)
- for tag in chat.meta.get("tags", []):
- if Chats.count_chats_by_tag_name_and_user_id(tag, user.id, db=db) == 1:
- Tags.delete_tag_by_name_and_user_id(tag, user.id, db=db)
+ Chats.delete_orphan_tags_for_user(
+ chat.meta.get("tags", []), user.id, threshold=1, db=db
+ )
result = Chats.delete_chat_by_id_and_user_id(id, user.id, db=db)
return result
@@ -1317,21 +1317,13 @@ async def archive_chat_by_id(
if chat:
chat = Chats.toggle_chat_archive_by_id(id, db=db)
- # Delete tags if chat is archived
+ tag_ids = chat.meta.get("tags", [])
if chat.archived:
- for tag_id in chat.meta.get("tags", []):
- if (
- Chats.count_chats_by_tag_name_and_user_id(tag_id, user.id, db=db)
- == 0
- ):
- log.debug(f"deleting tag: {tag_id}")
- Tags.delete_tag_by_name_and_user_id(tag_id, user.id, db=db)
+ # Archived chats are excluded from count — clean up orphans
+ Chats.delete_orphan_tags_for_user(tag_ids, user.id, db=db)
else:
- for tag_id in chat.meta.get("tags", []):
- tag = Tags.get_tag_by_name_and_user_id(tag_id, user.id, db=db)
- if tag is None:
- log.debug(f"inserting tag: {tag_id}")
- tag = Tags.insert_new_tag(tag_id, user.id, db=db)
+ # Unarchived — ensure tag rows exist
+ Tags.ensure_tags_exist(tag_ids, user.id, db=db)
return ChatResponse(**chat.model_dump())
else:
@@ -1537,11 +1529,9 @@ async def delete_all_tags_by_id(
):
chat = Chats.get_chat_by_id_and_user_id(id, user.id, db=db)
if chat:
+ old_tags = chat.meta.get("tags", [])
Chats.delete_all_tags_by_id_and_user_id(id, user.id, db=db)
-
- for tag in chat.meta.get("tags", []):
- if Chats.count_chats_by_tag_name_and_user_id(tag, user.id, db=db) == 0:
- Tags.delete_tag_by_name_and_user_id(tag, user.id, db=db)
+ Chats.delete_orphan_tags_for_user(old_tags, user.id, db=db)
return True
else:
diff --git a/backend/open_webui/routers/images.py b/backend/open_webui/routers/images.py
index 942848bc2f..d3bc3f8eee 100644
--- a/backend/open_webui/routers/images.py
+++ b/backend/open_webui/routers/images.py
@@ -849,6 +849,7 @@ class EditImageForm(BaseModel):
size: Optional[str] = None
n: Optional[int] = None
negative_prompt: Optional[str] = None
+ background: Optional[str] = None
@router.post("/edit")
@@ -953,6 +954,9 @@ def get_image_file_item(base64_string, param_name="image"):
"prompt": form_data.prompt,
**({"n": form_data.n} if form_data.n else {}),
**({"size": size} if size else {}),
+ **(
+ {"background": form_data.background} if form_data.background else {}
+ ),
**(
{}
if re.match(
diff --git a/backend/open_webui/routers/pipelines.py b/backend/open_webui/routers/pipelines.py
index 20fcd75eec..ebedd3027d 100644
--- a/backend/open_webui/routers/pipelines.py
+++ b/backend/open_webui/routers/pipelines.py
@@ -228,22 +228,23 @@ async def upload_pipeline(
headers = {"Authorization": f"Bearer {key}"}
async with aiohttp.ClientSession(trust_env=True) as session:
- form_data = aiohttp.FormData()
- form_data.add_field(
- "file",
- open(file_path, "rb"),
- filename=filename,
- content_type="application/octet-stream",
- )
+ with open(file_path, "rb") as f:
+ form_data = aiohttp.FormData()
+ form_data.add_field(
+ "file",
+ f,
+ filename=filename,
+ content_type="application/octet-stream",
+ )
- async with session.post(
- f"{url}/pipelines/upload",
- headers=headers,
- data=form_data,
- ssl=AIOHTTP_CLIENT_SESSION_SSL,
- ) as response:
- response.raise_for_status()
- data = await response.json()
+ async with session.post(
+ f"{url}/pipelines/upload",
+ headers=headers,
+ data=form_data,
+ ssl=AIOHTTP_CLIENT_SESSION_SSL,
+ ) as response:
+ response.raise_for_status()
+ data = await response.json()
return {**data}
except Exception as e:
diff --git a/backend/open_webui/routers/tools.py b/backend/open_webui/routers/tools.py
index 6657b34462..fab5039909 100644
--- a/backend/open_webui/routers/tools.py
+++ b/backend/open_webui/routers/tools.py
@@ -107,7 +107,9 @@ async def get_tools(
# MCP Tool Servers
for server in request.app.state.config.TOOL_SERVER_CONNECTIONS:
- if server.get("type", "openapi") == "mcp":
+ if server.get("type", "openapi") == "mcp" and server.get("config", {}).get(
+ "enable"
+ ):
server_id = server.get("info", {}).get("id")
auth_type = server.get("auth_type", "none")
diff --git a/backend/open_webui/routers/users.py b/backend/open_webui/routers/users.py
index 1c297d909b..fc446602bd 100644
--- a/backend/open_webui/routers/users.py
+++ b/backend/open_webui/routers/users.py
@@ -217,6 +217,8 @@ class SharingPermissions(BaseModel):
public_prompts: bool = False
tools: bool = False
public_tools: bool = True
+ skills: bool = False
+ public_skills: bool = False
notes: bool = False
public_notes: bool = True
diff --git a/backend/open_webui/socket/main.py b/backend/open_webui/socket/main.py
index b43c56b4e6..78df66b8dc 100644
--- a/backend/open_webui/socket/main.py
+++ b/backend/open_webui/socket/main.py
@@ -99,6 +99,7 @@
# Timeout duration in seconds
TIMEOUT_DURATION = 3
+SESSION_POOL_TIMEOUT = 120 # seconds without heartbeat before session is reaped
# Dictionary to maintain the user pool
@@ -147,6 +148,17 @@
aquire_func = clean_up_lock.aquire_lock
renew_func = clean_up_lock.renew_lock
release_func = clean_up_lock.release_lock
+
+ session_cleanup_lock = RedisLock(
+ redis_url=WEBSOCKET_REDIS_URL,
+ lock_name=f"{REDIS_KEY_PREFIX}:session_cleanup_lock",
+ timeout_secs=WEBSOCKET_REDIS_LOCK_TIMEOUT,
+ redis_sentinels=redis_sentinels,
+ redis_cluster=WEBSOCKET_REDIS_CLUSTER,
+ )
+ session_aquire_func = session_cleanup_lock.aquire_lock
+ session_renew_func = session_cleanup_lock.renew_lock
+ session_release_func = session_cleanup_lock.release_lock
else:
MODELS = {}
@@ -154,6 +166,7 @@
USAGE_POOL = {}
aquire_func = release_func = renew_func = lambda: True
+ session_aquire_func = session_release_func = session_renew_func = lambda: True
YDOC_MANAGER = YdocManager(
@@ -162,6 +175,31 @@
)
+async def periodic_session_pool_cleanup():
+ """Reap orphaned SESSION_POOL entries that missed heartbeats (e.g. crashed instance)."""
+ if not session_aquire_func():
+ log.debug("Session cleanup lock held by another node. Skipping.")
+ return
+
+ try:
+ while True:
+ if not session_renew_func():
+ log.error("Unable to renew session cleanup lock. Exiting.")
+ return
+
+ now = int(time.time())
+ for sid in list(SESSION_POOL.keys()):
+ entry = SESSION_POOL.get(sid)
+ if entry and now - entry.get("last_seen_at", 0) > SESSION_POOL_TIMEOUT:
+ log.warning(
+ f"Reaping orphaned session {sid} (user {entry.get('id')})"
+ )
+ del SESSION_POOL[sid]
+ await asyncio.sleep(SESSION_POOL_TIMEOUT)
+ finally:
+ session_release_func()
+
+
async def periodic_usage_pool_cleanup():
max_retries = 2
retry_delay = random.uniform(
@@ -313,15 +351,18 @@ async def connect(sid, environ, auth):
user = Users.get_user_by_id(data["id"])
if user:
- SESSION_POOL[sid] = user.model_dump(
- exclude=[
- "profile_image_url",
- "profile_banner_image_url",
- "date_of_birth",
- "bio",
- "gender",
- ]
- )
+ SESSION_POOL[sid] = {
+ **user.model_dump(
+ exclude=[
+ "profile_image_url",
+ "profile_banner_image_url",
+ "date_of_birth",
+ "bio",
+ "gender",
+ ]
+ ),
+ "last_seen_at": int(time.time()),
+ }
await sio.enter_room(sid, f"user:{user.id}")
@@ -340,15 +381,18 @@ async def user_join(sid, data):
if not user:
return
- SESSION_POOL[sid] = user.model_dump(
- exclude=[
- "profile_image_url",
- "profile_banner_image_url",
- "date_of_birth",
- "bio",
- "gender",
- ]
- )
+ SESSION_POOL[sid] = {
+ **user.model_dump(
+ exclude=[
+ "profile_image_url",
+ "profile_banner_image_url",
+ "date_of_birth",
+ "bio",
+ "gender",
+ ]
+ ),
+ "last_seen_at": int(time.time()),
+ }
await sio.enter_room(sid, f"user:{user.id}")
@@ -366,6 +410,7 @@ async def user_join(sid, data):
async def heartbeat(sid, data):
user = SESSION_POOL.get(sid)
if user:
+ SESSION_POOL[sid] = {**user, "last_seen_at": int(time.time())}
Users.update_last_active_by_id(user["id"])
@@ -709,6 +754,17 @@ async def disconnect(sid):
if sid in SESSION_POOL:
user = SESSION_POOL[sid]
del SESSION_POOL[sid]
+
+ # Clean up USAGE_POOL entries for this session
+ for model_id in list(USAGE_POOL.keys()):
+ connections = USAGE_POOL.get(model_id)
+ if connections and sid in connections:
+ del connections[sid]
+ if not connections:
+ del USAGE_POOL[model_id]
+ else:
+ USAGE_POOL[model_id] = connections
+
await YDOC_MANAGER.remove_user_from_all_documents(sid)
else:
pass
diff --git a/backend/open_webui/socket/utils.py b/backend/open_webui/socket/utils.py
index 327348626a..c33af2e71d 100644
--- a/backend/open_webui/socket/utils.py
+++ b/backend/open_webui/socket/utils.py
@@ -118,6 +118,8 @@ def setdefault(self, key, default=None):
class YdocManager:
+ COMPACTION_THRESHOLD = 500
+
def __init__(
self,
redis=None,
@@ -133,10 +135,42 @@ async def append_to_updates(self, document_id: str, update: bytes):
if self._redis:
redis_key = f"{self._redis_key_prefix}:{document_id}:updates"
await self._redis.rpush(redis_key, json.dumps(list(update)))
+ list_len = await self._redis.llen(redis_key)
+ if list_len >= self.COMPACTION_THRESHOLD:
+ await self._compact_updates_redis(document_id)
else:
if document_id not in self._updates:
self._updates[document_id] = []
self._updates[document_id].append(update)
+ if len(self._updates[document_id]) >= self.COMPACTION_THRESHOLD:
+ self._compact_updates_memory(document_id)
+
+ async def _compact_updates_redis(self, document_id: str):
+ """Rolling compaction: squash oldest half into one snapshot."""
+ redis_key = f"{self._redis_key_prefix}:{document_id}:updates"
+ all_updates = await self._redis.lrange(redis_key, 0, -1)
+ if len(all_updates) <= 1:
+ return
+ mid = len(all_updates) // 2
+ ydoc = Y.Doc()
+ for raw in all_updates[:mid]:
+ ydoc.apply_update(bytes(json.loads(raw)))
+ snapshot = json.dumps(list(ydoc.get_update()))
+ pipe = self._redis.pipeline()
+ pipe.delete(redis_key)
+ pipe.rpush(redis_key, snapshot, *all_updates[mid:])
+ await pipe.execute()
+
+ def _compact_updates_memory(self, document_id: str):
+ """Rolling compaction: squash oldest half into one snapshot."""
+ updates = self._updates.get(document_id, [])
+ if len(updates) <= 1:
+ return
+ mid = len(updates) // 2
+ ydoc = Y.Doc()
+ for update in updates[:mid]:
+ ydoc.apply_update(bytes(update))
+ self._updates[document_id] = [ydoc.get_update()] + updates[mid:]
async def get_updates(self, document_id: str) -> List[bytes]:
document_id = document_id.replace(":", "_")
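Both compaction paths above implement the same rolling scheme: once the log reaches the threshold, squash the oldest half into a single snapshot and keep the newer half intact. A generic sketch, where `merge` stands in for applying Yjs updates to a `Y.Doc` and re-encoding its state (`apply_update`/`get_update` in the hunk):

```python
COMPACTION_THRESHOLD = 500


def compact_updates(updates: list[bytes], merge) -> list[bytes]:
    """Squash the oldest half of an update log into one snapshot entry.

    Keeping the newer half as discrete updates preserves recent history
    while bounding the log length roughly between N/2 and N.
    """
    if len(updates) <= 1:
        return updates
    mid = len(updates) // 2
    snapshot = merge(updates[:mid])
    return [snapshot] + updates[mid:]
```

With byte concatenation as a toy merge, a five-entry log compacts to four entries headed by the snapshot.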
diff --git a/backend/open_webui/tools/builtin.py b/backend/open_webui/tools/builtin.py
index cec0375ab7..31c79484ab 100644
--- a/backend/open_webui/tools/builtin.py
+++ b/backend/open_webui/tools/builtin.py
@@ -1532,6 +1532,69 @@ async def search_knowledge_files(
return json.dumps({"error": str(e)})
+async def view_file(
+ file_id: str,
+ __request__: Request = None,
+ __user__: dict = None,
+ __model_knowledge__: Optional[list[dict]] = None,
+) -> str:
+ """
+ Get the full content of a file by its ID.
+
+ :param file_id: The ID of the file to retrieve
+ :return: JSON with the file's id, filename, and full text content
+ """
+ if __request__ is None:
+ return json.dumps({"error": "Request context not available"})
+
+ if not __user__:
+ return json.dumps({"error": "User context not available"})
+
+ try:
+ from open_webui.models.files import Files
+ from open_webui.routers.files import has_access_to_file
+
+ user_id = __user__.get("id")
+ user_role = __user__.get("role", "user")
+
+ file = Files.get_file_by_id(file_id)
+ if not file:
+ return json.dumps({"error": "File not found"})
+
+ if (
+ file.user_id != user_id
+ and user_role != "admin"
+ and not any(
+ item.get("type") == "file" and item.get("id") == file_id
+ for item in (__model_knowledge__ or [])
+ )
+ and not has_access_to_file(
+ file_id=file_id,
+ access_type="read",
+ user=UserModel(**__user__),
+ )
+ ):
+ return json.dumps({"error": "File not found"})
+
+ content = ""
+ if file.data:
+ content = file.data.get("content", "")
+
+ return json.dumps(
+ {
+ "id": file.id,
+ "filename": file.filename,
+ "content": content,
+ "updated_at": file.updated_at,
+ "created_at": file.created_at,
+ },
+ ensure_ascii=False,
+ )
+ except Exception as e:
+ log.exception(f"view_file error: {e}")
+ return json.dumps({"error": str(e)})
+
+
async def view_knowledge_file(
file_id: str,
__request__: Request = None,
diff --git a/backend/open_webui/utils/middleware.py b/backend/open_webui/utils/middleware.py
index 3354889e7e..ec7af7733b 100644
--- a/backend/open_webui/utils/middleware.py
+++ b/backend/open_webui/utils/middleware.py
@@ -1918,6 +1918,25 @@ async def convert_url_images_to_base64(form_data):
return form_data
+def load_messages_from_db(chat_id: str, message_id: str) -> Optional[list[dict]]:
+ """
+ Load the message chain from DB up to message_id,
+ keeping only LLM-relevant fields (role, content, output).
+ """
+ messages_map = Chats.get_messages_map_by_chat_id(chat_id)
+ if not messages_map:
+ return None
+
+ db_messages = get_message_list(messages_map, message_id)
+ if not db_messages:
+ return None
+
+ return [
+ {k: v for k, v in msg.items() if k in ("role", "content", "output", "files")}
+ for msg in db_messages
+ ]
+
+
def process_messages_with_output(messages: list[dict]) -> list[dict]:
"""
Process messages with OR-aligned output items for LLM consumption.
@@ -1950,6 +1969,44 @@ async def process_chat_payload(request, form_data, user, metadata, model):
form_data = apply_params_to_form_data(form_data, model)
log.debug(f"form_data: {form_data}")
+ # Load messages from DB when available — DB preserves structured 'output' items
+ # which the frontend strips, causing tool calls to be merged into content.
+ chat_id = metadata.get("chat_id")
+ parent_message_id = metadata.get("parent_message_id")
+
+ if chat_id and parent_message_id and not chat_id.startswith("local:"):
+ db_messages = load_messages_from_db(chat_id, parent_message_id)
+ if db_messages:
+ system_message = get_system_message(form_data.get("messages", []))
+ form_data["messages"] = (
+ [system_message, *db_messages] if system_message else db_messages
+ )
+
+ # Inject image files into content as image_url parts (mirrors frontend logic)
+ for message in form_data["messages"]:
+ image_files = [
+ f
+ for f in message.get("files", [])
+ if f.get("type") == "image"
+ or (f.get("content_type") or "").startswith("image/")
+ ]
+ if message.get("role") == "user" and image_files:
+ text_content = message.get("content", "")
+ if isinstance(text_content, str):
+ message["content"] = [
+ {"type": "text", "text": text_content},
+ *[
+ {
+ "type": "image_url",
+ "image_url": {"url": f["url"]},
+ }
+ for f in image_files
+ if f.get("url")
+ ],
+ ]
+ # Strip files field — it's been incorporated into content
+ message.pop("files", None)
+
# Process messages with OR-aligned output items for clean LLM messages
form_data["messages"] = process_messages_with_output(form_data.get("messages", []))
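The image-injection loop above mirrors the frontend: a user message with image attachments is rewritten from a plain string into OpenAI-style content parts. A self-contained sketch of one message's transformation (the function name is hypothetical; the part shapes match the hunk):

```python
def inject_image_parts(message: dict) -> dict:
    """Fold image file attachments into multimodal content parts."""
    image_files = [
        f
        for f in message.get("files", [])
        if f.get("type") == "image"
        or (f.get("content_type") or "").startswith("image/")
    ]
    if message.get("role") == "user" and image_files:
        text = message.get("content", "")
        if isinstance(text, str):
            message["content"] = [
                {"type": "text", "text": text},
                *[
                    {"type": "image_url", "image_url": {"url": f["url"]}}
                    for f in image_files
                    if f.get("url")
                ],
            ]
    message.pop("files", None)  # files are now folded into content
    return message
```

Messages without image attachments keep their string content untouched, so text-only chats are unaffected by the DB-loading path.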
@@ -2124,23 +2181,27 @@ async def process_chat_payload(request, form_data, user, metadata, model):
)
if "code_interpreter" in features and features["code_interpreter"]:
- form_data["messages"] = add_or_update_user_message(
- (
- request.app.state.config.CODE_INTERPRETER_PROMPT_TEMPLATE
- if request.app.state.config.CODE_INTERPRETER_PROMPT_TEMPLATE != ""
- else DEFAULT_CODE_INTERPRETER_PROMPT
- ),
- form_data["messages"],
- )
+            # Skip the XML-tag prompt injection when native function calling
+            # is enabled; execute_code is injected as a builtin tool instead
+ if metadata.get("params", {}).get("function_calling") != "native":
+ form_data["messages"] = add_or_update_user_message(
+ (
+ request.app.state.config.CODE_INTERPRETER_PROMPT_TEMPLATE
+ if request.app.state.config.CODE_INTERPRETER_PROMPT_TEMPLATE
+ != ""
+ else DEFAULT_CODE_INTERPRETER_PROMPT
+ ),
+ form_data["messages"],
+ )
tool_ids = form_data.pop("tool_ids", None)
files = form_data.pop("files", None)
- # Skills: inject manifest only — model uses view_skill tool to load full content on-demand
- user_skill_ids = form_data.pop("skill_ids", None) or []
- model_skill_ids = model.get("info", {}).get("meta", {}).get("skillIds", [])
+ # Skills
+ user_skill_ids = set(form_data.pop("skill_ids", None) or [])
+ model_skill_ids = set(model.get("info", {}).get("meta", {}).get("skillIds", []))
- all_skill_ids = list(set(user_skill_ids + model_skill_ids))
+ all_skill_ids = user_skill_ids | model_skill_ids
available_skills = []
if all_skill_ids:
from open_webui.models.skills import Skills as SkillsModel
@@ -2156,13 +2217,24 @@ async def process_chat_payload(request, form_data, user, metadata, model):
and s.is_active
]
- if available_skills:
- manifest = "