Adding infos popup for LLMs (LLMListContainer component) #378
Pull Request Overview
This pull request integrates the models.dev API to provide rich model metadata tooltips for LLM providers in ChainForge. It adds backend caching of model information, frontend state management for this data, and enhanced UI tooltips with markdown formatting.
- Adds a Flask endpoint to fetch and cache models.dev API data locally
- Implements Zustand store integration for global model metadata state management
- Enhances LLM selection UI with informative tooltips displaying model capabilities, costs, and specifications
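The fetch-and-cache behavior described above can be sketched as follows. This is a minimal illustration of the pattern, not the PR's actual code: the function name, `cache_path`, and the injected `fetch_remote` callable are assumptions made so the logic can run without network access.

```python
import json
from pathlib import Path

def get_models_dot_dev(cache_path, fetch_remote):
    """Return models.dev metadata, serving a local cache when present.

    `cache_path` and `fetch_remote` are illustrative parameters; the real
    endpoint presumably uses its own file location and HTTP client.
    """
    path = Path(cache_path)
    if path.exists():
        # Serve the cached copy and skip the network round-trip.
        return json.loads(path.read_text())
    data = fetch_remote()  # e.g. requests.get(...).json() in the real app
    path.write_text(json.dumps(data))
    return data
```

Injecting the fetcher keeps the caching logic testable and makes the "only hit the API when the cache is cold" behavior explicit.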
Reviewed Changes
Copilot reviewed 8 out of 8 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| chainforge/flask_app.py | Adds /api/getModelsDotDev endpoint for fetching and caching model metadata |
| chainforge/react-server/src/store.tsx | Adds Zustand store properties for models.dev data |
| chainforge/react-server/src/backend/typing.ts | Defines TypeScript interfaces for models.dev API response structure |
| chainforge/react-server/src/backend/utils.ts | Implements model lookup and formatting utilities, improves image blob handling |
| chainforge/react-server/src/PromptNode.tsx | Fetches models.dev data on component initialization |
| chainforge/react-server/src/LLMListComponent.tsx | Integrates model tooltips into LLM selection menu |
| chainforge/react-server/src/NestedMenu.tsx | Updates tooltip rendering to support markdown and increased width |
| chainforge/react-server/src/GlobalSettingsModal.tsx | Populates Ollama model information for tooltip display |
```ts
  providerModelsDotDev = "amazon-bedrock";
} else if (llm_item.base_model.startsWith("ollama")) {
  providerModelsDotDev = "ollama";
  llm_item.model = llm_item.name;
```
This line modifies the input parameter llm_item.model, which could cause unexpected side effects for the caller. Consider creating a local copy or using a different approach to handle the Ollama model name mapping.
Suggested change:

```diff
- llm_item.model = llm_item.name;
+ modelName = llm_item.name;
```
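The pitfall flagged here is language-agnostic. The reviewed code is TypeScript, but the same side effect is easy to show in Python: mutating a parameter leaks the change back to the caller, while working on a copy does not. Both function names below are illustrative.

```python
def annotate_model_mutating(llm_item: dict) -> dict:
    """Anti-pattern: overwrites a field on the caller's own dict."""
    llm_item["model"] = llm_item["name"]
    return llm_item

def annotate_model_safe(llm_item: dict) -> dict:
    """Safe variant: works on a shallow copy, leaving the input untouched."""
    item = dict(llm_item)
    item["model"] = item["name"]
    return item
```

The safe variant is what the review asks for: the Ollama name mapping happens on a local value, so the caller's object is never silently rewritten.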
```python
# If the file does not exist, fetch it from the API
try:
    # Fetch the models from the API
    response = py_requests.get("https://models.dev/api.json")
```
The HTTP request lacks a timeout parameter, which could cause the application to hang indefinitely if the external API is unresponsive. Consider adding a timeout parameter like py_requests.get("https://models.dev/api.json", timeout=30).
Suggested change:

```diff
- response = py_requests.get("https://models.dev/api.json")
+ response = py_requests.get("https://models.dev/api.json", timeout=30)
```
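A timeout is only half the fix; the caller also has to handle the resulting exception. Here is one hedged sketch of the pattern, with the HTTP getter injected so the behavior can be exercised without real network access. The function name and the `None` fallback are assumptions, not the PR's actual code.

```python
def fetch_models_dot_dev(get, url="https://models.dev/api.json", timeout=30):
    """Fetch model metadata without risking an indefinite hang.

    `get` is an injected HTTP getter (e.g. requests.get), so timeouts can be
    simulated in tests with a stub that raises.
    """
    try:
        response = get(url, timeout=timeout)
        response.raise_for_status()
        return response.json()
    except Exception:
        # Timeouts, connection errors, and bad payloads all degrade to
        # "no metadata" instead of hanging or crashing the Flask app.
        return None
```

With this shape, an unresponsive models.dev API costs at most 30 seconds and the frontend simply renders tooltips without the extra metadata.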
```ts
  console.error("Error trying to fetch Ollama models", error);
});

fetch(`${Ollama_BaseURL}/api/`);
```
This fetch call appears to serve no purpose - it has no error handling, doesn't use the response, and doesn't have any side effects. This looks like leftover debugging code that should be removed.
Suggested change:

```diff
- fetch(`${Ollama_BaseURL}/api/`);
```
```ts
export function getModelsDotDevInfos(
  llm_item: LLMSpec,
  modelsDotDevInfos: ModelDotDevInfos,
): string {
```
The function documentation states it returns a 'ModelDotDevInfo object' but the actual return type is string. The documentation should be updated to reflect that it returns a formatted string representation of the model info.
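The fix the reviewer asks for is simply a docstring that agrees with the return type. The real function is TypeScript; this Python sketch shows the same idea with a `prettifyModelInfo`-style formatter, where the docstring, the `-> str` annotation, and the body all agree that the result is formatted text, not a structured object. The field names used are illustrative, not the exact models.dev schema.

```python
def prettify_model_info(model: dict) -> str:
    """Format model metadata as a markdown string.

    Returns display text, not a structured info object; the docstring and
    the annotation make the same promise.
    """
    lines = [f"**{model.get('name', 'Unknown model')}**"]
    if "context" in model:
        lines.append(f"- Context window: {model['context']} tokens")
    if "input_cost" in model:
        lines.append(f"- Input cost: ${model['input_cost']} per 1M tokens")
    return "\n".join(lines)
```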
@RoyHEyono I have just tested the configuration you suggested in issue #374. With the modifications I have done in the

This pull request introduces a new integration with the models.dev API to fetch and display detailed model metadata for LLM providers throughout the application. It adds backend support for fetching and caching model information, updates the frontend to display a rich tooltip with model details in the Prompt Node UI, and extends the Zustand-based global state and typings to support this new data. Additionally, it improves image blob handling and enhances the user experience with markdown-formatted tooltips.
Integration with models.dev API and Model Metadata Display:

Backend API for models.dev:
- Added a new Flask route `/api/getModelsDotDev` that fetches and caches model metadata from the models.dev API, storing it locally and serving it to the frontend.

Frontend fetching and state management:
- The fetched metadata is kept in the Zustand global store (`LLMsProvidersInfos`) and is accessed and updated in both the `PromptNode` and `GlobalSettingsModal` components. `LLMListComponent` uses this metadata to provide tooltips for each model in the selection menu, giving users rich contextual information.

Model metadata tooltips with markdown formatting:
- New utilities (`getModelsDotDevInfos`, `prettifyModelInfo`) extract and format model metadata for display. Tooltips are rendered with `ReactMarkdown` for improved readability, and the tooltip width is increased for better content display.

Typing and State Enhancements:

Extended typings:
- Added new TypeScript interfaces (`Model`, `ModelOllama`, `LLMProvider`, etc.) to ensure type safety and clarity throughout the codebase.

Other Improvements:
- Improved robustness when converting image blobs to base64 by ensuring blobs are of the correct image type before processing.
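The blob-robustness improvement above lives in `utils.ts` (TypeScript); the following Python sketch only illustrates the underlying idea of validating that a blob really is an image before encoding it. The signature table is an illustrative subset, and the function name is hypothetical.

```python
import base64
from typing import Optional

# Magic-byte signatures for a few common image types (illustrative subset).
IMAGE_SIGNATURES = {
    b"\x89PNG\r\n\x1a\n": "image/png",
    b"\xff\xd8\xff": "image/jpeg",
    b"GIF87a": "image/gif",
    b"GIF89a": "image/gif",
}

def image_blob_to_base64(blob: bytes) -> Optional[str]:
    """Base64-encode a blob only if its header matches a known image type."""
    for signature in IMAGE_SIGNATURES:
        if blob.startswith(signature):
            return base64.b64encode(blob).decode("ascii")
    # Unrecognized bytes: refuse to encode rather than pass junk downstream.
    return None
```

Checking the header first means malformed or non-image blobs are rejected early instead of being silently encoded and failing later in the UI.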