
Adding infos popup for LLMs (LLMListContainer component) #378

Open
loloMD wants to merge 3 commits into main from feature/LLMListContainer_infos_popup

Conversation

@loloMD (Collaborator) commented Aug 15, 2025

This pull request introduces a new integration with the models.dev API to fetch and display detailed model metadata for LLM providers throughout the application. It adds backend support for fetching and caching model information, updates the frontend to display a rich tooltip with model details in the Prompt Node UI, and extends the Zustand-based global state and typings to support this new data. Additionally, it improves image blob handling and enhances the user experience with markdown-formatted tooltips.

Integration with models.dev API and Model Metadata Display:

  • Backend API for models.dev:
    Added a new Flask route /api/getModelsDotDev that fetches and caches model metadata from the models.dev API, storing it locally and serving it to the frontend.

  • Frontend fetching and state management:

    • On initialization, the frontend fetches the models.dev metadata and stores it in the global Zustand store (LLMsProvidersInfos). This is accessed and updated in both PromptNode and GlobalSettingsModal components.
    • The LLMListComponent uses this metadata to provide tooltips for each model in the selection menu, giving users rich contextual information.
  • Model metadata tooltips with markdown formatting:

    • Added utility functions (getModelsDotDevInfos, prettifyModelInfo) to extract and format model metadata for display.
    • Tooltips are rendered using ReactMarkdown for improved readability and formatting, and the tooltip width is increased for better content display.

Typing and State Enhancements:
  • Extended typings:

    • Added comprehensive TypeScript interfaces for models.dev data structures (Model, ModelOllama, LLMProvider, etc.) to ensure type safety and clarity throughout the codebase.
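
The state wiring described above can be sketched as follows. This is a minimal, illustrative version of the pattern only: the real app uses a Zustand store in store.tsx, while the names LLMsProvidersInfos and /api/getModelsDotDev come from this PR; the trimmed ModelDotDevInfos shape here is a placeholder for the full typings in typing.ts.

```typescript
// Minimal sketch of the global-state contract described above.
// The real app uses a Zustand store (store.tsx); this module-level
// version only illustrates the read/write/init flow.
type ModelDotDevInfos = Record<string, { name?: string; cost?: unknown }>;

let LLMsProvidersInfos: ModelDotDevInfos = {};

function setLLMsProvidersInfos(infos: ModelDotDevInfos): void {
  LLMsProvidersInfos = infos;
}

function getLLMsProvidersInfos(): ModelDotDevInfos {
  return LLMsProvidersInfos;
}

// On initialization (e.g., in PromptNode), fetch the cached metadata once
// from the Flask endpoint and store it globally for tooltip rendering.
async function initModelsDotDev(baseURL = ""): Promise<void> {
  const res = await fetch(`${baseURL}/api/getModelsDotDev`);
  if (res.ok) setLLMsProvidersInfos(await res.json());
}
```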

Other Improvements:

  • Image blob handling:
    Improved robustness when converting image blobs to base64 by ensuring blobs are of the correct image type before processing.
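
The hardened blob handling can be sketched like this. It is illustrative only: the actual fix lives in the PR's utils changes, and the helper name ensureImageBlob and the image/png fallback are assumptions, not the PR's code.

```typescript
// Sketch: guard the blob-to-base64 path by checking the MIME type first.
// Some providers return blobs with a generic or empty type, which makes
// downstream APIs reject them ("Invalid MIME type. Only image types are
// supported."). Re-wrap such blobs with an explicit image type.
function ensureImageBlob(blob: Blob, fallbackType = "image/png"): Blob {
  if (blob.type.startsWith("image/")) return blob;
  return new Blob([blob], { type: fallbackType });
}
```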

@loloMD requested review from Copilot and ianarawjo on August 15, 2025 at 16:55
@loloMD added the "enhancement" (New feature or request) label on Aug 15, 2025

Copilot AI left a comment

Pull Request Overview

This pull request integrates the models.dev API to provide rich model metadata tooltips for LLM providers in ChainForge. It adds backend caching of model information, frontend state management for this data, and enhanced UI tooltips with markdown formatting.

  • Adds a Flask endpoint to fetch and cache models.dev API data locally
  • Implements Zustand store integration for global model metadata state management
  • Enhances LLM selection UI with informative tooltips displaying model capabilities, costs, and specifications

Reviewed Changes

Copilot reviewed 8 out of 8 changed files in this pull request and generated 4 comments.

Summary of changes per file:

  • chainforge/flask_app.py: Adds /api/getModelsDotDev endpoint for fetching and caching model metadata
  • chainforge/react-server/src/store.tsx: Adds Zustand store properties for models.dev data
  • chainforge/react-server/src/backend/typing.ts: Defines TypeScript interfaces for the models.dev API response structure
  • chainforge/react-server/src/backend/utils.ts: Implements model lookup and formatting utilities; improves image blob handling
  • chainforge/react-server/src/PromptNode.tsx: Fetches models.dev data on component initialization
  • chainforge/react-server/src/LLMListComponent.tsx: Integrates model tooltips into the LLM selection menu
  • chainforge/react-server/src/NestedMenu.tsx: Updates tooltip rendering to support markdown and increased width
  • chainforge/react-server/src/GlobalSettingsModal.tsx: Populates Ollama model information for tooltip display


providerModelsDotDev = "amazon-bedrock";
} else if (llm_item.base_model.startsWith("ollama")) {
providerModelsDotDev = "ollama";
llm_item.model = llm_item.name;

Copilot AI Aug 15, 2025

This line modifies the input parameter llm_item.model, which could cause unexpected side effects for the caller. Consider creating a local copy or using a different approach to handle the Ollama model name mapping.

Suggested change
llm_item.model = llm_item.name;
modelName = llm_item.name;
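
The non-mutating approach this review suggests could be sketched as below. The helper resolveModelName and the trimmed LLMSpecLite interface are illustrative names, not the PR's actual code; the field names follow the snippet above.

```typescript
// Sketch: resolve the models.dev lookup key locally instead of writing
// back into the caller's llm_item, avoiding side effects for the caller.
interface LLMSpecLite {
  name: string;
  model: string;
  base_model: string;
}

function resolveModelName(llm_item: LLMSpecLite): string {
  // Ollama models are keyed by their display name, so prefer `name`
  // for ollama-based specs without mutating the input object.
  return llm_item.base_model.startsWith("ollama")
    ? llm_item.name
    : llm_item.model;
}
```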

Comment thread: chainforge/flask_app.py
# If the file does not exist, fetch it from the API
try:
# Fetch the models from the API
response = py_requests.get("https://models.dev/api.json")

Copilot AI Aug 15, 2025

The HTTP request lacks a timeout parameter, which could cause the application to hang indefinitely if the external API is unresponsive. Consider adding a timeout parameter like py_requests.get("https://models.dev/api.json", timeout=30).

Suggested change
response = py_requests.get("https://models.dev/api.json")
response = py_requests.get("https://models.dev/api.json", timeout=30)

console.error("Error trying to fetch Ollama models", error);
});

fetch(`${Ollama_BaseURL}/api/`);

Copilot AI Aug 15, 2025

This fetch call appears to serve no purpose - it has no error handling, doesn't use the response, and doesn't have any side effects. This looks like leftover debugging code that should be removed.

Suggested change
fetch(`${Ollama_BaseURL}/api/`);

export function getModelsDotDevInfos(
llm_item: LLMSpec,
modelsDotDevInfos: ModelDotDevInfos,
): string {

Copilot AI Aug 15, 2025

The function documentation states it returns a 'ModelDotDevInfo object' but the actual return type is string. The documentation should be updated to reflect that it returns a formatted string representation of the model info.

@loloMD (Collaborator, Author) commented Aug 15, 2025

@RoyHEyono I have just tested the configuration you suggested in issue #374.

With the modifications I made in the utils.tsx file, it is fixed!

[screenshot]


Labels: enhancement (New feature or request)

Projects: none yet

Development: Successfully merging this pull request may close these issues:

  • GPT Image 1 to GPT 4o: "Invalid MIME type. Only image types are supported."

3 participants