LLM Prototyping is a single-page HTML application for rapid web development and experimentation. It features a split-screen interface with an LLM (Large Language Model) chat on one side and a multi-tab code editor (HTML, CSS, JavaScript) on the other. This setup lets users ask an LLM for coding assistance and immediately test the generated web code in the same environment.
- OpenAI-Compatible LLM Chat:
- Connects to any OpenAI-compatible API endpoint for chat completions.
- Supports models like GPT-3.5-turbo, GPT-4, and local models via compatible endpoints (e.g., Ollama).
- Displays chat history with user and assistant messages, including highlighted code blocks.
- Configuration for API endpoint, key, and model name is stored in browser local storage.
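The settings persistence described above could look roughly like the sketch below. The key names (`llmEndpoint`, `llmApiKey`, `llmModel`) and function names are illustrative assumptions, not the app's actual identifiers; a `Map` stands in for `localStorage` so the sketch runs outside a browser.

```javascript
// A Map stands in for window.localStorage so this sketch runs anywhere;
// in the browser, swap store.set/get for localStorage.setItem/getItem.
const store = new Map();

function saveConfig({ endpoint, apiKey, model }) {
  store.set("llmEndpoint", endpoint);
  store.set("llmApiKey", apiKey);
  store.set("llmModel", model);
}

function loadConfig() {
  return {
    endpoint: store.get("llmEndpoint") || "",
    apiKey: store.get("llmApiKey") || "",
    model: store.get("llmModel") || "",
  };
}
```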
- Integrated Code Editors:
- Uses CodeMirror for a robust editing experience.
- Separate tabs for HTML, CSS, and JavaScript.
- Syntax highlighting for all supported languages.
- Live Preview & Download:
- Render Preview: Combines code from HTML, CSS, and JS editors and renders the result in an iframe modal for instant feedback.
- Download HTML: Packages the HTML, CSS (as an inline `<style>` block), and JavaScript (as an inline `<script>` block) into a single downloadable HTML file. The filename is derived from the content of the `<title>` tag in the HTML editor.
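The combining step used by both preview and download might look like the sketch below. The function name and exact document layout are assumptions; the real app may instead inject the style and script into an existing document from the HTML editor.

```javascript
// Hypothetical sketch: wrap the three editor buffers in one standalone
// HTML document, with CSS inlined in <style> and JS inlined in <script>.
function combineToHtml(html, css, js) {
  return [
    "<!DOCTYPE html>",
    "<html>",
    "<head>",
    `<style>\n${css}\n</style>`,
    "</head>",
    "<body>",
    html,
    `<script>\n${js}\n</script>`,
    "</body>",
    "</html>",
  ].join("\n");
}
```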
- User-Friendly Configuration:
- Simple modal for API settings.
- System messages guide users on configuration status and potential API key issues.
- Open the Application:
  - Download the `LLM-Prototyping.html` file.
  - Open it directly in your preferred web browser (e.g., Chrome, Firefox, Safari, Edge). No web server is needed.
- Configure the API:
- Click on the "Configure API" button located in the header of the LLM Chat panel.
- A modal dialog will appear. You need to provide the following:
- API Endpoint: The base URL for your LLM provider's chat completions API.
  - Example for OpenAI-compatible APIs: `https://api.openai.com/v1/chat/completions`
  - Example for a local LLM (such as Ollama with an OpenAI-compatible endpoint): `http://localhost:11434/v1/chat/completions` (ensure your local LLM is running and configured for API access).
- API Key: Your secret API key for the LLM provider.
  - Example: `sk-xxxxxxxxxxxxxxxxxxxxxxxxxxxx` (for OpenAI) or any key your local LLM endpoint might require.
- Model Name: The specific model you want to use.
  - Example: `gpt-4`, `gpt-3.5-turbo`, `ollama/llama2`, etc.
- Click "Save Configuration". The application will store these settings in your browser's local storage.
- The chat interface will update:
- If the API is configured and no previous errors were detected, a welcome message appears, and the chat is enabled.
- If any setting is missing, a message "Please configure the API settings..." is shown.
- If a previous API call failed due to authentication, a message "There might be an issue with your API key or endpoint..." is shown. Saving a new configuration will clear this message for a fresh attempt.
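With those three settings, a chat request to an OpenAI-compatible endpoint follows the standard wire format shown below. `buildChatRequest` is a hypothetical helper, not the app's actual code; building the request object separately just makes it easy to inspect.

```javascript
// Assemble an OpenAI-compatible chat completions request from the saved
// settings. Returns { url, options } suitable for fetch().
function buildChatRequest(config, messages) {
  return {
    url: config.endpoint,
    options: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        "Authorization": `Bearer ${config.apiKey}`,
      },
      body: JSON.stringify({ model: config.model, messages: messages }),
    },
  };
}
```

In the browser this would be used as `const res = await fetch(url, options)`, with the assistant's reply at `data.choices[0].message.content` after `const data = await res.json()`.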

- Using the LLM Chat:
- Once the API is successfully configured, the chat input field will be enabled.
- Type your message or prompt in the text area at the bottom of the chat panel.
- Click the "Send" button or press Enter (without Shift) to send your message.
- The LLM's response will be displayed in the chat area. User messages appear on the right, and assistant messages on the left. Code blocks within responses are automatically formatted and highlighted.
- Click "New Chat" to clear the current conversation history and start fresh. This is only active if the API is configured and no API key errors are flagged.
- Using the Web Prototyping Editors:
- Switching Tabs: Click on the "HTML", "CSS", or "JavaScript" tabs located above the code editor area to switch between the respective editors.
- Writing Code: Type or paste your code directly into the active editor. The editors provide syntax highlighting. Default boilerplate code is provided to get you started.
- Render Preview:
- Click the "Render Preview" button.
- This combines the content from the HTML, CSS (as an inline `<style>` tag), and JavaScript (as an inline `<script>` tag) editors into a single HTML structure.
- The combined page is then rendered in an iframe within a modal window that appears over the application.
- Click the "×" button on the modal or press the Escape key to close the preview.
- Download HTML:
- Click the "Download HTML" button.
- This also combines the code from all three editors in the same way as the preview.
- The application will generate a single `.html` file and trigger a download to your computer. The filename is automatically generated from the `<title>` tag in your HTML code, or defaults to `webpage.html`.
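The filename derivation might look like the sketch below. The sanitization rules (lowercasing, replacing unsafe characters with hyphens) are assumptions; only the `<title>`-or-`webpage.html` behavior is described by the app.

```javascript
// Derive a download filename from the first <title> tag in the HTML source,
// falling back to "webpage.html" when no usable title is found.
function deriveFilename(htmlSource) {
  const match = /<title[^>]*>([^<]*)<\/title>/i.exec(htmlSource);
  const title = match ? match[1].trim() : "";
  if (!title) return "webpage.html";
  // Replace runs of characters that are unsafe in filenames with a hyphen.
  return title.toLowerCase().replace(/[^a-z0-9._-]+/g, "-") + ".html";
}
```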
Here are some potential ideas for future development:
- Persistent Code Snippets:
- Save and load HTML, CSS, and JavaScript code snippets to/from the browser's local storage or allow users to download/upload them as separate files.
- Allow naming and managing multiple project snippets.
- Code Formatting:
- Integrate a code formatter like Prettier or JSBeautify to automatically format the code in the editors with a button click.
- Advanced LLM Options:
- Allow users to configure more LLM parameters (e.g., temperature, max tokens, system prompt) via the API configuration modal.
- Multiple API Provider Support:
- Add pre-sets or easier configuration for other LLM API providers beyond just OpenAI-compatible ones.
- Editor Customization:
- User-selectable CodeMirror themes.
- Adjustable font sizes or other editor settings (e.g., line wrapping, tab size).
- Context Management:
- More sophisticated context management for LLM interactions (e.g., buttons to selectively include content from HTML/CSS/JS editors in the prompt to the LLM).
- File Import/Export (Individual Files):
- Allow importing existing `.html`, `.css`, and `.js` files directly into the respective editors.
- Allow exporting the content of each editor to its respective file type.
- Error Handling & Validation:
- Basic syntax validation within the editors.
- More detailed and user-friendly error reporting from the LLM API calls, beyond the current authentication checks.
- UI/UX Improvements:
- Resizable chat and editor panels.
- Improved layout and responsiveness for smaller screens.
- Loading indicators for editor actions if they become more complex.
- Direct Code Insertion:
- A "copy to editor" button for code blocks in chat messages to directly insert code into the active editor tab.
This project is built with plain HTML, CSS (no frameworks), and JavaScript, using CodeMirror for the editor components. All functionality is contained in `LLM-Prototyping.html`, `style.css`, and `script.js`.
