Actions: AtomicBot-ai/dflash

Showing runs from all workflows
10 workflow runs
Workflow: Build & Release DFlash MLX Server (macOS ARM64)

#10  Enhance _extract_text function to support chat_template_kwargs
     Commit 89d59f0 pushed by Vect0rM · main · 3m 45s
#9   Implement message normalization for chat templates in inference server
     Commit f486034 pushed by Vect0rM · main · 2m 40s
#8   Enhance tool call parsing and logging in inference server
     Commit a6764b6 pushed by Vect0rM · main · 3m 16s
#7   Enhance tool call handling in inference server
     Commit 177009c pushed by Vect0rM · main · 3m 18s
#6   Update max_tokens handling in chat completions to support new paramet…
     Commit b742c32 pushed by Vect0rM · main · 2m 57s
#5   Add handling for Qwen3-style templates in generation prompt
     Commit a53b897 pushed by Vect0rM · main · 3m 1s
#4   Enhance draft model loading and response handling
     Commit 18d013e pushed by Vect0rM · main · 3m 37s
#3   fix: run MLX generation in thread pool + add server integration tests
     Commit 645586f pushed by Vect0rM · main · 3m 33s
#2   Refactor model loading logic to handle file paths correctly and impro…
     Commit 643d418 pushed by Vect0rM · main · 3m 11s
#1   Add server dependencies for MLX backend in pyproject.toml
     Commit e14f921 pushed by Vect0rM · main · 3m 41s