This repository contains a contextual LoRA trained for black-forest-labs/FLUX.2-klein-9B. It generates 1024×1024 font atlases from a single reference image (in the same style/format as the provided examples).
Disclaimer: it works well, but not perfectly. Expect occasional artifacts and minor alignment issues.
- LoRA weights: `Ref2FontV1.safetensors`
- ComfyUI workflow: `Example Workflow/` (see the notes inside the workflow nodes)
- Examples: `Example/` (input images + generated atlases)
- Post-processing scripts: `flux_pipeline.py`, `flux_grid_to_ttf.py`, `flux_upscale.py`
The post-processing scripts require Python 3.10+ and these packages:

```
numpy
pillow
fonttools
scikit-image
tqdm
```
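If you want to verify that everything is importable, here is a small check. Note that the import names differ from the pip package names for pillow, fonttools, and scikit-image:

```python
# Quick dependency check for the post-processing scripts.
import importlib

# pip name -> import name: pillow -> PIL, fonttools -> fontTools,
# scikit-image -> skimage.
for module in ("numpy", "PIL", "fontTools", "skimage", "tqdm"):
    try:
        importlib.import_module(module)
        print(f"ok: {module}")
    except ImportError:
        print(f"missing: {module}")
```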
The upscaler is optional. If you want to use `flux_upscale.py`, install PyTorch separately (CPU or CUDA build):

```
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu128
```
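To confirm which build you ended up with, a quick check (assumes PyTorch installed successfully):

```python
import torch

# Prints the installed PyTorch version and whether a CUDA device is usable.
# False means the CPU build is active, or no GPU is visible.
print(torch.__version__)
print(torch.cuda.is_available())
```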
To set up the scripts locally:

```
git clone https://github.com/SnJake/Ref2Font.git
cd Ref2Font

# from the repo root
python -m venv venv
venv\Scripts\activate
pip install -r requirements.txt

# optional, for the upscaler
pip3 install torch torchvision --index-url https://download.pytorch.org/whl/cu128
```

The workflow is in `Example Workflow/`. It already contains detailed notes inside the nodes.
Required model files:
- Base model (FLUX.2 Klein 9B):
  https://huggingface.co/black-forest-labs/FLUX.2-klein-9B/blob/main/flux-2-klein-9b.safetensors
  Place in: `ComfyUI/models/diffusion_models`
- Text encoder (Qwen):
  https://huggingface.co/Comfy-Org/vae-text-encorder-for-flux-klein-9b/blob/main/split_files/text_encoders/qwen_3_8b.safetensors
  Place in: `ComfyUI/models/text_encoders`
- VAE:
  https://huggingface.co/Comfy-Org/vae-text-encorder-for-flux-klein-9b/blob/main/split_files/vae/flux2-vae.safetensors
  Place in: `ComfyUI/models/vae`
- LoRA:
  `Ref2FontV1.safetensors` from this repository, or from CivitAI.
  Place in: `ComfyUI/models/loras`
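To double-check that every file landed in the right folder, a small sanity-check sketch (`COMFYUI_ROOT` is a placeholder; point it at your own install):

```python
from pathlib import Path

COMFYUI_ROOT = Path(r"C:\ComfyUI")  # placeholder; adjust to your install

# Expected locations, per the placement notes above.
expected = [
    COMFYUI_ROOT / "models/diffusion_models/flux-2-klein-9b.safetensors",
    COMFYUI_ROOT / "models/text_encoders/qwen_3_8b.safetensors",
    COMFYUI_ROOT / "models/vae/flux2-vae.safetensors",
    COMFYUI_ROOT / "models/loras/Ref2FontV1.safetensors",
]

for path in expected:
    print("OK " if path.exists() else "MISSING", path)
```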
Input image requirements:
- Strictly black & white (no gray, no shadows, no volume)
- Exactly 1024×1024
- Follow the examples in `Example/` (a quick validation sketch follows this list)
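A minimal validation sketch, using Pillow and NumPy from the requirements above (the tolerance and file name are illustrative):

```python
import numpy as np
from PIL import Image

def check_reference(path: str, tolerance: int = 8) -> None:
    """Warn if a reference image breaks the size or black/white rules."""
    img = Image.open(path).convert("L")  # grayscale for the pixel check
    if img.size != (1024, 1024):
        print(f"size is {img.size}, expected (1024, 1024)")
    pixels = np.asarray(img)
    # Flag pixels that are neither near-black nor near-white.
    gray = (pixels > tolerance) & (pixels < 255 - tolerance)
    if gray.any():
        print(f"{gray.mean() * 100:.2f}% of pixels are gray, not pure black/white")

check_reference("my_reference.png")  # hypothetical input file
```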
After you generate the atlas, use the pipeline script to convert the atlas into a TTF font. For example:

```
python flux_pipeline.py ^
--input "D:\ComfyUI_temp_nckhb_00003_.png" ^
--output-dir "G:\Flux 2 Klein 9B\test" ^
--no-upscale ^
--use-grid ^
--vectorize contours ^
--simplify 0.5 ^
--canvas 1024 ^
--contour-level 0.5 ^
--trace-scale 4 ^
--trace-blur 1.0 ^
--smooth-iters 2 ^
--baseline-mode auto ^
--baseline-quantile 0.9 ^
--baseline-min-pixels 20 ^
--cols 8 ^
--rows 9 ^
--no-auto-invert
```

The same command with placeholder paths:

```
python flux_pipeline.py ^
--input "path/to/atlas_name.png" ^
--output-dir "output/dir" ^
--no-upscale ^
--use-grid ^
--vectorize contours ^
--simplify 0.5 ^
--canvas 1024 ^
--contour-level 0.5 ^
--trace-scale 4 ^
--trace-blur 1.0 ^
--smooth-iters 2 ^
--baseline-mode auto ^
--baseline-quantile 0.9 ^
--baseline-min-pixels 20 ^
--cols 8 ^
--rows 9 ^
--no-auto-invert
```
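The `--cols 8 --rows 9` flags describe the atlas grid layout. For intuition, here is a sketch of how such a grid splits into 72 glyph cells (the actual cell-to-character mapping is handled by `flux_grid_to_ttf.py`; this is not the pipeline's own code):

```python
from PIL import Image

COLS, ROWS = 8, 9  # matches --cols 8 --rows 9 above

atlas = Image.open("atlas_name.png")  # hypothetical atlas file
cell_w = atlas.width // COLS   # 1024 // 8 = 128 px
cell_h = atlas.height // ROWS  # 1024 // 9 = 113 px (integer division)

cells = [
    atlas.crop((c * cell_w, r * cell_h, (c + 1) * cell_w, (r + 1) * cell_h))
    for r in range(ROWS)
    for c in range(COLS)
]
print(f"extracted {len(cells)} cells of {cell_w}x{cell_h} px")
```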
The full workflow, start to finish:
- Download the base models (see links above) and place them in the ComfyUI folders.
- Download the LoRA and put it in `ComfyUI/models/loras`.
- Create the input image (1024×1024, pure black and white, like the examples). You can create the input image in Nano Banana Pro or a similar model.
- Run the ComfyUI workflow (`Example Workflow/`) and generate the atlas.
- Create and activate a venv, then install dependencies.
- Run `flux_pipeline.py` with your atlas path to generate the TTF.

Tips:
- `flux_upscale.py` is optional. You can skip upscaling with `--no-upscale`.
- If you see odd inversion, try removing `--no-auto-invert` or adding `--invert`.
- If you see vertical jitter, use `--baseline-mode auto` (enabled in the example above).
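For intuition on the inversion flags: auto-inversion typically decides polarity from overall brightness. A toy heuristic along those lines (an illustrative assumption, not necessarily what `flux_pipeline.py` actually does):

```python
import numpy as np
from PIL import Image

def looks_inverted(path: str) -> bool:
    """Guess whether an atlas is white glyphs on black (inverted)."""
    pixels = np.asarray(Image.open(path).convert("L"))
    # An atlas is mostly background; a dark mean suggests a black
    # background with white glyphs, i.e. inverted polarity.
    return float(pixels.mean()) < 128

# If this returns True for your atlas, try passing --invert.
print(looks_inverted("atlas_name.png"))  # hypothetical atlas file
```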
License: MIT