This project is a comprehensive solution for converting 2D floor plan images into interactive 3D models. It utilizes deep learning for semantic understanding and morphological image processing for 3D geometry generation.
The following images demonstrate the complete workflow from floor plan input to 3D model output:
*(Workflow images: 2D floor plan input → segmentation masks → extruded 3D model.)*
The project is organized into three main modules:
- SplitFloor: A deep learning module responsible for detecting structural elements (Walls, Doors, Windows).
- SemaFloor: A deep learning module dedicated to detecting room boundaries and habitable areas.
- ForgeFloor: The core pipeline that orchestrates the neural networks, processes the geometric data, and extrudes the final 3D model.
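The three modules hand data to each other in sequence. The sketch below illustrates that flow with placeholder stage functions; the function names, signatures, and return values are assumptions for illustration, not the project's actual API:

```python
import numpy as np

# Illustrative data flow only: these placeholders stand in for the
# SplitFloor, SemaFloor, and ForgeFloor modules.
def split_floor(image):
    """Predict structural masks (walls, doors, windows)."""
    return {"walls": image < 100, "doors": np.zeros(image.shape, bool),
            "windows": np.zeros(image.shape, bool)}

def sema_floor(image):
    """Predict room / habitable-area masks."""
    return {"rooms": image >= 100}

def forge_floor(structure, rooms):
    """Clean the masks and extrude them into 3D geometry (stubbed)."""
    return {"wall_pixels": int(structure["walls"].sum()),
            "room_pixels": int(rooms["rooms"].sum())}

image = np.full((64, 64), 255, dtype=np.uint8)  # a blank "floor plan"
image[10:20, :] = 0                              # a dark horizontal wall band
model = forge_floor(split_floor(image), sema_floor(image))
```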
Ensure you have Python 3.8+ installed. Install the required dependencies:
```bash
pip install torch torchvision numpy matplotlib opencv-python pillow pyvista scikit-image tqdm pandas albumentations scipy scikit-learn
```

Note: A CUDA-capable GPU is recommended for training and fast inference.
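A quick, hedged way to check that the dependencies above are importable before running the pipeline (note that several packages import under a different name than they install under, e.g. opencv-python imports as cv2 and scikit-image as skimage):

```python
import importlib.util

def missing_deps(import_names):
    """Return the import names not findable in the current environment."""
    return [name for name in import_names
            if importlib.util.find_spec(name) is None]

# Import names for the packages in the pip command above (opencv-python -> cv2,
# pillow -> PIL, scikit-image -> skimage, scikit-learn -> sklearn).
deps = ["torch", "torchvision", "numpy", "matplotlib", "cv2", "PIL",
        "pyvista", "skimage", "tqdm", "pandas", "albumentations", "scipy",
        "sklearn"]
missing = missing_deps(deps)
if missing:
    print("Missing packages:", ", ".join(missing))
```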
The main entry point for the pipeline is located in the ForgeFloor directory.
To launch the interactive application with live preview:
```bash
python ForgeFloor/extruder.py --gui
```

To process an image automatically:

```bash
python ForgeFloor/extruder.py --input path/to/floorplan.jpg --auto --output_dir outputs/
```

- Input: A raw 2D floor plan image (JPG/PNG).
- Inference: SplitFloor extracts structural masks (walls, windows, doors); SemaFloor extracts room masks.
- Processing: ForgeFloor cleans the masks using morphological operations.
- Extrusion: The cleaned 2D data is extruded into 3D meshes.
- Output: A 3D OBJ model, visualization overlays, and segmentation CSVs.
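The morphological cleanup step can be sketched in plain NumPy: a binary closing (dilation followed by erosion) fills small gaps in a predicted wall mask. This is a minimal stand-in under assumed conventions, not ForgeFloor's actual implementation:

```python
import numpy as np

def dilate(mask):
    """3x3 binary dilation: a pixel becomes 1 if any neighbour is 1."""
    h, w = mask.shape
    p = np.pad(mask, 1)  # zero padding
    return np.max([p[i:i + h, j:j + w]
                   for i in range(3) for j in range(3)], axis=0)

def erode(mask):
    """3x3 binary erosion: a pixel stays 1 only if all neighbours are 1."""
    h, w = mask.shape
    p = np.pad(mask, 1, constant_values=1)  # pad with 1s so borders survive
    return np.min([p[i:i + h, j:j + w]
                   for i in range(3) for j in range(3)], axis=0)

def close_mask(mask, iterations=1):
    """Binary closing: dilate then erode, bridging gaps of a few pixels."""
    for _ in range(iterations):
        mask = dilate(mask)
    for _ in range(iterations):
        mask = erode(mask)
    return mask

# A wall segment with a one-pixel break; closing repairs it.
wall = np.array([[1, 1, 0, 1, 1]], dtype=np.uint8)
fixed = close_mask(wall)
```

In practice, OpenCV and scikit-image (both installed above) provide optimized equivalents of these operations with configurable structuring elements.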