
CCF: Complementary Collaborative Fusion for Domain Generalized Multi-Modal 3D Object Detection

To appear at CVPR 2026

Abstract

Multi-modal fusion has emerged as a promising paradigm for accurate 3D object detection. However, detection performance degrades substantially when models are deployed in target domains that differ from the training domain. In this work, focusing on dual-branch proposal-level detectors, we identify two factors that limit robust cross-domain generalization: 1) in challenging domains such as rain or nighttime, one modality may undergo severe degradation; 2) the LiDAR branch often dominates the detection process, leading to systematic underutilization of visual cues and vulnerability when point clouds are compromised.

To address these challenges, we propose three components. First, Query-Decoupled Loss provides independent supervision for 2D-only, 3D-only, and fused queries, rebalancing gradient flow across modalities. Second, LiDAR-Guided Depth Prior augments 2D queries with instance-aware geometric priors through probabilistic fusion of image-predicted and LiDAR-derived depth distributions, improving their spatial initialization. Third, Complementary Cross-Modal Masking applies complementary spatial masks to the image and point cloud, encouraging queries from both modalities to compete within the fused decoder and thereby promoting adaptive fusion.
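The three components above can be illustrated with a minimal, dependency-light sketch. Everything here is an illustrative assumption rather than the paper's implementation: the box parameterization, the product-of-experts fusion rule for the depth distributions, and the coarse grid used for the complementary masks are all placeholders chosen to show the structure of each idea.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- 1) Query-decoupled supervision (hypothetical shapes) ---
# Each query set (2D-only, 3D-only, fused) gets its own loss term against the
# same targets, so gradients are not routed only through the fused branch.
def l1_loss(pred, target):
    return np.abs(pred - target).mean()

targets = rng.normal(size=(8, 7))  # 8 ground-truth boxes, 7 box parameters (assumed)
pred_2d, pred_3d, pred_fused = (rng.normal(size=(8, 7)) for _ in range(3))
total_loss = (l1_loss(pred_2d, targets)
              + l1_loss(pred_3d, targets)
              + l1_loss(pred_fused, targets))

# --- 2) LiDAR-guided depth prior ---
# Fuse an image-predicted and a LiDAR-derived categorical depth distribution.
# A product-of-experts rule is assumed here; the paper's exact rule may differ.
def fuse_depth(p_img, p_lidar, eps=1e-8):
    fused = p_img * p_lidar + eps
    return fused / fused.sum(axis=-1, keepdims=True)

bins = 64
p_img = rng.dirichlet(np.ones(bins))    # image-predicted depth distribution
p_lidar = rng.dirichlet(np.ones(bins))  # LiDAR-derived depth distribution
p_fused = fuse_depth(p_img, p_lidar)    # sums to 1 by construction

# --- 3) Complementary cross-modal masking ---
# The image and the point cloud receive complementary spatial masks, so each
# region is visible to at least one modality and the fused decoder must rely
# on queries from both branches.
H, W = 4, 4  # coarse spatial grid (assumed granularity)
mask = rng.integers(0, 2, size=(H, W)).astype(float)
image_mask, points_mask = mask, 1.0 - mask  # complementary by construction
```

The sketch only shows the control flow of each component; in a real detector the losses would be matched via bipartite assignment and the masks applied to features rather than raw grids.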

Extensive experiments demonstrate substantial gains over state-of-the-art baselines while preserving source-domain performance. Code and models are publicly available in the CCF GitHub repository.

Code

Code cleaning is in progress. Please stay tuned for updates.

Framework Diagram

Below is the framework diagram from the paper:

[Framework diagram]

Main Results

[Main results figure]