
UCMNet: Uncertainty-Aware Context Memory Network
for Under-Display Camera Image Restoration (CVPR'26)

Daehyun Kim¹ · Youngmin Kim¹,² · Yoon Ju Oh¹ · Tae Hyun Kim¹†

¹Hanyang University    ²Agency for Defense Development (ADD)

†Co-corresponding author


We propose UCMNet, a lightweight Uncertainty-aware Context-Memory Network for under-display camera (UDC) image restoration. Unlike previous methods that apply uniform restoration across the image, UCMNet performs uncertainty-aware adaptive processing to restore high-frequency details in regions with varying degradation.
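To give a feel for the uncertainty-aware idea, here is an illustrative sketch of the standard aleatoric-uncertainty formulation: the network predicts a per-pixel log-variance alongside the restored image, and the reconstruction error is attenuated where predicted uncertainty is high. This is a generic example in NumPy, not UCMNet's actual loss; the function name and shapes are assumptions for illustration.

```python
import numpy as np

def uncertainty_weighted_l1(pred, target, log_var):
    """Per-pixel L1 loss attenuated by a predicted log-variance map.

    Pixels with large log_var (high predicted uncertainty) contribute
    less reconstruction error, but the +log_var regularizer prevents
    the network from trivially declaring everything uncertain.
    """
    residual = np.abs(pred - target)
    return np.mean(residual * np.exp(-log_var) + log_var)

# Toy example: uniform error of 0.5, zero predicted uncertainty.
pred = np.zeros((4, 4))
target = np.full((4, 4), 0.5)
loss = uncertainty_weighted_l1(pred, target, np.zeros((4, 4)))
```

With zero log-variance the weighting is the identity, so the loss reduces to the plain mean L1 error of 0.5.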

Installation

Our implementation follows the experimental settings of previous UDC restoration works (e.g., BNUDC and FSI).
Please ensure that scikit-image==0.19.3 is installed.

pip install -r requirements.txt
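For a clean environment, a typical setup might look like the following. The environment name and Python version are assumptions, not taken from the repository; adjust them to match your CUDA/PyTorch combination.

```shell
# Hypothetical environment setup; python=3.9 is an assumption.
conda create -n ucmnet python=3.9 -y
conda activate ucmnet
pip install -r requirements.txt
# Confirm the pinned scikit-image version is the one installed (expect 0.19.3).
python -c "import skimage; print(skimage.__version__)"
```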

Data Preparation

POLED: https://yzhouas.github.io/projects/UDC/udc.html

TOLED: https://yzhouas.github.io/projects/UDC/udc.html

SYNTH: https://drive.google.com/drive/folders/13dZxX_9CI6CeS4zKd2SWGeT-7awhgaJF

Pretrained Weights

POLED: ./checkpoints/POLED.pth

TOLED: ./checkpoints/TOLED.pth

SYNTH: ./checkpoints/SYNTH.pth

Evaluation

python testing_n_saving.py

Datasets can be converted using option.py.
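UDC restoration results are conventionally reported in PSNR/SSIM; with the pinned scikit-image==0.19.3, these live in `skimage.metrics` as `peak_signal_noise_ratio` and `structural_similarity`. For reference, PSNR is simply 10·log10(R²/MSE); a dependency-free sketch (illustrative, not the repository's evaluation code):

```python
import numpy as np

def psnr(img, ref, data_range=1.0):
    """Peak signal-to-noise ratio: 10 * log10(R^2 / MSE)."""
    mse = np.mean((img.astype(np.float64) - ref.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((data_range ** 2) / mse)

# Uniform error of 0.1 on a [0, 1] image -> MSE = 0.01 -> PSNR = 20 dB
noisy = np.full((8, 8), 0.1)
clean = np.zeros((8, 8))
```

When comparing against published numbers, make sure the same `data_range` and color space (RGB vs. luminance) are used, since these choices shift PSNR by several dB.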

[Figure] Visual comparisons on the POLED dataset.

[Figure] Visual comparisons on the TOLED dataset.

Training

python training_n_recording.py

Citation

@inproceedings{kim2026UCMNet,
  title={UCMNet: Uncertainty-Aware Context Memory Network for Under-Display Camera Image Restoration},
  author={Kim, Daehyun and Kim, Youngmin and Oh, Yoon Ju and Kim, Tae Hyun},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2026}
}

Acknowledgement

We gratefully acknowledge the authors of BNUDC and DARKIR for their outstanding work and publicly released code, which laid the foundation for this project.
