[AI] Add RAW denoise (RawNIND, Bayer + Linear) to neural restore module #20854
TurboGit merged 9 commits into darktable-org:master
Conversation
force-pushed from 5807487 to b165dc6
@andriiryzhkov : Nice work! Do you have some before/after screenshots to share, to see where we stand with this? TIA.
force-pushed from b165dc6 to 5c980c9
Nice work, thank you! I was reading the following paragraph
and was wondering why you did not want to make it an iop like the other raw denoising module. I have to say I'm not a big fan of having auxiliary DNG files flying around on my HDD if they are not strictly needed.
Thank you @andriiryzhkov for the PR. It works for me. I have one issue: if I use this file (the poor mallard subjected to so many tests ...), a Canon CR3 raw from a Canon R5mk2, then I get a DNG file which is too large. The DNG is padded with black rectangles. If I use the same file for the non-raw ...

Example: neural restore -> raw denoise

If I check the raw file in ... it seems that the ... This is the ...
@KarlMagnusLarsson, fix pushed. The CFA DNG writer was setting ... Could you retest with the same Canon R5II raw? It should now produce a clean DNG without the black padding. Thanks for the catch.
Works for me. Thank you @andriiryzhkov.
@da-phil : Thanks for the question – fair point, worth explaining the design choice. Every task in ...

It could be an IOP architecturally – the inputs/outputs are pipeline-shaped. What's missing is the infrastructure to make a 10–30 s inference pass practical inside the pipeline: ...

The "produce a DNG" wrapper is the pragmatic 5.6 shape (roughly sketched below) – the architectural questions above would need to land before AI tasks could become first-class IOPs.
All valid points, indeed – thanks for addressing them. Will try out the functionality soon.
I get artifacts with ... If I use this Canon R5mk2 CR3 raw file (102A6405.CR3):

Test: ...
then I get: ...

I also get the same color bleeding or chromatic aberration effect in high-contrast areas between the hand and the background.

These artifacts are not present in the raw file after adding it to the library, and they are not present if I do the non-raw `neural restore -> denoise -> strength = 100%`.

EDIT: Nvidia CUDA, NVIDIA Quadro RTX 4000 8 GB
@KarlMagnusLarsson :
The fix is already pushed with the last commit. As for the second one –
this needs a bit more time to investigate. It will probably require another variant of the model. Anyway, thank you for testing and reporting; this is a very important finding.
Thanks. Works.
Yes, the effect is rather pronounced. I mean, strength = 100 % is perhaps pushing it in many cases, but in this picture the color bleeding or chromatic aberration effect is there also at strength = 50 % and even lower.
@KarlMagnusLarsson I've updated the model package and pushed a few changes that should significantly reduce the chromatic fringing:
You'll need to re-fetch the rawdenoise-nind model package manually to pick up the new Bayer checkpoint. Testing on your ...
Pushed a refactor that consolidates the DNG writers under ...

What changed
Three DNG writers existed in two places – the legacy header-only float-CFA writer at ...
What didn't change
Side benefits
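For context on what a float-CFA DNG writer has to emit, here is a background sketch of the key TIFF/DNG tags involved. The tag numbers come from the public TIFF 6.0 / DNG specifications; the enum itself is illustrative and not the refactored writer's actual code:

```c
/* Key tags a float-CFA DNG carries. Tag numbers are per the public
   TIFF 6.0 / DNG specifications; this enum is illustrative background,
   not darktable's refactored writer. */
enum dng_cfa_tags
{
  TAG_BITS_PER_SAMPLE = 258,    /* 32 for float CFA data                */
  TAG_PHOTOMETRIC     = 262,    /* value 32803 = Color Filter Array     */
  TAG_SAMPLE_FORMAT   = 339,    /* value 3 = IEEE floating point        */
  TAG_CFA_REPEAT_DIM  = 33421,  /* {2, 2} for Bayer sensors             */
  TAG_CFA_PATTERN     = 33422,  /* e.g. {0,1,1,2} = RGGB                */
  TAG_DNG_VERSION     = 50706,
  TAG_BLACK_LEVEL     = 50714,
  TAG_WHITE_LEVEL     = 50717,
  TAG_COLOR_MATRIX_1  = 50721,  /* XYZ-to-camera color matrix           */
  TAG_AS_SHOT_NEUTRAL = 50728,  /* as-shot white balance                */
};
```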
Hello @andriiryzhkov,

The new model does exactly what you state. I see the same thing. There is faint flare around the bottom finger, but it's much less pronounced.

OK

New model: neural restore -> raw denoise -> strength = 100%
Old model: neural restore -> raw denoise -> strength = 100%

I see the same thing.
Right now, I don't see what else can be done without touching the models themselves. I will continue testing, but I would need more examples of such behavior to generalize the case better. Model improvement is also possible, but requires much more time.
force-pushed from 5a81afe to ee21d81
@andriiryzhkov : If you ... I'd recommend merging now to get more field testing.
@TurboGit : I just noticed one possible bug in the raw denoise preview of X-Trans images. Let me check it and we can merge after.
@TurboGit : Preview is fixed. Ready to merge.
I finally had time to test it on one of my astro photos, after it was merged to master. I'm able to use the new function and generate a DNG file, however I cannot read it back into darktable, see log:

Is there anything I'm missing? You can try it for yourself with this image:

When I'm trying Olympus RAWs I occasionally get crashes. I was also able to crash it under gdb and got the following backtrace:

I could reproduce it with this Olympus RAW file:

This also happens on the CPU path:
@da-phil :
This is interesting. I was not able to reproduce this, neither on my build from master nor on the last nightly build. I have a couple of questions to understand the situation better:

What execution provider are you using?
Sorry, should have mentioned that in the first place...

I'll give the nightly a shot too later.
FYI: On my laptop with an AMD Ryzen 7 8845HS and a Radeon 780M iGPU, I needed three runs to eventually be able to compile the model: the first run crashed my Wayland session, and the second failed like that:
Yes, this is what I am using:
Using CPU, I am getting significantly better performance. Here is the output of -d ai with GPU:

```
========================================
darktable 5.5.0+1120~g76988b1e45
Compile options:
See https://www.darktable.org/resources/ for detailed documentation.
[dt starting] as : bin\darktable.exe -d ai
end: 2026-04-29 11:51:33
```

And here is the log with CPU:

```
[dt starting] as : C:\Program Files\darktable\bin\darktable.exe -d ai
end: 2026-04-29 22:49:20
```
I do get an "out of memory" error when using it on my GPU (NVIDIA RTX 3060 laptop GPU with 6 GB VRAM):

Is there anything I can do here?
AMD iGPUs are not officially supported by ROCm, so the fact that it eventually works is a huge benefit in itself.

Neural restore tasks run inference on tiles of the image. The tile size is determined by the amount of available VRAM. It is very hard to calculate how much VRAM is actually available, so a try-and-fail mechanism is implemented, with a tile-size ladder defined in the model config. If initializing the model with a larger tile size fails with an OOM error, a smaller size is selected and a new attempt is made. The final tile size is cached for subsequent runs. Given that, you may see OOM logs as part of this tile-size selection mechanism – see the sketch below.
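A minimal sketch of that ladder, with made-up names and values (`try_init_session`, the ladder entries, and the return convention are illustrative, not darktable's actual code):

```c
#include <stddef.h>

/* Hypothetical session-init result; the real darktable code differs. */
typedef enum { INIT_OK, INIT_OOM } init_status_t;

/* Stub standing in for real session creation: pretend only tiles of
   512 px or smaller fit in VRAM. In the real code this would call into
   the ONNX Runtime provider and report allocation failures. */
static init_status_t try_init_session(int tile_edge)
{
  return tile_edge <= 512 ? INIT_OK : INIT_OOM;
}

/* Tile-edge ladder, largest first. The real values live in the model
   config shipped with the .dtmodel package; these are invented. */
static const int tile_ladder[] = { 1024, 768, 512, 384, 256 };

int select_tile_size(void)
{
  for(size_t i = 0; i < sizeof(tile_ladder) / sizeof(tile_ladder[0]); i++)
  {
    /* An OOM here is expected behaviour, not an error: it gets logged
       and we simply step down to the next smaller tile. */
    if(try_init_session(tile_ladder[i]) == INIT_OK)
      return tile_ladder[i]; /* cached and reused on subsequent runs */
  }
  return -1; /* even the smallest tile does not fit */
}
```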
For me it's not only the OOM, but also that no preview is shown:
@andriiryzhkov |
Heads up – this is a pretty big PR. It adds an AI-based raw denoiser that runs before the rest of the pipeline and is fast. Based on the pixls.us threads asking for a real pre-demosaic AI denoiser, this seems like a much-requested feature: existing RGB denoisers run late in the pipeline, after demosaic / tonemapping / lens correction, which limits how much noise can be modeled correctly. A sensor-space denoiser can see the original photon-shot-noise distribution and clean it up before any of those lossy steps.
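As general background (textbook sensor-noise theory, not something introduced by this PR): in sensor space the noise is well described by the Poisson–Gaussian model

$$\operatorname{Var}[y \mid \mu] = g\,\mu + \sigma_r^2$$

where $\mu$ is the expected signal, $g$ the sensor gain, and $\sigma_r$ the read noise – an affine function of the signal, easy to characterize. After a nonlinearity $f$ (tone curve, gamma), a first-order expansion gives $\operatorname{Var}[f(y)] \approx f'(\mu)^2 (g\,\mu + \sigma_r^2)$, so the variance a late-pipeline denoiser sees depends on both the signal and the whole processing curve.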
Requires companion model package update: darktable-org/darktable-ai#21
Collaboration with the model author
I reached out to Benoit Brummer (author of NIND, RawNIND, and the UtNet2 model family) during this work, and he has been actively supporting the implementation.
What the feature does
New task in the `neural_restore` module: raw denoise.

Benoit has also indicated that a dedicated X-Trans model can be trained – separate weights trained specifically for Fuji's 6×6 CFA pattern rather than the generic-demosaic fallback used today. That's not part of this release: the `xtrans_v1` contract label and the `dt_restore_load_rawdenoise_xtrans` loader are reserved in this PR so the dedicated model can plug in later with minimal (or no) darktable code changes.

Why it's fast
Where it sits in the pipeline
Conceptually: not in the pipeline at all. The denoised result is a new DNG that replaces the noisy source. Users pick the denoised DNG as their working file, apply any modules they'd normally apply, and never touch the pre-denoise path. This differs from post-pipeline RGB denoisers which stack a second noise-reduction pass on already-processed pixels.
Model packaging
Models are distributed as `.dtmodel` packages via the existing AI model catalog (`data/ai_models.json`). RawNIND ships with two ONNX variants:

- `variants.bayer` – for standard Bayer CFAs (RGGB / BGGR / GRBG / GBRG)
- `variants.linear` – for X-Trans + anything else without a dedicated path

The manifest declares the preprocessing policy explicitly (WB normalization, input colorspace, output-scale handling, exposure target, channel orientation, edge-padding) so darktable doesn't bake RawNIND-specific assumptions into the C code. Future models can swap in with manifest-only changes, or add new contract labels (e.g. a forthcoming dedicated X-Trans model) with a small code patch. A sketch of what the parsed policy could look like follows below.
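Purely illustrative, here is one way the parsed preprocessing policy could look on the C side – the field names are invented from the list above; the actual schema is defined by the `.dtmodel` manifest, not by this sketch:

```c
/* Hypothetical parsed form of the manifest's preprocessing policy.
   Field names mirror the policy list above but are invented; the real
   schema lives in the .dtmodel manifest. */
typedef struct restore_prep_policy_t
{
  int   wb_normalize;     /* divide out as-shot white balance first     */
  int   input_colorspace; /* colorspace the model expects on input      */
  int   output_scale;     /* how the model's output scaling is undone   */
  float exposure_target;  /* level the input exposure is boosted toward */
  int   channel_order;    /* orientation/ordering of the packed planes  */
  int   edge_pad;         /* padding convention, e.g. mirror_cropped    */
} restore_prep_policy_t;
```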
Architecture
Sensor coverage
`dt_control_log` ...

Verification
- Built with `--asan`. No heap-use-after-free, no buffer overflows.
- EXACT MATCH for tile interiors; the corner-tile discrepancy was traced to the mirror-padding convention and fixed via the `edge_pad: mirror_cropped` default.
- One set of shared helpers: `_compute_bayer_prep`, `_pack_bayer_tile`, `_bayer_gain_match`, `_bayer_remosaic_raw`, `_resolve_linear_wb`, `_build_cam_matrices`, `_linear_exposure_boost`, `_linear_gain_match_3ch`. No separate implementations to drift. (A rough sketch of the packing idea is appended after the credits.)

Credits
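Appendix – the packing idea referenced under Verification, as a hedged sketch: a Bayer mosaic is typically packed space-to-depth into four half-resolution planes before inference and re-mosaiced afterwards. This is the general technique behind helpers like `_pack_bayer_tile` / `_bayer_remosaic_raw`; the actual plane order and layout in the PR may differ:

```c
#include <stddef.h>

/* Space-to-depth packing of an RGGB mosaic into four half-resolution
   planes (R, G1, G2, B). Illustrative only: the real helper's plane
   order, tiling, and edge handling may differ. Assumes even w and h. */
void pack_bayer_rggb(const float *raw, size_t w, size_t h,
                     float *out /* 4 planes of (w/2) x (h/2) */)
{
  const size_t pw = w / 2, ph = h / 2, plane = pw * ph;
  for(size_t y = 0; y < ph; y++)
    for(size_t x = 0; x < pw; x++)
    {
      const size_t i = y * pw + x;
      out[0 * plane + i] = raw[(2 * y    ) * w + (2 * x    )]; /* R  */
      out[1 * plane + i] = raw[(2 * y    ) * w + (2 * x + 1)]; /* G1 */
      out[2 * plane + i] = raw[(2 * y + 1) * w + (2 * x    )]; /* G2 */
      out[3 * plane + i] = raw[(2 * y + 1) * w + (2 * x + 1)]; /* B  */
    }
}
```

Re-mosaicing is the exact inverse: each plane is written back to its 2×2 offset in the full-resolution mosaic.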