OIDN Denoiser - Standalone

No, I compared rendered output from LightWave that I denoised in Blender, and it preserves way more detail. I did the same with a Blender output denoised in LightWave and compared the results.
The Blender-denoised end result is much better than the LightWave output.

I'm not talking about rendering. I simply used the final saved buffers, denoised them in both programs and compared. The Blender ones kept much more fine detail (denoising the same source material!)
What I mean is, what buffers does Blender utilise to denoise?
And which buffers did you use to denoise in LW?

We don't have an albedo buffer as such, so which one did you use in LW? And we have multiple normal buffers, so which of those did you use to denoise?

My opinion is that it's not that Blender's implementation is necessarily better, just that we're not using the best buffers to get the same results.
 
Blender uses the same buffers as LW. The albedo buffer in Blender is called rawRGB in LW.

So I rendered three passes in LW (beauty, rawRGB and normal map) and denoised those with both programs.
 
Okay, and the results are different? Have you tried any other buffer instead of the rawrgb?
I need to do some tests to compare blender and LightWave myself, there must be something going wrong on the LW side for the results to be different.

Have you tried taking those buffers from LW and using OIDN in Blender on them?
 
Other buffers are not the correct ones. OIDN needs Normal, rawRGB (albedo) and the beauty pass. The system is not designed to use anything else. You can read that in the OIDN documentation pages.

I think I have explained that in my previous posts... that is exactly what I did. That is how I discovered the difference in the end result with the same source images.

The only thing I have not tried yet is using the standalone OIDN utils.
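For reference, OIDN ships a small command-line example app, oidnDenoise, which takes exactly those three passes. A hypothetical invocation, with placeholder file names (the tool reads PFM images, and the exact flags may differ between OIDN versions):

```shell
# Placeholder file names: beauty/albedo/normal passes saved as PFM from LW.
# --hdr marks the beauty pass as HDR; --ldr would be used for tone-mapped input.
./oidnDenoise --hdr beauty.pfm --alb albedo.pfm --nrm normal.pfm -o denoised.pfm
```

This bypasses both Blender's and LW's integrations, so it could help isolate which side is handling the buffers differently.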
 

The OIDN docs suggest using Normal (but it can actually be world or view normal), and the Albedo is the same: they're not strict about it, and you can generate different Albedo passes, including the addition of reflection and transmissive rays (as you know). The system is actually quite flexible... The problem with some of the examples above, especially the water example, is that they use LW bump maps, which aren't included in any of the buffers used; you'd need to convert the bump to a Normal, then add that to the rest of the normal buffer (which is possible).
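To sketch what that bump-to-normal conversion could look like: a minimal, hypothetical Python version that derives per-pixel normals from a grayscale height map by finite differences. The function name, the strength parameter and the assumption that the map is at least 2x2 are all illustrative, not a LightWave or OIDN convention:

```python
import math

def bump_to_normal(height, strength=1.0):
    """Convert a 2D grid of height values into unit-length normals.

    height: list of rows of floats, at least 2x2.
    strength: illustrative scale for the slopes (not an OIDN/LW setting).
    Returns a grid of (nx, ny, nz) tuples, one per pixel.
    """
    h, w = len(height), len(height[0])
    normals = []
    for y in range(h):
        row = []
        for x in range(w):
            # Central differences, clamped at the borders.
            x0, x1 = max(x - 1, 0), min(x + 1, w - 1)
            y0, y1 = max(y - 1, 0), min(y + 1, h - 1)
            dx = (height[y][x1] - height[y][x0]) / (x1 - x0)
            dy = (height[y1][x] - height[y0][x]) / (y1 - y0)
            # Normal = normalize(-dh/dx, -dh/dy, 1).
            nx, ny, nz = -dx * strength, -dy * strength, 1.0
            inv = 1.0 / math.sqrt(nx * nx + ny * ny + nz * nz)
            row.append((nx * inv, ny * inv, nz * inv))
        normals.append(row)
    return normals
```

A flat height map yields straight-up (0, 0, 1) normals; the result could then be blended into the existing normal buffer before handing it to the denoiser.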

I think we're agreeing on the same stuff: the LW one shouldn't look different to the Blender one, they're the same DLLs :)
I just wondered if using a different, user-generated Albedo, rather than rawRGB, would create a better result, that's all.
 