
View Full Version : Why is the "Adaptive Sampling" pass so slow?



Snosrap
05-18-2011, 07:13 AM
I've been a long-time FPrime user, and with LW10 I'm transitioning back to LW's renderer with the new LCW. I have found the AA settings I like to use fall between .01 and .05, depending on what I'm working on. Why is "Adaptive Sampling" so slow? You would think that sampling pixels and making comparisons would be much faster than raytracing light sources. So what's the lowdown with the Adaptive Sampling and AA algorithms in LW?

vfxwizard
05-18-2011, 08:43 AM
Adaptive Sampling takes a little getting used to, and is not optimized for linear workflow.

However, in a nutshell, during AS passes there's a lot of raytracing going on. It actually doubles (IIRC) the number of samples for each pass.

So if you start with 9 antialias samples (9 rays), each pixel that exceeds the threshold (marked white) will cause 18 rays to be fired, and so on for each AS pass.
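Roughly, the growth looks like this (a quick Python sketch with made-up numbers - remember these factors are from memory, not LW's actual code):

base_samples = 9      # initial antialiasing samples (rays) per pixel
as_passes = 4         # adaptive sampling passes
total = base_samples
fired = base_samples
for _ in range(as_passes):
    fired *= 2        # each AS pass roughly doubles the rays for a flagged pixel
    total += fired
print(total)          # 9 + 18 + 36 + 72 + 144 = 279 rays for a pixel that never converges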

In addition to this, some kinds of surfaces require more rays:
- those lit by light with a quality parameter (quality^2 rays)
- those with reflection or refraction blurring (samples should equal number of rays here)
- nodes with a sampling value

I avoided mentioning radiosity, as the number of rays fired depends on the mode: interpolated or non-interpolated; with or without bounces... [In short: don't lower radiosity rays when using interpolated modes.]

As for the number of rays and multipliers, I am writing this straight from memory, so I may have got some numbers wrong. But it doesn't really matter. What matters is that the more AS passes are made, the lower the quality (# of samples) should be for surfaces, lights, etc.

This works because at each evaluation more rays are fired and averaged, thus giving the same result as a higher number of samples. [Don't use values that are too low - as a general rule don't go below 2 for reflections/lights and below 9 for non-interpolated radiosity.]
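Rough arithmetic behind that rule of thumb (again just to show the idea - the exact multipliers may differ):

light_quality = 2          # a light with quality 2 fires quality^2 = 4 rays per evaluation
evaluations = 4            # the pixel ends up being evaluated in 4 AS passes
rays_averaged = light_quality ** 2 * evaluations
print(rays_averaged)       # 16 rays averaged in total, comparable to quality 4 in a single pass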

However the right approach is very scene dependent. Sometimes AS is much faster than brute force, sometimes the reverse is true.

Last but not least, the way contrast detection is computed means that in a linear workflow you need to use very low threshold values, wasting a lot of samples. You may find some threads here about Lightwolf's free colour corrector; it's an approximate solution to this issue that will give AS a speed boost when using a linear workflow.


There would be a lot more to say, but this should already provide a reasonable speed increase. Hope it helps!

raw-m
05-18-2011, 01:46 PM
Nice writeup! I would pay good money for someone to make a video tutorial about this. No matter how many articles I read about the Camera Properties, and no matter how articulate they are, I find it hard to apply these rules to my scenes. Even though the settings are very scene dependent, I think seeing a pro chipping away at some test renders, and seeing what values they tweak, would be invaluable!

Anyone? :)

Snosrap
05-18-2011, 09:52 PM
Thanks for the informative reply! "is not optimized for linear workflow." That I don't understand.

vfxwizard
05-19-2011, 08:09 AM
@Snosrap My pleasure! As for "not optimized for linear workflow", there's a long-standing issue with adaptive sampling that has been pointed out since its introduction.

Contrast detection is performed with an absolute value (the threshold) on the linear values. This is the right thing to do, but when using a linear workflow (and only then) rendering could be much faster by checking contrast in the gamma-corrected image instead of the actual render.

This issue arises because in a render destined for gamma correction you have very low values in the dark areas of the image. Those values are then remapped into brighter pixels for display.

But this means that to catch that dark fine grain you need to lower the threshold, thus firing more rays than necessary even in the brighter areas of the image. It's not wrong, but nor is it as fast as it could be - hence the "not optimized" description (which is an understatement, as the speedup can be huge).
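A quick numeric illustration of the problem, with made-up values and assuming a 2.2 display gamma:

gamma = 2.2
threshold = 0.01

a_lin, b_lin = 0.002, 0.010                  # two adjacent dark pixels, linear radiance
print(abs(a_lin - b_lin) > threshold)        # False: below threshold, no extra rays fired

a_disp = a_lin ** (1.0 / gamma)              # ~0.06 on screen
b_disp = b_lin ** (1.0 / gamma)              # ~0.12 on screen
print(abs(a_disp - b_disp) > threshold)      # True: clearly visible grain after gamma correction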


There are some workarounds - the most common is to apply Lightwolf's "db&w Simple Colour Corrector" (free plugin) as a pixel filter with a gamma of 2.2, so that LW is forced to detect contrast in the display space. Then, an inverse gamma of .4545 is applied as an image filter.
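The round trip itself is practically lossless for the stored floating-point pixels - a tiny sketch, assuming both gamma parameters mean the usual v^(1/gamma):

value = 0.18
encoded = value ** (1.0 / 2.2)        # pixel filter at gamma 2.2: AS now sees display-space contrast
decoded = encoded ** (1.0 / 0.4545)   # image filter at inverse gamma .4545 restores linear light
print(value, decoded)                 # 0.18 vs ~0.17997 - only a tiny rounding difference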

This approach works in practice, but is less than ideal for antialiasing. There are several threads that deal with this issue, but you can skip them and just try the above setup - chances are your renders will be faster (higher threshold = fewer rays) and you won't even notice the minor artifacts produced by the forced color space conversions.

The best solution would be to have this as an option inside LW's renderer. Hopefully in the future...


@raw-m Thanks! I prefer written tutorials, but this is in fact something where a video would work much better. Actually, doing some LW tutorials was among my New Year's resolutions... Too bad it's May already. :foreheads

jameswillmott
05-19-2011, 08:20 AM
It should be noted that adaptive antialiasing seems to use a 'creeping' algorithm: it explores the neighbourhood wherever it encounters high contrast between pixels. Good for small details - the sampler will travel along the high-contrast edges. It's pretty neat to watch, and clever when you think about what it's doing.

clagman
05-19-2011, 10:58 AM
Adaptive AA is a clever method, it just needs a bit of refinement is all. Currently I use it to do all the heavy lifting in my scenes (yes, with the db&w pixel filter applied, it being the lesser of two evils).

vfxwizard
05-19-2011, 12:52 PM
It should be noted that adaptive antialiasing seems to use a 'creeping' algorithm

Nice idea, I never thought about this - to me it has always looked like a 3x3 kernel.

I will test this out with a pattern and a darkening gradient. If it's creeping it should match even below the threshold. Thanks for the tip!

Snosrap
05-19-2011, 09:04 PM
Thanks again for the explanation. I had forgotten about all the multiple settings in LW's renderer. However, I'm liking the look more than FPrime's. Thanks again, it looks like I have a lot to learn.

gerardstrada
05-19-2011, 09:24 PM
Agreed with what vfxwizard has said (very interesting!), just one detail might need a bit more clarification, I think:


Contrast detection is performed with an absolute value (the threshold) on the linear values. This is the right thing to do, but when using a linear workflow (and only then) rendering could be much faster by checking contrast in the gamma-corrected image instead of the actual render.

The right thing to do is perform the antialiasing (AA) operation in linear values, but what is not the right thing to do (and this has been overlooked by some outstanding render engines) is to consider the contrast threshold in linear space. Contrast threshold should be calculated in non-linear space, I think.

To be efficient, the idea of minimizing aliasing artifacts according to a threshold has to take our visual perception into account. That is to say, we need to apply more AA samples on the linear values where our visual perception is more sensitive. Since we are going to view the result in gamma-encoded space, we can expect that sampling according to a contrast threshold behaves perceptually better in non-linear space. Moreover, this gamma/log correction should not be fixed, but rather should be selected by the user according to the output color space.

Then, the AA sampling should be performed on linear values, but the contrast threshold used for applying this sampling should be calculated in non-linear space.
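To make the two ideas concrete, here is a toy sketch in Python (not any renderer's actual code, just the principle): the samples are accumulated and averaged in linear light, while the decision to keep refining is taken on gamma-corrected values.

import random

GAMMA = 2.2
THRESHOLD = 0.02

def shade():                          # stand-in for one raytraced sample (a noisy dark area)
    return random.uniform(0.0, 0.05)

def to_display(v):                    # the output-space correction, used only for the test
    return v ** (1.0 / GAMMA)

samples = [shade() for _ in range(9)]
avg = sum(samples) / len(samples)
while len(samples) < 1024:            # hard cap, like a maximum pass count
    samples += [shade() for _ in range(len(samples))]   # each extra pass roughly doubles the rays
    new_avg = sum(samples) / len(samples)
    converged = abs(to_display(new_avg) - to_display(avg)) < THRESHOLD  # threshold in non-linear space
    avg = new_avg                     # the stored pixel value itself stays linear
    if converged:
        break

print(len(samples), avg)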

It's because of these two considerations that a simple gamma correction at pixel filter level is a half solution. Let's see this situation with a very simple example and how we can solve it:

This ball has been illuminated with an area light (quality=3) and rendered with Adaptive Sampling (AS) at 0.02.


http://imagic.ddgenvivo.tv/forums/AA/IFGC.jpg

It has been gamma-corrected after the render, at image filter level. We can see that even though AS has a very low value, we can still notice the shading noise in the dark areas.


http://imagic.ddgenvivo.tv/forums/AA/IFGCD.jpg

AA on borders, however, is nice and clean. This is because the AA values have been generated in linear space. But since the contrast threshold has also been calculated in linear space, the sampling is not perceptually uniform for our vision. In simple words: AA = correct / Threshold = incorrect.

To gamma-correct the contrast threshold we need to apply this correction at pixel filter level. We can use Mike Wolf's useful Simple ColourCorrector pixel filter, or we can apply the correction in the very useful DP Pixel Filter Node Editor by using our favorite color correction node (SG_CCNode, db&w Colour Space node, db&w Simple Colour Corrector node, Aurora's Color Correction node, etc, etc, etc). The first option is simpler and faster; the second is slower but more flexible and powerful.

Let's go with the first option in this case:


http://imagic.ddgenvivo.tv/forums/AA/PFGC1.jpg

We now have nicer, smoother shading (even in the dark areas) and we are using the same AS level!

However, since the AA has been performed on non-linear values, the AA is deficient on borders because it has been calculated with the wrong values:


http://imagic.ddgenvivo.tv/forums/AA/FM.gif

A valid way to minimize this problem while keeping the setup simple is to increase the oversampling. It won't solve all the problems - we'll still have some issues in the AA of reflective borders and details - but it may be a viable solution in many cases. In simple words: AA = incorrect / Threshold = correct.

A way to calculate the contrast threshold in non-linear space but perform the AA on linear values is to use the DP Filter Node Editors with any of our favorite color correction nodes and the DP GlobalBuffers.

The idea is to apply a gamma correction to the Final Color buffer so that the renderer takes this new contrast ratio into account when calculating the sampling threshold. The AA for all internal buffers will be affected by this new contrast threshold, but if there's no gamma correction pre-applied directly to these buffers, the AA values will stay linear for them at render time.


http://imagic.ddgenvivo.tv/forums/AA/ASTsetup.png

This means that we are able to use the Final Color buffer to affect the contrast threshold at render time (we don't save this result, it's just for gamma-correcting the AA sampling), and at the same time we store the same Final Color buffer linearly (without any gamma correction applied) in a DP GlobalBuffer. Then we can save this linear result with the DP Get GlobalBuffer node.


In simple words: AA = correct / Threshold = correct.
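Conceptually, the per-pixel flow is something like this (a rough Python sketch, not the real node graph or Denis' SDK - the buffer helpers below are just stand-ins):

_global_buffers = {}                         # stand-in for the DP GlobalBuffers

def store_global_buffer(slot, rgb):          # hypothetical helper
    _global_buffers[slot] = list(rgb)

def get_global_buffer(slot):                 # hypothetical helper
    return _global_buffers[slot]

def pixel_filter(final_color_linear):
    store_global_buffer(1, final_color_linear)             # keep an untouched linear copy
    return [c ** (1.0 / 2.2) for c in final_color_linear]  # AS tests its threshold on these values

def image_filter():
    return get_global_buffer(1)              # what we actually save stays linear

print(pixel_filter([0.002, 0.2, 1.0]))
print(image_filter())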


http://imagic.ddgenvivo.tv/forums/AA/PFGC2.jpg http://imagic.ddgenvivo.tv/forums/AA/SM.gif

This last option is slower but provides more correct AA values. Consider that gamma-correcting the threshold for a linear workflow, in any of the ways discussed here, does indeed provide better results with the same AS values, which translates into faster render times, as vfxwizard has already explained.

Let's see a small example with very reflective surfaces (where AA flickering is usually problematic). In this mini-sequence AA has been applied in linear space and later the final result has been gamma-encoded at image filter level:


http://imagic.ddgenvivo.tv/forums/AA/AAnogamma.gif

The Contrast Threshold is 0.02 and we still experience a lot of flickering in our CG element. The worst thing is that even if we decrease the contrast threshold value further, we'll see no difference, because the contrast ratios are too high to discern the subtle differences in dark areas.

In this mini-sequence, the same AA levels have been applied according to a contrast threshold in non-linear space, at pixel filter level:


http://imagic.ddgenvivo.tv/forums/AA/AAgamma.gif

There's no flickering, and if we diminish the threshold value (even though it's unnecessary), we'll indeed see a noticeable improvement. Perhaps LightWave can use a similar solution natively :)



Gerardo

probiner
05-19-2011, 10:02 PM
Wooow.... :eek:
I guess this one will definitely be grabbed.

Thanks again for the explanation Gerardo, for both the content and the presentation.

Cheers

vfxwizard
05-20-2011, 04:06 AM
Uber post Gerardo, thanks!

I did try the same setup you describe, but scrapped it because it produced significant brightness shifts. Have you ever noticed something like that? (I will look for my test scenes with the brightness shift.)

However if you suggest something I know it has to be good, so I tried again today and lo and behold, it works! So thank you.


BTW, adding a limiter node before the gamma allows you to clamp values, thus catering for another feature request of mine: low dynamic range contrast detection (without clamping the render as LDR does). This shaves off some more rays when trying to antialias extra bright surfaces next to standard ones.

Would you mind sending a link to your post to the developers as a feature request? I have been asking for these options in LW since 9.x - your post is a great demonstration of the usefulness of having this in LW. And seeing is believing. :)


what is not the right thing to do (and this has been overlooked by some outstanding render engines) is to consider the contrast threshold in linear space. Contrast threshold should be calculated in non-linear space, I think.

Geek talk alert. :bday: I strongly agree with you.

From an engineer's perspective, the whole purpose of linear workflow is to decouple rendering and presentation. So performing contrast detection in linear space produces a render that is not biased towards any display. I can relate to this; the geek in me gives a nod of approval.

Yet, adaptive sampling is already a user-defined, arbitrary bias. Everything below the threshold is deemed unimportant, so if I choose a threshold that renders clean @gamma 1.8 and then display the image @2.2, I still have to face a perceptually-biased image.

So what's the point of sticking to linear? Allowing for non-linear contrast detection can only bring practical benefits as you have clearly shown.

vfxwizard
05-20-2011, 06:07 AM
Gotcha! The culprit was the Real Lens Camera's irradiance falloff. With it set to zero there are no brightness shifts and everything works like a charm.

Thanks again Gerardo!

gerardstrada
05-20-2011, 06:15 PM
I guess this one will definitely be grabbed.

Pedro, hope it helps! :thumbsup:



BTW, adding a limiter node before the gamma allows you to clamp values, thus catering for another feature request of mine: low dynamic range contrast detection (without clamping the render as LDR does). This shaves off some more rays when trying to antialias extra bright surfaces next to standard ones.
Don't want to get off-topic but this is an excellent idea, vfxwizard!

It really improves the AA when the lighting contrast ratios are high. When the lighting contrast ratios are very, very high, I've been trying another way. This is an experimental way to overcome aliasing in scenarios with very high dynamic range (HDR) without losing HDR data. The method is non-destructive, which means we are able to re-expose or tonemap our results without losing pixel data, always keeping smooth antialiasing at any exposure level, no matter how high the contrast ratio between adjacent pixel values may be. Like your feature request, I guess this concept might be implemented natively in Lightwave, too. I've shared this in a CGTalk forum, but I'll share it here as well in case it might be interesting or could be improved. The idea is pretty obvious, as we'll see if we take a look at the situation:

For antialiasing operations in lighting conditions with high contrast ratios, pixels with very high values (superbright pixels) can be averaged with pixels with very low values (very dark or totally black pixels). If the AA is applied in FP space, the averaged pixels will still be too bright for displays and we'll get aliasing. E.g. the contrast ratio of this simple scene is about 133000:1
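A quick numeric illustration with made-up values:

samples = [1500.0, 0.0, 0.0, 0.0]      # one superbright HDR sample averaged with three dark ones
avg = sum(samples) / len(samples)      # 375.0 - still far above 1.0
display = min(avg, 1.0) ** (1.0 / 2.2)
print(avg, display)                    # the "antialiased" edge pixel still clips to pure white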


http://imagic.ddgenvivo.tv/forums/AA/HDRAA.gif


And we can see flickering due to aliasing in very bright areas.


http://imagic.ddgenvivo.tv/forums/AA/aliased.jpg


If we under-expose the image a lot, we'll see the antialiasing is there; it's just that those areas are still too bright.


http://imagic.ddgenvivo.tv/forums/AA/AAunderexposure.jpg

The traditional ways to solve this problem are to pre-limit the dynamic range (not necessarily to the 0-1 range, just a compromise between medium dynamic range and good AA results) or to pre-tonemap at render time (again, not necessarily losing all dynamic range). Both are very viable options and work very well in many cases.

But these solutions have their downsides as well. Tonemapping operators preserve details but, depending on the type of operator, don't always preserve the intended contrast (though it could be re-adjusted in post), and for CG integrations the result might not be totally scene-referred. Limiting the dynamic range is destructive for pixel data (it's more or less what happens in a medium/low dynamic range camera sensor) but it's probably what provides the best AA results.

So what if we need to keep full dynamic range in very high contrast ratios scenarios and still get a smooth antialiasing?

In such a case, the idea is to use the full dynamic range (DR) result - which has the HDR values but looks aliased - but also use the areas of the limited-range AA - which looks right but has LDR values. What we do is isolate the good AA results (limited range) so that they replace the bad AA results from the full DR version. We can then re-expose or tonemap the HDR version and always use the good AA results.

To do so, we need the DP_Pixel Filter Node Editor to store a full DR version and a limited-range version - both generated at render time. This is important because the results are not the same if we limit the dynamic range after rendering:


http://imagic.ddgenvivo.tv/forums/AA/StoredVersions.png

Later in post (this can be done in any compositing package, but for illustration purposes I'm using the DP_Image Filter Node Editor within Lightwave), we isolate the difference between the good AA results and the bad AA results by subtracting the LDR result from a limited-range version of the HDR result. This difference is what is left over for recovering the correct antialiasing areas.


http://imagic.ddgenvivo.tv/forums/AA/DifferenceAA.jpg

So we just need to subtract this from the HDR result. But before this operation, we limit its dynamic range too. Any re-exposure or tonemapping operation in the FP domain can be done before this subtraction. So we are able to adjust the exposure level of the HDR result (where aliasing is still a problem due to very bright pixels) while keeping the good antialiasing.
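Per pixel, the math is something like this (a sketch with made-up numbers, not the exact node setup):

def clamp01(v):
    return max(0.0, min(1.0, v))

hdr = 40.0                                 # full-range edge pixel: averaged, but still way over 1.0
ldr = 0.80                                 # same pixel in the range-limited render (good AA)

aa_diff = clamp01(hdr) - ldr               # 0.20 - the antialiasing the clamp threw away

exposure = 0.5                             # any FP re-exposure/tonemap happens first...
fixed = clamp01(hdr * exposure) - aa_diff  # ...then the stored difference is subtracted
print(fixed)                               # 0.80: the edge keeps the smooth limited-range AA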


http://imagic.ddgenvivo.tv/forums/AA/HDRAAfixed.gif

In cases where some of the original aliased pixels are already antialiased (due to some tonemapping operation or under-exposure adjustments), we can use the luma of the tonemapped/underexposed version to drive the strength of the antialiasing correction.


http://imagic.ddgenvivo.tv/forums/AA/AAdifferenceScaled.png


This provides better AA according to the tonemapped areas.


http://imagic.ddgenvivo.tv/forums/AA/TM.gif

Though ranges between 0-1 provide better AA, the final limited range is up to us. This technique can be automated in a node and applied to any buffer that presents AA problems due to superhigh contrast ratios (commonly the reflection shading buffer).


http://imagic.ddgenvivo.tv/forums/AA/TMAAfixed.gif

Other cases may be solved more easily with the solutions discussed before.



Gerardo

COBRASoft
05-20-2011, 07:15 PM
Oh my!

NT, integrate this somehow natively please :D!

vfxwizard
05-21-2011, 07:19 AM
Implementing both CS corrected and clamped contrast detection inside LW should be very easy as long as they are user-selectable options and there's no risk of breaking older scenes.

Probably it's just a matter of persuading the developers that these would be worthy additions to the many checkboxes already there -and I think Gerardo's examples are great for this.


About the HDR reflections, that's brilliant! I wasn't able to replicate it in LW (global buffers other than #1 appeared empty, yet were saved correctly - maybe it's me) but with a bit of scaling and adjustment the clamp & subtract worked great in compositing.

Thanks a lot for sharing the workflow!

gerardstrada
05-21-2011, 10:32 PM
vfxwizard,

I use the latest version of the DP_Filter Node Editors for LW9.6.

In the previous version for LW10, only the first GlobalBuffer was able to store buffers - maybe you need to update to the latest version? In the latest version for LW10 (x32) all GlobalBuffers are able to store buffers, but not all GlobalBuffers work properly with MultiThreading (I get horizontal artifacts here in some buffers); it seems the latest x64 version is working properly.

These tools are updated very often, and reports help to improve our tools, so please report any issue to Denis Pontonnier in this thread (http://www.newtek.com/forums/showthread.php?t=71751).

Thank you,



Gerardo

omichon
05-22-2011, 01:57 AM
Thanks vfxwizard, for highlighting this issue !
As always, awesome (and very instructive) demonstration Gerardo ! Thanks for all of them, btw :)

allabulle
05-22-2011, 07:33 AM
What a nice thread! Thanks everyone.

I'm testing it now, with different scenes and values and it's really something. Please NewTek Dev Team take notice. A comment on this issue from someone at NewTek would be more than welcome too.

adk
05-26-2011, 08:55 PM
Great thread folks & many thanks for the in-depth explanations :thumbsup:

lino.grandi
05-27-2011, 11:43 AM
Simply brilliant. Can't be ignored.

Matt
05-27-2011, 02:37 PM
Gerardo, I passed this on to the dev team; in fact we had a meeting on it, and we are reviewing this.

What also came up in the meeting was that your post was exemplary in how to suggest changes - a clear and well-explained case.

So thank you for that!

Will keep you informed!

:)

Cageman
05-27-2011, 03:19 PM
What also came up in the meeting was that your post was exemplary in how to suggest changes - a clear and well-explained case.


Word!

:thumbsup:

grangerfx
05-27-2011, 03:32 PM
Impressive work Gerardo! If you want to get something implemented in LW, this is a very good way to do it. Can you supply us sample scenes? I was already planning to implement something like this for 11 and would like to include this information in my research.

-Mark

Gregg "T.Rex"
05-27-2011, 03:50 PM
If you want to get something implemented in LW, this is a very good way to do it.
-Mark


THAT I'll keep in mind for future requests! :thumbsup:

T.Rex

gerardstrada
05-27-2011, 03:54 PM
Whoa! Thank you, Matt, and Lino!

Much appreciated! :thumbsup:



Gerardo

gerardstrada
05-27-2011, 03:55 PM
Impressive work Gerardo! If you want to get something implemented in LW, this is a very good way to do it. Can you supply us sample scenes? I was already planning to implement something like this for 11 and would like to include this information in my research.

-Mark
Sure, let me prepare some things and I'll post it here.



Gerardo

Cageman
05-27-2011, 04:03 PM
Cool stuff is going on here, for sure!

Thanks gerardstrada!

:)

probiner
05-27-2011, 04:27 PM
Grabbed! And so LW keeps going on the edge of Linear Workflow :)

gerardstrada
05-30-2011, 03:08 AM
Impressive work Gerardo! If you want to get something implemented in LW, this is a very good way to do it. Can you supply us sample scenes? I was already planning to implement something like this for 11 and would like to include this information in my research.

-Mark
Mark,

Thank you. I think these examples would be impossible without the DP Pixel Filter Node Editor, so thank you very much to Denis Pontonnier for such a great tool! It would be great to see his initiative natively implemented within Lightwave.

As we agreed, I'm attaching here (http://imagic.ddgenvivo.tv/forums/AA/aa.rar) four scenes:

1. AA_traditional.lws - The sample for the first image of my first post (linear output, gamma-corrected in post-process).
2. AA_halfEnhanced.lws - The sample for the third image of my first post (gamma-corrected output at render time).
3. AA_fullEnhanced_LW96.lws/AA_fullEnhanced_LW10.lws - The sample for the fifth image of my first post (gamma-corrected render but linear output through DP Global Buffers).
4. AA_Replacement_LW96.lws/AA_Replacement_LW10.lws - The sample for the seventh image of my second post. The first image of my second post is easily achieved by deactivating the DP Pixel Filter Node Editor and DP Image Filter Node Editor and just gamma-correcting the output with a simple gamma correction (as in the AA_traditional.lws case).

I made all the samples in LW9.6 and tested them there (the LW10 versions were re-adjusted in LW10 but I haven't tested them yet). In order to get the intended results in LW10, we need to set all color space conversions to linear in the CS panel.

I won't post the robot scene, but the setup is almost the same as the AA_Replacement_LW96/AA_Replacement_LW10 scenes. The only difference is in the curve correction (a film log curve in one case and a gamma curve in the other) and the value of the Limiter node. In this regard, we can adjust the Limiter according to the color depth of the output medium, so that there is no pixel data loss.

In the AA_Replacement_LW96/AA_Replacement_LW10 scenes I simplified the surface setup by just using a conductor material and the DP Curvatures shader to achieve a light warping effect for enhanced surface depiction. Then, besides the DP Filter Node Editors:

http://dpont.pagesperso-orange.fr/plugins/nodes/DP_Filter.html

We also need to install the DP Kit for the Curvatures shader:

http://dpont.pagesperso-orange.fr/plugins/nodes/Additionnal_Nodes_2.html

And TrueArt's SplitMaterial node from TrueArt's Node Library:

http://www.trueart.pl/?URIType=Directory&URI=Products/Plug-Ins/TrueArt Node Library

Also, for gamma correction, all scenes use the SG_CCNode from Sebastian Goetsch:

http://www2.informatik.hu-berlin.de/~goetsch/CCTools/
(perhaps it can give you some ideas for ICC/ICM compatibility and color management)
Since for these specific cases we are not using ICC/ICM color profiles, we don't need the colorprofiles.txt. Just install the plugin like any other. Also, since we are just using the simple gamma options of this plugin, we can use it in LW10.

Please, do let me know if you need any other stuff.

Thanks,



Gerardo

funk
05-30-2011, 08:49 AM
I'm just playing around with these scenes and noticed some artifacts using AA_Replacement_LW10.lws.

Sometimes it renders fine, other times I get black artifacts. Not sure if the bug is in lw10 or dpkit

dpont
05-30-2011, 10:48 AM
I'm just playing around with these scenes and noticed some artifacts using AA_Replacement_LW10.lws.

Sometimes it renders fine, other times I get black artifacts. Not sure if the bug is in lw10 or dpkit

Probably a multithreading issue,
can't be sure, because I can't reproduce,
you could try the very latest version uploaded today (win32 only for now).

Denis.

funk
05-30-2011, 11:01 AM
removed

funk
05-30-2011, 11:11 AM
I haven't tried the new dp filter but I have some more info.

I connected GetGlobalBuffer to the imagefilter output to bypass all of the tonemapping etc.

1. I went from buffer 1 and rendered over and over. Sometimes I see black lines.
2. I went from buffer 3 and rendered multiple times. It was always perfect

I decided to go into the pixel filter and send the RenderBuffer color to global 5 instead of global 1

Then in imagefilter

1. I went from buffer 5 and rendered over and over. Always perfect
2. I went from buffer 3 and rendered multiple times. Sometimes had black lines

Conclusion:

The first global buffer used sometimes gets corrupted. The second one is always correct

funk
05-30-2011, 11:22 AM
I just tested the new dpfilter and it still happens although the artifacts are much smaller now.

For quick testing I disable adaptive sampling on the camera and do the steps above.

That way each render takes less than 1 second

I also tested in lw9.6 with the new dpfilter and it has the same problem

(note: you need to load AA_Replacement_LW10.lws into lw9.6 with the new dpfilter. AA_Replacement_LW96.lws doesn't have the right connections)

funk
05-30-2011, 11:30 AM
removed

funk
05-30-2011, 11:38 AM
The workaround for now is this:

1. In pixel filter, add an extra connection from color to global 5
2. In image filter, connect the output of global 1 into a dummy node (use any node and don't connect its output anywhere)
3. Use the global 5 output to do all your work.

That way global 3 and 5 remain perfect for render output, while global 1 gets corrupted but never rendered anywhere
(only the first USED global buffer output gets corrupted, and only sometimes)

allabulle
05-31-2011, 06:42 AM
Gerardo, I passed this on to the dev team; in fact we had a meeting on it, and we are reviewing this.

What also came up in the meeting was that your post was exemplary in how to suggest changes - a clear and well-explained case.

So thank you for that!

Will keep you informed!

:)

Great! And thanks for sharing publicly.

gerardstrada
05-31-2011, 11:25 AM
The workaround for now is this:

1. In pixel filter, add an extra connection from color to global 5
2. In image filter, connect the output of global 1 into a dummy node (use any node and don't connect its output anywhere)
3. Use the global 5 output to do all your work.

That way global 3 and 5 remain perfect for render output, while global 1 gets corrupted but never rendered anywhere
(only the first USED global buffer output gets corrupted, and only sometimes)
The version I was using was for 9.6. The versions from 09/03/2011 and yesterday don't present any artifacts here (but an empty background instead). Denis has solved it in today's update (tested in v9.6). Still have some issues in v10. Please, report how it behaves in your case.



Great! And thanks for sharing publicly.
Hola Allabulle, hope it helps! :)



Gerardo

Matt
05-31-2011, 01:01 PM
And TrueArt's SplitMaterial node from TrueArt's Node Library:

http://www.trueart.pl/?URIType=Directory&URI=Products/Plug-Ins/TrueArt Node Library

Dang, no 64bit version :(

Note to developers: 64bit is the way to go these days! ;)

funk
05-31-2011, 01:18 PM
Still have some issues in v10. Please, report how it behaves in your case.


Just tried the latest one and still have artifacts. I did 10 renders. 2 of them had artifacts. 8 were perfect. So it has a 20% failure rate

Edit: just tested 9.6 too. Same problems

gerardstrada
05-31-2011, 01:34 PM
Dang, no 64bit version :(

Note to developers: 64bit is the way to go these days! ;)

We can also use the useful Material Blender node from Michael Wolf:

http://www.db-w.com/products/dbwtools/download/viewcategory/13


Just tried the latest one and still have artifacts. I did 10 renders. 2 of them had artifacts. 8 were perfect. So it has a 20% failure rate

Please, wait a few minutes and check Denis' website for the latest update :)



Gerardo

funk
05-31-2011, 04:34 PM
Tried again with the newest version and still have the same problem.

It's definitely a multi-threading issue though, because it doesn't happen when rendering with 1 thread. Anyway, the workaround I listed above (http://www.newtek.com/forums/showthread.php?p=1147549#post1147549) works, so no big deal :)

gerardstrada
05-31-2011, 05:26 PM
The version I've tried here works perfectly in LW10 with MultiThreading so far. Not sure if we are talking about the same version, but with multiple installations of this plugin it's better to delete the previous version, disable Autoscan Plugins, restart Layout and add the new version by hand. Also, in case you are using multiple LW versions, verify the path of the plugin with the Edit Plugin panel, or make a safe copy of LW10.cfg and delete it, then restart Layout from scratch.



Gerardo

funk
06-01-2011, 01:11 AM
Here's what I just tried.

1. Grabbed the latest dpkit and dpfilter
2. Started with no configs (eg. I started from scratch)
3. Added only dpkit and dpfilter plugins (manual scan)
4. Loaded AA_Replacement_LW10.lws.
5. Since many plugins were missing the surface looked different, so I replugged the conductor into the greek sculpture, disabled adaptive sampling and fixed the connections in the image and pixel filters so all plugins got bypassed (eg straight into global 1, then output from global 1)
6. Hit render multiple times... I still get random artifacts occasionally (about 20% of the time)

This happens in 9.6 too (using dpfilter96). I have attached the output (see 4 black straight lines) and the edited scene

omichon
06-01-2011, 01:40 AM
Funk, I just tested your scene here but all looks fine. Don't know if it could help but here is my rig :
I use the very last versions of DP_Kit and DP_Filters, LW9.6-32bit on Win7sp1-64bit, Multithreading set to automatic with an Intel Xeon X5650, 6 physical cores (with hyperthreading ON to get 12 threads), 4Gb RAM.
I rendered a F9 sequence (thanks Matt !) without any problem (same with Gerardo's scene).

funk
06-01-2011, 06:55 AM
Funk, I just tested your scene here but all looks fine. Don't know if it could help but here is my rig :
I use the very last versions of DP_Kit and DP_Filters, LW9.6-32bit on Win7sp1-64bit, Multithreading set to automatic with an Intel Xeon X5650, 6 physical cores (with hyperthreading ON to get 12 threads), 4Gb RAM.
I rendered a F9 sequence (thanks Matt !) without any problem (same with Gerardo's scene).

Thanks for testing. I just tried the same thing in 9.6 and got no corruption either, but then I did a few f9 renders manually and saw the black line artifacts again... hmmm

Edit: in lw10 I get corrupted frames even using Matt's f9 render sequence

omichon
06-01-2011, 07:20 AM
I tried on my laptop (Dell XPS15), and got 2 corrupted frames (over 60).
Intel Core i5 M460, 4Gb RAM, same OS, same LW version.
Could it be CPU related ?

funk
06-01-2011, 07:24 AM
I tried on my laptop (Dell XPS15), and got 2 corrupted frames (over 60).
Intel Core i5 M460, 4Gb RAM, same OS, same LW version.
Could it be CPU related ?

No idea, but I'm glad to see you got corrupted frames too. At least I know it's not just me :) It seems to happen in lw10 much more often than 9.6 though

omichon
06-01-2011, 07:47 AM
I don't have any problem on my workstations (X5650, X5550), but always get some (very few) corrupted frames on the laptop (i5 M460). So I suspect it is CPU related.
Just hope it could help Denis tracking down the issue.

funk
06-01-2011, 08:25 AM
Denis just emailed me an experimental build for lw10. I rendered 120 frames using f9 render sequence + about 40 manual f9 renders.

All rendered perfectly. I guess we'll see the fix up on his site soon

allabulle
06-01-2011, 09:31 AM
Denis just emailed me an experimental build for lw10. I rendered 120 frames using f9 render sequence + about 40 manual f9 renders.

All rendered perfectly. I guess we'll see the fix up on his site soon


Invaluable testing, then. Thanks.

omichon
06-01-2011, 10:50 AM
Denis just emailed me an experimental build for lw10. I rendered 120 frames using f9 render sequence + about 40 manual f9 renders.

All rendered perfectly. I guess we'll see the fix up on his site soon

Confirmed. All is fine with the new build of DP_Filter :)
Great work Denis :thumbsup:

funk
06-01-2011, 11:08 AM
Good job Denis :)

Gerardo sorry about derailing your cool thread here.

bazsa73
06-01-2011, 12:46 PM
Oh gosh! So many professionals and they are listened to! Wonderful Matt and Lino, and of course all the nice people around here. The little scene I've been working on for a few months - a reflective golden chandelier with minuscule details - suffered heavily from flickering, and now I understand why.

gerardstrada
06-01-2011, 03:42 PM
Good job Denis :)

Gerardo sorry about derailing your cool thread here.
It's not really my thread :) but thank you for helping to improve our tools, man :thumbsup:

Again, thank you very much, Denis! :beerchug:



Gerardo

funk
06-01-2011, 11:43 PM
Now that this issue is out of the way, I have started some quick testing. The output from global buffers seems to have very different AA/AS than the render buffer.

eg. if I plug the render buffer color output into a global buffer in the pixel filter then save the render buffer output and global buffer output in image filter, they are different.

I have attached images from my quick test. It has AA 1, AS 0.03, oversampling 0 (I just wanted to test the AS threshold)

The render buffer matches what lightwave would normally output. It's nice and smooth.

The global buffer looks like it has less AA and AS. The lines look more jagged and the reflection in the background is grainy

Why the difference? I'd have to crank my AA/AS higher to smooth out the global buffers, which wastes time.

It's faster and smoother to just use the dbw simple color corrector pixel filter at gamma 2.2, then an fp gamma image filter at .4545 (1/2.2). This matches the render buffer (using dp pixel/image filter) but renders faster

gerardstrada
06-02-2011, 02:33 AM
As I commented before, we can solve this just at pixel filter level and smooth out the lines/borders with the oversampling, but, as we have also discussed, solving this at pixel filter level only is a half solution because the resulting AA values are wrong. The correct AA values (linear) are even smoother.

If that's not the case in the current tests, it's because the output from the Final Color buffer and the output from the Global Buffers should be the same but aren't - a technical situation (hopefully temporary) with our tools. The principle shown here still stands; that is to say, the contrast threshold should be non-linear while the AA values should be linear.

Hope this can be solved.



Gerardo

funk
06-02-2011, 03:01 AM
Oh I completely agree, sorry if it sounded otherwise. The idea/principle behind it is correct. I'm just not getting the correct results.

When I used this method I noticed my AA/AS was not smooth. That led me to do the tests in my last post. I assumed these would look identical. I wasn't sure if my understanding was wrong, but you seem to agree they should be the same.


This points to some sort of difference in the way AA/AS is processed in the global buffers

funk
06-02-2011, 03:28 AM
removed

funk
06-02-2011, 04:47 AM
I have some more info. The global buffer doesn't match the AA reconstruction filter or oversampling. It seems to not use a reconstruction filter or oversampling at all.

I was using a Mitchell reconstruction filter. If you set the reconstruction filter to CLASSIC and oversampling to 0.0, the match is almost perfect.

Global buffers seem to have their own style of AA that matches nothing else in lw.

gerardstrada
06-02-2011, 09:31 PM
It seems the reconstruction filter is the key here. It would be great if Lightwave's developers could share in the SDK (or perhaps directly with Denis) the way that each reconstruction filter is weighted or the necessary info to solve this more easily, so that we can have, already in v9.6 and v10, a practical and complete solution for AA in Lightwave.
This would be useful not only for better antialiasing, but also for the other capabilities that the DP Filter Node Editors have in the areas of multipass rendering (http://www.newtek.com/forums/showthread.php?t=99989&p=943491), and maybe compositing (http://www.newtek.com/forums/showthread.php?t=71751&p=574855), filtering (http://www.newtek.com/forums/showthread.php?t=117309&p=1120657), FX (http://www.spinquad.com/forums/showthread.php?19309-New-options-for-Render-Management&p=274815&viewfull=1#post274815), and color grading (http://www.spinquad.com/forums/showthread.php?19309-New-options-for-Render-Management&p=230133&viewfull=1#post230133) within Lightwave. And there are a lot more things we can do with these tools that I have not linked, or have not shown yet, or that people already know, or can guess.
This is a free tool with excellent and in some cases unique capabilities. All users would benefit from a closer integration, I think :)



Gerardo

gerardstrada
06-03-2011, 05:37 PM
I have some more info. The global buffer doesn't match the AA reconstruction filter or oversampling. It seems to not use a reconstruction filter or oversampling at all.

I was using a Mitchell reconstruction filter. If you set the reconstruction filter to CLASSIC and oversampling to 0.0, the match is almost perfect.

Global buffers seem to have their own style of AA that matches nothing else in lw.
Just in case: the GlobalBuffers here match the Classic and Box reconstruction filters perfectly (oversample 0.0) when we set Blue Noise as the sampling pattern. The other two sampling patterns provide an almost perfect match, but not an identical one like Blue Noise does.



Gerardo

dpont
06-04-2011, 04:01 AM
I have some more info. The global buffer doesn't match the AA reconstruction filter or oversampling. It seems to not use a reconstruction filter or oversampling at all...

Not sure this is the right place to discuss the
development of DP Filter.
In a private version here I added reconstruction filtering,
close to Final LW render (except for the last algorithm),
there are always some subtle differences and I don't think
I can do much more into this, I run also into other issues
in some unevaluated areas of Buffers, but this is a specific
problem to DP Filter.

Denis.

funk
06-04-2011, 07:27 AM
That's great Denis! Can't wait to try it

gerardstrada
06-04-2011, 07:56 PM
Not sure this is the right place to discuss the
development of DP Filter.
:hijack:

:D

No, seriously, thanks to Snosrap for his patience. We can move this discussion to the ExtraBuffers thread, maybe:

http://www.newtek.com/forums/showthread.php?t=71751

This is really an amazing update!
Thank you, Denis!

Thank you very much also to the LightWave developers for taking the time to take a look at this thread, and for their constant will to improve.
They are really doing an awesome job with the LW renderer!



Gerardo

Snosrap
06-04-2011, 11:31 PM
No, seriously, thanks to Snosrap for his patience. We can move this discussion to the ExtraBuffers thread, maybe:

No, seriously, you guys are freaking crazy smart. Learning tons. Thanks for all the discussion. :thumbsup:

GraphXs
06-05-2011, 10:30 AM
After reading this thread I'm still trying to wrap my brain around it! Amazing stuff!

djwaterman
06-12-2011, 12:39 PM
Yeah, it's threads like this that make LW seem like it might have a future. Hooray for the smart people!

dballesg
06-15-2011, 02:05 AM
I was already planning to implement something like this for 11 and would like to include this information in my research.

-Mark

Hi Mark,

Don't know if this will be useful for your research, but I stumbled upon this paper when I was looking for something different - the wonders of Google.

Antialiasing Recovery (http://research.microsoft.com/en-us/um/people/hoppe/proj/aarecovery/)

David

COBRASoft
08-16-2011, 01:01 PM
Bump!

So the NT dev team doesn't forget this thread :).

gerardstrada
08-16-2011, 03:33 PM
Hi Mark,

Don't know if this will be useful for your research, but I stumbled upon this paper when I was looking for something different - the wonders of Google.

Antialiasing Recovery (http://research.microsoft.com/en-us/um/people/hoppe/proj/aarecovery/)

David

I hadn't seen the document you linked until now. The method you linked is based on the same principle as my method. The curious thing is that - according to the Microsoft webpage in your link - they published it on March 30, 2011, and I published my method two weeks earlier, on March 15, 2011, in the CG General Discussion forum of CGTalk:

http://forums.cgsociety.org/showpost.php?p=6906927&postcount=358
hmmm... that's interesting :)

Thank you for the link!



Gerardo

MSherak
08-26-2011, 01:15 PM
Has anyone looked at the images when using this method with Motion Blur on? The main channels get bunked but the alpha is fine. Wondering if there is a way to clean this up, since it looks like the newly sampled, cleaned AA is not blending right back into the buffers.

-M

NoPIColor, NoPIAlpha, PIColor, PIAlpha (PI = Pixel/Image nodes)

gerardstrada
08-26-2011, 07:14 PM
Can not reproduce the problem here (LW 9.6/LW10.1)

http://imagic.ddgenvivo.tv/forums/AA/mBlurTest.png

But as we have agreed, in order to separate the operating principle of this proposal in LW from the intrinsic technical aspects of the tools used, we have moved all discussions about the improvements of these 3rd-party tools to Denis' thread (http://www.newtek.com/forums/showthread.php?t=71751). There, he has improved the aspects mentioned here and other aspects not mentioned in this thread. There are some other tips about the implementation of this method as well.

When mentioning a bug, please, verify that you are using the latest version of DP FNE, specify the LW version you are using and your scene settings. It's better if you can share a very simple scene isolating the bug. Thanks for helping to improve our tools.



Gerardo

lardbros
09-14-2011, 03:25 AM
Hoping some development has happened in this area ready for November :)

Go Newtek, go! :)

MDSPECIFIC
02-14-2012, 07:34 AM
Finally got the time to experiment with this. I understand the principle of this method, the testing files and results are OK and everything, so I'm using one of my practice scenes with this technique, and it seems to me I can't achieve an improvement, or I can't see it, or, more likely, I'm not doing it right. I'm attaching my testing scene (LW 9.6 content); I want an improved, flicker-free render, so if somebody has spare time to play with it and point me in the right direction a little, it would be much appreciated. :thumbsup:

gerardstrada
02-15-2012, 03:38 PM
There are some considerations that might help in your scene:

The gamma-corrected contrast threshold methods provide an improvement when we work in linear light and when using Adaptive Sampling (AS); however, in the attached scene there's no linear workflow setup, nor is AS being used.

The setup applied in the attached scene provides an improvement when we get flickering due to the sampling of super-bright pixels together with pixels of very low values (this happens when working with very high contrast ratios). However, the lighting ratios in the attached scene don't surpass 100%. Pixel values in the HDR environment/reflection maps don't surpass 100% either.

Instead of using the setup for getting nice AA in super-bright pixels, you would want to try the setup shared for gamma-correcting the contrast threshold - you'll need to switch to a linear workflow first. In that case the setup for controlling the contrast threshold could be something like this (in DP PFNE):


http://imagic.ddgenvivo.tv/forums/AA/PFNEGammaConf.png


Denis Pontonnier has improved his DP Global Buffers; by enabling "MultiThread/Persp AA" only, the render is faster than before.


In DP IFNE, the setup may be like this:

http://imagic.ddgenvivo.tv/forums/AA/IFNEGammaConf.png

With the previous setup in both FNEs you should notice smoother results in the wood grain and shadowed areas. If you are outputting for Rec. 709 or sRGB, the simple pre-gamma correction setup should be viable too.



Gerardo

Btw, consider that very high intensity values in high-frequency bump maps tend to produce flickering; you might want to decrease those values, force better sampling, or instead try anisotropic shaders for reflections/specularity. You might also want to use real HDRIs, fresnel effects (incidence angle gradients may work (http://forums.newtek.com/showpost.php?p=763397&postcount=17)) and the DP Cook-Torrance shader (http://forums.newtek.com/showpost.php?p=991296&postcount=35) (Beckmann 2) for your metal shading.

MDSPECIFIC
02-17-2012, 06:05 AM
Gerardo, thanks for the input. I put together a scene without hdr, radiosity, or aa/as, with a standard surface setup with bumps, letting others experiment.
I get smoother shadows with the suggested node setup, but I still have small jagged lines/edges when playing the animation, until I add oversampling.
Like I thought, the final render results (flickering, jagged lines) are very surface/material dependent, so, like you said, I've got to modify those channels. From now on I am completely in a linear workflow, and then I can get the most from your previous setup.

Another thing to ask. On still images/renders I can use bigger values for bumps, but if I want to animate and render the same scene I need to use smaller bumps to reduce the flickering effect.
Some surfaces look more real when bumps are bigger, but then again, when I use AS (or oversampling) they are completely smoothed out.
So what's your suggestion, or do you have some rule for this situation?
Thanks again, cheers! :thumbsup:

Barnard
02-18-2012, 10:07 AM
Nice writeup! I would pay good money for someone to make a video tutorial about this.

Though this particular subject is not my specialty, I am very good at architectural modelling and would like to be able to make a video of me designing various objects, with text-based instructions on the tools and techniques used. This is to be a video tutorial for animators with less skill.

Is there some kind of software I can use to video record my screen activity, like how print screen works for capturing a jpeg of the screen?

gerardstrada
02-19-2012, 06:13 PM
Gerardo, thanks for the input. I put together a scene without hdr, radiosity, or aa/as, with a standard surface setup with bumps, letting others experiment.
I get smoother shadows with the suggested node setup, but I still have small jagged lines/edges when playing the animation, until I add oversampling.
Like I thought, the final render results (flickering, jagged lines) are very surface/material dependent, so, like you said, I've got to modify those channels. From now on I am completely in a linear workflow, and then I can get the most from your previous setup.

Another thing to ask. On still images/renders I can use bigger values for bumps, but if I want to animate and render the same scene I need to use smaller bumps to reduce the flickering effect.
Some surfaces look more real when bumps are bigger, but then again, when I use AS (or oversampling) they are completely smoothed out.
So what's your suggestion, or do you have some rule for this situation?
Thanks again, cheers! :thumbsup:
Yes, oversampling (and softer reconstruction filters) commonly helps there. And it has an explanation, I think. Contrary to what one might expect, details are very smooth, blurry and large in real photographs or film frames. If we view these images at their original resolution, the motion blur, dof, diffraction, grain, etc., are very noticeable, which gives a very blurry look to the images. What happens is that, commonly, the images we see are already down-sampled and sometimes automatically enhanced with sharpening algorithms (i.e. bicubic sharper). There's also a perceptual phenomenon that tends to minimize blurriness in small images. Blurring due to DOF and mBlur, for example, is not so noticeable in smaller versions of the same image.

I guess what happens is that when raw rendered images lack motion blur, dof, glares and other optical effects, they tend to look very harsh, and - to keep the same perceptual relationship in sharpness - people tend to 'boost' details too much, instead of reproducing the optical effects that happen in reality.

High-frequency details in procedural textures cause problems in bump/normal maps because most of them are aliased. In such cases, something that commonly helps is to bake the texture into a bitmap (at double the resolution at which the surface will be displayed on screen) and apply it back with pixel blending and mipmapping enabled. In those cases a good strategy is to diminish the texture amplitude value and play with the brightness/contrast parameters of the bitmap. Later, if we need a crisp image, we can apply a sharpening filter (DP Filter may help there).



Gerardo

gerardstrada
04-09-2013, 08:48 PM
Bump!

So the NT dev team doesn't forget this thread :).
Sorry for commenting on this so late, but as Mark pointed out, the non-linear contrast threshold has been available natively since LightWave 11!


http://s23.postimg.org/tsc86sdqz/new_LWThreshold.gif

The trick to making it work is enabling the proper nonlinear output space in the CS panel. An example for sRGB would be:

http://s7.postimg.org/i5t6j363v/non_lin_Threshold.png

It's an elegant solution because the contrast threshold should be consistent with our output colorspace, so both things (output space and contrast threshold) are set up automatically.

For saving linear results we may choose OpenEXR files in the Output panel; for saving TIFF FP or other FP formats we may use the DP Global Buffers in the Image Filter Node Editor:


http://s18.postimg.org/vapnug9rt/DPGBsave.png


Kudos to them! :thumbsup:



Gerardo

OFF
04-10-2013, 03:19 AM
Gerardo, excuse me - where can I get the gamma preset (Gamma2.2) shown in your CS example settings?

COBRASoft
04-10-2013, 04:04 PM
^^ Wondering this too, I have sRGB and others, but no Gamma 2.2...

gerardstrada
04-11-2013, 09:13 PM
Uppsss! version twelve leaked! :D

Nah... I did it myself :)

It's an optimized sRGB preset. I used a 256-pixel linear scale, corrected at 2.2 gamma. Then I measured each pixel in the LW viewer, plotted and optimized that in DP Curve (thanks to Denis) and later copied those values to a 256-res LW color table. The curve needs to be optimized 'cos we would need a huge table resolution to cover very dark colors accurately, but the floating-point discrepancies are not really relevant for integer picked colors, since at the end the final color is rounded to the same integer value.
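For comparison, the purely analytic version of such a table is trivial (this is not my measured/optimized LUT, just the plain 2.2 curve in Python):

N = 256
table = [(i / (N - 1)) ** 2.2 for i in range(N)]   # display value -> linear value
print(round(table[128], 4))                        # mid grey 128/255 -> ~0.2195 in linear light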

I made the LUT for quickly converting colors from other people's scenes that have the sRGB gamma wrongly set up for picked and light colors, but I don't use it for my own work. I prefer to color correct in the picker - with the SG_CCPicker - because not all colors need to be linear for all tasks.

The reason why picked/light colors need to be corrected at 2.2 gamma somehow, and not to the sRGB gamma (in the sRGB preset), is that computer monitors are calibrated at a gamma of 2.2, not the sRGB gamma. The sRGB gamma is for encoding images, not screen colors. The sRGB standard's "display gamma" specification assumes a computer monitor gamma of 2.2 so that an sRGB-encoded image can be displayed correctly. So screen colors are displayed at 2.2 gamma while, in color-managed environments, sRGB images are encoded according to the sRGB gamma.

Therefore, gamma 2.2 for picked colors and sRGB gamma for images.



Gerardo

lardbros
04-12-2013, 07:05 AM
Wow... brain fried again by you Gerardo :D

Know it's rude... but any chance for access to your optimised sRGB preset? ;)

Gregg "T.Rex"
04-12-2013, 07:55 AM
Uppsss! version twelve leaked! :D

Nah... I did it myself :)

It's an optimized sRGB preset. I used a 256-pixel linear scale, corrected at 2.2 gamma. Then I measured each pixel in the LW viewer, plotted and optimized that in DP Curve (thanks to Denis) and later copied those values to a 256-res LW color table. The curve needs to be optimized 'cos we would need a huge table resolution to cover very dark colors accurately, but the floating-point discrepancies are not really relevant for integer picked colors, since at the end the final color is rounded to the same integer value.

I made the LUT for quickly converting colors from other people's scenes that have the sRGB gamma wrongly set up for picked and light colors, but I don't use it for my own work. I prefer to color correct in the picker - with the SG_CCPicker - because not all colors need to be linear for all tasks.

The reason why picked/light colors need to be corrected at 2.2 gamma somehow, and not to the sRGB gamma (in the sRGB preset), is that computer monitors are calibrated at a gamma of 2.2, not the sRGB gamma. The sRGB gamma is for encoding images, not screen colors. The sRGB standard's "display gamma" specification assumes a computer monitor gamma of 2.2 so that an sRGB-encoded image can be displayed correctly. So screen colors are displayed at 2.2 gamma while, in color-managed environments, sRGB images are encoded according to the sRGB gamma.

Therefore, gamma 2.2 for picked colors and sRGB gamma for images.



Gerardo

Hey Gerardo, this is really interesting!
Is this because the sRGB encoding can peak at gamma 2.4?
Most values lie at the median value of 2.2 though...
How strong, visually, is the difference in the end result between an sRGB picked color and a gamma 2.2 picked color?

gerardstrada
04-14-2013, 10:17 PM
Wow... brain fried again by you Gerardo

Know it's rude... but any chance for access to your optimised sRGB preset?

Lardbros, not rude at all! I haven't shared it before because the SG_CCPicker solution is better; besides, it could be confusing how to use such a specific preset.

I'm attaching the LUT anyway, but as I commented before, I don't use it for my own work, and though it is viable for linearization, I don't recommend it for display. SG_CCFilter or even FPGamma is better for that.

Anyway, its general-purpose usage for sRGB *) would be as shown in the previous image, but it would be good to explain some things:

- Picked Colors: Use Gamma 2.2 if you are using a computer monitor. In theory we should use our monitor ICC profile here, but there's an issue in LightWave's CS panel described here (http://forums.newtek.com/showthread.php?133785-Does-anybody-use-quot-Display-Color-Space-quot-LUT-with-success&p=1303560&viewfull=1#post1303560) (and a possible native solution we can discuss later).

- Light Colors: same as above.

- Palette Files: Use sRGB if you have generated your palette files in a color-managed environment, i.e. with an sRGB working color space in PS. If you have not used a color-managed app for generating your palette files (let's say we have used Brilliance or DeluxePaint :D), you would be better off with Gamma 2.2 - though the monitor color space would be optimal.

- 8-bpc Files: Use sRGB - that's precisely what it's intended for. This is commonly applicable to 16-bpc files too - but not always.

- FP files: Use Linear (these files are linear by nature).

- Alpha: Use Linear for pictures / sRGB for line-art graphics.

- Display Color Space: Use sRGB if what you are trying to preview is how the render will look when encoded as an sRGB image. This implies your monitor is calibrated to the sRGB standard. If you just want to preview how the raw image looks on a computer screen, just apply a 2.2 gamma - I'd recommend even FPGamma over my LUT for that :) If you want to simulate any other display standard, or you are indeed previewing on a medium-gamut monitor (aRGB), an sRGB-ish monitor, DCI-P3, etc., I strongly recommend SG_CCFilter (http://www2.informatik.hu-berlin.de/~goetsch/CCTools/). It's the most reliable solution until the native ICC compatibility (correct conversion concatenation / primaries translation) can be solved.

- Output Color Space: Same as Display Color Space. Just notice that a Gamma 2.2 LUT / monitor profile will push the contrast threshold a bit further than the sRGB gamma in very dark areas and will increase the AA a bit beyond the visual perceptual threshold. That's commonly better.



Gerardo

*) You can save your own sRGB preset according to your specific case using those criteria, though I wouldn't recommend working in sRGB space.

gerardstrada
04-14-2013, 10:25 PM
Hey Gerardo, this is really interesting!
Is this because the sRGB encoding can peak at gamma 2.4?
Most values lie around the median value of 2.2 though...
How visually strong is the difference in the end result between an sRGB picked color and a gamma 2.2 picked color?
Gregg, Hey! Glad you find it interesting!
Something curious about the sRGB gamma is that values of 2.4 gamma are never actually reached in the resulting values of an sRGB curve. The intended appearance of the sRGB gamma formula is a 2.2 curve; it uses a 2.4 exponent only because there's a linear segment near the blacks. This linear segment works like a slope that lowers the subsequent values in the curve. If the exponent were 2.2, the final curve wouldn't reach the overall 2.2 gamma values (middle values would land in the range of about 1.8). So an increase in the exponent is necessary to produce the 2.2 gamma curve appearance. The main reason there's a linear segment in the formula is that it allows round-tripping gamma compression/expansion with integer values in 8-bpc images. It also introduces a slight (more pleasant) tone curve and helps to handle viewing flare.

This means that the overall differences in values between the sRGB gamma and 2.2 gamma are minimal for linearization. However, they are noticeable in dark and very dark colors when these colors are gamma corrected for display or output. In medium and bright colors the difference is really unnoticeable.
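
If it helps to see the numbers, here is a small sketch (mine, using just the standard formulas) comparing the piecewise sRGB encoding with a plain 2.2 power law - the gap is large in the darks and negligible in mid and bright tones:

def srgb_encode(linear):
    # Standard sRGB transfer function: linear segment near black,
    # then a 2.4 exponent that yields an overall ~2.2 appearance.
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * (linear ** (1.0 / 2.4)) - 0.055

def gamma22_encode(linear):
    # Plain power-law 2.2 gamma, as used for picked/light colors.
    return linear ** (1.0 / 2.2)

for v in (0.001, 0.01, 0.1, 0.5, 0.9):
    print(v, round(srgb_encode(v), 4), round(gamma22_encode(v), 4))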



Gerardo

maxi3dcp
07-09-2014, 12:06 PM
Hi Gerardo, has this AA setup using DP PFNE/IFNE changed with LW's new unified AS system, or has it been improved?

Thanks!!

gerardstrada
07-09-2014, 06:04 PM
Hi Gerardo, has this AA setup using DP PFNE/IFNE changed with LW's new unified AS system, or has it been improved?

Thanks!!


For the first node setup (the one used for correcting the AS threshold), we don't need it any more. As was pointed out by Mark (LWDT) in this same thread, the non-linear contrast threshold has been available natively since LightWave 11! As for the second node setup (the one that has not been implemented natively and is used for replacing HDR AA with LDR AA), it no longer needs any gamma correction at the pixel level, since we take advantage of the non-linear threshold of recent versions simply by switching to a non-linear output space in the CS panel.



Gerardo

erikals
08-27-2014, 06:18 PM
hi Gerardo,
sorry to bump, but since you are an expert in this area I have a question for you :)

is there any smart way of cutting down the render time for the LightWave content "Grass Instancing" scene?
When animated, the AA can take quite a big piece of the render time...

http://forums.newtek.com/showthread.php?140653-Best-render-settings-to-avoid-noise-in-grass&p=1373040&viewfull=1#post1373040

https://encrypted-tbn1.gstatic.com/images?q=tbn:ANd9GcSZThgjf23rGMo47DgPmEaduPCHCdQEP VgjJ1C2axktKFul9DwX

just curious if there is a way...

gerardstrada
09-02-2014, 03:57 AM
Hello Erik,

Sorry for the late reply.

Agreed. The default settings are not suitable for animation:

http://s18.postimg.org/jd0vq1snt/default.gif

If you ask me, JonW's settings look pretty good there:

http://s29.postimg.org/wkepk8smv/m_B_settings.gif

And the render time is fast considering the level of flickering in this type of scene. Geometry grass needs high AA settings no matter which renderer we use, because the subpixel detail (dependent on the AA samples) is what defines the geometry. High AA, mBlur, APS, a bit of post-DOF and a soft reconstruction filter are your friends in such cases.

Since in these cases subpixel details might be so small that a single pixel defines the whole geometry, Pixel Per Polygon APS tends to help sample those subpixel details better from frame to frame. To keep subdivision low, Catmull-Clark works better than Subpatch when using the PPP APS option. In this case I used 1/45 AS with APS at 5 PPP:

http://s1.postimg.org/m1wou4q33/APS_settings.gif

Notice I'm not using mBlur nor a Gaussian reconstruction filter in the above sample; it's 36% faster than the previous setup and the results are flicker-free as well.

Another trick worth trying for background grass is using transparency maps on grass cards instead of actual geometry. With transparency maps we are able to pre-AA or pre-soften the aliased edges of the grass. Transparency, however, might increase render time if RT transparency is enabled.

I haven't used it in the previous sample, but 2 mBlur passes are useful as well because they help to average the current pixel with the subsequent one. Post-DOF is commonly more useful than denoise filters because in the former the kernel size depends on the focal distance (and the farther geometry is the problematic one), while in the latter the kernel size is constant across the whole frame and it might blur detail in the process. But for large areas of very small geometry, a denoiser helps because the small geometry presents a noise pattern. In such cases you might want to take a look at this approach:

http://forums.newtek.com/showthread.php?71751-Extra-Buffer-nodes&p=1397271&viewfull=1#post1397271

It was designed for shading noise, not "small geometry" flickering, but it might be worth trying in the mentioned cases :)



Gerardo

erikals
09-02-2014, 01:30 PM
Nice!
had to read it twice, but got it now... :]


Transparency, however, might increase render time if RT transparency is enabled.
Yes, I found it didn't gain anything in my simple test...

I was also wondering, in this case, if it could be an idea to render at 12fps and then time-stretch it to 24fps...
either using a regular method or a plugin like Twixtor... I'll have to give it some test runs...


Thank you so much again for helping me/us out here...!

gerardstrada
06-29-2017, 11:31 PM
Implementing both CS corrected and clamped contrast detection inside LW should be very easy as long as they are user-selectable options and there's no risk of breaking older scenes.

Probably it's just a matter of persuading the developers that these would be worthy additions to the many checkboxes already there -and I think Gerardo's examples are great for this.
I totally agree, the checkboxes idea is a very useful way to go.

Just in case, I've shared here an improved and simpler method for enhancing AA in high contrast ratios:

http://forums.newtek.com/showthread.php?71751-Extra-Buffer-nodes&p=1510469&viewfull=1#post1510469

full dynamic range AA:
https://s18.postimg.org/hqvgad2bt/AASC-unclipped.jpg

limited dynamic range AA :
https://s7.postimg.org/j0e72mkor/AASC-clipped.jpg

full dynamic range pre/post filtered:
https://s15.postimg.org/9dhy9w0fv/AASC-softclipped.jpg

I think it would be very easy to implement something like that as an additional checkbox.
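
To make the checkbox idea concrete, here is a hedged sketch (not LightWave's actual code) of what a CS-corrected and clamped contrast test could look like: rendering stays linear, but the absolute threshold is compared on display-referred (gamma-corrected and clamped) values, so one threshold behaves consistently in darks and highlights:

def to_display(linear, gamma=2.2, clamp=1.0):
    # Clamp and gamma-correct a linear value for the contrast test only.
    return min(linear, clamp) ** (1.0 / gamma)

def needs_more_samples(a, b, threshold=0.05, corrected=True):
    # Compare two neighbouring samples against an absolute threshold.
    if corrected:
        a, b = to_display(a), to_display(b)
    return abs(a - b) > threshold

# Two dark linear values that differ visibly on screen:
print(needs_more_samples(0.002, 0.010, corrected=False))  # False - the linear test misses it
print(needs_more_samples(0.002, 0.010, corrected=True))   # True  - the corrected test flags it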



Gerardo

madno
07-02-2017, 01:19 AM
Hi,

maybe you can help me with the attached scene.

I use luminous geometry to simulate real lights. Materials for the objects are LW's Dielectric and Carpaint with blurred reflections.
There are DP Flood Lights (parented to my simulated lights) but they are switched off in Scene Editor at the moment.

I like the style of this type of render, but to me it seems nearly impossible to get rid of the noise without high render times.

I thought the NIF NPF filter with Gerardo's nodes could help, but had no luck (render time went up but the result did not improve).

Am I doing it wrong, or am I on the wrong track entirely?

137257

137258

137259

137260

137261


Text geometry was requested by a user in another thread - I initially made a sub-d version to post there, but then had fun creating this scene. Now I am thinking of adding textures etc., but first I hope to find a way to reduce the noise.

gerardstrada
07-02-2017, 04:28 AM
Maybe you need to enable Multithread/Persp.AA? This allows AA compatibility with the pixel filter.
Btw, for the grainy result in your shading, this technique applied to the reflection buffer (instead of the illumination pass as shown in the post) may help:

http://forums.newtek.com/showthread.php?146003-Gerardo-Estrada-DPont-Denoiser



Gerardo

p.s. sometimes reducing AA passes and increasing shading samples helps as well, depending on the scene.

maxi3dcp
07-03-2017, 07:28 PM
Render time: 21m 17s without Nodes
137285
Render time: 10m 52s with Nodes
137286
Render time: 9m7s with Nodes and changing the HDRI
137287
Render time: 18m24s without Nodes!!
137288

Here the render time went up:
Render time: 21m38s
137289
Render time: 26m7s
137290

Thanks Gerardo!!!
Max

gerardstrada
07-03-2017, 10:54 PM
Hey Maxi!

How are you doing? Some nice examples there!

I haven't noticed render speed improvements in my scenes here. The idea was just to solve artifacts in reflections, so we could skip it in cases where this problem is not critical. I guess that if it were a native solution it might be faster.

Regards!



Gerardo

Sensei
07-04-2017, 05:00 AM
Maxi, also try double/triple resolution with all AA off. Then resize down in Photoshop.

gerardstrada
07-04-2017, 06:36 AM
Just in case, rendering at higher resolution without AA and then resizing down may help with noisy results, but it won't help much with artifacts due to super-bright pixel values.

Rendered at quadruple resolution without AA and then resized:
https://s17.postimg.org/dc1nr42nz/no_AAbig_Reduced-pp.jpg

The problem is still there because the contrast ratios to be averaged are way too high. So some kind of highlight compression and expansion before the AA calculation (or probably before resizing) would still be necessary.
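
As a rough illustration (my own sketch, not the exact node setup from the linked post), here is why plain averaging of HDR sub-samples can't fix it, and why compressing highlights before the average - then expanding afterwards - keeps the edge within a reasonable contrast ratio without throwing away the HDR values:

def compress(v):
    # Simple Reinhard-style soft compression: maps [0, inf) into [0, 1).
    return v / (1.0 + v)

def expand(v):
    # Inverse of compress(), restoring the HDR range after filtering.
    return v / (1.0 - v)

# Four sub-samples of one output pixel: a super-bright highlight next to
# three dark samples (e.g. a light seen past a dark edge).
samples = [1000.0, 0.02, 0.02, 0.02]

plain = sum(samples) / len(samples)           # ~250.0 - the edge still blows out
soft = [compress(s) for s in samples]
filtered = expand(sum(soft) / len(soft))      # ~0.36  - a usable blended edge value

print(plain, filtered)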



Gerardo

Sensei
07-04-2017, 10:39 AM
Just in case, rendering at higher resolution without AA and then resizing down may help with noisy results, but it won't help much with artifacts due to super-bright pixel values.

That depends on which format you write the file to.
If it's HDRI, floating point, 1000.0 will remain 1000.0.
But if it's 32-bit RGBA, 1000.0 will be truncated to 255,
and after blending with the surrounding pixels it won't exceed 255 anymore.

gerardstrada
07-04-2017, 07:29 PM
...and all your results will be clipped, which is what the proposed method tries to avoid. That is, getting nice AA while still keeping the HDR values.



Gerardo