
oh 'Gods of Linear Workflow' - is combo of FPrime & G2 useful?



3dWannabe
04-05-2010, 10:36 AM
I've been reading about the linear workflow techniques, but the dark previews seem to be the downside to the perfect renders.

I have FPrime.

Worley Labs thoughtfully suggested I could additionally buy G2 if I wanted to gamma-correct my FPrime previews (would it only affect the FPrime previews?)

I'm also using (well, actually starting to learn) exrTrader, Janus, Fusion6.

But - as I really don't know what I'm doing, I don't know the "gotchas" I might encounter if I go ahead and purchase G2 for use with FPrime (and hopefully for use with 'normal' renders in situations where FPrime won't work).

Any comments? Should I try to get that to work?

Matt
04-06-2010, 09:37 AM
The gamma correction features in G2 are not only for FPrime; they just happen to work with FPrime too, so that combination should work fine.

Personally I think Worley's decision not to add a gamma correction setting to FPrime is a mistake. I'm not about to pay a large chunk for G2 for that one feature, so without it, FPrime is pretty much useless to me now. I doubt I would buy any further updates of FPrime without that feature, so that's one revenue stream down.

Shame, I liked FPrime too.

3dWannabe
04-06-2010, 10:32 AM
Matt - I sent Worley a message last week asking him about a tutorial on using G2 & FPrime in a linear workflow, but didn't get a response.

I don't mind paying him for his work and time, but I don't really feel the need for G2 (one more point of failure) - and I agree that gamma correction should be in FPrime, especially as linear workflow seems to have been a hot topic for a while, even though I'm just now learning about it.

Maybe he'd be willing to add just that feature to FPrime for $50? I'm sure it would generate more interest among the folks who may not have upgraded to the latest release and seem to be frustrated with the lack of gamma correction in FPrime.

Do you think you could message the other folks you know who might be interested, and see if they can all send Worley a request? He might pay attention if enough people were interested?

artstorm
04-06-2010, 11:17 AM
I played around a bit with G2 and linear workflow last year but couldn't figure out how to get it to work correctly. Setting the gamma in G2's image processing section to 2.2 (which also affects the FPrime window) does not give you the result you'd expect. Instead of getting an overall brighter image, which is what you get when applying a 2.2 gamma anywhere else, you get a more contrasty image when applying 2.2 gamma in G2.
I did get pretty close by using a gamma of 0.4546 in G2, which made the image darker, and then brightening it with the G2 Bright controls. This got rid of the contrast problems, but I didn't find any consistency in what Bright values to use to make it a reliable method.

Not useful for LWF, in other words. I didn't spend more time digging into the issue at the time, as I don't use FPrime that often anymore, so if there is any workaround or method to get G2's gamma to behave as one expects, it'd be interesting to know. You never know when you'll need to start up FPrime for a project down the road. But at the moment, as far as I know, there is no straightforward way to get a linear workflow with FPrime by using G2.

Matt
04-06-2010, 01:42 PM
Matt - I sent Worley a message last week asking him about a tutorial on using G2 & FPrime in a linear workflow, but didn't get a response.

I don't mind paying him for his work and time, but I don't really feel the need for G2 (one more point of failure) - and I agree that gamma correction should be in FPrime, especially as linear workflow seems to have been a hot topic for a while, even though I'm just now learning about it.

Maybe he'd be willing to add just that feature to FPrime for $50? I'm sure it would generate more interest among the folks who may not have upgraded to the latest release and seem to be frustrated with the lack of gamma correction in FPrime.

Do you think you could message the other folks you know who might be interested, and see if they can all send Worley a request? He might pay attention if enough people were interested?

Maybe. The reason I was given was that it would add too much complexity to FPrime! I totally disagree with that.

3dWannabe
04-06-2010, 02:08 PM
Maybe. The reason I was given was that it would add too much complexity to FPrime! I totally disagree with that.

Well, from what Johan is saying, there may be some issues.

As I'm just starting out with linear workflow, I certainly don't want a "can't get there from here" situation where FPrime & G2 won't really work as needed. I don't think there's a 'try before you buy' with G2?

Hopefully Worley will chime in at some point and clarify?

It would benefit him to show how it would work, or modify it so that it does.

zardoz
04-07-2010, 02:36 AM
Well, I use the db&w Tools plugin and add the Simple Colour Corrector as a pixel filter, set to gamma 2.2. This is for the LW renderer only; it doesn't work with FPrime.

http://www.db-w.com/component/option,com_remository/Itemid,84/func,select/id,13/

Lightwolf
04-07-2010, 03:25 AM
Maybe. The reason I was given was that it would add too much complexity to FPrime! I totally disagree with that.
It's for the same reason that there is no option to correct the pixel aspect ratio as well.
Sheesh, us LW users really must be dweebs ;)

Cheers,
Mike

rsfd
04-07-2010, 04:31 AM
I wouldn't buy G2 only for that feature; do an F9 instead and add the FPGamma image filter in the Image Processing tab to preview your renders in LogSpace. (Turn it off prior to output as floating point.)
[BTW, I think the usefulness of FPrime has dropped with the speed of the LW 9.5/9.6 renderer and the possibilities of Viper (even though Viper can't be used for previewing in LogSpace).]

zardoz
04-07-2010, 04:45 AM
rsfd, use db&w's plugin because you don't have to be looking at a dark preview. While it renders you can see it with the correction applied. It doesn't work while calculating the radiosity.

rsfd
04-07-2010, 05:28 AM
@zardoz:
Not sure if I understand correctly: setting the db&w SCC with gamma 2.2 on the color input of the Node Pixel Filter in the Processing tab?
I'm getting a quick look at the rendered image while F9 renders, but when F9 has finished, I'm just getting a black image.

I'm usually using dbwTools' "Simple Colour Corrector" to linearize textures and colors and apply FPGamma (set to 2.2) to preview linear F9 renders "gamma-corrected" (transformed to LogSpace) in ImageViewer.
When I render the final output (32-bit exr) I'm turning FPGamma off to have a linear output for post-work.

zardoz
04-07-2010, 05:37 AM
No... db&w Tools has another plugin, not a node one: a simple pixel filter called db&w Simple Colour Corrector. There you can add a 2.2 gamma, and LightWave, while it renders (after the GI calculation), will display the corrected gamma. I use it a lot while previewing my renders.

Lightwolf
04-07-2010, 05:56 AM
No... db&w Tools has another plugin, not a node one: a simple pixel filter called db&w Simple Colour Corrector. There you can add a 2.2 gamma, and LightWave, while it renders (after the GI calculation), will display the corrected gamma. I use it a lot while previewing my renders.
Not only that, but if you use adaptive sampling then the sampling threshold will be computed using the gamma corrected pixels, which can be a little faster.

If you want to save linear then you can also add an image filter with the inverse gamma (1 / gamma). Since the image buffer is float this won't really affect the image quality.
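Purely as an illustration of why the float buffer makes that safe (a plain Python sketch with a simple power-law gamma, not actual LW SDK code; the 2.2 value is just an example):

gamma = 2.2
linear = [0.001, 0.18, 0.5, 0.95]                 # example float pixel values

previewed = [v ** (1.0 / gamma) for v in linear]  # pixel filter: gamma-correct for the preview
restored  = [v ** gamma for v in previewed]       # image filter: the inverse gamma undoes it again

print(all(abs(a - b) < 1e-12 for a, b in zip(linear, restored)))  # True - nothing lost in float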

@rsfd: The db&w Tools include the SCC as a node, image filter as well as a pixel filter.

Cheers,
Mike

Captain Obvious
04-07-2010, 06:03 AM
Not only that, but if you use adaptive sampling then the sampling threshold will be computed using the gamma corrected pixels, which can be a little faster.
Technically, though, results are incorrect. For example:

Taking the average of two values, one 0.25 and the other 0.75, and then applying a gamma correction of 2.0 yields 0.71. Taking the same two values and applying a gamma correction *BEFORE* averaging them yields 0.68.
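A minimal Python check of those numbers (simple power-law gamma only):

a, b = 0.25, 0.75
gamma = 2.0
average_then_gamma = ((a + b) / 2.0) ** (1.0 / gamma)                 # average first, then gamma
gamma_then_average = (a ** (1.0 / gamma) + b ** (1.0 / gamma)) / 2.0  # gamma first, then average
print(round(average_then_gamma, 2), round(gamma_then_average, 2))     # 0.71 0.68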

Lightwolf
04-07-2010, 06:07 AM
Technically, though, results are incorrect. For example:
Yup, you're absolutely right. Which is why it'd be nice to have a gamma correction for the AS threshold only.

*hmmm*

Cheers,
Mike

rsfd
04-07-2010, 06:09 AM
hmm, still don't get it!
I do use dbwTools, but I can't find a "Simple Colour Corrector" Image or Pixel Filter.
Probably only part of the Win version of dbwTools - I'm still on a Mac.
I'm going only nodal, so I guess I'll stick with my way anyway.

Lightwolf
04-07-2010, 06:19 AM
hmm, still don't get it!
I do use dbwTools, but I can't find a "Simple Colour Corrector" Image or Pixel Filter.
Probably only part of the Win version of dbwTools - I'm still on a Mac.
I'm going only nodal, so I guess I'll stick with my way anyway.
V1.5?

Cheers,
Mike

rsfd
04-07-2010, 06:31 AM
Yep, I was examining why that would be - on the machine where I am right now I have an older version installed (I never had my login data here when I wanted to update, that's why :foreheads ).
So I will download v1.5 tomorrow.
Thanks for the reminder!

Lightwolf
04-07-2010, 06:38 AM
(I never had my login data here when I wanted to update, that's why :foreheads ).
You don't need to login to download them. That's only necessary for our commercial plugins.

Cheers,
Mike

Edit: You actually had me worried for a minute, so I double checked... ;)

rsfd
04-07-2010, 06:40 AM
aarrrgh, (no comment)

Lightwolf
04-07-2010, 06:44 AM
aarrrgh, (no comment)
:D (also no comment)

Cheers,
Mike

rsfd
04-07-2010, 07:17 AM
got it! It's all in place!

thanks zardoz
and of course: thanks a lot Mr. Lightwolf! :thumbsup:

Matt
04-07-2010, 08:12 AM
Wait! Hang on! Mike, I thought the Simple Colour Corrector was for surfaces, how do you get this to affect the render preview? Using ShaderMeister? Inquiring minds need to know!

:)

Lightwolf
04-07-2010, 08:15 AM
Wait! Hang on! Mike, I thought the Simple Colour Corrector was for surfaces, how do you get this to affect the render preview? Using ShaderMeister? Inquiring minds need to know!

:)
If I may be so bold as to quote myself...


@rsfd: The db&w Tools include the SCC as a node, image filter as well as a pixel filter.

Cheers,
Mike

Matt
04-07-2010, 08:27 AM
If I may be so bold as to quote myself..

You may, and this works with FPrime?

zardoz
04-07-2010, 08:46 AM
from what I've tried...no.

because it's a pixel filter, right?

I didn't use linear WF because when LW is rendering you always look at a very dark image, and only at the end does it apply the gamma. With this colour corrector I can preview my render. Only during the GI phase it doesn't work, but after that it's fine.

Matt
04-07-2010, 09:02 AM
Mike, damn you, had me all excited then and everything! :p

COBRASoft
04-07-2010, 12:22 PM
This seems interesting. Matt do you have a video tut about this workflow (with db&w tools) or can somebody supply a simple scene which illustrates this perfectly?

3dWannabe
04-07-2010, 01:05 PM
This seems interesting. Matt do you have a video tut about this workflow (with db&w tools) or can somebody supply a simple scene which illustrates this perfectly?

This may help, has several videos on the thread.

http://www.newtek.com/forums/showthread.php?t=102397

rsfd
04-08-2010, 06:51 AM
OK: dbw-scc pixel/image filter.
Now I'm curious:
what are the technical advantages of using the db&w SCC pixel (or image) filter over using the FPGamma image filter to preview F9 renders in LogSpace?

Lightwolf
04-08-2010, 07:49 AM
OK: dbw-scc pixel/image filter.
Now I'm curious:
what are the technical advantages of using the db&w SCC pixel (or image) filter over using the FPGamma image filter to preview F9 renders in LogSpace?
None with the image filter (especially as it's broken, as I just discovered) - the pixel filter gives you the advantage of seeing the image as it will be with gamma correction while it's rendering, allowing you to judge, abort and tweak earlier.

Cheers,
Mike

Tobian
04-08-2010, 07:55 AM
For one, you can *see* the render as it's doing it (after the radiosity pass), which counts for a lot, especially if you want to make sure it's looking about right. It's very hard to guess if an LCS image is cooking properly. In my case, when you're waiting 3-6 hours for a render, leaving it to chance or your imagination is not great! :D

The other thing is that adaptive sampling does not compensate for LCS, so pixels at the lower end of the brightness scale won't be caught and anti-aliased. The only options to get round this are to use very high AA levels or very tiny adaptive sampling values (which can do horrible things to render times, as it then needs to take LOADS more sub-passes, and also anti-aliases the pixels which don't need it even more). Not using AS is an option, but AS does a lot for render speed!
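To make that concrete, here's a rough Python sketch (the 0.02 threshold and the pixel values are made-up numbers, and LW's actual AS logic is more involved): a step in the shadows can sit below a linear-space contrast threshold yet be clearly visible once the 2.2 gamma is applied.

threshold = 0.02                      # hypothetical adaptive-sampling threshold
dark_a, dark_b = 0.010, 0.022         # two neighbouring linear pixel values in the shadows

linear_step  = abs(dark_b - dark_a)                            # 0.012 -> below threshold, not refined
display_step = abs(dark_b ** (1 / 2.2) - dark_a ** (1 / 2.2))  # ~0.053 -> plainly visible after gamma
print(linear_step < threshold, display_step > threshold)       # True True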

3dWannabe
04-08-2010, 08:13 AM
The other thing is that adaptive sampling does not compensate for LCS, so pixels at the lower end of the brightness scale won't be caught and anti-aliased. The only options to get round this are to use very high AA levels or very tiny adaptive sampling values (which can do horrible things to render times, as it then needs to take LOADS more sub-passes, and also anti-aliases the pixels which don't need it even more). Not using AS is an option, but AS does a lot for render speed!

But if you are generating at 2x or 4x for compositing in Fusion or Nuke, you don't have to deal with AA issues in LW (and maybe reflection blurring and DOF are also best handled during compositing)?

I'm trying to get a handle on what is best left to compositing and how this affects the linear workflow, so please correct me if I'm off base.

3dWannabe
04-08-2010, 08:15 AM
The biggest problem with Worley is.... he rarely says anything till he's ready to debut something. So... he MAY have something up his sleeve NOW, then again he may not. I'm still waiting for a volumetric Sas2. :devil:

Well, maybe he bases his decisions on where to devote his time on the number of requests he gets for a feature.

I hope that everyone who might be interested is emailing him. How else is he going to know there's interest?

Tobian
04-08-2010, 08:19 AM
Well, if you're gonna change the rules, then sure, why bother with AA :p Reflection blurring is NOT best left to post, unless you are only doing a tiny superficial softening. Post cannot calculate the depth of the reflection ray being cast. There is no such buffer to do that, so far as I know. It's also a problem with doing DOF in post: reflections and refractions should have DOF based on the focal distance of the reflected object, which is nearly impossible to fake in post if it becomes a visible issue.

3dWannabe
04-08-2010, 08:44 AM
Well, if you're gonna change the rules, then sure, why bother with AA :p Reflection blurring is NOT best left to post, unless you are only doing a tiny superficial softening. Post cannot calculate the depth of the reflection ray being cast. There is no such buffer to do that, so far as I know. It's also a problem with doing DOF in post: reflections and refractions should have DOF based on the focal distance of the reflected object, which is nearly impossible to fake in post if it becomes a visible issue.

I'm studying several articles in HDRI3D by Gerardo Estrada and Proton Vaughan that seemed to say that LW doesn't output a certain buffer needed to composite AA properly (I forget the name), so compositing is generally done at 2x or 4x. The articles also mentioned blurring reflections in post and how it is so much faster.

But, I'm very interested in learning the downside (as you mentioned) to doing certain things in post. I guess there are quality and time trade-offs no matter which way you go.

rsfd
04-08-2010, 10:21 AM
Lightwolf, Tobian
thanks for clearing that up!

Tobian
04-08-2010, 02:09 PM
Of course blurring reflections in post is faster. Everything in post is faster. Is it any good, though - that is the question. The answer in a lot of cases is... no, unless you know a shed load of tricks and workarounds, which work only in very specific shots.

And Yeah Lightwolf, I noticed the image filter was broken ages ago, thought I had mentioned it to you? :)

Lightwolf
04-08-2010, 04:40 PM
And Yeah Lightwolf, I noticed the image filter was broken ages ago, thought I had mentioned it to you? :)
No, not that I remember. But we only get a few bug reports, so it didn't surprise me (I honestly do wish we'd get more).

Cheers,
Mike

Tobian
04-08-2010, 05:00 PM
If I notice any horrendous flaws in your software, I shall endeavour to point them out, though I only use a few of them. I already did some bug-testing for your cache node :D

Captain Obvious
04-08-2010, 05:00 PM
No, not that I remember. But we only get a few bug reports, so it didn't surprise me (I honestly do wish we'd get more).
I'd love to file some, but I haven't found any. Yet. :D

Lightwolf
04-08-2010, 05:05 PM
I'd love to file some, but I haven't found any. Yet. :D
I'm not going to tell you where to look ;)

If I notice any horrendous flaws in your software, I shall endeavour to point them out, though I only use a few of them. I already did some bug-testing for your cache node :D
Please do that - even if it's not horrendous. I think that every developer relies on feedback to either iron out issues or improve the software.

Cheers,
Mike

3dWannabe
04-23-2010, 09:03 PM
I just read a pdf on Nuke's linear workflow (oddly enough, found in a thread on Fusion's pigsfly.com board when linear also seems a hot topic)

http://www.swdfx.com/PDF/Nuke_Color_Management_Wright.pdf

And as I read it, a thought occurred to me ... when it stated that everything was linear in Nuke, but displayed in the viewer with gamma applied according to a LUT so that the image looks 'right' in the viewer.

I never got any response from Worley about adding the ability to change gamma in fprime, or to the issues posted where one user had an unsatisfactory experience using G2 to correct the gamma for a linear workflow.

So ... as I use the GretagMacbeth Eye-One to correct my LCD, and it has an input for the desired gamma target to use when creating the LCD's profile ... would it be possible to tell it to create a profile for the gamma that would 'correct' the linear image so it looks 'normal'?

Switching between monitor profiles could probably be automated by some utility, or I could write one in .NET for Win7.

Would this work?

Lightwolf
04-24-2010, 04:41 AM
Would this work?
Yes, but you'd get massive banding. The problem is that the LUT would be applied after the image has been converted to the 8-bit frame buffer on your graphics card - which may effectively (after the LUT has been applied) leave you with 5-6 bits per component sent to the monitor.
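To see roughly what that does to the darks (a plain Python sketch using a simple 2.2 power-law LUT; real monitor LUTs and profiles are more complex):

for code in range(6):                                  # the darkest few 8-bit framebuffer values
    display = round(255 * (code / 255) ** (1 / 2.2))   # what the 2.2 view LUT sends to the monitor
    print(code, display)                               # roughly 0, 21, 28, 34, 39, 43 - big gaps = banding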

Cheers,
Mike

gerardstrada
04-24-2010, 09:03 PM
I played around a bit with G2 and linear workflow last year but couldn't figure out how to get it to work correctly. Setting the gamma in G2's image processing section to 2.2 (which also affects the FPrime window) does not give you the result you'd expect. Instead of getting an overall brighter image, which is what you get when applying a 2.2 gamma anywhere else, you get a more contrasty image when applying 2.2 gamma in G2.
I did get pretty close by using a gamma of 0.4546 in G2, which made the image darker, and then brightening it with the G2 Bright controls. This got rid of the contrast problems, but I didn't find any consistency in what Bright values to use to make it a reliable method.

Not useful for LWF, in other words. I didn't spend more time digging into the issue at the time, as I don't use FPrime that often anymore, so if there is any workaround or method to get G2's gamma to behave as one expects, it'd be interesting to know. You never know when you'll need to start up FPrime for a project down the road. But at the moment, as far as I know, there is no straightforward way to get a linear workflow with FPrime by using G2.

There's a way to work in linear light with G2 and FPrime. I shared it about 3 years ago in Issue #18 of HDRI3D magazine. The thing with G2 is that it aims to offer wider flexibility, and it confuses things because its gamma tool doesn't behave in the regular way - like a gamma function - but like a power function. That is to say, in the opposite way (though in math this is not accurate for large data, for our purposes it's the opposite way). This means that when we input, let's say, a 2.2 value, it behaves like 1/2.2, linearizing the result instead of gamma-correcting it. But the curious thing is that 'Gamma' and 'Bright' are not tied together like in a common power function, so the resulting effect is not like a power function either, and we need to compensate with the Bright parameter to get a gamma-like result. The controls go even further, since we have the RGB channels separately, but also white 'Gamma' and 'Bright' parameters, which behave without affecting the overall saturation of the input image.

To get a gamma-like operation, I came up with this:

For a log2lin operation (2.2 gamma):

http://imagic.ddgenvivo.tv/forums/G2log2lin.png

For a lin2log operation (2.2 gamma):

http://imagic.ddgenvivo.tv/forums/G2lin2log.png
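(For reference, the underlying conversions those settings approximate are just the plain power-law operations below - a Python sketch of the math only, not the actual G2 Gamma/Bright values, which are shown in the screenshots above.)

def log2lin(v, gamma=2.2):     # remove a 2.2 encoding: gamma-corrected value -> linear light
    return v ** gamma

def lin2log(v, gamma=2.2):     # apply a 2.2 encoding: linear light -> gamma-corrected value
    return v ** (1.0 / gamma)

print(round(lin2log(0.18), 3), round(log2lin(lin2log(0.18)), 3))   # 0.459 0.18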


I just read a pdf on Nuke's linear workflow (oddly enough, found in a thread on Fusion's pigsfly.com board when linear also seems a hot topic)

http://www.swdfx.com/PDF/Nuke_Color_...ent_Wright.pdf

And as I read it, a thought occurred to me ... when it stated that everything was linear in Nuke, but displayed in the viewer with gamma applied according to a LUT so that the image looks 'right' in the viewer.

I never got any response from Worley about adding the ability to change gamma in fprime, or to the issues posted where one user had an unsatisfactory experience using G2 to correct the gamma for a linear workflow.

So ... as I use the GretagMacbeth Eye-One to correct my LCD, and it has an input for the desired gamma target to use when creating the LCD's profile ... would it be possible to tell it to create a profile for the gamma that would 'correct' the linear image so it looks 'normal'?

Switching between monitor profiles could probably be automated by some utility, or I could write one in .NET for Win7.

Would this work?

Though the PDF has some minor inaccuracies, it's overall right. However, be aware that not all post-processing operations benefit from linear space (even if they're performed in FP). Color grading, film grain, scaling up, painting filters, etc. behave better in log/screen space. DOF, sharpening, blurring, scaling down, glows, color blending, motion blur, etc. behave better in lin space.
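One quick way to see why color blending belongs in lin space (an illustrative Python sketch assuming a simple 2.2 encoding; the list of operations above comes from experience, not from this snippet): a 50/50 mix of full red and full green computed on gamma-encoded values comes out noticeably darker than the same mix done on linear values.

def encode(v): return v ** (1 / 2.2)       # linear -> gamma-encoded display value
def decode(v): return v ** 2.2             # gamma-encoded display value -> linear

red_lin, green_lin = 1.0, 0.0              # one channel of pure red vs pure green

mix_in_linear  = encode((red_lin + green_lin) / 2)           # ~0.73 -> display code ~186
mix_in_encoded = (encode(red_lin) + encode(green_lin)) / 2   # 0.50  -> display code 128 (too dark)
print(round(mix_in_linear, 2), round(mix_in_encoded, 2))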

I agree also with Michael Wolf about the 8-bpc issue. Although supposedly WCS supports 16-bit color transformations now, so if the graphics card and the color management system work in 16 bits (working at OS level with profiles that support FP operations), gamma corrections and color transformations on 16-bpc images will behave better than with 8 bpc. But even that way it doesn't rule out posterization in wider gamuts - we need a full FP system there.

Btw, I've written an upcoming article for HDRI3D magazine for people interested in how to manage colors within LightWave 3D for HDRI-based lighting setups while keeping our FP pipelines intact.



Gerardo

Lightwolf
04-25-2010, 05:16 AM
I agree also with Michael Wolf about the 8-bpc issue. Although supposedly WCS supports 16-bit color transformations now, so if the graphics card and the color management system work in 16 bits (working at OS level with profiles that support FP operations), gamma corrections and color transformations on 16-bpc images will behave better than with 8 bpc. But even that way it doesn't rule out posterization in wider gamuts - we need a full FP system there.
The problem is that data is usually sent to the framebuffer at 8 bpc... and then corrected by the LUT - which is too late.

Which is precisely why you need colour management within the app, to correct before the data is sent off to be displayed.

Cheers,
Mike

3dWannabe
04-25-2010, 08:54 AM
There's a way to work in linear light with G2 and FPrime. I shared it about 3 years ago in Issue #18 of HDRI3D magazine.

....

Gerardo

Gerardo - The original poster was saying he got it to work, but it was inconsistent. Does this work consistently, and is it a usable workflow?

Are there any downsides (increased render time, stability, etc.)? And - are you using this with FPrime - or is there some other, more advanced method now (as that article was a while ago)?

Thanks for the great articles!

BTW - When I subscribed to HDRI3D, partly for your excellent Avatar issue article, I bought another 6 or more back issues. I think I've added another 3-4 to my list of back issues I'll want since then.

Guess I should have just bought ALL of them!

gerardstrada
04-25-2010, 07:44 PM
The problem is that data is usually sent to the framebuffer at 8 bpc... and then corrected by the LUT - which is too late.

Which is precisely why you need colour management within the app, to correct before the data is sent off to be displayed.

Cheers,
Mike

Yep, that's usually the situation, but talking about more complex CM systems, it depends on how they are able to work at OS level - most of them are able to handle this in FP space nowadays. Though that's not the case 3dWannabe is talking about. Also, even when a CM system is able to handle data in half or full FP space, the thing is that - at OS level - the most affordable ones are able to display 1D LUTs only, and that's the main reason why it's better to perform color transformations at app level, because 3D cube profiles are managed there. This is advisable for appropriate accuracy, mainly when the display device doesn't decouple - as happens with LCD monitors.


Gerardo - The original poster was saying he got it to work, but it was inconsistent. Does this work consistently, and is it a usable workflow?

If results are not consistent across different types of input images encoded/decoded with the same gamma, then the method doesn't work. For proper results, the value in the Bright parameter is 45.45 for a log2lin operation (2.2 gamma) and 145.45 for a lin2log operation (2.2 gamma). Notice however that G2 only allows tenths, not hundredths, but the difference is like the one between the sRGB gamma and 2.2 gamma. For practical purposes, the way I proposed is reliable with any type of image encoded/decoded with a 2.2-ish gamma.
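(To put a number on that last comparison - a small Python sketch using the standard sRGB transfer function; this is just my illustration of the comparison made above, not anything G2 computes internally:)

def srgb_encode(v):            # standard sRGB transfer function
    return 12.92 * v if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

for v in (0.05, 0.18, 0.5, 0.8):
    print(v, round(srgb_encode(v), 4), round(v ** (1 / 2.2), 4))
# the two curves differ by less than about 0.01 everywhere, so the error is of a similar, small order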


Are there any downsides (increased render time, stability, etc.)? And - are you using this with FPrime - or is there some other, more advanced method now (as that article was a while ago)?

G2 and FPrime work nicely together (no slowdowns, pretty stable, etc). As far as I know G2 is the only pixel filter that works with FPrime, so it's the best way so far. However, I haven't used FPrime since LW 9.6. For LCS workflows, I prefer the SG_CCTools, since they work with color profiles not only at gamma level but also at gamut level, which provides more predictable and consistent results when working with very large gamuts.



Thanks for the great articles!

BTW - When I subscribed to HDRI3D, partly for your excellent Avatar issue article, I bought another 6 or more back issues. I think I've added another 3-4 to my list of back issues I'll want since then.

Guess I should have just bought ALL of them!

Hey! Thanks! Hope you find them useful :)



Gerardo

P.S. Btw, just in case: the excellent Avatar article is by Robin Nations.

3dWannabe
05-15-2010, 01:56 PM
Notice however that G2 only allows tenths, not hundredths, but the difference is like the one between the sRGB gamma and 2.2 gamma. For practical purposes, the way I proposed is reliable with any type of image encoded/decoded with a 2.2-ish gamma.

Have you ever spoken with Worley about changing this from tenths to hundredths? For the price he charges, it would seem reasonable to be able to use the full precision that FPrime & G2 use internally?



G2 and FPrime work nicely together (no slowdowns, pretty stable, etc). As far as I know G2 is the only pixel filter that works with FPrime, so it's the best way so far. However, I haven't used FPrime since LW 9.6. For LCS workflows, I prefer the SG_CCTools, since they work with color profiles not only at gamma level but also at gamut level, which provides more predictable and consistent results when working with very large gamuts.


Possibly FPrime & G2 would work well for 'quickly' getting 95% of the way there, and for the final tweaking, SG_CCTools would be used?

[Again, for the premium cost of FPrime & G2, could Worley be persuaded to incorporate the additional features of SG_CCTools at the gamut level? I'm sure you have more pull than I do.]

BTW - Do you use Jovian as your color picker instead of SG_CCPicker?

gerardstrada
05-15-2010, 05:00 PM
Have you ever spoken with Worley about changing this from tenths to hundredths? For the price he charges, it would seem reasonable to be able to use the full precision that FPrime & G2 use internally?
I have not worried about it. But I think the next version of G2 (G3?) needs a completely new structure. At the time G2 was released, FPrime didn't exist, so G2 has a preview window for displaying an accurate LUT (gamma level only) for any gamma value, sRGB, and Rec. 709. FPrime's preview window could have a similar LUT, but I think an eventual G3 version would need to be re-thought as the pixel filter for FPrime, with an automatic linear workflow (a la Kray (http://www.kraytracing.com/wiki/Kray_plugins#Quick_Linear_Workflow_plugin)) and with the chance of switching to a nodal environment if the user needs further control. Worley could also add CM presets (as 'camera presets', maybe) for working with different color profiles (a la Arion Render (http://www.randomcontrol.com/downloads/tutorials/mov/arion/introduction_to_the_arion_ui.mov)). Using standard color profiles would also facilitate users' customizations for special needs. Another solution might be for Worley to provide a connection module or nodal bridge so that any other pixel filter could work with FPrime, but that would kill G2 at pixel level as the nodal system did at shader level.


Possibly FPrime & G2 would work well for 'quickly' getting 95% of the way there, and for the final tweaking, SG_CCTools would be used?

[Again, for the premium cost of FPrime & G2, could Worley be persuaded to incorporate the additional features of SG_CCTools at the gamut level? I'm sure you have more pull than I do.]
I suggested LUTs at gamut level for G3 in an article for HDRI3D magazine about 3 years ago, but there's been no new version of G2 since then. Although FPrime & G2 work well for 'quickly' getting 99.9% of the way there (at least at gamma level only), there's no way to use SG_CCTools with FPrime in post-process. I have no idea if this is possible, since FPrime is a black box for other developers. FPrime recognizes SG_CCFilter used in pre-process or SG_CCNode in the Surface Node Editor. As a post filter, however, G2 is the only way to work properly with FPrime.


BTW - Do you use Jovian as your color picker instead of SG_CCPicker?
No. I use SG_CCPicker.



Gerardo

3dWannabe
05-15-2010, 05:26 PM
I have not worried about it. But I think the next version of G2 (G3?) needs a completely new structure. At the time G2 was released, FPrime didn't exist, so G2 has a preview window for displaying an accurate LUT (gamma level only) for any gamma value, sRGB, and Rec. 709. FPrime's preview window could have a similar LUT, but I think an eventual G3 version would need to be re-thought as the pixel filter for FPrime, with an automatic linear workflow (a la Kray (http://www.kraytracing.com/wiki/Kray_plugins#Quick_Linear_Workflow_plugin)) and with the chance of switching to a nodal environment if the user needs further control. Worley could also add CM presets (as 'camera presets', maybe) for working with different color profiles (a la Arion Render (http://www.randomcontrol.com/downloads/tutorials/mov/arion/introduction_to_the_arion_ui.mov)). Using standard color profiles would also facilitate users' customizations for special needs. Another solution might be for Worley to provide a connection module or nodal bridge so that any other pixel filter could work with FPrime, but that would kill G2 at pixel level as the nodal system did at shader level.


I suggested LUTs at gamut level for G3 in an article for HDRI3D magazine about 3 years ago, but there's been no new version of G2 since then. Although FPrime & G2 work well for 'quickly' getting 99.9% of the way there (at least at gamma level only), there's no way to use SG_CCTools with FPrime in post-process. I have no idea if this is possible, since FPrime is a black box for other developers. FPrime recognizes SG_CCFilter used in pre-process or SG_CCNode in the Surface Node Editor. As a post filter, however, G2 is the only way to work properly with FPrime.


No. I use SG_CCPicker.



Gerardo

Very good points about FPrime 4 / G3. Hard to decide, as Worley seems to pre-announce features about as often as Apple does. Maybe he'll surprise us all with nVidia GPU rendering!

And, of course I just stumbled upon this NewTek 'change in LW 3D development' thread http://www.newtek.com/forums/showthread.php?p=1017741#post1017741 which seems to indicate the 'linear promised land' could come from Newtek itself:

"Added to that now are a linear color space workflow".

I'm actually encouraged by the lack of retirement of Modeler/Layout, as that might mean more 3rd party development and new purchases of LW - but from reading the very long discussion thread, I'm not sure anyone knows what to make of it [so why even issue a press release that is unclear and confusing? Someone needs to retake the marketing 101 class?]

Out of curiosity, why SG_CCPicker over Jovian? I got the impression that it was a great product and could do everything SG_CCPicker could do and more? Am I missing something obvious?

gerardstrada
05-15-2010, 07:25 PM
Very good points about FPrime 4 / G3. Hard to decide, as Worley seems to pre-announce features about as often as Apple does. Maybe he'll surprise us all with nVidia GPU rendering!

That would be great!


And, of course I just stumbled upon this NewTek 'change in LW 3D development' thread http://www.newtek.com/forums/showthread.php?p=1017741#post1017741 which seems to indicate the 'linear promised land' could come from Newtek itself:

"Added to that now are a linear color space workflow".
Well, if you ask me, it will be useful for most users with regard to linear workflows. But not so useful for most users with regard to color management (CM) workflows.


I'm actually encouraged by the lack of retirement of Modeler/Layout, as that might mean more 3rd party development and new purchases of LW - but from reading the very long discussion thread, I'm not sure anyone knows what to make of it [so why even issue a press release that is unclear and confusing? Someone needs to retake the marketing 101 class?]
Guess Layout and Modeler will be there for some time yet as independent modules. Though naming Core as an LW app can cause it to be perceived as 'another module'. Maybe it would be better to re-categorize it instead.


Out of curiosity, why SG_CCPicker over Jovian? I got the impression that it was a great product and could do everything SG_CCPicker could do and more? Am I missing something obvious?

With regard to gamma correction only, we are able to choose not only any gamma value in SG_CCPicker, but also the real sRGB gamma and Rec. 709 gamma.



Gerardo

3dWannabe
05-15-2010, 09:33 PM
With regard to gamma correction only, we are able to choose not only any gamma value in SG_CCPicker, but also the real sRGB gamma and Rec. 709 gamma.


Gerardo - I found this thread where you give some amazingly detailed info on the SG_CCPicker with lots of images to help explain the concepts:

http://www.newtek.com/forums/showthread.php?t=82605&highlight=rec+709

I'll have to read through it a few times, and your HDRI3D articles when I get them, and hopefully I'll understand.

Thanks!

3dWannabe
06-04-2010, 03:03 PM
If results are not consistent across different types of input images encoded/decoded with the same gamma, then the method doesn't work. For proper results, the value in the Bright parameter is 45.45 for a log2lin operation (2.2 gamma) and 145.45 for a lin2log operation (2.2 gamma). Notice however that G2 only allows tenths, not hundredths, but the difference is like the one between the sRGB gamma and 2.2 gamma. For practical purposes, the way I proposed is reliable with any type of image encoded/decoded with a 2.2-ish gamma.



Gerardo - tried to send you a PM, but your box is full.

Worley added extra digits to 'Bright' in G2 for me (VERY nice of him).

You might ask him for a copy (he said it was untested, so he isn't releasing it into the 'wild', and as a neophyte to linear, I'm probably a poor tester).

I'm sure a sidebar update to your HDRI3D #18 would be great for all the G2/fprime users (as you mentioned you've got an article coming out).

Or even a password-protected PDF in the download section for HDRI3D subscribers outlining changes to the G2/FPrime workflow, if the issue is already locked?

I know you use another method now, but that instant feedback of fprime sure is nice!

shrox
06-04-2010, 03:24 PM
It should be noted that there are no 'gods of linear workflow', only a few animal spirit guides, and most of them are rabid.

gerardstrada
06-07-2010, 02:27 AM
Gerardo - tried to send you a PM, but your box is full.

Worley added extra digits to 'Bright' in G2 for me (VERY nice of him).

You might ask him for a copy (he said it was untested, so he isn't releasing it into the 'wild', and as a neophyte to linear, I'm probably a poor tester).

I'm sure a sidebar update to your HDRI3D #18 would be great for all the G2/fprime users (as you mentioned you've got an article coming out).

Or even a password-protected PDF in the download section for HDRI3D subscribers outlining changes to the G2/FPrime workflow, if the issue is already locked?

I know you use another method now, but that instant feedback of fprime sure is nice!

3dWannabe, that's good news! Very nice of Worley.

Thanks for the suggestion :thumbsup:



Gerardo

3dWannabe
06-07-2010, 09:04 AM
3dWannabe, that's good news! Very nice of Worley.

Thanks for the suggestion :thumbsup:

Gerardo

If you do look at the version with the extra bright digits, I'd love to find out exactly what they should be set to, and you seem to have possibly the best grasp of linear [at least among the subset who publish articles] of anyone on the forum.

I'm going to fire off some questions about your issue #18 and #19 articles in HDRI3D, as at the moment I'm perplexed by the introduction of ProPhoto RGB with a gamma of 1.8. I really need to read them over a few more times first, but ...

As a quick question, do I have this much right?

We calibrate our monitor to gamma 2.2 (6500K/D65). Then we linearize and 'work' in linear space, and monitor our work by converting linear to our non-linear viewing world. And we finally output according to the color space of our desired output, generally from a compositing program such as Fusion (while working in linear within Fusion). Makes perfect sense.

This is where I lose you: Is our non-linear viewing world then ProPhoto RGB (which is gamma 1.8, when our monitor is calibrated to 2.2, so this can't be right)? Or is that what we work in (but I thought we worked in linear, so I'm lost)? I lost you at ProPhoto RGB.

So, I've definitely had a brain hiccup somewhere and strayed from the golden path leading to a linear workflow.

---

BTW - I just bought this, and it is Perfect for reading magazines or books while using the computer. A desk book holder that comes with an LED light that's very bright. This is life changing! I can reference your articles while I'm typing!

http://www.activeforever.com/p-518-levo-desk-book-holder.aspx?CMPID=NT

gerardstrada
06-07-2010, 08:27 PM
Yes, it's a bit confusing because we don't do that with input images, only with colors from the picker. But let's remember we are using the perceptual method (for picker's colors only).

Contrary to other apps that have CM capabilities built in, within LW we always pick colors in monitor space (gamma & gamut), and if we're going to preview our output result according to another color space, we choose those colors perceptually, since there's no way to see this approximation in the picker. This doesn't happen with input images, where we are able to recover their intended color appearance and linearize them when performing the color conversion through color profiles.

Then, if we linearize the color we are picking according to our monitor's gamma (2.2) and later gamma-correct the output according to ProPhoto RGB (1.8), we'll obtain a darker color than the one we chose, because there's a difference in the gamma correction value. In order to match at least the same tone that we have chosen in the picker (hues will be different since the ProPhoto gamut is much wider than the monitor's gamut), we need to linearize the picker's colors according to the gamma correction curve of ProPhoto RGB (1.8). Since we are previewing the output result within LW immediately, we are able to make hue adjustments if necessary.
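A numeric illustration of that mismatch (plain Python; 0.5 is just an arbitrary picked value):

picked = 0.5

# linearize with the monitor gamma (2.2) but encode the output for ProPhoto RGB (1.8):
mismatched = (picked ** 2.2) ** (1 / 1.8)     # ~0.43 - darker than the colour that was picked
# linearize with the output gamma (1.8) instead, as described above:
matched    = (picked ** 1.8) ** (1 / 1.8)     # 0.5 - the tone matches the picker again

print(round(mismatched, 2), round(matched, 2))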

The same thing happens if, let's say, we are working for the HDTV color space. In such a case, we preview the result with the Rec. 709 gamma formula (log version of the HDTV color profile) and it's necessary to de-gamma the picker's colors from Rec. 709 to linear, not from 2.2 to linear. The same goes for sRGB or any other color space.

Btw, Kodak's idea behind the ProPhoto RGB color space was to provide a wide color space for photography, but it can also be used when working with wider-gamut printing inks. For motion picture production, however, there's a more appropriate working color space nowadays called Universal Camera Film Print Density.



Gerardo

P.S. Interesting design... but I would have to stop the habit of reading in the bathroom :D

3dWannabe
07-10-2010, 11:46 AM
Yes, it's a bit confusing

Gerardo - I've been re-reading your hdri3d #18 issue, as well as the #19 on color management and #30 on using DPONT's nodes.

In Issue #18, you outline 3 different linear workflows, and I'm hoping this 4th possibility makes sense?

I would like to initially work in linear with FPrime/G2 to get the lighting and colors 'close', then switch to SG_CCTools for final tuning of colors in ProPhoto RGB, then do a multi-pass render for final compositing in Fusion.

I've got a few questions.

1. In issue #18, you're setting up G2 for a gamma of 2.2. If I wanted to use ProPhoto RGB (with a 1.8 gamma), I understand how to set up the R,G,B 'Gamma' values as 1/1.8 = .555555, but I'm not sure how you calculated the R,G,B 'Bright' values (and what they should be for a gamma of 1.8 ProPhoto RGB)? Or - am I totally off-base and would still use 1/2.2 here for ProPhoto RGB (as you said only for colors, not for images - but??? G2 would see both colors and images when it 'corrects' the linear for previewing?? I may be lost again!)

2. To get the linear colors at this stage while using FPrime/G2, would the SG_CCTools color picker work, or would I need to use something like Jovian (or, as you said it is perceptual, does it matter)? I'm also guessing that Lightwolf's 'simple color corrector' couldn't be used to de-gamma with FPrime/G2? Does FPrime only understand nodes supplied by NewTek?

3. On page 78 of HDRI3D #18, you mention several "key passes/buffers" needed for a multi-pass render (and especially needed for your multi-pass linear workflow where all the linearizing is done in post, which I can see happening at times). I can see you've done a lot more work with DPONT's plug-ins since #18, in issue #30. Have you changed your node setup for multi-pass, and do you have any tips for making that work with Janus? (BTW, Janus' Lernie is working on an HDRI3D article; it would be great if you could collaborate and add a sidebar and node setups to his article?)

BTW - on page 77 of #18, figure #11 and figure #12 are identical for linearizing using G2. I'd guess they used the wrong image for one. Do you have the correct one?

Looking very forward to your next HDRI3D article!!

Thanks!!!

gerardstrada
07-11-2010, 08:59 PM
Gerardo - I've been re-reading your hdri3d #18 issue, as well as the #19 on color management and #30 on using DPONT's nodes.

In Issue #18, you outline 3 different linear workflows, and I'm hoping this 4th possibility makes sense?

I would like to initially work in linear with FPrime/G2 to get the lighting and colors 'close', then switch to SG_CCTools for final tuning of colors in ProPhoto RGB, then do a multi-pass render for final compositing in Fusion.
There's one thing you might want to consider before using Prophoto RGB:

Unless we are working with photographic film, motion picture production, print work with wider gamut printing inks, or laser projections, we won't want to work with wide gamut color spaces. The usage of larger gamuts is not necessary when working for web, TV, video production or common print work, because these output mediums won't display those additional colors and we would be wasting colors that we'll never see, and complicating things unnecessarily.

Another thing you might want to consider is that the multipass linear workflow and the inverse linear workflow were devised before the existence of proper tools for the classic linear workflow (not even a basic gamma correction node existed at that time!), a time when most people found the classic linear workflow too expensive to implement correctly. In that context, hybrid LCS workflows were necessary to work in the same way as always (the non-linear way), while getting the linear workflow "look" and several of its benefits at lower cost with just one click. The idea behind this multipass linear workflow was in fact to provide a one-button solution within LW, a simple setup taken further by the linear workflow plugin of Kray (and several other unbiased render engines). So these 2 workflows were designed as a transition between the old non-linear workflow and a proper LCS workflow (called the classic linear workflow in the article). But the classic linear workflow is the more appropriate way of approaching CG work in linear light with the LightWave render engine nowadays, and with the current tools available, there's no excuse not to use it over the other methods, I think.

As for the method with FPrime/SG_CCTools you are trying to design, we need to note that SG_CCTools is recognized by FPrime at pre-process/surface level, but not as a post filter. So they only work well together halfway.

If the method you propose means ignoring gamut when working with FPrime, and just before rendering switching to the LW renderer and applying SG_CCTools to perform shading/lighting adjustments, it would imply working first at gamma level only (with the FPrime/G2 combo). But the problem with ProPhoto RGB is that its gamut is so wide that it's impossible to use unmanaged input images in ProPhoto RGB and get a close preview, because we'll get very desaturated colors and shifted hues according to the difference between the ProPhoto RGB gamut and our monitor's gamut. In such a case, and assuming the working color space of your whole project is ProPhoto RGB, you'd need to convert images at pre-process/surface level from ProPhoto RGB to monitor space (lin) for working with FPrime/G2 (the post gamma correction for preview in the G2 pixel filter would be 2.2 in that case), and later disable all SG_CCFilters/nodes at pre-processing/surface level and apply SG_CCFilter as a post filter to preview from linear ProPhoto RGB to monitor space. But you'd need to re-adjust all your picked colors from a 2.2 gamma correction to a 1.8 gamma correction all over again. Looks like too much work and too complicated, I think.

Another option, which seems quite difficult, is to approximate the final color appearance with the current G2 PF parameters. At least a kind of 3D cube/color matrix corrector would be needed in that case to get a closer match. For this, the Transform2 vector node could be used (I've been able to make an RGB2XYZ conversion with it in DP_FNE), but DP_FNEs don't work with FPrime. So in G2's case, it would be like trying to approximate a 3D LUT result with a 1D LUT tool. Some hues could match pretty closely, but others won't, and even if you find a setting, it will work only for that workstation (since it would be a device-dependent adjustment) and you won't be able to save any preset unless you save an empty scene.

Probably an option you might want to try is the usage of a medium-range color space, something like aRGB maybe. It works with 2.2 gamma, so you wouldn't need to change any picked color after switching to SG_CCTools, or convert from your project's working color space to monitor space. Color appearance is easier to approximate as well (even more so if your monitor is aRGB-range), and even if it's sRGB-range, G2 gamma/brightness adjustments will get you closer. But if you don't really need large gamuts, you could work just with sRGB or Rec. 709; things would be much simpler that way.

In your general CM workflow, take into account also that, even though Fusion's CM capabilities with LUTs are really good when working with a CM system, its CM capabilities with standard color profiles are limited and not user-friendly, and you'll need to set up some things by hand to get correct results.


I've got a few questions.

1. In issue #18, you're setting up G2 for a gamma of 2.2. If I wanted to use ProPhoto RGB (with a 1.8 gamma), I understand how to set up the R,G,B 'Gamma' values as 1/1.8 = .555555, but I'm not sure how you calculated the R,G,B 'Bright' values (and what they should be for a gamma of 1.8 ProPhoto RGB)? Or - am I totally off-base and would still use 1/2.2 here for ProPhoto RGB (as you said only for colors, not for images - but??? G2 would see both colors and images when it 'corrects' the linear for previewing?? I may be lost again!)

With SG_CCTools, images don't need independent gamma corrections since SG_CCFilter or Node takes care of that when performing color space conversions at pre-process/surface level. In such a case, only picked colors need to be gamma corrected (according to the working color space gamma). Later, as post-process, the color profile conversion is from the lin version of a color space to a log version. For preview purposes it can be the same color space or another color space depending on your color flow. Now, in the case of a color space with a gamma value of 1.8, the approximation in G2 - if I remember correctly - was 0.556 for gamma and 136% for brightness.


2. To get the linear colors at this stage while using FPrime/G2, would the SG_CCTools color picker work, or would I need to use something like Jovian (or, as you said it is perceptual, does it matter)? I'm also guessing that Lightwolf's 'simple color corrector' couldn't be used to de-gamma with FPrime/G2? Does FPrime only understand nodes supplied by NewTek?
SG_CCPicker is completely compatible with FPrime since the renderer simply takes the value that we have already entered for a color. SG_CCFilter and Node are compatible with FPrime at pre-process/surface level, too. I don't use the Jovian color picker, but my guess is that it's also compatible with FPrime for the same reason. FPrime is also compatible here with Michael Wolf's useful Simple Colour Corrector node and Aurora's Gamma Correction node used at pre-process/surface level.



BTW - on page 77 of #18, figure #11 and figure #12 are identical for linearizing using G2. I'd guess they used the wrong image for one. Do you have the correct one?
You are right. I must have forgotten to send the correct image. It was this one:

http://imagic.ddgenvivo.tv/forums/FNE/NIF/gammacorrection.jpg

It's just another way of doing gamma correction, but using DP_IFNE.


On page 78 of HDRI3D #18, you mention several "key passes/buffers" needed for a multi-pass render (and especially needed for your multi-pass linear workflow where all the linearizing is done in post, which I can see happening at times). I can see you've done a lot more work with DPONT's plug-ins since #18, in issue #30. Have you changed your node setup for multi-pass, and do you have any tips for making that work with Janus?
Yes, disable it before rendering. Getting back to what I said previously about the multipass linear workflow, I've tried it for a few projects and made some other variations later, but since 2007 (SG_CCTools) I don't use it anymore. I designed it for a studio where people didn't understand the classic linear workflow very well and wanted a simpler approach even if it wasn't accurate - due to its nature, it has some limitations, since the color mix in rendered passes is made in log space, something that is more critical with multi-layered transparencies. I'd now recommend trying the classic linear workflow with the current tools available.


(BTW, Janus' Lernie is working on an HDRI3D article; it would be great if you could collaborate and add a sidebar and node setups to his article?)
Yeahhh! I'm looking forward to that interesting article, too!!! I'm sure it will be a GREAT article!



Gerardo

3dWannabe
07-11-2010, 09:56 PM
There's one thing you might want to consider before using Prophoto RGB:


My 'final output' is usually h.264 for the web and also DVD/BluRay.

From what you're saying, sRGB might be the way to go (I guess issue #19 seemed so adamant about using ProPhoto RGB as the 'best' color space).

I might lose a little bit on the Blu-ray, but hopefully it would still look very good?




Getting back to what I said previously about the multipass linear workflow, I've tried it for a few projects and made some other variations later, but since 2007 (SG_CCTools) I don't use it anymore. I designed it for a studio where people didn't understand the classic linear workflow very well and wanted a simpler approach even if it wasn't accurate - due to its nature, it has some limitations, since the color mix in rendered passes is made in log space, something that is more critical with multi-layered transparencies. I'd now recommend trying the classic linear workflow with the current tools available.


I think you are talking about the multi-pass rendering of gamma-corrected passes (where you remove gamma for colors in post when compositing) that you mentioned in issue #18.

My goal is to generate multi-pass LINEAR renders of all the major 'useful' passes. Everything would be done the 'classic' linear way, except it would generate a multi-pass exr.

I'm really just learning nodes (your issue #30 article is certainly one way to jump in with both feet for a beginner). I was hoping to:

1. Get lighting, etc. 90% finished using fprime/G2 renders with sRGB working space (BTW - there's no reason to use Adobe RGB in Photoshop if I'm not printing, so I could just standardize on sRGB as my working space?)

2. Remove fprime/G2 and render only with SG_Tools to finish things up.

3. Do a multi-pass final render using Janus/exrTrader and composite in Fusion.

If this still sounds reasonable, I think what I need is to come up with a preset of nodes that I can load for this final multi-pass render with exrTrader/Janus (or a few presets - an 'every pass' preset and a few 'less than all passes' presets). I've got to re-read issue #30 a few more times!!!

Is there a way to have a node setup for an entire scene (global) - or are nodes just at the surface level?

I really need to figure out a preset I can load, get it set up so that exrTrader understands the extra buffers from DPONT nodes, and then whatever exrTrader does (to create the extra passes) is transparent to Janus at that point.

Thanks for the detailed reply!

gerardstrada
07-13-2010, 10:29 PM
My 'final output' is usually h.264 for the web and also DVD/BluRay.

From what you're saying, sRGB might be the way to go (I guess issue #19 seemed so adamant about using ProPhoto RGB as the 'best' color space).
Yes, well, ProPhoto RGB has one of the largest gamuts, and though the article never says it's the 'best' color space, it was the better choice when we needed to output for all mediums (including those with wider gamuts). But these things evolve constantly and there are, for example, better color spaces for film work nowadays.



I might lose a little bit on the Blu-ray, but hopefully it would still look very good?
Let's see... for web you'll need to output in sRGB, for DVD (and if you are in the USA) you'll need SDTV NTSC (which is not NTSC 1953), and for Blu-ray, HDTV (Rec. 709). People tend to think that the chromaticities of these color spaces are just the same, but they are not exactly the same.
VideoHD and videoNTSC are the most similar among them, really quite similar, but videoHD is a little bit larger. It covers a bit more of the greens and magentas than videoNTSC, while videoNTSC covers a bit more of the yellows and cyans than videoHD. sRGB is larger than the other two, but it's a bit offset towards cyans and greens, so while it covers more of these hues than the other two, it doesn't cover all the yellows and magentas that they do (though it covers more reds due to its shape). However, in practice these differences are so minimal that we are able to work in any of them, and later we can re-map any of these gamuts with the Relative Colorimetric rendering intent with excellent results.


I think you are talking about multi-pass rendering of gamma-corrected passes (where you remove the gamma from the colors in post when compositing) that you mentioned in issue #18.

My goal is to generate multi-pass LINEAR renders of all the major 'useful' passes. Everything would be done the 'classic' linear way, except it would generate a multi-pass exr.

I'm really just learning nodes (your issue #30 article is certainly one way to jump in with both feet for a beginner). I was hoping to:

1. Get lighting, etc. 90% finished using fprime/G2 renders with sRGB working space (BTW - there's no reason to use Adobe RGB in Photoshop if I'm not printing, so I could just standardize on sRGB as my working space?)

2. Remove fprime/G2 and render only with SG_Tools to finish things up.

3. Do a multi-pass final render using Janus/exrTrader and composite in Fusion.

If this still sounds reasonable,
That sounds like an excellent workflow to me! sRGB as the working color space looks like your best choice for the workflow you described with the FPrime/G2 combo + SG_CCTools. First, because if you work with an sRGB-range monitor and you don't need/want gamut accuracy in your LW preview, you can skip gamut and work at gamma level only, with the G2 Previewer (sRGB/Rec 709 gamma preview) and SG_CCPicker and Node for the gamma linearizations (sRGB/Rec 709). But if you work with an aRGB monitor or you just want a more accurate preview, you may want to include gamut. For that, the only color profile conversion you'll need to do is for the video footage, since all other images will only need an sRGB gamma conversion. Later you can preview with SG_CCFilter/Node from lin sRGB to log HDTV/SDTV/sRGB as needed.


I think what I need is to come up with a preset of nodes that I can load for this final multi-pass render with exrTrader/Janus (or a few presets - an 'every pass' preset and a few 'less than all passes' presets). I've got to re-read issue #30 a few more times!!!

Is there a way to have a node setup for an entire scene (global) - or are nodes just at the surface level?

I really need to figure out a preset I can load, get it set up so that exrTrader understands the extra buffers from DPONT nodes, and then whatever exrTrader does (to create the extra passes) is transparent to Janus at that point.

Thanks for the detailed reply!
I see... In the case of the DP_FNEs it's possible to set up multiple passes globally (it's described in the HDRI3D magazine article). But as far as I can see, Janus has not been updated to work with the new DP_Extra Buffers version. New versions of DP_Extra Buffers can work with 24 buffers now - not only with 6. We also have the new DP_Global Buffers, and it would be great if Janus could recreate its interface as well. Maybe if you change the Get Global Buffers node to PE_DPONTBUFFER... but Global Buffers are also 24, so an update is necessary, I think. A solution in the meantime might be to set up your global passes in DP_PFNE and save them as ExtraBuffers, and though this is not recommended in the Janus documentation, for multilayer rendering with Janus you could try setting them up in the master scene and letting Janus treat the FNEs (with the new ExtraBuffers or GlobalBuffers) as it normally would, but you'd need to re-adjust each image file path for each pass scene by hand. A faster way to do this is to open the pass scene in a text editor and edit the paths and file names with the replace function. Hope Lernie has a better suggestion for this. As for exrTrader, it works only with LW native buffers, so for it to read unconventional buffers you'll need to overwrite some LW native buffers in DP_PFNE. This is possible if you don't need some of the LW native buffers. Otherwise, you'll need to use exrTrader for saving the conventional buffers and DP_FNE for saving the unconventional buffers/passes.



Gerardo

3dWannabe
07-14-2010, 08:35 AM
I see... In the case of the DP_FNEs it's possible to set up multiple passes globally (it's described in the HDRI3D magazine article). But as far as I can see, Janus has not been updated to work with the new DP_Extra Buffers version. New versions of DP_Extra Buffers can work with 24 buffers now - not only with 6. We also have the new DP_Global Buffers, and it would be great if Janus could recreate its interface as well. Maybe if you change the Get Global Buffers node to PE_DPONTBUFFER... but Global Buffers are also 24, so an update is necessary, I think. A solution in the meantime might be to set up your global passes in DP_PFNE and save them as ExtraBuffers, and though this is not recommended in the Janus documentation, for multilayer rendering with Janus you could try setting them up in the master scene and letting Janus treat the FNEs (with the new ExtraBuffers or GlobalBuffers) as it normally would, but you'd need to re-adjust each image file path for each pass scene by hand. A faster way to do this is to open the pass scene in a text editor and edit the paths and file names with the replace function. Hope Lernie has a better suggestion for this. As for exrTrader, it works only with LW native buffers, so for it to read unconventional buffers you'll need to overwrite some LW native buffers in DP_PFNE. This is possible if you don't need some of the LW native buffers. Otherwise, you'll need to use exrTrader for saving the conventional buffers and DP_FNE for saving the unconventional buffers/passes.

I get the impression that when you choose the exrTrader preset, exrTrader is responsible for determining the buffers output based on this preset.

Am I incorrect on this?

Maybe what's needed is the ability in exrTrader to setup more buffers in a preset (in addition to the native Lightwave buffers) so that more of the buffers in your hdri3d issue #30 article can be output at once?

I'm definitely going to have to re-read the exrTrader and Janus manuals!

BTW - I've been re-reading your issue #30 article, and highlighting in red all the references to 'global'. Looks like you've got it all setup already!

All those buffers look great! Maybe you or Cageman could come up with a tutorial on properly using them in Fusion or Nuke?

Lightwolf
07-14-2010, 11:32 AM
I get the impression that when you choose the exrTrader preset, exrTrader is responsible for determining the buffers output based on this preset.

Am I incorrect on this?
No, you're right. But exrTrader only has access to the buffers as provided by the SDK.


Maybe what's needed is the ability in exrTrader to setup more buffers in a preset (in addition to the native Lightwave buffers) so that more of the buffers in your hdri3d issue #30 article can be output at once?
You'd still need a way to access them though. I've got something in the pipeline for exrTrader V2, but it'll still be a while (currently it's in the "proof of concept" phase - i.e. the idea basically works).

Cheers,
Mike

gerardstrada
07-14-2010, 01:35 PM
You'd still need a way to access them though. I've got something in the pipeline for exrTrader V2, but it'll still be a while (currently it's in the "proof of concept" phase - i.e. the idea basically works).
That sounds quite interesting!



Maybe what's needed is the ability in exrTrader to setup more buffers in a preset (in addition to the native Lightwave buffers) so that more of the buffers in your hdri3d issue #30 article can be output at once?
Yeah, it would be great if exrTrader/Janus could access the new DP_Extra Buffers or DP_Global Buffers! Perhaps another way could be a Pixel Filter version that pre-performs the buffer transformation setup internally and sends these buffer presets to the exporter/saver, but the ability to build our own transformed buffers is more useful, I think.


All those buffers look great! Maybe you or Cageman could come up with a tutorial on properly using them in Fusion or Nuke?

Thanks! Hope Cageman has the time to make a video tutorial. For my part, I'm writing an upcoming article for HDRI3D magazine where I want to show the compositing process in a more application-independent way (otherwise the article would be too extensive). This is why I composited the Tumbler scene (for Issue #30) within DP_FNE, so all the compositing and color grading was done within LightWave. The idea is that if you can do it within LightWave, you can do it within any compositor. I'm sure readers will be able to adapt the process to their own compositing packages.



Gerardo

3dWannabe
07-14-2010, 02:02 PM
That sounds quite interesting!

Thanks! Hope Cageman has the time to make a video tutorial. For my part, I'm writing an upcoming article for HDRI3D magazine where I want to show the compositing process in a more application-independent way (otherwise the article would be too extensive). This is why I composited the Tumbler scene (for Issue #30) within DP_FNE, so all the compositing and color grading was done within LightWave. The idea is that if you can do it within LightWave, you can do it within any compositor. I'm sure readers will be able to adapt the process to their own compositing packages.



Gerardo
Maybe you could create a very short video to go along with the article? Just a loose summary showing all the concepts in action.

Sometimes it really helps to see it done, as it's easy to misread an article or miss an important step.

A password protected video would be just one more reason for folks to buy the mag!

Anyway, definitely looking forward to the article, with or without a video!

3dWannabe
07-15-2010, 10:59 AM
No, you're right. But exrTrader only has access to the buffers as provided by the SDK.

You'd still need a way to access them though. I've got something in the pipeline for exrTrader V2, but it'll still be a while (currently it's in the "proof of concept" phase - i.e. the idea basically works).

Cheers,
Mike
Mike - I'll be looking forward to exrTrader V2!

But now that I've had a taste of what can be done with DPONT's tools and Gerardo's techniques - I'm starting to go into 'extra-buffer-withdrawal'!

I'm going to need a buffer 'fix' SOON before I get the 'shakes' - and exrTrader V2 is my only hope for getting the monkey off my back!

BTW - a bit of a double post as I asked on this http://www.newtek.com/forums/showpost.php?p=1037452 thread yesterday, but ...

For SG_CCTools (I know you didn't write them but I bet you've checked them out), I'm a bit confused about what to do with the SampleICC library? It seems more like a library used in compiling, and I'm not sure where it should go?

Also, the SampleICC library seems to require http://www.libtiff.org/ v3.6.1 to be located inside a particular folder. I did find a page discussing v3.6.1, but it loaded an exe for 3.8.2. Is this required to use the tools?

wxWidgets in comparison seems simple as I found an install for it.

3dWannabe
07-15-2010, 09:36 PM
I've been doing some testing with SG_CCTools, Jovian, fprime and G2.

I'm not finished, but I thought I'd make a post anyway, in the likelihood I have a few mistaken assumptions.

Assumption #1 - if I have a 20% grey (log 127,127,127) and I linearize it with Wolf's Simple Color Corrector, then convert the output back to log (using G2 or the SG_CCTools Image Filter), I should see 127,127,127 again.

So, my initial test was to create a scene with ambient light at 100% and the main light turned down to 0.

I displayed a photo linearized in Photoshop, and a few squares of different colors linearized by the Simple Color Corrector. One color was 127, 127, 127 before being linearized.

BTW - I'm monitoring the colors by using Jovian's screen grab on the fprime screen in this case, and later on a normal Lightwave render output screen.

First the good news for those who might be interested in using fprime & G2. I could make my linear picture look about perfect (almost identical to tabbing to the image displayed in the exact same position in Photoshop) by tampering with G2's settings. I used .4545 for the RGB values and 188% for the 'bright'. This also gave a very close match to a linearized dark blue color square.

However, when I looked at a 127,127,127 gray that was linearized with the Simple Color Corrector, and then 'converted' back to log by G2, the RGB output was 138,138,138.

I could make those values go down by adjusting the 'bright' values. If I changed the 'bright' to Gerardo's 145.5%, I got 106,106,106. So, there is a way to get 127, 127, 127 - but this seems to affect other color levels differently. I'll report back on this later.

I removed G2 and tried SG_CCTools.

input profile: linear sRGB
output profile: tried both log_sRGB and monitor profile with no detectable difference
both input and output intent were set to perceptual

Now the 127, 127, 127 ended up as 107,107, 109 (but with a lot of weird garbage pixels showing up only when I used SG_CCTools??)

So, I'm a bit confused. Should I be able to get 127, 127, 127 back?

BTW - for G2 users who reported inconsistent results, you have to enter the values in G2 and tab all the way through or they won't 'take'. I watched this happen a few times while playing with the 'bright' values.

Sometimes even though I changed all 3 values, only one output would get affected.

Also, I'll mention my very, very unscientific testing of the SG_CCTools color picker vs. Jovian. At least for 127, 127, 127 (maybe a poor test?), Jovian linearized it to 55,55,55 while SG_CCTools color picker linearized it to (55,55,55) if I set it to log,linear - and 54,54,54 if I set it to sRGB, linear.

I did some testing for SG_CCNode and the values were different in the blue channel from Simple Color Corrector depending on how I setup SG_CCNode
plus 1 in blue (log, linear, 2.2, not filled in)
same in blue (log, linear, 2.2, log_srgb, lin_srgb)
minus 2 in blue (log, linear, 2.2, log_MyCenterMonitor, lin_srgb)

Hope this is a bit useful? Would be better if I actually fully understood what I was doing!

gerardstrada
07-16-2010, 04:55 PM
I've been doing some testing with SG_CCTools, Jovian, fprime and G2.

I'm not finished, but I thought I'd make a post anyway, in the likelihood I have a few mistaken assumptions.

Assumption #1 - if I have a 20% grey (log 127,127,127) and I linearize it with Wolf's Simple Color Corrector, then convert the output back to log (using G2 or the SG_CCTools Image Filter), I should see 127,127,127 again.
There's a principle for all these kinds of tests: the results will depend on the gamma settings you use to linearize the image and to gamma-correct it later. If the gamma value used for linearizing an image is not the same gamma assumed for the correction, the results won't be the same. But if the gamma value used for linearizing an image is the same gamma assumed for its correction, the results should be the same.

If we have used the same assumed gamma for the linearization and the gamma correction, your assumption holds for SG_CCPicker, SG_CCNode, SG_CCFilter, Jovian Picker, Michael Wolf's Simple Color Corrector and Tim Dunn's Gamma Correction nodes, but not for G2 (at least not with the settings I shared). As I commented before, those settings are not accurate but will get you very close for an sRGB-ish gamma. In the case of a 127/127/127 color, the G2 settings (.4545/145) will give you a 125/125/125 color for a color linearized with a 1/2.2 gamma exponent. A Brightness of 146.5 will land you exactly on 127/127/127 again. If you are not getting these results, it's because the gamma used for the linearizations is not the same as the one used for the gamma corrections. Let's see...


So, my initial test was to create a scene with ambient light at 100% and the main light turned down to 0.

I displayed a photo linearized in Photoshop, and a few squares of different colors linearized by the Simple Color Corrector. One color was 127, 127, 127 before being linearized.
Ok. I think here is where the issues could begin. The gamma curve in Photoshop depends on the working color space we use. If, for example, you had used the aRGB color space, the gamma would be 2.2, but if you have used sRGB, the gamma would be the sRGB gamma formula, which is not 2.2 (it's in fact 2.4, but the formula has a linear segment near black that gives it a 2.2 'look'). If you linearize the image with a simple .4545 gamma correction within the sRGB color space, you wouldn't be linearizing the image accurately, and if you then gamma-correct with the sRGB gamma formula, the results won't be the same. Remember also that decimals are important if you want accurate round-trips. I think it would be easier if you use pure colors within LW and linearize/gamma-correct there.
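To make the difference concrete, here's a quick Python sketch (just an illustration of the formulas, not any LW or Photoshop plugin code) comparing the piecewise sRGB formula with a plain 2.2 exponent for the same 127 code value:

# Illustrative only: the piecewise sRGB transfer function vs. a plain 2.2 exponent.
# Values are normalized 0..1; 8-bit codes are value * 255.

def srgb_to_linear(v):
    # sRGB decoding: linear segment near black, then a 2.4 exponent
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4

def gamma22_to_linear(v):
    # naive "2.2 gamma" decoding
    return v ** 2.2

v = 127 / 255.0
print(round(srgb_to_linear(v), 4))     # ~0.2122
print(round(gamma22_to_linear(v), 4))  # ~0.2158 -> close, but not identical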


BTW - I'm monitoring the colors by using Jovian's screen grab on the fprime screen in this case, and later on a normal Lightwave render output screen.

First the good news for those who might be interested in using fprime & G2. I could make my linear picture look about perfect (almost identical to tabbing to the image displayed in the exact same position in Photoshop) by tampering with G2's settings. I used .4545 for the RGB values and 188% for the 'bright'. This also gave a very close match to a linearized dark blue color square.

However, when I looked at a 127,127,127 gray that was linearized with the Simple Color Corrector, and then 'converted' back to log by G2, the RGB output was 138,138,138.

I could make those values go down by adjusting the 'bright' values. If I changed the 'bright' to Gerardo's 145.5%, I got 106,106,106. So, there is a way to get 127, 127, 127 - but this seems to affect other color levels differently. I'll report back on this later.
The results with your settings here are not 138/138/138, but 162/162/162. I'm using SG_CCPicker here and the results are the same in Photoshop. This is what I do: in an empty scene, I use the Node Texture in Textured Environment. There, I'm able to linearize a Color node output (127/127/127) directly with SG_CCPicker, or with the SG_CCNode, Gamma Correction and Simple Colour Corrector nodes. Dither Intensity should be OFF.


I removed G2 and tried SG_CCTools.

input profile: linear sRGB
If you use color profiles here for testing gamma, you are opening the door to gamut/white points, etc. as well, and this can change not only the brightness of a color but also its hue and saturation. Better to keep things isolated to the gamma aspect only when testing gamma. In that case, use the gamma values provided by the SG_CCNode.


output profile: tried both log_sRGB and monitor profile with no detectable difference
both input and output intent were set to perceptual
This is weird. Are you sure your monitor profile is not the generic sRGB monitor profile provided by the OS?


Now the 127, 127, 127 ended up as 107,107, 109 (but with a lot of weird garbage pixels showing up only when I used SG_CCTools??)

Are you seeing these artifacts with the LW renderer or with FPrime? If it's with FPrime, it's a compatibility issue and, as with the SG_AmbOcc node, there's nothing a third party can do about it since there's no SDK for FPrime. But if the artifacts appear with the LW renderer, please report it here or directly to Sebastian.


So, I'm a bit confused. Should I be able to get 127, 127, 127 back?
Yes, but let's remember again that sRGB gamma is not 2.2, so you won't get the same color if you have linearized an image with a simple gamma exponent and then gamma-corrected it with the sRGB gamma formula. If you have linearized the image/color with SG_CCNode, you should get the same original color back (I get the same color here), but if you have used the Simple Color Corrector or Color Correction node you'll need to use decimals (.xxxx) and round them properly.


Also, I'll mention my very, very unscientific testing of the SG_CCTools color picker vs. Jovian. At least for 127, 127, 127 (maybe a poor test?), Jovian linearized it to 55,55,55 while SG_CCTools color picker linearized it to (55,55,55) if I set it to log,linear - and 54,54,54 if I set it to sRGB, linear.
Yes, this is right because as we have seen before, sRGB gamma formula is not exactly 2.2 gamma.


I did some testing for SG_CCNode and the values were different in the blue channel from Simple Color Corrector depending on how I setup SG_CCNode
plus 1 in blue (log, linear, 2.2, not filled in)
same in blue (log, linear, 2.2, log_srgb, lin_srgb)
minus 2 in blue (log, linear, 2.2, log_MyCenterMonitor, lin_srgb)

The results here with SG_CCTools are consistent and accurate for any color. Remember, if we linearize a color with a linear color profile, we need to gamma-correct it with the log version of that color profile, because you are adding gamut and other color space aspects to the equation. Otherwise, try the gamma conversion options of SG_CCNode only.

If we try SG_CCPicker and SG_CCNode for 2.2 gamma linearizations (log2linear), and later we gamma correct these linear colors with SG_CCNode (within DP_IFNE) or with FPGamma, we get exactly the same colors.

For Aurora's Gamma Correction node, if we linearize, let's say, a 127/127/127 color with a .45 gamma value, we'll get a 126/126/126 color after gamma correction with FP_Gamma or SG_CCNode (2.2). To get an accurate result we need to use more decimals (0.4545), which the node will round to 0.4546. This automatic rounding to .4546 gives accurate results (127/127/127). With Michael Wolf's Simple Colour Corrector node this rounding is not done automatically, and a .4545 value gives 126/126/126, so we need to enter .4546 by hand to get an accurate result. Again, gamma conversions are very sensitive to decimals if you want accuracy.
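If it helps to see why the decimals matter, here's a small sketch of that round-trip (my own assumptions about the math - raising to 1/gamma to linearize, 1/2.2 to correct, truncating to an 8-bit code - not the nodes' actual internals):

# Hypothetical sketch of the 8-bit round-trip described above.
def roundtrip(code, gamma):
    v = code / 255.0
    linear = v ** (1.0 / gamma)        # linearize with the entered gamma value
    corrected = linear ** (1.0 / 2.2)  # gamma-correct back with 2.2
    return int(corrected * 255.0)      # truncate to an 8-bit code

for g in (0.45, 0.4545, 0.4546):
    print(g, roundtrip(127, g))
# 0.45   -> 126
# 0.4545 -> 126
# 0.4546 -> 127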


Hope this is a bit useful? Would be better if I actually fully understood what I was doing!
If you need further help with that, you could post the images you have used here so we can check them out.



Gerardo

3dWannabe
07-16-2010, 08:28 PM
If you need further help with that, you could post the images you have used here so we can check them out.


Gerardo - I was not trying to linearize colors in Photoshop. I just had one reference picture I was experimenting with. [I'm including a scene with the image as an attachment]

Based on our conversations, as I'm primarily creating h.264 for the web, DVD and sometimes BluRay, it looks like I should work in sRGB?

To make a linear workflow happen, I believe I've got to be able to:

1. linearize any textures and photos that will generally be sRGB but could be aRGB.

I was hoping to be able to linearize all the images in the LW content folder and other images I might work with into whatever output format would be best. I've got 24GB RAM and 2GB VRAM, so I need to figure out the best format that won't cause issues in LightWave. I can use Photoshop to convert them to linear sRGB (if that is appropriate)? But which format (tga, exr, etc.)?

You made a very good point about sRGB not being 2.2 gamma. It sounds like trying to use 1/2.2 in simple color corrector would not be a good way to convert them (if they are not really 2.2).

In your article, you converted them to ProPhoto RGB (but I thought I was supposed to use sRGB)? I'm unclear on how I would convert them using a node if I did want to quickly do that?

If I did use SG_CCNode, what gamma exponent would I use if .454545 is not correct? 1/2.4?



If you use color profiles here for testing gamma, you are opening the door to gamut/white points, etc. as well, and this can change not only the brightness of a color but also its hue and saturation. Better to keep things isolated to the gamma aspect only when testing gamma. In that case, use the gamma values provided by the SG_CCNode.

Are you saying that for the node, leave the input and output profiles blank? There would be no difference between using the Simple Color Corrector then, right?


BTW - I searched the Lightwave Layout pdf and didn't find any reference to 'Dither Intensity' to turn it off?

2. linearize colors. For the 127,127,127 I created a Color node with that value, connected a Simple Color Corrector to it with .4545. If I output that to an exr and opened it in Fusion, and applied a LUT of 2.2, for some reason I get 107,107,107. So, I'm making a very bad assumption, and can't even round trip a color properly.

---
BTW - I'm seeing the artifacts not with fprime/G2 but only with the SG_CCFilter.

I'm going to include my very crude 'test' program.

It currently has a linear sRGB image used on the box_with_giant_lin_picture surface. The image node that displays it currently goes right out to the surface color, so the output should stay linear.

The greyBox 127 surface has a color node of 127,127,127 that is fed into a Simple Color Corrector using .4545 gamma and then output to the surface color (so it should hopefully be linear).

I have SG_CCFilter loaded as an Image Filter in the Effects/Processing tab.

It is using an input profile of lin_sRGB I created in Photoshop, and an output profile of my gamma 2.2, D65, i1 Display calibrated monitor.

When I hit F9, the linear image displays pretty close to correct, but the linear output of 127,127,127 from the simple color corrector shows up as 107,107,107.

**** Possibly this is due to my using lin_sRGB as the input profile of SG_CCFilter? ****

Should this input profile be lin_aRGB? But, I'm working in sRGB (or should I setup Photoshop as aRGB instead of sRGB)?

And, if I did that, the linear image of the girl wouldn't look quite right. Would I then convert all log sRGB images to linear aRGB images to linearize them?

As you can see, I'm still a bit confused as to how to get colors and pictures set up properly.

Ahhh. thanks so much for your patience! If I can get all this figured out, I'll have to create a 'dummies guide to linear workflow' video! I certainly feel like one.

3dWannabe
07-16-2010, 09:41 PM
Gerardo - I tried several other things, like using a linear aRGB.

Then - I found I'd made a mistake - or forgotten to save my changes when I set ambient intensity to 100% and light intensity to 0%. They are reversed in the GammaTest.zip I uploaded.

I also found the Dither you mentioned (annoying that dither is mentioned only 5 times in the layout manual pdf)!

So, now I'm getting 127,127,127 back for my gray. It was the light that was the problem. Sorry - my error! Ahhhh!

I'll re-post if I'm still having problems with fine tuning the values. Thanks!

gerardstrada
07-17-2010, 06:04 PM
Glad you solved the problem!

Just in case: it's not recommended to linearize images and save them at a low bit depth, since we'll lose color data in the dark areas of the image and later, when you gamma-correct it, you'll get color quantization (posterization in the darker areas). If you need to save linear images, use a 16/32-bpc format (exr, tiff, xdepth).

Since saving log 8-bpc images in linear space at 16/32-bpc increases the image file size considerably, 8-bpc images are commonly linearized within the 3D package. You could use the Image Editor for bright images, but dark images will be posterized. In that case you might want to map those textures onto a blank FP image for linearization in the Image Editor, or linearize them once the image is already in the FP domain of the renderer (Surface NE/Textured Environment).
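Just to illustrate the posterization problem numerically (a rough sketch assuming a simple 2.2 gamma, not tied to any LW plugin), you can count how few 8-bit codes survive in the shadows once an image is linearized and stored back at 8 bpc:

# Rough illustration of why linear images shouldn't be stored at 8 bpc:
# the darkest quarter of the display range collapses into very few codes.

codes_in_shadows = set()
for code in range(0, 64):                      # the darkest 64 display codes
    linear = (code / 255.0) ** 2.2             # linearize (simple 2.2 assumption)
    codes_in_shadows.add(round(linear * 255))  # store the linear value at 8 bpc

print(len(codes_in_shadows))  # only about 13 distinct codes remain for 64 input shades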

Can't see your SG_CCFilter setup here (settings are not transferable when the color profile names are not the same), but in case you want to preview in sRGB, the settings you describe are right: lin sRGB for the input profile and the log version of your monitor profile for the output profile. However, I can't reproduce the artifacts you mention here. Maybe the problem is in your color profiles. If you still have this problem, I could take a look at them.

As for the SG_CCNode, if you want to work only at gamma level, leave the I/O profiles blank and only set up the I/O space gamma. If you want to work at gamut & gamma level and your images are already in sRGB space, you only need to linearize those images at gamma level as well (only images that are not sRGB would need color profile conversions). For previewing in real sRGB space in post-process you will need to use the I/O color profiles.



Gerardo

Lightwolf
07-18-2010, 03:53 AM
Since saving log 8-bpc images in linear space at 16/32-bpc increases the image file size considerably, 8-bpc images are commonly linearized within the 3D package.
The downside of that (at least within LW) is that you either apply the CM before the mip-maps are created, which again results in banding that can be countered by dithering... or after the mip-maps are created (which is what nodal set-ups do), which has the downside that the scaling algorithm creating the mip-maps isn't working on linear images (as it should).
It only makes a relatively small difference though.
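A tiny sketch of why the order matters (illustrative numbers only, not exrTrader or LW code): averaging gamma-encoded pixels for a mip level is not the same as averaging linear pixels.

def to_linear(v):
    return v ** 2.2          # simple 2.2 assumption for the example

a, b = 0.2, 0.8              # two encoded (display-space) pixel values

wrong = to_linear((a + b) / 2)             # average first, linearize after
right = (to_linear(a) + to_linear(b)) / 2  # linearize first, then average
print(round(wrong, 4), round(right, 4))    # ~0.2176 vs ~0.3205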

Cheers,
Mike

3dWannabe
07-18-2010, 09:42 AM
The downside of that (at least within LW) is that you either apply the CM before the mip-maps are created, which again results in banding that can be countered by dithering... or after the mip-maps are created (which is what nodal set-ups do), which has the downside that the scaling algorithm creating the mip-maps isn't working on linear images (as it should).
It only makes a relatively small difference though.

Cheers,
Mike
So there is a bit of loss?

If you have enough VRAM and RAM (I've got 2GB and 24GB), is there any real penalty with pre-converting images to linear in Photoshop rather than using nodes?

I notice I can't convert to exr in Photoshop CS4 (at least without some extra plug-in). I guess with Photoshop, 16-bit TIFF would be the correct choice?

Is there another simple way to convert to exr with Photoshop, or some overlooked conversion program? And, I guess HALF would be fine?


can't reproduce the artifacts you mention here. Maybe the problem is in your color profiles. If you still have this problem, I could take a look at them.

It might be my icc profiles. I'll put together a zip with the profiles and instructions tonight.

Sorry about my earlier posts about 127,127, 127 not working. I should have rechecked my setup before posting. Dumb mistake.

I'm also interested in what you think about the downsides of linearizing images in Photoshop if there's ample VRAM and RAM.

Would this cause unforeseen problems - or actually slow down my renders?

BTW - now that I'm very close to getting fprime/G2/SG_CCTools to work perfectly (I'm still off a digit or so, but I'll deal with that later), I've been trying to convert an existing project to linear.

It uses IFW2 presets. I'm trying to figure out how to get a reflective gold surface to look like it did when it was log.

Besides the colors and images, are there other changes to specularity, etc. that must be accounted for when converting to linear?

I'm exploring the proper way to convert light intensities on this thread which I'm sure is part of the problem:
http://www.newtek.com/forums/showpost.php?p=1038152&postcount=198

and I understand that light colors must also be converted.

So far, this is working well. I can quickly enable and disable G2 and SG_CCTools to switch between them. The fprime/G2 previews look very close to the SG_CCTools renders.

Thanks so much for your articles and kind suggestions from many folks on these threads!

Lightwolf
07-18-2010, 09:55 AM
So there is a bit of loss?
Not a loss, but the scaling isn't quite 100% accurate.


If you have enough VRAM and RAM (I've got 2GB and 24GB), is there any real penalty with pre-converting images to linear in Photoshop rather than using nodes?
VRAM doesn't make any difference here. Well, the penalty is that images take longer to load and use 4 times as much memory.


I notice I can't convert to exr in Photoshop CS4 (at least without some extra plug-in). I guess with Photoshop, 16-bit TIFF would be the correct choice?
Probably... but it would also be blown up to 32-bit float internally by LW (which only supports 8- or 32-bit).


Is there another simple way to convert to exr with Photoshop, or some overlooked conversion program? And, I guess HALF would be fine?
Why not use LW?

Cheers,
Mike

gerardstrada
07-18-2010, 08:35 PM
The downside of that (at least within LW) is that you either apply the CM before the mip-maps are created, which again results in banding that can be countered by dithering... or after the mip-maps are created (which is what nodal set-ups do), which has the downside that the scaling algorithm creating the mip-maps isn't working on linear images (as it should).
It only makes a relatively small difference though.

Cheers,
Mike
That's a good point. Mostly for the difference between images linearized in the nodal system and images linearized in the Image Editor (in some special cases the difference is indeed not so small), and although I understand the logic behind it, in practice I've not been able to notice any difference (in mipmap results) between linearizing an image in the Image Editor and loading an image that is already linear. I've also had good results with the blank FP image trick, but different from the other two ways.


So there is a bit of loss?
No loss. But banding due to mipmapping/AA issues can be seen, mostly with line-pattern textures.


If you have enough VRAM and RAM (I've got 2GB and 24GB), is there any real penalty with pre-converting images to linear in Photoshop rather than using nodes?
If you have enough RAM, just more disk space usage.


I notice I can't convert to exr in Photoshop CS4 (at least without some extra plug-in). I guess with Photoshop, 16-bit TIFF would be the correct choice?
Photoshop supports OpenEXR by default (well, at least the very basic features).


Is there another simple way to convert to exr with Photoshop, or some overlooked conversion program? And, I guess HALF would be fine?
LW would be another good way, but the advantage of doing this in Photoshop is that when we convert the 8-bpc image to 32-bpc, the image is automatically linearized according to your working color space gamma, say the sRGB gamma formula. If you do it within LW, you'd have to linearize the image with the proper value/formula/profile before saving it.


I'm also interested in what you think about the downsides of linearizing images in Photoshop if there's ample VRAM and RAM.
Linearizing within LW is for saving RAM and disk space. That's suitable most of the time when thinking long-term, but if you are on x64 and you can afford the RAM and disk space, go ahead - everything will be simpler.


Would this cause unforeseen problems - or actually slow down my renders?
Nothing besides RAM and disk space. As for the slowdown, it depends on your equipment vs. your setups and the complexity of your scenes, I think. At least here, performing the linearizations of 8-bpc textures within LW has been more convenient, but all cases are different and you need to try what works best in your case. The mipmapping difference is relatively small (most of the time), as Mike has already pointed out, and as I commented before, in the mipmap case I have not found any difference between linearizing an image in the Image Editor and loading one that is already linear. The main problem I found with linearizing an 8-bpc image in the Image Editor is in fact the color quantization in the dark areas of the image. Perhaps a strategy in your case might be to try linearizing your textures within LW (with the method that suits you best), and later, if you have issues with a few textures due to mipmapping, you could convert only those to 32-bpc.


BTW - now that I'm very close to getting fprime/G2/SG_CCTools to work perfectly (I'm still off a digit or so, but I'll deal with that later), I've been trying to convert an existing project to linear.

It uses IFW2 presets. I'm trying to figure out how to get a reflective gold surface to look like it did when it was log.

Besides the colors and images, are there other changes to specularity, etc. that must be accounted for when converting to linear?
When working in LCS, we linearize all log colors that have an influence on the final colors of the image. This means plain colors, procedural texture colors, image colors, gradient colors (but not the color data from normal maps/vector displacement maps, etc., since that color data has no direct influence on the final colors of the image), light colors, backdrop colors and textured environment colors will also need linearization. But just colors. No scalar maps, no specular maps, no bump maps, no diffuse maps, etc.


I'm exploring the proper way to convert light intensities on this thread which I'm sure is part of the problem:
http://www.newtek.com/forums/showpos...&postcount=198

and I understand that light colors must also be converted.
Light colors must be linearized and you might want to use inverse square falloff for the lights, but there's no need to gamma-correct the brightness intensities of anything - unless you are trying to match the intensities you got by working in log space, which, for realistic lighting, I don't see the point of.

For your gold surface, what you might want to approximate is a physically correct Fresnel effect for metallic materials, by staying away from linear ramps in the incidence-angle gradients. Tint colors for reflections should be linear as well. You might also want to try energy-conserving shaders.
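If it helps, here's a small illustrative sketch (my own numbers, not the node setup itself) of the Schlick approximation that is often used for a physically plausible Fresnel curve, compared with a straight linear incidence ramp. The f0 value is just a rough placeholder for a gold-ish normal-incidence reflectance.

import math

def schlick(cos_theta, f0):
    # Schlick's Fresnel approximation: reflectance rises towards 1 at grazing angles
    return f0 + (1.0 - f0) * (1.0 - cos_theta) ** 5

f0_gold = 0.9  # placeholder reflectance at normal incidence
for angle_deg in (0, 30, 60, 80, 89):
    c = math.cos(math.radians(angle_deg))
    linear_ramp = 1.0 - c  # what a straight linear incidence gradient gives
    print(angle_deg, round(schlick(c, f0_gold), 3), round(linear_ramp, 3))
# The Schlick curve stays near f0 for most angles and only rises sharply near
# grazing angles, unlike the linear ramp.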


So far, this is working well. I can quickly enable and disable G2 and SG_CCTools to switch between them. The fprime/G2 previews look very close to the SG_CCTools renders.
Glad you are finding a workflow that works for you!



Gerardo

3dWannabe
07-18-2010, 08:58 PM
For your gold surface, what you might want to approximate is a physically correct Fresnel effect for metallic materials, by staying away from linear ramps in the incidence-angle gradients. Tint colors for reflections should be linear as well. You might also want to try energy-conserving shaders.


Wow - I'd just heard about energy conserving shaders on a making of Iron Man podcast.

And here I find a link where you were doing this in nodes a year ago.

http://www.newtek.com/forums/showpost.php?p=902706

I'm impressed!

Is this the type of node setup you're talking about for linear gold?



Can't see your SG_CCFilter setup here (settings are not transferable when the color profile names are not the same), but in case you want to preview in sRGB, the settings you describe are right: lin sRGB for the input profile and the log version of your monitor profile for the output profile. However, I can't reproduce the artifacts you mention here. Maybe the problem is in your color profiles. If you still have this problem, I could take a look at them.


Ok, I'm including my scene along with my profiles and the .txt file used to load them.

The image below shows the speckles in the grey box. I used log_aRGB.icc and lin_RGB.icc in this as it shows more speckles than if I use log_sRGB.icc and lin_sRGB.icc.

If I leave input profile and output profile blank I don't see any speckles.

gerardstrada
07-19-2010, 05:56 PM
Wow - I'd just heard about energy conserving shaders on a making of Iron Man podcast.

And here I find a link where you were doing this in nodes a year ago.

http://www.newtek.com/forums/showpost.php?p=902706

I'm impressed!

Is this the type of node setup you're talking about for linear gold?
It could be something like that as well (if you are mixing direct lighting with ambient lighting), but I was referring to something simpler, like the Conductor material. You could also use the useful tools of TruArt's Split Material or Michael Wolf's Material Blender to re-adjust the reflection shading with more flexibility.


Ok, I'm including my scene along with my profiles and the .txt file used to load them.

The image below shows the speckles in the grey box. I used log_aRGB.icc and lin_RGB.icc in this as it shows more speckles than if I use log_sRGB.icc and lin_sRGB.icc.

If I leave input profile and output profile blank I don't see any speckles.
Well, your color profiles look fine (checked here with Profile Medic). The bug seems to be related to multithreading and the Surface NE. If we use 1 thread, it disappears. I'm using an unreleased version of SG_CCFilter and Node here, and I also tend to use color profile conversions as pre-process and post-process (but not in the Surface NE) because, as I recommended in the article, I always use a common working color space in my color workflows, which means all images in any app share this common color space, and gamma corrections in the Surface NE are made at gamma level only (which is probably why I never met this issue). I'll re-post this bug to Sebastian later (if you can, please do the same). I know he will be extremely busy until September at least, but in the meantime, a solution for your work is to also use a common working color space; that is to say, if all your images are in the sRGB color space, you just need to use the I/O gamma fields (from sRGB gamma to lin) in the Surface NE. Color profile conversions would then be needed only in the post filter, but if you need to perform a profile conversion on a texture/footage, do it as a pre-process in the Image Editor and use the FP blank image trick if necessary. Thank you very much for your report :thumbsup:



Gerardo

3dWannabe
07-29-2010, 08:31 PM
Now that I've progressed to the point of "knowing that I don't know", I'm a bit confused about using Adaptive Sampling with a linear workflow.

My concerns at this point are for getting a great looking render without needing a render farm, and I gather it can reduce render times and help with anti-aliasing.

In the Except - Anti Aliasing Guide, it says:

http://www.except.nl/lightwave/aa/index.htm
"You do not want to use AS if you are re-exposing your image a lot after rendering. The AS system works on the gamma that LW renders in, so after re-exposing things may become aliased again. For linear workflow and similar techniques it's therefore recommended to turn AS off alltogether.

--
Is he saying that since applying gamma may cause aliasing anyway, don't do it? I'm not sure that makes sense if it can cut down on render times, and I wonder how much aliasing a program like Fusion or Nuke introduces when applying gamma during compositing?

So, I must be missing the point.

Adaptive sampling is mentioned a few times inside this very thread,

http://www.newtek.com/forums/showpost.php?p=1006695&postcount=34
http://www.newtek.com/forums/showpost.php?p=1006243&postcount=13

but I just don't have a definitive rule (as if there were one) for when it should and should not be used with a linear workflow.

faulknermano
07-29-2010, 11:58 PM
http://www.except.nl/lightwave/aa/index.htm
"You do not want to use AS if you are re-exposing your image a lot after rendering. The AS system works on the gamma that LW renders in, so after re-exposing things may become aliased again. For linear workflow and similar techniques it's therefore recommended to turn AS off alltogether.

--
Is he saying that since applying gamma may cause aliasing anyway, don't do it? I'm not sure that makes sense if it can cut down on render times, and I wonder how much aliasing a program like Fusion or Nuke introduces when applying gamma during compositing?



I'm also not sure about that suggestion. If we stick to linear images (and in FP, too) all the way into compositing, I don't see any problem with exposing an image. Am I wrong?

Tobian
07-30-2010, 06:20 AM
No, the point he is making is that the LW renderer's AS compares the contrast/difference between neighbouring pixels to determine if they need to be re-rendered. The LW renderer is unaware of the concept of gamma (adding a display filter or a pixel filter that alters the render doesn't tell the rendering engine anything; it's a post-process).

By its nature the LW rendering engine therefore works in 'screen space' (call it log, 2.2, sRGB or whatever you want). Sure, it's rendering in float, but the engine can only compare pixel differences in screen space. This means that very luminous surfaces don't AA properly (or appear not to), and anything in the lower end of the luminance scale is too tightly bunched together for the engine to see differences, so they don't receive any adaptive sampling. The simplest way to visualise this is to create a black-to-white gradient in Photoshop, then apply a 0.4545 gamma adjustment to it. If you are using LW to render in LCS it 'sees' something akin to that: all the darks are bunched up together and it can't 'see' enough difference. It's only the post filter/display correction that lets you see the image as it's supposed to look, but as I said, that happens after the rendering is done.

Using standard AA means every pixel receives AA, rather than it being based on a luminance/difference tolerance. It's less efficient, because it's brute force, but it means everything is being anti-aliased. The way round this would be for NT to add some awareness of gamma to the rendering engine, such as a gamma- (or colour space-) adjusted tolerance in the AS...
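A tiny numeric illustration of that bunching (just a sketch, nothing LW-specific): two pairs of tones that are equally far apart on screen are very unequal in the values the AS threshold actually compares.

def to_linear(code):
    return (code / 255.0) ** 2.2  # simple 2.2 assumption

# Each pair is 20 display codes apart, so they look roughly equally different
# to the eye - but the renderer compares the underlying linear values.
dark_pair   = abs(to_linear(30) - to_linear(10))
bright_pair = abs(to_linear(220) - to_linear(200))
print(round(dark_pair, 4))    # ~0.0082
print(round(bright_pair, 4))  # ~0.1367
# A single contrast threshold tuned for the bright pair will simply skip the
# dark pair, so shadow edges get little or no adaptive sampling.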

3dWannabe
07-30-2010, 07:40 AM
No, the point he is making is that the LW renderer's AS compares the contrast/difference between neighbouring pixels to determine if they need to be re-rendered. The LW renderer is unaware of the concept of gamma (adding a display filter or a pixel filter that alters the render doesn't tell the rendering engine anything; it's a post-process).

By its nature the LW rendering engine therefore works in 'screen space' (call it log, 2.2, sRGB or whatever you want). Sure, it's rendering in float, but the engine can only compare pixel differences in screen space. This means that very luminous surfaces don't AA properly (or appear not to), and anything in the lower end of the luminance scale is too tightly bunched together for the engine to see differences, so they don't receive any adaptive sampling. The simplest way to visualise this is to create a black-to-white gradient in Photoshop, then apply a 0.4545 gamma adjustment to it. If you are using LW to render in LCS it 'sees' something akin to that: all the darks are bunched up together and it can't 'see' enough difference. It's only the post filter/display correction that lets you see the image as it's supposed to look, but as I said, that happens after the rendering is done.

Using standard AA means every pixel receives AA, rather than it being based on a luminance/difference tolerance. It's less efficient, because it's brute force, but it means everything is being anti-aliased. The way round this would be for NT to add some awareness of gamma to the rendering engine, such as a gamma- (or colour space-) adjusted tolerance in the AS...
You write very clearly. I believe I understand now.

Do you know if there's any 'awareness' taking place in the 'linear workflow' of LW 10?

IRML
07-30-2010, 07:50 AM
You can make LightWave sample the AS at the output gamma by putting a gamma correction in as a pixel filter, which might speed up renders a little bit. I don't think it makes that much difference though, because linear renders tend to be darker overall, so if you make the AS more sensitive it's going to have the same sort of coverage you got before.

For the most part, if you have a tight AS setting like 0.02 or lower, I find things look fine even if you whack the exposure up really high in post, so the brute-force method of not using AS at all is a bit extreme for me.

Tobian
07-30-2010, 07:55 AM
It took me a while to write that as it's complex to explain! :D the trouble with a lot of this kind of stuff is if you write using the correct jargon it hurts your brain, and I don't understand all the jargon :) I understand it more on a visual level by observing the effects of what's going on, so that's how I try and explain it, but it's hard as there's a lot of contradictory and confusing information about :D

3dWannabe
07-30-2010, 08:04 AM
You can make LightWave sample the AS at the output gamma by putting a gamma correction in as a pixel filter, which might speed up renders a little bit. I don't think it makes that much difference though, because linear renders tend to be darker overall, so if you make the AS more sensitive it's going to have the same sort of coverage you got before.

For the most part, if you have a tight AS setting like 0.02 or lower, I find things look fine even if you whack the exposure up really high in post, so the brute-force method of not using AS at all is a bit extreme for me.

Are you saying use a pixel filter to apply 2.2 gamma to the linear?

If so, I'm compositing in linear, so that wouldn't work for me.

IRML
07-30-2010, 08:08 AM
Your compositor should be able to handle converting a 2.2 image to linear on import, but if not, just use a gamma image filter set to 0.4545 at the end. The pixel filter will convert the image to 2.2 before the AA gets to work on it, and the image filter will convert it back to linear afterwards.
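The underlying idea, sketched with plain numbers (this is only the math, not the actual LW filter settings): the encode/decode pair is essentially an identity on float pixel values, so only the sampling decisions change, not the stored linear data.

def encode(v):   # linear -> display-ish space, where the AS measures contrast
    return v ** (1.0 / 2.2)

def decode(v):   # display-ish -> back to linear for the saved file
    return v ** 2.2

linear_value = 0.2159
print(decode(encode(linear_value)))  # prints 0.2159 again (up to float precision)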

3dWannabe
07-30-2010, 08:30 AM
Your compositor should be able to handle converting a 2.2 image to linear on import, but if not, just use a gamma image filter set to 0.4545 at the end. The pixel filter will convert the image to 2.2 before the AA gets to work on it, and the image filter will convert it back to linear afterwards.

Thanks! That sounds like a good plan! I'm for anything that can speed up renders.

Most of the render time for a current project seems to get spent with AA, not radiosity, so anything that will speed that up is welcome.

IRML
07-30-2010, 08:52 AM
let me know how you get on, I've known about that technique for a while but I've only ever used it in crappy test renders, so I'm interested in some proper results

3dWannabe
07-30-2010, 09:02 AM
let me know how you get on, I've known about that technique for a while but I've only ever used it in crappy test renders, so I'm interested in some proper results
Well, I'm rather a neophyte at all this.

I wouldn't depend on my results as I probably don't have the proper eye to evaluate them - or understand the proper settings to use for testing.

Would be great if some of the heavy hitters would experiment a bit?

I wonder if the author of the except AA & Radiosity guides would be interested in checking that out?

Tobian
07-30-2010, 12:29 PM
The problem, though, is that because you are doing the AA adaptive sampling in 2.2/log space, the AA is slightly wrong (it may be imperceptible, but it will be wrong, and noticeable when compositing). On a technical level it's better to do the AA in linear colour space. Though to be fair, I render like that all the time; I just do the last step, applying an inverse gamma (of .4545), in my compositing app, just so I can see the result in LW and make sure it's all OK. The display port gamma correction in LW10 means I don't have to do that now, so I will just use that instead :)

It also depends highly on your samples and is highly scene-dependent, but sometimes, if you have high AA, it can be quicker to just not enable AS at all. This sometimes applies to 'noisy' renders, which involve blurry reflections.

faulknermano
07-30-2010, 04:25 PM
I see... In the case of the DP_FNEs it's possible to set up multiple passes globally (it's described in the HDRI3D magazine article). But as far as I can see, Janus has not been updated to work with the new DP_Extra Buffers version.



The newest (unreleased) build has been updated to work with the new buffers. It's coined "GlobalBuffers", as opposed to the older "Extra Buffers". I've been using it a lot, actually, and added some finer Janus-side control over the buffers.

faulknermano
07-30-2010, 04:26 PM
You'd still need a way to access them though. I've got something in the pipeline for exrTrader V2, but it'll still be a while (currently it's in the "proof of concept" phase - i.e. the idea basically works).

Cheers,
Mike

C'mon Mike! Hurry up! I'm itching to use this! :D (take your time... but not too long, okay?)

faulknermano
07-30-2010, 04:33 PM
Using standard AA means every pixel recieves AA, not just based on a luminance/difference tolerance. It's less efficient, because it's brute force, but it means everything is being anti-aliased. The way round this would be for NT to have some sort of awareness to gamma in the rendering engine, such as gamma(or colour space) adjusted tolerance in the AS...

Ah, I see. So you're saying that AS is detecting an assumed sRGB 2.2 gamma then? A colour space specification would be nice.

Tobian
07-30-2010, 05:49 PM
Technically the AS is done on the image buffer, which has no specific nature; it just resolves roughly as 2.2/sRGB/log with regard to the AS sensitivity threshold, so yes, a colour space setting for the AS would 'fix' that issue. Ideally, of course, LW would simply have rendered in linear and then applied a display gamma, but that would take a lot more work to make happen! :)

gerardstrada
08-03-2010, 04:20 AM
The newest (unreleased) build has been updated to work with the new buffers. It's coined "GlobalBuffers", as opposed to the older "Extra Buffers". I've been using it a lot, actually, and added some finer Janus-side control over the buffers.
That sounds great!
Btw, looking forward to your article in HDRI3D magazine.


Ah, I see. So you're saying that AS is detecting an assumed sRGB 2.2 gamma then? A colour space specification would be nice.
Let's see... human visual perception is non-linear and more sensitive to changes in darker areas than in bright ones. Relative to our visual perception, a linear encoding with a fixed differential threshold allots fewer distinguishable tones to the darker areas. So in this case the problem is not that the AS is detecting an assumed gamma but precisely the opposite, that is to say, the threshold is applied linearly, when a non-linear approach would provide the desired results. Ideally, this non-linear approach would be consistent with the response curve of the output simulation.



Gerardo

gerardstrada
08-03-2010, 04:34 AM
let me know how you get on, I've known about that technique for a while but I've only ever used it in crappy test renders, so I'm interested in some proper results
Well, I'm rather a neophyte at all this.

I wouldn't depend on my results as I probably don't have the proper eye to evaluate them - or understand the proper settings to use for testing.

Would be great if some of the heavy hitters would experiment a bit?

I wonder if the author of the except AA & Radiosity guides would be interested in checking that out?
The workaround works, but keep in mind that some sub-frame operations (like native mBlur, DOF, etc.) will be applied in gamma-encoded space, so you have to decide on a per-case basis.



Gerardo

jwiede
10-02-2010, 01:40 PM
Gerard, since Sebastian's SG_CCTools still haven't been ported to UB, is the only real solution for Mac UB LW to wait for LW10's LCS support? I have FPrime & G2, but since the images in your previous posts are no longer showing up, I can't see how you populated the fields (I just placed my orders for the back-ordered issues 18 & 30 today). Further, it sounds like the FPrime+G2 approach is only suitable for preview, not for output.

I keep hoping Sebastian will get Mike to port the SG_CCTools for UB, but no luck (hint, hint).

Lightwolf
10-02-2010, 01:50 PM
I keep hoping Sebastian will get Mike to port the SG_CCTools for UB, but no luck (hint, hint).
Unlikely, there's too many third party dependencies in the code (heck even the colour picker uses a third party GUI toolkit!). And considering that LW 10 will support it out of the box...

Cheers,
Mike

jwiede
10-02-2010, 02:37 PM
Yeah, now that you've said that, I recall that response coming up previously (well, minus the LW10 part). Hopefully Gerard can at least restate the G2 values needed (I only get "?" boxes for the embedded dialog images in his posts), but if not, the #18 back issue will have them.

3dWannabe
10-02-2010, 02:48 PM
Yeah, now that you've said that, I recall that response coming up previously (well, minus the LW10 part). Hopefully Gerard can at least restate the G2 values needed (I only get "?" boxes for the embedded dialog images in his posts).


Here is the missing image you'll need.

I just bought LW10, but I haven't had a chance to run it as I don't have my keys setup, so I can't comment even if I could without breaking some rule.

But, my thoughts going into it were that I'd be happy if I could have some input, maybe get some bugs fixed, and maybe have a usable version a bit before it is officially released. But, mainly to support NewTek, as there really can't be new versions without revenue streams.

I didn't plan on actually being able to use it in 'production' immediately, although I am always hopeful.

You'll end up buying it anyway, so you might as well support Newtek now as later, and it's cheaper now (I think you missed the free rigging DVD, but the reduced pricing still applies).

artstorm
10-02-2010, 05:03 PM
Unlikely, there's too many third party dependencies in the code (heck even the colour picker uses a third party GUI toolkit!). And considering that LW 10 will support it out of the box...

Cheers,
Mike

Well it will only have Linear Workflow (gamma correction) out of the box. CCTools also does conversion between ICC profiles, which is an invaluable feature.

Lightwolf
10-02-2010, 07:19 PM
Well it will only have Linear Workflow (gamma correction) out of the box. CCTools also does conversion between ICC profiles, which is an invaluable feature.
Which is why I'd like to see the colour management in LW to be plugin based.
Mind you, the concept of a changeable working space doesn't exist in 3D rendering (and most compositing apps) anyhow.

Cheers,
Mike

artstorm
10-02-2010, 09:09 PM
Which is why I'd like to see the colour management in LW to be plugin based.
Mind you, the concept of a changeable working space doesn't exist in 3D rendering (and most compositing apps) anyhow.

Cheers,
Mike

Yes, as far as I know After Effects is the only compositing application that can handle different color workspaces.

Which gives a pretty nice workflow together with CCTools. You can set up the CC Filter in LightWave to use a calibrated monitor profile, made with a Spyder for instance, together with a linear color profile.
That gives a preview render in LightWave identical to what you get when you render out as EXR and import it with the same linear color profile in After Effects, with a destination color space (like HDTV/Rec 709 or Universal Camera Film Printing Density) according to the final output.
And as After Effects also uses the calibrated profile for proofing, you get a very true color representation through the entire chain, starting in LightWave.

I don't know of any package that can handle this as well as LightWave together with the CCTools, so I'm extremely happy we have these plugins, especially when using a wide gamut screen that displays oversaturated images unless the application can use the monitor profile to correct them, which CCTools does brilliantly for LightWave.

gerardstrada
10-02-2010, 09:44 PM
3dWannabe, thank you.

Jwiede, for ~2.2 gamma correction you might want to try 146.5% in the Bright parameters as well.

Artstorm, :agree: I've heard that color management would be included as a color space conversion feature out of the box, but not through ICC/ICM color profiles (which is arguable), though ICC/ICM compatibility would be supported for 3rd parties. This would allow SG_CCTools and similar plugins to recognize color profiles from input images automatically and perform linearizations automatically as well.

Mike, the concept of a changeable working space (which is commonly the output color space) indeed exists now in the new Maya 2011, but it's poorly implemented (if both things mentioned earlier are implemented, Lightwave3D will be far better in this matter, I think - well, it already is better with the SG_CCTools). As for compositing packages, the concept has also been applied to them: all compositing packages work internally in RGB color space, but most of them leave it as it is, that is to say, ambiguous, and if we need managed colors, we have to define and convert all color spaces for input and output images by hand (in a similar way to what we do now within LW with the SG_CCTools, but harder), or we have to use an external CM system that takes care of that more easily. In the special case of AFX, as Artstorm has already explained, it has a built-in CM system that simplifies this process. There, color management (and linear workflow) is done automatically out of the box.



Gerardo

Lightwolf
10-03-2010, 09:48 AM
Mike, the concept of a changeable working space (which is commonly the output color space) indeed exists now in the new Maya 2011, but it's poorly implemented (if both things mentioned earlier are implemented, Lightwave3D will be far better in this matter, I think - well, it already is better with the SG_CCTools).
Actually, it's poorly documented to start with.
The kicker is "Additionally, you can specify the colour profile used for internal rendering colour calculations" - which could be anything, but I assume for efficiency it's just a profile->lin conversion (anything else would slow down rendering to a crawl - and mental ray isn't colour space aware anyhow).
I'm actually not even sure if Maya takes the display profile into account. It's certainly not documented and apparently not available as an option either. *shrugs*


As for compositing packages, the concept has also been applied to them: all compositing packages work internally in RGB color space, but most of them leave it as it is, that is to say, ambiguous, and if we need managed colors, we have to define and convert all color spaces for input and output images by hand...
We discussed this already, the concept of a working colour space has no relation to the colour spaces for i/o.

Cheers,
Mike

3dWannabe
10-03-2010, 11:02 AM
Yes, as far as I know After Effects is the only compositing application that can handle different color workspaces.


Fusion has this capability. You can have several viewers all with a different LUT that also allows you to change gamma for a linear workflow, of course.

And there are a number of color space tools:

http://www.vfxpedia.com/index.php?title=Eyeon:Manual/Tool_Reference/Color/Color_Space

artstorm
10-03-2010, 01:47 PM
Fusion has this capability. You can have several viewers all with a different LUT that also allows you to change gamma for a linear workflow, of course.

And there are a number of color space tools:

http://www.vfxpedia.com/index.php?title=Eyeon:Manual/Tool_Reference/Color/Color_Space

It's not really the same thing, as Fusion doesn't take your monitor's profile into account to soft proof it. Which means that you'd have to use a monitor which has its gamma and gamut calibrated correctly in hardware, unless you can generate a LUT file which takes both the monitor's profile and the destination workspace into account (Nuke can generate such a LUT somewhat close to accurate, which would then work in Fusion as well).
Anyway, both options are a way more expensive route to getting close to accurate colors than using ICC profiles like LW's SG_CCTools and After Effects do.

Captain Obvious
10-03-2010, 03:27 PM
Honestly, I never saw the point of getting complete accuracy with this stuff unless you also control the delivery medium (ie, high-quality print). The TVs or computer monitors that will be used to watch it will still be horribly wrong, so the colours are going to be off anyway. Might as well not bother, and just settle for "good enough".

3dWannabe
10-03-2010, 03:49 PM
Honestly, I never saw the point of getting complete accuracy with this stuff unless you also control the delivery medium (ie, high-quality print). The TVs or computer monitors that will be used to watch it will still be horribly wrong, so the colours are going to be off anyway. Might as well not bother, and just settle for "good enough".

I did some tests a month ago with Final Cut Pro, which I use to combine multiple cameras with audio - and for editing.

There was no way I could get out, using h.264, what I put in.

Adobe was better, but still things got changed.

I kind of see your point (unless you're outputting for the 'big screen'), but hope things will get better.

artstorm
10-03-2010, 04:16 PM
Honestly, I never saw the point of getting complete accuracy with this stuff unless you also control the delivery medium (ie, high-quality print). The TVs or computer monitors that will be used to watch it will still be horribly wrong, so the colours are going to be off anyway. Might as well not bother, and just settle for "good enough".

Well, it still makes some sense to care about colors even if your target is different computer monitors or TVs, which you don't have control over. If you deliver a color correct product for the medium in question, you have the best chance that it will look good for as many people as possible. Some are off on one side of the spectrum and some on the other side, and the product you deliver is in between. If you work with your colors already way off, and then the final output is watched by someone with their colors even more off, it will start to look really strange.
But still, as most of these things, it all boils down to how anal you are about perfection. :D

But it's not that uncommon that you have control over the final output medium as well. Print work as you mentioned, and also film work. On the latest shot I delivered for a film, I had the ICC profile provided for the film printing. And using CCTools and After Effects with a linear workflow, together with the ICC profiles in question, was just such a smooth workflow on my equipment / setup.
Of course I could have done color proofing only in post or bought a more expensive monitor, but now I got it more or less as I wanted already in LightWave on my humble monitor. And also, when using CCTools, I know for sure that if I pass the LightWave scene along to someone else to continue working on it, and they have a calibrated setup, they will see the same colors that I made in the first place. I did have some serious headaches with this and skin tones in LightWave before I got CCFilter and passed scenes along.

gerardstrada
10-03-2010, 04:28 PM
Mike, it's poorly documented indeed, but it's not just a gamma conversion: it uses the sRGB, HDTV and CIE XYZ primaries and assumes a D65 white point. It doesn't care if the render engine isn't colour space aware, since the color conversion is made on the input images and colors before the render engine processes them. Maya doesn't take into account any display profile (it assumes sRGB, I think) and that's just one of the several disadvantages of their poor implementation.

The concept of a working color space does relate to the way we manage input images and the way we preview the output. Within a package where we don't have the possibility to choose a working space automatically (let's say Lightwave and SG_CCTools, or let's say Fusion), we have to assume the color space of each input image (they can be different) and convert them to a common linear space (well known as the working color space for this matter). Though these packages render/process images in RGB space (ambiguous), this conversion of the input images changes their color numbers before the processing takes place. Later, for previewing, we assume the linear common space and convert to our non-linear display space (this setup should be disabled when saving the output images). The same thing happens within AFX when we choose a working color space; it's just that all this is done automatically by the package.
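A minimal sketch of that pipeline (the transfer functions are the standard sRGB ones; the input values and the simple blend are made-up illustrations, not what any particular package does internally):

import numpy as np

def srgb_to_linear(c):
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    c = np.asarray(c, dtype=np.float64)
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1.0 / 2.4) - 0.055)

# 1. Assume a colour space for each input and convert it to the common linear space.
texture_lin = srgb_to_linear([0.5, 0.25, 0.1])   # an 8-bit texture, assumed to be sRGB
hdr_lin     = np.array([1.8, 0.9, 0.4])          # an EXR/HDR input, already linear

# 2. All processing (lighting, blending, filtering) happens on the linear values.
composite_lin = 0.5 * texture_lin + 0.5 * hdr_lin

# 3. Only for previewing, convert to the display's non-linear space;
#    this step is skipped when saving the linear output images.
preview = linear_to_srgb(np.clip(composite_lin, 0.0, 1.0))
print(preview)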

artstorm, in the recent version of Fusion we are able to specify RGB primaries, white point, gamma, linear limit and slope to approximate our monitor's profile. It's not as easy as with AFX, but it's a good move in the right direction, I think.



Gerardo

Lightwolf
10-03-2010, 04:49 PM
Honestly, I never saw the point of getting complete accuracy with this stuff unless you also control the delivery medium (ie, high-quality print). The TVs or computer monitors that will be used to watch it will still be horribly wrong, so the colours are going to be off anyway. Might as well not bother, and just settle for "good enough".
I certainly agree. However, that's only half of the reason to get it right. The other one being that the actual computations on colours (which includes just about any image processing operation or any rendering process that produces colours) need an accurate translation from and to linear.
Where and how you handle that is largely a matter of convenience and preference though (and yes, convenience is a big factor here).

Cheers,
Mike

Lightwolf
10-03-2010, 04:59 PM
Mike, it's poorly documented indeed, but it's not just a gamma conversion: it uses the sRGB, HDTV and CIE XYZ primaries and assumes a D65 white point. It doesn't care if the render engine isn't colour space aware, since the color conversion is made on the input images and colors before the render engine processes them.
Yup, which is precisely what LW 10 does.


The concept of a working color space does relate to the way we manage input images and the way we preview the output.
Yes, but it has no relation to the actual processing of the image data - it doesn't change the algorithms a single bit. Which makes sense if you think about it from a physical point of view.
Or, put it that way... the concept of multiple, different working spaces is conceptually wrong to start with, as there is only one. That is, if you define working as what the CPU is meant to number crunch on.
If you use any other "working" colour space then it's basically bound to be the wrong algorithm.


The same thing happens within AFX when we choose a working color space; it's just that all this is done automatically by the package.
Which basically means it's no different at all - except more automated coupled with the loss of control (and plenty of ways to get it wrong, i.e. due to not being able to match and mix colour depths for example).


artstorm, in the recent version of Fusion we are able to specify RGB primaries, white point, gamma, linear limit and slope to approximate our monitor's profile.
Actually, you can use any macro (which is a mini flow with exposed controls) or tool to correct the display (however, that's only for the views).
As an extension to that also Fuses (scripted tools) which can include pixel shaders.
Using OS colour management functionality should be fairly easy to code for that as well (last time I did it was maybe 20-30 lines of code max) - but I suppose it's not high on eyeon's or their customers' priority list.
Looking back at the colour management course at Siggraph, that seems to be the least of anybody's worries (that's focused on film though).

Cheers,
Mike

gerardstrada
10-03-2010, 08:24 PM
Yup, which is precisely what LW 10 does.
Not only LW10, but LW 9.x with SG_CCTools too :)


Yes, but it has no relation to the actual processing of the image data - it doesn't change the algorithms a single bit. Which makes sense if you think about it from a physical point of view.
I was not referring to processing algorithms in any part of my post. I just said that the concept of a changeable working space is also applied to compositing packages, and though all of them work internally in RGB color space, most of them leave it as it is (ambiguous) and we have to define and convert all color spaces for input and output images by hand. This is just a fact.



Or, put it that way... the concept of multiple, different working spaces is conceptually wrong to start with, as there is only one. That is, if you define working as what the CPU is meant to number crunch on.
If you use any other "working" colour space then it's basically bound to be the wrong algorithm.
You can argue against the term, and I understand from a programmer point-of-view that the term working space is - at least - confusing, but I don't think it is totally wrong since the RGB numbers indeed change when we define a working color space. Let's consider also that in this case, the term is used by Autodesk and Adobe from a user point-of-view, since a working space for the user interface is necessary in order to produce a specified color appearance.


Which basically means it's no different at all
Things are not so simple. There are indeed differences depending on the color conversion tools of the compositing packages. In Fusion by default, for example, the Gamut tool doesn't provide Rendering Intent options (though its rendering intent is obscure, it could be using Perceptual or Relative Colorimetric, I guess), and the conversion only takes into account chromaticities, but not other color space aspects, which complicates CM setups unnecessarily; or the ColorSpace tool (which is in fact a ColorModel tool) is not accurate in its RGB conversions either, since it assumes a generic color space (itu 601) instead of the phosphors of the monitor that the image was originally intended to be displayed on. So things are indeed different in other packages if we don't have a CM system that handles this more easily.


except more automated coupled with the loss of control (and plenty of ways to get it wrong, i.e. due to not being able to match and mix colour depths for example).
The AFX approach is a no-brainer solution that doesn't leave room for the user to make critical mistakes. Not sure what you are referring to with "match and mix colour depths", but within AFX we are able to work at the same time with 8-bpc, 16-bpc and 32-bpc images.


Actually, you can use any macro (which is a mini flow with exposed controls) or tool to correct the display (however, that's only for the views).
As an extension to that also Fuses (scripted tools) which can include pixel shaders.
Using OS colour management functionality should be fairly easy to code for that as well (last time I did it was maybe 20-30 lines of code max) - but I suppose it's not high on eyeon's or their customers' priority list.
Looking back at the colour management course at Siggraph, that seems to be the least of anybody's worries (that's focused on film though).

Cheers,
Mike
Yup, but it's not like a real color profile conversion. The problem with an OS colour management functionality is that we need a real CM system running behind the scenes to get the proper results, mostly for LCD monitors which don't decouple.



Gerardo

Lightwolf
10-04-2010, 01:29 AM
Not only LW10, but LW 9.x with SG_CCTools too :)
To a degree yes, except that within LW9 plugins get to handle the pixel data too late, and the colour management thus doesn't affect all processing.


You can argue against the term, and I understand from a programmer point-of-view that the term working space is - at least - confusing, but I don't think it is totally wrong since the RGB numbers indeed change when we define a working color space.
RGB numbers always change as soon as color management is active - unless the source image is in the destination colour space already.


Let's consider also that in this case, the term is used by Autodesk and Adobe from a user point-of-view, since a working space for the user interface is necessary in order to produce a specified color appearance.
Actually, Autodesk doesn't use the term when it comes to Maya.


...or the ColorSpace tool (which is in fact a ColorModel tool) is not accurate in its RGB conversions either, since it assumes a generic color space (itu 601) instead of the phosphors of the monitor that the image was originally intended to be displayed on.
As it should, since it's a "normal" image processing tool like any other.


Not sure what you are referring to with "match and mix colour depths", but within AFX we are able to work at the same time with 8-bpc, 16-bpc and 32-bpc images.
The fact that the complete concept breaks down for the purpose of getting operations to perform properly if your project is not in 32-bit float (as it does in Photoshop).


Yup, but it's not like a real color profile conversion.
Nope, I never said that. But it gives you flexibility.

The problem with an OS colour management functionality is that we need a real CM system running behind the scenes to get the proper results, mostly for LCD monitors which don't decouple.
Current OSes provide at least the same CM functionality in their APIs as the free library used by SG_CCTools does.

Cheers,
Mike

Captain Obvious
10-04-2010, 02:16 AM
If you work with your colors already way off, and then the final output is watched by someone with their colors even more off, it will start to look really strange.
Well sure, if they're way off. But if they're only off by a little, then it doesn't really matter.




I certainly agree. However, that's only half of the reason to get it right. The other one being that the actual computations on colours (which includes just about any image processing operation or any rendering process that produces colours) need an accurate translation from and to linear.
Where and how you handle that is largely a matter of convenience and preference though (and yes, convenience is a big factor here).
I'm a big fan of doing EVERYTHING in linear and only correcting for the actual output device.

gerardstrada
10-04-2010, 03:19 PM
To a degree yes, except that within LW9 plugins get to handle the pixel data too late, and the colour management thus doesn't affect all processing.
If you are referring to performing the color conversions in the FP domain, we have ways to achieve that with SG_CCTools, too. Not so easy yet because of 9.x limitations - but it's doable (just in case, I've written an article about FP linear color conversions in HDRI3D magazine Issue# 33 (http://www.hdri3d.com/index.php?option=com_content&view=category&layout=blog&id=170&Itemid=103)).


RGB numbers always change as soon as color management is active - unless the source image is in the destination colour space already.
The difference is that the transformed colors for input images are processed that way; it's not just a preview LUT as it is for the output images. This means that input images are processed with their RGB numbers changed, and the resulting color appearance is different too, according to the working space we choose. Since we are defining the color boundaries according to this working color profile, I think the term working color space is not so wrong after all.


Actually, Autodesk doesn't use the term when it comes to Maya.
Autodesk uses the term working color profile, while Adobe uses the term working color space; seeing that a color profile describes a color space, for this matter it's the same thing.


As it should, since it's a "normal" image processing tool like any other.
It "should" only if the monitor primaries are unknown, but ideally, it should take into account the monitor's color space. If Fusion could take into account our monitor's profile, this would be possible. In fact, if it could work with ICC/ICM profiles, this would be easier.



The fact that the complete concept breaks down for the purpose of getting operations to perform properly if your project is not in 32-bit float (as it does in Photoshop).
That's why you have the chance to work in 32-bit float :)


Nope, I never said that. But it gives you flexibility.
I'd say it's simple but limited. Setting aside how to get the proper adjustments, the thing is that simple exposed RGB controls are not very useful for the purpose of proofing colors on LCD monitors. I guess the CustomGamut tool is Eyeon's solution for the lack of monitor profile recognition or additional working color profiles.


Current OSes provide at least the same CM functionality in their APIs as the free library used by SG_CCTools does.

Cheers,
Mike
If you ask me, WCS has in fact better functionality (because of the CIECAM02 implementation), but a display profile is one thing and a working-space profile is another. It's certainly not advisable to use a working-space profile as the display profile for a monitor at OS level. For getting correct visualization of a working color space within a package, a solution at the app level is a more appropriate approach, I think. That's one of the reasons why professional CM systems have connection plugins for compositing packages.

I think ICC/ICM compatibility in any compositing or 3D package would make this whole thing simpler and cheaper for most users.



Gerardo

Lightwolf
10-04-2010, 04:37 PM
If you are referring to performing the color conversions in the FP domain, we have ways to achieve that with SG_CCTools, too.
I was mainly talking about conversion prior to mip-mapping.
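A small numeric illustration of why that matters (the 2.2 power curve here is just a stand-in for the real transfer function; the texel values are arbitrary):

gamma = 2.2
dark, bright = 0.1, 0.9                       # two gamma-encoded texel values

# Mip-mapping (averaging) done directly on the encoded values:
averaged_encoded = (dark + bright) / 2.0      # 0.5

# Linearize first, average, then re-encode only for comparison:
to_linear = lambda v: v ** gamma
to_encoded = lambda v: v ** (1.0 / gamma)
averaged_linear = to_encoded((to_linear(dark) + to_linear(bright)) / 2.0)   # ~0.66

print(averaged_encoded, averaged_linear)      # the linearized average is noticeably brighter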


This means that input images are processed with their RGB numbers changed, and the resulting color appearance is different too, according to the working space we choose. Since we are defining the color boundaries according to this working color profile, I think the term working color space is not so wrong after all.
If the conversion to the colour space used for processing results in different numerical values due to a working colour space concept then that is absolutely wrong. The underlying maths is only correct for one colour space. Just like in the real world.

Autodesk uses the term working color profile...
Well, not in Maya (which is what I believe we're talking about). And yes, I searched the docs.


It "should" only if the monitor primaries are unknown, but ideally, it should take into account the monitor's color space. If Fusion could take into account our monitor's profile, this would be possible. In fact, if it could work with ICC/ICM profiles, this would be easier.
In the case of converting from some image with a profile to the one and only working colour space it doesn't make any difference.
The monitor profile is of no interest except for displaying to the user or if said monitor is the final output medium (which is rarely the case).

That's why you have the chance to work in 32-bit float :)
Yup, and that's also why the pre-float colour management concept used by both PS and AE doesn't make a lot of sense in the film/video world either. Especially as it didn't fulfil one requirement back in the days: to make sure that the actual processing is correct.


I'd say it's simple but limited.
You can mimic just about anything with those options. It's tricky to get there though.

I'll just leave it at that because the rest doesn't really need to be discussed further.

Cheers,
Mike

gerardstrada
10-04-2010, 08:54 PM
I was mainly talking about conversion prior to mip-mapping.
Well, most of the time the differences have been almost imperceptible here, but that's indeed a good improvement :)


If the conversion to the colour space used for processing results in different numerical values due to a working colour space concept then that is absolutely wrong. The underlying maths is only correct for one colour space. Just like in the real world.
It's the contrary; otherwise CM workflows implemented in compositing packages would be absolutely wrong and would provide absolutely wrong results, when the fact is that the workflow provides correct results. The underlying maths only takes into account RGB numbers. Notice that the color space in which the processing is done is the ambiguous RGB space, and an absolute, defined color space is needed to provide the proper color appearance. This is basically what any CM system does for any compositing package; it's done within LW with SG_CCTools, and now within Maya. It's in fact what happens in reality and how we mimic the real color flow when devices capture and display images. This is how new IBL implementations are able to match an image captured by a real camera with the CG imagery more easily and automatically... by replicating the real color flow in the imagery generation.



Well, not in Maya (which is what I believe we're talking about). And yes, I searched the docs.
The working color space concept has been adopted by Maya, too. They indeed call it working color profile in the manual.

But check for yourself on the Autodesk web page:

http://area.autodesk.com/maya2011/features

"A working color profile can be set globally and overridden on individual textures and render passes"



In the case of converting from some image with a profile to the one and only working colour space it doesn't make any difference.
The monitor profile is of no interest except for displaying to the user or if said monitor is the final output medium (which is rarely the case).

It indeed makes a difference, e.g. an RGB2XYZ conversion depends on the phosphors of the monitor that the image was originally intended to be displayed on. When this data is not available, itu 601 is usually assumed, but in that case results can range from slightly different (on an sRGB-range monitor) to drastically different (on an aRGB-range monitor). You can test this yourself within LW with DP_IFNE and the Transform2 node used as a color matrix.
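A hedged sketch of that difference (the matrices are the commonly published sRGB/Rec.709 and Adobe RGB 1998 RGB-to-XYZ matrices for a D65 white point; treat the exact digits as reference values, and the red test colour is arbitrary):

import numpy as np

RGB_TO_XYZ_SRGB = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

RGB_TO_XYZ_ADOBE = np.array([
    [0.5767, 0.1856, 0.1882],
    [0.2974, 0.6274, 0.0753],
    [0.0270, 0.0707, 0.9911],
])

rgb = np.array([1.0, 0.2, 0.2])     # a saturated red, linear RGB

print(RGB_TO_XYZ_SRGB  @ rgb)       # XYZ if the values were meant for sRGB/Rec.709 primaries
print(RGB_TO_XYZ_ADOBE @ rgb)       # a clearly different XYZ if they were meant for Adobe RGB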


Yup, and that's also why the pre-float colour management concept used by both PS and AE doesn't make a lot of sense in the film/video world either. Especially as it didn't fulfil one requirement back in the days: to make sure that the actual processing is correct.

The CM concept of AFX was born with the new ICC v4.2 for Motion Picture Workflows, not before. That version works natively in FP space, and as far as I know, major CM systems got updated to FP space based on the work of the International Color Consortium. And before that, proprietary color transformation algorithms were also based on the ICC v2.1 rendering intents. I've had the chance to try Lustre and CineSpace personally, and surprisingly, ICC/ICM color conversions provided more accurate results (compared with the final medium) than their expensive counterparts (3D LUTs) ..yes, within AFX.


You can mimic just about anything with those options. It's tricky to get there though.

I'll just leave it at that because the rest doesn't really need to be discussed further.

Cheers,
Mike

Not really, Mike. Simple exposed RGB controls are just like 1D LUTs. Adjustments are not only tricky but inconsistent from image to image, since we cannot mimic the changes in hue and saturation of the color gamut topographies with them. But if our monitor is an LCD, results will never be correct because they don't decouple. That's why equalEyes is not very good for LCD displays. Maybe with a sort of 3D color cube control, but that would be slower.



Gerardo

Lightwolf
10-05-2010, 02:27 AM
It's in fact what happens in reality and how we mimic the real color flow when devices capture and display images.
I think this is the core of the matter... when it comes to processing the raw numbers both the input as well as the output device characteristics should be moot. Which is one major reason why CM is needed (as well as being able to accurately see what you're doing, but that again is a fringe issue in a pipeline. Fringe as in: it happens on the borders, not the inside - and the display is a border)


The working color space concept has been adopted by Maya, too. They indeed call it working color profile in the manual.
Interesting, I didn't find it in the actual manual page on colour management.


"A working color profile can be set globally and overridden on individual textures and render passes"
Looking at that and their implementation it's more like a default conversion profile as opposed to a working profile.


It indeed makes a difference, e.g. an RGB2XYZ conversion depends on the phosphors of the monitor that the image was originally intended to be displayed on.
In this case it depends on what tool you're talking about. Gamut in Fusion is a colour tool while Colour Space is basically a channel twister (and thus more like a generic image processing function).


I've had the chance to try Lustre and CineSpace personally, and surprisingly, ICC/ICM color conversions provided more accurate results (compared with the final medium) than their expensive counterparts (3D LUTs) ..yes, within AFX.
More accurate results in what sense? Colour proofing or image processing?

Because if it's colour proofing then we're talking about two different things again...
Not really, Mike. Simple exposed RGB controls are just like 1D LUTs. Adjustments are not only tricky but inconsistent from image to image, since we cannot mimic the changes in hue and saturation of the color gamut topographies with them. But if our monitor is an LCD, results will never be correct because they don't decouple. That's why equalEyes is not very good for LCD displays. Maybe with a sort of 3D color cube control, but that would be slower.
Who is talking about simple RGB controls?

Cheers,
Mike

gerardstrada
10-05-2010, 05:22 PM
I think this is the core of the matter... when it comes to processing the raw numbers both the input as well as the output device characteristics should be moot. Which is one major reason why CM is needed (as well as being able to accurately see what you're doing, but that again is a fringe issue in a pipeline. Fringe as in: it happens on the borders, not the inside - and the display is a border)
Well, we are discussing it here :)


Interesting, I didn't find it in the actual manual page on colour management.
hmm... I think I did read it in one of their documents. Anyway, it's on the front page of their official features website.


Looking at that and their implementation it's more like a default conversion profile as opposed to a working profile.
I think they call it a working color profile because of the way it is used when we specify the color profile used for the internal rendering color calculations in MentalRay (which supports color profiles). These options look like the global equivalent of the local options found in SG_CCFilter for input and output profiles. In this way the input profiles can be assigned (and changed individually later in the Texture node) and converted to a common working color space. It looks like the same concept but a different implementation. MentalRay for Maya transforms the rendering color space to an internal color space - I guess the RGB color space (which from a programmer POV is the real working color space).


In this case it depends on what tool you're talking about. Gamut in Fusion is a colour tool while Colour Space is basically a channel twister (and thus more like a generic image processing function).
Yes I was referring to the ColorSpace tool:

"...or the ColorSpace tool (which is in fact a ColorModel tool) is not accurate neither in its RGB conversions since it assumes a generic color space (itu 601) instead of the phosphors of the monitor that the image was originally intended to be displayed on."



More accurate results in what sense? Colour proofing or image processing?

Because if it's colour proofing then we're talking about two different things again...
Color transformation algorithms are the same for color managing input images (for processing) or output images (for previewing). In fact, our results were not surprising, since ICC/ICM profiles produce a more accurate color appearance than 3D LUTs with their advanced interpolation algorithms. In spite of being more accurate, the main limitation of ICC/ICM profiles is their speed in post-production work, which is critical mostly with 4k/8k images. But I think they are better for CG work.


Who is talking about simple RGB controls?

Cheers,
Mike
Otherwise, that kind of solution (3D color cube controls / matrix) wouldn't be so practical with so many controls, and it would be slower than the CustomGamut tool.



Gerardo

Lightwolf
10-05-2010, 05:41 PM
I think they call it a working color profile because of the way it is used when we specify the color profile used for the internal rendering color calculations in MentalRay (which supports color profiles).

You're right, it does support a selection of them. They're used when writing out the frame buffer or to convert color arguments of shaders.


These options look like the global equivalent of the local options found in SG_CCFilter for input and output profiles.
Within mental ray it's either the destination or the source colour profile, never both, as the internal representation is known.
I suppose textures can use the functionality as well, but that would be equivalent to using a node in LW.


Yes I was referring to the ColorSpace tool
Yeah, o.k. then. I see that as a fancy channel mixer - as long as it's completely reversible (and it is) it fulfils its purpose.


Color transformation algorithms are the same for color managing input images (for processing) or output images (for previewing).
Yes, that's why I made the distinction from actual image processing (i.e. transform, blend, blur, etc.).


Otherwise, that kind of solution (3D color cube controls / matrix) wouldn't be so practical with so many controls, and it would be slower than the CustomGamut tool.

What do you mean by a 3D color cube / matrix, a 3D LUT?

Cheers,
Mike

gerardstrada
10-05-2010, 09:26 PM
You're right, it does support a selection of them. They're used when writing out the frame buffer or to convert color arguments of shaders.
According to the manual, the profile determines the rendering color space; these colors are transformed to an internal color space before they are written to the color frame buffer.


Within mental ray it's either the destination or the source colour profile, never both, as the internal representation is known.
I suppose textures can use the functionality as well, but that would be equivalent to using a node in LW.
Just in case, I'm referring to the I/O profile parameters for color transformations within the filter, not to input or output images.


Yeah, o.k. then. I see that as a fancy channel mixer - as long as it's completely reversible (and it is) it fulfils it's purpose.
Even in that way, and mainly for people with non-sRGB-ish monitors, it's not a bad idea to have the chance to take into account (in an easier way) the monitor's profile within the package, I think.


What do you mean by a 3D color cube / matrix, a 3D LUT?

Cheers,
Mike

I'm referring to a matrix where we specify how much red, green and blue contribute to each RGB channel. Something like what we are able to achieve with the Transform2 node within DP_IFNE, or something like the advanced controls in the spectral sensitivity color settings of Virtual DarkRoom:

rR rG rB
gR gG gB
bR bG bB

These kinds of controls are slower and indeed trickier to set up than simple

R G B

controls.
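For illustration, the difference between the two kinds of controls is just a diagonal matrix versus a full 3x3 mix; the numbers below are arbitrary, not values from any of the tools mentioned:

import numpy as np

rgb = np.array([0.8, 0.5, 0.2])

# Simple per-channel controls: only a diagonal, no cross-talk between channels.
simple_gains = np.diag([1.1, 1.0, 0.9])

# Full matrix controls: each row says how much R, G and B feed one output channel.
full_matrix = np.array([
    [ 0.95, 0.04, 0.01],   # rR rG rB
    [ 0.02, 0.96, 0.02],   # gR gG gB
    [-0.01, 0.05, 0.96],   # bR bG bB
])

print(simple_gains @ rgb)
print(full_matrix @ rgb)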



Gerardo

Lightwolf
10-06-2010, 02:25 AM
According to the manual, the profile determines the rendering color space; these colors are transformed to an internal color space before they are written to the color frame buffer.
Interestingly enough I've yet to see a single shader (in source code) that would actually take colour profiles into account when computing colours.

Just in case, I'm referring to the I/O profile parameters for color transformations within the filter, not to input or output images.
Strictly speaking any kind of processing is a colour transformation. And taking profiles into account every time a pixel is read or written anywhere in the system is ludicrous at best.


Even in that way, and mainly for people with non-sRGB-ish monitors, it's not a bad idea to have the chance to take into account (in an easier way) the monitor's profile within the package, I think.
Actually, that is a very bad idea. Since it's used as an image processing tool, you wouldn't want the result to vary depending on the machine it's run on. That idea is even worse if you start to take a render farm into account.


I'm referring to a matrix where we specify how much red, green and blue contribute to each RGB channel. Something like what we are able to achieve with the Transform2 node within DP_IFNE, or something like the advanced controls in the spectral sensitivity color settings of Virtual DarkRoom:

rR rG rB
gR gG gB
bR bG bB

These kinds of controls are slower and indeed trickier to set up than simple

R G B

controls.

Yes, but easy to derive from imagery and extremely fast to compute. And you wouldn't want to control a profile anyhow. You either have a proper one that you measured or you don't.

Cheers,
Mike

gerardstrada
10-06-2010, 02:27 PM
Interestingly enough I've yet to see a single shader (in source code) that would actually take colour profiles into account when computing colours.
Though they say color parameters of shaders may also be flagged with a color profile, I don't think the color profile is taken into account in the source code of the shader either. It seems the color profile conversions in that case are made before shader evaluation.


Strictly speaking any kind of processing is a colour transformation. And taking profiles into account every time a pixel is read or written anywhere in the system is ludicrous at best.
And I agree, but the color transformations I'm referring to are the color conversions performed by the SG_CCFilter or Node only. When we apply them in pre-process (Image Editor) or Surface NE, LW reads the color transformation only once; after that, color conversions for those colors or images are not re-calculated (even if we edit other aspects of the node tree) unless we change the settings within the filter or node again. So with LW & SG_CCTools, and contrary to Maya, color management doesn't increase render times at all.


Actually, that is a very bad idea. Since it's used as an image processing tool, you wouldn't want the result to vary depending on the machine it's run on. That idea is even worse if you start to take a render farm into account.
It's a good idea precisely for the opposite reason: without the possibility of recognizing a monitor's profile, results indeed vary depending on the machine Fusion is running on. You have to see how the itu 601 space is displayed on an aRGB monitor in a non-color-managed environment. All colors blow up.


Yes, but easy to derive from imagery and extremely fast to compute. And you wouldn't want to control a profile anyhow. You either have a proper one that you measured or you don't.

Cheers,
Mike
Extremely fast to compute at OS level? hmmm... wondering why 3DLUTs are necessary then. And as you said first, it's tricky to get the proper results. There are too many controls and you'd need the proper color matrix conversion for each color space. It could be a valid solution though, but still too complex for something that could be solved easily by the package by just recognizing ICC/ICM profiles.



Gerardo

Lightwolf
10-06-2010, 03:54 PM
When we apply them in pre-process (Image Editor) or Surface NE, LW reads the color transformation only once; after that, color conversions for those colors or images are not re-calculated (even if we edit other aspects of the node tree) ...
Actually they are, unless you apply them directly to the image as an image filter.


It's a good idea precisely for the opposite reason: without the possibility of recognizing a monitor's profile, results indeed vary depending on the machine Fusion is running on. You have to see how the itu 601 space is displayed on an aRGB monitor in a non-color-managed environment. All colors blow up.
But that's not the point of the tool in the first place. It's like saying you'd want a colour correction to render differently or a scale to produce a different result because of... what, the display colour profile? You might as well use the current voltage of the CPU as a metric then ;)


Extremely fast to compute at OS level?
At any level.


hmmm... wondering why 3DLUTs are necessary then.
Because they're even simpler to process (especially on older GPUs).


And as you said first, it's tricky to get the proper results. There are too many controls and you'd need the proper color matrix conversion for each color space. It could be a valid solution though, but still too complex for something that could be solved easily by the package by just recognizing ICC/ICM profiles.
It depends on what you want. You can easily generate a 3D LUT from a test image if you have a before/after version for example.
(Which is also why the idea of protected profiles is, rightfully, a silly one).

Cheers,
Mike

gerardstrada
10-06-2010, 04:43 PM
Actually they are, unless you apply them directly to the image as an image filter.
Neither in Surface NE, as long as we don't disconnect the image/texture/color from the SG_CCNode. We can change everything in the node setup and LW doesn't recalculate the color conversion.



But that's not the point of the tool in the first place. It's like saying you'd want a colour correction to render differently or a scale to produce a different result because of... what, the display colour profile? You might as well use the current voltage of the CPU as a metric then ;)
We need to proof colors according to the monitor profile to keep the intended color appearance. That's why any CM system needs the monitor profile in the first place.


At any level.

Because they're even simpler to process (especially on older GPUs).
If they are simpler to process (at any level), then I wonder why CM systems support complex, closed interpolation algorithms to accelerate and deal with 3D LUTs. equalEyes, for example, works only with 1D LUTs precisely to keep color conversions fast. The slowness of 3D LUTs, and how they have to be managed to keep accuracy with less computation time, is a critical aspect of every CM system. I guess this will change with new GPUs.


It depends on what you want. You can easily generate a 3D LUT from a test image if you have a before/after version for example.
(Which is also why the idea of protected profiles is, rightfully, a silly one).

Cheers,
Mike
Well, it also depends on the purpose of the LUT. If the LUT is for artistic purposes, that might be OK, but when LUTs are for accurate color profile conversions (where gamut mapping, node interpolations and other color space transformations are critical), then manufacturers of CM systems need to protect their know-how, since that's the heart of their business.



Gerardo

Lightwolf
10-06-2010, 05:09 PM
Neither in Surface NE, as long as we don't disconnect the image/texture/color from the SG_CCNode. We can change everything in the node setup and LW doesn't recalculate the color conversion.
It calculates it every single time a node output is evaluated, which can be multiple times per surface and final pixel.
It may not need to set-up the profile, but it needs to process the conversion every time.
Even using something like the Cache node will only help speed up a current evaluation.
Nothing is cached in the node editor (except for one time per frame set-ups of node _internal_ data). The processing is live every time.


We need to proof colors according to the monitor profile to keep the intended color appearance. That's why any CM system needs the monitor profile in the first place.
Yes, I know. But that has nothing to do with what the tool is intended for or image processing operations in general.


If they are simpler to process (at any level), then I wonder why CM systems support complex, closed interpolation algorithms to accelerate and deal with 3D LUTs.
I've no idea to be honest. And yes, obviously a 1D LUT will be even faster.


Well, it also depends on the purpose of the LUT. If the LUT is for artistic purposes, that might be OK, but when LUTs are for accurate color profile conversions (where gamut mapping, node interpolations and other color space transformations are critical), then manufacturers of CM systems need to protect their know-how, since that's the heart of their business.

Yeah, but there's nothing that can be protected if you can extract the process into a 3D LUT easily.

Cheers,
Mike

gerardstrada
10-06-2010, 06:13 PM
It calculates it every single time a node output is evaluated, which can be multiple times per surface and final pixel.
It may not need to set-up the profile, but it needs to process the conversion every time.
Even using something like the Cache node will only help speed up a current evaluation.
Nothing is cached in the node editor (except for one time per frame set-ups of node _internal_ data). The processing is live every time.
Something must be missing, since that would mean that every time the node output is evaluated, the conversion would be re-processed; however, the practical usage behaves differently: after the first calculation, LW takes no time for subsequent changes or render tests if the input texture or color is not disconnected from the node.


Yes, I know. But that has nothing to do with what the tool is intended for or image processing operations in general.
If sRGB-ish monitors are assumed, it's Ok. I'm just saying that the monitor profile recognition within a CM workflow would provide more accurate color reproduction in any case (and with any color tool).


I've no idea to be honest. And yes, obviously a 1D LUT will be even faster.
Guess those 3DLUTs for color profile conversions are somehow different then.


Yeah, but there's nothing that can be protected if you can extract the process into a 3D LUT easily.

Cheers,
Mike

The thing is that even when a 3D LUT is able to contain in itself all the data necessary for correct color mapping and color reproduction, the color transformation engines of advanced CM systems that generate the 3D LUTs don't include this data within the file. These 3D LUTs behave accurately only within the CM system environment: without a CM system, linear interpolation is assumed, and that can screw up the results depending on the color matrix resolution and how complex the original algorithms are. Reverse engineering is pretty hard without this data within the LUT file. And things are actually worse with the Academy/ASC LUT format.



Gerardo

COBRASoft
10-06-2010, 06:27 PM
I'm lost with all this high tech color talk :(.

Lightwolf
10-06-2010, 06:31 PM
Something must be missing, since that would mean that every time the node output is evaluated, the conversion would be re-processed; however, the practical usage behaves differently: after the first calculation, LW takes no time for subsequent changes or render tests if the input texture or color is not disconnected from the node.
Then there might be some preprocessing going on within the node to prep the conversion.
Trust me though, node connections are re-evaluated _every_ time (and imho more often than needed, but that's another issue).
This is not true for nodes applied as an image filter to an image, as the resulting image is actually stored in memory by LW in that case.
(Technically the nodes would still be re-evaluated for every pixel, but they're just not used anymore at all).
When it comes to surface shading it's brute force all the way.


If sRGB-ish monitors are assumed, it's Ok. I'm just saying that the monitor profile recognition within a CM workflow would provide more accurate color reproduction in any case (and with any color tool).
And it doesn't, because it is not a colour tool in the first place. It's an image processing tool. But that's what I've been saying all along. Its purpose is entirely different (which is quite obvious if you actually use it in a project).


Guess those 3DLUTs for color profile conversions are somehow different then.
I've no idea of what they use internally... *shrugs*


These 3D LUTs behave accurately only within the CM system environment: without a CM system, linear interpolation is assumed, and that can screw up the results depending on the color matrix resolution and how complex the original algorithms are.
Once you have your set of images you can generate the 3D LUT in any resolution you want (for as many data samples as there are pixels in the image).

Interestingly the idea of protected profiles was the laughing stock at the Siggraph course. And that was the speakers making fun of the idea.

Cheers,
Mike

gerardstrada
10-06-2010, 07:27 PM
Then there might be some preprocessing going on within the node to prep the conversion.
Trust me though, node connections are re-evaluated _every_ time (and imho more often than needed, but that's another issue).
This is not true for nodes applied as an image filter to an image, as the resulting image is actually stored in memory by LW in that case.
(Technically the nodes would still be re-evaluated for every pixel, but they're just not used anymore at all).
When it comes to surface shading it's brute force all the way.
Probably there is some pre-processing going on. I'd have to ask Sebastian. Anyway, the good thing is that re-evaluation takes no time :)



And it doesn't, because it is not a colour tool in the first place. It's an image processing tool. But that's what I've been saying all along. Its purpose is entirely different (which is quite obvious if you actually use it in a project).
Sorry, but color model conversions are color transformations performed by the ColorSpace tool (Tools/Color/ColorSpace). But beyond the semantics and the usage one gives it, without recognition of the monitor profile, results won't be predictable because color reproduction is wrong (which is quite obvious if you actually use it in a color-managed project). For example: if you use it as a fancy channel mixer tool as you said, let's say to later diminish the reds in the blues, and you are on an sRGB-ish monitor, results will look OK there, but if you later change to an aRGB monitor (or share the project with another studio, etc.) you'll see that the reds in that image will be nuclear. If Fusion were able to recognize the monitor profile, it wouldn't matter what monitor we use; we would always keep - as much as possible - the same color appearance.



I've no idea of what they use internally... *shrugs*
Of course they are different...



Once you have your set of images you can generate the 3D LUT in any resolution you want (for as many data samples as there are pixels in the image).

Interestingly the idea of protected profiles was the laughing stock at the Siggraph course. And that was the speakers making fun of the idea.

Cheers,
Mike
You are talking about two completely different things. People think that what they see on their monitors is all they get, but that's not the case in film, laser projections and print work. It seems that some people don't know that 3D LUTs for artistic purposes are totally different from LUTs for accurate color profile conversions, where gamut mapping, chromatic adaptations, node interpolations, managing of out-of-gamut areas, and several other color space transformations need to be taken into account.



Gerardo

Lightwolf
10-07-2010, 05:11 AM
Probably there is some pre-processing going on. I'd have to ask Sebastian. Anyway, the good thing is that re-evaluation takes no time :)
Well, it surely does, even with pre-processing.


Sorry, but color model conversions are color transformations performed by the ColorSpace tool (Tools/Color/ColorSpace). But beyond the semantics and the usage one gives it, without recognition of the monitor profile, results won't be predictable because color reproduction is wrong (which is quite obvious if you actually use it in a color-managed project).
Actually, they're only predictable within the usage context if the display profile is not taken into account. As I mentioned earlier, it's like having other processing functionality take the display profile into account.
The display profile is there to judge the output, not to influence the computation.

Since you don't seem to know what it's actually used for, here's an example:

Quite often chroma is sampled at a rate of less than one sample per pixel when dealing with video (4:2:2 and especially 4:2:0 and 4:1:1). Which sucks if you try to key. One way to improve the chance of success here is to convert to YUV, blur U and V (either radially or horizontally depending on the source) and then convert back to RGB.
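A bare-bones sketch of that trick (the matrices are the usual Rec.601 YUV pair; the box blur, the radius and the tiny test image are placeholders for whatever the compositing tool would actually do):

import numpy as np

RGB_TO_YUV = np.array([
    [ 0.299,    0.587,    0.114   ],
    [-0.14713, -0.28886,  0.436   ],
    [ 0.615,   -0.51499, -0.10001 ],
])
YUV_TO_RGB = np.linalg.inv(RGB_TO_YUV)

def chroma_blur(img, radius=2):
    # img: float array of shape (height, width, 3).
    yuv = img @ RGB_TO_YUV.T
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    for ch in (1, 2):                       # blur U and V horizontally only, keep Y sharp
        yuv[..., ch] = np.apply_along_axis(
            lambda row: np.convolve(row, kernel, mode="same"), 1, yuv[..., ch])
    return yuv @ YUV_TO_RGB.T

# Example: a tiny greenscreen-ish patch with a hard, blocky chroma edge.
img = np.zeros((4, 8, 3))
img[:, :4] = [0.1, 0.8, 0.2]
img[:, 4:] = [0.8, 0.3, 0.3]
smoothed = chroma_blur(img)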


You are talking about two completely different things. People think that what they see on their monitors is all they get, but that's not the case in film, laser projections and print work.
I don't know what "people" think, but it's quite obvious that numerically we work in ranges that are way beyond our reproduction devices.


It seems that some people don't know that 3D LUTs for artistic purposes are totally different from LUTs for accurate color profile conversions, where gamut mapping, chromatic adaptations, node interpolations, managing of out-of-gamut areas, and several other color space transformations need to be taken into account.

It's still a LUT, a Look Up Table. Pipe in a number and it returns one. Pipe in a series of numbers and you can reconstruct it. Computer science 101, no magic involved.
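To make that concrete, here is a bare-bones 3D LUT applied with plain trilinear interpolation (no vendor format, no protected data; the identity LUT and the grid size are arbitrary choices for illustration):

import numpy as np

def apply_3d_lut(rgb, lut):
    # rgb: values in [0, 1]; lut: array of shape (N, N, N, 3).
    n = lut.shape[0]
    pos = np.clip(np.asarray(rgb, float), 0.0, 1.0) * (n - 1)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, n - 1)
    f = pos - lo                                    # fractional position within the cell
    out = np.zeros(3)
    for corner in range(8):                         # blend the 8 surrounding entries
        pick = [(hi if corner >> axis & 1 else lo)[axis] for axis in range(3)]
        weight = np.prod([f[a] if corner >> a & 1 else 1 - f[a] for a in range(3)])
        out += weight * lut[pick[0], pick[1], pick[2]]
    return out

# Identity LUT: each entry stores its own normalized coordinate, so output == input.
n = 17
grid = np.linspace(0.0, 1.0, n)
identity_lut = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)
print(apply_3d_lut([0.3, 0.6, 0.9], identity_lut))   # ~[0.3, 0.6, 0.9]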

Cheers,
Mike

gerardstrada
10-07-2010, 04:30 PM
Well, it surely does, even with pre-processing.
Then it might be something else, because, as you can confirm if you try it, subsequent calculations after the first conversion take no time.



Actually, they're only predictable within the usage context if the display profile is not taken into account. As I mentioned earlier, it's like having other processing functionality take the display profile into account.
The display profile is there to judge the output, not to influence the computation.

Since you don't seem to know what it's actually used for, here's an example:

Quite often chroma is sampled at a rate of less than one sample per pixel when dealing with video (4:2:2 and especially 4:2:0 and 4:1:1). Which sucks if you try to key. One way to improve the chance of success here is to convert to YUV, blur U and V (either radially or horizontally depending on the source) and then convert back to RGB.
You are talking about using the ColorSpace tool to perform later image transformations, and of course in that context the monitor profile has no relevance, but I'm talking about using the ColorSpace tool to perform color transformations, where it indeed has relevance. Two different things.

Since you don't seem to know what it's actually used for, here's an example:

RGB color spaces have correlations between their channels that are not necessarily proportional to our visual perception, which complicates the process of performing perceptually equal changes in color values. This matters when adjusting color balance while keeping the other colors looking realistic too. In order to change the appearance of a pixel's color in a coherent way, we need a color space that represents the response of the three types of cones of the human eye. LMS space can be used for this purpose, which has the advantage in this case of being logarithmic as well (though Photoshop's Lab space would be more suitable). We can then convert from RGB color space to XYZ, and from there to LMS, with a Matrix tool.

In this context we indeed need accurate color reproduction of what we are doing, so recognition of the monitor profile is needed, otherwise adjustments won't be predictable.
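
As a rough numpy sketch of that RGB -> XYZ -> LMS chain (assuming linear sRGB/D65 input and the CAT02 cone matrix; other LMS matrices exist, and any log encoding is left out):

import numpy as np

# Linear sRGB (D65) to CIE XYZ
SRGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],
                     [0.2126, 0.7152, 0.0722],
                     [0.0193, 0.1192, 0.9505]])

# CIE XYZ to LMS cone responses (CAT02 matrix)
XYZ2LMS = np.array([[ 0.7328, 0.4296, -0.1624],
                    [-0.7036, 1.6975,  0.0061],
                    [ 0.0030, 0.0136,  0.9834]])

def rgb_to_lms(rgb_linear):
    """rgb_linear: (..., 3) array of linear sRGB values."""
    xyz = rgb_linear @ SRGB2XYZ.T
    return xyz @ XYZ2LMS.T

Adjustments would then be made on the LMS values and taken back through np.linalg.inv() of the same matrices.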


I don't know what "people" think, but it's quite obvious that numerically we work in ranges that are way beyond our reproduction devices.
Well, thinking that LUTs for transferring looks and LUTs for color profile conversions are the same thing doesn't reveal clear knowledge of the subject. Besides, in the case of the LUT generated from the set of before/after images you are referring to, it depends on the color space of those images. If those images are within the sRGB range, you won't have un-reproducible colors.


It's still a LUT, a Look Up Table. Pipe in a number and it returns one. Pipe in a series of numbers and you can reconstruct it. Computer science 101, no magic involved.

Cheers,
Mike
I work with both kinds of LUTs, and it's not so simple, because the aspects that have to be taken into account to build each LUT are quite different, and the results are not the same in practice. In the case of the LUT generated from the set of before/after images you are referring to, all we are doing is measuring the difference between the before/after images in order to transfer that color difference from one image to the other. No rocket science there, because the process of generating this kind of LUT does not take color spaces into account, seeing that, obviously, both images are in the same color space. It's just a LUT for matching looks; its purpose is not proofing colors (which is the purpose of my suggestion to artstorm), in which case the two images are in different color spaces. For accurate color space conversions you don't need 2 images, because when 2 images are in different color spaces, even when the numeric value of, say, a green is the same in both, the two greens are actually different colors. What we need instead are 2 different color profiles, and the conversion and LUT generation need to take into account gamut mapping, chromatic adaptation, node interpolation, etc, etc, etc. And the thing is that several of the algorithms for preserving color appearance and optimizing color reproduction are proprietary.

In practice, if you want to use an artistic LUT as a color profile conversion LUT you'll fail. First because the result you obtain will be in the same color space, which means your final output won't look the same in the output medium, because color and hue relationships have not been taken into account. Second because that LUT will only work for the color scheme of the sample images; if for some reason the color scheme changes in the sequence (let's say because of lighting conditions), the look will screw up and you'll need to take new samples again. It's limited and certainly not versatile for proofing colors. That's why artistic LUTs are used for matching looks and color profile LUTs for color space conversions.



Gerardo

Lightwolf
10-07-2010, 05:03 PM
Then it might be something else, because as you can confirm if you try it, subsequent calculations after the first conversion take no time.
I will... and any nodal process takes time if the output is being used. Heck, it even takes time if the node just passes values through (not a lot though).


You are talking about the usage of the ColorSpace tool for performing later image transformations, and of course in that context the monitor profile has no relevance, but I'm talking about the usage of the ColorSpace tool for performing color transformations, where it indeed has relevance. Two different things.
Precisely. I'm talking about its intended use, you're talking about the usage you've implied for it. ;)
It's a bit like blaming a Spaniard for not speaking Swahili ;)


Since you don't seem to know what it's actually used for, here's an example:
Actually, your example only shows that you don't seem to know what the tool's purpose is. Which is perfectly o.k.


Well, thinking that LUTs for transferring looks and LUTs for color profile conversions are the same thing doesn't reveal clear knowledge of the subject.
I'm talking about the concept of LUTs, not their use.

Besides, in the case of the LUT generated from the set of before/after images you are referring to, it depends on the color space of those images. If those images are within the sRGB range, you won't have un-reproducible colors.
D'oh... Honestly.


I work with both kinds of LUTs, and it's not so simple, because the aspects that have to be taken into account to build each LUT are quite different, and the results are not the same in practice. In the case of the LUT generated from the set of before/after images you are referring to, all we are doing is measuring the difference between the before/after images in order to transfer that color difference from one image to the other.
Which is precisely what a LUT does in the first place (as the name implies).
And that's also precisely why you can use a LUT to capture any kind of colour transformation (assuming that it is not context sensitive in terms of surrounding pixels of course, but MC doesn't anyhow - unlike some kinds of tone-mapping).

No rocket science there, because the process of generating this kind of LUT does not take color spaces into account, seeing that, obviously, both images are in the same color space.
Even if they were in different colour spaces, the captured LUT still captures the entire transform from A to B (assuming a suitable A for measurement, of course).
Still no rocket science.
Honestly, look up the concepts of LUTs again, especially 3D LUTs.
If you know the corresponding output value of any possible input value, then there is no need to know the process that leads to the output in the first place, as simple as that.
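
A bare-bones sketch of that capture idea (Python/numpy, nearest-neighbour lookup for brevity; a real 3D LUT would interpolate trilinearly):

import numpy as np

N = 17                                             # lattice size per axis
grid = np.linspace(0.0, 1.0, N)
lattice = np.stack(np.meshgrid(grid, grid, grid, indexing="ij"), axis=-1)

def black_box(rgb):                                # stands in for any unknown process
    return np.clip(rgb ** 0.8 * [1.05, 1.0, 0.95], 0.0, 1.0)

lut = black_box(lattice)                           # (N, N, N, 3): output for every lattice point

def apply_lut_nearest(rgb):
    idx = np.clip(np.rint(rgb * (N - 1)).astype(int), 0, N - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

pixel = np.array([0.2, 0.5, 0.7])
print(apply_lut_nearest(pixel), black_box(pixel))  # close; a denser lattice and interpolation tighten the match

The LUT never needs to know what black_box actually does, only what it returns for each lattice point.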

Cheers,
Mike

Lightwolf
10-07-2010, 05:17 PM
Then it might be something else, because as you can confirm if you try it, subsequent calculations after the first conversion take no time.
Believe it or not, I just constructed a simple test case (front projection mapping, simple geometry, high amount of AA to get some decent render times).
1m18s without the SG_CCNode on the surface, 1m49s with the node.
Which is a 30s difference. Mind you, this is a test case to highlight the difference.

Cheers,
Mike

gerardstrada
10-07-2010, 11:21 PM
Believe it or not, I just constructed a simple test case (front projection mapping, simple geometry, high amount of AA to get some decent render times).
1m18s without the SG_CCNode on the surface, 1m49s with the node.
Which is a 30s difference. Mind you, this is a test case to highlight the difference.

Cheers,
Mike
I have no idea what you are doing there... The point is that if you edit the node tree in any way (without disconnecting the texture from the SG_CCNode), the color conversion is not re-calculated by the node.


Precisely. I'm talking about its intended use, you're talking about the usage you've implied for it. ;)
It's a bit like blaming a Spaniard for not speaking Swahili ;)
No. Even if you don't like it, the ColorSpace tool performs color transformations, not image processing, which is what you do after that. But what you do after the ColorSpace tool's color transformation is up to you. You perform image processing; I perform another color transformation.

I don't understand how recognition of a monitor color profile could damage the package. If you don't know how to use it, just don't use it, but people who work with CM workflows can take advantage of it.


Actually, your example only shows that you don't seem to know what the tool's purpose is. Which is perfectly o.k.
The purpose of a tool is whatever purpose we can get from it. I don't limit the usage of my tools to some rigid criteria.


I'm talking about the concept of LUTs, not their use.
No, check your first post: you were talking about the usage of a tool for matching a color look from a set of before/after images. That's usage, not concept.



D'oh... Honestly.
Yeah, sure.


Which is precisely what a LUT does in the first place (as the name implies).
And that's also precisely why you can use a LUT to capture any kind of colour transformation (assuming that it is not context sensitive in terms of surrounding pixels of course, but MC doesn't anyhow - unlike some kinds of tone-mapping).
Yes, that's precisely what a LUT does, and you can capture any kind of colour transformation, but you don't seem to know that the implications behind building each type of LUT, and the results, are totally different depending on whether you are building a LUT for just transferring looks or a LUT for color profile conversions. They give different results in practice, as I explained previously.


Even if they were in different colour spaces, the captured LUT still captures the entire transform from A to B (assuming a suitable A for measurement, of course).
Still no rocket science.
Honestly, look up the concepts of LUTs again, especially 3D LUTs.
If you know the corresponding output value of any possible input value, then there is no need to know the process that leads to the output in the first place, as simple as that.

Cheers,
Mike

Honestly, look up the concepts of LUTs again, especially 3D LUTs for color profile conversions. The images used for a LUT that just transfers looks need to be in the same color space; that's how you know that each output value actually corresponds to its input value, otherwise you won't get correct results.



Gerardo

Lightwolf
10-08-2010, 01:57 AM
I have no idea what you are doing there... The point is that if you edit the node tree in any way (without disconnecting the texture from the SG_CCNode), the color conversion is not re-calculated by the node.
Well, apparently it is, otherwise it would not use any processing time when rendering.
It's re-calculated every time LW pulls a value from the nodal surface.


No. Even if you don't like it, the ColorSpace tool performs color transformations, not image processing, which is what you do after that.
So does a channel boolean or a matte control by that standard...


I don't understand how recognition of a monitor color profile could damage the package. If you don't know how to use it, just don't use it, but people who work with CM workflows can take advantage of it.

Well, for one because the display space has no place within a processing pipeline - but I suppose that's just me. You seem to prefer composites and renders that produce different results depending on the peripherals hooked up to a machine. I think that actually counters the idea of colour management completely - which is used to produce predictable results.


The purpose of a tool is whatever purpose we can get from it. I don't limit the usage of my tools to some rigid criteria.
Oh, be as creative as you like to be, no problem there. Just don't expect more than the intended purpose.


Honestly, look up the concepts of LUTs again, especially 3D LUTs for color profile conversions. The images used for a LUT that just transfers looks need to be in the same color space; that's how you know that each output value actually corresponds to its input value, otherwise you won't get correct results.
In the same colour space as what? It doesn't matter how you re-create the transformation; if it produces the same results it's correct. And any given transformation can be captured and then re-applied within the same context, producing the same results. Regardless of any colour space issues (as long as the images you apply the capture to are in the same space).

It's not necessarily elegant - but that's a different issue.

Cheers,
Mike

GraphXs
10-08-2010, 08:09 AM
Wow, I'm really tryin' to understand all this... but my brain is about to explode! So is the CC in GC just based on the output? (web, Blu-ray, NTSC, film, LCD, etc.) Gamma correction 2.2 is used to make the image look correct, for what? Also, after the 3D is rendered at gamma 2.2, does the compositing package, say After Effects, need to be corrected to gamma 2.2 to look correct?

gerardstrada
10-08-2010, 02:00 PM
Well, apparently it is, otherwise it would not use any processing time when rendering.
It's re-calculated every time LW pulls a value from the nodal surface.
Maybe, but for some reason, re-calculation in the node (before rendering) takes no time here. I have no programming background, so I'll have to ask Sebastian (he just got married, btw!).


So does a channel boolean or a matte control by that standard...
In some way it depends on how we use the tools, I think.


Well, for one because the display space has no place within a processing pipeline - but I suppose that's just me. You seem to prefer composites and renders that produce different results depending on the peripherals hooked up to a machine. I think that actually counters the idea of colour management completely - which is used to produce predictable results.

Without the monitor's profile for color proofing, we won't get predictable results. That's the reason why CM systems need a display profile. Images need to be converted to a common working color space so that they can be processed correctly, and the display profile is necessary for correct reproduction of the output colors. Without these two aspects taken into account, precisely what you are referring to will happen; that is to say, any color decision you make in your composites and renders will look different depending on the display device.



In the same colour space as what? It doesn't matter how you re-create the transformation; if it produces the same results it's correct. And any given transformation can be captured and then re-applied within the same context, producing the same results. Regardless of any colour space issues (as long as the images you apply the capture to are in the same space).

It's not necessarily elegant - but that's a different issue.

Cheers,
Mike
The thing is that it doesn't produce the same results. The images for a LUT that just transfers looks need to be in the same color space (and not just any color space), because otherwise the values of the before/after images won't correspond to each other.

An example:

We have 2 green pictures, totally green; one is in sRGB space and the other is in aRGB space. If you read the color values of both images, you'll see, let's say, 0-255-0. The same value mathematically, but the fact is that the aRGB gamut is bigger than the sRGB gamut, and in practice the green (0-255-0) of the image in aRGB space is a lot more saturated than the green (0-255-0) in sRGB space. However, mathematically both have the same numbers, and if we try to get the difference between the two values to build a 3DLUT, we get no difference, so our 3DLUT will be useless for that value. In a more complex image (let's say a landscape) other values will be totally wrong.
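
To see this numerically, here's a small Python/numpy check (just an illustration using the standard sRGB and Adobe RGB (1998) to XYZ matrices): the same encoded triplet maps to different XYZ coordinates, i.e. different real colours, depending on which profile the image carries.

import numpy as np

SRGB2XYZ = np.array([[0.4124, 0.3576, 0.1805],     # linear sRGB (D65) to XYZ
                     [0.2126, 0.7152, 0.0722],
                     [0.0193, 0.1192, 0.9505]])

ARGB2XYZ = np.array([[0.5767, 0.1856, 0.1882],     # linear Adobe RGB (1998) to XYZ
                     [0.2973, 0.6274, 0.0753],
                     [0.0270, 0.0707, 0.9911]])

green = np.array([0.0, 1.0, 0.0])                  # the encoded value 0-255-0, normalised

print("green tagged sRGB, in XYZ:", SRGB2XYZ @ green)   # ~[0.358, 0.715, 0.119]
print("green tagged aRGB, in XYZ:", ARGB2XYZ @ green)   # ~[0.186, 0.627, 0.071]

Same numbers going in, two different colours coming out; a before/after difference computed on the raw values alone can't see that.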

Now, in order to get a more useful 3DLUT (for artistic purposes), both images need to be in the same color space. But this common color space needs to be an absolute and unambiguous color space, big enough to contain both gamuts. And the critical aspect is not only the gamut, but also the chromatic adaptation for both white points, black point compensation, viewing conditions and several other aspects. But let's stay with the gamut issue only. If for some reason this common color space doesn't contain both gamuts completely, we'll have blind areas for the 3DLUT and it will be useless there too. If these areas are not contained in our display gamut, we won't notice any difference, but it will be noticeable in the output medium. The other issue is that if for some reason the color temperature changes, the color relationships will vary too, and our 3DLUT will be useless again.

These issues are solved (sometimes with not-so-optimal solutions, depending on the color engine) when we use 3DLUTs from color profile conversions.



Gerardo

gerardstrada
10-08-2010, 02:05 PM
Wow, I'm really tryin' to understand all this... but my brain is about to explode! So is the CC in GC just based on the output? (web, Blu-ray, NTSC, film, LCD, etc.)
In color management workflows, color corrections take into account input images from multiple sources, a common working color space, the output medium, its reference display and also our preview display. The idea is to get color consistency and predictability regardless of the imaging systems, applications or output media.


Gamma correction 2.2 is used to make the image look correct, for what?
For most monitors.


Also, after the 3D is rendered at gamma 2.2, does the compositing package, say After Effects, need to be corrected to gamma 2.2 to look correct?
When working with an LCS workflow, the final render has no gamma (1.0) and is saved that way, to be processed linearly in the compositing package, which applies a display LUT. The gamma applied within the 3D package is for preview purposes only.
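
A minimal sketch of that separation (plain Python/numpy; the sRGB curve stands in for whatever display LUT the compositing package applies):

import numpy as np

def srgb_encode(linear):
    """Standard sRGB transfer curve, used here as the display transform."""
    linear = np.clip(linear, 0.0, 1.0)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * linear ** (1.0 / 2.4) - 0.055)

render = np.array([0.05, 0.18, 0.5, 1.0])   # linear values straight from the renderer

saved_to_exr    = render                    # stays linear (gamma 1.0) on disk
shown_on_screen = srgb_encode(render)       # display-only transform for previewing

print(saved_to_exr)
print(shown_on_screen)

The file that travels through the pipeline never carries the viewing gamma; only the preview does.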



Gerardo

Lightwolf
10-08-2010, 04:37 PM
Maybe, but for some reason, re-calculation in the node (before rendering) takes no time here.
Well, of course not, the hit happens during rendering - not before. But that's what I tried to say all along.
Without the monitor's profile for color proofing, we won't get predictable results....
The result of a composite should always be the same regardless of the display profile used. After all, the display profile is a temp profile for viewing, not a permanent profile that gets baked into the comp (if it does then you have a problem). Any reliance on the display profile for computing the final output is a bake of the display profile.
You need the display profile to be able to visually judge your comp, but not to accurately render it (unless your monitor is the final output medium and you wish to bake that profile in).
That's why it's a display profile.
We have 2 green pictures, totally green; one is in sRGB space and the other is in aRGB space....
Maybe I wasn't clear enough. A 3D LUT can be created for any colour conversion from any specific colour space to any other specific colour space. And it will work precisely in that case only (but with different source images of course, as long as they're in the same colour space).
So in your case you'd have two LUTs, one from aRGB and the other from sRGB. Each one will respectively cover and recreate your entire colour transformation from aRGB or sRGB to whatever destination was used in the original set-up.

Cheers,
Mike

GraphXs
10-08-2010, 05:46 PM
Thanks gerardstrada, that helps me understand a little bit more. Q: Is it true that working in gamma space 2.2 makes it easier to light GI scenes, so we don't have to use a lot of lights to light the scene correctly? Does it make sense just to always work that way?

gerardstrada
10-08-2010, 06:43 PM
Well, of course not, the hit happens during rendering - not before. But that's what I tried to say all along.
Ah, OK. I was referring to the previews here.


The result of a composite should always be the same regardless of the display profile used. After all, the display profile is a temp profile for viewing, not a permanent profile that gets baked into the comp (if it does then you have a problem). Any reliance on the display profile for computing the final output is a bake of the display profile.
You need the display profile to be able to visually judge your comp, but not to accurately render it (unless your monitor is the final output medium and you wish to bake that profile in).
That's why it's a display profile.
For image processing, yes, but - without a CM workflow - not for color conversions that affect the final color appearance. Since by default Fusion doesn't proof colors, your color decisions are based on your specific monitor space, and if you see the resulting color appearance on another machine, or even worse, in the output medium, it will be different or totally different. To avoid that, we need Fusion to recognize the monitor's profile for proofing colors. In fact, ICC/ICM compatibility would be easier. The concept of proofing colors implies the monitor appearance never gets baked into the comp; it's for preview purposes only. If you want to output for computer screens, better use the sRGB color space.


Maybe I wasn't clear enough. A 3D LUT can be created for any colour conversion from any specific colour space to any other specific colour space. And it will work precisely in that case only (but with different source images of course, as long as they're in the same colour space).
So in your case you'd have two LUTs, one from aRGB and the other from sRGB. Each one will respectively cover and recreate your entire colour transformation from aRGB or sRGB to whatever destination was used in the original set-up.

Cheers,
Mike
Well, when we are building a 3D LUT for color profile conversions, we just need the color profiles so the color engine can do its work. That's all. There are no before/after source images, nor 2 different LUTs.
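
For example, with an ICC-based engine (littleCMS via Pillow here) the conversion really is just "give it the two profiles"; the file names below are hypothetical, for illustration only:

from PIL import Image, ImageCms

adobe_rgb = "AdobeRGB1998.icc"                 # hypothetical path to the source profile on disk
srgb      = ImageCms.createProfile("sRGB")     # built-in destination profile

transform = ImageCms.buildTransform(adobe_rgb, srgb, "RGB", "RGB")

img      = Image.open("plate.tif")             # hypothetical image encoded as Adobe RGB
img_srgb = ImageCms.applyTransform(img, transform)
img_srgb.save("plate_srgb.tif")

Gamut mapping, white point handling and interpolation are the engine's problem, not something read back from before/after images.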



Gerardo

gerardstrada
10-08-2010, 06:46 PM
Thanks gerardstrada, that helps me understand a little bit more. Q: Is it true that working in gamma space 2.2 makes it easier to light GI scenes, so we don't have to use a lot of lights to light the scene correctly?

GraphXs, maybe you are referring to the case of working in linear light while previewing with 2.2 gamma. In such a case, yes, I've noticed that GI scenes, lighting falloffs, DOF, mBlur, color blending, diffuse shading, fresnel effects and several other lighting and shading aspects behave more realistically. I've also noticed that we need fewer light bounces for GI.


Does it make sense just to always work that way?

I think it's highly recommended to work in linear color space (LCS), especially with the new version of LW 10, which will solve several issues with this kind of workflow; but I encourage you to try it with the current version too. There are several tools and ways to do it, and it's indeed worthwhile.



Gerardo

Lightwolf
10-09-2010, 05:13 AM
Ah, OK. I was referring to the previews here.

It's effectively the same there, just less obvious because there are fewer pixels to process.


... To avoid that, we need Fusion to recognize the monitor's profile for proofing colors. In fact, ICC/ICM compatibility would be easier. The concept of proofing colors implies the monitor appearance never gets baked into the comp; it's for preview purposes only.
Precisely what I've been trying to get across all along.


Well, when we are building a 3D LUT for color profile conversions, we just need the color profiles so the color engine can do its work. That's all. There are no before/after source images, nor 2 different LUTs.
The starting point here was that proprietary, closed and protected mechanisms don't make sense because you can bake any transformation into a 3D LUT. *shrugs*

Cheers,
Mike

gerardstrada
10-09-2010, 03:00 PM
It's effectively the same there, just less obvious because there are fewer pixels to process.
The first time it's very obvious here, but not later as you say.


Precisely what I've been trying to get across all along.
Great! :thumbsup:


The starting point here was that proprietary, closed and protected mechanisms don't make sense because you can bake any transformation into a 3D LUT. *shrugs*

Cheers,
Mike
It indeed makes sense in the case of 3DLUTs for color profile conversions. Because even when the raw color transformation is contained in the built LUT, in the case of 3D LUT formats from professional CM systems, the color space support, color space transformations and interpolations are managed in the CM system, not in the LUT file. This means that a 3DLUT file used outside the CM system will not produce correct color reproduction. And the difference is visible.

And this is the issue with the Academy/ASC LUT I was referring to before, since it's ambiguous in the aspects mentioned previously.



Gerardo

Lightwolf
10-09-2010, 03:09 PM
The first time it's very obvious here, but not later as you say.
Well, I consider rendering to be "later", and that's where the real performance hit comes into play.


It indeed makes sense in the case of 3DLUTs for color profile conversions. Because even when the raw color transformation is contained in the built LUT, in the case of 3D LUT formats from professional CM systems, the color space support, color mapping management and interpolations are managed in the CM system, not in the LUT file. This means that a 3DLUT file used outside the CM system will not produce correct color reproduction. And the difference is visible.
But that's not what we've been discussing. The point is... if you have a suitable image and pass it through any of the fancy shmancy proprietary systems, then you can capture the colour transformation precisely and store it within a 3D LUT.
I'm not saying that you create the 3D LUT within that system!

In a way it's exactly the same as measuring the characteristics of a display device. There is no reason to take it apart if you have the ability to feed it anything you want to and measure the respective output.

Cheers,
Mike

gerardstrada
10-09-2010, 04:44 PM
But that's not what we've been discussing. The point is... if you have a suitable image and pass it through any of the fancy shmancy proprietary systems, then you can capture the colour transformation precisely and store it within a 3D LUT.
I'm not saying that you create the 3D LUT within that system!

In a way it's exactly the same as measuring the characteristics of a display device. There is no reason to take it apart if you have the ability to feed it anything you want to and measure the respective output.

Cheers,
Mike
This is precisely what we are discussing: why these algorithms are proprietary. Because there's no practical way to get the same functionality, advantages and practical results any other way.

A LUT for looks is limited, because what you are suggesting would be like getting a 3DLUT (for artistic purposes) from a 3D LUT of a CM system (for color profile conversions). And the thing is that we would have to measure the displayed image to get the same color appearance. And if we measure the resulting displayed image with the 3DLUT applied, as we do with the characteristics of a display device, we would be limiting the resulting 3DLUT to that specific display device and gamut, and again, it will only be useful for the color appearance of that specific image on that specific device, and in a static way. If by chance the colors in the sequence change their relationships (because, let's say, lighting conditions changed or it's a different environment), our 3DLUT will be useless and we'll have all the problems mentioned previously. We would need a 3DLUT (for looks) taken from the resulting 3DLUT of the CM system for every single color scheme and sequence, and it shouldn't be baked into images because it would have a very limited gamut.

And if we already have the CM system, which provides proper color transformation and reproduction in an easier way, why would we need any 3DLUT other than the ones created by this system in the first place?
Hmmm... maybe if we could not afford several workstations linked to the CM system... But in that case, we'd need to validate the resulting LUT with another LUT for matching the display devices, which should preferably be the same model. Don't get me wrong, it's a valid solution if you can afford a CM system in the first place, but it has its limitations for production.

And that's the reason why studios still need the solutions provided by the CM system manufacturers, and the reason why those manufacturers keep their algorithms proprietary. And that's why the people or studios who cannot afford expensive CM systems are better off with ICC/ICM profiles instead of 3D LUTs, I think.

ICC/ICM profiles are cheaper or free (same as their color engines), easier to use and more accurate; still slower, but that will hopefully change with the new GPU technology.



Gerardo

Lightwolf
10-09-2010, 05:30 PM
This is precisely what we are discussing: why these algorithms are proprietary.
Erm, no, please go back through the threads and check our discourse on this.

Cheers,
Mike

gerardstrada
10-09-2010, 05:51 PM
Erm, no, please go back through the threads and check our discourse on this.

Cheers,
Mike

Well, first there was an RGB exposed-controls solution, which changed to a 3DLUT taken from before/after images in different color spaces (where the proprietary algorithms were questioned); later the same type of LUT, but taking other LUTs for color conversions into account; and finally your last suggestion, which is indeed a valid solution, but with some limitations for production.



Gerardo

Lightwolf
10-09-2010, 06:01 PM
Well, first there was an RGB exposed-controls solution, which changed to a 3DLUT taken from before/after images in different color spaces (where the proprietary algorithms were questioned),

Erm, they weren't. It was proprietary profiles. And when I asked for specifics about what you were talking about, the only thing that came up was matrices.

... later the same type of LUT, but taking other LUTs for color conversions into account; and finally your last suggestion, which is indeed a valid solution, but with some limitations for production.
That was my suggestion all along when it comes to matching a proprietary path. Whatever you read in between considering other LUTs doesn't sound like what I meant in the first place.

Depending on the circumstances baking out a conversion to a LUT isn't that bad either, especially if you have a fairly rigid pipeline.

The main implication here is, if all you need is a display profile and you work in an environment that profits from working in a defined colour space (such as compositing), then you don't necessarily need full CM support if you have 3D LUT support. There's a single defined input and output.

Cheers,
Mike

gerardstrada
10-09-2010, 07:40 PM
Erm, they weren't. It was proprietary profiles. And when I asked for specifics about what you were talking about, the only thing that came up was matrices.

That was my suggestion all along when it comes to matching a proprietary path. Whatever you read in between considering other LUTs doesn't sound like what I meant in the first place.
Well, you referred first to RGB exposed controls, and I talked about matrices before you began to talk about LUTs... Anyway. What is really proprietary (and the heart of the CM system) are the algorithms for color transformation and color reproduction within the CM system.



Depending on the circumstances baking out a conversion to a LUT isn't that bad either, especially if you have a fairly rigid pipeline.
Limitations in the process might be bearable, but blind zones in out-of-gamut areas might be a problem, depending on the colors reproduced by the output medium.


The main implication here is, if all you need is a display profile and you work in an environment that profits from working in a defined colour space (such as compositing), then you don't necessarily need full CM support if you have 3D LUT support. There's a single defined input and output.

Cheers,
Mike
What happens is that color conversions are not always from the working color space to the output medium; sometimes we'll need a concatenation of color conversions when we take the real color flow into account. Let's say from the working color space to the output medium, and from there to the reference display (all of this taking our preview display into account). If you have 3DLUT support based on real color profile conversions (from a CM system), you'll need the CM system for correct color reproduction, and if you have 3DLUT support based on a baked conversion, you'll have the limitations mentioned in the previous posts. If you ask me, either of these solutions is better than nothing. But I still think that ICC/ICM compatibility would be better for most users.



Gerardo

Lightwolf
10-10-2010, 05:16 AM
But I still think that ICC/ICM compatibility would be better for most users.
Let me put it this way... it'd be better to have it. I would question "for most users" though. ;)

Cheers,
Mike

gerardstrada
10-10-2010, 06:02 PM
Let me put it this way... it'd be better to have it. I would question "for most users" though. ;)

Cheers,
Mike
I'm aware that many users don't pay attention to CM workflows, or don't need them at all, or don't even know what they are, and they are the majority. But I'm referring to the users who do need to pay attention to CM workflows, mostly for CG work. Professionals and small/medium studios (doing CG and post-production work) who need to take care of CM in their daily work (for laser projection, film and print) cannot afford expensive CM systems (based on 3D LUTs), and they are the majority within the group of users who need this. When we use 3DLUTs outside a CM system, linear interpolation is assumed, correct color reproduction is poor, and results are not consistent from display to display. Proprietary 3DLUTs for color profile conversions range from not-so-cheap to very expensive (from USD 100 up to USD 5000), and costs for real CM systems increase per computer linked to the color system. The Academy/ASC LUT is ambiguous in several key aspects, such as color space support, color space transformations and interpolation, and there's no common agreement for the good of standardization.

It's in this context that programs with ICC/ICM compatibility provide predictable results and color consistency in a cheaper and easier way. Profiling is cheaper, color engines and color profiles are cheaper or free, it's a standard compatible with a wider range of imaging devices, and they also support CLUTs - 1D/3D LUTs (which means we can create and use 3DLUTs in an ICC/ICM environment with better color reproduction), XML data directly associated with devices, metadata, white points, black point compensation, floating point encoding ranges, multiple rendering intents, viewing condition specifications, and much more. The CIECAM02 visual model works with ICM profiles, which can extend the CM functionality even more. I do hope LW10 has ICC/ICM compatibility, at least for 3rd-party developers.



Gerardo
