
View Full Version : Colour Management in LightWave



John Geelan
01-18-2009, 02:55 AM
I am coming from a successful colour management workflow in Photoshop for the purposes of print graphics.
I am new to CG and LightWave but want to implement a similar colour management workflow in LightWave for the purpose of print graphics.
I understand there are problems involved in doing so and that there are solutions being offered, for example by Gerardo, for a linear workflow based on Sebastian Goetsch's SG-CCTools.
To understand the solution, I must first understand the problem.
Can anybody outline ALL of the problems involved in the implementation of a colour management workflow in LightWave with a view to print graphics?

akademus
01-18-2009, 11:54 AM
I guess the first and the biggest issue is LW's inability to work in CMYK colour space, due to its nature as an animation and FX package.

Captain Obvious
01-18-2009, 12:52 PM
Well, the ONLY problem I can think of is that Lightwave, like all CG packages, works in a linear RGB color space. Just maintain a linear workflow, though, and everything will be peachy. There's no CMYK, obviously, so you'll have to do that conversion in Photoshop.

toby
01-18-2009, 02:23 PM
the print graphics section is likely to have more info -
http://www.newtek.com/forums/forumdisplay.php?f=173

Captain Obvious
01-18-2009, 02:56 PM
the print graphics section is likely to have more info -
http://www.newtek.com/forums/forumdisplay.php?f=173
Except nobody ever bloody posts there!

Matt
01-18-2009, 03:10 PM
Some of this information might help:

http://www.lightwiki.com/SG_CCTools_-_For_Color_Management_and_Linear_Workflows

John Geelan
01-19-2009, 07:32 PM
As I understand it, problems exist for a linear workflow well before any CMYK vs RGB printing considerations.
The problems appear to centre on how to ensure a linear gamma workflow throughout LightWave, so as to maintain control of colour and achieve colour predictability up to, and including, the rendered image.
Apparently, the reason it is difficult to implement a linear gamma workflow in LightWave is because LightWave combines linear and non-linear gamma processes within its design.
There is also the added complication that the artist's monitor applies a non-linear gamma correction.

One scenario which demonstrates the problem can be stated as follows:
A rendered image from LightWave is always that of a scene viewed through one of LightWave's virtual cameras.
LightWave's virtual cameras operate at gamma 1. This means there is a 1:1 ratio between what the virtual camera sees and what that same virtual camera outputs ie, it is operating with a linear gamma.
A problem arises when the artist chooses a colour using the Windows colour picker. The colours of the Windows colour picker are non-linear; they are already gamma corrected to a gamma value of 2.2. (I imagine LightWave's colour picker would have a linear gamma?)
But LightWave's virtual camera assumes it is looking at a colour with linear gamma and will treat the colour as such. The camera imposes a conversion on the colour resulting from virtual photons striking its sensor - a non-linear gamma colour of gamma 2.2 is converted to a linear gamma ie, gamma 1.
The reduction factor is 1/2.2 = 0.4545. This means the colour which leaves the camera has its gamma value reduced, from its original gamma value of 2.2, by a factor of 0.4545.
Now this same distorted colour will be presented to the artist via his/her monitor. And here another problem arises - the artist's monitor is non-linear ie, is corrected to a gamma value greater than 1. A further conversion of the distorted colour takes place according to the monitor's gamma value.
While there are means, I believe, by which a realistic colour can be achieved during pre-processing, it appears that other elements of the scene ie, shadows and light falloff, will require separate treatment.
It appears then, that the goal of controlling colour, and controlling elements such as shadows and light falloff, in order to achieve predictable output, is very difficult.
From what I gather, similar problems also arise with reflection, refraction, caustics etc.
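The mismatch described above can be sketched in a few lines of Python. This is a simplified model assuming a pure 2.2 power curve (real display encodings like sRGB differ slightly), not anything LightWave-specific:

```python
# Simplified gamma model: display value v in 0..1, pure 2.2 power law.
# (An assumption for illustration; exact sRGB uses a piecewise curve.)

def decode_to_linear(v, gamma=2.2):
    """Display-referred (gamma-encoded) value -> linear light."""
    return v ** gamma

def encode_for_display(v, gamma=2.2):
    """Linear light -> display-referred value."""
    return v ** (1.0 / gamma)

# A mid-grey picked on a gamma 2.2 display:
picked = 0.5

# If the renderer treats the picked value as linear light, the energy
# it actually works with is much darker than the artist intended:
as_linear = decode_to_linear(picked)      # ~0.22

# A proper round trip (linearize on input, re-encode on output)
# returns the intended value:
round_trip = encode_for_display(decode_to_linear(picked))  # ~0.5
```

The point is that each transform must be paired with its inverse somewhere in the pipeline; a picked colour that is decoded but never re-encoded (or vice versa) is what produces the "distorted colour" described above.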

Experts reading the above will, no doubt, spot inaccuracies and, perhaps, downright fallacies.
I really don't care whether I might have made a fool of myself. As somebody new to CG, my purpose is to achieve an accurate understanding of ALL the problems involved in implementing a linear gamma workflow in LightWave.
I'll be grateful for any help in achieving just that.

Matt - Thanks for the link. I've been away for the weekend and just got back. I'll digest the info tomorrow.

toby
01-19-2009, 11:42 PM
As I understand it, problems exist for a linear workflow well before any CMYK vs RGB printing considerations. [...] I'll be grateful for any help in achieving just that.

I'm trying to wrap my head around gamma and colorspace issues. Never thought about color pickers - doh! But are you sure there's no conversion that takes place after you've picked a color?
Thanks for sharing

A Mejias
01-20-2009, 12:39 AM
I guess the first and the biggest issue is LW's inability to work in CMYK colour space, due to its nature as an animation and FX package.

As a rule of thumb you never want to work in CMYK. You may have had to 10 years ago, but not any more. CMYK conversion is the very last step you take if you're working for print. And in many modern printing processes you never use CMYK at all, because the printing hardware driver interface does that for you automatically. In any case the only people that need to work in CMYK are those in prepress. They will do the conversions with the proper transforms for the equipment, inks and paper they are using. Doing it in Photoshop with the default settings yourself is a mistake that can lead to very poor results.

Kids don't try this at home!

John Geelan
01-20-2009, 05:20 AM
But are you sure there's no conversion that takes place after you've picked a color?

A conversion to linearize?
Nothing I've read, so far, indicates an automatic conversion to linear gamma takes place using the Windows colour picker.
Following the link Matt provided, I find this statement - "The SG_CCPicker is currently the only customizable picker that automatically linearize any color we choose."
I imagine LightWave's colour picker, on the other hand, ought to be, by design, operating with a linear gamma, consequently, there is no need for a conversion.

Accurate and detailed info is proving pretty difficult to find in respect of LightWave and linear workflows.

hydroclops
01-20-2009, 05:32 AM
First you need to use hardware to calibrate your monitor and create a profile. But are most monitors even good enough to make this worthwhile?

I'm interested in this also. The color picker is one issue as are any image files used as image maps. I get confused by the light falloff issue.

Calibrate monitor (profile(s) for monitor and/or video card???)
choose a color space
color picker
image files
light fall-off

What did I leave out?

I'm thinking of getting a pro to come to my house for an hour and help me untangle this.

Sarford
01-20-2009, 05:48 AM
Just out of curiosity, what use is having a color-managed workflow? In video most of the output gets put through some form of image manipulation (AE, Combustion, Shake etc) to get some color grading, judged on the appearance on calibrated monitors. Then it goes onto the web or on telly and every recipient has different settings on his or her tv/monitor, seeing something different than what you put out.

In print I see more value for this workflow, but even there I've never seen back in print what I saw on my calibrated monitor or from my calibrated printer, because of different inks, paper stock, gamut, conditions etc.

hydroclops
01-20-2009, 06:26 AM
Just out of curiosity, what use is having a color-managed workflow?

My analogy is to photography/cinematography. A linear workflow in a CG package like LW is like having exposure control in film. When you go into color correction, you have somewhere to start from and something to work with.

Though I am open to being wrong, or overly concerned. Hopefully someone who knows a lot will contribute here.

John Geelan
01-20-2009, 07:46 AM
Just out of curiosity, what use is having a color-managed workflow?

Predictable colour output!
An accurately calibrated and profiled system when used with, say, Photoshop, will result in a printed output almost identical to the view on the monitor. Differences which do exist will be, largely, due to the difference in light sources ie, the radiated light of the monitor and the reflected light from the print.

I'm beginning to see how the term "Colour Management" may be too narrow a term to use regarding LightWave.
There are other technologies within LightWave concerned with processes other than colour.

LightWave, unlike Photoshop, does not offer the user a choice of colour space.
In another thread, and which probably explains the reason why, somebody indicated LightWave employs a floating-point colour space.
The colossal values capable of being represented by floating-point numbers are ideal, and perhaps necessary, to represent everything in a LightWave scene - not just colour alone.

Experts required!

Sarford
01-20-2009, 08:49 AM
Predictable colour output!
An accurately calibrated and profiled system when used with, say, Photoshop, will result in a printed output almost identical to the view on the monitor. Differences which do exist will be, largely, due to the difference in light sources ie, the radiated light of the monitor and the reflected light from the print.

You will never get that, and that has to do with different gamuts. A monitor has a different gamut (different range of colors it can display/print) than a printing press, and one doesn't necessarily fall within the range of the other. The gamut of a printing press is also influenced by the paper stock you use. You can use a calibration profile of a certain type of stock in Photoshop, but I found its resemblance limited. Besides, it would mean you'd need a calibration profile of every paper you use, made on the press you're gonna use.

The best way, I think, is to use as linear a workflow as possible, preferably in HDR, and then do your tone-grading in Photoshop or HDRShop. But for really good color, I think you can't escape proofing.

But don't get too hung up on a totally controllable workflow, because there always will be margins (sometimes pretty hefty) if you change mediums. I never found it to be life-threateningly important anyway.

I've spent quite some money on a color management workflow and found it to be mostly wasted money. Anyhow, that's just my experience, so don't let it put you off (really, no sarcasm here).

Let me know how your experience turns out.

BigHache
01-20-2009, 09:50 AM
It might be better to think of Lightwave (or any 3D package) the way a photographer would approach his subject before opening the shutter. Photographers don't have a color space; they have light meters, lights, and gels to capture the colors of the analogue object in their medium, be it film or digital. You would, essentially, be doing the same thing in Lightwave.

This could be a good example: Say for instance you have a logo that uses PMS 135. You want to model and render a 3D object with that color and reproduce it in print, though it would be a CMYK build in the end. You will never be able to do that because of the object's shape, its surface properties, and how you light it. If you take a real object (that's not flat), paint it PMS 135, then photograph it, that's about the same result.

Sarford has good comments on tone mapping.

John Geelan
01-20-2009, 11:38 AM
Sarford wrote:

.... that has to do with different gamuts. A monitor has a different gamut (different range of colors it can display/print) than a printing press and one doesn't necessarily fall within the range of the other.

That is precisely the purpose of implementing a colour management workflow ie, to ensure colours are not out of gamut to an unacceptable degree for the artist's choice of rendering intent ie, Absolute Colorimetric, Relative Colorimetric, Saturation or Perceptual.
In other words, colour output becomes predictable.


Besides, it would mean you'd need a calibration profile of every paper you use, made on the press you're gonna use.

Exactly! All hardware MUST be accurately calibrated and profiled.


The best way, I think, is to use as linear a workflow as possible ...

In LightWave?
Yes, I agree. But there are problems in implementing a linear gamma workflow in LightWave for predictable output.
There is a set of solutions offered by Gerardo, for just such a linear workflow in LightWave, based on Sebastian Goetsch's SG-CCTools.
My problem is to understand why such a solution is necessary in the first place.
In other words, what are the problems in implementing a linear gamma workflow in LightWave which require the special solutions on offer?


But for really good color, I think you can't escape proofing.

I imagine proofing will always remain part of a good workflow. I can't see anybody committing to a €10,000 print run without first proofing.
However, achieving an acceptable proof should not rely solely on a trial and error process.
An accurate linear gamma workflow should produce a very predictable proof.


... and then do your tone-grading in Photoshop or HDRShop.

I think this is one of the reasons I originally disliked CG. I now suspect tone mapping from a linear to a perceptually corrected colour may have been poorly done or not done at all. At the time I hadn't a clue what the problem might be.


(really, no sarcasm here).

None taken! Many thanks for your input.

Captain Obvious
01-20-2009, 05:20 PM
I imagine LightWave's colour picker, on the other hand, ought to be, by design, operating with a linear gamma, consequently, there is no need for a conversion.
Lightwave's color picker is linear, yes, so if your monitor is nonlinear (as they all are, pretty much) then you need to do stuff. What you see when you pick your color is what you get when you press F9. But if you encode in, say, sRGB, then it's no longer WYSIWYG.
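For reference, sRGB encoding is not a pure 2.2 power curve but a piecewise function. A sketch of the published sRGB transfer function (from the IEC 61966-2-1 specification; independent of anything LightWave itself does):

```python
def srgb_encode(linear):
    """Linear light -> sRGB-encoded value (piecewise IEC 61966-2-1 curve)."""
    if linear <= 0.0031308:
        return 12.92 * linear                       # linear toe near black
    return 1.055 * linear ** (1.0 / 2.4) - 0.055    # power segment

def srgb_decode(encoded):
    """sRGB-encoded value -> linear light (inverse of srgb_encode)."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4

# The linear toe is why "gamma 2.2" is only an approximation of sRGB:
mid = srgb_decode(0.5)      # ~0.214, vs 0.5 ** 2.2 which is ~0.218
```

So "encode in sRGB" above means applying something like `srgb_encode` to the linear render before viewing, which is exactly what breaks WYSIWYG if the picker worked in linear values.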




But are most monitors even good enough to make this worthwhile?
Yes, they all are. There is no monitor in existence that's worse off as calibrated. They can't all become exceptionally accurate, but a bad monitor with good calibration is better than a bad monitor with no calibration.

Mike_RB
01-20-2009, 05:57 PM
LW needs to work in full linear with color picking, and render previewing in gamma corrected or LUT color space. Things like the AA contrast ratio should also take the target gamma into account: since 2.2 stretches the blacks and compresses the whites, our samples aren't going towards smoothing the right areas...

Here is a car setup for linear space and corrected in post to 2.2. Everything looks better when you do this. Your GI is more realistic, needs fewer bounces, and the physical fresnels look strong enough as they are getting punched up the right amount (inverse square falloff lights actually work too). You can't work un-corrected and get as easily realistic results.

http://www.elementvfx.com/WebDemo/alfa.zip

http://www.elementvfx.com/WebDemo/alfa_001.jpg
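The "corrected in post to 2.2" step Mike describes can be sketched with NumPy. This is an illustrative sketch only; a production pipeline would typically apply a LUT or do this in a colour-managed compositor:

```python
import numpy as np

def display_encode(linear_img, gamma=2.2):
    """Encode a linear-light float render for a gamma 2.2 display."""
    return np.clip(linear_img, 0.0, 1.0) ** (1.0 / gamma)

# A row of linear pixel values as a float render might contain them:
render = np.array([0.0, 0.05, 0.218, 0.5, 1.0], dtype=np.float32)
corrected = display_encode(render)
# Shadows are lifted most (0.05 -> ~0.26), which is why un-corrected
# linear renders look too dark and contrasty on a standard monitor.
```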

John Geelan
01-20-2009, 07:40 PM
Mike_RB Wrote:
LW needs to work in full linear with color picking....

Granted! However, implementing a full linear workflow appears to be problematic.
For example, Gerardo has developed a number of linear workflows based on SG-CCTools.
I'm looking for a list of reasons as to why such tools are necessary in the first place.
What is it about LightWave, which is designed for a linear workflow, and yet, which also makes a linear workflow difficult to implement?
Though solutions are on offer, I don't understand them because I don't understand the problems which require a solution.

Your image of the car and its setting is superb - impossible to distinguish from a photograph of an actual car!
Without a full understanding of a linear workflow, I imagine, such realism will forever lie beyond most CG artists - except, perhaps, for those who persevere with a trial and error approach and accidentally achieve excellence.
I downloaded your zip file and look forward to examining it tomorrow!

gerardstrada
01-21-2009, 12:21 AM
I am coming from a successful colour management workflow in Photoshop for the purposes of print graphics.
I am new to CG and LightWave but want to implement a similar colour management workflow in LightWave for the purpose of print graphics.
I understand there are problems involved in doing so and that there are solutions being offered, for example by Gerardo, for a linear workflow based on Sebastian Goetsch's SG-CCTools.
To understand the solution, I must first understand the problem.
Can anybody outline ALL of the problems involved in the implementation of a colour management workflow in LightWave with a view to print graphics?
Please do consider that linear workflow is not color management (at least not in the strict sense of the term); color management may imply a linear workflow, but it's much more than that.

The purpose of a color management workflow is colorimetric consistency. The purpose of a linear workflow is realistic light behavior.

At least within Lightwave, there's no problem with working for print graphics; in fact it's the only 3D package that allows you to work appropriately for any output medium, thanks to the SG_CCTools.

If you already have implemented a successful colour management workflow in Photoshop, you may apply exactly that workflow within Lightwave with the SG_CCTools.



I guess the first and the biggest issue is LW's inability to work in CMYK colour space, due to its nature as an animation and FX package.
Yes, but with the SG_CCTools we can preview our output results in CMYK color space by converting from RGB to CMY to CMYK. With XDepth we can load CMYK images within LW as well and we can have a CMYK pantone with SG_CCPicker. However, I prefer to work in RGB and convert later to CMYK.
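The RGB → CMY → CMYK step Gerardo mentions can be sketched naively like this. This is a device-naive formula for rough previews only; real prepress conversion goes through ICC profiles for the specific inks and paper, as noted elsewhere in the thread:

```python
def rgb_to_cmyk(r, g, b):
    """Naive RGB (0..1) -> CMYK (0..1) via CMY with grey-component
    replacement. For illustration only; not colorimetrically accurate."""
    c, m, y = 1.0 - r, 1.0 - g, 1.0 - b    # RGB -> CMY (subtractive complement)
    k = min(c, m, y)                        # pull the common grey into black ink
    if k >= 1.0:                            # pure black: all ink in K
        return (0.0, 0.0, 0.0, 1.0)
    scale = 1.0 - k
    return ((c - k) / scale, (m - k) / scale, (y - k) / scale, k)

print(rgb_to_cmyk(1.0, 0.0, 0.0))   # pure red -> (0.0, 1.0, 1.0, 0.0)
```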


One scenario which demonstrates the problem can be stated as follows: ...
Yep, that's the idea basically. A similar thing happens with BG footage, 8-bpc textures, BG colors, etc. Things get worse since a BG footage may be in HD, a texture in sRGB, a gradient texture in AppleRGB, and so on, which implies a different gamma and gamut for each one.



I imagine LightWave's colour picker would have a linear gamma?
Nope. Except for the SG_CCPicker, all other pickers choose colors in log space (including LW's). The gamma for a given picked color depends on the monitor's gamma (not necessarily 2.2).


But are you sure there's no conversion that takes place after you've picked a color?
Yes. Except for the SG_CCPicker.


CMYK conversion is the very last step you take if you're working for print. And in many modern printing processes you never use CMYK at all, because the printing hardware driver interface does that for you automatically. In any case the only people that need to work in CMYK are those in prepress. They will do the conversions with the proper transforms for the equipment, inks and paper they are using. Doing it in Photoshop with the default setting yourself is a mistake that can lead to very poor results.
Yes. Besides, many printers these days are RGB printers. However, the possibility within LW to use a CMYK color space as the working color space is there - just in case - for some special cases.


I imagine LightWave's colour picker, on the other hand, ought to be, by design, operating with a linear gamma, consequently, there is no need for a conversion.
At this moment LightWave's color picker chooses colors in log space without any conversion. But we can do that with SG_CCPicker.


First you need to use hardware to calibrate your monitor and create a profile. But are most monitors even good enough to make this worthwhile?
Nowadays, yes. Though depending on the model, there are some limitations in calibration.


I'm thinking of getting a pro to come to my house for an hour and help me untangle this.
You may want to take a look at HDRI3D magazine (http://www.hdri3d.com) Issue#18 (http://www.hdri3d.com/issues/h18.htm) and #19 (http://www.hdri3d.com/issues/h19.htm) :)



Just out of curiosity, what use is having a color-managed workflow? In video most of the output gets put through some form of image manipulation (AE, Combustion, Shake etc) to get some color grading, judged on the appearance on calibrated monitors. Then it goes onto the web or on telly and every recipient has different settings on his or her tv/monitor, seeing something different than what you put out.
Because just as every image device has its own gamut, every image system has a standardized color space too. And if we want to keep at least some color consistency among several image devices, we need - at least - to take care of image system gamuts and color management. Example: See a Pixar movie in a cinema theater. On DVD. An illustration of the movie. The trailer on the web. And you'll see a consistent "look" in any medium. Have you seen someone who doesn't know anything about color management trying to show his/her work to his/her client? Even if they have the same monitor model, they won't see the same intended colors.


Predictable colour output!
Yes! Predictable and consistent :)


LightWave, unlike Photoshop, does not offer the user a choice of colour space.
With SG_CCTools we can choose a working color space within LW and see a result which is as accurate as our monitor is able to display.



The colossal values capable of being represented by floating-point numbers are ideal, and perhaps necessary, to represent everything in a LightWave scene - not just colour alone.
Lightwave works internally in XYZ color model at 128 bpc (some say it's expandable up to 320) which covers the whole human vision and even imaginary colors. The missing thing is that the only color space available to display those colors is our monitor color space.

For LightWave, then, these colors from images or from a color picker are interpreted as "absolute" values and are not dependent on any given color space within it. LightWave displays these colors according to our monitor color space, since that is the best possible way to show us better representations of these "real" colors. However, Lightwave (and all other 3D packages) don't manage colors, and this means that 3D packages don't have the facilities and the system to convert these "absolute" colors to "relative" colors - relative to our monitor's gamut. And this is where SG_CCTools shows its usefulness.
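For what it's worth, the relationship between a linear RGB space and CIE XYZ is just a fixed 3×3 matrix. Below is the standard published linear-sRGB-to-XYZ (D65) matrix as a sketch; whether LightWave itself computes in XYZ is questioned later in the thread:

```python
# Linear sRGB -> CIE XYZ, D65 white point (standard published matrix).
SRGB_TO_XYZ = (
    (0.4124, 0.3576, 0.1805),
    (0.2126, 0.7152, 0.0722),
    (0.0193, 0.1192, 0.9505),
)

def srgb_linear_to_xyz(r, g, b):
    """Map linear-light sRGB components to CIE XYZ tristimulus values."""
    return tuple(row[0] * r + row[1] * g + row[2] * b for row in SRGB_TO_XYZ)

# Reference white (1,1,1) lands on D65: X ~0.9505, Y = 1.0, Z ~1.0890.
X, Y, Z = srgb_linear_to_xyz(1.0, 1.0, 1.0)
```

Note the middle row is the luminance weighting (Y), which is why Y equals exactly 1.0 for reference white.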



You will never get that, and that has to do with different gamuts. A monitor has a different gamut (different range of colors it can display/print) than a printing press, and one doesn't necessarily fall within the range of the other. The gamut of a printing press is also influenced by the paper stock you use. You can use a calibration profile of a certain type of stock in Photoshop, but I found its resemblance limited. Besides, it would mean you'd need a calibration profile of every paper you use, made on the press you're gonna use.
It's not so cheap right now, but we can get that. New aRGB monitors allow this, since CMYK color spaces are contained within the aRGB color space (even when aRGB monitors are 5% smaller than the device-independent color space AdobeRGB(1998)). People interested may want to take a look at this thread (http://forums.cgsociety.org/showthread.php?p=5625012&p=5605921) about some color space differences.


Photographers don't have a color space
mmm... this is not accurate. As photographers, we indeed have color spaces. A digital camera has its own color space, a film stock has its own color space, a photographic paper has its own color space, the scanner has its own color space, etc. Even human vision has its own color space. Did you know that there is an Amazonian butterfly that has a blue color so blue that it exceeds human vision's perception?


This could be a good example: Say for instance you have a logo that uses PMS 135. You want to model and render a 3D object with that color and reproduce it in print, though it would be a CMYK build in the end. You will never be able to do that because of the object's shape, its surface properties, and how you light it. If you take a real object (that's not flat), paint it PMS 135, then photograph it, that's about the same result.
It has nothing to do with color spaces but yes, I agree with this :)



In other words, what are the problems in implementing a linear gamma workflow in LightWave which require the special solutions on offer?
If I understand your question: SG_CCTools simplifies A LOT the implementation of the classic linear workflow within Lightwave by providing highly accurate results at gamma level. And it's the only system so far - and I'm talking about all 3D packages here - that allows implementing a linear workflow at gamut level.


An accurate linear gamma workflow should produce a very predictable proof.
Maybe some people get surprised by this but linear workflow has not much to do with soft-proofing. Soft-proofing has more to do with color management.



I think this is one of the reasons I originally disliked CG. I now suspect tone mapping from a linear to a perceptually corrected colour may have been poorly done or not done at all. At the time I hadn't a clue what the problem might be.
You may want to take a look at this thread (http://www.spinquad.com/forums/showthread.php?t=19309&p=230619) about the usage of tonemapping in this regard.



For example, Gerardo has developed a number of linear workflows based on SG-CCTools.
I'm looking for a list of reasons as to why such tools are necessary in the first place.
What is it about LightWave, which is designed for a linear workflow, and yet, which also makes a linear workflow difficult to implement?

It seems there is a misunderstanding about the LCS workflows I proposed in HDRI3D magazine and SG_CCTools. When I designed those workflows, SG_CCTools didn't exist. It's after Sebastian Goetsch read the article that he developed the SG_CCTools (some color specialists told me that was impossible - even more so within LW). And though the SG_CCTools was designed specially to work with the classic linear workflow, it's completely compatible with any of the 3 LCS workflows proposed, because of its color management facilities.



Gerardo

Lightwolf
01-21-2009, 02:38 AM
Lightwave works internally in XYZ color model at 128 bpc (some say it's expandable up to 320)
Not quite. It's RGB and 32bpc (bits per channel) at least as far as the storage goes. Some calculations may happen at 64bpc internally.
That gives you 96bits per RGB pixel or 128bits per RGBA. More if you add the other buffers that LW can produce to the equation (39 last time I checked, a total of 39*32 = 1248bits, 156 bytes - a meaningless number, except for marketing purposes though ;) ).

Cheers,
Mike
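Mike's arithmetic, spelled out (the figures - 32 bpc storage, 39 buffers - are his; treat them as this post's assumptions, not verified LW internals):

```python
bpc = 32                                  # bits per channel (32-bit float storage)
rgb_bits = 3 * bpc                        # 96 bits per RGB pixel
rgba_bits = 4 * bpc                       # 128 bits per RGBA pixel
buffers = 39                              # extra render buffers Mike counted
all_buffer_bits = buffers * bpc           # 39 * 32 = 1248 bits per pixel
all_buffer_bytes = all_buffer_bits // 8   # 156 bytes per pixel
```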

gerardstrada
01-21-2009, 11:24 AM
Not quite. It's RGB and 32bpc (bits per channel) at least as far as the storage goes. Some calculations may happen at 64bpc internally.
Yes, it stores color in the RGB model, but as far as I understand, the renderer processes internally in the XYZ color model. However, one way or another, they are able to cover the whole of human vision.



That gives you 96bits per RGB pixel or 128bits per RGBA. More if you add the other buffers that LW can produce to the equation (39 last time I checked, a total of 39*32 = 1248bits, 156 bytes - a meaningless number, except for marketing purposes though ).
Wow! that's even more if we take into account other buffers. Thanks for the info, Mike.



Gerardo

Lightwolf
01-21-2009, 11:58 AM
Yes, it stores color in the RGB model, but as far as I understand, the renderer processes internally in the XYZ color model. However, one way or another, they are able to cover the whole of human vision.
It's RGB for sure. XYZ is a lot newer than the LW renderer and has only hit the rendering scene a few years ago.

Cheers,
Mike

John Geelan
01-21-2009, 03:56 PM
Gerardo - Many thanks for taking time out to post - much appreciated!:)


The purpose of a color management workflow is colorimetric consistency. The purpose of a linear workflow is realistic light behavior.

This is the clarity I'm looking for - a great help!


You may want to take a look to HDRI3D magazine Issue#18 and #19

I've ordered issues #18 and #19 of the HDRI magazine. I should have them next week.

Now that I won't further confuse linear workflow with colour management, and based on the info in your post, I'd like to try a summary of what I understand you are saying. Please correct me where I'm wrong.:thumbsup:

I'll divide the summary into two parts; 1) Colour Management in LightWave and 2) Linear workflow in Lightwave.


1) Colour Management in LightWave.

* At the heart of LightWave is a device-independent colour engine called CIE XYZ.
* XYZ is a perceptually based colour space which doesn't contain actual colours, only a mathematical definition of colour.
* LightWave does not manage colour ie, it cannot, by itself, translate an abstract definition of colour into "actual" colour.
* Image colours or picker colours are ambiguous within LightWave.
* For LightWave to display "actual" colour on a monitor, a monitor profile is required.
* A monitor profile describes the behaviour of the monitor ie, defines its colour space.
* It is the monitor profile which gives meaning to the RGB values of image colours or picker colours.
* The monitor profile removes the ambiguity of the RGB values of the XYZ colour space and gives them "actual" colour appearence according to the monitor profile's colour space RGB values.
* LightWave utilizes floating-point numbers to capture, store and reproduce every value of every detail in a scene.
* The reason LightWave does not offer a choice of a working colour space to the user, like photoshop does, is that working colour spaces are gamma corrected and device dependant.
* Working colour spaces, such as ProPhoto RGB, would undermine the linear workflow intended in LightWave's design.
* The vast range of colour and light values which constitute a scene are saved in the rendered file with their colour referencing only the XYZ colour space ie, in linear gamma.


2) Linear Workflow in LightWave

* Non-linear bitmapped images and textures must be linearized immediately ie, during the pre-render stage. (Does BG mean bitmapped graphic?)
* The Windows picker and LWPicker select only gamma-corrected colours (colours in log space) at the monitor's gamma setting.
* SG_CCTools enables the selection of colours in a linear gamma.
* SG_CCTools simplifies a linear workflow.
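To make the linearization idea concrete, here is a quick sketch (Python, assuming a simple 2.2 power-law gamma rather than the exact piecewise sRGB curve):

```python
def linearize(value, gamma=2.2):
    """Decode a gamma-encoded value in [0, 1] to linear light.

    Assumes a plain 2.2 power law, not the exact sRGB curve.
    """
    return value ** gamma


def gamma_encode(value, gamma=2.2):
    """Encode a linear value back to gamma (display) space."""
    return value ** (1.0 / gamma)


# A mid-grey picker value of 0.5 is much darker in linear light:
linear = linearize(0.5)       # ~0.218
back = gamma_encode(linear)   # round-trips to 0.5
```

The 1/2.2 = 0.4545 figure that keeps coming up in these discussions is just the encoding direction of the same power law.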

I have other thoughts on linearization, but the above are the most important for now.:D
Thanks again!

JonW
01-21-2009, 04:31 PM
First of all we all need to get our eyes calibrated. Otherwise everything downstream is a waste of time.

My optometrist said I am about 1 - 2% weak in yellow/green, so as a result I have to be careful not to wind up these colours.

If we are worried about colour accuracy we should be working in a grey room - very easy to create, but very few people do.

http://www.creativepro.com/article/the-darkroom-makes-a-comeback

Someone I met just can’t see green at all. From what I have read 10% of males have issues with colour perception.

Mr Rid
01-21-2009, 05:02 PM
...
Can anybody outline ALL of the problems involved in the implementation of a colour management workflow in LightWave with a view to print graphics?

No. :D

This will become an endless discussion as everyone handles things a little differently. I don't care what anyone says, no one fully understands all the variables among different apps, monitors, formats, gammas, platforms and color spaces. There is always another variable in the 'color space conundrum', as American Cinematographer called it.

At the end of the day, someone along the pipeline just winds up eyeballing it. If it looks right, then it is right no matter what the data says.

Cageman
01-21-2009, 05:27 PM
At the end of the day, someone along the pipeline just winds up eyeballing it. If it looks right, then it is right no matter what the data says.

Ohh... this is the kind of attitude I like... :)

Listen to Mr.Rid... he IS right about this. The data should ALWAYS be considered a guide, not a rule.

gerardstrada
01-21-2009, 09:48 PM
It's RGB for sure. XYZ is a lot newer than the LW renderer and has only hit the rendering scene a few years ago.

Cheers,
Mike
Not so sure, since the RGB color space is described in the XYZ model, which dates from the 1930s, I think. Maybe someone from NT can clarify this.

Anyway, even if it's as you say, the important thing for color management is that both cover the whole range of human visual perception.



Gerado - Many thanks for taking time out to post - much appreciated!

You're very welcome, John. I hope to help and not confuse things more :D


1) Colour Management in LightWave.

* At the heart of LightWave is a device-independent colour engine called CIE XYZ.

Or RGB according to Mike :)



* XYZ is a perceptually based colour space which doesn't contain actual colours, only a mathematical definition of colour.
* LightWave does not manage colour ie, it can not, by itself, translate an abstract definition of colour into "actual" colour.
When we choose, let's say, a blue color in LW - or any other 3D package - then as far as the 3D package is concerned, we are choosing the bluest blue that our vision is able to perceive. However, the real thing is that we are seeing the bluest blue that our monitor is able to display. And when we work in that way, we are working in monitor space. So if by actual colors you mean the colors we are seeing on our monitor, then yes, that's right.



* Image colours or picker colours are ambiguous within LightWave.
In fact they are not ambiguous; they are in monitor color space. And that's the problem for some mediums, like film.



* For LightWave to display "actual" colour on a monitor, a monitor profile is required.
LW will always display a color in monitor space, even if we have not calibrated our monitors. LW (through SG_CCTools) requires a monitor profile to manage colors appropriately.


* A monitor profile describes the behaviour of the monitor ie, defines its colour space.
Yes, we can say it's the standard way of describing a color space.



* It is the monitor profile which gives meaning to the RGB values of image colours or picker colours.
Picker colors are always in monitor color space, but not colors from images. What the monitor color profile does is tell the color engine: you come from here (any color space) and you go to here (the monitor profile), by remapping the color values according to each gamut.


* The monitor profile removes the ambiguity of the RGB values of the XYZ colour space and gives them "actual" colour appearance according to the monitor profile's colour space RGB values.
Only when it's part of a color management system.


* LightWave utilizes floating-point numbers to capture, store and reproduce every value of every detail in a scene.
Gamuts are described in 16 bits. The curious thing is that these floating-point values are more useful for contrast ratio (brightness levels in relation to the darks).



* The reason LightWave does not offer a choice of working colour space to the user, like Photoshop does, is that working colour spaces are gamma-corrected and device-dependent.
Nope. There are device-dependent color spaces (monitors, scanners, digital cameras, video cameras...) and there are device-independent color spaces (sRGB, aRGB, AppleRGB, CIERGB, ProPhotoRGB...), and they all work well in either log or lin space.

The reason why LW - or any other 3D package - doesn't offer a working colour space option is... only they know that. I assume it's because this is still a new paradigm in professional workflows and nobody has cared about it before. Many editors (software) don't offer this option yet either. But this is changing pretty fast.



* Working colour spaces, such as ProPhoto RGB, would undermine the linear workflow intended in LightWave's design.
Not at all. Remember there are linear versions of color spaces. See this linear render:

http://imagic.ddgenvivo.tv/forums/SGCCTools/1.jpg
It's ProPhoto.

Same pic gamma-corrected:
http://imagic.ddgenvivo.tv/forums/SGCCTools/2.jpg



* The vast range of colour and light values which constitute a scene are saved in the rendered file with their colour referencing only the XYZ colour space ie, in linear gamma.
No. This depends on whether you have managed colors or not. If we have managed colors, we're gonna save the rendered image in our working color space. If we have not managed colors, we're gonna save the rendered image according to our monitor color space. And that's a disadvantage of not implementing a color management workflow.



2) Linear Workflow in LightWave

* Non-linear bitmapped images and textures must be linearized immediately ie, during the pre-render stage. (Does BG mean bitmapped graphic?)
* The Windows picker and LWPicker select only gamma-corrected colours (colours in log space) at the monitor's gamma setting.
* SG_CCTools enables the selection of colours in a linear gamma.
* SG_CCTools simplifies a linear workflow.

Yes. That's right. Just consider that the selection of a color from a color picker is always in log space. The difference is whether the picker is able to linearize a given color appropriately.

Btw, there are several linear workflows for LightWave - or any other package or renderer (Classic Linear workflow, Perceptual Linear workflow, Inverse Linear workflow, Multipass Linear workflow...) and each one has its own particularities. I guess you are referring to the classic linear workflow - which provides the most accurate results :)


If we are worried about colour accuracy we should be working in a grey room - very easy to create, but very few people do.

Other people say it should be black, but that's too much, I think :)


First of all we all need to get our eyes calibrated. Otherwise everything downstream is a waste of time.

My optometrist said I am about 1 - 2% weak in yellow/green, so as a result I have to be careful not to wind up these colours...

...Someone I met just can’t see green at all. From what I have read 10% of males have issues with colour perception.

About daltonism and differences in color perception among people, well, you are very right. That's the reason, I guess, why - as happens with audio - some people are more talented at color grading.



No.

This will become an endless discussion as everyone handles things a little differently. I don't care what anyone says, no one fully understands all the variables among different apps, monitors, formats, gammas, platforms and color spaces. There is always another variable in the 'color space conundrum', as American Cinematographer called it.

At the end of the day, someone along the pipeline just winds up eyeballing it. If it looks right, then it is right no matter what the data says.

I agree that this is a very wide topic, that studios and specialists have opposing opinions, and that it's better to use whatever best fits our needs. But for that, we need to be really sure that what we are seeing is what we are really getting. At least in 3D packages, this doesn't happen when we work for output mediums with wider gamuts.

Remember, a color management workflow is not an artistic option. It's a technical one.

In CG for motion picture production, for example, some CG artists, professionals and studios don't understand very well the importance of color management because they are not in charge of it (and they shouldn't be). Many 'errors' that are not seen on their computer monitors are present on the cinema screen later, and only then does one understand the importance of this matter. But even when something like that happens, they don't care, because it's not part of their job to fix the situation. These issues are solved in the DI process. But there are some limitations, and this is the reason why a great CG artist (or studio) may deliver a not-so-excellent take (talking about CG integration), and yet the same CG professional can deliver an awesome next take. This has nothing to do with his/her inspiration, skills or talent. It has to do with color management.

Depending on the monitor color space, some hue relationships can be clipped for a CG element (cold or warm tones according to color spaces). So when it comes to DI, there's not much that the colorist can do, because even if he/she uses an expandable rendering intent (like Perceptual or an in-house solution), there's no room to create the appropriate proportion between hues. This is like trying to get more detail in the highlights of an LDR image. Then a take looks CG-ish. This can already be solved with a system like SG_CCTools, and this is the reason why some clever people are beginning to care about color management in the lighting capture process, to get consistent results in any lighting condition or color grading style.

If this didn't matter, there would be no reason why a studio like Pixar (who shares several technical aspects of its work) has invested so much time and money in its color management systems and is so hermetic about how it works.

However, as an excellent CG artist told me, this stuff should be hidden from our eyes. And he is right, I think. But till then, we can choose to ignore it (as some great CG artists do with linear workflows) or deal with it.



Gerardo

toby
01-21-2009, 11:55 PM
brain... bleeding... she con't take i much longa captain, she's gonna blooo

I'm just sitting here taking notes, figger it out later =)
thanks y'all!

WShawn
01-22-2009, 02:49 PM
We recently purchased a 24" wide-gamut LCD monitor (S-IPS panel) for our non-Lightwave workstation and have had to use the color management tools in Photoshop and After Effects to make sure the monitor displays those colors correctly. We calibrated the monitor using a Spyder 3 Elite system, but even calibrated, untagged images, such as those in the Safari web browser, and even the system icons, look cartoonishly red.

The Adobe software will tag the images and display their colors correctly based on the target output device. 95% of our work is video and animation, so we target for NTSC SDTV.

I'm still working on a 19" LaCie CRT monitor for LW, but I'm itching to switch to a larger widescreen LCD monitor. Without a way to tag images rendered in Lightwave, when I display such an image after an F9 render I won't know if the colors are what I want. I might have an image with a little red in it, but if the monitor doesn't know the image was rendered for an sRGB colorspace or whatever, it's going to make the reds appear blown out. So then I reduce the reds in the scene so it renders okay on the monitor, but when I bring the untagged rendered image (.png, .psd, .tga, whatever) into After Effects my reds suddenly look desaturated when viewed in software that uses color management.

Not ideal.

Shawn Marshall
Marshall Arts Motion Graphics

John Geelan
01-22-2009, 03:58 PM
I hope to help and not confuse things more.

Superb post, Gerardo - enlightenment is s l o w l y dawning!:D

Be delighted if you can comment on the following:


Or RGB according to Mike

* LightWave's device-independent colour space, whether XYZ or RGB, must satisfy two essential requirements: a) be a linear colour space and b) have a colour gamut greater than or equal to the colour gamut of human colour vision.



When we choose, let's say, a blue color in LW ... we are choosing the bluest blue that our vision is able to perceive.

* Using either the Windows colour picker or LW_Picker, the blue we are choosing is an absolute blue - a mathematical definition embracing all possible shades of blue as defined in LW's device-independent colour space.
But the "actual" shade of blue we see on the monitor is a shade of blue whose RGB values have been remapped, by the colour engine, to blue values within the gamut of the monitor colour space.



In fact they are not ambiguous; they are in monitor color space. And that's the problem for some mediums, like film.

* What I actually meant to say was - Until selected, picker colours are ambiguous within LightWave.
It is the monitor's colour space and gamma calibration which determine the colours we see.



Picker colors are always in monitor color space, but not colors from images.

* I'm guessing, but I think I see now why bitmapped images might require separate consideration from colour pickers.
All bitmapped images brought into LightWave are in log space. They will have a non-linear gamma because they will have been saved in their working colour space and colour intent if, of course, they were created in a colour management workflow. Otherwise, they will have been saved in the monitor colour space used in their creation.
It never occurred to me before - but, is it possible to create a linear bitmap?
I ask because colour management in Photoshop cannot truly be turned off - the default working colourspace and colour intent are always active.

* Now I'm confused! Your linear schoolroom image is in jpeg which, I believe, is a gamma encoded format - how did you avoid gamma encoding? Or did you?

* When a bitmap in log space is brought into LightWave, Lightwave's virtual camera, which operates with linear gamma, reinterprets the bitmap's colour as linear. In the rendered output from this virtual camera, the bitmap colour will have changed due to two conversions.
The first conversion is effected by the camera assuming the bitmap to be linear.
A bitmap in, for example, gamma 2.2 suffers a colour distortion factor of 1/2.2 = 0.4545 gamma.
The second distortion occurs when this, already distorted, bitmap colour is remapped into the monitor colour space.
The distortion factor now becomes 2.2/1.7455 = 1.2603 gamma.

* The distortion factor of 0.4545 can be corrected pre-render stage.
However, lighting features such as shadows, light falloff, refraction, reflections and caustics will also require correction as a consequence.

* The infinity of values capable of being represented by floating-point precision assures all light values will be preserved.


If we have managed colors, we're gonna save the rendered image in our working color space. If we have not managed colors, we're gonna save the rendered image according to our monitor color space. And that's a disadvantage of not implementing a color management workflow.

So saving a rendered image in .exr format in LightWave, with the intention of tone mapping in Photoshop Extended, and eventually printing as an HDR image, really necessitates a large working colour space? Otherwise, not only will predictable and consistent colour be unobtainable, but the relatively small gamut of the monitor colour space will have discarded much of the image's light and colour values.

* I have to say I'm really looking forward to getting into your two articles! Many thanks.:thumbsup:

Lightwolf
01-22-2009, 04:26 PM
All bitmapped images brought into LightWave are in log space.
Not quite... HDRs should be linear (and are if saved properly).
Not log space either, but gamma space. Log space would be footage from a film scan, i.e. DPX or CIN files (usually).

Cheers,
Mike

John Geelan
01-22-2009, 04:38 PM
Not log space either, but gamma space = gamma 1?

I was thinking of bitmapped images created in Photoshop.

kopperdrake
01-22-2009, 04:44 PM
I do a lot of print work, and believe me, with all the technological approaches you can take along the way, I've found the best way is to render out an image close to what you're after, then sit there in Photoshop with the CMYK colour picker (still in RGB mode though - never give up that extra RGB information), and a process colour book open and make sure your important colours are correct to the book. Even then your printer's press will have its own biases, but that's the best way I've found. There's no way to guarantee a colour output from Lightwave unless you have the simplest of scenes, and that tends to not be why we work in 3D.

Lightwolf
01-22-2009, 04:50 PM
= gamma 1?
That's linear though...

And I suppose even in PS it depends heavily on your settings - as well as the file format.

Cheers
Mike

Andrewstopheles
01-22-2009, 07:15 PM
I do a lot of print work, and believe me, with all the technological approaches you can take along the way, I've found the best way is to render out an image close to what you're after, then sit there in Photoshop with the CMYK colour picker (still in RGB mode though - never give up that extra RGB information), and a process colour book open and make sure your important colours are correct to the book. Even then your printer's press will have its own biases, but that's the best way I've found. There's no way to guarantee a colour output from Lightwave unless you have the simplest of scenes, and that tends to not be why we work in 3D.

This is an excellent example of a workflow applied in the real world. I came to LW after many years in prepress working in a color managed environment: grey walls, color corrected lighting, viewing booths, photodensitometers and spectrophotodensitometers, various proofing devices and ten "10 color" printing presses printing on various substrates with various ink systems. I can tell you that you will blow your brains out looking at all the possibilities, so the best approach is to solve it project by project similar to kopperdrake's post.

You need color management for print. Period. You need some sort of color management workflow for 3D as well, if consistent color is important to the job (almost always true). However, if you make the effort to input reasonably accurate colors then you will have a better chance to get reasonably accurate colors out. You must consider the temperature of the lights used in your scene, the color of any reflections and bounced or radiosity light, shadow colors and the list goes on and on.

Personally, I am interested in learning more about "tone mapping", as it sounds like a straightforward approach to determining final color.

I will be watching this thread closely. Lots of knowledgeable individuals have already posted some valuable info here.

Mr Rid
01-22-2009, 07:51 PM
...

However, as an excellent CG artist told me, this stuff should be hidden from our eyes. And he is right, I think. But till then, we can choose to ignore it (as some great CG artists do with linear workflows) or deal with it.



Gerardo

I don't find it a matter of ignoring or not ignoring. It just isn't possible to correctly manage color all the way through a 3D-integrated-with-film production pipeline according to any standard or set of rules, because there currently aren't any (the ACS has proposed coming up with some). And every vendor along the pipe has their own way of doing things. I can't even get a QT to look the same on a Mac as it does on a PC monitor.

The big color space problem I have is viewing elements on our monitors the way they will appear on film. If you have to make photoreal CG integrate with a log cineon plate, what should the 3D artists be lighting to? This gets complicated because the 3D artist needs to view the plate in LW the same way the 2D artist is viewing it in a log-to-lin space, under a theatrical display LUT (from Kodak in our case), in order to know how things will actually look on film. I'm talking about a display LUT that is not baked into the image in any way.

Using CCTools to recreate the LUT in LW just turned into a mess that had everyone scratching their heads. Procedural colors and HV did not respond the same way as color-managed maps. More importantly, everything just appeared way too bizarro under the CCTools LUT. It didn't make any sense to the lighters. We found it impossible to get CG to look the same way under the LUT as it would normally look without the LUT. All we could do was ballpark values and leave it to 2D to fudge elements into place. This leaves things too subjective on both ends. For 3D, it was like trying to sculpt with boxing gloves, and 2D gets a purple cat and assumes it's supposed to look that way.

The simpler solution was to just linearize the cineons (exrs, jpgs, etc) for the LW artists to light to and skip the LUT. But then the problem is that the exrs out of LW would respond completely differently under the 2D department's display LUT than the cineon log-to-lin plate.

We went around for months, talking to other post pros, reading articles and tutorials... each proposed setting/profile 'solution' just brought on a new set of incompatibilities. Compatibility is like the original sin of all things computer, as we endlessly debate the true path.

Back to eyeballing.

John Geelan
01-23-2009, 04:24 AM
Without a way to tag images rendered in Lightwave .... Originally posted by WShawn.

From what I gather, tagging an image in LW may not be necessary for colour accuracy, if the rendered image is saved in a HDR format ie, .exr.
The resulting HDR file is a step above a RAW image generated in a digital camera.
Most RAW images are linear and tagging them with a colour space takes place only when opened in an application such as Adobe's Camera RAW. The huge advantage of the RAW image is that the artist has complete control as to how the image will be interpreted ie, white balance, temperature, contrast etc.
By opening a HDR image in Photoshop Extended, you can tag it with a colourspace and then save it in a bitmap format for usage elsewhere.

John Geelan
01-25-2009, 02:59 AM
Anybody know how to linearize/un-gamma a bitmap or, for that matter, create a linear bitmap?

gerardstrada
01-25-2009, 04:06 AM
We recently purchased a 24" wide-gamut LCD monitor (S-IPS panel) for our non-Lightwave workstation and have had to use the color management tools in Photoshop and After Effects to make sure the monitor displays those colors correctly. We calibrated the monitor using a Spyder 3 Elite system, but even calibrated, untagged images, such as those in the Safari web browser, and even the system icons, look cartoonishly red.

The Adobe software will tag the images and display their colors correctly based on the target output device. 95% of our work is video and animation, so we target for NTSC SDTV.

I'm still working on a 19" LaCie CRT monitor for LW, but I'm itching to switch to a larger widescreen LCD monitor. Without a way to tag images rendered in Lightwave, when I display such an image after an F9 render I won't know if the colors are what I want. I might have an image with a little red in it, but if the monitor doesn't know the image was rendered for an sRGB colorspace or whatever, it's going to make the reds appear blown out. So then I reduce the reds in the scene so it renders okay on the monitor, but when I bring the untagged rendered image (.png, .psd, .tga, whatever) into After Effects my reds suddenly look desaturated when viewed in software that uses color management.

Not ideal.
It sounds as if that monitor isn't very well calibrated. You shouldn't see that red predominance there. Every monitor has its own particularities, and some few monitors (even from prestigious brands) can have manufacturing issues (they are commonly damaged in the distribution process when people don't treat them as fragile freight). You may want to raise that issue with your vendor.

About how to see in your compositing package what you see in LW, there are two ways. If you have not managed colors within LW, assign your monitor color space to your rendered images to see the colors you intended within LW. If you have managed colors within LW (with the SG_CCTools), just assign your chosen working color space.


* LightWave's device-independent colour space, whether XYZ or RGB, must satisfy two essential requirements: a) be a linear colour space and b) have a colour gamut greater than or equal to the colour gamut of human colour vision.
Yes.


* Using either the Windows colour picker or LW_Picker, the blue we are choosing is an absolute blue - a mathematical definition embracing all possible shades of blue as defined in LW's device-independent colour space.
But the "actual" shade of blue we see on the monitor is a shade of blue whose RGB values have been remapped, by the colour engine, to blue values within the gamut of the monitor colour space.
Yes, that's basically the idea: LW will use our monitor color space to show those colors. BUT a color engine is in fact a color management module (CMM) and it's used for color conversions (from a source gamut to a destination gamut). Since LW doesn't manage colors (by default), it's not using any color engine (well, the OS has a color engine, but LW doesn't use any of those capabilities unless we are using the SG_CCTools).



* What I actually meant to say was - Until selected, picker colours are ambiguous within LightWave.
It is the monitor's colour space and gamma calibration which determine the colours we see.
In that case, yes, of course.


It never occurred to me before - but, is it possible to create a linear bitmap?
8-bit images are in log space/gamma-encoded space because it's the most efficient way of storing 8-bit images. Besides, they look perceptually uniform to us. This may sound a bit confusing, but in order for us to perceive, let's say, a gradient as uniform, it should be in log space. If it were in linear space, we wouldn't perceive it as uniform. That's the reason why linear images look so dark and contrasted. We can indeed create a linear 8-bpc bitmap; however, since each channel only supports 256 tones, we wouldn't have enough bit depth to recover shadow details (and even highlights), and our corrected result could be clipped or posterized due to the quantization of these values.
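The shadow-detail point can be made concrete by counting 8-bit codes (a rough sketch assuming a plain 2.2 power-law encoding rather than the exact sRGB curve):

```python
def shadow_codes(limit_linear, encoding_gamma):
    """Count distinct 8-bit codes that represent linear values below
    `limit_linear` when the file is stored with a power-law encoding
    gamma (1.0 = raw linear storage)."""
    encoded_limit = limit_linear ** (1.0 / encoding_gamma)
    return int(encoded_limit * 255) + 1  # codes 0 .. floor(limit * 255)


linear_8bit = shadow_codes(0.1, 1.0)  # raw linear 8-bit file
gamma_8bit = shadow_codes(0.1, 2.2)   # gamma-2.2 encoded 8-bit file
# A gamma-encoded file spends roughly 3-4x more of its 256 codes on
# the darkest 10% of linear light, which is why an 8-bit *linear*
# bitmap posterizes in the shadows.
```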


I ask because colour management in Photoshop cannot truly be turned off - the default working colourspace and colour intent are always active.
Yes, it's always active in PS, but if for some reason you want to 'simulate' a non-color-managed environment, you can set your monitor color space as the working color space (images from other color spaces should be converted too).



* Now I'm confused! Your linear schoolroom image is in jpeg which, I believe, is a gamma encoded format - how did you avoid gamma encoding? Or did you?
First, any color space is susceptible to being linearized (we can change its gamma to 1.0). And any image is susceptible to gamma adjustments, no matter its color space. E.g.: we can work in a gamma-encoded space, linearize the image, and save this image in log space. Later, when this image is previewed as gamma-encoded, we'll see a linear result. This is the reason why we can do the opposite as well; that is to say, we can save a floating-point image in log space (which would be an error if we assume this image is linear).


* When a bitmap in log space is brought into LightWave, Lightwave's virtual camera, which operates with linear gamma, reinterprets the bitmap's colour as linear. In the rendered output from this virtual camera, the bitmap colour will have changed due to two conversions.
The first conversion is effected by the camera assuming the bitmap to be linear.
A bitmap in, for example, gamma 2.2 suffers a colour distortion factor of 1/2.2 = 0.4545 gamma.
Yes, LW will assume those colors are linear when they aren't.


The second distortion occurs when this, already distorted, bitmap colour is remapped into the monitor colour space.
The distortion factor now becomes 2.2/1.7455 = 1.2603 gamma.
Nope. If we leave the render untouched (no gamma correction), the output image will appear the same as the input image. If we gamma-correct (2.2) those colors, that would be like applying the 2.2 gamma exponent twice! Which is not 4.4, but I'll explain why later.
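That an untouched render leaves a gamma-encoded input looking unchanged can be checked with a quick sketch (assuming an idealized 2.2 monitor and a pass-through render):

```python
GAMMA = 2.2  # idealized file-encode and monitor-decode gamma

# A texture pixel authored as linear 0.5 is stored gamma-encoded:
stored = 0.5 ** (1.0 / GAMMA)

# A pass-through render leaves the value untouched; the monitor's
# 2.2 decode then undoes the file's 1/2.2 encode:
displayed = stored ** GAMMA
# displayed is back at 0.5, so the render looks like the source image
```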


* The distortion factor of 0.4545 can be corrected pre-render stage.
Yes.


However, lighting features such as shadows, light falloff, refraction, reflections and caustics will also require correction as a consequence.
Only colors (that affect rendered colors) should be linearized, which are commonly light colors (or shadow colors if you are using them), Backdrop colors, colors from the environment, surface colors (plain colors and procedurals), gradient colors, BG (background) colors, etc. After linearization, light falloff should be inverse square distance; and shadows, light spreading, light bounces, refractions, reflections, caustics, DOF, bokeh effects, glare and many other optical effects will behave realistically after gamma correction.
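The inverse-square falloff mentioned here is simple to sketch (a hypothetical helper for illustration, not an LW API):

```python
def inverse_square_intensity(base_intensity, distance, reference=1.0):
    """Physically based light falloff: intensity drops with the
    square of the distance from the source."""
    return base_intensity * (reference / distance) ** 2


near = inverse_square_intensity(1.0, 1.0)  # full intensity at the reference distance
far = inverse_square_intensity(1.0, 2.0)   # double the distance, a quarter of the light
```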


* The infinity of values capable of being represented by floating-point precision assures all light values will be preserved.
Yes! this is why an FP pipeline is so important.


So saving a rendered image in .exr format in LightWave, with the intention of tone mapping in Photoshop Extended, and eventually printing as an HDR image, really necessitates a large working colour space? Otherwise, not only will predictable and consistent colour be unobtainable, but the relatively small gamut of the monitor colour space will have discarded much of the image's light and colour values.
I think this depends on the specific equipment, each project and the output medium. If we have an aRGB monitor, we don't need to worry about CMYK conversions, since CMYK is completely contained in aRGB. Even sRGB monitors these days (which don't have precisely the sRGB color space - they have their own color space) are in fact bigger than most CMYK color spaces. The problem with gamuts here is not precisely the size, but the shape. I've posted this before and I'll do it again:

http://imagic.ddgenvivo.tv/forums/Vcmyk1.png

We can see here a three-dimensional color space representation. The larger gamut is a CRT monitor; the smaller one is a CMYK color space. We might think the CRT monitor's gamut is simply larger than CMYK. If we don't take the shape into account, yes (and that explains why our colors change so much when we convert them to CMYK). But if we consider the shape, we'll notice this:

http://imagic.ddgenvivo.tv/forums/Vcmyk2.png

Some CMYK hues are not contained by the monitor, even though its gamut is larger.

Same thing happens with sRGB:

http://imagic.ddgenvivo.tv/forums/Vsrgb2.png

There are several hues not contained in sRGB color space and vice versa.

Things are even more noticeable with aRGB monitors (which really cover about 95% of the aRGB color space):

http://imagic.ddgenvivo.tv/forums/Vargb.png

There's an advantage in gamut shape for CG work when a large gamut (working color space) and a small gamut (monitor) share similar shapes.


* I have to say I'm really looking forward to getting into your two articles! Many thanks.
Hope you find it useful! :beerchug:



Gerardo

gerardstrada
01-25-2009, 04:19 AM
Not log space either, but gamma space. Log space would be footage from a film scan, i.e. DPX or CIN files (usually).
We can use the term log space for gamma-corrected colors too. Not only does film density have a logarithmic response to exposure; human vision is also roughly logarithmic, and a gamma exponent of 2.2 approximates that logarithmic visual perception. A gamma exponent is, in effect, another way to describe a nonlinear transfer function, and this is why a gamma correction of 2.2 applied twice is not the same as a single gamma exponent of 4.4: exponents compound multiplicatively, not additively. The resultant exponent is 2.2 x 2.2 = 4.84, which does match a 2.2 gamma exponent applied twice. Even the color space SMPTE 196M (a theatrical cinema standard) has a gamma of 2.2 (RSR_D55_g2.2.xml for cineSpace), and we couldn't say for any conversion with this cinema standard that we are not making a log2lin or lin2log conversion by only applying a 2.2 gamma exponent or a .4545 gamma correction.
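A quick numeric check of the "twice is not 4.4" claim (pure Python, no LW involved):

```python
# Applying a 2.2 gamma exponent twice compounds multiplicatively:
# (v ** 2.2) ** 2.2 == v ** (2.2 * 2.2) == v ** 4.84, not v ** 4.4.
v = 0.5
twice = (v ** 2.2) ** 2.2
assert abs(twice - v ** 4.84) < 1e-12   # exponents multiply
assert abs(twice - v ** 4.4) > 1e-3     # they do not add
```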

However, some people tend to use the term log for film footage and gamma correction for other color spaces, in order to differentiate the two tasks.


I do a lot of print work, and believe me, with all the technological approaches you can take along the way, I've found the best way is to render out an image close to what you're after, then sit there in Photoshop with the CMYK colour picker (still in RGB mode though - never give up that extra RGB information), and a process colour book open and make sure your important colours are correct to the book. Even then your printer's press will have its own biases, but that's the best way I've found. There's no way to guarantee a colour output from Lightwave unless you have the simplest of scenes, and that tends to not be why we work in 3D.
Perhaps you may want to take a look at SG_CCTools and my articles (http://www.hdri3d.com/issues/h18.htm) in HDRI3D magazine (http://www.hdri3d.com/issues/h19.htm). It could save time on the trial-and-error process.


I don't find it a matter of ignoring or not ignoring. It just isn't possible to correctly manage color all the way through a 3D-integrated-with-film production pipeline according to any standard or set of rules, because there currently aren't any (the ACS has proposed coming up with some). And every vendor along the pipe has their own way of doing things.
I think everything has principles, rules and rational procedures. If we don't see them or don't know them, that doesn't mean they don't exist; it only means we still don't know the what, how and why. I found there are indeed principles, rules and rational procedures in color management for CG work in motion picture production. And even if several vendors have their own way of doing things, as you say, understanding at least a general color management workflow may give us an idea of how to organize these facilities into a coherent pipeline.


I can't even get a QT to look the same on a Mac as it does on a PC monitor.
That's because gamma (and gamut) are different on PCs and Macs, and we need different QTs for each in order to see the same results on both.
On Win/Vista systems (PC) gamma is about 2.2, while on Macs a gamma correction is first made (about 1.8 - AppleRGB) and later a kind of LUT is applied (about 1.4) to compensate. This is the reason images made on Windows systems look a bit brighter on Macs. To get the same PC result on a Mac (talking here at gamma level only), apply a .8181 gamma correction to your QT on the PC. You will see a darker image on the PC, but it will look good on the Mac. At gamut level, it all depends on the color space used to output the QT (monitor? Rec. 601? Rec. 709?)
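The .8181 figure above is just the ratio of the two display gammas; a tiny check makes that concrete (the 2.2/1.8 values are the ones quoted in this post):

```python
# Compensating a 2.2 (PC) pipeline for a 1.8 (classic Mac) display:
# the correction exponent is the ratio of the two gammas.
pc_gamma, mac_gamma = 2.2, 1.8
correction = mac_gamma / pc_gamma          # ~0.8182 (the ".8181" above)
v = 0.5                                    # some normalized pixel value
on_mac = (v ** correction) ** pc_gamma     # corrected, then shown at 2.2...
assert abs(on_mac - v ** mac_gamma) < 1e-12  # ...matches the 1.8 look
```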


The big color space problem I have is viewing elements on our monitors the way they will appear on film. If you have to make photoreal CG integrate with a log cineon plate, what should the 3D artists be lighting to?
Your question supposes the cineon plate and the CG elements are in different color spaces. It shouldn't be that way; that's the point of a color management workflow: put everything in the same working color space.


This gets complicated because the 3D artist needs to view the plate in LW in the same way the 2D artist is viewing it in a log-to-lin space, under a theatrical display LUT (from Kodak in our case), in order to know how things will actually look on film. Am talking about a display LUT that is not baked into the image in any way.
I understand the problem you describe (it's the problem everybody has), but people (mainly LW users) seem not to realize that it can be solved with the SG_CCTools and an appropriate color management workflow.



Using CCtools to recreate the LUT in LW just turned into a mess that had everyone scratching their heads. Procedural colors and HV did not respond the same way as color-managed maps. More importantly, everything just appeared way too bizarro under the CCtool LUT. It didn't make any sense to the lighters. We found it impossible to get CG to look the same way under the LUT as it would normally look without the LUT. All we could do was ballpark values and leave it to 2D to fudge elements into place. This leaves things too subjective on both ends. For 3D, it was like trying to sculpt with boxing gloves, and 2D gets a purple cat and assumes it's supposed to look that way.
Again, you are talking as if the color-managed maps (cineon/DPX) and CG elements were in different color spaces, when this is not the principle proposed in my articles.


The simpler solution was to just linearize the cineons (exrs, jpgs, etc) for the LW artists to light to and skip the LUT. But then the problem is that the exrs out of LW would respond completely differently under the 2D department's display LUT than the cineon log-to-lin plate.
That's because you are working in the monitor color space with unmanaged colors for both the CG elements and the cineon files.


We went around for months, talking to other post pros, reading articles and tutorials... each proposed setting/profile as 'solution' just brought on a new set of incompatibilities. Compatibility is like the original sin of all things computer as we endlessly debate the true path.
And why not ask in the first place??? :)

I'll show you here one way (there are several, depending on the particular studio and project) to solve the problem you have. Since there are many aspects that I won't detail here, I suggest you read my articles in HDRI3D (Issue#18 (http://www.hdri3d.com/issues/h18.htm) and Issue#19 (http://www.hdri3d.com/issues/h19.htm)) so that we can be sure we are talking in the same terms.

My article in Issue#19 covers color management with SG_CCTools for compositing within LW (CG and cineon/DPX files); however, CG integration requires another treatment because the goal is different.
I'll cover this with a very basic example and I'll refer specifically to cineon/DPX files management for CG work with the SG_CCTools.

Trying to put the idea in the simplest possible terms:

Contrary to other color spaces, the film color pipeline (Cineon/DPX files) has a color scheme with an intended 'look'. This 'look' doesn't precisely attempt to match the real scene; we can say it has its own 'colour style'.

What we need to do is subtract this intended 'film look', assign a scene-referred color space (which implies a very wide color space), work the film footage and CG elements within this common wide working color space, and later apply a LUT to recover the intended 'film look' for both (BG plate and CG elements).

This implies two main things:

1. A very wide working color space (device-independent)
2. Accurate LUTs (forget cineon converters - those are toys - do use a concatenation of a negative film stock profile and a film theater profile)

And please, do not try to use theatrical cinema color spaces as a working color space. A film theater color profile (ICC/ICM) for projected film is a preview profile, not an output profile. Moreover, the gamuts of film theater color spaces are smaller than those of negative film profiles (really smaller). Negative film profiles or wider device-independent color spaces are appropriate working color spaces for CGI in motion picture production. Use theatrical cinema color spaces as a LUT for preview purposes only.

Now, this pipeline assumes LW(SG_CCTools) and AE, but you can use Fusion, Shake, Nuke or your favorite compositing package (or even PS). Just be sure you have all the involved color spaces available.

For this simple example:

Working color space: ProphotoRGB
NegativeFilm profile: Kodak 5218 / 7218 Printing Density
Theatrical cinema profile: Kodak 2383 Theater Preview

Load the cineon file in Photoshop, AE or your favorite compositing app and assign (not convert to) ProphotoRGB (the log version, not linear). Save as an EXR file (yes, not a cineon file) and embed ProPhotoRGB as the color space. Marcie should look something like this:

http://imagic.ddgenvivo.tv/forums/SGCCTools/ppae.png

Our Cineon/DPX file is now ProphotoRGB.

Load this in LW. In LW we are viewing these colors in the monitor color space but, remember, they are in ProphotoRGB, which supposes these are 'absolute' colors. We should get something like this:

http://imagic.ddgenvivo.tv/forums/SGCCTools/pplw.png

Now we load a 3D model and our lighting acquisition model (lightprobe) - converted to our working color space (from camera's color space or sRGB/aRGB).

For previewing how our output will look on a negative film stock, we set SG_CCNode (within DP Image Filters NE) or SG_CCFilter in post-process this way:

First node/filter instance:
Input Profile: ProphotoRGB (lin)
Input Intent: probably all work, but I recommend Perceptual first if you want to keep hue relationships, or Relative Colorimetric/Saturation if you want to keep overall saturation.
Output Profile: ProphotoRGB (log)
Output Intent: Perceptual

Second node/filter instance:
Input Profile: Kodak 5218 / 7218 PD (log)
Input Intent: Perceptual
Output Profile: screen profile (log)
Output Intent: doesn't matter
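Reduced to gamma only, the two-instance chain is just decode-then-re-encode applied twice. This is a caricature, not SG_CCTools: the gamut part of each ICC transform is omitted, the 1.8 exponent stands in for the film profile's real curve, and the 2.2 screen gamma is my assumption.

```python
# Gamma-only caricature of the two SG_CC instances above; real ICC
# conversions also remap gamut, which a bare exponent cannot do.
def reencode(v, in_gamma, out_gamma):
    linear = v ** in_gamma              # decode the input encoding
    return linear ** (1.0 / out_gamma)  # re-encode for the output space

v = 0.5
v = reencode(v, in_gamma=1.0, out_gamma=1.8)  # ProphotoRGB lin -> log
v = reencode(v, in_gamma=1.8, out_gamma=2.2)  # film log -> screen (2.2 assumed)
assert abs(v - 0.5 ** (1.0 / 2.2)) < 1e-12    # net effect: one 2.2 encode
```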

You should end up with something like this:

http://imagic.ddgenvivo.tv/forums/SGCCTools/lwnfs.png
mmm.. doesn't that butterfly look like the Amazon butterfly we were talking about previously?

The cineon/dpx file with the same profile assigned in AE shows us this:

http://imagic.ddgenvivo.tv/forums/SGCCTools/aenfs.png

We have matched a BG plate with a CG element in a negative film stock color space within Lightwave.

Now, for previewing how our output will look in a cinema theater, we set SG_CCNode (within DP Image Filters NE) or SG_CCFilter in post-process this way:

First node/filter instance:
Input Profile: ProphotoRGB (lin)
Input Intent: probably all work, but I recommend Perceptual first if you want to keep hue relationships, or Relative Colorimetric/Saturation if you want to keep overall saturation.
Output Profile: ProphotoRGB (log)
Output Intent: Perceptual

Second node/filter instance:
Input Profile: Kodak 2383 (log)
Input Intent: Perceptual
Output Profile: screen profile (log)
Output Intent: doesn't matter

We get this within LW:

http://imagic.ddgenvivo.tv/forums/SGCCTools/lwtps.png

Now output simulation within AE for the cineon/dpx file from Kodak 2383 to Kodak 5218 / 7218 looks like this:

http://imagic.ddgenvivo.tv/forums/SGCCTools/aetps.png

We are previewing, within Lightwave, how our BG plate with a CG element will look in a cinema theater.

For saving this sequence:

Disable all SG_CCFilters or SG_CCNodes in post-processing
Apply LW's FPGamma at 1.8 (yes, this output is not linear - but don't worry, neither are cineon/DPX files)
Save your sequence as EXR files as always.

In your compositing package:

Choose Prophoto as our working color space
Load the EXR sequence and assign the K 5218 / 7218 PD color profile.

We'll see exactly the same cineon/dpx result (which is the same result as within LW). After simulation from Kodak 2383 to Kodak 5218 / 7218, we end up with something like this:

http://imagic.ddgenvivo.tv/forums/SGCCTools/exrtps.png

The color flow has round-tripped perfectly from PS/AE to LW and back to AE. A very cheap CG-integrated-with-film production pipeline, ready to be used.

For log2lin operations, just linearize with a .5556 exponent. For lin2log operations, gamma-correct with a 1.8 gamma exponent (you probably have saved alphas and buffers, so you may want to do this with CG elements only and use the real cineon as the BG in your compositing app).
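The .5556 and 1.8 figures are inverses of one another (.5556 = 1/1.8). A sketch, assuming that "applying gamma g" here means v -> v ** (1/g), consistent with the .4545/2.2 equivalence earlier in the thread:

```python
# lin2log / log2lin for the gamma-1.8 encoding used in this workflow.
# Assumption: "apply gamma g" means v -> v ** (1/g), as with FPGamma,
# so gamma .5556 linearizes (v ** 1.8) and gamma 1.8 re-encodes.
def apply_gamma(v, g):
    return v ** (1.0 / g)

v = 0.42
encoded = apply_gamma(v, 1.8)              # lin2log
decoded = apply_gamma(encoded, 1.0 / 1.8)  # log2lin via the .5556 "gamma"
assert abs(decoded - v) < 1e-12            # round trip recovers the value
assert abs(1.0 / 1.8 - 0.5556) < 1e-4      # where .5556 comes from
```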

This workflow is viable not only for cineon/dpx management within LW, but also for giving our animations that elusive 'film look':

http://imagic.ddgenvivo.tv/forums/SGCCTools/filmlook2.jpg


Back to eyeballing.
Instead of continuing to eyeball, I propose something really constructive: try to implement the solution I shared here with a practical case, and we'll be able to discuss whether the workflow works or not, and what we can do to improve the SG_CCTools. Remember, they are FREE! And it's not so good to see how many users of other 3D packages would love something like the SG_CCTools while such excellent LW users aren't using them at all.


From what I gather, tagging an image in LW may not be necessary for colour accuracy, if the rendered image is saved in a HDR format ie, .exr.
From a gamma point of view, no. But from a gamut perspective, that would be interesting :)



Gerardo

John Geelan
01-25-2009, 07:49 AM
Gerardo - Sincere thanks for such a brilliant post and for taking the time and trouble to put it together.:thumbsup:
Your input has got to be of huge benefit to the LW community and, in general, to all CG artists and CG technicians.

In the next few days I'll try to digest the content of your posts and, no doubt, will be getting back for further clarification.

In my question regarding linearizing a bitmap image- I should have used the term "raster image". I was thinking of 16 bit raster images created in PS - I've got to be more precise with this technical jargon!:eek:

LW graphics are vector based, as I understand. So too, are colours from the Windows, LW and SG_CCTools colour pickers.
Within LW, the SG_CCTools colour picker can linearize gamma as the colour is picked by the artist. Though the gamma is linear for use in LW, the colour is viewed on the artist's monitor as gamma corrected due to the gamma corrected colour space of the monitor.
Is it possible to accurately do the same for raster images?

A raster image coming from PS will be tagged with the linear working colour space ie, ProPhoto, and a rendering intent ie, Perceptual - both of which determine the gamut and gamma of the image.
Since LW does not colour manage, is it possible to linearise/un-gamma the raster image and remove the tagged colour space to achieve something equivalent to the linearity and neutrality of a RAW file(no tagged colour space, linear gamma), for importation into LW?

When adjusting the gamma value of a raster image in PS using the centre slider of the Levels command, there is no way of determining when gamma=1 has been achieved.

I wish I hadn't to go away this weekend - it's all beginning to make sense ... I think!:D

Nowhere Man
01-25-2009, 08:10 AM
LW graphics are vector based, as I understand.

LW does not output vector graphics if that's what you mean by "vector based".


A raster image coming from PS will be tagged with the linear working colour space

I don't know much about Photoshop's color management, but I doubt it saves linear gamma images by default. Correct me if I'm wrong.


is it possible to linearise/un-gamma the raster image

Yes, it is, thanks to Sebastian Goetsch and his CC_Tools (and Gerardo for showing us how to use them!:thumbsup:)

Edit: As far as I know, rendering intent is a method of mapping colors in case of a gamut mismatch, so I don't think it is tagged to the image file.
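For intuition only, the "un-gamma" half of the linearization discussed above can be sketched as a plain exponent. The gamut half of a real ICC conversion (what SG_CCTools actually handles) is omitted, and the 1.8 gamma is assumed for illustration:

```python
# Sketch: strip a simple gamma-1.8 encoding from 8-bit pixels so they
# can be treated as linear data. A real ICC-based linearization also
# handles gamut; this handles gamma only.
def ungamma_pixels(pixels, gamma=1.8):
    """pixels: iterable of 8-bit (r, g, b) tuples -> linear floats."""
    return [tuple((c / 255.0) ** gamma for c in px) for px in pixels]

linear = ungamma_pixels([(255, 128, 0), (64, 64, 64)])
assert linear[0][0] == 1.0            # full scale is unchanged
assert linear[1][0] < 64 / 255.0      # mid-tones darken when linearized
```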

John Geelan
01-27-2009, 06:29 AM
LW does not output vector graphics if that's what you mean by "vector based".

By "vector based" I mean that, while working inside LW, we are working with resolution-independent graphics.
Adobe Illustrator is similar: we can stretch and reshape graphics by dragging handles on the graphic's edges. This is not possible with a raster image, which is resolution dependent.
An LW render produces a raster image whose resolution is determined by the resolution setting of the LW camera.


I don't know much about Photoshop's color management, but i doubt it saves linear gamma images by default. Correct me if I'm wrong.

As it turns out, you are right!
Based on Gerardo's answer re the non-linearity of a working colour space, I assumed the linear render he posted on page 2 of this thread had been created in PS, and I understood that the ProPhoto RGB colour space is linear ie,



Quote:
* Working colour spaces, such as ProPhoto RGB, would undermine the linear workflow intended in LightWave's design.

Not at all. Remember there are linear versions of color spaces. See this linear render:

It could well be Gerardo is referring to a linear version of ProPhoto RGB available elsewhere other than in PS.
However, the version of ProPhoto RGB available in PS is non-linear. Here are the settings for three of the working colour spaces available in PS:

ProPhoto RGB
Gamma 1.8
White Point - 5000k (D50) - x = 0.3457, y = 0.3585

Adobe RGB (1998)
Gamma 2.2
White Point - 6500k (D65) - x = 0.3127, y = 0.3290

Wide Gamut RGB
Gamma 2.2
White Point - 5000k (D50) - x = 0.3457, y = 0.3585


I made a mistake in assuming "a raster image coming from PS will be tagged with the linear working colour space." Not only is the working space non-linear, it doesn't follow that the saved image will be tagged with the working colour space. Though the image in PS has been created in a working colour space, the user can choose whether to save the image tagged or untagged.

I also made a mistake in saying;

"A raster image coming from PS will be tagged with the linear working colour space ie, ProPhoto, and a rendering intent ie, Perceptual - both of which determine the gamut and gamma of the image."

You are perfectly right to point out the correct purpose of rendering intents - they are, of course, provided as a means of handling out-of-gamut colours.
The colour gamut and gamma of the image we see on the monitor is determined by the monitor colour space and the calibration of the monitor.
If Gerardo is looking in, and given his contributions so, I'm sure he's thinking it's about time I got the point!:D

All of this leads to something very interesting.
My goal is to produce linear raster images in PS without a colour space attached. The reason being that LW works with a linear workflow and does not have colour management built-in. Together with SG_CCTools colour picker, which will linearize picked colours, I should have a 100% linear workflow in LW.

The colour space I would prefer to work with is ProPhoto RGB and a white point of D65.
But ProPhoto RGB is non-linear.
I can begin by creating, within PS, a new colour setting file based on the ProPhoto RGB colour space by setting the existing ProPhoto RGB gamma setting = 1.
It looks, from the colour space data above, that across colour spaces, the white point x, y, values remain the same for a given colour temperature ie, in two cases above, though they are different colour spaces, they share the same colour temperature of D50 and the same white point x, y, values.
So, for an adjusted ProPhoto RGB colour space with a gamma = 1 and a new white point of D65, I think, all I need to do is set the gamma to 1 and white point values to the D65 values of Adobe RGB (1998) colour space.

I had a look at the three primary x, y, values for each colour space but, and I'm guessing again, I feel all they do is define the size and width of each colour space.

Look forward to any comments as to whether this will work or not!

Nowhere Man
01-27-2009, 09:08 AM
I would strongly advise you to always tag any image with its color space; it doesn't hurt, but can save a lot of headaches later on.

Another thing to keep in mind is that bigger doesn't always mean better. I usually don't find myself using ProPhotoRGB as a working color space because I know that my final output will not be able to reproduce the great majority of its gamut's colors anyway. Aside from that, while working in wide-gamut color spaces you have to use images with a bit depth great enough to avoid posterization.

I personally create my textures in non-linear color spaces and linearize them in LW later on, so I can't help you much when it comes to Photoshop-specific color settings.

John Geelan
01-27-2009, 03:47 PM
I would strongly advise you to always tag any image with its color space; it doesn't hurt, but can save a lot of headaches later on.

For non-Full Precision(FP) images, certainly!
FP images also, I now see, are better off tagged than untagged: the assignment of a working space profile gives meaning to its colour numbers. As Gerardo pointed out, we are concerned with both gamma and gamut.
A RAW image can be opened in PS, placed in 32-bit mode, then tagged and saved in an HDR format. I don't know yet whether this is overkill with regard to the hard disk space needed for storage and the RAM required for 32-bit manipulation. The image could just as easily be saved in a 16-bit format with its working colour space tag for use in LW.


Another thing to keep in mind is that bigger doesn't always mean better. I usually don't find myself using ProPhotoRGB as a working color space because I know that my final output will not be able to reproduce the great majority of its gamut's colors anyway. Aside from that, while working in wide-gamut color spaces you have to use images with a bit depth great enough to avoid posterization.

Yes! This is something which bugs me.
I am using Epson Ultrachrome K3 inks. I do know that the Adobe RGB(1998) working colour space clips some greens, oranges, yellows and blues from the Ultrachrome colour gamut.
As you say:


... my final output will not be able to reproduce great majority of its gamut's colors anyway.

At the moment, I have no idea if I will ever require the entire gamut of the K3 - and, even if I do, whether my paper will be able to reproduce it. But I would prefer to play it safe and adopt ProPhoto. I don't anticipate ever using less than 16 bit images in my work in LW. All the same, I'll be aware of the possibility of posterization.

I'm still working on Gerardo's second posting - a lot of techno detail there to be understood!:ohmy:

I found his distinction, a distinction also implied by you, between gamut size and gamut shape to be fascinating.
I must find out how those 3D models of gamuts can be plotted. What a huge insight it would offer, and just how useful it would be, to be able to compare 3D gamuts of ink sets, paper profiles, monitor profiles and colour spaces.

Some very busy days ahead!:D

Mr Rid
01-27-2009, 07:39 PM
...

And why not ask in the first place??? :)

I'll show you here one way (there are several, depending on the particular studio and project) to solve the problem you have. Since there are many aspects that I won't detail here, I suggest you read my articles in HDRI3D (Issue#18 (http://www.hdri3d.com/issues/h18.htm) and Issue#19 (http://www.hdri3d.com/issues/h19.htm)) so that we can be sure we are talking in the same terms. ...


Gerardo

Yes, we read your articles when they first appeared, and a ton of other info. The issues weren't solved. But I've gone around in circles in this complex discussion too many times. It is difficult enough to convey and grasp in person, but impossible in text.

gerardstrada
01-28-2009, 12:31 AM
As it turns out, you are right!
Based on Gerardo's answer re the non-linearity of a working colour space - I assumed the linear render he posted on page 2 of this thread had been created in PS - and I understood that the ProPhoto RGB colour space is linear ie,

That linear render was created within LW after working in linear light in ProphotoRGB color space (no LUT applied).



It could well be Gerardo is referring to a linear version of ProPhoto RGB available elsewhere other than in PS.
However, the version of ProPhoto RGB available in PS is non-linear.
Prophoto RGB is not linear; as you say, it has a gamma of 1.8. But we can create linear versions of any color space with a profile maker. Since most of them are commercial, and in order not to infringe any copyrights, I showed on the lightwiki website (as posted by Matt) (http://www.lightwiki.com/SG_CCTools_-_For_Color_Management_and_Linear_Workflows) how to create a linear version of a color space from its log version with ICCProfileInspector (a free and nice app from color.org), so that anyone can create their own linear versions of any color space (some color spaces require some knowledge of their log formula, but several of them are simple gamma exponents).



You are perfectly right to point out the correct purpose of rendering intents - they are, of course, provided as a means of handling out-of-gamut colours.
Yes, color mapping is the purpose of rendering intents; however, color spaces do have default rendering intents specified in the ICC/ICM profile as a recommendation for color operations where we are not consulted (like the ones that take place in the OS and its CMMs).


The colour gamut and gamma of the image we see on the monitor is determined by the monitor colour space and the calibration of the monitor.
If Gerardo is looking in, and given his contributions so, I'm sure he's thinking it's about time I got the point!:D
HA! I think you have already got the point!


All of this leads to something very interesting.
My goal is to produce linear raster images in PS without a colour space attached. The reason being that LW works with a linear workflow and does not have colour management built-in. Together with SG_CCTools colour picker, which will linearize picked colours, I should have a 100% linear workflow in LW.

The colour space I would prefer to work with is ProPhoto RGB and a white point of D65.
But ProPhoto RGB is non-linear.
I can begin by creating, within PS, a new colour setting file based on the ProPhoto RGB colour space by setting the existing ProPhoto RGB gamma setting = 1.
It looks, from the colour space data above, that across colour spaces, the white point x, y, values remain the same for a given colour temperature ie, in two cases above, though they are different colour spaces, they share the same colour temperature of D50 and the same white point x, y, values.
So, for an adjusted ProPhoto RGB colour space with a gamma = 1 and a new white point of D65, I think, all I need to do is set the gamma to 1 and white point values to the D65 values of Adobe RGB (1998) colour space.

I had a look at the three primary x, y, values for each colour space but, and I'm guessing again, I feel all they do is define the size and width of each colour space.
Please, do consider this:


I would strongly advise you to always tag any image with it's color space, it doesn't hurt, but can save a lot of headaches later on.

Another thing to keep in mind is that bigger not always means better, I usually don't find myself using ProPhotoRGB as a working color space because I know, that my final output will not be able to reproduce great majority of its gamut's colors anyway. Aside from that, while working in wide-gamut color spaces you have to use images with bit depth great enough to avoid posterization.
Totally agree here with Nowhere Man.

Unless one is working in the monitor color space within PS, do consider that we need a linear workflow at gamut level if we want to keep full color fidelity between PS and LW. Changing the white point of ProphotoRGB won't give you better results. The difference in hues we see between ProphotoRGB and a common monitor color space is not mainly due to white point differences but to differences in gamut size and shape. In the case you mention, it's better to use SG_CCTools at gamut level, since that's their purpose.


A RAW image can be opened in PS, be placed in a 32 bit mode, then tagged and saved in a HDR format - I don't know yet whether this is overkill with regard to the hard disk memory needed for storage and RAM required for 32 bit manipulation. The image could just as easily be saved in a 16 bit format and saved with it's working colour space tag for use in LW.
If by RAW file you are referring to a DNG (digital negative), yes: unless you have a Spheron (Spherocam) or something like that, most cameras are almost MDR (medium dynamic range) and we can use 16-bit depth to cover their contrast ratio.


Yes! This is something which bugs me.
I am using Epson Ultrachrome K3 inks. I do know that the Adobe RGB(1998) working colour space clips some greens, oranges, yellows and blues from the Ultrachrome colour gamut.

At the moment, I have no idea if I will ever require the entire gamut of the K3 - and, even if I do, whether my paper will be able to reproduce it. But I would prefer to play it safe and adopt ProPhoto. I don't anticipate ever using less than 16 bit images in my work in LW. All the same, I'll be aware of the possibility of posterization.
Do notice that some color spaces (like the ones you refer to) have a percentage of imaginary colors (colors that don't exist or that we can't perceive).

13% to 15% of the Prophoto gamut consists of imaginary colors, and even so, there are colors perceivable by human vision that this wide color space is not able to cover. The ratio of imaginary colors is even bigger in some negative film profiles as well. CIE Lab also includes imaginary colors. Even the origin of the CIE x,y system is outside the range of perceivable colors! It seems wide color spaces need imaginary colors to cover a wider range of human visual perception.

In such a case you could compare graphic representations of those (ink) gamuts with the available color spaces (CIERGB, ProphotoRGB, WideGamutRGB, AdobeRGB...) to see which is more convenient according to the paper's color space. Another practical way is to make a test with a color chart, say with AdobeRGB and the Perceptual rendering intent. Then try a wider gamut and see the difference. If there's no difference, keep the smaller gamut.


I found his distinction, a distinction also implied by you, between gamut size and gamut shape to be fascinating.
I must find out how those 3D models of gamuts can be plotted. What a huge insight it would offer, and just how useful it would be, to be able to compare 3D gamuts of ink sets, paper profiles, monitor profiles and colour spaces.
I've used GretagMacbeth ProfileMakerPro for those samples, but in the HDRI3D article I shared a tip about a free app called PerfX Gamut Viewer 3D (sorry, I don't have the link right now).


Yes, we read your articles when they first appeared, along with a ton of other info. The issues were not solved. But I've gone around in circles in this complex discussion too many times. It is difficult enough to convey and grasp in person, but impossible in text.
After so many unsuccessful tries, I understand you might be tired, but just do consider that the setup shared in my article is for composition, not integration.

The color management setup I shared here is new; I haven't shared it publicly before. I think I'm going to add it to the lightwiki documentation. It's the simplest setup that I've found so far, and several LW-based studios have found it useful too, as a generic setup developed through consultations, solutions, enhancements and improvements (most of them made remotely through e-mails). There's no valid opinion if you don't try it and see whether it works out or not. You would have to give it a try sometime. Maybe some good improvements for your color pipeline and SG_CCTools could come out of this.



Btw, for people interested, the most efficient rendering intent for going from a lin version of a color profile to a log version of the same profile, or vice versa, is Absolute Colorimetric. In fact, they all provide the same results in this case; it's just that AC is a bit faster for very big images. I can't edit my post now, but you may want to consider that to go from LinProphoto to LogProphoto in the CG-integrated-with-film setup shared previously.



Gerardo

Nowhere Man
01-28-2009, 07:03 AM
Here's the link to the app you mentioned: http://www.tglc.com/english/PerfX/3D_Gamut_Viewer.html
It seems to be a good gamut viz tool for free, thanks for info :thumbsup:

John Geelan
01-30-2009, 05:02 AM
Gerardo - The two HDRI 3D magazines containing your articles arrived yesterday.:)
Both articles provide excellent and fascinating reading - well worth their cost!
In the next few days, I'm going to re-read all of your posts on these forums, those on SpinQuad and your Wiki article. I'm also going to have to experiment with the SG_CCTools and put all of this theory to test.
Once I have a comprehensive understanding, I'll make a post here outlining my grasp re linear workflow and colour management in LW.
I look forward to any comments you might make.
In the meantime, sincere thanks for your trojan efforts to make all of this understandable to the rest of us.:thumbsup:

Nowhere Man - Thanks for the above link - it opens up a whole new world!:thumbsup:

John Geelan
02-03-2009, 07:58 PM
Gerardo - Based on both your articles, I've been carrying out some experiments with the SG_CCFilter.
For the experiments, I've used both linear and non-linear versions of the ProPhoto working space as inputs.
I have also used two outputs - my monitor profile and SG_CCTools default OUTPUT "Profile Connection Space".
These settings give me four input/output pairings: ProPhoto or Linear_ProPhoto as the input working space, and my monitor colour space or the default "Profile Connection Space" as the output.
Since each of these pairings can be tested with the SG_CCFilter ON or OFF, the total number of combinations is 4 x 2 = 8.

Keeping in mind your recommendation to disable the SG_CCFilter, just before rendering, to produce a linear output in the render window, here are my findings + questions, of course!:D

1) In all four cases where the SG_CCFilter is off, I am getting the exact same render, as per your recommendation above for a linear output in the render window.
Does this mean the SG_CCFilter is not fully disabled? It seems to me that while the output to the monitor profile certainly is, my working space is being utilized to produce the render.
Also, the fact that the render window contains a linear output must also mean that output has been corrected to display properly in my monitor's non-linear colour space? So, is my monitor's colour space definition also being used?

2) With the SG_CCFilter ON, and whether I use the ProPhoto or Linear_ProPhoto working space with the default OUTPUT "Profile Connection Space", the output render is equally very dark in both cases.
How am I supposed to understand this "Profile Connection Space" used by the SG_CCFilter? Its use implies my monitor colour space is not being used. So what colour space is it using to produce such a dark output?

3) With the SG_CCFilter ON, and using the ProPhoto Colour Space and monitor profile, the render output looks exactly as the image does when opened, together with its colour tag, in Photoshop.
In this case, it is the non-linear version of the INPUT colour space which is being used, and, yet, I'm achieving the same output as I do with a colour managed workflow in Photoshop!

4) Finally, when using the SG_CCFilter as it should be used, i.e., with Linear_ProPhoto as the INPUT working space and my monitor profile as the OUTPUT colour space, the rendered output is very bright.
Is this due to my monitor's colour space readjusting the gamma curve?

Looking forward to your thoughts on the above!:thumbsup:

gerardstrada
02-05-2009, 10:41 AM
1) In all cases ie, 4, when the SG_CCFilter is off, I am getting the exact same render as per your recommendation above for a linear output in the render window.

If we have converted all input images to our working color space (ProphotoRGB in this case) and we have linearized them (log2linear) either as pre-process or in Surface Node Editor, we'll see a linear output according to Prophoto RGB chromaticities.



Does this mean the SG_CCFilter is not fully disabled? It seems to me that while the output to the monitor profile certainly is, my working space is being utilized to produce the render.
SG_CCFilter applied as pre-process or in Surface Node Editor shouldn't be disabled. We disable SG_CCFilter or SG_CCNode as post-process only. Then, yes. SG_CCFilter/Node is not fully disabled.


Also, the fact that the render window contains a linear output must also mean that output has been corrected to display properly in my monitor's non-linear colour space? So, is my monitor's colour space definition also being used?
Monitor color space will always be used to display colors (managed or unmanaged). The precision of color fidelity depends on the monitor's gamut (calibrated) and color management.


2) With the SG_CCFilter ON, and whether I use the ProPhoto or Linear_ProPhoto working spaces with the default OUTPUT "Profile Connection Space", the output render is equally very dark in both cases.
How am I supposed to understand this "Profile Connection Space" used by the SG_CCFilter? Its use implies my monitor colour space is not being used. So what colour space is it using to produce such a dark output?
The Profile Connection Space was created after the articles, and it does nothing to our renders. But it has an important purpose. It was added by Sebastian Goetsch for collaborative pipeline scenarios where a color profile is not available on the network or on a computer. If this item weren't there, LW would crash when it couldn't find a given color profile.



3)With the SG_CCFilter ON, and using the ProPhoto Colour Space and monitor profile, the render output looks exactly as the image does when opened, together with its colour tag, in PhotoShop.
In this case, it is the non-linear version of the INPUT colour space which is being used, and, yet, I'm achieving the same output as I do with a colour managed workflow in Photoshop!
Are you getting a linear output when you disable SG_CCFilter as post-process? Can you please detail how you are using the ProPhotoRGB color space and monitor profile in this setup?


4) Finally, when using the SG_CCFilter as it should be used ie, with linear_ProPhoto as the INPUT working space and my monitor profile as the OUTPUT colour space - the rendered output is very bright.
Is this due to my monitor's colour space readjusting the gamma curve?
Nope. It seems you are not getting the appropriate output render because the images are not being linearized in pre-processing. But I'm just guessing. It would be easier if you could detail your SG_CCTools setup (tools applied as pre- and post-process, instances, etc.) or maybe upload simple test images.


Btw, there's no convention about this, but people can exchange scenes based on the SG_CCTools if they share the same color spaces and agree on a common name for their monitor profile. It can be my_screen_profile.icc or my_monitor_profile.icc or whatever. They just must share the same name; even when they are different color spaces, each monitor will try to display the best color representation according to its own gamut and calibration.



Gerardo

John Geelan
02-05-2009, 05:36 PM
If we have converted all input images to our working color space (ProphotoRGB in this case) and we have linearized them (log2linear) either as pre-process or in Surface Node Editor, we'll see a linear output according to Prophoto RGB chromaticities.

Having disabled the SG_CCFilter immediately before rendering, of course?


Quote:
Also, the fact that the render window contains a linear output must also mean that output has been corrected to display properly in my monitor's non-linear colour space? So, is my monitor's colour space definition also being used?

Monitor color space will always be used to display colors (managed or unmanaged). The precision of color fidelity depends on the monitor's gamut (calibrated) and color management.

Certainly! However, the linear output we see in LW's render window is presented to us in our monitor's non-linear colour space. To be confident we are viewing an accurate linear output, should not that linear output have been gamma corrected during its generation to take account of the non-linear nature of the monitor colour space on which it will be viewed?
In other words, how can I be confident that the linear output I see is truly linear since I am viewing it in the non-linear colour space of a monitor? (This, for me, is the most important of all the questions you may answer):)


Profile Connection Space was created after the articles and it does nothing to our renders.

I'm referring ONLY to the OUTPUT "Profile Connection Space"
As an experiment, I had rendered with the SG_CCFilter ON (NOT Disabled) and the OUTPUT set to "Profile Connection Space" - the rendered image was excessively dark. Exchanging ONLY the linear and non-linear working spaces made no difference - in each case the outputs were equally and excessively dark.
However, by making ONLY ONE change - I changed the OUTPUT to my monitor profile - I get two entirely different renders depending on whether I use a linear OR a non-linear working colour space. Using the NON-LINEAR working colour space produces a render identical to the same image when opened in PS in a colour managed workflow.
Using a LINEAR working space produces a very bright render in LW. (This is the reason I asked; "Is this due to my monitor's colour space readjusting the gamma curve?")
Do you not then think that the OUTPUT "Profile Connection Space" has, in fact, a huge impact on our renders?
Remember, the renders were carried out with SG_CCTools NOT DISABLED.


Are you getting a linear output when you disable SG_CCFilter as post-process?

Yes!
I think I may not have been clear enough in my questions. When I use the term "SG_CCFilter ON", I mean SG_CCTools were NOT DISABLED at render time.

The reason I'm interested in those details is that I see SG_CCTools as an ingenious answer to the problems encountered in implementing a linear workflow and a colour management workflow in LW. The impending CORE rewrite of LW does not appear to have made any movement in this direction either.
SG_Tools could have a very prominent role to play in LW for years to come. The more I know of SG_Tools' features and how they interact, the better I will be able to use them to their full potential.

Many thanks for taking the time to respond - I look forward to your comments.

gerardstrada
02-08-2009, 01:26 AM
Having disabled the SG_CCFilter immediately before rendering, of course?
Yes, but at post-process only.


Certainly! However, the linear output we see in LW's render window is presented to us in our monitor's non-linear colour space. To be confident we are viewing an accurate linear output, should not that linear output have been gamma corrected during its generation to take account of the non-linear nature of the monitor colour space on which it will be viewed?
In other words, how can I be confident that the linear output I see is truly linear since I am viewing it in the non-linear colour space of a monitor? (This, for me, is the most important of all the questions you may answer):)
In a color management setup, the linear render output is using our monitor's gamut to represent our scaled working color space, but this result is not shown in our monitor's gamma; rather, it is shown in the gamma used in our working color space (in this case, linear). I say "scaled" because the working color space we are viewing is scaled according to our monitor's gamut.

Remember, on Win/Vista systems the gamma on the monitor is real gamma; this implies that when we save an image, the gamma is baked into the image. On Mac OS systems real gamma is about 1.8 and we have a kind of LUT (about 1.4); this LUT is for display purposes only and is not baked into the image file, which is more convenient (and we need to keep this in mind when we create HDRIs on Macs). Thus, as we know, colors in images in linear space are darker and more contrasted. When they are gamma-encoded according to our visual perception, we perceive a pleasant image (linear, or almost linear, to our eyes). Then, if we see a darker and more contrasted (linear) result, we can be sure that this output image is linear.


I'm referring ONLY to the OUTPUT "Profile Connection Space"
As an experiment, I had rendered with the SG_CCFilter ON(NOT Disabled) and the OUTPUT set to "Profile Connection Space" - the rendered image was excessively dark. Exchanging ONLY the linear and non linear working spaces made no difference - in each case the outputs were equally and excessively dark.
However, by making ONLY ONE change - I changed the OUTPUT to my monitor profile - I get two entirely different renders depending on whether I use a linear OR a non-linear working colour space. Using the NON-LINEAR working colour space produces a render identical to the same image when opened in PS in a colour managed workflow.
Using a LINEAR working space produces a very bright render in LW. (This is the reason I asked; "Is this due to my monitor's colour space readjusting the gamma curve?")
Do you not then think that the OUTPUT "Profile Connection Space" has, in fact, a huge impact on our renders?
Remember, the renders were carried out with SG_CCTools NOT DISABLED.
What I mean is that the Profile Connection Space in SG_CCFilter or Node does nothing to our renders in the color flow setup. If this item weren't there, LW would crash when it couldn't find a given color profile. It's like a substitute profile used when a given profile is not found. So please, don't use it in your color flow setups.


Yes!
I think I may not have been clear enough in my questions. When I use the term "SG_CCFilter ON", I mean SG_CCTools were NOT DISABLED at render time.
A typical simple example would be:

We have a background (BG) card. The color flow for this BGcard would be:

In pre-process (PS):
-Convert the 8-bpc BG image from the source device color space (let's say a digital camera) to our working color space (let's say log version of ProPhotoRGB).

In LW:
Apply SG_CCNode to the card surface:
InputProfile: ProphotoRGB (log version)
Rendering Intent: Absolute Colorimetric
Output profile: ProphotoRGB (lin version)

Apply SG_CCFilter instance as Post-filter:
InputProfile: ProphotoRGB (lin version)
Rendering Intent: Perceptual
Output profile: monitor space (log version)

Result in this example should match PS.

If we disable SG_CCFilter as post-filter, we must see a linear output according to ProphotoRGB chromaticities.
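The setup above can be sketched numerically. This is just an illustrative approximation, not what SG_CCTools actually computes: the ProPhoto RGB encoding is stood in for by a pure 1.8 power function and the monitor by a pure 2.2 one, whereas the real ICC conversions also involve linear toe segments and chromaticity handling.

```python
# Hedged sketch of the color flow above, NOT SG_CCTools itself.
# Assumption: ProPhoto RGB encoding approximated as a pure 1.8 power
# function and the monitor as a pure 2.2 one; real ICC profiles also
# have linear toe segments and handle chromaticities.

def log2lin(v, gamma=1.8):
    """Pre-process step: decode a gamma-encoded value to linear light."""
    return v ** gamma

def lin2log(v, gamma=2.2):
    """Post-filter step: encode the linear render for the display."""
    return v ** (1.0 / gamma)

texture = 0.5                 # a gamma-encoded ProPhoto pixel value
linear = log2lin(texture)     # what the renderer should work with
display = lin2log(linear)     # what the post-filter puts on screen

# With the post-filter disabled, `linear` is shown directly:
# darker and more contrasted, the expected "linear output" check.
print(round(linear, 3), round(display, 3))
```

Disabling only the post-filter step leaves the darker `linear` value on screen, which matches the "linear output" check described above.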



The reason I'm interested in those details is I see SG_CCTools as an ingenious answer to the problems encountered in implementing a linear workflow and a colour management workflow in LW. The impending CORE rewrite of LW does not appear to have made any movement in this direction either.
SG_Tools could have a very prominent role to play in LW for years to come. The more I know of SG_Tools' features and how they interact, the better I will be able to use them to their full potential.
It's too early to say that, I think. But the re-written CORE of LW does indeed have a huge benefit for these kinds of tools. The SDK will be totally open, it's also C++ and the structures implemented are C++ as well, and the default scripting will now be Python. I guess this kind of power and flexibility could make automatic color profile recognition viable, for example (recognizing tags and embedded profiles); accurate LUTs could become viable in previewers (like a new Viper); and maybe automatic conversion and setup from any color space to our working color space, with the consequent LUT for previews in a general color management panel, might be possible too. This implies a lot more work, but it's viable now. This is advantageous for many other tools and areas as well.

We did comment previously on the reason why LW, and other 3D packages, don't offer the option of a working colour space, that is to say, the option of color management. For some developers, I think this is because it is still a new paradigm in professional workflows; but for AD, for example... hmmm, this is not so convenient, seeing that implementing their color management system could cost hundreds of thousands of dollars. They may think that including these facilities in their 3D packages could damage, in some way, another very profitable business. LW could take advantage of this, I guess :)



Gerardo

John Geelan
02-14-2009, 09:47 AM
Gerardo, once again, many thanks for your response – it has been very valuable!:thumbsup:

It is clear there are many complex technical considerations which underpin any discussion on linear and colour-managed workflows. I now think it would be very worthwhile to embark on a project to bring coherency and order to both discussions. When focused on a single technicality or a pointed question, a forum such as this has immense value. However, with broader topics such as linear and colour-managed workflows, information comes piecemeal and, as we have seen, can be very confusing. Technical information can be thrown into a discussion to answer questions which have yet to be asked. The problem with this is: how can we recognise an answer, as an answer, unless we have first asked the question?
Under the guiding title “How to accurately preview a colour-managed and linear workflow in LightWave,” I propose to make four posts in this thread, of which this will be the first. Each post will be concerned with a different topic:

The First Post: The linearity of light versus the human perception of light.
The Second Post: The principles underlying a colour management workflow.
The Third Post: Understanding the linear workflow in LightWave.
The Fourth Post: Implementing a colour-managed workflow in Lightwave.

My intention is to collate whatever relevant information is contributed into a coherent and systematic body of information. Such information would be of huge benefit to all of us concerned with the two workflows. That information would be presented in a conversational, yet direct, manner and be made available on the forum in a regularly updated pdf file.
Of course, the success of all of this depends on the willingness of individuals to contribute. And, as we all know, there are many talented and experienced individuals out there from whom a single sentence could provide a wealth of knowledge for the rest of us!

Let’s begin with the first post then! Whatever inaccuracies, errors, etc there are, do let me know. The goal here is to achieve accuracy of knowledge!

NOTE: I had to attach the document to this post because it contains superscript and subscript formatting which is not recognised by this web page. If anybody knows a way to display such formatting here, do please let me know.
I look forward to your comments.

gerardstrada
03-01-2009, 05:08 AM
Hello John,

Sorry for this late reply. I think this initiative for understanding color management and linear workflows is very valuable, and I think it would get a more positive response if these answers and concepts were presented with practical examples and images. These things can become too technical and, without graphic samples, not only too abstract but also confusing or difficult to understand. I know you are beginning with LW, but examples don't need to be complex; better if they are simple.

About your document:


Realistic lighting attempts to represent the way in which humans perceive light.
I think realistic lighting implies representing two main things:

1. How light behaves in nature
2. How human vision perceives light behavior and how imaging devices capture light and display it.

Right now the first aspect is up to the render engines, and though they don't take into account all of the variables involved in real light behavior, they are close enough to be considered accurate.

The other aspect, however, is trickier, since we could simulate how a digital camera captures the scene, or how a motion picture camera captures the scene, or how a monitor displays the scene, or how our visual perception interprets the scene... or all of them!


In other words, as humans, we impose meaning on a light scene, a meaning which doesn't exist independent of us.
:agree: This is a very important concept for our work.


Independent of human perception, light behaves in a linear fashion in the natural world. This means that, objectively, luminous values in a scene don't meld into one another to produce the gradual tonal values we perceive in a scene. Instead, there are, in reality, sudden jumps between luminous values, resulting in a scene which can have high contrast and an unevenness in luminosity which we, as humans, would find displeasing.
Linear light can be understood more easily with this example:

This is a linear ramp for our visual perception according to our monitors' gamma:

http://imagic.ddgenvivo.tv/forums/LCSexplained/perceplinramp.png

Let's notice the middle gray is exactly in the center of the ramp. However, real measurements of reflected light say that what is 50% gray to us is, in reality, 18% (some photographers claim it's about 12.5%, but that's a misconception). Then, this ramp would have this linear version:

http://imagic.ddgenvivo.tv/forums/LCSexplained/linramp.png

This means that if we split the gradient in half, the right half contains the brightest quarter of the tones; if we do the same with the left half, its right half contains the bright greys; and if we do the same with the remaining half, its right half contains the middle grays, and so on. Middle grays here are represented in only a very small part of the ramp, and dark grays in half of that.
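The 18%-to-middle-gray relationship above can be checked with a couple of lines, assuming a plain 2.2 display gamma (an approximation; sRGB's actual curve differs slightly near black):

```python
# Quick numeric check of the 18% / 50% point above, assuming a plain
# 2.2 display gamma (sRGB's real curve differs slightly near black).

def encode_gamma(linear, gamma=2.2):
    """Gamma-encode a linear-light value for display."""
    return linear ** (1.0 / gamma)

def decode_gamma(encoded, gamma=2.2):
    """Decode a gamma-encoded value back to linear light."""
    return encoded ** gamma

# 18% linear reflectance lands near the middle of the encoded ramp:
print(round(encode_gamma(0.18), 3))  # -> 0.459, i.e. roughly 50%
```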

However, this ramp does not represent linearity that way only because of our visual perception; or better said, not only because of that, but primarily because the intensity of light in monitors is non-linear. That is to say, they compress the linear input gamma by encoding it with an input function of about 0.45, which is why a 2.2 power function for gamma expansion is required to compensate:

http://imagic.ddgenvivo.tv/forums/LCSexplained/monitor_encoding.png

The curious thing is that this gamma encoding in monitors is very similar to the inverse of our visual perception of light intensity. Some experts say this is an amazing coincidence, just pure chance, but I don't believe in accidents :)

Then, be aware that imaging devices' gamma is as important in our work as our visual perception of light.


Luminosity is a technical term for brightness.
Yep. It's also well known as luminance, while brightness (also called lightness) is used to describe the perceptual response to luminance, which is an important distinction here, I think.


What, on earth, is Gamma?
Essentially, it refers to a measured ratio, expressed as a single factor, between an input and an output. For our purposes it expresses, as a single factor, the difference between the objective behavior of light and our human perception of how light behaves.
I think useful concepts are those that describe the function/utility of a given term. Gamma, in that regard, could be described as a numerical representation of the reproduction of light-intensity proportions.


For our purposes it expresses, as a single factor
I know this is to keep things from getting more complicated, but just consider that sRGB, Rec.709, film... don't have a single factor for their gamma.


What do we mean then when we describe a gamma value as an exponent?
Well, the mathematical part of this point has become too technical, I think :) But just want to clarify a concept here:


The term "exponent" simply means a " power".
As far as I understand, there's no single clear definition of a power function, since it allows a wide range of options for bases and exponents. However, what we can recognize with regard to gamma is that a power function indeed implies an exponent, and contrary to common exponentials, where the base is the constant, in this kind of power function the exponent is the constant. This means that the value for a given luminance level is the luminance of the previous level multiplied by a constant factor. Even film has an almost linear portion that behaves in this way, and it's called gamma; however, due to the relationship between its density and exposure, its curve is considered logarithmic.
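The constant-exponent point can be verified numerically: for a power function, multiplying the input by a fixed factor multiplies the output by that same factor raised to the exponent, at any luminance level. A small sketch (2.2 is just an illustrative gamma):

```python
# Numeric check of the constant-exponent idea: for f(x) = x**gamma,
# scaling the input by a fixed factor scales the output by that factor
# raised to gamma, regardless of the starting level. The 2.2 here is
# just an illustrative display gamma.

gamma = 2.2

def f(x):
    return x ** gamma

# Doubling the input always multiplies the output by 2**2.2:
for x in (0.1, 0.25, 0.4):
    print(round(f(2 * x) / f(x), 4))  # -> 4.5948 each time
```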

I think this gamma part could be made more understandable with sample images (they could be photographs). Another commonly overlooked aspect is human vision's adaptation to an extremely wide range of lighting conditions, and this aspect is one of those that really makes the difference between imaging devices and human vision.

Btw, about gamma, this document (http://www.poynton.com/notes/colour_and_gamma/GammaFAQ.html) could help to clarify some concepts.



Gerardo

toby
03-01-2009, 05:04 PM
Btw, about gamma, this document (http://www.poynton.com/notes/colour_and_gamma/GammaFAQ.html) could help to clarify some concepts.
Awesome awesome link.
Light as it pertains to video, for dummies... if you skip the math formulas, which I certainly do :)
I feel like reading this first will make reading your posts easier.

gerardstrada
03-01-2009, 05:53 PM
Oh, then this document (http://www.poynton.com/notes/colour_and_gamma/ColorFAQ.html) might help with some concepts for color management.



Gerardo

toby
03-01-2009, 06:55 PM
Ah yes I found that one too! It's linked :)

But can you help me understand one of their graphics? Something doesn't seem right.
If you read the first example the same way as the others, it looks like the video is going in linear, which isn't right; video already has the .4545 gamma baked in, which is also why you don't see the 8-bit bottleneck, that was baked in at the same time. Do I have that right?

http://www.poynton.com/notes/colour_and_gamma/video_cg_mac.gif


And just for my clarity;
All Jpegs and most images found on the web are gamma'd up 2.2
to linearize an image apply .4545 gamma to it ( 1/2.2 )

Is that right?
Is rendering with linear textures more accurate, or is it just a perception/workspace-of-the-artist type thing?

Thanks Gerardo!

allabulle
03-01-2009, 07:00 PM
Both articles are highly valuable. Thank you again Gerardo.

Oh!, by the way, the link to the PDF version of the ColorFAQ seems to direct to the GammaFAQ's PDF file instead. Just changing it in the browser's site bar by hand solved it for me and I could download the PDF version.
Or you can just type poynton.com/PDFs/ColorFAQ.pdf

I hope I made sense, it's quite late here.

gerardstrada
03-02-2009, 03:28 PM
Ah yes I found that one too! It's linked :)

But can you help me understand one of their graphics? Something doesn't seem right.
If you read the first example the same way as the others, it looks like the video is going in linear, which isn't right; video already has the .4545 gamma baked in, which is also why you don't see the 8-bit bottleneck, that was baked in at the same time. Do I have that right?

http://www.poynton.com/notes/colour_and_gamma/video_cg_mac.gif

You are right, Toby; that graph is a bit confusing because in all cases the gamma curves shown are exactly the opposite of the gamma exponent indicated. We could interpret that as the necessary gamma compensation for each case. Then it all fits: video input at 2.2, ramp compressed with 0.45, and since the monitor encodes this, a gamma compensation (2.5 in that case) is applied.

We need to consider (regarding gamma flows, not concepts) that the document is a bit old now. It seems he's referring to the gamma of the NTSC 1953 system, which indeed has a 2.2 gamma, and not to video NTSC, which has a more complex gamma formula that behaves better in darks (similar to Rec.709). Monitor gamma is not 2.5 anymore. Even Apple now recommends a 2.2 gamma for their new LCD displays, not the 1.8 it has always been, which is also indicated there.


And just for my clarity;
All Jpegs and most images found on the web are gamma'd up 2.2
to linearize an image apply .4545 gamma to it ( 1/2.2 )

Is that right?
If we consider that JPG is not a color space but an image format, then yes, we can consider JPGs from the web as gamma-encoded with sRGB gamma, which for practical purposes is very near 2.2, and most people use a .4545 value for linearization. But LW users have SG_CCTools, with which we are able to perform accurate linearizations for sRGB gamma with just one click. Is this really necessary? Well, it depends. In most images, textures, BG plates, etc., the difference between a simple gamma decoding (.45) and the real sRGB gamma is minimal, and nobody cares about it. But on gradient backgrounds and plain colors, the difference can be more noticeable depending on the luminance of those colors. The darker the color, the more noticeable the difference:

http://imagic.ddgenvivo.tv/forums/22srgb_bright.png
http://imagic.ddgenvivo.tv/forums/22srgb_dark.png
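The ramps above can be reproduced numerically. This sketch compares the standard piecewise sRGB decoding (IEC 61966-2-1) against a plain 2.2 power curve; as described, the relative difference grows toward the darks:

```python
# Hedged comparison of the standard piecewise sRGB decoding
# (IEC 61966-2-1) against a plain 2.2 power curve; the relative gap
# is largest for dark values, as noted above.

def srgb_to_linear(c):
    """Exact sRGB decoding: linear segment near black, 2.4 power above."""
    if c <= 0.04045:
        return c / 12.92
    return ((c + 0.055) / 1.055) ** 2.4

def gamma22_to_linear(c):
    """Approximate decoding with a pure 2.2 power function."""
    return c ** 2.2

for c in (0.02, 0.1, 0.5, 0.9):
    print(c, round(srgb_to_linear(c), 5), round(gamma22_to_linear(c), 5))
```

At an encoded value of 0.02, the pure 2.2 curve gives a linear result several times darker than the true sRGB decoding, while at mid and bright values the two nearly agree, which is why the difference only shows up in dark gradients and plain dark colors.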

In the sRGB case, accurate linearizations can be useful in some industries for corporate logos, product colors, or backgrounds associated with certain brands, etc.

For VFX this can be important in the Rec.709 case (where the difference can be even more noticeable) if we need to preserve more detail in darker areas when the 'artistic look' is going to be contrasted.

http://imagic.ddgenvivo.tv/forums/194rec709_dark.png

SG_CCTools also performs this type of gamma conversion in a simple way.



Is rendering with linear textures more accurate, or is it just a perception/workspace-of-the-artist type thing?

Thanks Gerardo!
Yes, rendering properly in linear light (linear textures, surface colors, light colors, environment colors, etc) provides a more realistic light behavior, and more realistic results with less tweaking in lighting and shading.


Allabulle, hey, man! how are you? Thanks for the link correction :)



Gerardo

toby
03-02-2009, 04:21 PM
Well as I recall, SG_CCTools is pc-only, so I'll have to linearize things myself to try this :|
Thanks for taking the time!

gerardstrada
03-02-2009, 04:46 PM
In that case you could use the node setups kindly shared by HDRI3D magazine here (http://www.hdri3d.com/hdrfiles/h18/p71/hdr18-71.zip), in the Special LCS Setups folder. There are accurate node setups for sRGB and Rec.709 gamma encoding and decoding. Though it's slow compared with SG_CCTools, it could be useful.



Gerardo

EDIT: Hope Sebastian Goetsch has some free time to port these tools to MAC soon :)

toby
03-02-2009, 05:00 PM
Wow that's great! It's just a hobby / study at home, so these will do fine.
Thanks again.

John Geelan
03-06-2009, 05:55 AM
... I think it would have more positive response if these answers and concepts are presented with practical examples and images

Gerardo, great to hear from you!
I've been away for a while and didn't have an opportunity to monitor this thread.
I agree with you fully regarding the need for illustrative images. I have almost completed a PDF file on colour, colour models and colour spaces. Attempting to illustrate the theory from first principles using images is taking some time, but I should have the section on colour completed by the end of next week.
I also take your point on the different gamma values used in industry. Though I was aware of this, I chose not to make a comment because it might cause confusion. As I say, I'll be posting this coming week and do look forward to your comments. :thumbsup:

John Geelan
04-11-2009, 08:25 PM
Gerardo - if you are still out there and in the land of the living:D

It has taken a while but here is the illustrated section (Part 1) on light.

So much more could have been included! With regard to maths, I decided the less, the better.

It is going to take quite a few weeks to finalise the section on colour - but it will be worth it in the end.
In the meantime this should keep us going for a while.

Looking forward to your and others' comments.:thumbsup:

I'll be away for the coming week and don't expect to have access to the forum.

allabulle
04-12-2009, 09:03 AM
Thank you John!

toby
04-12-2009, 02:59 PM
Sweet!
Really appreciate this!

gerardstrada
05-15-2009, 03:29 PM
Gerardo - if you are still out there and in the land of the living:D

I have not been taken yet... :D


It has taken a while but here is the illustrated section (Part 1) on light.

So much more could have been included! With regard to maths, I decided the less, the better.

:agree:


It is going to take quite a few weeks to finalise the section on colour - but it will be worth it in the end.
In the meantime this should keep us going for a while.

Looking forward to your and others' comments.:thumbsup:

It's basically an excellent document! Just a few comments about some curiosities; maybe you already know some of them:

In order to see the images in their intended color space (ProPhoto RGB for the images in the PDF), we need to tag the images for color management when creating the PDF and choose a rendering intent. Consider, however, that there's no monitor that can display such a gamut range. Remember as well that the 1.8 gamma value was designed for the Mac platform, where a kind of 1.4 LUT is applied to compensate toward the 2.2 gamma necessary for the image to be perceived as linear by our visual system.

Most of what you have said about linearity is indeed that way, but there's a definition just before that (at the end of the 1.2 part) where the concept becomes a bit confusing. In simple terms, linearity corresponds to a physical measure of light intensities. Human perception is not linearly proportional to these real intensities: doubling the amount of light is not perceived by our eyes as twice as bright, the same way that doubling the temperature of a coffee doesn't feel twice as hot to us. Curiously, this happens with all radiation. In a neutral environment, 18% luminance is perceived by us as about 50% because our visual perception is more sensitive to lower lighting values; this compensation allows our senses to perceive a wide range of visual stimuli and is in fact a wise gift of nature. So, as far as I understand that part, it has nothing to do with how tonal values suddenly jump between luminance values.
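The 18%-reads-as-50% point can be checked with the CIE L* lightness formula, a standard model of perceived lightness; a tiny sketch:

```python
def cie_lightness(y):
    """CIE L* (0-100): perceived lightness for relative luminance y (0-1)."""
    return 116 * y ** (1 / 3) - 16 if y > 0.008856 else 903.3 * y

print(cie_lightness(0.18))  # ~49.5: 18% luminance reads as mid grey
print(cie_lightness(0.36))  # ~66.5: doubling the light looks nowhere near twice as bright
```

So physically doubling the luminance from 18% to 36% only raises perceived lightness from roughly 49.5 to 66.5, which is exactly the non-proportional response described above.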

Btw, there's a controversy about the Kodak grey card (18%). Some photographers state that the real value is not 18% but somewhere between 12.5% and 15%. There's even a note on the Kodak grey card about compensating for this. However, all this mess began with the histograms of the new digital cameras, which makes me think it's not a problem with the original measure of the grey card but with the way the built-in histograms in some cameras fail to perform an accurate measurement.

The simultaneous contrast phenomenon that you mention is really interesting. There's even a law in traditional art about this phenomenon which says that a grey looks whiter the blacker the environment surrounding it, and vice versa.

About the linear image representations (the photograph examples), a dark and contrasted image is indeed linear. I think the confusion arises when we talk about perceptually linear/uniform images. A linear image is displayed dark and contrasted because common monitors are not able to display the whole range of real light intensities. Theoretically speaking, for example, an HDR monitor uses linear gamma for the opposite reason.
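Why linear data looks dark on an ordinary monitor can be sketched in two lines: the display applies roughly a 2.2 power to its input signal (the usual assumption for a standard monitor), so linear values sent straight to it get crushed unless they are gamma-encoded first:

```python
def displayed_luminance(signal, display_gamma=2.2):
    """Approximate luminance a typical monitor emits for an input signal (0-1)."""
    return signal ** display_gamma

linear_mid_grey = 0.18
print(displayed_luminance(linear_mid_grey))               # ~0.023: mid grey shown near black
print(displayed_luminance(linear_mid_grey ** (1 / 2.2)))  # ~0.18: correct once gamma-encoded
```

Sending raw linear 18% grey to such a display emits only about 2% luminance, which is the dark, contrasty look of an unencoded linear render.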

About the 'secretive' linearity of RAW images: at least for Adobe DNG, as far as I understand, we can preserve the raw image by keeping its 'mosaic' format, or convert the image to linear data by 'demosaicing' the image pattern. In this process, the Adobe DNG app assigns to the image the internal color space supported for that camera. This color space has only a linear gamma value, so no gamma function is applied to this data. From there we can convert to sRGB or aRGB, or assign a log version of the original color space if one is available.

Btw, though our visual perception spans between 380 nm and 780 nm, something curious about the visible spectrum bar is that magenta/purple is not a spectral color, which means we won't find it in this bar: it has no single wavelength in the visible spectrum, which implies we won't see this color in a simple prism dispersion. To see it, we need two prisms, to mix the first spectral color (red) with the last one (violet). Because of this, some people think magenta exists only as mental imagery (which is the case with all colors, I think) and even propose it as objective proof of the controversial and vaguely defined concept of qualia.



Gerardo

probiner
07-24-2010, 10:14 PM
Oh wow. So much good info in this thread.

Threads like this should have a "Following Users say Thank You" button.