
Definitive Color Space Guide



spherical
07-21-2014, 10:37 PM
Let's build one. Please! Perhaps this ended up being it... (Certainly took long enough to research and assemble.)

I'm tired of going around in circles reading thread after thread on Color Space (CS) settings and how they should best be set for the mainstream. Threads on the topic that I have read many times, spread all over the place, with good tidbits are:

Managing color space in LW -> AE -> x264 workflow? (http://forums.newtek.com/showthread.php?141111-Managing-color-space-in-LW-gt-AE-gt-x264-workflow)
LightWave and Wide Gamut monitors (http://forums.newtek.com/showthread.php?136662-LightWave-and-Wide-Gamut-monitors)
Is there a sticky somewhere on color space options? (http://forums.newtek.com/showthread.php?134060-Is-there-a-sticky-somewhere-on-color-space-options)

I've read the v10 manual, which has a really nice definition of the items in the CS panel, the list of color spaces included, and which LUTs can be loaded and the format they're in, but zero on how to actually use any of it or why. Searches through all of the addenda since v10 come up with zero, too.

There's no video that I can find on NewTek's LightWave Training site.

I've read Matt Gorner's The Beginners Explanation of Gamma Correction and Linear Workflow PDF. I've watched the video he put together using the same document. It and the references cited above only go so far. They all stop short of actual implementation and what said implementation does to the data, aside from warning not to apply Gamma corrections twice or not at all.

gerardstrada and dwburman have posted some tips and sometimes point to the use of SG_CCTools, which become relatively complex and, AFAIK, don't work everywhere in LightWave because it's a filter.

"Use sRGB" is quite often repeated but few, if any times, concisely and unambiguously explained. However, when looking at the CS panel and having gleaned some concepts from the above threads and others, it would appear on the surface that "just use sRGB" may not necessarily be the answer for obtaining the best renders that have the least amount of colors thrown away when saving them. Blindly following advice, when there is no clear explanation from the developers to check with and obtain corroborative information to better understand what is being proffered, can often lead one into confusion, if not trouble.

First question is:
Why, if sRGB is the default go-to setting cited by nearly everyone, is it NOT default?
Linear is. LightWave works in Linear internally, no matter what CS one chooses, because the CS option chosen influences the Input/Output. But the default "Disabled" is Linear for everything. If sRGB is what everyone except those with special workflows should be using, why is it not the default when LightWave opens its CS panel, letting the cases with special requirements (and users who possess greater knowledge of the topic) CHOOSE all Linear (Disabled) for everything if they see fit? IOW, those whose post processing needs Linear as direct input.

Second:
Why is Output Color Space set to sRGB in the sRGB preset?
I get why the other Input/Output settings need to be sRGB, because they are either what you are seeing and choosing from or imported images need to be converted. However, if you don't want to truncate colors by converting on saving to sRGB, a limited gamut because it's for monitors, shouldn't there be a high gamut Color Space choice that doesn't throw away color data when the render is saved?


Since sRGB serves as a "best guess" metric for how another person's monitor produces color, it has become the standard color space for displaying images on the Internet. sRGB's color gamut encompasses just 35% of the visible colors specified by CIE, whereas Adobe RGB (1998) encompasses slightly more than 50% of all visible colors. Adobe RGB (1998) extends into richer cyans and greens than does sRGB – for all levels of luminance. The two gamuts are often compared in mid-tone values (~50% luminance), but clear differences are evident in shadows (~25% luminance) and highlights (~75% luminance) as well. In fact, Adobe RGB (1998) also extends its advantage into regions of intense orange, yellow, and magenta.
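
To put rough numbers on that difference, here's a quick back-of-the-envelope sketch of my own (nothing LightWave does internally): it just compares the areas of the two primary triangles on the CIE xy chromaticity diagram using the published primaries. The quoted percentages are measured against the whole visible gamut, usually in a uniform color space, so the figures won't match exactly, but the trend is the same.

# Rough comparison of sRGB vs Adobe RGB (1998) gamut size on the CIE xy
# chromaticity diagram. Triangle areas are only a crude proxy for gamut
# coverage, but they show how much the Adobe RGB green primary buys.

def triangle_area(p1, p2, p3):
    """Area of a triangle from three (x, y) points (shoelace formula)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2.0

# Published xy chromaticities of the primaries (both spaces use a D65 white).
SRGB = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]   # R, G, B
ARGB = [(0.64, 0.33), (0.21, 0.71), (0.15, 0.06)]   # R, G, B -- only green differs

a_srgb = triangle_area(*SRGB)
a_argb = triangle_area(*ARGB)
print(f"sRGB triangle area:      {a_srgb:.4f}")
print(f"Adobe RGB triangle area: {a_argb:.4f}")
print(f"Adobe RGB / sRGB ratio:  {a_argb / a_srgb:.2f}x")   # roughly 1.35x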

Therefore, it would make sense that AdobeRGB (1998), or other, as an output (not display) color space should be available.

Third:
Why is sRGB the only output choice that embeds a Color Profile that can be converted to your Working RGB profile in your image editor of choice when first opened?
This would seem a given if consistent color is to be maintained in any workflow. If you look at the Metadata of a layer, you see that Adobe RGB is the listed value in the Exif Color Space field but the profile is not recognized as being attached by Photoshop.

Fourth:
Wouldn't you think that saving an image as Linear and opening in Photoshop, then adjusting Gamma to 2.2 be the preferred workflow in order to preserve the most data?
What you get, however, is a "Picket Fence" Histogram after the adjustment. Colors/values are spread across the spectrum from what appears to actually be limited original data in the Linear file.

The LightWave conversion to Gamma 2.2 using the sRGB preset generates a solid histogram.
Doing a Gamma adjustment while dropping from 32bpc to 8bpc in an EXR generates the same solid histogram.
(As a side note, why is EXR Half not 16bpc? Both FP and Half show in PS as being 32bpc.)

Conclusion I:
So, it appears that sRGB preset is the choice for most users; setting aside, of course, the missing higher gamut Color Profiles that really should be available to embed in order to properly convert from in an image editing application.

BUT
This is referred to as working in "A Linear Workflow". Yet, to get that, we're "working" in sRGB. If you leave the default that shows when you first open the CS tab—where everything says "Linear" and the drop-down says "Disabled", you're not working in a Linear Workflow.

Confused yet?

Conclusion II:
It also appears that using EXR is the best overall choice as an output file format, as it sidesteps a number of these issues and has many other advantages (http://wolfcrow.com/blog/the-advantages-of-using-openexr/). Employing LightWolf's db&w exrTrader (http://www.db-w.com/products/exrtrader/about) in your pipeline makes it even more so.

Lightwolf
07-22-2014, 07:02 AM
Second:
Why is Output Color Space set to sRGB in the sRGB preset?

One major reason is adaptive sampling. The Output Color Space also affects how the threshold of pixels to be sampled further is computed. This can make quite a difference and thus the Output Color Space should be close to the final colour space of your project.
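
To illustrate why that matters (a toy example of my own, not LightWave's actual sampler): a fixed "refine if neighbouring samples differ by more than t" rule behaves very differently depending on whether the difference is measured on linear values or on output-encoded values, especially in the shadows.

# Toy illustration of why the colour space used for the adaptive-sampling
# threshold matters. This is NOT LightWave's algorithm -- just the same pair
# of dark linear samples measured against one fixed threshold, once in
# linear and once after sRGB encoding.

def linear_to_srgb(v):
    """Standard piecewise sRGB encoding for a linear value in [0, 1]."""
    return 12.92 * v if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

threshold = 0.05          # hypothetical refinement threshold
a, b = 0.010, 0.030       # two neighbouring samples in a shadow region

diff_linear = abs(a - b)
diff_srgb = abs(linear_to_srgb(a) - linear_to_srgb(b))

print(f"linear difference: {diff_linear:.4f} -> refine? {diff_linear > threshold}")
print(f"sRGB   difference: {diff_srgb:.4f} -> refine? {diff_srgb > threshold}")
# In linear the difference (0.02) looks negligible and would be skipped,
# but after sRGB encoding it is about 0.09 -- clearly visible on a monitor.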


Employing LightWolf's db&w exrTrader (http://www.db-w.com/products/exrtrader/about) in your pipeline makes it even more so.
Thanks, that's nice to read. As a side note, exrTrader always saves EXR as linear (unless you export as a separate image and manually apply a colour space in exrTrader) - for the simple reason that EXR is only specced for storing linear image data in the first place.

Cheers,
Mike

Markc
07-22-2014, 11:51 AM
I must be working in the wrong Color Space in my browser......I have to highlight each of the questions with my mouse to read them :D

gerry_g
07-22-2014, 01:06 PM
I noticed a marked difference when ignoring the colour space setup tab in version 10 as opposed to 11+. When I load an EXR or similar high-dynamic-range float image for background illumination, the colours are now skewed towards yellow in the highlights and are way more contrasty: the whites pop and the blacks are deeper. Not only that, they cause VPR to render more slowly. Only when everything is correctly set to sRGB does it work as intended.

spherical
07-22-2014, 06:26 PM
One major reason is adaptive sampling. The Output Color Space also affects how the threshold of pixels to be sampled further is computed.

Ah, I remember that now. Thanks.


This can make quite a difference and thus the Output Color Space should be close to the final colour space of your project.

sRGB is "close to" AdobeRGB (I use BruceRGB, as it has a wider gamut than AdobeRGB), whereas Linear is way off. Linear didn't react well to the Gamma adjustment, so sRGB it is. It'd be great to be able to load and embed specific output color space profiles, for formats that support them, so that there is only one conversion process at the start of the pipeline.


Thanks, that's nice to read.

My pleasure. :)

spherical
07-22-2014, 06:30 PM
I must be working in the wrong Color Space in my browser......I have to highlight each of the questions with my mouse to read them :D

Using a light theme for the forum? I'm using BP-Brown, as it's easier on the eyes. I guess that when a post specifies a font color, the flipped CSS values of a light theme don't override that specification to create a compatible contrast. Important safety tip.

pinkmouse
07-23-2014, 02:45 AM
Incompatible colour spaces? :)

I use vB4 Default, much easier on my eye, and I too, can't read a word of your yellow text. Even if I could though, I don't like coloured text, it looks childish to me. What's wrong with Bold, Underline or Italic? ;)

spherical
07-23-2014, 04:09 AM
Yep, that's a bright one. To each his own... Childish? Seriously, you want to go there?
If I could go back and edit the post right now, I'd do it; just to KEEP THE POSTS ON TOPIC. (There's all three.)

gerardstrada
07-26-2014, 08:22 PM
There's no video that I can find on NewTek's LightWave Training site.
Have you seen this video (http://www.youtube.com/watch?v=D2iDv9hnQiw)?


gerardstrada and dwburman have posted some tips and sometimes point to the use of SG_CCTools, which become relatively complex and, AFAIK, don't work everywhere in LightWave because it's a filter.
Not only does it work everywhere in LW (Filter, Node and Picker), but it also plays well with the native LW implementation. I'm using a workflow here where we gamma convert with native LW tools and let SG CCTools take charge of the gamut mapping and final preview. But as you say, not for everybody.


Why, if sRGB is the default go-to setting cited by nearly everyone, is it NOT default?
I guess if the default is the option that applies when no decision is taken, the logic would be that, since the render engine works internally in linear light, it assumes linear as the default. But if the default is the option that applies for lack of opposition, then I think you have a point. In that case, and considering what the CS panel offers right now (which can still be improved a lot), the sRGB preset seems to be the most used one. You have probably noticed that you can set up a default preset of your own choice by configuring your preset (save it if new) and closing LW. When you open it again, the default preset is the last one you chose. We can also set these things up in the Config file.


LightWave works in Linear internally, no matter what CS one chooses
If we don't choose the CS options carefully, we won't really be working in linear light. If, let's say, we assume sRGB for an input image that is really HDTV (Rec 709), then the colors for this image won't really be linear. Now imagine how things could go wrong if the gamut or white points are also different...


Why is Output Color Space set to sRGB in the sRGB preset?
Because the sRGB preset is intended to input, process (render) and output in the sRGB standard. Something most people don't realize is that the internal process (render) is conformed according to the input colorspace, so with the sRGB preset the render is performed linearly but according to sRGB characteristics.


because they are either what you are seeing and choosing from or imported images need to be converted.
The curious thing is that Light and Picked colors depend on the working display standard used. There's no sRGB gamma for computer monitors; it's just 2.2 for the sRGB/aRGB monitor standards. There are also other display standards, but none of them uses the sRGB gamma. What CS would need in the sRGB preset for these parameters is a 2.2 gamma table with sRGB chromaticities and white point.
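
For anyone wondering how far apart the piecewise sRGB curve and a plain 2.2 gamma actually are, here's a quick sketch of my own using just the standard formulas (nothing LW-specific): they track each other closely through the mid-tones and highlights but diverge near black, which is where the distinction between an "sRGB gamma" and a "2.2 gamma table" shows up.

# Compare the piecewise sRGB encoding curve with a plain 2.2 power law.
# The two agree closely in the mid-tones and highlights but diverge
# noticeably in the deep shadows.

def srgb_encode(v):
    """Piecewise sRGB encoding (linear -> display code value, both 0..1)."""
    return 12.92 * v if v <= 0.0031308 else 1.055 * v ** (1 / 2.4) - 0.055

def gamma22_encode(v):
    """Plain 2.2 power-law encoding."""
    return v ** (1 / 2.2)

for linear in (0.001, 0.01, 0.05, 0.18, 0.5, 0.9):
    s, g = srgb_encode(linear), gamma22_encode(linear)
    print(f"linear {linear:5.3f}: sRGB {s:.4f}  gamma 2.2 {g:.4f}  delta {abs(s - g):.4f}")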


However, if you don't want to truncate colors by converting on saving to sRGB, a limited gamut because it's for monitors, shouldn't there be a high gamut Color Space choice that doesn't throw away color data when the render is saved?
With how color management is designed in LW right now, it doesn't really make a difference, seeing that if your input colors are sRGB and your display color space is also sRGB, you'll end up conforming the final color appearance to sRGB. So even if you save to a colorspace with a wider gamut, your final colors will always be in the sRGB standard. This is what happens when one works in output-referred color spaces; it not only limits the color range from the start but also affects the results (read: realism) of our renders. There are color workflows that eliminate these disadvantages, and there are also things LW could implement so that people without any color management knowledge could overcome this limitation without too much hassle. Surprisingly, no 3D package has done this yet. You might want to take a look at this (http://forums.newtek.com/showthread.php?129020-3D-World-needs-you&p=1253424&viewfull=1#post1253424).


Therefore, it would make sense that AdobeRGB (1998), or other, as an output (not display) color space should be available.
Do the test. Work in sRGB, and even if we save as aRGB, we'll end up with sRGB colors at the end of the day, because when we assign a wider colorspace to an image conformed to a narrower colorspace, we get odd, oversaturated results. Then we need to come back to the sRGB preset to get the original color appearance. The only way to generate an image in one color space and get another color space while keeping the color appearance is by color converting the image, which implies gamut mapping, and that should be done in the floating-point domain to come close. Since LUT formats don't handle gamut mapping by default, this cannot be done properly without a CM system in that context. And even though ICC profiles v4.3 can handle robust gamut mapping for free in the FP domain, the proper way of doing this is to generate the image in the wider color space in the first place. There's no post-solution that can beat that.
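
Here's a small numpy sketch of my own of that assign-vs-convert distinction, built only from the published sRGB and Adobe RGB primaries (it is not what LW or any plugin does internally): assigning wider primaries to the same numbers changes the color appearance, while converting through XYZ preserves the appearance but lands outside the narrower gamut.

# "Assigning" vs "converting" between sRGB and Adobe RGB (1998), using
# matrices derived from the published primaries and D65 white point.
import numpy as np

def rgb_to_xyz_matrix(prims, white):
    """Build a linear RGB -> XYZ matrix from xy primaries and a white point."""
    cols = [[x / y, 1.0, (1 - x - y) / y] for x, y in prims]
    M = np.array(cols).T                      # columns = XYZ of R, G, B (Y = 1)
    xw, yw = white
    white_XYZ = np.array([xw / yw, 1.0, (1 - xw - yw) / yw])
    S = np.linalg.solve(M, white_XYZ)         # scale primaries to hit the white point
    return M * S

D65  = (0.3127, 0.3290)
SRGB = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]
ARGB = [(0.64, 0.33), (0.21, 0.71), (0.15, 0.06)]

M_srgb = rgb_to_xyz_matrix(SRGB, D65)
M_argb = rgb_to_xyz_matrix(ARGB, D65)

green = np.array([0.0, 1.0, 0.0])             # pure linear green

# "Assigning": same code values, different primaries -> different XYZ (different look).
print("XYZ if treated as sRGB:     ", M_srgb @ green)
print("XYZ if treated as Adobe RGB:", M_argb @ green)

# "Converting": map the Adobe RGB green into sRGB via XYZ. The look is preserved,
# but the sRGB coordinates fall outside 0..1 -- it is simply out of gamut.
print("Adobe RGB green expressed in sRGB:", np.linalg.solve(M_srgb, M_argb @ green))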


Why is sRGB the only output choice that embeds a Color Profile that can be converted to your Working RGB profile in your image editor of choice when first opened?
I guess you are referring to some 8-bpc formats. In that case LW doesn't embed any color profile; it's just tagging a specific colorspace. The difference between tagging and embedding is that the former only specifies which colorspace the image editor should use, while with the latter the color profile is actually inlaid in the metadata. Why only sRGB? Because the Exif specification, in its ColorSpace tag, defines that if a color space other than sRGB is used, "Uncalibrated" is set. Uncalibrated means no colorspace (= nothing) that PS (or any other image editor) can read. But really, only sRGB? This is a legacy spec from 1998 (when newer monitor standards didn't exist), and because the sRGB (=1) standard gives strict colorspace definitions based on standard monitor characteristics and viewing conditions. It's also compatible with the Rec 709 and Rec 601 standards, and there's also a relationship with YCC space. This is the case for the ColorSpace tag in TIFF and JPEG files, at least.


you see that Adobe RGB is the listed value in the Exif Color Space field but the profile is not recognized as being attached by Photoshop.
The thing is that the LW metadata says aRGB, but the colorspace actually tagged (at least for JPEG and TIFF) is =1, which is sRGB. Reading the metadata in ExifTool or any other metadata reader/editor, we'll see that sRGB is actually tagged, not aRGB. So PS is just interpreting the thing as the Exif spec defines. Moreover, as far as I can tell, if the Exif value is 1, PS will assign sRGB, but if we change this value to 2, PS will recognize the aRGB colorspace automatically.
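
If anyone wants to check or change that tag themselves, here's a short sketch assuming the third-party piexif library and a hypothetical file name (ExifTool does the same job from the command line):

# Inspect / rewrite the Exif ColorSpace tag of a JPEG. Per the Exif spec,
# 1 = sRGB and 0xFFFF = Uncalibrated; 2 is the unofficial value that, per
# the discussion above, Photoshop reads as Adobe RGB. "render.jpg" is just
# a placeholder name.
import piexif

exif_dict = piexif.load("render.jpg")
current = exif_dict["Exif"].get(piexif.ExifIFD.ColorSpace)
print("ColorSpace tag is currently:", current)

# Re-tag as Adobe RGB in the way described above (value 2):
exif_dict["Exif"][piexif.ExifIFD.ColorSpace] = 2
piexif.insert(piexif.dump(exif_dict), "render.jpg")
print("ColorSpace tag rewritten to 2")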

It looks like the LW OpenEXR format is indeed assigning the aRGB colorspace. If we check the primaries and white point in the metadata, we'll see they are clearly Adobe RGB, which is inconsistent, since LW doesn't have an aRGB preset anywhere. Recently the latest Exif spec added a Gamma tag, so for other colorspaces LW could use the WhitePoint, BlackPoint, PrimaryChromaticities and Gamma tags to define them, I guess.


Wouldn't you think that saving an image as Linear and opening in Photoshop, then adjusting Gamma to 2.2 be the preferred workflow in order to preserve the most data? What you get, however, is a "Picket Fence" Histogram after the adjustment. Colors/values are spread across the spectrum from what appears to actually be limited original data in the Linear file.
I have the idea you are saving to Linear in an 8-bpc format. In that case you will see a "Picket Fence" histogram. Just in case: to save in linear, we need to save in some floating-point format. So if you have used the sRGB preset in LW, save as EXR, and when loading it in PS just assign the sRGB profile (not the embedded chromaticities, because LW's OpenEXR uses aRGB primaries) and you are ready to go.
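
A small numpy sketch of my own of why that picket fence appears: an 8-bpc linear file only has a couple of dozen code values in the deep shadows, and gamma-correcting afterwards stretches them across many more output levels, leaving gaps, whereas doing the gamma adjustment on float data and quantizing last keeps the histogram solid.

# Gamma-correcting 8-bit *linear* data vs quantizing after a float gamma
# adjustment. The first path leaves gaps ("picket fence"), the second does not.
import numpy as np

linear = np.linspace(0.0, 1.0, 100_000)                 # smooth linear ramp

# Path 1: save linear to 8-bpc, then gamma-correct (the picket fence case).
linear_8bit = np.round(linear * 255) / 255
fence = np.round((linear_8bit ** (1 / 2.2)) * 255).astype(np.uint8)

# Path 2: gamma-correct in float, then drop to 8-bpc (roughly what the
# sRGB preset, or PS's 32 -> 8 bpc conversion, does).
solid = np.round((linear ** (1 / 2.2)) * 255).astype(np.uint8)

print("levels used, 8-bit linear then gamma:", len(np.unique(fence)), "of 256")
print("levels used, float gamma then 8-bit: ", len(np.unique(solid)), "of 256")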


The LightWave conversion to Gamma 2.2 using the sRGB preset generates a solid histogram. Doing a Gamma adjustment while dropping from 32bpc to 8bpc in an EXR generates the same solid histogram.
Be aware that PS performs the gamma adjustment automatically when converting from a linear 32-bpc image to a gamma-corrected 8-bpc image, so no manual gamma adjustment is necessary in PS in that case.


(As a side note, why is EXR Half not 16bpc? Both FP and Half show in PS as being 32bpc.)
Because both (full FP and half FP) are 32-bpc; it's just that one has higher precision than the other: a 5-bit exponent and 10 bits of significant digits in the half version, versus an 8-bit exponent and 23 bits of significant digits in the full version. This means, e.g. for a linear image, the half version provides 1024 values per f-stop over 30 f-stops, while the full version provides 8,388,608 values per f-stop over 254 f-stops.


Conclusion I:
So, it appears that sRGB preset is the choice for most users;
Yep.


setting aside, of course, the missing higher gamut Color Profiles that really should be available to embed in order to properly convert from in an image editing application.
I think you might want to reconsider this conclusion.


BUT

This is referred to as working in "A Linear Workflow". Yet, to get that, we're "working" in sRGB. If you leave the default that shows when you first open the CS tab—where everything says "Linear" and the drop-down says "Disabled", you're not working in a Linear Workflow.

Confused yet?
I understand the idea of the preset names to be defining from which colorspace you want to "linearize" your colors and work properly in linear light. If your colors come from sRGB, you choose the sRGB preset; if your colors come from Rec709, you choose the Rec709 preset.


Conclusion II:
It also appears that using EXR is the best overall choice as an output file format, as it sidesteps a number of these issues and has many other advantages. Employing LightWolf's db&w exrTrader in your pipeline makes it even more so.
:i_agree:



Gerardo

spherical
07-27-2014, 12:19 AM
Finally. Stuff that makes sense. Thank You! AND Thank You for being explicit in your answers. Not for everyone but I sure appreciate it. Then again, I'm not "everyone"; never have been. :)

Rather than weed through the quoting process, I'll attempt to identify that to which I am responding by formatting my responses accordingly.

Yes, I had seen that video when it came out. More of a promo than a tutorial, so it left a lot out. Putting it all in would obviously have made it incomprehensible and boring for most, so editing is a Good Thing, but a pointer to another video with in-depth information might be good, too.

Indeed, the Linear TIFF was saved out as 24-bit. Saving as 24-bit FP, it comes into PS already Gamma corrected in the display, same as the EXR did (because it's FP by default), so, as you say, no correction necessary and the bit drop is handled smoothly. Gotta get more and larger disks...

Thanks for the explanation of the EXR Full/Half. That distinction wasn't clear in any data I could unearth.

Thanks for the pointer to that earlier thread, where you discussed and showed examples of output-referred and scene-referred workflows. However, I have not an unambiguous idea of the difference between the two and how you work in real scene-referred space. Can you elaborate, Please?

There's more questions, I'm sure but this'll be sufficient for the present. Thanks, again. I always look forward to topics in which you post, as I never fail to learn something. May not understand it all right off, but....

Lightwolf
07-27-2014, 08:17 AM
Because both (full FP and half FP) are 32-bpc; it's just that one has higher precision than the other: a 5-bit exponent and 10 bits of significant digits in the half version, versus an 8-bit exponent and 23 bits of significant digits in the full version. This means, e.g. for a linear image, the half version provides 1024 values per f-stop over 30 f-stops, while the full version provides 8,388,608 values per f-stop over 254 f-stops.

Just a little correction here. Half-FP is indeed 16-bit. However, applications that don't allow for the native storage of half values convert up to 32-bit. Others, such as Fusion or Nuke, can keep them in RAM in the native format. The downside is that the values need to be converted (on the fly) to 32-bit values for computation (on the CPU, modern GPUs can compute with half values natively), which can slow down processing. The advantage is, obviously, that it only uses half as much memory (which, indirectly, also affects processing speed since they require less memory bandwidth and cache memory on the CPU).
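
For anyone curious what that precision difference looks like in practice, a quick numpy check (numpy's float16 is the same IEEE 754 half format OpenEXR uses):

# Half float (1 sign / 5 exponent / 10 mantissa bits = 16 bits) vs full
# float (1 / 8 / 23 bits = 32 bits): precision and storage side by side.
import numpy as np

value = 1.23456789
print("float32:", np.float32(value))    # about 7 significant decimal digits
print("float16:", np.float16(value))    # about 3 significant decimal digits

# Smallest step between representable values around 1.0:
print("float32 eps:", np.finfo(np.float32).eps)   # ~1.2e-07
print("float16 eps:", np.finfo(np.float16).eps)   # ~9.8e-04

# And the obvious storage win -- half the bytes per channel:
print("bytes per channel:", np.dtype(np.float16).itemsize,
      "vs", np.dtype(np.float32).itemsize)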

Cheers,
Mike

spherical
07-27-2014, 04:48 PM
OK, that makes sense, as my test file full is 4.9MB on disk and half is 1.5MB. Gotta store those extra bits somewhere. Just to see what happens, I tried a downsample on the half to 16bpc and the Doc: values in PS dropped from 4.94M/6.53M to 2.47M/3.30M. It now shows in PS as being 16-bpc, but the size on disk doubled to 3.3MB. The full version, downsampled to 16bpc, arrived at the same Doc values and size on disk, which is to be expected, as it is discarding data, while the half downsampled to the same level appears to have gained extraneous data, swelling its file size, apparently needlessly.

Lightwolf
07-27-2014, 05:17 PM
Don't forget, 16bpc in PS is integer, not float.

Cheers,
Mike

spherical
07-27-2014, 07:22 PM
AH, of course. Thanks, Mike.

gerardstrada
07-28-2014, 06:55 AM
Just a little correction here.
Indeed! Sorry, my mistake. I meant floating point not 32-bpc.


Not for everyone but I sure appreciate it. Then again, I'm not "everyone"; never have been
Thanks for your appreciation, and very sorry for not explaining myself well; I was not referring to the person at all but to the type of output mediums we work for. The need for these workflows is more noticeable when we work for output mediums with color ranges outside the sRGB characteristics (basically laser/holographic projections, film and print). But as I showed in that thread, the differences are noticeable even for sRGB outputs, so it is very useful for any output medium and, I think, the logical way of working.


I have not an unambiguous idea of the difference between the two and how you work in real scene-referred space. Can you elaborate, Please?
The main differences between the two ways of working: with output-referred workflows we provide the render engine with a very limited color range to start with (commonly just a third of the visible spectrum), usually working with "false" linear pictures (since they are linearized taking only gamma into account), basing the whole color flow on LDR standards encoded for home and office viewing conditions, previewing the raw render with just a gamma correction, and being able to output only to web, video and TV. In a scene-referred workflow we provide the renderer with a range that covers all perceivable colors (something that affects several aspects of realism in renders), work with real linear images (not only linear gamma but linear internal tone curves as well), base the color flow on HDR (FP) standards encoded for real high-luminance viewing conditions, preview the raw render with gamma correction and also tone reproduction according to the display, and are able to output not only to sRGB-ish mediums but to ALL output mediums, all from the same raw render... These workflows imply that we generate textures and matte paintings, process input photographs and footage, reconstruct lighting in light probes, and generate and post-process images in specific ways that ensure we always get the same consistent results no matter what image device we use for capturing, generating, processing, displaying or outputting images.
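
To make "previewing with gamma correction and also tone reproduction according to the display" a little more concrete, here's a minimal sketch of one common shape for that display-rendering step, using a simple global Reinhard curve purely as a stand-in; the operators in the workflows Gerardo describes may well be different.

# Scene-referred (linear, unbounded) pixel values -> sRGB display values:
# exposure, then a simple global tone curve, then the sRGB encoding.
# The Reinhard curve is just a placeholder for a real tone reproduction operator.
import numpy as np

def srgb_encode(v):
    """Piecewise sRGB encoding, element-wise, for values in [0, 1]."""
    v = np.clip(v, 0.0, 1.0)
    return np.where(v <= 0.0031308, 12.92 * v, 1.055 * v ** (1 / 2.4) - 0.055)

def display_render(scene_linear, exposure=0.0):
    """Map scene-referred floats (any range >= 0) to 8-bit display values."""
    exposed = scene_linear * (2.0 ** exposure)    # photographic exposure, in stops
    tone_mapped = exposed / (1.0 + exposed)       # Reinhard: [0, inf) -> [0, 1)
    return np.round(srgb_encode(tone_mapped) * 255).astype(np.uint8)

# A scene-referred pixel row spanning deep shadow to a 16x-over-white highlight:
scene = np.array([0.001, 0.01, 0.18, 1.0, 4.0, 16.0])
print(display_render(scene))    # everything lands in 0..255 without hard clipping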


OK, that makes sense, as my test file full is 4.9MB on disk and half is 1.5MB. Gotta store those extra bits somewhere. Just to see what happens, I tried a downsample on the half to 16bpc and the Doc: values in PS dropped from 4.94M/6.53M to 2.47M/3.30M. It now shows in PS as being 16-bpc, but the size on disk doubled to 3.3MB. The full version, downsampled to 16bpc, arrived at the same Doc values and size on disk, which is to be expected, as it is discarding data, while the half downsampled to the same level appears to have gained extraneous data, swelling its file size, apparently needlessly.
Just in case, ProEXR (http://www.fnordware.com/ProEXR/) by Brendan Bolles is able to save in half FP or full FP EXR within PS. The ProEXR EZ version is free, btw.



Gerardo

spherical
07-28-2014, 07:05 PM
Thanks for your appreciation, and very sorry for not explaining myself well; I was not referring to the person at all but to the type of output mediums we work for.

Neither was I. I was just referring to myself, how I work and attempt to fully understand concepts down to a common base.


The need for these workflows is more noticeable when we work for output mediums with color ranges outside the sRGB characteristics (basically laser/holographic projections, film and print). But as I showed in that thread, the differences are noticeable even for sRGB outputs, so it is very useful for any output medium and, I think, the logical way of working.

Agreed.

I really liked this thread: https://forums.adobe.com/message/4573185

Main differences between the two ways of working: with output-referred workflows we provide the render engine with a very limited color range to start <SNIP> In a scene-referred workflow we provide the renderer with a range that covers all perceivable colors

Is this standing on one's head difficult in LW? As in all things, the law of diminishing returns applies, but I'd like to see what is possible with some of my projects that have exhibited dynamic range issues.


Just in case, ProEXR (http://www.fnordware.com/ProEXR/) by Brendan Bolles is able to save in half FP or full FP EXR within PS. The ProEXR EZ version is free, btw.

Thanks! Trying that out now in Ps, Ae and Pr. I notice that Ae has been shipping with EZ since CS5.5 and Ps since CS6.

gerardstrada
07-29-2014, 08:29 PM
Neither was I. I was just referring to myself, how I work and attempt to fully understand concepts down to a common base.
Then you might find these workflows interesting when they get published.


I really liked this thread: https://forums.adobe.com/message/4573185
Interesting. Thanks for the link.


Is this standing on one's head difficult in LW? As in all things, the law of diminishing returns applies, but I'd like to see what is possible with some of my projects that have exhibited dynamic range issues.
The procedure within LW is not difficult, because by the time things come to LW everything is already in the appropriate state. Btw, what type of DR issues? Just in case: color range in this context means gamut. A way to see the difference clearly is to generate an HDR gradient of the most saturated green colors in sRGB and another in aRGB. Then save the sRGB greens as HDR and the aRGB greens as JPG. If we view both on an aRGB monitor, you'll be able to tell the difference. The LDR greens (aRGB JPG) will have more color range (a wider gamut) than the HDR greens (sRGB HDR), but they will have less dynamic range. Wide-gamut colorspaces need more bits per color component to avoid quantization (seen as posterization). That's why the HDR (Radiance) format is not as good as EXR for wide-gamut colorspaces.



Gerardo

spherical
07-30-2014, 03:06 AM
SssssOOOOOooooo...... the point of the output-referred and scene-referred comparison post was that current versions of LightWave, when operating with the sRGB CS choice and similar, are scene-referred, and earlier versions, where there was no CS choice, were output-referred?

gerardstrada
07-31-2014, 01:55 AM
When using sRGB CS choice we are working in output-referred space, because sRGB is output-referred. The output reproduction medium in sRGB case is the computer monitor. The point of the post about the scene-referred/output-referred comparison was just to show you that the proper way to not throw away color data is by working in scene-referred state. Working in sRGB won't prevent that since we are limiting the gamut from the start.



Gerardo

spherical
07-31-2014, 03:13 AM
When using sRGB CS choice we are working in output-referred space, because sRGB is output-referred. The output reproduction medium in sRGB case is the computer monitor.

OK. I get that.


The point of the post about the scene-referred/output-referred comparison was just to show you that the proper way to not throw away color data is by working in scene-referred state.

Which is done... how? I believe that this is the concept that I was trying to approach originally; not throwing away color/value data needlessly from the start.


Working in sRGB won't prevent that since we are limiting the gamut from the start.

Wouldn't that be "would prevent that"? Otherwise it seems to contradict all that came before. Either that, or I'm REALLY not grokking this. Perhaps I am, but terming things differently/incorrectly. I dunno... time to crash for the day. Thanks for hanging in with me!

gerardstrada
08-01-2014, 05:21 AM
Which is done... how?
We need to input colors and images in a scene-referred state, not output-referred as happens with the sRGB way of working. It's a similar principle to what happens with gamma-corrected and linear workflows. You have surely heard "the render engine works internally in linear light", so "we need to put all input colors and images in linear space". Well, that's part of the story. The actual story is that the render engine not only generates linear-light images, it generates images in a scene-referred state, which is a wider concept than just linear space. This is something that most color management solutions in 3D packages have overlooked for years, and what the workflows proposed in the upcoming article try to address.


I believe that this is the concept that I was trying to approach originally; not throwing away color/value data needlessly from the start.
When working for web, TV or video and using the sRGB way of working, we won't be throwing away color data after the render, because that's just what the output medium needs. We are limiting color data before the render, though, and the results won't be entirely realistic, since we are inputting colors according to the limits of our monitor. The differences will be more noticeable in the more saturated range of colors.


Wouldn't that be "would prevent that"? Otherwise it seems to contradict all that came before. Either that, or I'm REALLY not grokking this. Perhaps I am, but terming things differently/incorrectly. I dunno... time to crash for the day. Thanks for hanging in with me!
sRGB won't prevent throwing away gamut before and after the render (if working for an out-of-sRGB-gamut output medium), precisely because it only covers about a third of the visible colors (before render) and a fraction of other output mediums' gamuts (after render). This is consistent with the differences between the two ways of working mentioned previously (http://forums.newtek.com/showthread.php?142669-Definitive-Color-Space-Guide&p=1392261&viewfull=1#post1392261). However, right now the best native LW choice for sRGB output is the sRGB preset, with the exceptions mentioned in the threads you linked, and if you ask me, LW does the sRGB linear workflow far better than many major 3D packages out there.



Gerardo

spherical
08-01-2014, 05:44 PM
When working for web, TV or video and using the sRGB way of working, we won't be throwing away color data after the render, because that's just what the output medium needs. We are limiting color data before the render, though, and the results won't be entirely realistic, since we are inputting colors according to the limits of our monitor. The differences will be more noticeable in the more saturated range of colors.

sRGB won't prevent throwing away gamut before and after the render (if working for an out-of-sRGB-gamut output medium),

Ok, that's what I understand to be the case. So, when not working toward an sRGB device, we work in sRGB to make relative and relevant color/value choices (because that's the best approximation available right now) but not save as sRGB; instead choosing a wide gamut output format, such as EXRFP or EXRHALF, in order to preserve as much color/value data as possible through the pipeline.

gerardstrada
08-02-2014, 05:43 PM
That's the issue with working the sRGB way. The color/value choices are not relative, that is to say proportional to sRGB; they are indeed sRGB. That's the reason we are limiting the color gamut before the render in that case. When outputting, even if we use a format able to handle wide color gamuts well, our image is already conformed to the sRGB gamut. If, when saving in these FP formats, we encode in a wider color range (like LW does with aRGB primaries in the OpenEXR format), we'll get odd, oversaturated results, because we are assuming (assigning) a wider color range that doesn't conform to the original color appearance of the image. Then we come back to what we commented on earlier:

Then we need to come back to the sRGB profile to get the original color appearance. The only way to generate an image in one color space and get another color space while keeping the color appearance is by color converting the image, which implies gamut mapping, and that should be done in the floating-point domain to come close. Since LUT formats don't handle gamut mapping by default, this cannot be done properly without a CM system in that context. And even though ICC profiles v4.3 can handle robust gamut mapping for free in the FP domain, the proper way of doing this is to generate the image in the wider color space in the first place. There's no post-solution that can beat that.



Gerardo

spherical
08-02-2014, 07:09 PM
Yes, I get that assigning a wide gamut profile doesn't add colors back into a truncated gamut, but that leaves a question. How does one work in a wider gamut when the device we are working through is limited to sRGB? We can (somehow—we'll get to this later) supply sources having a wide gamut but color pickers (perhaps) and monitors that they are displayed on are limited gamut. As I understand them, 10-bit displays aren't actually wider gamut, they are higher precision; outputting smaller variances between previously adjacent colors/values, so you're still saddled with sRGB gamut—just smoother. HDMI 1.3 supports xvYCC, which is a wider range, but without applications that work in this space, we're still waiting.

gerardstrada
08-03-2014, 01:43 PM
No such thing is necessary; we do that by color rendering to the display :)



Gerardo

spherical
08-03-2014, 04:17 PM
Remaining in sRGB... or am I still not getting it?

gerardstrada
08-03-2014, 04:33 PM
sRGB/whatever the monitor standard is, is just for display, while the whole work stays in SR-state.



Gerardo

spherical
08-03-2014, 08:01 PM
Ah, OK, that's what I thought until I read your previous post.

gerardstrada
08-04-2014, 11:10 AM
Ah, OK, that's what I thought until I read your previous post.


sRGB/whatever the monitor standard is, is just for display, while the whole work stays in SR-state.

That's the reason why I said


No such thing is necessary; we do that by color rendering to the display



Gerardo