3 LCS workflows for LW



gerardstrada
01-23-2008, 06:29 PM
Hello there,

I've written an article for HDRI3D magazine (Issue #18) (http://www.hdri3d.com/issues/h18.htm) about 3 linear workflows for LightWave 3D that I designed within a general color management workflow that passes through the whole production (even if it's not CG). The article is oriented to studios that can't afford expensive color management systems for their CG work, but it's also applicable for CG professionals.

It could be useful for other 3D packages as well, since it's not a tutorial about techniques but an article about workflows (which is really uncommon, even in printed magazines). It not only shows how to work in linear light cohesively by applying 3 different workflows according to our needs, but also how to set up a general workflow for working in different output media (motion picture production, TV and print) while maintaining color consistency as much as possible.

The article also has some good news for LW users on this topic.



Gerardo

gerardstrada
01-24-2008, 07:28 PM
For people not familiar with LCS workflows, it's good to know that there are only advantages in switching to them. If you have realized you need a lot of tweaking to get realistic shading, or you have flickering problems in animation, or you can't get that elusive bokeh effect, or your real Fresnel shading in reflections doesn't look so "real", or you still have problems integrating a CG element with real footage through HDRI lighting, or your color bleeding isn't so clean, or you are getting overly contrasty images and want a 'V-Ray look', or brightness attenuates when you use motion blur, or you have AA problems in textures... then you might be interested in implementing an LCS workflow for your own work.

But not only that. Let's say you are working for film, or print, or TV, or the web, or all of them, and, as is obvious, you have realized colors don't look the same on your computer as on the video monitor, or on the theater screen, or in that printed illustration. How do you keep color consistency as much as possible from medium to medium? Or let's say you want your CG short to have a "film look" even when displayed on a video monitor or TV; can we do that? If so, how? Or let's say you see banding when you display your sequence on a projector, but you don't see it on your computer monitor; is there a way to get rid of those annoying banding artifacts? Or let's say you are working in hi-res for print; is there a way to convert that result successfully to CMYK, or, even more, is there a way to work with a CMYK Pantone within LW? These questions are answered with a proper color management workflow.

If you have some of these questions, you might find my articles in Issue #18 and Issue #19 of HDRI3D magazine interesting :)


Gerardo

beverins
01-25-2008, 10:20 AM
A terrific article!

Very helpful indeed, thanks so much!

gerardstrada
01-26-2008, 10:43 AM
Glad it helps :)

Thanks,



Gerardo

gerardstrada
01-30-2008, 06:28 PM
Just to comment that I've seen some people confusing FP pipelines with linear workflows. It seems there is a lot of misunderstanding about this topic, which is what this article tries to clarify as well :)



Gerardo

gerardstrada
02-03-2008, 04:50 PM
Just to explain this a bit more:

Working with FP images doesn't mean we are automatically implementing a linear workflow. Processes similar to LightWave's "Full Precision" Renderer and You, by Kenneth Woodruff, Arnie Cachelin and Allen Hastings (http://www.newtek.com/lightwave/tutorials/rendering/fullprecision/index.html), are FP pipelines. Notice they don't pretend to be linear workflows, but rather a very useful introduction to the advantages of FP renderers and FP processing.

Consider also that varying the gamma at the end of the output render, without any pre-processing, doesn't mean we are gamma-correcting colors. In fact, we are not correcting colors at all, but the opposite.

In those specific cases, we are not gamma-correcting colors because flat colors, colors from procedural textures, from 8-bit images, from lights, etc. are in log space (this means they are already gamma-encoded). Though the diffuse shading obtained is linear, the colors are not linear; so when a simple gamma exponent is applied in post-processing (LW, PS or any compositing package), what we are really doing is cranking gamma up for those colors. This means we are adding gamma twice! So the colors are totally wrong. That's the reason we get washed-out images when we apply a gamma exponent near 2 without a proper correction. That's also one of the reasons it's not easy to properly match a CG element with a BG plate, even if we have captured the lighting intensities on location through an HDR image.

The gamma exponent is not arbitrary either. If we are applying a linear workflow, we want this exponent to be as accurate as possible, for two reasons: proper linearization and previewing. Later, in post, we can vary the final gamma for artistic purposes. But be aware: some type of linearization (at least a basic one) is a must in any linear workflow.
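A quick numeric sketch of the double-gamma problem (plain Python, just for the math; the 2.2 exponent here is the usual sRGB-like assumption):

# A color authored on a ~2.2-gamma display is already gamma-encoded.
encoded = 0.5                          # what the 8-bit file actually stores

# Wrong: render with the encoded value, then add a 2.2 view gamma on top.
double_gamma = encoded ** (1.0 / 2.2)  # ~0.73: washed out, gamma applied twice

# Right: decode to linear first, work in linear light, encode once at the end.
linear = encoded ** 2.2                # ~0.22: the actual linear intensity
displayed = linear ** (1.0 / 2.2)      # 0.5 again: round-trips correctly

print(double_gamma, linear, displayed)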



Gerardo

gerardstrada
02-09-2008, 05:04 AM
Btw, the article in Issue #18 explains several ways to linearize colors and several other ways to preview in log space within LightWave. In this regard, that's what my article in Issue #19 is about: it's a tutorial on the usage of an awesome upcoming new tool, the first color management system ever developed within a commercial 3D package. These tools have been developed by the brilliant Sebastian Goetsch to work within LightWave 3D, and aim to make the classic linear workflow simple and accessible for ALL users, while also adding color management capabilities within the 3D package. According to other specialists it was a kind of 'impossible mission' with the current SDK. So all my gratitude to Sebastian for developing such an innovative tool.



Gerardo

Exception
02-10-2008, 06:58 PM
Sounds interesting, Gerardo, despite perhaps some excessive self-promotion, but how would one get to this information without committing to a yearly subscription of this magazine?

MooseDog
02-10-2008, 10:06 PM
Barnes&Noble....magazine section :)

(if it worked here in Burlington, VT, surely it would work in New Haven :) )

Limbus
02-11-2008, 02:56 AM
but how would one get to this information without committing to a yearly subscription of this magazine?

You can buy single issues from the HDRI Mag website.

Cheers, Florian

Exception
02-11-2008, 06:34 AM
You can buy single issues from the HDRI Mag website.


Only if you get a subscription.
I'll try B&N

Limbus
02-11-2008, 06:51 AM
Only if you get a subscription.
I'll try B&N

If you click on "buy now" you are directed to this website:
http://www.dmgpublishing.com/Merchant2/merchant.mvc?Screen=CTGY&Category_Code=H3M
where you can click on "Buy one now". As far as I can see, you only buy a single issue this way.

Cheers, Florian

JeffrySG
02-11-2008, 11:42 AM
sounds very cool... I'm going to look for it at B&N as well...

gerardstrada
02-11-2008, 12:28 PM
Yep, Thanks MooseDog!

Tom, it's not self-promotion. I'm promoting the content of this article: linear workflows and color management for CG work. It just happens that I've designed these workflows and written these articles too :) but I'd talk about them the same way if they had been designed and written by someone else, as I've already done with several techniques and tools. I'm being especially insistent on this because it's an unknown topic for many people, misunderstood by others, and these workflows really can improve our results consistently.

The other way to get HDRI3D issues separately is as Florian has said.



I initially thought this would interest CG studios only (since methodical workflows are not common practice among most users), but it's really good to see several LW CG professionals and advanced users interested in this topic.



Gerardo

Exception
02-11-2008, 02:40 PM
Tom, it's not self-promotion. I'm promoting the content of this article: linear workflows and color management for CG work.

That's semantics.
You're promoting something which makes you money.
Which is fine, within reason.


The other way to get HDRI3D issues separately is as Florian has said.

You can only get one issue this way.


I initially thought this would interest CG studios only (since methodical workflows are not common practice among most users), but it's really good to see several LW CG professionals and advanced users interested in this topic.

I know about linear workflows. I'm mostly interested in Sebastian Goetsch's color management system for LW, which I am unaware of...
And next to that you might have some handy tricks up your sleeve and it's always good to read well informed articles.
I just wish there was a handier way to get these.

gerardstrada
02-11-2008, 04:01 PM
That's semantics.
You're promoting something which makes you money.
Which is fine, within reason.

Tell me, in what way do I make money by sharing these workflows in a magazine???

I have not accepted any payment from ANY magazine for ANY article I write because I DO THIS FOR FUN!



You can only get one issue this way.

Check again, it seems you can get more than one :)


I know about linear workflows. I'm mostly interested in Sebastian Goetsch's color management system for LW, which I am unaware of...
And next to that you might have some handy tricks up your sleeve and it's always good to read well informed articles.
I just wish there was a handier way to get these.

If you know about linear workflows, then remember that some type of linearization (at least a basic one) is a must in any linear workflow.
Remember as well that this is not a matter of "tricks"; these are methodical working schemes that give us good results in a consistent way.

Consider also that these are not merely well-informed articles, because these workflows are not a summary of gathered information. They are original linear workflows, and SG_CCTools are original tools for color management.

Btw, you don't need to buy the magazine to use the SG_CCTools, though the articles can be very helpful as background knowledge for better understanding and usage.



Gerardo

Digital Hermit
02-11-2008, 05:33 PM
Barnes&Noble....magazine section :)

(if it worked here in Burlington, VT, surely it would work in New Haven :) )


Yep, found it, thanx MD!

To find your copy, look in the very bottom, very left, very back shelf of the Computer section, covered by a Good HouseKeeping mag. :D

Exception
02-11-2008, 07:28 PM
Tell me, in what way do I make money by sharing these workflows in a magazine???

I have not accepted any payment from ANY magazine for ANY article I write because I DO THIS FOR FUN!

My apologies. Since this is a paid-for magazine I assumed some of this would trickle down to you. This is common practice for magazines.

I provide many things for free to the LW community too, but don't require anyone to buy a magazine. If you are willing to share your articles with the community at no cost, I can format and host them, if you so wish.

I'd be interested to know where I could find SG_CCTools, since a search on Google and Flay yielded no results.

Cheers,
Tom

gerardstrada
02-12-2008, 12:07 AM
These workflows require buying HDRI3D magazine. And believe me, it's better that way. Not only for me, but also for all users.
"a man who easily understands, doesn't need many words" or maybe: "a word is enough to the (workflow's) wise" :D

The SG_CCTools are ready to be released very soon by Sebastian Goetsch :)



Gerardo

Exception
02-12-2008, 08:43 AM
These workflows require buying HDRI3D magazine. And believe me, it's better that way.

I doubt it.
Saying it doesn't make it true :)

MooseDog
02-12-2008, 10:43 AM
I'll agree w/ GS.

Paying the meager price of a magazine to support a team (Alice & Charles w/ HDRI) who have consistently supported the LW community is a noble and inexpensive gesture.

gerardstrada
02-12-2008, 04:48 PM
I doubt it.
Saying it doesn't make it true :)

Saying it doesn't make it real. We should make it real:

You are saying that because you are thinking short-term. Because these are workflows, things are different long-term. CG studios and experienced CG professionals know this. Besides, it's very hard to change the way people have worked for years, and though the workflows I'm proposing are cheaper, linear workflows and color management workflows have a reputation for being very expensive (only the bigger houses work this way). So we are talking about something that is difficult even for CG studios accustomed to working with defined workflows and pipelines. Things are worse for common users who work empirically.

So I spent about a year thinking about the best way to share these workflows. There's a natural process to sharing a workflow, since workflows are not like techniques or tricks. This is not like a step-by-step tutorial. Most CG artists like tutorials because they provide immediate results. But a workflow is a bit more complex.

It requires understanding very well the principles and all the important aspects involved in the working scheme; the functions of this scheme should inter-operate efficiently through the whole production, and the implementation of these functions, through proven methods and techniques, should make the best and easiest use of the available tools. Only then can a regular user adopt these workflows through his/her tools without knowing or worrying about what is happening behind the scenes.

So sharing a workflow doesn't provide immediate results for common users. If we share only the final part (the implementation) without providing the technical reasons why these tools and apps should be used that way, it's very easy to corrupt the workflow. So either we assign a person who understands the workflow to guide good practices, or we lock the workflow into a simplified implementation (simple tools in apps with some parameters to adjust). I've done the latter with two linear workflows by proposing some simple implementations. Obviously, CG artists and developers who understand the workflow can improve them as much as new technologies allow (which is what Sebastian Goetsch has done brilliantly with one of these workflows).

Because of all this, these articles were oriented especially to CG studios and advanced users first (for them, the price of the magazine is a gift compared to the quality of the articles they get in exchange); the target here was people in Pipeline Setup departments, or CG supervisors working together with developers - people in charge of the more technical parts of the CG pipeline, who make the artistic part of the work easier for CG artists. But I'm aware - through personal experience - that this knowledge can be very helpful at an individual level as well, which is why I've tried to explain it in the simplest way I've been able to. Though I should warn you, the article in Issue #18 is very dense (technically speaking).

So the idea is that CG studios and advanced users take on these implementations first. Eventually, these workflows will become known to the general public, since threads, discussions, feedback and feature requests for LW are expected from generous CG professionals... a natural process in which people like you have A LOT to do :) In this context, when a user who doesn't understand something about these workflows asks a question, there won't be only one or two persons who know the answer. There will be MANY CG professionals already experienced in these workflows to answer those questions and guide them. Again, a natural process...

I really hope all users end up adopting and improving these workflows and requesting better tools for LW in this regard.

Consider what MooseDog has said as well: HDRI3D is the only magazine these days designed with special treatment for LW users. They are also really good friends and I wanted to share something really good through them.

So no thanks, I'm not interested. (Besides, DMG Publishing has the rights to my articles now) :)



Gerardo

Exception
02-12-2008, 05:02 PM
Saying it doesn't make it real. We should make it real:

What you're talking about doesn't reflect at all on the medium of a magazine.
You are arguing against tutorials, while I never proposed you should make one.


You are saying that because you are thinking short-term.

Please, you have no idea how I think.

I am not going to argue about the advantages of free and/or readily available high quality information. The advantages of these are so obvious I don't need to explain them.


They are also really good friends and I wanted to share something really good through them.

That is a good reason for publishing it in a magazine for free, and the only one I can think of. All the rest has no bearing on the discussion. A magazine and the web are equal in their capacity to convey information.

gerardstrada
02-12-2008, 05:06 PM
Just to let you know, guys, that 2 of these 3 linear workflows are viable without any commercial tool, thanks to the genius and generosity of Denis Pontonnier. Very special thanks to him for developing, sharing and constantly improving his experimental nodes; they really should be adopted as a built-in system for LightWave 3D.



Gerardo

lardbros
02-12-2008, 05:08 PM
What you're talking about doesn't reflect at all on the medium of a magazine.
You are arguing against tutorials, while I never proposed you should make one.



Please, you have no idea how I think.

I am not going to argue about the advantages of free and/or readily available high quality information. The advantages of these are so obvious I don't need to explain them.



That is a good reason for publishing it in a magazine for free, and the only one I can think of. All the rest has no bearing on the discussion. A magazine and the web are equal in their capacity to convey information.

Calm down ladies!!! This is going to get out of hand... HDRI mag is a great read... if you don't want to pay for the mag, you don't have to. But if 3D World had a nice article in it and they weren't going to just hand it out to everyone for free, then I'd buy that too. HDRI isn't the biggest of published magazines on the subject, and I think it's a good thing to support these people. The same with 3DCreative magazine... it's all non-profit stuff, but good for us in the long term!

Exception
02-12-2008, 05:10 PM
Denis, Neverko, Pavlov, myself, and some others have been arguing that for years now. LW does need a front to back gamma correction first, and full color spectrum control second.
Perhaps we'll see that one day.


Calm down ladies!!! This is going to get out of hand... HDRI mag is a great read...

Nobody said it wasn't.
It's just very frustrating when someone hints between the lines at the inferiority of free information, and of the people who provide it, and then goes on to advertise this 'superior' non-free information all over the board.

We're professionals here, and linear workflows have been around for ages in various forms. I have several linear workflows on the shelf, some of which are more useful than others. I might publish those some day, I might not. Some of us might just be curious about specific interpretations of it, which might not warrant the effort of buying several issues of a magazine. Especially for those living in remote areas.

Lightwolf
02-12-2008, 05:13 PM
LW does need a front to back gamma correction first, and full color spectrum control second.
:agree:

Cheers,
Mike

gerardstrada
02-12-2008, 05:34 PM
Just to explain a bit other advantage about the new SG_CCTools: In article on Issue #18 I share 3 Linear workflows:

1. Classic Linear Workflow
2. Inverse Linear Workflow
3. Multipass Linear Workflow

Each one offers different advantages according to each project and the way we feel comfortable working. The classic linear workflow offers extreme accuracy, but the LW tools available (before SG_CCTools) weren't enough to cover every case of linearization easily. The new SG_CCTools facilitate this workflow a lot and make it almost intuitive. Another good thing about SG_CCTools is that their color management capabilities work smoothly with any of the proposed workflows. So if we want to apply the Inverse Linear workflow to work with legacy projects, we can also add these color management capabilities to that project. Or if we want to apply the Multipass Linear workflow to work as we've always worked, switching ON its linearization system to work in linear light automatically, we can add color management facilities to this workflow too. Even more: even if you don't use a linear workflow, you can take advantage of the color management capabilities of SG_CCTools. In such cases you might want to apply the same principles of the general color management workflow that I proposed in that article too.



Gerardo

Exception
02-12-2008, 05:43 PM
It's like talking to a wall.

gerardstrada
02-12-2008, 06:13 PM
:agree:

Cheers,
Mike


I'm proposing a simple implementation for one of these workflows that makes use of DP's experimental nodes (the Multipass Linear workflow). The good thing about this generic implementation is that we can work as always, switch ON the linear system and WHOA! linear workflow automatically! With some adjustments to the nodal network, we can make this work like the Color Mapping options in V-Ray. The disadvantage is that, like V-Ray, this linear workflow is not accurate, since it doesn't take into account colors from illumination (unless we use an AO-only solution). I don't know if the new version of ExrTrader could solve this. But if LW implemented a built-in node editor system for multipass or render buffers, we could have an automatic and accurate linear workflow implemented with this as well.
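To make the idea concrete, here is a minimal sketch of what such a nodal network does conceptually (Python/NumPy standing in for the nodes; the buffer names are hypothetical, not actual DP node outputs):

import numpy as np

GAMMA = 2.2

def decode(c):                     # gamma-encoded -> linear
    return np.power(c, GAMMA)

def encode(c):                     # linear -> display
    return np.power(c, 1.0 / GAMMA)

albedo  = np.array([0.8, 0.4, 0.2])   # surface color pass, still gamma-encoded
shading = np.array([0.6, 0.6, 0.6])   # diffuse shading pass, already linear

# "Linear system ON": linearize only the color pass, multiply in linear
# light, then apply the output gamma once, like V-Ray's color mapping.
out = encode(decode(albedo) * shading)

# The inaccuracy mentioned above: if the lights themselves were colored,
# those (gamma-encoded) light colors never get linearized in this scheme.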



Gerardo

Lightwolf
02-13-2008, 05:37 AM
I'm proposing a simple implementation for one of these workflows that makes use of DP's experimental nodes (the Multipass Linear workflow).
The problem is that we neither have a decent previewing solution, nor a decent image viewer for LW.
Also, none of the previewers in LW (i.e. shader balls, VIPER) support any kind of gamma correction ... and there is only so much you can do with plugins.

Cheers,
Mike

gerardstrada
02-13-2008, 10:18 AM
It's like talking to a wall.

That's because I prefer talking about these workflows to having to explain the process of sharing them :D But some things have to be said:


What you're talking about doesn't reflect at all on the medium of a magazine.
You are arguing against tutorials, while I never proposed you should make one.

I'm not arguing against tutorials. I like tutorials too; the article in Issue #19 is a tutorial! What I'm saying is that sharing through a magazine allows a period of time to be established between these workflows being implemented by the people really interested in them and by the general public (most users don't care about them right now anyway). This is important because, in contradistinction to tutorials, workflows need to be digested first, and then shared by people who REALLY understand them.



Please, you have no idea how I think.

And please, I don't want to 8~ But it was obvious you were thinking short-term, because you didn't realize there are phases to sharing a workflow in this way. Notice I said: So the idea is that CG studios and advanced users take on these implementations first. First means that eventually, later, these workflows will be available to all users for free, because people who read these articles will share and discuss their experience with them. And they surely will be able to share them in simpler terms than I can.


A magazine and the web are equal in their capacity to convey information.

Nope. A magazine is mainly oriented to a specific target audience. And in this sharing process, that's important.


...myself, and some others have been arguing that for years now. LW does need a front to back gamma correction first, and full color spectrum control second.
Perhaps we'll see that one day.

:rolleyes: I've heard some people say the same thing. But tell me HOW, in a structured way and with precise tools. Although I'm a terrible writer, at least after reading these articles, implementing them and working with them, one is able to say HOW, and to request very specific features to do that in an efficient way.


It's just very frustrating when someone hints between the lines at the inferiority of free information, and of the people who provide it, and then goes on to advertise this 'superior' non-free information all over the board.

I haven't said that. That is only between your temples. I couldn't say that, because it's false, and because I've shared a lot of tutorials on the web and in forums as well, and I like to think that everything I share is top-notch information :D no matter where I share it :) And though HDRI3D magazine always tries to publish articles that we won't find on the web (and they really do), they have also shared a 200-page tutorial on the web for free.


We're professionals here, and linear workflows have been around for ages in various forms. I have several linear workflows on the shelf, some of which are more useful than others. I might publish those some day, I might not. Some of us might just be curious about specific interpretations of it, which might not warrant the effort of buying several issues of a magazine.

:sleeping: It seems you don't have a clue what you are talking about. You can be very professional and not know anything about linear workflows and color management. I know several GREAT CG artists and very respectable CG professionals who don't have a clue about this. The term has been around, and some isolated tools have been around, but no real linear workflow has been around (besides, most people use the term wrongly). Contrary to what you are saying, very few people have gotten these workflows right, and very, very few houses have implemented them nowadays. What happens is that almost nobody will admit it publicly (only special persons can do that). However, I receive emails and PMs from people (some of them working for award-winning CG studios) that say "We hadn't gotten this right 'til reading your article". And this is from an XSI-based studio that is developing an in-house system based on one of these workflows.

That's the reason it's better that some people implement them first and then share their experience - not assumptions, experience - with all users.



Especially for those living in remote areas.

I know what that means. So I really hope you can get a copy. I'm sure you could share these workflows in simpler terms than I can.



Gerardo

gerardstrada
02-13-2008, 10:22 AM
The problem is that we neither have a decent previewing solution, nor a decent image viewer for LW.
Also, none of the previewers in LW (i.e. shader balls, VIPER) support any kind of gamma correction ... and there is only so much you can do with plugins.

Cheers,
Mike


Yes, that's one of the bigger problems, though at least with viewers we can use a pixel or image filter to work around it (with FPrime too); it's not ideal, but it's better than nothing. The other big problem is the linearization process. LW needs to include all the processing done in the Image Editor in its FP engine. We can work around that too, but again, it's not a suitable solution. The color picker could be improved as well, not only for linearization but for color management. In this regard, VIPER and FPViewer should support post-processing filters, too.

For color management, it would be great if LW could work with and recognize color profiles. ICC/ICM profiles are the cheaper industry standard. Think about this: a LUT can cost between $100 and $5000. An ICC/ICM profile is either free or costs $250 at most.

Virtual DarkRoom (VDR) is not a good solution because it can't display a close representation in any way. For this, we'd need to bring the VDR output into a color management system that has the same LUT preset and can show it taking into account our (calibrated) monitor color space. So the colors shown within LW are always wrong (mostly in their chromaticities). VDR does its job perfectly, but it doesn't have the facilities to show us its results correctly. Another critical disadvantage is that VDR presets don't work with any industry-standard format for color management or LUTs.

The SG_CCTools try to solve some of these disadvantages in the best possible way (though there's always room for improvement), and that's A LOT, considering the SDK.

The most user-friendly implementation I found was the Multipass Linear workflow, because for the user the system is a one-button solution. Internally, however, it can be improved even more (to provide accurate results) through the addition of some key buffers (direct and indirect illumination separately). As far as I know the LW SDK doesn't allow this yet, and though we can fake it through a nodal network, it's not the same thing. Maybe it's too much to request currently, but it would be great if this could be possible in some future ExrTrader release.

Cheers,



Gerardo

Exception
02-13-2008, 11:31 AM
This is important because, in contradistinction to tutorials, workflows need to be digested first, and then shared by people who REALLY understand them.

Again, that has no reflection on the medium.
Philosophy has to be digested as well. Does this mean it's only published in book form and not on the web?
Of course not.


And please, I don't want to 8~ But it was obvious you were thinking short-term, because you didn't realize there are phases to sharing a workflow in this way.

It is obvious you are not listening to what I'm saying. You have no idea what I am thinking about.
There being phases to a workflow has absolutely no relationship with the capacity of a medium to convey information. It can just as well be told on a website as in a magazine. Your arguments are disconnected. You wish to argue about Walter Benjamin's Art in the Age of Mechanical Reproduction? Or Derrida's deconstructionism regarding these topics? Would you like to talk about Lacan's gaze and the eye / sign and signified in this relationship? I'll be happy to discuss this in an intelligent manner, but not based on blank arguments that bear no relation to the topic.

If you just prefer to publish this in a magazine, or want to help your friends at HDRI magazine, please, by all means, but don't start throwing around silly argumentation on why a magazine would be capable of showing information that a website can't. I studied design for too long to be wooed by moot points.



Nope. A magazine is mainly oriented to a specific target audience. And in this sharing process, that's important.

And an industry-specific website isn't?
Where exactly lies your argument, because I can't find it.




It seems you don't have a clue what you are talking about.

Oh, that's mature.
You have no idea what I know or do not know. I haven't discussed linear workflows with you at any point, nor with anyone else. This discussion is over, as one of the parties has stepped over the line of civility.
Good luck with your writings.

gerardstrada
02-13-2008, 12:34 PM
Again, that has no reflection on the medium.
Philosophy has to be digested as well. Does this mean it's only published in book form and not on the web?
Of course not.

It is obvious you are not listening to what I'm saying.
There being phases to a workflow has absolutely no relationship with
the capacity of a medium to convey information. It can just as well be
told on a website as in a magazine. Your arguments are disconnected.

If you just prefer to publish this in a magazine, or want to help your
friends at HDRI magazine, please, by all means, but don't start
throwing around silly argumentation on why a magazine would be capable of
showing information that a website can't. I studied design for too long to
be wooed by moot points.

And an industry-specific website isn't?
Where exactly lies your argument, because I can't find it.

Oh, that's mature.
You have no idea what I know or do not know. I haven't discussed linear
workflows with you at any point. This discussion is over by one of the
parties having extended over the line of civility.
Good luck with your writings, but get off your high horse.



U n b e l i e v a b l e. This case is critical.



Gerardo

gerardstrada
02-13-2008, 12:34 PM
:rolleyes:



Gerardo

gerardstrada
02-13-2008, 12:35 PM
Anyway, this thread is not to explain how or why I share articles through magazines first and the web last, or vice versa. This thread is to let you guys know that these workflows are already available through HDRI3D magazine, and to encourage you to try them in your own work, implementing them, mixing them and modifying them according to your needs or preferences. They have helped me considerably in my work and I hope you find them useful too.

:beerchug:



Gerardo

*Pete*
02-13-2008, 12:53 PM
As far as I know... Gerardstrada has previously given similar advice to what can be found in HDRI3D on the forums (SpinQuad, NewTek, CGTalk) as well.
HDRI3D is inexpensive, and no more a wrong medium than a book or the web (I understand your point, Exception).
Personally, I prefer information in a magazine or a book over information on the web... it saves me the process of printing out pages if I want to read at work, on the bus or before going to sleep... after all, we spend too much time in front of the computer already as it is.

Btw Gerardo... I ordered a subscription to HDRI3D a few weeks ago, thanks largely to you and Gregg having articles in it.

Lightwolf
02-13-2008, 01:46 PM
For color management, it would be great if LW could work with and recognize color profiles. ICC/ICM profiles are the cheaper industry standard. Think about this: a LUT can cost between $100 and $5000. An ICC/ICM profile is either free or costs $250 at most.
Well, to be quite honest, there is a bit of a difference between a LUT and a full profile as well. Plus a LUT isn't a LUT either...

Have you looked at CTL? http://www.oscars.org/council/ctl.html

Cheers,
Mike

gerardstrada
02-14-2008, 02:22 AM
thanks largely to you and Gregg having articles in it.

Thank you, Pete! It's really a pleasure to be included alongside such excellent contributors! Besides, its advisory board is really top-notch.



Gerardo

gerardstrada
02-14-2008, 02:35 AM
Well, to be quite honest, there is a bit of a difference between a LUT and a full profile as well. Plus a LUT isn't a LUT either...

Yep. They are different things, but the good thing about color profiles (besides being cheaper) is that they can work as LUTs (preview profiles), and though they are slower than 1D/2D LUTs, they are commonly fast enough these days to be used as LUTs in CG and compositing. About 'a LUT isn't a LUT', I guess you are referring to the mathematical definition? I'm terrible at maths, you know? :)



Have you looked at CTL? http://www.oscars.org/council/ctl.html

Cheers,
Mike

Yes, I've seen CTL (you posted it in a CGTalk thread some time ago. Thanks!). That system looks GREAT; it has been specially designed to work almost automatically (and, it seems, exclusively) with OpenEXR pipelines, and it can really take advantage of them by making all the color management almost hidden from the CG artist. However, as far as I understand, its implementation is more complex and much more expensive (it requires building input and output libraries by hand, in-house LUTs, etc., and these must be linked to the CTL interpreter for color rendering, and I'm not sure there's a way to make that work within LW). Consider also that only big VFX houses can implement a full FP pipeline for the whole production. Post-production is always FP, but as we know, in practice, within the 3D package, doing the real CG work, many studios still make intensive use of 8-bit images for backgrounds, projections, textures, footage and so on. In that context, the approach taken by Sebastian offers the same color management capabilities no matter what type of image we are using. But it's not automatic or hidden; it requires that the CG artist understand some key procedures in the color management workflow, which is the tricky part here :) Maybe a system like that would be the next logical step. I hope someone wants to take on the challenge.

Cheers,



Gerardo

Lightwolf
02-14-2008, 03:35 AM
About 'a LUT isn't a LUT', I guess you are referring to the mathematical definition? I'm terrible at maths, you know? :)

I was talking more about the different types (1D, 2D, 3D) as well as different formats and uses. I've seen LUTs used and misused for all kinds of stuff.
Including going from log to preview in one go in a compositing app... *ouch* (Yup, blame the supervisor, it was his idea).
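To make the type distinction concrete, a minimal sketch of the simplest kind, a per-channel 1D LUT with linear interpolation (Python just for illustration; a 3D LUT would instead index a lattice by all three channels and interpolate trilinearly):

import numpy as np

# A tiny 1D LUT: 17 samples of a 1/2.2 gamma curve over [0, 1].
lut = np.linspace(0.0, 1.0, 17) ** (1.0 / 2.2)

def apply_1d_lut(value, lut):
    """Look up a value in [0, 1], interpolating linearly between samples."""
    x = value * (len(lut) - 1)
    i = min(int(x), len(lut) - 2)
    t = x - i
    return (1.0 - t) * lut[i] + t * lut[i + 1]

print(apply_1d_lut(0.5, lut))   # ~0.73, close to 0.5 ** (1 / 2.2)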

Cheers,
Mike

Lightwolf
02-14-2008, 03:38 AM
Consider also that only big VFX houses can implement a full FP pipeline for the whole production. Post-production is always FP, but as we know, in practice, within the 3D package, doing the real CG work, many studios still make intensive use of 8-bit images for backgrounds, projections, textures, footage and so on.
That's because it doesn't make sense in many cases to have FP input files.
As for the "big" houses... I disagree. We're two people here, and we've been using an FP pipeline for years (where it makes sense: converting video to FP doesn't make sense, converting client data to FP doesn't either, but handling film scans as FP does. A lot of matte paintings are currently being done in at least 16-bit as well - rarely FP though, even for big productions).

Cheers,
Mike

Lightwolf
02-14-2008, 03:41 AM
Sorry for splitting up my replies like this....

That system looks GREAT; it has been specially designed to work almost automatically (and, it seems, exclusively) with OpenEXR pipelines, and it can really take advantage of them by making all the color management almost hidden from the CG artist.
Not exclusively. Automatically, yes, but that's because OpenEXR can manage the metadata: http://www.openexr.com/UsingOpenEXRandCTL.pdf

Cheers,
Mike

gerardstrada
02-14-2008, 12:27 PM
I was talking more about the different types (1D, 2D, 3D) as well as different formats and uses. I've seen LUTs used and misused for all kinds of stuff.
Including going from log to preview in one go in a compositing app... *ouch* (Yup, blame the supervisor, it was his idea).

Cheers,
Mike


Oh well, it depends on the use... there are some uses where we can talk about correct ways (log2lin is one of them), but if, let's say, the LUT is meant to accomplish a specific 'look' for artistic purposes, we can use them in less orthodox ways, since the goal is to achieve a desired finish. So don't blame the supervisor in all cases! :D



Gerardo

gerardstrada
02-14-2008, 12:34 PM
I was talking more about the different types (1D, 2D, 3D) as well as different formats and uses. I've seen LUTs used and misused for all kinds of stuff.
Including going from log to preview in one go in a compositing app... *ouch* (Yup, blame the supervisor, it was his idea).

Cheers,
Mike

Me too. But I'm not referring to that. By full FP pipelines I mean that HD video and even 8-bit hand-created textures go into an FP format. All images and sequences go in FP. In this regard, we don't really use a full FP pipeline. For us, that doesn't make sense, as you have said, but for studios that base their color management system on something like CTL, it's a must (besides, they can take advantage of the linear nature of EXR). It's obviously too expensive for most CG studios and CG professionals.



Gerardo

gerardstrada
02-14-2008, 12:45 PM
Sorry for splitting up my replies like this....

Don't worry, that's better in this case...


No exclusively. Automatically yes, but that's becuase OpenEXR can manage the metadata: http://www.openexr.com/UsingOpenEXRandCTL.pdf

Cheers,
Mike

Yep, that's one of the reasons everything has to go to an FP format (EXR) when working with CTL. With the increase in hard-drive capacity, a system like that could become more viable sooner rather than later.

Cheers,



Gerardo

Lightwolf
02-15-2008, 03:58 AM
But I'm not referring to that. By full FP pipelines I mean that HD video and even 8-bit hand-created textures go into an FP format.
There's no point in doing that though... why store with a higher fidelity format than the original? That only makes sense if there is a processing step involved between the original footage and the FP one.
I've always been in favour of changing as little as late as possible - unless needed otherwise.

Cheers,
Mike

Lightwolf
02-15-2008, 03:59 AM
Oh well, it depends on the use... there are some uses where we can talk about correct ways (log2lin is one of them), but if, let's say, the LUT is meant to accomplish a specific 'look' for artistic purposes, we can use them in less orthodox ways, since the goal is to achieve a desired finish. So don't blame the supervisor in all cases! :D

In this case I surely can, if a log->display LUT is used to prep images for compositing. As a standard procedure that is supposedly "right" (no artistic purpose here).

Oh yeah, that was a DI specialist to boot...

Cheers,
Mike

Limbus
02-15-2008, 04:06 AM
Hi Gerardo,
I just got the Mag and read thru your article. Nice read. I am testing the setup now with a Kray scene of mine (Kray can convert the Output to Gamma 2.2 so it is displayed correctly).

I have one question: can't I use the Gamma control in the Image Editor to apply a gamma correction to an image? If I compare the two methods, the result looks the same.

Cheers, Florian

Direwolf_NL
02-15-2008, 04:28 AM
I did this in the past with Kray and posted the results on the Kray forum. Textures get less blown out. But now we have limithdr, which helps a lot too.

gerardstrada
02-15-2008, 09:55 AM
There's no point in doing that though... why store with a higher fidelity format than the original? That only makes sense if there is a processing step involved between the original footage and the FP one.

Yes, this is done with the footage that CG artists are going to use for VFX. The main reason I can think this is worthwhile is to provide CG artists with automatic linear and color management workflows.


I've always been in favour of changing as little as late as possible - unless needed otherwise.

Cheers,
Mike

Me too, Mike.



Gerardo

gerardstrada
02-15-2008, 09:57 AM
Hi Gerardo,
I just got the Mag and read thru your article. Nice read. I am testing the setup now with a Kray scene of mine (Kray can convert the Output to Gamma 2.2 so it is displayed correctly).

I have one question: can't I use the Gamma control in the Image Editor to apply a gamma correction to an image? If I compare the two methods, the result looks the same.

Cheers, Florian


Hi Florian, thank you. Btw, what you say about Kray preview sounds very interesting...

About the question: that depends. Which linear workflow are we referring to?



Gerardo

gerardstrada
02-15-2008, 09:57 AM
In this case I surely can, if a log->display LUT is used to prep images for compositing. As a standard procedure that is supposedly "right" (no artistic purpose here).

Oh yeah, that was a DI specialist to boot...

Cheers,
Mike

hehe



Gerardo

gerardstrada
02-15-2008, 09:58 AM
I did this in the past with Kray and posted the results on the Kray forum. Textures get less blown out. But now we have limithdr, which helps a lot too.


Please, post the link :)



Gerardo

Limbus
02-15-2008, 04:20 PM
Hi Florian, thank you. Btw, what you say about Kray preview sounds very interesting...

About the question: that depends. Which linear workflow are we referring to?


Hi, I was referring to the first LightWave workflow you describe. You suggest using FPGamma as one way to gamma-correct images in the Image Editor.

Florian

gerardstrada
02-15-2008, 06:26 PM
Oh, sorry. Yes, of course we can use it. If the image is in an FP format, the gamma adjustment will be FP too (in fact, I'm more accustomed to using that control). Sorry for not mentioning it in the article.



Gerardo

gerardstrada
02-16-2008, 01:23 PM
Btw, the Gamma control doesn't work for the blank-HDR-image tip; we need FPGamma in that case. With SG_CCTools we don't need these workarounds :)



Gerardo

Limbus
02-16-2008, 01:31 PM
Btw, the Gamma control doesn't work for the blank-HDR-image tip; we need FPGamma in that case.

Yes, I noticed that.


With SG_CCTools we don't need these workarounds :)



That's even better.

Cheers, Florian

alice hdri 3d
02-18-2008, 02:28 PM
Tell me, in what way do I make money by sharing these workflows in a magazine???

I have not accepted any payment from ANY magazine for ANY article I write because I DO THIS FOR FUN!

Gerardo

I can attest to the fact that Gerardo has, in fact, kindly refused any honorarium for his articles.

Please note those who run HDRI 3D have never drawn profits or a salary. We have a passion to provide such a magazine for the community. It's truly good to see that same passion and generosity in someone else! Thanks, Gerardo!

Limbus
02-18-2008, 03:09 PM
I can attest to the fact that Gerardo has, in fact, kindly refused any honorarium for his articles.

Please note those who run HDRI 3D have never drawn profits or a salary. We have a passion to provide such a magazine for the community.

That's a shame ;-) You should make a profit from it. Keep up the good work.

Cheers, Florian

gerardstrada
02-18-2008, 03:33 PM
It's truly good to see that same passion and generosity in someone else! Thanks, Gerardo!


http://www.spinquad.com/forums/images/smilies/redface.gif Thank you, Alice. It's a pleasure to work with such a great team!



Gerardo

gerardstrada
04-05-2008, 09:51 AM
Just to let you know that Sebastian Goetsch has already released the SG_CCTools. For more info, please visit this thread:

http://www.newtek.com/forums/showthread.php?p=682889

If you want to know how to implement these tools within a general color management workflow in a coherent way, I strongly recommend you get Issue #19 of HDRI3D magazine (http://www.hdri3d.com/issues/h19.htm). There, I propose 2 methods (the screen method and the perceptual method) for working with these tools, and I explain the only method I have found, so far, to take advantage of wider gamuts while keeping color consistency.



Gerardo

hydroclops
04-09-2008, 11:24 AM
To Gerardo or anyone informed reading this thread:

I've read this thread and the other thread about the release of the SG_CCTools. I've read the wiki article and have ordered HDRI issues 18 and 19.

My long-term goal is to create animated imagery in LW that can ultimately be finished in multiple formats: SD, HD, film. My approach would be that renders in LightWave would go through different "color correction" stages depending on the format.

My background is in motion picture production for television, circa the mid-1990s. The DP would expose the negative and then the colorist would finish it.

My question is how to learn more about digital color management and digital color correction. I want to create raw renders that have lots of potential, like film negative, maybe even using a pro post house to finish.

Amazon has lots of books about digital color management and digital color correction. Any recommendations? Other sources?

My plan is to use LW and After Effects for compositing. So can I have LW and AE both set up with the same color management? Will I need hardware to set up my monitor?
I see that I can load ICC profiles into my video display...

Sorry, I'm pleading ignorance here... Any advice, anyone?

Thanks

gerardstrada
04-09-2008, 08:34 PM
The purpose of the articles in HDRI3D Issues #18 and #19 is precisely that, Hydroclops.

In fact, Sebastian and I already knew (by November 2007) that it would be necessary/better for Issue #18 to ship before releasing the SG_CCTools, seeing that this is a very broad topic.

So, in Issue #18, I proposed a general color management workflow that can be enhanced or modified according to each project and unique pipeline. As I said in the article, this general workflow passes through the whole production (even if it's not CG). I designed that workflow bearing in mind that we can adjust it according to the specific characteristics of each project and output medium (image systems and image devices). The 3 linear color space workflows that I proposed were also designed under this general color management workflow, as were the 2 methods that I propose for the usage of SG_CCTools in the current Issue #19. There, I explain, step by step, with a real (and simple) production project, how to implement the SG_CCTools for working in multiple mediums. From there, I'm sure that any CG professional will be able to go further by organizing an appropriate concatenation of additional phases/stages.

Curiously, the workflow uses an Adobe-LW based pipeline, but it also shows how to adapt it to a Discreet-LW pipeline (which is applicable to Fusion as well), since the workflow shared there is flexible and customizable.

Btw, I commonly have a meeting with the DP beforehand, not only to talk about film exposure or film stocks, but also to see how his work will best be managed in the telecine and DI processes. This is important because very productive decisions can be made at this stage to facilitate the whole process at the end and enhance the final result. This is something one learns by being involved in the production process, I guess.

Though I don't go deeply into monitor calibration in my article (the article is very extensive and there's plenty of info about this in books and on the web), I mention some vendors for color-profiling the image devices involved in a production. They have a wide range of solutions for almost any budget. These same vendors have tools for monitor calibration as well (and yes, you'll need hardware for the best solutions).

I think anyone who has read these articles will be able to comment here with their own experiences, ask questions about these workflows, or share enhancements or new methods. I encourage people who have read these articles to do so :)



Gerardo

Mike_RB
04-09-2008, 10:13 PM
Here's a simple way to work linear in LW.

normal textures, apply a .455 gamma in the texture editor
HDRs, leave at gamma 1.0
apply FPGamma as an image filter, set to 2.2
light and shade your scene as normal (keep in mind the final gamma, choose much darker colors)
turn off color dithering in the image processing tab

if LW is your last stop, use the rendered image as is, with the 2.2 post gamma

if you're going to comp, take off the FPGamma 2.2 and render to EXR

load the EXR in your comp app and either comp it linear and view the result in gamma space, or put a 2.2 gamma node right after the footage and correct it that way.


What this does is allow for realistic-looking shading from inverse-square falloff lights.
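For anyone wiring this up, a sketch of the arithmetic behind the recipe (plain Python; whether LW's gamma field means the exponent or its inverse is worth verifying in your version, so the exponents are used directly here):

GAMMA = 2.2  # the destination/display gamma assumed by the recipe

# Step 1: "apply a .455 gamma" to an 8-bit texture = undo its encoding.
tex_linear = 0.73 ** GAMMA            # a .455 gamma is raising to 1/.455 ~= 2.2

# Step 2: pick scene colors darker, because they now live in linear space.
# A color meant to *look* like 0.5 must be entered as:
color_linear = 0.5 ** GAMMA           # ~0.218

# Step 3: shade in linear light (toy diffuse term), then FPGamma 2.2 on top.
on_screen = (tex_linear * 0.9) ** (1.0 / GAMMA)

# Step 4: going to comp instead? Skip the last exponent and write linear EXR;
# the 2.2 then happens in the comp app's viewer or in a gamma node.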

Thomas M.
04-10-2008, 05:03 AM
I just bought C. Bloch's book on HDR, and he recommends applying the gamma correction to the image (.4545) in the node editor. The advantage is that the corrections will be done in floating point, not 8-bit.
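The floating-point advantage is easy to see numerically; a minimal sketch (Python standing in for the node math):

import numpy as np

src = np.arange(256) / 255.0             # every 8-bit level, normalized

# Correct in floating point: all 256 input levels stay distinct.
fp = src ** 2.2
print(len(np.unique(fp)))                # 256

# Same correction forced back through an 8-bit buffer:
lowbit = np.round((src ** 2.2) * 255) / 255
print(len(np.unique(lowbit)))            # far fewer: the darks collapse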

gerardstrada
04-10-2008, 05:05 PM
I just bought C. Bloch's book on HDR, and he recommends applying the gamma correction to the image (.4545) in the node editor. The advantage is that the corrections will be done in floating point, not 8-bit.

Yep, I agree with Thomas (and Blochi). I proposed a trick to work around that: the idea is to use an FP blank image and map the image we want to linearize onto it (Textured Filter) - the image is in floating-point space now, and this is viable for sequences too. But that's only necessary with dark images. We can linearize bright 8-bit images in the Image Editor without problems. For gamma correction in a nodal environment, I proposed (Issue #18) about 5 or 6 ways to do it. SG_CCFilter is useful for advanced and simple corrections (for simple corrections, Aurora's Gamma Correction and Michael Wolf's Simple Color Correction are very useful as well).


Here's a simple way to work linear in LW.

normal textures, apply a .455 gamma in the texture editor
HDRs, leave at gamma 1.0
apply FPGamma as an image filter, set to 2.2
light and shade your scene as normal (keep in mind the final gamma, choose much darker colors)
turn off color dithering in the image processing tab

if LW is your last stop, use the rendered image as is, with the 2.2 post gamma

if you're going to comp, take off the FPGamma 2.2 and render to EXR

load the EXR in your comp app and either comp it linear and view the result in gamma space, or put a 2.2 gamma node right after the footage and correct it that way.


What this does is allow for realistic-looking shading from inverse-square falloff lights.

Thanks Mike. The way you propose is OK (in part). I'm sure you are aware of everything that is missing here, but such a simple recipe can confuse - more than help - people who are not aware of linear workflows. The problem with simplifying things too much is that we can lose track of what we need to do, and wrong results may lead people to think that working in linear light is not useful at all in practice. So this is an opportunity to clarify some things:

In a classic linear workflow, we still need to linearize flat colors, light colors, environment gradients, colors in HVs, in volumetrics, in pixel filters, etc. If we only linearize textures, we'll get inconsistent results. SG_CCPicker can help a lot here, too.

Another common misunderstanding is the .4545 factor. It isn't always that way (it doesn't always depend on our screen gamma, as many people think); it has to do with our working color space. Thus, if we are working in ProPhotoRGB, we need to linearize colors with a 1.8 gamma exponent, not 2.2 - even if we are on Windows and our monitor has a gamma of 2.2 or thereabouts! (see the sketch after the list below)

Another misunderstanding is with HDRIs. "Leave them as they are," everyone says (since their gamma is 1.0). But that may be wrong in several cases (!), mostly for 2 reasons:

1. HDRIs purchased from some vendors are (many times) tone-mapped. This is very bad for working in linear light because we have lost the relative intensity of the real light (we need to solve this before beginning to work with them).

2. An HDRI's gamma depends on the gamma used in the LDRIs it was assembled from. If the HDRI was assembled on a Mac, it's very probable that it has been wrongly linearized (by using 1.8 gamma instead of 2.2, if they used the sRGB profile embedded in the JPEGs). So we need to compensate for this as well.
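As a sketch of the working-space point above (note that real sRGB is actually a piecewise curve rather than a pure 2.2 power, and ProPhoto has a small linear toe in its spec, simplified away here):

def linearize(value, working_space):
    """Decode a nonlinear value to linear light for a given working space."""
    if working_space == "sRGB":
        # Piecewise sRGB transfer function (close to, but not exactly, 2.2).
        if value <= 0.04045:
            return value / 12.92
        return ((value + 0.055) / 1.055) ** 2.4
    if working_space == "ProPhotoRGB":
        return value ** 1.8              # gamma 1.8, whatever the monitor gamma
    if working_space == "AdobeRGB":
        return value ** (563.0 / 256.0)  # ~2.2, the exact value in the spec
    raise ValueError("unknown working space")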

There are several other misunderstandings and misconceptions that the articles in HDRI3D Issue #18 (http://www.hdri3d.com/issues/h18.htm) and Issue #19 (http://www.hdri3d.com/issues/h19.htm) try to clarify.

About inverse square falloff lights, there's a controversy in this regard as well. Do we need to use inverse square falloff lights if we are working in linear light?
-What?! OF COURSE!
-of course?...
You guys may want to take a look at this thread:

http://vbulletin.newtek.com/showthread.php?t=79919

For people who have already read these articles and have implemented some of the workflows and methods proposed, please, do share your experiences :thumbsup:



Gerardo

Mike_RB
04-10-2008, 05:17 PM
Yeah, I compacted the 'use linear values for everything' into this line:

light and shade your scene as normal (keep in mind the final gamma, choose much darker colors)

Which is essentially right: if you force yourself to always view the final result at your destination gamma, you will choose the right colors, values, reflections and Fresnel amounts automatically.

gerardstrada
04-10-2008, 05:28 PM
The perceptual method is the simplest method, as you say, and it's more suitable with G2 (since VIPER doesn't have gamma correction), although it's not accurate. SG_CCPicker is already a solution for accurately linearizing all picked colors, so I strongly recommend it :)

For working with wide color spaces, though, the perceptual method for previewing colors (chromaticities, not gamma) is the only viable way that I've found so far.



Gerardo

Mike_RB
04-10-2008, 05:35 PM
The perceptual method is the simplest method, as you say, and it's more suitable with G2 (since VIPER doesn't have gamma correction), although it's not accurate. SG_CCPicker is already a solution for accurately linearizing all picked colors, so I strongly recommend it :)

For working with wide color spaces, though, the perceptual method for previewing colors (chromaticities, not gamma) is the only viable way that I've found so far.



Gerardo

We only work in 64-bit, and the SG color picker isn't 64-bit. So we gamma-correct down all our constant-color nodes.

gerardstrada
04-10-2008, 05:51 PM
Oh, I see. Sebastian is working on a 64-bit version. I hope it will be ready soon. I proposed (HDRI3D Issue #18) another trick that works with Picky, but it's not 64-bit either.



Gerardo

gerardstrada
04-10-2008, 09:23 PM
We only work in 64bit, .... So we gamma correct down all our constant-color nodes.

Just in case you guys want to try other ways: besides the classic linear workflow, in the article in Issue#18 I also proposed 2 additional linear workflows:

The Inverse Linear Workflow: useful and compatible with legacy projects (in which one had not worked in linear light). It doesn't affect some properties, but at least offers several benefits of working in linear light (several optical effects included).

The Multipass Linear Workflow: useful if you make extensive use of multipass/multilayer rendering.

Neither workflow is as accurate as the classic linear workflow (though the multipass workflow can provide exactly the same results, depending on the scene), and both are A LOT easier to set up. And thanks to Denis Pontonnier's generosity, they are 64-bit compatible :)



Gerardo

Thomas M.
04-11-2008, 02:34 AM
At this point there's something which bothers me even more: how do I generate a curve I can apply to a LW .hdr from a series of photos shot with a Nikon D2X or D3? I'd shoot a series of the same motif in 1/3 or 1/2 f-stop steps to be able to generate... yeah, what do I generate from it? I'd say I want a curve I can apply to the LW .hdr when converting it to 16bit, or to apply while staying in 32bit. Exposure in CS3 is nice, but doesn't work for this purpose.

Any ideas?

Cheers
Thomas

Limbus
04-11-2008, 02:44 AM
I'd say I want a curve I can apply to the LW .hdr when converting it to 16bit or to apply it while staying in 32bit.

With "Full Precision Gamma" or "HDR Expose" you can expose your images. Both are Image Filters. Not sure if this is what you want.

Cheers, Florian

Thomas M.
04-11-2008, 02:49 AM
This needs to be done in PS CS3 after all image manipulations in 32bit. Besides, a camera response curve and a gamma correction are two different things. Of course the response curve does have a gamma, but the curve looks much more complicated than a simple gamma curve. It's more like an s-curve, at least for traditional film stock.

I need to give the LW .hdr in PS the look of a D2X (or whatever camera) image look.

Cheers
Thomas

gerardstrada
04-11-2008, 04:31 AM
I need to give the LW .hdr in PS the look of a D2X (or whatever camera) image look.

Cheers
Thomas


Hello Thomas,

SG_CCTools can help you with the proper workflow to set up your pipeline (camera-Photoshop-LW-Photoshop-Print) in order to get a consistent result.

However, the first thing you need to do (after getting HDRI3D Issues #18 and #19) LOL :D is to get the icc/icm color profile of your Nikon camera model. Some cameras include it with the driver installation (do a search in your system for *.icc / *.icm). If not, with luck, you could find it on the web (google it), or you can ask your vendor/supplier for it, or the manufacturer directly, or if you want to go mad with an exact profile for your unique camera, you can profile it yourself (you'll need some hardware).

You need this color profile to work appropriately within LW (with the new SG_CCTools) and then to finish your work in Photoshop. Without this profile, everything we do will be by eye. You could probably match your camera look at gamma level, but not at gamut level (at least not accurately).
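
If it helps to visualize the gamma-level part of that matching, here's a toy Python sketch; the curve and its parameters are made up for illustration, and a real profiled camera response would replace them:

import numpy as np

def toy_film_curve(linear, exposure=1.0):
    # A made-up stand-in for a measured camera response: a Reinhard-style
    # highlight roll-off followed by a 2.2 display encoding. A real D2X
    # profile/curve would replace this.
    x = np.maximum(np.asarray(linear, dtype=float) * exposure, 0.0)
    return (x / (1.0 + x)) ** (1.0 / 2.2)

# Linear .hdr values above 1.0 roll off smoothly instead of clipping:
print(toy_film_curve([0.0, 0.18, 1.0, 8.0]))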



Gerardo

vfxwizard
04-11-2008, 09:04 AM
About inverse square falloff lights, there's a controversy in this regard as well. Do we need to use inverse square falloff lights if we are working in linear light?

Gerardo, I haven't read your hdri articles, my bad as your input is always great. But referring to the examples in the thread you mentioned, please let me try to persuade you that inverse square falloff is really mandatory. :)

The attached image shows two Direct Light / Luminous geometry comparisons. In the point light test, a 100% intensity point light with 1m Nominal Distance is shining 1m above a plane. Compare it with the Monte Carlo render beside it, lit by a 100% luminous ball with 1m radius placed where the light was (the ball is unseen by camera). This rendering is remarkably similar to the first one. The slight differences in shading, probably lost in the jpg, are due to the point light shining from its pivot while the luminous ball shines from its surface.

Now redo this with an area light and a luminous poly. The differences here are more pronounced because the Nominal Range is spherical and yet the area light - having a surface - does its own inverse square falloff. However, it's close enough to the Monte Carlo solution to be called accurate.

I think we all agree that in the real world two almost identical light sources should produce almost identical results. Math aside, to me this is the visual proof for needing inverse square falloff. Otherwise, GI and direct lighting will not interact properly. Putting LW aside, renderers that use Photon Mapping implicitly use inverse square falloff for contribution from light sources (photons are distributed in the scene).

What do you think about this? Anyway, let me congratulate you for the effort you put into promoting the linear approach. It's easy to fail to see immediate benefits if explanations only talk about exactness and colors raised to powers. But the wiki article and imagery will surely motivate a lot of people to delve deeper.


As for sharing experiences:

Nominal Distance circle plays an important role in inverse square lighting. Light inside the area defined by the Range/Nominal Distance value is not physically accurate. Since distances there are normalized to unity, light intensity is increased inside this area. This will also throw off any radiosity calculation, as the overlit surface will contribute a thin superbright spot to nearby surfaces. Nominal Distance should be thought of as the physical size of the light and set to scale, without ever intersecting a surface.

BTW, my workflow is almost identical to Mike_RB's. If someone is getting started with linear workflow, a pure mathematical gamma such as FPGamma can be difficult to handle. A surface with value 1 (in 8 bits) in linear space becomes 8% in gamma space, or value 21 in 8 bits per channel. This explains why everything suddenly seems lacking in contrast: there's significant fogging. A quick fix would be to apply in AE a levels adjustment with Input Black set at 7% and Clip to Output Black set to on.
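
To see where those numbers come from (a quick check, assuming a 2.2 display gamma):

linear = 1.0 / 255.0             # darkest non-zero 8-bit value, in linear space
encoded = linear ** (1.0 / 2.2)  # what a pure mathematical gamma does to it
print(encoded)                   # ~0.08, i.e. about 8%
print(round(encoded * 255))      # 21 in 8 bits - hence the "fogged" blacks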

gerardstrada
04-11-2008, 08:09 PM
Gerardo, I haven't read your hdri articles,


Oh! you should! LOL :D


my bad as your input is always great. But referring to the examples in the thread you mentioned, please let me try to persuade you that inverse square falloff is really mandatory. :)

Thanks for sharing your reasoning, I really appreciate it and I agree with much of it. But agreeing with something doesn't mean we can't re-think it and verify it under another paradigm. In this case, that paradigm is the linear workflow.


The attached image shows two Direct Light / Luminous geometry comparisons. ... I think we all agree that in the real world two almost identical light sources should produce almost identical results. Math aside, to me this is the visual proof for needing inverse square falloff. Otherwise, GI and direct lighting will not interact properly. ...

What do you think about this?


It surely makes sense. I understand the premise of your reasoning is Monte Carlo as real light behavior: if MC works that way, it should be alright, right? But judging by the contrast level, fresnel effects, reflections and direct light falloff in the images shown in the MC papers I've seen from that time, it seems they are showing linear renders (as if they hadn't worked in linear light). Thus, in that thread, I questioned ID^2 even for indirect lighting (yes, including Monte Carlo!). The idea there was to re-question this from its basis and reach our own conclusions. It's difficult to say whether the MC solution takes linear workflows into account. Consider as well that, as Red_Oddity points out in that thread (http://www.newtek.com/forums/showthread.php?t=79919), MC is calculated differently in LW 9.3.1 and FPrime - it seems LW's indirect lighting is not using ID^2 - though some people say it does. And there comes the controversy. In fact, in this case, indirect lighting from a luminous polygon behaves differently with LW's MC than with FPrime's MC:

LW's MonteCarlo
http://imagic.ddgenvivo.tv/forums/LWfalloff.png


FPrime's MonteCarlo
http://imagic.ddgenvivo.tv/forums/FPfalloff.png

It seems FPrime is indeed working with ID^2. But we can't be sure of that for LW. Hope someone can explain the technical reason for this.


Anyway, let me congratulate you for the effort you put into promoting the linear approach. ...

Thank you very much. I know this can be difficult now (as Blochi told me one time - this stuff should work hidden from our eyes), but until then, we need to take care of these things, and I really hope people investigate more deeply.



As for sharing experiences:

Nominal Distance circle plays an important role in inverse square lighting. ... Nominal Distance should be thought of as the physical size of the light and set to scale, without ever intersecting a surface.

Sure, but curiously, as I showed in that thread, nominal distance is accurate when we work in linear light (and with linear falloff). I'll post it again:

Nominal Distance: 8m

Without LCS workflow (linear falloff)
http://imagic.ddgenvivo.tv/forums/ies/raw8mfalloff.png

With LCS workflow (linear falloff)
http://imagic.ddgenvivo.tv/forums/ies/lcsw8mfalloff.png

Without LCS workflow (^2 falloff)
http://imagic.ddgenvivo.tv/forums/ies/raw8mfalloff2.png

With LCS workflow (^2 falloff)
http://imagic.ddgenvivo.tv/forums/ies/lcsw8mfalloff2.png

With LCS workflow (linear falloff - intensity increased)
http://imagic.ddgenvivo.tv/forums/ies/lcsw8mlinfalloffintensity.png

The only result that doesn't respect the nominal distance (it passes beyond it) comes from working in linear light with ID^2.

As I said in that thread: "Inverse Distance ^2 looks good to me, too." and "I'm not saying that it is necessary in that way. It's just a question to re-think from other point of view". So I still use ID^2 most of the time :) but sometimes I use Linear falloff with the light intensity increased instead (working in linear light). Not only because it gives results pretty similar to Inverse Distance ^2 in many cases, but because it can get rid of splotches in renders and works especially well with LW indirect illumination.



If someone is getting started with linear workflow, a pure mathematical gamma such as FPGamma can be difficult to handle. ... This explains why everything suddenly seems lacking in contrast: there's significant fogging. ...

I've seen several people get washed-out images when working in linear light. If they have linearized their images with the .4545 factor, they end up applying a 1.6-1.7 gamma at the end instead of 2.2. This commonly happens when we linearize images but not light colors. It's similar to what happens with the ColorMapping options in Vray. Another cause may be an uncalibrated monitor. As we know, a calibrated screen is very important when we use the perceptual method. And as we can see in the lightwiki tutorial, the classroom scene was worked in linear light, and none of those samples are washed-out images. This is because of the scrupulous work within the classic linear workflow, which provides a more accurate result by working with mathematical gamma exponents and formulas, yet in an easy way thanks to the SG_CCTools.

And I agree with you that mathematical gamma is not easy to handle for people getting started with linear workflows. In this regard, I think that someone getting started with these workflows should first try the inverse linear workflow, or even the multipass linear workflow, that I proposed in my article (they are easier to set up and can provide similar benefits). But if they want accuracy, they must be ready to handle mathematical gamma, color space gamuts and the methods for working with the SG_CCTools.



Gerardo

vfxwizard
04-12-2008, 07:17 AM
it seems LW's indirect lighting is not using ID^2 - though some people say it does. And there comes the controversy. In fact, in that case, indirect lighting from a luminous polygon behaves different with LW's MC than with FPrime's MC

Luckily this is something that can be demonstrated. :) LW's gi obeys inverse square falloff, as any gi renderer must, because light intensity gets spread over larger and larger areas as distance increases.

Here's the formula: P = L / d². Intensity at point P is the light's (L) intensity divided by the square of the distance d between L and P.

Let's try this out with 100% light intensity, increasing the distance by 1m each time. These are the results for a perfect renderer:

d=1m -> 100% / 1² = 100%
d=2m -> 100% / 2² = 25%
d=3m -> 100% / 3² = 11.11%
d=4m -> 100% / 4² = 6.25%
d=5m -> 100% / 5² = 4%
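
The same table, generated rather than typed, in case anyone wants to extend it:

for d in range(1, 6):
    # P = L / d**2, with L = 100% light intensity
    print(f"d={d}m -> {100.0 / d**2:.2f}%")
# -> 100.00%, 25.00%, 11.11%, 6.25%, 4.00%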

Now check if LW does the math right for direct lighting. The attached scenes have a white square receding by 1mt in every animation frame.

Sampling the center of the square in ImageViewerFP I get:
1m -> 99.95%
2m -> 24.99%
3m -> 11.11%
4m -> 6.25%
5m -> 4.00%

In the second scene, the light is replaced with luminous geometry, so we can check LW's gi.
1m -> 99.61%
2m -> 24.90%
3m -> 11.28%
4m -> 6.25%
5m -> 3.88%

Of course depending on where you sample there are slight differences, but overall the numbers speak for themselves.


I don't have FPrime, but I did tests like that in Maya and Modo and obtained the same results. I think FPrime should produce the same result in this "single-bounce" test. That the falloff shape in your test is slightly different is probably a matter of implementation. Indirect samples may or may not have been cosine weighted, an early ray termination strategy may be in place, or FPrime may even still be adding ambient light to the last bounce like LW did and Modo still does, etc. Every renderer is slightly different, but the core light interplay must be the same.


Now, I know this post is boring and let me make clear that I'm not trying to pick on you in any way. But it's very important to be able to trust the tools we use and rule out the idea that LW's gi is not physically accurate.

As for a renderer that works properly in linear light, they all must do that. The core rendering equation is a dot product, a few adds and muls. A renderer works in linear space by definition and can only produce correct results in linear space.
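
For readers wondering what "a dot product, a few adds and muls" looks like, here's the heart of it - a bare Lambert sketch in Python, assuming normalized vectors and linear inputs (not LW's actual shading code, of course):

def lambert(normal, to_light, light_color, albedo):
    # Diffuse shading: N.L times light times surface color - pure
    # multiplies and adds, which are only physically meaningful if
    # every input is linear.
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, to_light)))
    return tuple(n_dot_l * lc * a for lc, a in zip(light_color, albedo))

# Light straight overhead on an 18% grey surface:
print(lambert((0, 1, 0), (0, 1, 0), (1.0, 1.0, 1.0), (0.18, 0.18, 0.18)))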

In LW there were some hard coded "tweaks" (area light intensity, sprite hv) but by now it seems to me they have been removed.


as Blochi told me one time - this stuff should work hidden from our eyes)

How true.

vfxwizard
04-12-2008, 07:29 AM
Sure, but curiously, as I show in that thread, nominal distance is accurate when we work in linear light (and with linear falloff).

I have split this answer in another post as this is something where LW's manual was wrong in the past and is still unclear. May be of some interest to other readers.

The Range / Nominal Distance is really two controls in one.

When falloff is set to Linear, this is the range control. The value entered is the point where light intensity decreases to zero.

So light starts at full intensity where the light is placed, is exactly half intensity at half Range, and is turned off at Range.

However, when falloff is set to Inverse Distance or Inverse Squared distance, this becomes the Nominal Range control.

In this mode it represents the light "surface" or the point where light starts decreasing either with distance or squared distance. There is no Light "turn-off" point in those modes.

So it is really correct that light "passes over" this point, because in real world light never ceases to contribute to surfaces. It's only that it gets so spread out that the contribution becomes negligible.

Now, what's not so nice in LW is that inside the Nominal Distance circle, light intensity is increased over the value set in Light properties.

This happens because distances inside that circle are considered to be less than 1, say .5. Now .5 squared is .25, and if we divide 100% light intensity by .25 we get... 400% light intensity! Wow, if real-world physics worked this way we would have solved all our energy problems. :)

IMHO, light intensities inside the Nominal Range should really be clamped to the original intensity. Right now they create unrealistic hotspots that adversely affect GI calculations.
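
Here's a sketch of the behavior being described, with a hypothetical clamp added - this models the explanation above, not LW's actual source code:

def lw_style_falloff(intensity, distance, nominal, clamp=False):
    # Distances are normalized by the Nominal Distance, so inside that
    # circle the normalized distance is < 1 and dividing by its square
    # BOOSTS the light beyond its nominal intensity.
    d = distance / nominal
    result = intensity / (d * d)
    return min(result, intensity) if clamp else result

print(lw_style_falloff(100.0, 0.5, 1.0))              # 400.0 - the hotspot
print(lw_style_falloff(100.0, 0.5, 1.0, clamp=True))  # 100.0 - the proposed clamp
print(lw_style_falloff(100.0, 2.0, 1.0))              # 25.0 - normal inverse square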


calibrated screen is very important when we use the perceptual method

Absolutely agree. And if only LW had an OpenGL gamma correction there would be no need to alter the gfx card lut for preview. Maybe in 10.

gerardstrada
04-13-2008, 07:58 AM
Luckily this is something that can be demonstrated. :) ...

Thanks for the scene, it's better to test this knowing we are talking about the same thing. And I've noticed we aren't:

According to the calculations, we should get this:

d=1m -> 100% / 1² = 100%
d=2m -> 100% / 2² = 25%
d=3m -> 100% / 3² = 11.11%
d=4m -> 100% / 4² = 6.25%
d=5m -> 100% / 5² = 4%

And at least here, your scenes give me this:

d=1m -> 100% / 1² = 100%
d=2m -> 100% / 2² = 25%
d=3m -> 100% / 3² = 11.11%
d=4m -> 100% / 4² = 6.25%
d=5m -> 100% / 5² = 4%

Yes, exactly the same values! (I know where to pick, I guess) :) So we can say LW is accurate in this regard. However, as I said before, the whole point of that thread was to test this while working in linear light. And under the LCS workflow, things are totally different (just add a 2.2 gamma correction exponent at the end of your render - we won't need to linearize colors since you are using 100% white for the surface):

d=1m -> 100%
d=2m -> 53.25%
d=3m -> 36.83%
d=4m -> 28.36%
d=5m -> 23.15%

It seems ID^2 doesn't take linear workflows into account, but rather linear render outputs. And what we finally see is a gamma-corrected render, not a linear one. As I said in that thread, the only scenario in which we can consider this the correct way is if these light measures don't take our visual perception into account, and it seems that is the case.
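
For reference, those measured values are exactly what you get by display-encoding the physically correct linear falloff - a quick check, assuming a plain 2.2 exponent:

for d in range(1, 6):
    linear = 1.0 / d**2                # physically correct inverse-square value
    perceived = linear ** (1.0 / 2.2)  # after the 2.2 gamma correction
    print(f"d={d}m -> {perceived * 100:.2f}%")
# -> 100.00%, 53.25%, 36.83%, 28.36%, 23.15% - matching the measured values above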



I don't have Fprime, but I did tests like that in Maya and Modo obtaining the same results. I think Fprime should produce the same result in this "single-bounce" test. That in your test the falloff shape is slightly different is probably a matter of implementation. Indirect samples may or may not have been cosine weighted, an early ray termination strategy may be in place, or Fprime may even be still adding ambient light to the last bounce like LW did and Modo still does, etc. Every renderer is slightly different, but the core light interplay must be the same.

Yes, it seems FPrime calculates indirect bounces a bit differently. Direct lighting is, however, the same as LW. Not sure if it is adding ambient light to the last bounce, since its results are in fact more contrasted than LW renders - deeper shadows. This is more noticeable when working in LCS, and it provides a better look indeed.


Now, I know this post is boring and let me make clear that I'm not trying to pick on you in any way. But it's very important to be able to trust the tools we use and rule out the idea that LW's gi is not physically accurate.

Oh no, this is pretty interesting and I appreciate that we are talking about this, because ID^2 has a different behavior/'look' under linear workflows.


As for a renderer that works properly in linear light, they all must do that. The core rendering equation is a dot product, a few adds and muls. A renderer works in linear space by definition and can only produce correct results in linear space.

In LW there were some hard coded "tweaks" (area light intensity, sprite hv) but by now it seems to me they have been removed.


Sure, I've seen similar calculations in vRay and MR. But as we can see, ID^2 has a different 'look' under linear workflows.


The Range / Nominal Distance is really two controls in one.

When falloff is set to Linear, this is the range control. The value entered is the point where light intensity decreases to zero.

So light starts at full intensity where the light is placed, is exactly half intensity at half Range, and is turned off at Range.

Yes, that's the other thing we were measuring in a different way. We get a perfect linear behavior when we measure this along the Z axis, but when we let the light spread along a surface and measure that, we don't get the same result. And I think this phenomenon has to do with this:


Now, what's not so nice in LW is that inside the Nominal Distance circle, light intensity is increased over the value set in Light properties.
This happens because distances inside that circle are considered to be less than 1, say .5. Now .5 squared is .25, and if we divide 100% light intensity by .25 we get... 400% light intensity!

And I think this is why, when working with a LCS workflow - gamma-corrected output renders - we can sometimes get more natural results with linear falloff and we can get rid of splotches. These cases arise precisely when we see, within the shot, the light source spreading along a surface. Let's consider this: with a light at 100% intensity, with ID^2 spreading along a surface, we can get about 12000000% in the light spot! (inside the nominal distance circle).


IMHO, light intensities inside the Nominal Range should really be clamped to original intensity. Right now they create unrealistic hotspots that adversly affect GI calculations

Completely agree. Hope we can see this solved in future versions. Until then, when working with a LCS workflow, I'd try both ways (ID^2 and linear) and stay with the one that provides more natural results for the case at hand.


And if only LW had an OpenGL gamma correction there would be no need to alter the gfx card lut for preview. Maybe in 10

You might want to use the SG_CCTools (SG_CCFilter as a LUT previewer); it works at gamut level, so it's A LOT better than a simple gamma correction. Really hope to see SG_CCTools as a built-in system in v10 :)



Gerardo

vfxwizard
04-14-2008, 03:06 AM
these light measures don't take into account our visual perception, as it seems this is the case.

Yes, after applying the non-linear gamma correction the values are adjusted for display. And guess what? They get very close to what non-squared distance decay gives in a linear render.

In short: physically correct value in linear space, after display compensation, becomes the value we perceive as correct on that display device.

This is the essence of linear workflow: separating the rendering from the display. This allows us to do away with most of the tweaks, tricks and hacks that helped make images look good. This in turn leads to easier lighting, correct color matching in post, etc.
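
In code terms the separation is just this (a toy sketch; the 2.2 exponent stands in for whatever the real display transform is):

def to_linear(encoded):             # inputs: textures, picked colors...
    return encoded ** 2.2

def render(surface_linear, light):  # all the math happens here, in linear space
    return surface_linear * light

def to_display(linear):             # display compensation happens LAST
    return linear ** (1.0 / 2.2)

# View this result; never tweak the linear math just to "look right" on screen:
print(to_display(render(to_linear(0.5), 2.0)))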



I'd try with both ways (ID^2 and linear) and I'd stay with the one that provides more natural results according to the case.

Of course, it's the result that matters. I'm not advocating pure physical rendering, that's what Maxwell is for. It's just that we are no longer forced to use those tricks (and many can be dropped with confidence, like radiosity intensity higher than 100%).

Just a side example: how many times have we nodded at the mantra "ambient light just adds a constant value so it's useless"? After gamma correction -as was originally intended by the shading equations- this is not true anymore. Still cheap, but surely not a constant.
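
A two-line check of that side example, assuming a 2.2 display gamma: adding a constant 0.05 of ambient in linear space does not add a constant on screen:

encode = lambda x: x ** (1.0 / 2.2)
for base in (0.0, 0.1, 0.5):
    # Displayed difference made by +0.05 of linear ambient at this base level:
    print(round(encode(base + 0.05) - encode(base), 3))
# -> 0.256, 0.071, 0.032: the lift is biggest in the shadows, smallest in highlights.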

Mike_RB
04-14-2008, 07:53 AM
This is the essence of linear workflow: separating the rendering from the display. This allows us to do away with most of the tweaks, tricks and hacks that helped make images look good. This in turn leads to easier lighting, correct color matching in post, etc.

I always thought inverse square lights were broken somehow. Same with the GI not getting deep enough into the corners with only 2 bounces, and fresnel reflections never seemed bright enough at the facing angle..... It fixes a lot of stuff.

gerardstrada
04-15-2008, 06:27 PM
Yes, after applying the non-linear gamma correction the values are adjusted for display. And guess what? They get very close to what non-squared distance decay gives in a linear render.

Yes, it's really similar to Inverse Distance... So people who don't work in linear light - is there anyone left after this thread? :D - can work with this type of Intensity Falloff.



In short: physically correct value in linear space, after display compensation, becomes the value we perceive as correct on that display device.

This is the essence of linear workflow: separating the rendering from the display. This allows us to do away with most of the tweaks, tricks and hacks that helped make images look good. This in turn leads to easier lighting, correct color matching in post, etc.

Of course, that's what my articles in HDRI3D magazine are about. However, as Mike has said, inverse square falloff has never fit completely within LW; even when working in LCS, something is missing due to the increased intensity inside the nominal distance, as you have said. Besides, there are a lot of experiments like this (http://www.exploratorium.edu/snacks/inverse_square_law.html). They are all inverse square law experiments where light is measured taking our visual perception into account. If our visual perception were the ruling parameter, all GI engines would be wrong, because there's no way to actually get inverse square falloff by gamma correcting linear renders. On the other hand, the inverse square law applies as well to gravity, radiation, sound, electricity, etc., where human visual perception has nothing to do with it.



Just a side example: how many times have we nodded at the mantra "ambient light just adds a constant value so it's useless"? After gamma correction -as was originally intended by the shading equations- this is not true anymore. Still cheap, but surely not a constant.

Btw, I prefer to use gradients for local ambient shading; it's cheap and can be very powerful. Another curious thing is that the default LW ambient intensity began at 25%, was then lowered to 15%, and is now 5%. LCS workflows weren't taken into account at the beginning, I guess :)



Gerardo

gerardstrada
05-04-2008, 02:46 AM
Originally Posted by Thomas M.
I need to give the LW .hdr in PS the look of a D2X (or whatever camera) image look.

Cheers
Thomas


Looking for a Canon profile, I found a couple of Nikon D2X profiles here (http://digikam3rdparty.free.fr/ICCPROFILES/CameraProfiles/LZ.2.0/) for ISO100 and ISO800. MS RAW Image Thumbnailer (freeware) also supports the Nikon D2X color profile. The same goes for Capture One Pro.

I've checked the profiles here and they are linear and really wide - similar to Wide Gamut RGB - but you can play it safe with ProPhoto RGB as your working color space (especially if you are working for print).


Gerardo

Gregg "T.Rex"
12-17-2008, 03:26 PM
much thanks to you and Gregg having articles in it.

Doohh...
Just came across this thread... :stumped:
Thanks, Pete... :thumbsup:

Cheers,
T.Rex

monovich
12-17-2008, 05:17 PM
wow. I just read this whole thread. this is some serious reading.

oh, and I ordered the two issues of the magazine also. thanks for the motivation Gerardo!

metahumanity
12-17-2008, 08:33 PM
Gerardo, i am very interested in this and will get the magazine.

I have VERY often noticed big CG/real-film integration problems that I (in my ignorance of this particular subject) have attributed to poor compositing or inaccurate lighting.

Jurassic Park, Minority Report, 300, The Matrix to some extent, Star Wars, Hulk... even Two-Face in Batman, are just some examples that come to mind.

I would even go as far as to say that the color-burned, washed-out post we see in so many movies these days is a direct attempt to hide these flaws.

I think the Iron Man FX have been the best integrated I have seen so far, so I guess it's catching on.

Very curious to learn more about this.

Oh, and I don't see any problem with you promoting your 3d related article on a 3d forum with some enthusiasm. Also, if you decide to publish first and eventually go online later I don't understand how someone could have a problem with that.

The article is yours and you promote and distribute it how you see fit without having to explain yourself.

However, Exception does have a point. I would even add to his argument that if you want to see the techniques catching on and evolving fast, the web would be the better medium, even if it were a paid download or on a subscription tutorial site. (Yes yes, I heard you, it's not a tutorial, but you get my point).

Happy xmas to all....uhm...soon

monovich
12-18-2008, 11:21 AM
I don't think the color work in the feature films mentioned above is intended to hide poor compositing. Very talented artists put those movies together, and the looks achieved are exactly what the director intended. Love it or hate it, it's a style thing, not a mistake or a mask for bad work.

gerardstrada
12-20-2008, 04:43 AM
Gerardo, i am very interested in this and will get the magazine.

I have VERY often noticed big CG/real-film integration problems... I would even go as far as to say that the color-burned, washed-out post we see in so many movies these days is a direct attempt to hide these flaws. ...

Very curious to learn more about this.


I think, like Monovich, that most of them are definitely a visual-style decision, or have nothing to do with compositing. But some others... hmmm.

It shouldn't surprise us that there are still things in CG that remain to be mastered. Just consider that there were two shots in the film The Day After Tomorrow - which has awesome VFX - that had to be eliminated from the movie (due to their monumental complexity, photo-realism was unviable). But in such a case, it had to do with technology, not with talent.

Less-than-excellent results are commonly not related to the artists' talent - which at the level you refer to is top notch - but mainly to the workflows and working methodologies of the companies they work for, which strongly influence techniques and pipelines, planning and deadlines (and even the working dynamics of a team).

These production schemes are also influenced by the size of the company. At least in my experience, boutique studios' schemes are commonly able to deliver a smaller amount of work but of higher quality. Some big VFX houses have also copied work schemes from small houses to improve their results, and vice versa. Six or seven years ago, for example, ILM recognized some problems in their working schemes related to the technical complexity of their tools, the inter-operation between their artists and supervisors, and the fact that they had too many specialists doing many small parts of the work, which bureaucratized simple tasks. In those days, they asked why smaller studios were able to deliver better work in less time. They adopted some working schemes from small studios to speed their work up (more and better generalists than specialists). They changed their GUI and systems in such a way that many technical aspects were hidden behind the scenes from their CG artists (many of them don't even know their linear workflows are already implemented in the structure of their systems), they split the work into small groups and put together all the people who worked in teams, etc. Results improved a lot after some changes.

On the other hand, small studios also try to adapt workflows and pipelines used in larger companies. This is commonly very expensive for small houses, but some of them have terrific generalists who can manage very complex technical aspects that expensive systems have automated (here, knowledge can be as good as a lot of money, if not better). These CG professionals are worth their weight in gold.



I also think that some deficiencies in CG integration are related not to linear workflows, but to color management... very, very probably. Almost all CG work is made in monitor color space, which commonly has a gamut really very small compared with negative film spaces. This has been a problem for many years in VFX and it's commonly solved in post these days (DI phase => specifically in the digital color timing process). But this process involves color correcting on a per-shot basis, even specific areas, with multiple grades for several different output media (film, DVD, print, web, stereoscopic - RealD special requirements, etc). This is like painting every shot, literally. Imagine that in a full CG movie. And they do it, for sure.

Some studios make use of commercial solutions for color management and color grading (like Lustre or Kodak's KDM, Iridas or da Vinci, etc). But some others have developed their own systems. Pixar, for example, has its 'Amethyst' for color matching between their monitors and film. Contrary to other technologies they have shared, they are very hermetic about how this kind of system works. But except for a very few rare exceptions, almost all these systems solve this issue in the final color-grading sessions. I think this is a limitation (if not an error). I understand common color management systems are very expensive, but this limitation can be solved for a Lightwave-based studio with the SG_CCTools.

A first color matching pass can be made directly at the CG artist's workstation at a very early stage of the process. Even with a crappy monitor, this can save a lot of time later in the final color grading process. Some Maya users have told me: "You lw users are very lucky to have such a great color management system already available... and the worst thing, it's free! errrrrrggggg!!" :D





Oh, and I don't see any problem with you promoting your 3d related article on a 3d forum with some enthusiasm. ... However, Exception does have a point. I would even add to his argument that if you want to see the techniques catching on and evolving fast, the web would be the better medium... (Yes yes, I heard you, it's not a tutorial, but you get my point).

For people interested, there are simpler approaches to the SG_CCTools on the web, here in this thread:

http://www.newtek.com/forums/showthread.php?t=82605

Anyone can share their experiences there; answers, questions and any contribution are very welcome.

About the inverse linear workflow and multipass linear workflow, I've written an overview here:

http://www.spinquad.com/forums/showthread.php?t=19309&p=230618

and thanks to HDRI3D's generosity, we can download my presets for free from their website. Any contribution to those implementations can be made there, or here as well.

However, I don't think the web can do much at this moment. Consider that workflows don't evolve as fast as techniques - mainly because they are very customizable and every studio has its own way of implementing them (some even have opposing opinions about how this stuff should work). This stuff is not standardized yet and there are a lot of misunderstandings too.

I'm also aware that not everyone cares about these workflows right now. According to a CGTalk poll, about 33% have implemented some kind of linear workflow (and most of them have more questions than answers). Linear workflow at gamma level (which is a simple version of this) is a good example of how most people wait until a workflow/method/technique or tool becomes "the state of the art" before beginning to care about it. So I don't expect speed for linear workflows at gamut level and color management; I prefer that the very few people who do care about this implement these workflows properly, improve their results, and then share their knowledge and experience on the web later.



Gerardo

gerardstrada
12-20-2008, 04:52 AM
wow. I just read this whole thread. this is some serious reading.

oh, and I ordered the two issues of the magazine also. thanks for the motivation Gerardo!

Hey! Thanks for the interest Monovich!



Gerardo

John Geelan
01-20-2009, 11:50 AM
Gerardo,
If you read this and have the time, could you drop over to my thread in the LW-Community - Colour Management in LightWave.
A group of us there are in desperate need of enlightenment:D