LightWave Compositing



Richard Hebert
05-25-2009, 02:05 PM
I'm trying to composite geometry that's surfaced like tinted glass and I can't find a workable solution. This is what's being exported from LightWave versus what's being comped within the program. This is a background radiosity pass. Any compositors out there ever encounter this and if so, how did you handle it? Thanks for any assistance.

Richard

Captain Obvious
05-25-2009, 02:09 PM
What passes have you exported? If you upload the raw files (with alphas, of course), I can have a look at it. Just looking at the result, it doesn't seem like anything that should be too difficult.

Richard Hebert
05-25-2009, 02:47 PM
I'm exporting TIF files for After Effects, but the forum doesn't seem to accept those, so the TIF of the exported radiosity layer is zipped. It's the only rendered layer available for testing at the moment.

Richard

Richard Hebert
05-25-2009, 05:43 PM
Here are sample TIFs for the radiosity, diffuse, and specular passes.

Richard

Jockomo
05-25-2009, 07:54 PM
I could be wrong, but in my experience radiosity shadows do not show up in alphas.
I don't know if that has anything to do with your problem or not.

joelaff
05-25-2009, 08:34 PM
Looks like your radiosity pass includes the windows, and that they are black in that pass? There are a few ways to handle this. If your compositor has the right apply mode you can merge the radiosity onto the RGB in a way that preserves the RGB's alpha. Otherwise you could use the alpha from the RGB as a track matte for the radiosity (in AE it is a track matte from alpha). You could knock out the alpha from the radiosity pass before merging it onto the RGB. Or you could render out a matte pass where you set all your objects to matte black, but set the windows to matte white, and use that as a matte for the radiosity pass.
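To make the matte idea concrete, here is a minimal numpy sketch of the knockout (the array names, sizes and the additive merge at the end are assumptions for illustration; AE track mattes and Fusion masks do the equivalent math per pixel):

```python
# Minimal numpy sketch of the "matte the radiosity with the beauty alpha" idea.
import numpy as np

h, w = 4, 4                              # tiny stand-in for a frame
beauty_rgb = np.random.rand(h, w, 3)     # beauty / RGB pass colour
beauty_a = np.random.rand(h, w, 1)       # beauty alpha (0 where the windows are)
radiosity = np.random.rand(h, w, 3)      # radiosity pass, windows rendered black

# Use the beauty alpha as a matte so the radiosity only contributes where
# the beauty pass has coverage; the window areas stay untouched.
radiosity_matted = radiosity * beauty_a

# Merge the matted radiosity onto the beauty colour (additive merge here).
comp = np.clip(beauty_rgb + radiosity_matted, 0.0, 1.0)
print(comp.shape)
```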

If you are using AE read up on track mattes. If you are using Fusion read up on the merge modes and also bitmap masks (and/or the Bol tool to copy the alpha). Shake would be similar to Fusion.

Also... I *think* AE's "preserve transparency" setting might help you as well. Read up on that. We have been using Fusion exclusively for about 5 years, so I am rusty on AE.

Hope this helps.

Richard Hebert
05-25-2009, 09:01 PM
What exactly constitutes an RGB Pass? I'm going self taught on this and I'm still trying to get up to speed on the variety of passes and what's included in them. Thanks for the insight.

Richard

Cageman
05-25-2009, 09:12 PM
What exactly constitutes an RGB Pass?

Richard

Color/textures without shading...

EDIT: Oh... I was thinking about the Buffer...

Well... RGB passes are also known as chroma passes. What you essentially do is render several masks at the same time, using colours instead of alpha. You use these colours as a way to define what area of the image you want to affect.

Using either the RLA, RPF or PSD exporters allows you to output something called Object ID and Surface/Material ID. They are essentially RGB-based passes that give each surface and object a unique colour. However, these exporters seem to have been broken in later versions of LW. Using LW 8.5 + Combustion will allow you to use those exporters (at least RLA/RPF) without issues.
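As a rough illustration of pulling a mask out of an ID ("chroma") pass, here is a small sketch; the ID colour and tolerance are made-up values, and a compositor would do the same selection with a keyer or colour picker:

```python
# Sketch: turn an Object/Surface ID pass into a mask for one surface.
import numpy as np

id_pass = np.zeros((4, 4, 3), dtype=np.float32)
id_pass[1:3, 1:3] = [1.0, 0.0, 0.0]       # pretend one surface was tagged pure red

target = np.array([1.0, 0.0, 0.0])         # the ID colour we want to isolate
tolerance = 0.01

# Pixels whose ID colour matches the target become a 0/1 mask that can then
# drive a correction on the beauty pass in comp.
mask = (np.abs(id_pass - target).max(axis=-1) < tolerance).astype(np.float32)
print(mask)
```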

EDIT 2:

If you are going to get serious about multipass/multilayer rendering in LW, I suggest you check out both PassPort (http://www.lwpassport.com/) and Janus (http://janus.faulknermano.com/) as they both provide a lot of control and an easier setup for multipass/multilayer rendering.

I also happen to have created video tutorials for both:

PassPort (http://www.newtek.com/forums/showpost.php?p=886823&postcount=6)

Janus (http://www.newtek.com/forums/showthread.php?t=93407&highlight=Janus)

Cheers!

Richard Hebert
05-25-2009, 09:15 PM
How do I go about that in LightWave? I've turned off specular and shadows in the lights but I'm not getting what I'm after. Please forgive my ignorance. Thanks.

Richard

Richard Hebert
05-25-2009, 09:42 PM
Thanks for the tips on multipass; I'm going to delve into this a little deeper. It's beyond my abilities and budget at the moment, but it never hurts to get prepared for it.

Richard

Mr Rid
05-26-2009, 01:24 AM
I'm not following what it is you are trying to do, actually. I find it is very rare that I ever need to separate a bunch of channel passes. Usually it is a waste of time. A compositor I work with said that even on Transformers he never once needed all the channels he was always given. If it is lit and textured right in 3D, it isn't going to improve with 2D tweaking of channels.

Also, you should render the background as one pass. And render all other elements over a black background for them to comp correctly without funny edges.

Richard Hebert
05-26-2009, 01:45 AM
I tend to agree about the number of passes. The problem I was having was trying to get reflections from transparent or partially transparent surfaces. I was relying on the alpha channel to have varying densities so the layer could be composited using the alpha. LightWave doesn't do that. I am accustomed to using software like Poser, which supports varying-density alphas. In LightWave I have to use solid objects as glass to capture reflections, then composite those using a transfer mode like Add or Screen to see any color tinting or reflection on the 'glass'. Everything renders fine within LightWave; it's just the alpha export that's causing additional layers in the compositing process. At least that's what I see happening. There may be another solution but it hasn't presented itself yet.

Richard

Cageman
05-26-2009, 06:07 AM
Am not following what it is you are trying to do actually. I find it is very rare that I ever need to separate a bunch of channel passes. Usually it is a waste of time.

Yeah... but tools like PassPort and Janus make those things a breeze... a couple of mouse clicks and off you go.


A compositor I work with said that even on Transformers he never once needed all the channels he was always given. If it is lit and textured right in 3D, it isnt going to improve with 2D tweaking of channels.

That may very well be true. But things like spec, reflection, occlusion, shadow density and colour can easily be tweaked without affecting the render quality, as long as you render out the passes properly. Being able to change a sequence in a compositing app within minutes, compared to doing a complete re-render, will gain you time, not waste it.

Passes should of course be created within reason and where appropriate, and should never be used as an excuse for sloppy lighting and shading. But render buffers don't add to render time, and even if you don't think you will need these buffers in comp, I think it is good practice to always output them into a multichannel EXR file just in case the client changes their mind.

;)

joelaff
05-26-2009, 07:56 AM
By RGB pass I simply meant your standard RGBA pass (beauty pass).

Don't make more passes than you need. It is a common mistake for people to make way too many passes. If you simply want to explore how compositing works then by all means make them all and experiment. But for real production you generally don't need too many passes, especially if the 3D artist is the same as the compositing artist. When someone is talented in both areas they know what to make as a pass and what to fix in the render.

joelaff
05-26-2009, 08:00 AM
I tend to agree about the number of passes. The problem I was having was trying to get reflections from transparent or partially transparent surfaces. I was relying on the alpha channel to have varying densities for the layer to be composited using the alpha. LightWave doesn't do that. I am accustomed to using software like Poser which supports varying density alphas. In LightWave I have to use solid objects as glass to capture reflections then composite those using a transfer mode like add or screen to see any color tinting or reflection on the 'glass'. Everything renders fine within LightWave it's just the alpha export that's causing additional layers in the compositing process. At least that's just what I see happening. There may be another solution but it hasn't presented itself yet.

Richard


You should get reflections on transparencies out of LW. Perhaps you are interpreting your render footage as having a straight alpha. LW produces premultiplied alphas by default (unless you enable the Fader Alpha option, which may have actually been renamed by now, but I don't recall). Many LW effects do not work properly with fader alpha, so I would stick with premultiplied alphas.

Make sure you interpret your alphas as premultiplied in the compositor (in the footage settings in AE... Fusion generally assumes things are premultiplied, though it is designed for people with an in-depth knowledge of the difference and can work either way).
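To show why the interpretation matters, here is a tiny numeric sketch (the values are invented; the point is what happens to a reflection-only pixel whose alpha is 0, i.e. pure glass with a reflection on it):

```python
# Premultiplied vs. straight interpretation of the same rendered pixel.
import numpy as np

premult_rgb = np.array([0.20, 0.20, 0.25])   # render output: colour already premultiplied
alpha = 0.0                                   # fully transparent surface
background = np.array([0.50, 0.40, 0.30])     # whatever sits behind it in the comp

# Premultiplied "over": foreground colour is added on top, so the reflection survives.
over_premult = premult_rgb + background * (1.0 - alpha)

# Misinterpreting the footage as straight alpha multiplies by alpha first,
# which zeroes the reflection wherever alpha is 0.
over_straight = premult_rgb * alpha + background * (1.0 - alpha)

print(over_premult)    # reflection visible over the background
print(over_straight)   # reflection gone
```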

Richard Hebert
05-26-2009, 08:59 AM
Hi Joe,

I'm rendering every pass using premultiplied against a black background and then importing those passes into AE with the alphas interpreted as premultiplied. The edges look great but I'm seeing straight through any transparent surface or partially transparent surface to the underlying video when comping in AE. I'm doing all compositing in AE and none in LW. I'm probably not adjusting something but I've been stumped by this since the release of 9.0. I thought that it was the version of AE that's being used (5.5) but that's proved not to be the problem. Would you mind creating a partially transparent reflective poly, do a test render against a black bkgnd, export as a 32 bit TIF file and upload the image so that I can test it in an AE comp? Sorry guys, not sure what else to do, even spoke with tech support for 30 min. trying to get this straightened out.

I've noticed that transparency data is indeed in the alpha but no density data with regard to reflections. I'm going to assume that reflections have to be done as a separate pass whether I want the extra pass (probably a good idea) or not. This means that any glass surface has to be saved on a separate model layer so that it can be active while the other layers are matting it when passing in front of the glass. Am I over thinking this whole thing just to get reflections to comp?

joelaff
05-26-2009, 11:41 AM
Hi Joe,

I've noticed that transparency data is indeed in the alpha but no density data with regard to reflections. I'm going to assume that reflections have to be done as a separate pass whether I want the extra pass (probably a good idea) or not. This means that any glass surface has to be saved on a separate model layer so that it can be active while the other layers are matting it when passing in front of the glass. Am I over thinking this whole thing just to get reflections to comp?

This is normal. The reflections are additive. I am pretty sure AE should merge this properly over another background.

This is a PNG from LW merged over a gradient. If you are doing something that post-multiplies the image then it would indeed make the reflections disappear. Does that PNG merge OK in AE? Interpret it with a premultiplied alpha.

NOTE... if your final render goes out to a format with an alpha, and you render out of AE premultiplied, that could make the reflections disappear in areas where the alpha is black (0).

Perhaps post a sample render that does not work the way you want it to.


Sorry for the 2k image in the merge... default background size for me...

Richard Hebert
05-26-2009, 10:03 PM
OK, got a working scenario. I'm going to have to sandwich the reflection passes to get what I'm looking for. External Reflection Pass, Internal Reflection Matte, Internal Reflection Pass, then Specular, Diffuse, and Radiosity passes = a good start. Thanks guys for getting me through this with all of the tips on mattes (didn't even realize I could generate a white matte). This is a very basic test of the process. Will post more complete renders later on to demonstrate the effect that I'm wanting to achieve.
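For anyone trying to reproduce this, here is a rough numpy sketch of how a sandwich like this might stack up in comp; the additive recombination and the role of the internal matte are my assumptions, and the exact order depends on the shot:

```python
# Rough sketch of reassembling the pass "sandwich" described above.
import numpy as np

h, w = 4, 4
diffuse   = np.random.rand(h, w, 3)
specular  = np.random.rand(h, w, 3)
radiosity = np.random.rand(h, w, 3)
int_refl  = np.random.rand(h, w, 3)      # internal (cockpit-side) reflections
int_matte = np.random.rand(h, w, 1)      # white where the internal glass is visible
ext_refl  = np.random.rand(h, w, 3)      # external (sky-side) reflections

base = diffuse + specular + radiosity     # shading passes recombine additively
comp = base + int_refl * int_matte        # internal reflections only inside the glass
comp = np.clip(comp + ext_refl, 0.0, 1.0) # external reflections go on top
print(comp.shape)
```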

Richard

Mr Rid
05-26-2009, 10:36 PM
Yeah... but tools like PassPort and Janus makes those things a breeze... a couple of mouseclicks and off you go.


Yes, except that now you are taking up much more disk space and a compositor has to take time reassembling the pieces. Too often this is like taking a carburetor apart, then having someone else reassemble it when it was working just fine in the first place. In 13 years only one time can I recall needing to render a separate spec pass for an element in one shot, because the client was picky about how a particular spec hit was moving over a surface. But that's it. And I've seen many examples of shots getting lost in the comp due to too much control and value needling, when the elements actually looked better straight out of 3D (some Matrix sequel shots suffered from this).

Now, it is usually preferable to render separate CG elements (objects like buildings, ground and trees, fur, FX, etc.) in separate passes, but when you know how to efficiently light and texture (FPrime can help considerably), normally it is not necessary to break out surface channels. I also regularly have to make isolated shadow passes to integrate CG with plate photography.

But I advocate doing things in the most straightforward way possible and avoiding added complexity unless it is really necessary. I would not break out passes unless the compositor expects to need them. Doing it for no particular reason is usually a waste of time for the compositor, bogs down the comp (2k EXRs), and if you are dealing with dozens or hundreds of 2k (or maybe 4k) shots, all those extra passes eat up several times the amount of drive space and take much longer to back up.



That may very well be true. But things like spec, reflection, occlusion, shadow density and colour can easily be tweaked without affecting the renderquality, as long as you render out the passes properly. Being able to change a sequence in a compositing app within minutes compared to do a complete re-render will gain you time, not waste your time.

Naturally this is the purpose of breaking out channel passes. But the need to tweak individual surface channels in comp should rarely come up if you are lighting things correctly in the first place, although sometimes a particular effect requires it. Most client changes are more drastic, or may include something like 'turn down the spec a little', which can be tweaked in seconds in 3D while you are addressing bigger lighting changes.



Passes should of course be created within reason and where appropriate, and should never be used as an excuse to sloppy lighting and shading.

Yes, and this is more to my point, that 2D channel tweaking is too often a substitute for sloppy 3D. I see that decent lighting and basic global surfacing skills are widely lacking, as well as optimized work habits. Some compositors prefer to get a bunch of pieces to play with and to feel like they have control. But I find it usually best to fix things as close to the source as possible in any kind of pipeline, because there is inevitably less control at the end of the assembly line, where some things just can't be fixed properly.

For instance, try to shoot it live before dicking with the complexity of 3D, then make 3D look as good as is practical before trying to fudge it in 2D, then fix color issues in 2D instead of trying to fix them in the digital intermediate session. You don't want to make more work for the next person in the assembly line.

Mr Rid
05-26-2009, 11:00 PM
OK, got a working scenario. I'm going to have to sandwich the reflection passes to get what I'm looking for. External Reflection Pass, Internal Reflection Matte, Internal Reflection Pass, then Specular, Diffuse, and Radiosity passes = a good start. Thanks guys for getting me through this with all of the tips on mattes (didn't even realize I could generate a white matte). This is a very basic test of the process. Will post more complete renders later on to demonstrate the effect that I'm wanting to achieve.

Richard

I wasn't sure if you realized there are alpha channel options for each surface under the Advanced tab that may help with whatever you are going for.

Richard Hebert
05-26-2009, 11:17 PM
I've been around this tree more times than I can count with techs at NewTek, experimentation, and the forum. I was watching a compositing tutorial produced by Zoic Studios. Almost every topic was touched upon except glass. Their ship had surface reflections in the composite but no glass to deal with. It would have been nice to see this actually done. I'm not a real big fan of layer upon layer, but compositing within LightWave is just not going to be an option unless it can be used to match grain and the like at the same speed and ease as a compositing app. I wonder if NewTek would produce a good compositing tutorial. BTW, I watched your demo reel and I really like the composites and animation. Really good work, indeed. If you have any recommendations that will lessen the number of layers I'm more than willing to listen; it seems you've been there and done that at least a few times. I've tried using the alpha export options (these are the options NewTek walked me through at the very beginning of all this). Not really sure what else to try other than sandwiching the mattes.

Richard

Mr Rid
05-26-2009, 11:35 PM
I've been around this tree more times than I can count with techs at NewTek, experimentation and the forum countless times. I was watching a compositing tutorial produced by Zoic Studios. Almost every topic was touched upon except glass. Their ship had surface reflections in the composite but no glass to deal with. Would have been nice to see this actually done. I'm not a real big fan of layer upon layer but compositing within LightWave is just not going to be an option unless it can be used to match grain and the like and at the same speed and ease of a compositing app. I wonder if NewTek would produce a good compositing tutorial. BTW, watched your demo reel and I really like the composites and animation. Really good work, indeed. If you have any recommendations that will lessen the number of layers I'm more than willing to listen. Seems you've been there and done that at least a few times. I've tried using the alpha export options (these are the options NewTek walked me through at the very beginning of all this) Not really sure what else to try other than sandwiching the mattes.

Richard

I'm glad to try to help. I just don't understand what it is you are trying to do, or why you can't just render the plane, with whatever reflections, in one pass. Or maybe render the window by itself over black matte versions of the rest of the geometry to isolate an effect.

Richard Hebert
05-27-2009, 12:05 AM
If a job required you to get a close-up shot of a jet cockpit while in flight you would probably have reflections of the sky (clouds or whatever) reflected on the outside of the glass while on the inside glass of the cockpit there would be reflections of the pilot and instruments depending on the brightness of these objects. Rendering out such a shot in LightWave against a black background using premultiply for compositing in AE will not yield me the reflections in the glass when brought into AE... period. The shot looks good in LightWave until you export the file for compositing adjustments to match video footage. I can't think of any other way to describe this scenario.

In order to get the external reflections and internal reflections they had to be rendered in separate passes using the matte object procedure that you mentioned. That's why the reflection layers are sandwiched... because LightWave's alpha export apparently doesn't support reflections.

Mr Rid
05-27-2009, 12:26 AM
If a job required you to get a close-up shot of a jet cockpit while in flight you would probably have reflections of the sky (clouds or whatever) reflected on the outside of the glass while on the inside glass of the cockpit there would be reflections of the pilot and instruments depending on the brightness of these objects. Rendering out such a shot in LightWave against a black background using premultiply for compositing in AE will not yield me the reflections in the glass when brought into AE... period. The shot looks good in LightWave until you export the file for compositing adjustments to match video footage. I can't think of any other way to describe this scenario.

In order to get the external reflections and internal reflections they had to be rendered in separate passes using the matte object procedure that you mentioned. That's why reflection layers are sandwiched... because LightWaves alpha export doesn't support reflections apparently.

OK, so I think you are already basically doing it, with 2 isolated reflection passes. You have to render an isolated pass of only the interior glass with reflection, with all other surrounding opaque geometry matte black and the exterior glass unseen (100% transparent). And another pass of only the exterior glass reflection surrounded by the matte black canopy. Probably use 100% reflection, no transparency, no diffuse or spec, and a 255 white alpha, then merge/dissolve/blend in the desired amount of reflection in comp. I save out special versions of the geometry with appropriate surface settings to go with each special pass scene.

Note: diffuse and reflection values combined should remain less than 100% total.

Richard Hebert
05-27-2009, 12:33 AM
That pretty much sums it up. You know, it wasn't this difficult compositing exported imagery (reflections and all) from Poser 7. I would use it instead if it looked halfway realistic when rendered. It could handle this in one pass with no problem if you don't mind the rest of the render looking less than real. Sigh....

Richard

joelaff
05-27-2009, 08:13 AM
Are you putting your reflections in with Add or Screen mode? (Add is actually correct, but Screen often looks better.) With these modes the elements go on top of the existing data without needing an alpha. Of course you may still need to knock parts out here and there.
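The two modes come down to simple per-channel formulas; a minimal sketch with arbitrary example values:

```python
# Per-channel Add vs Screen for layering a reflection element.
import numpy as np

bg   = np.array([0.50, 0.40, 0.30])   # background / beauty pixel
refl = np.array([0.30, 0.30, 0.35])   # reflection element pixel

add    = np.clip(bg + refl, 0.0, 1.0)      # physically correct for reflections, can clip
screen = 1.0 - (1.0 - bg) * (1.0 - refl)   # softer roll-off, never exceeds 1.0

print(add, screen)
```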

When I do reflections, like in this spot I did for Interac (http://laffey.tv/Interac.mov), I usually make a matte pass for the parts I see through the windows, then one for the parts I do not see through. This lets me control each one independently. Another easy method (what I used in that spot, since it was FPrime rendered) is to render the object with reflections (often stronger than you want), and render it without. Then use the mattes and opacity to combine the two in post. That let me independently control the reflections in the comp. Even the dog reflection is through LW (a dog image on a plane).

Richard Hebert
05-27-2009, 11:27 AM
Hi Joe,

Man, that looks like fun! Great work and a really cool spot. I'm not a critic so I've got nothing to critique about it. I just like to enjoy others' work and let it inspire me to punch through software issues. Thanks to you and Mr Rid both for sticking with me through this problem area. I'm employing the concepts submitted by both of you to get the results that I'm after. I'm a little ways off from your commercial spot (and your demo reel, Mr. Rid) I'm afraid, but... we'll see how far a 1.5 GHz single-core Mac Mini with 1 GB of RAM can actually go.

Richard Hebert
05-27-2009, 11:36 AM
Hey Joe, when compositing the vehicle behind live video elements (i.e. where the model was blocked by the handbag), did it have to be cut out manually with rotoscoping?

joelaff
05-27-2009, 01:18 PM
Hey Joe, when compositing the vehicle behind live video elements (ie. the model was blocked by the handbag) did it have to be cut out manually with rotoscoping?

Yes, roto is par for the course in the VFX industry. You either need to get good at it, or get a good freelancer. The handbag shot was tricky because of the shallow depth of field. I had to track the focus of the actual camera, including not having it razor sharp all the time, just like in the real world. (If you were not aware, in movies and commercials the camera is focused manually by the first assistant camera person, who uses skill and a little luck to get the focus right when things move.)

The water is RealFlow, rendered in LW (FPrime), BTW. I recreated at least one of the chair legs, plus the table base, in 3D to get proper reflections on them as well.

joelaff
05-27-2009, 01:20 PM
Hi Joe,

Man that looks like fun! Great work and really cool spot. I'm not a critic so I've got nothing to critique about it. I just like to enjoy others' work and let it inspire me to punch through software issues. Thanks to you and Mr Rid both for sticking with me through this problem area. I'm employing the concepts submitted by both of you to get the results that I'm after. I'm a little ways off from your commercial spot (and your demo reel Mr. Rid) I'm afraid but... we'll see how far a 1.5 GZ Single Core Mac Mini with 1 GB of RAM can actually go.

This is starting to look pretty good. The reflection is nice. I would take a look at reducing the frontal (or ambient) fill a little bit, and apply an S-shaped curve to the render, especially darkening the shadows. The foreground feels too flat vs. the sky.

Richard Hebert
05-27-2009, 04:51 PM
The ambient lighting isn't coming from the bkgnd photo; it's actually coming from a backdrop that was already there, and I just ran a quick single-frame test composite with it as it was. I'd like to purchase a few 'real' HDRI backgrounds to use for the radiosity pass and a full 360 backdrop (although it probably isn't necessary) just to see what can be done with it. Having the proper motivated lighting from the scene would go a little ways toward the shading looking more appropriate, and then I can adjust levels or curves as you mentioned. It was just a convenient render to test the matte procedure a little better.

As far as Roto goes, I'm using AE 5.5 Standard version and the masking doesn't allow for 'per vertex feathering' which would make my roto work easier and more believable. Using and animating multiple masks on the same object is something that I'd like to avoid as much as possible (unless I'm actually getting paid to do it). But, it's interesting to hear that the job still has to be done by hand and not by software. Thanks for the crits, btw. The model is free from the internet and I'm just adding a Poser pilot with some props built in LW. It's for the youth ministry at my church. We're shooting video on a Canon GL-1 so any 3D work doesn't have to be rendered at high resolutions. It's sort of a 'Space Camp' kind of theme and the kids fly around to different mission fields using these vehicles in the promo. Pretty fun stuff to learn compositing with.

joelaff
05-27-2009, 05:16 PM
I see.. was still WIP. No problem...

There should be a bunch of free HDRIs online. I posted some (semi lame but free) ones made with Vue here: http://fusion.laffeycomputer.com/hdri/

You can shoot your own HDRIs as well. I shoot them for all of my comp work.

More important than the feathering to me are B-splines. Using Beziers for roto work sucks... Of course some guys swear by Beziers... Not sure if the latest AE even has B-splines... You can always simulate the feathering by blurring parts of the mask through another mask.

There is software that can help with roto, like Motor and Mocha from Imagineer Systems. But they are not completely magic for roto. Also you can often pull some kind of key (chroma, luma, etc.) as a starting point.

For hardcore roto I would suggest Silhouette http://www.silhouettefx.com/silhouette/ , but Fusion is completely up to the challenge as well.

Richard Hebert
05-27-2009, 06:58 PM
Fusion is a little out of my range but Silhouette looks doable in the near future. Thanks for the tip. What software does your company use for tracking background plates?

Richard

joelaff
05-27-2009, 10:17 PM
PFTrack and SynthEyes. SynthEyes is only around $300-400 or so, and is fully capable.

Richard Hebert
05-27-2009, 11:23 PM
That's interesting, SynthEyes was a package that I was looking at a few years ago, but I didn't have a 3D package that would use it. Good to hear that industry professionals use something under $10,000 for a change! One last question and I'll end this thread (and start a whole new one): what software is used to create the HDRIs so that they'll curve around a sphere properly? I want to make some panoramas in Photoshop and convert them to properly map a sphere. Thanks for the help and info.

Richard

Mr Rid
05-27-2009, 11:49 PM
Zoinks! I just noticed you're hailing from my home town of Irving. Someone else here was just saying they knew Irving.

toby
05-28-2009, 02:39 AM
I find it is very rare that I ever need to separate a bunch of channel passes. Usually it is a waste of time. A compositor I work with said that even on Transformers he never once needed all the channels he was always given.
Rendering and comping separate passes is a completely valid method of doing CG; there are studios with Oscars that rely heavily on it. The Curious Case of Benjamin Button, for example. It really depends on the studio and the project.

ILM (Transformers) has an outrageous rendering pipeline, with perfectly calibrated texture painting, shaders and lights, that they've been refining for over a decade, so the comping need is greatly reduced. Only a few studios can boast this. It could also be that the comper is given a huge number of passes, so it's not likely he'll need all of them, but they're there if he does.



If it is lit and textured right in 3D, it isnt going to improve with 2D tweaking of channels.
But you wouldn't have as much flexibility. If you have a separate spec pass, for example, you can duplicate it and blur it for a dual-layer effect. Many passes can be used for something other than what they were intended for. Shots also need to match from artist to artist, which could be very difficult without comping.

But most importantly, what's "right" is subjective, and there's more than just right/wrong and what looks good - *clients*. If they want a "stealth" ship that's clearly visible to the audience, at night... (gotta love Hollywood) the 3D artist would have to sit there with the client doing render after render to get the level of spec and diffuse he wants, for every shot.

There's also the re-rendering you save, as Cageman mentioned. If your studio has more space than processor power then separate passes are definitely the way to go. The comper can use the beauty pass or do pre-comps if he has a problem with it.

There's certainly nothing wrong with your method in some situations, but it does require that all the lighters be more talented (more expensive and less common) and/or more render time. So it's not the best way for all studios.

Richard Hebert
05-28-2009, 07:29 AM
Mr. Rid hails from Irving Tx.?

joelaff
05-28-2009, 08:13 AM
When I am comping other people's CG I like all the layers I can get, because I am a control freak and like things "just so." When doing my own I typically end up making most of the changes in CG, with minor tweaks in the comp.

joelaff
05-28-2009, 08:19 AM
That's interesting, Syntheyes was a package that I was looking at a few years ago but didn't have a 3D package that would use it. Good to hear that industry professionals use something under $10,000 for a change! One last question and I'll end this thread (and start a whole new one), what software is used to create the hdri's so that they'll curve around a sphere properly. I want to make some panoramas in Photoshop and convert them to properly map a sphere. Thanks for the help and info.

Richard



You can start with HDRShop for the HDRIs. It is free for non-commercial use, or can be licensed commercially. There are a few other programs out there as well. Have a look at the HDRI Handbook. If you are new to HDRI it has a lot of good info. Even for those of us who have been using HDRI for years it still has some good info. There are also online tutorials for making HDRIs from chrome balls on the HDRShop website.

One note: make your HDRIs into lat/long maps and wrap them around a sphere enclosing your scene. This is much easier and gives you far more control than making a lightprobe image and using ImageWorld (or whatever the background plugin is called). Make your sphere luminous instead of diffuse and point the normals inward to light the scene. This way you can see the orientation of the texture in the viewports, and you can easily turn on unseen-by-camera. Also, unless you need reflections, make your HDRI map low-res (like 512 pixels wide or less). This will cut down on rendering noise.
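Under the hood, the lat/long lookup is just a direction-to-angles mapping; a small sketch for reference (the axis conventions here are assumptions and differ between apps):

```python
# Sketch of the lat/long (equirectangular) lookup behind wrapping such a map
# around a sphere: a world-space direction maps to a (u, v) position in the panorama.
import numpy as np

def latlong_uv(direction):
    """Map a direction vector to (u, v) in a lat/long panorama."""
    x, y, z = direction / np.linalg.norm(direction)
    u = 0.5 + np.arctan2(x, -z) / (2.0 * np.pi)   # longitude -> horizontal position
    v = 0.5 - np.arcsin(y) / np.pi                # latitude  -> vertical position
    return u, v

print(latlong_uv(np.array([0.0, 1.0, 0.0])))      # straight up -> top row of the map
```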

Richard Hebert
05-28-2009, 08:59 AM
Rats, not Mac compatible. Going online to look for a Mac alternative, but thanks for the heads up.

joelaff
05-28-2009, 09:24 AM
Ah. Sorry... Check out that HDRI handbook. It lists some Mac compatible programs. Or you could perhaps use Bootcamp or Parallels or something...

toby
05-28-2009, 12:06 PM
Here are a few huge HDRs:
http://www.openfootage.net/?cat=15

Mr Rid
05-28-2009, 02:15 PM
Mr. Rid hails from Irving Tx.?

My first 27 years of cellular existence. Lived near where Story and Northgate cross. Is there an animation house in Irving?

Irving was famous for being overshadowed by Dallas... like Texas Stadium, home of the Dallas Cowboys, is actually in Irving, and the University of Dallas is in Irving, and Irving sits squarely between Dallas and DFW airport, which ought to be IFW.

Mr Rid
05-31-2009, 03:06 PM
Yeah... but tools like PassPort and Janus makes those things a breeze... a couple of mouseclicks and off you go.
;)
I've read that PassPort does not work with the latest LW(?)


Rendering and comping separate passes is a completely valid method of doing cg, there are studios with Oscars that rely heavily on it. Curious Case of Benjamin Button for example. It really depends on the studio and the project.

I avoid working at large houses, but I have friends at most of the big ones as well as much smaller places, and I have been doing compositing for almost as many years as I have been at 3D, which gives me a unique perspective over those who mostly do just one or the other. It really isn't about how much money or prestige is involved. I find that it largely depends on the quality and type of CG coming out of 3D and the personal preference of the compositor. My wife was a compositor on Golden Compass, which took an Oscar. She noticed the same thing I do: certain artists would consistently supply better renders that required no, or maybe 1 or 2, special surface passes to make work (usually for reflection, depth, subsurface, or key light), while other artists output renders that consistently needed more help in 2D and special passes to make look decent. Sometimes there is an inherent technical problem that requires passes, like some fur renders on Compass. She now works at Sony where, again, use of multi-passes is a preference and not mandatory.

Also, in large pipelines the line between lighting and compositing has been blurring in proprietary techniques. But good lighting should not necessarily require a more expensive artist. Management should be training the artists on specific lighting approaches. Basic lighting and texturing skills are easy to pick up, but too many artists just don't ever seem to take the time to figure it out.



But most importantly, what's "right" is subjective, and there's more than just right/wrong and what looks good - *clients*. If they want a "stealth" ship that's clearly visible to the audience, at night... ( gotta love hollywood ) the 3d artist would have to sit there with him doing render after render to get the level of spec and diffuse he wants, for every shot.

There's also the re-rendering you save as Cageman mentioned. If your studio has more space than processor power then separate passes is definitley the way to go. The comper can use the beauty pass or do pre-comps if he has a problem with it..

Occasionally you have a client that does not know what he wants, is a pixel-fudger, or there are 15 chefs in the kitchen, requiring a lot of re-renders regardless. So adjusting spec or whatever in the process does not require its own re-render, and it is always better to correct near the source when practical and avoid bogging down every version of the comp. I find that better compositors prefer an efficient comp and do not want any more work or elements to keep up with and correct with each iteration than is absolutely necessary. And I find that if the client and the artist both have experienced eyes then they tend to be in sync on the result even if they never communicate directly, and fewer changes are necessary. I am most proud of elements that are approved after one viewing by a client, with no further iterations or breaking out of passes needed. I hope to soon show some examples from recent projects where I rendered photoreal elements in one RGB pass. There is a 'right' way to do things that experienced artists and clients will tend to agree on.



There's certainly nothing wrong with your method in some situations, but it does require that all the lighters be more talented ( more expensive and less frequent ) and/or more render time. So it's not the best way for all studios.

Taking the least amount of time and expense to satisfy the client is usually the goal. Not adding complexity/time/money to the pipeline that is not necessary. 3D should be in solid communication with 2D to supply them exactly what they need per task. But always rendering every imaginable pass is a waste. It costs less time and money to discriminate which passes are really necessary and ensure 3D is lighting and texturing in the most practical way.

Cageman
05-31-2009, 05:29 PM
Ive read that Passport does not work with the latest LW(?)

Hmm... well... I haven't been using PassPort lately since Janus is what I prefer. A lot of development is going into Janus and there are new builds coming out pretty often.

Having read through all your posts I can say that I do agree with what you are saying about good vs bad renders and how those relate to a compositing situation. That isn't the point though...

Your reasoning is very biased towards LightWave's lack of a multilayer/multipass render pipeline. I honestly wouldn't want to do multilayer/multipass rendering if it wasn't for PassPort or Janus, especially Janus since it is also tied in with exrTrader. When you get the hang of how Janus works, it's dead simple and fast, and soon you start to wonder what the hell you've been doing all these years without it. It's currently more versatile and easier to understand and use than Maya's multilayer/passes setup. Oh... and above all... it actually works... probably due to the fact that LW's renderer is tied in with the app...

The only way I render out buffers (spec, diffuse, reflection, etc.) is through exrTrader, where I've set up a bunch of presets. Each layer that I create within Janus is then rendered out and saved as a multichannel EXR. If I don't want any buffers for a pass, I simply use a preset where exrTrader only outputs the beauty. It's a couple of mouse clicks... The key thing, though, is that everything is within a single scene file, and the only things I save out are scene files that go to the farm for rendering. Very simple, tidy and, above all, timesaving!
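If it helps, here is a small sketch of inspecting the buffers inside a multichannel EXR like that in a compositing or pipeline script; it uses the OpenEXR Python bindings, and the file name is hypothetical:

```python
# List the named channels (beauty RGBA, spec, diffuse, reflection, ...) stored
# in a multichannel EXR.
import OpenEXR

exr = OpenEXR.InputFile("shot_layer_0001.exr")   # hypothetical file name
channels = exr.header()["channels"]              # dict of channel name -> description

for name in sorted(channels):
    print(name)
```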

:)

Mr Rid
05-31-2009, 06:26 PM
Your reasoning is very biased towards LightWaves lack of multilayer/multipass renderpipeline. I honestly wouldn't want to do multilayer/multipass rendering if it wasn't for PassPort or Janus, especially Janus since it is also tied in with exrTrader.

:)

We actually use a proprietary interface that works with openEXR when saving multi-passes although I have yet to need it. But what I am talking about is part of a workflow that involves a number of tricks, that may have something to do with why some supervisors have commented that I seem to do the work of 2 or 3 artists. When separating elements, I may optimize each with different lighting, render and camera settings or resolutions that would not render out of one scene anyway.

Cageman
05-31-2009, 07:17 PM
We actually use a proprietary interface that works with openEXR when saving multi-passes although I have yet to need it. But what I am talking about is part of a workflow that involves a number of tricks, that may have something to do with why some supervisors have commented that I seem to do the work of 2 or 3 artists.

Well... that is probably because those supervisors haven't had the chance to meet an all-round 3D artist yet? :)



When separating elements, I may optimize each with different lighting, render and camera settings or resolutions that would not render out of one scene anyway.

Exactly the kind of stuff that can be done within Janus without saving out new objects or scene files (except when "baking out" scene files for rendering).

Oh well... I'm talking a lot about Janus... but it is so good and it is being worked on so much.

Oh well... :)

toby
05-31-2009, 08:23 PM
I avoid working at large houses, but I have friends at most of the big ones as well as much smaller places, and I have been doing compositing for almost as many years as I have been at 3D, which gives me a unique perspective over those who mostly do just one or the other.
And enough experience to light well, comp well, and have a good idea of what clients want. Try to find a dozen people like that who're available at the same time and also willing to work for 30 bucks an hour. Just look at the forums here: if you exclude arch-viz lighters, there's hardly anyone who focuses on lighting.



It really isnt about how much money or prestige is involved.
Not prestige but definitely money.



I find that it largely depends on the quality and type of CG coming out of 3D and the personal preference of the compositor.
Absolutely. But even if you could guarantee the quality of the 3d, separate passes would be very practical for other reasons.


My wife was a compositor on Golden Compass that took an Oscar. She noticed the same thing I do, that certain artists would consistently supply better renders that may have required none or maybe 1 or 2 special surface passes to make work (usually about reflection, depth, subsurface, or key light), while other artists output renders that consistently needed more help in 2D and special passes to make look decent. Sometimes there is an inherent technical problem that requires passes like some fur renders on Compass. She now works at Sony where again, use of multi-passes are a preference and not mandatory.

Also in large pipelines, the line between lighting and compositing has been blurring in proprietary techniques.

Exactly, those are both studios where the 3D lighter does most of the CG compositing, so there's no one to agree with on passes except yourself. Certainly no need to make extra passes when you're doing your own comp. But I've only heard of this type of setup at major studios.



But good lighting should not necessarily require a more expensive artist. Management should be training the artists on specific lighting approaches.

And R&H does train its artists, but there are still artists there whose work needs fixing with extra passes. Smaller studios can't afford more training, which is no guarantee that extra passes won't be needed anyway.


Basic lighting and texturing skills are easy to pick up but too many artists just don't ever seem to take the time to figure it out.

Or even know that they should, or would even agree that they should. Many of them have far more than basic lighting skills, but have very different styles than other artists doing shots in the same sequence, which must be brought to a similar look.


Occasionally you have a client that does not know what he wants

Occasionally? You give them too much credit!! :D


is a pixel-fudger, or there are 15 chefs in the kitchen, requiring a lot of re-renders regardless. So adjusting spec or whatever in the process does not require its own re-render, and it is always better to correct near the source when practical and avoid bogging down every version of the comp. I find that better compositors prefer an efficient comp and do not want any more work or elements to keep up with and correct with each iteration than is absolutely necessary.

Better compositors also want flexibility, and are more than willing to adjust things in comp when they make no visual difference, when it saves hours of render time.


And I find that if the client and the artist both have experienced eyes then they tend to be in sync on the result even if they never communicate directly, and fewer changes are necessary.

Yes - "if". And you're forgetting the director, VFX and CG supes in between the client and artist.


I am most proud of elements that are approved after one viewing by a client and no further iterations or breaking out of passes are needed.

In studios that use separate passes for everything there is no 'breaking out' of passes. You hit render and you get the passes; it's even less work than deciding what you want separate.


I hope to soon show some examples from recent projects where I rendered photoreal elements in one RGB pass. There is a 'right' way to do things that experienced artists and clients will tend to agree on.

I wasn't saying there is no right/wrong, I was saying that there are more hurdles than just that. You can make things completely photoreal and matching, yet they still don't look good enough or convey what the filmmakers want. If photoreal were all that was necessary, things would be a lot easier and require less comp tweaking.