
View Full Version : Do unseen lights and objects affect rendering times?



c0deb0y
04-29-2005, 09:36 AM
I have a fairly big scene of a hotel with different rooms and many lights. I need to make a photoreal still of one of the rooms, but the rendering times on my 3GHz Pentium are ridiculous. It's been 24 hours and it is still on pass 6 of 7 of Mitchell antialiasing.

Here's the setup:
Radiosity on, 4x12, bounce, Interpolated with Radiosity cached
Antialiasing: 7-pass Mitchell

I do have many objects and lights that are not currently in the camera's view or even in the same room. Would turning these lights off help? Does LW calculate things it cannot see in the camera?

One day I hope to get Fprime and that it will help, but for now it's not in the budget.

Thanks

otacon
04-29-2005, 09:53 AM
Take everything that's not in the camera view and turn off Cast Shadows, Receive Shadows, and Self Shadow. Or you could turn Object Dissolve to 100%; that actually might be better. LightWave will compute lighting for anything in the scene, whether it's in the camera view or not.

dpartridge
04-29-2005, 09:28 PM
I wish LW had a way of disabling an object so it is not calculated when rendering. If you have a large scene and want to save time, you have to remove objects from the scene to speed up rendering. It would also be great if they optimised the lighting, so that if a light does not affect a scene, it too is not calculated in the rendering.

Silkrooster
04-29-2005, 10:07 PM
Open up the scene editor, select all the objects you want disabled, and uncheck the box under the A. That activates and deactivates objects. It is like setting the dissolve to 100%.
Silk

dpartridge
04-30-2005, 12:42 AM
LW still calculates the unchecked objects after pressing F9. Watch the 'render status' info in the bottom left corner.

c0deb0y
05-02-2005, 06:03 PM
Thanks all.
I ended up hiding everything that wasn't in the camera view in Modeler and setting all lights not in the camera view to 0%, etc. It still took well over 10 hours for one 640x480 frame. Not exactly going to work for animation, but it did the job for the one still I needed. It would be nice if the renderer were smart enough to know when something wasn't affecting the current camera view, but I guess not.
Thanks again for the suggestions.

Carm3D
05-02-2005, 09:03 PM
An object that is "Unseen by Camera" will still cast shadows and reflections.

bjornkn
05-03-2005, 05:46 AM
What is a bit surprising is that to speed up rendering times you should also set your nulls not to cast or receive shadows! I read it in the 1001 Tips book, but IMO it should really be considered a design bug if LW can't handle that automatically...

JML
05-03-2005, 07:25 AM
To turn off an object, uncheck the checkbox in the scene editor.
This will make the object not render at all (no shadow, no reflection, etc.).
(That works on the PC, but on the Mac it seems to still render the shadow of the hidden object, so for the Mac you have to dissolve the object instead; a co-worker complained about that.)

Anyway, even if an object is checked off, LW will still optimize the hidden object before it renders the scene.
On large objects this can be a waste of time.

A ray-traced light checked off in the scene editor won't affect your render.

A shadow-mapped light checked off will affect your render. (If your shadow maps are big and you have a lot of them, you will see that it takes longer for LW to start the render, even though the lights are turned off.)

Mebek
05-03-2005, 11:22 PM
What I would do is save the scene under a new filename, delete everything you don't need, and render that. There's no need to optimize or calculate what isn't there. That way you have two scenes: one with everything, and one with just the one room.

vlucas
05-04-2005, 01:42 AM
Hi all

It's an old trick to put a fully transparent polygon in front of the camera. There is a complete tutorial on this somewhere.

Lucas

c0deb0y
05-04-2005, 09:23 AM
Lucas,

What happens with the fully transparent polygon trick? Does it work in LW?

vlucas
05-04-2005, 09:27 AM
I think, yes.

vlucas
05-04-2005, 09:32 AM
Parent the polygon to the camera, and leave every surface setting as it is except transparency; set that to 100%. LW will not calculate the polygons that the camera doesn't see. Can any LW veterans confirm this?

Karmacop
05-04-2005, 10:06 AM
Yes, this works, BUT it essentially takes your ray recursion limit down by one. So if you have it set to 16, it's practically only 15.
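Karmacop's point can be illustrated with a toy sketch (hypothetical model, nothing to do with LightWave's internals): every surface a ray passes through, including a fully transparent "clip" polygon parented to the camera, consumes one level of the ray recursion limit.

```python
def trace(surfaces, limit):
    """Return the surfaces a single ray can shade, nearest first,
    before the recursion limit is exhausted."""
    if limit == 0 or not surfaces:
        return []
    # Shading the nearest surface costs one recursion level,
    # even when that surface is 100% transparent.
    return [surfaces[0]] + trace(surfaces[1:], limit - 1)

scene = ["glass", "water", "mirror"]
print(trace(scene, 3))                  # all three surfaces resolve
print(trace(["clip_poly"] + scene, 3))  # the mirror no longer resolves
```

With a limit of 3, the bare scene resolves all three surfaces; with the clip polygon in front, only two of the real surfaces fit inside the limit.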

Someone could try to compare these if they have spare time.

c0deb0y
05-04-2005, 10:39 AM
Wow, that would be cool! I only had the ray recursion set at 4 anyway and it still looked great.

Any idea why this works? Do you know where I can find that tutorial Lucas was talking about?

toby
05-04-2005, 01:16 PM
Currently there are no 3D renderers with Artificial Intelligence. Rendering Radiosity without optimizing the scene, or expecting LW to know how you want the scene optimized, is crazy.

If you don't want an object to be calculated, uncheck it in the scene editor, or check "unseen by camera" and "unseen by rays". Also, if it has geometry that has to be calculated, like subpatches or deformation, set that to 0 or turn it off. Or just get rid of it.

Additionally, Ray Trace Optimization should be turned off unless you have a lot of objects and a lot of raytracing (do tests).

The transparent polygon trick does work sometimes; you have to do test renders to see whether it saves you time or not.

c0deb0y
05-04-2005, 01:59 PM
[QUOTE=toby]Currently there are no 3D renderers with Artificial Intelligence. Rendering Radiosity without optimizing the scene, or expecting LW to know how you want the scene optimized is crazy.[/QUOTE]


Why would it be crazy to expect a software package to know whether polygons are in the field of view of the camera or not? Any program that has a stage, such as Flash or Director, knows whether an object is within the stage; if it is, it renders that object, and if it isn't, the object is not seen.

In the same respect, LW has a Limited Region renderer which supposedly renders only the objects seen within that box. (Actually this doesn't work: I rendered the same scene twice, once without Limited Region turned on and once with it turned on but left at the edges, and the render times were nearly identical, with and without Radiosity.) I still need to try the transparent polygon trick.

3D programs render polygons. So why is it difficult to accept that a 3D camera should only render the polygons in its field of view, like pixels in Photoshop? In camera view you only see the polygons that are facing the camera, so it makes sense that this could also be applied to the rendering process.
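The field-of-view test being described here can be sketched in a few lines (a hypothetical helper, not anything in LightWave or its SDK). Note that it only answers "is this point inside the camera's viewing cone?"; it says nothing about whether an off-screen object still matters for shadows or reflections, which is where raytracing complicates simple culling.

```python
import math

def in_view(point, cam_pos, cam_dir, fov_deg):
    """True if `point` lies within fov_deg/2 degrees of the camera's
    (unit-length) view axis. Near/far planes and aspect ratio, which a
    real frustum test also checks, are ignored for simplicity."""
    v = [p - c for p, c in zip(point, cam_pos)]
    length = math.sqrt(sum(x * x for x in v))
    if length == 0.0:
        return True  # the camera's own position counts as visible
    cos_angle = sum(x * d for x, d in zip(v, cam_dir)) / length
    half_fov = math.radians(fov_deg / 2.0)
    return math.acos(max(-1.0, min(1.0, cos_angle))) <= half_fov

print(in_view((0, 0, 5), (0, 0, 0), (0, 0, 1), 60))  # straight ahead: True
print(in_view((5, 0, 0), (0, 0, 0), (0, 0, 1), 60))  # 90 degrees off-axis: False
```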

Karmacop
05-04-2005, 08:45 PM
[QUOTE=c0deb0y]3D programs render polygons. So why is it difficult to accept that a 3D camera only be able to render polygons in its field of view, like pixels in Photoshop? In camera view, you only see the polygons that are facing the camera, so it makes sense that this could also be applied to the rendering process.[/QUOTE]

It already does this; that isn't what toby said. Also, this can't be done with radiosity or raytracing (it can, but it's not great).

toby
05-04-2005, 11:01 PM
There are so many more things going on than in a 2D program. Not only does LW have to calculate objects off-screen for shadows and reflections, but it has to calculate those objects if they're subpatched or deformed, or the shadow/reflection will be worthless.

So imagine that you have it set to not render anything outside the camera view. You've made sure that no objects are needed for shadows or reflections, but you have hypervoxels in the scene. One of the hypervoxel particles travels off-camera, and suddenly the whole HV cloud that's based on it pops out of the render in one frame instead of continuing off the scene. I can just imagine how many complaints they'd get: "Why doesn't it render the whole cloud until it's gone? Idiots!" The programmers would have to think of and test every combination of everything LW can do to guarantee that things that don't affect the final output don't get rendered, and that it always guesses correctly.

Sure, the renderer could be written to: scan all your surfaces; see if there are any reflective surfaces; find out whether the reflection is raytraced or not; check whether each reflective surface is in the camera view; check which way its polygons are facing; figure out which objects can't be reflected by those polygons; make sure those objects don't cast a shadow into an area that would be reflected by the reflective surfaces; check all objects to see if there are hypervoxels that will extend into the camera view; find out if motion blur is on, because anything in the previous frame may affect this frame; and anticipate every combination of everything you can do in LW, then come up with a list of things to ignore, just so we don't have to remove an object that we don't want to render.

Pretty minor compared to all the other things that LW needs to stay competitive.

c0deb0y
05-05-2005, 09:44 AM
Toby, thanks for the explanation. It makes more sense now.

I'm still curious why the single-poly trick (might) work. Any idea what is happening there? Sorry, I just often have this need to know the "why" of things.

Please understand that in no way am I bagging on LW. I love the program. I just know that in every program there are workarounds for many problems, and being newer with LW than other graphic apps, I'm trying to learn what these "tricks" are.

BTW, I do have the "LightWave 3D 8: 1001 Tips & Tricks" book and everyone involved did a fantastic job! Thanks!!!!!!

automan25
05-05-2005, 10:46 AM
Here is the tutorial on this "bucket rendering" technique that everyone has been asking about.

http://www.funnyfarm.tv/thelab/rendertrick.htm

toby
05-05-2005, 01:28 PM
LW renders the polygons closest to the camera first and the furthest from the camera last, if they're still visible. That makes sense, but what can happen is that if you have a ground plane that is a single huge polygon extending from the camera to the horizon, LW will still render the whole thing before anything else, so the renderer has to calculate raytraced shadows for the entire ground poly cast by pretty much everything else in the scene. Putting a polygon right in front of the camera forces it to render that polygon first, and with raytracing, that polygon looks like the scene behind it. I'm guessing that each poly is rendered one pixel at a time, so it works like a scanline render.

If you've cut your ground plane down to fit in the foreground, or simply subdivided it, this won't save you much time.

That's my best guess anyway :D
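The front-to-back ordering toby describes can be sketched as a simple sort (hypothetical data and helper, not LightWave's actual renderer): if polygons are ordered by their nearest vertex, a huge ground polygon that starts right at the camera sorts ahead of the small objects it stretches under, and a clip polygon parented to the camera sorts ahead of everything.

```python
def render_order(polygons, cam_pos):
    """Return polygon names sorted by the squared distance of each
    polygon's nearest vertex to the camera (front-to-back)."""
    def nearest_sq(poly):
        return min(sum((v - c) ** 2 for v, c in zip(vert, cam_pos))
                   for vert in poly["verts"])
    return [p["name"] for p in sorted(polygons, key=nearest_sq)]

# Toy scene: a huge ground polygon whose nearest corner is almost at the
# camera, a small cube far away, and a clip polygon just in front of the lens.
scene = [
    {"name": "distant_cube", "verts": [(0, 1, 20), (1, 1, 20), (1, 2, 21)]},
    {"name": "ground_plane", "verts": [(0, -1, 0.5), (-500, -1, 1000), (500, -1, 1000)]},
    {"name": "clip_poly",    "verts": [(-0.5, -0.5, 0.1), (0.5, -0.5, 0.1), (0, 0.5, 0.1)]},
]
print(render_order(scene, (0, 0, 0)))
# clip_poly first, then the ground plane, then the cube
```

The ground plane's nearest vertex puts the entire polygon near the front of the queue even though most of it is far away, which matches toby's observation about why a single huge ground poly is expensive; the clip polygon simply wins the sort outright.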