Multi-Camera Rendering



RCFX
02-21-2010, 09:42 PM
OK, I know this may be a pipe dream, but I would love it if Lightwave could have multiple camera savers in one scene file, so that two (or more) cameras could all render and save frames in one scene. Stereo 3D...for better or worse...is huge at the moment, and Lightwave's internal tools generally don't do the job on their own. I've been told renderers like RenderMan have this type of arrangement, so you get two image sequences out of one scene with less than a 20% time penalty, since much of the internal math is redundant (though I have no direct experience trying it). I'm on a stereo job now, and having to render it all twice is a massive pain. Having a separate output panel for each camera would be a godsend. I realize Lightwave's render engine is a whole other animal and maybe the time savings would be far less dramatic, but even if it's just from a workflow standpoint, it would be a huge asset.

Captain Obvious
02-21-2010, 10:03 PM
It's a breeze to do with tools like Butterfly Net Render. Submit your scene to the queue twice, and change the render camera and file output paths for one of them. Job done. Well, it still needs rendering.

There is no way of generating proper stereo 3D without actually rendering twice. It is not possible to make Lightwave's render engine any more optimized for it than it already is.

RCFX
02-21-2010, 10:13 PM
Well, I understand that you need two renders, Captain.....that's what I do now using Tequila Scream. But from what I've been told, much of the rendering math is camera independent. Therefore, burning two sequences from two cameras in one file should, in theory, be much faster than two independent renders, yet give you the exact same result. It seems that many other commercial renderers are already offering this. Even if the time savings were minimal, just from an organizational standpoint it would be a huge plus...at least to me.

Captain Obvious
02-22-2010, 03:58 AM
But from what I've been told, much of the rendering math is camera independent.
Pretty much the only thing you can reuse between different camera angles is interpolated radiosity, and you can do that right now anyway.

RCFX
02-22-2010, 08:57 AM
Well, the way Lightwave runs now, that's true. That's exactly why I was asking if this change is possible. Since Lightwave's scene files only render from one camera at a time, I think it's fair to say that not many of us outside NewTek's programming team know what info COULD be shared if two cameras could render in one scene file. If Max and Maya scenes rendered in either RenderMan or mental ray can already do this with a huge time savings, I'm guessing it might be possible in Lightwave's engine too. I'm no software expert and I have no idea what's involved code-wise. Maybe something inside Lightwave's code structure does in fact make this impossible. If that's the case, so be it. But ya never know until you ask :)

Sensei
02-22-2010, 03:14 PM
Multiple cameras could reuse point coordinates after deformation, bone transformations, object-to-world conversions, and polygon and point normal vectors.
In most scenes you aren't even aware these stages exist (they are executed when you press the left/right arrow keys or click the frame slider to change the active frame).
The KD-tree could/should also be reused by multiple cameras.
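To make the split concrete, here is a minimal Python sketch (not LW code; the stage names are simplified stand-ins for the internal pipeline): the camera-independent stages run once per frame and are cached, while ray generation and shading run once per camera.

# Minimal sketch of why multi-camera rendering could share work.
# None of this is LightWave code; the stages are simplified stand-ins.

def evaluate_scene(frame):
    """Camera-independent work: deformation, bone transforms,
    object-to-world conversion, normal vectors, KD-tree construction.
    Runs once per frame, regardless of how many cameras render it."""
    geometry = {"frame": frame, "points": "...deformed points...",
                "normals": "...recomputed normals..."}
    kd_tree = f"KD-tree for frame {frame}"
    return geometry, kd_tree

def render_camera(camera, geometry, kd_tree):
    """Camera-dependent work: ray generation and pixel shading.
    This part cannot be shared and must run once per camera."""
    return f"image from {camera} using {kd_tree}"

for frame in range(1, 4):
    geometry, kd_tree = evaluate_scene(frame)  # paid once per frame
    for camera in ("LeftCam", "RightCam"):
        print(frame, render_camera(camera, geometry, kd_tree))  # paid per camera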

IMHO, rendering from multiple cameras at the same time is such an uncommon situation that it's not worth spending time on additional development.

RCFX
02-22-2010, 03:53 PM
Well, as much as I appreciate your input, Sensei, I'm afraid I'm gonna have to disagree with you as far as there being a real need for this. For some users such as myself who do both film and historical re-creation work, rendering the same animation from multiple angles has been quite common. Now, with the growing stereo 3D market, two cameras become an absolute must. ESPN and Discovery Channel (an occasional client of mine) are doing test 3D broadcasts this year and are rolling out full-3D networks in 2011. A number of my clients have called me asking about stereo 3D options. Perhaps the current 3D TV craze will fizzle, but even excluding that, there has been a huge jump in stereo 3D films and multimedia presentations. We were using Lightwave back in '05 to do animation for one of Cameron's IMAX documentaries, and this two-camera render option I'm asking for would have been a HUGE asset back then. Obviously most LW users aren't doing stereo work yet, but I can almost guarantee that the demand for it is only gonna go up over the next few years. I've been a loyal LW user since 1992 and love it dearly, but it kind of stung to find out that the mental ray and RenderMan crowd already had this as a standard feature. Perhaps NewTek will read this and agree with you that the demand doesn't justify the effort, but I'd rather not get it because it's not viable, as opposed to not getting it just because no one bothered to ask. :P

Sensei
02-22-2010, 04:53 PM
LW has had stereo rendering for years, but it's done with just one camera. The option is in the Camera Properties window.

I was not denying the need for stereo 3D rendering, just questioning whether developers should build special infrastructure beyond what we have now (picking a camera and rendering the sequence in one go, then the second camera). It would require moving the whole Output Files section from Render Globals to Camera Properties, some kind of graph editor for enabling multiple cameras and the frame ranges they affect, etc. And then just 0.1% of all LW customers would use it.

Captain Obvious
02-22-2010, 05:06 PM
There is nothing in mental ray or Pixar's RenderMan that actually accelerates stereoscopic rendering significantly, at least not things that would be applicable to Lightwave's render engine. As Sensei mentioned, most of the things that could be reused between the two cameras are things that don't take very long to calculate anyway. View-angle dependent subdivision (APS) can sometimes be quite slow, and that could potentially be shared between the two cameras, but even exceptionally slow APS only accounts for a rather small portion of the render time in all but the most extreme circumstances.

I can understand why you'd want a smoother workflow for stereoscopic rendering, but there is nothing you can do to the rendering itself that would actually be worth doing.

RCFX
02-22-2010, 05:10 PM
Indeed it does have a Stereoscopic option, but it's vastly limited and all but unusable for most 3D projects. You can't do convergence with it and can't save two separate sequences....which is how most clients want the finished project delivered. I've never once seen the built-in stereo tool used for real-world client production. But perhaps modifying that tool to allow for convergence and adding an export buffer for separate outputs would be an easier modification. I only suggested the multi-camera render option since that's how virtually every stereoscopic Lightwave job is currently done, and because multi-camera rendering would have uses outside of just the stereo 3D crowd. Now that I think about it, a secondary camera export plugin (like LW's current Buffer Export plugin) might not be a bad alternative. It would be less elegant and there would be a few more clicks involved, but clearly it would be less effort to create.

Sensei
02-22-2010, 05:36 PM
I have tried the Stereoscopic option, and it appends an L suffix for the left camera and an R suffix for the right camera to the files picked in Output Files. So it actually does render two sequences. Output buffers are a job for an image filter; I have no idea whether those append L & R too.

RCFX
02-22-2010, 05:54 PM
Well, I certainly didn't intend to turn this into a major debate. My request stems from several articles, as well as a 3D cinematography book, that all claim RenderMan generates point locations, texture info, lighting info, etc., and then produces two renders based on two cameras, taking only 10-15 percent longer than a standard single-cam render. I quote....

"In fact, programs such as renderman have this built in so that the second camera's render, if done at the same time as the primary camera, will not even take as long to render as the first camera. On average, this second render would add only a 10-15% overhead compared to rendering just one camera, according to Peter Moxom of Pixar."

I don't use RenderMan so I can't verify this, but I've read it from more than one source. If this info is incorrect, I apologize. As to whether or not it applies to Lightwave's engine, that's certainly outside my knowledge base. I figure the NewTek team will read this and take note, or simply laugh and hit delete. I just wanted to put the idea on the table and have them address it however they see fit. Whether it's a multi-camera export option, a secondary camera export plugin, or a more viable Stereoscopic tool, some added stereoscopic support would be a huge plus to guys like me.

Captain Obvious
02-23-2010, 03:08 AM
Well, I certainly didn't intend to turn this into a major debate. My request stems from several articles, as well as a 3D cinematography book, that all claim RenderMan generates point locations, texture info, lighting info, etc., and then produces two renders based on two cameras, taking only 10-15 percent longer than a standard single-cam render.
The only way that could ever happen is if actual pixel shading only accounts for a very small portion of the final render time -- which is almost NEVER the case in Lightwave. Now, if you're caching textures or shading, then sure, you can re-use the data. But the way Lightwave's render engine works, the only things you can reuse are geometry transformation and tessellation, and to a certain extent global illumination. Re-using the GI across several cameras could potentially speed up rendering significantly, but it's already possible to do that...
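To put rough numbers on that argument (a back-of-the-envelope Python sketch; the shared-work fractions are illustrative assumptions, not measurements of either renderer):

# How much a second camera costs when only the camera-independent
# fraction of the frame time can be shared. Fractions are assumptions.

def two_camera_cost(shared_fraction):
    """Render time for two cameras, relative to one camera,
    when 'shared_fraction' of the work is computed only once."""
    per_camera = 1.0 - shared_fraction  # ray generation, pixel shading, ...
    return shared_fraction + 2.0 * per_camera

for shared in (0.85, 0.50, 0.15):
    print(f"{shared:.0%} shared -> {two_camera_cost(shared):.2f}x one camera")

# 85% shared -> 1.15x (the quoted RenderMan-style figure)
# 15% shared -> 1.85x (closer to a shading-dominated engine)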

archijam
02-23-2010, 05:07 AM
I have been curious about this for a while. I would love to have a true render queue solution, where I can drag my cameras into a list, and render them out overnight.

I usually do this by keyframing one camera, but that assumes many things stay the same, i.e. pixel size, aspect ratio, render settings, etc. Also, motion blur can give weird mistakes if you are not careful.

Lightwolf
02-23-2010, 05:20 AM
Now that I think about it, a secondary camera export plugin (like LWs current Buffer Export plugin) might not be a bad alternative. It would be less elegant and there would be a few more clicks involved, but clearly it would be less effort to create.
Just as a side note and another excuse to plug exrTrader:
The current version allows you to append the currently rendered "eye" to a buffer, so you can export two sequences of buffers using LWs native stereo camera.
If you use a custom rig, you can include the name of the camera in the file name of the exported buffers using one of the variables/macros that exrTrader supports (in this case %camera%, which you can use anywhere within the file name; it will be replaced by the name of the current camera).
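For illustration, the substitution behaves along these lines (a Python sketch, not exrTrader's actual code; the template and camera names are made-up examples):

# Sketch of the %camera% macro expansion described above.
# The template and camera names are hypothetical examples.

template = "renders/shot010_%camera%_v01.exr"

for camera_name in ("CameraLeft", "CameraRight"):
    print(template.replace("%camera%", camera_name))

# renders/shot010_CameraLeft_v01.exr
# renders/shot010_CameraRight_v01.exr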

And yes, most people tend to use custom camera rigs in LW for the reasons you mentioned.

Cheers,
Mike

3Dfool
02-26-2010, 08:36 PM
Lightwave has had stereoscopic rendering since 5.6. I should know; I was the one who helped NewTek implement it.

The way it was designed, it moves the camera half the separation distance at render time in +/- X translation. If your camera is targeted to a null, you will get toed-in or convergent cameras. If you don't use a target, you get parallel cams.
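The geometry just described works out like this (a small Python sketch; the separation and target distance are arbitrary example values):

import math

# Each eye is offset half the separation along +/- X; targeting a null
# toes the eyes in toward the convergence point. Values are examples.

separation = 0.065       # interaxial distance in metres (roughly eye spacing)
target_distance = 3.0    # distance to the targeted null, if any

left_x = -separation / 2.0
right_x = +separation / 2.0

# With a target null, each eye rotates toward the convergence point:
toe_in = math.degrees(math.atan((separation / 2.0) / target_distance))

print(f"left eye at x={left_x:+.4f} m, right eye at x={right_x:+.4f} m")
print(f"toe-in per eye toward the null: {toe_in:.3f} degrees")
print("without a target: toe-in = 0, i.e. parallel cameras")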

Lightwave would benefit from a better stereo cam, one that calculates both views simultaneously. If you're using 32-bit Lightwave, you're in luck. The plug-in for this has existed for a few years.

http://colm.jp/plug/stereo.html

I've used this many times, and you save about 50% on your render time.

djlithium
03-07-2010, 02:12 AM
Yes it would; indeed, that's a major thing we've wanted for a long time. We need a proper camera-switching and camera-name-to-file plug-in for Lightwave that works over the network. That way you could do massive amounts of baking work using the power of your network, and the setup for it would just be making multiple cameras and getting each UV and mesh looking right in FPrime for surface height, then save and let it rip!

These are the things we are missing from 9.6.whatever that we need! Gamma space/linear workflow? OK, that's great, but it's not worth the time it took to arrive, and frankly there are more important things, like what I have suggested above.

djlithium
03-07-2010, 02:18 AM
Just as a side note and another excuse to plug exrTrader:
The current version allows you to append the currently rendered "eye" to a buffer, so you can export two sequences of buffers using LWs native stereo camera.
If you use a custom rig, you can include the name of the camera in the file name of the exported buffers using one of the variables/macros that exrTrader supports (in this case %camera%, which you can use anywhere within the file name; it will be replaced by the name of the current camera).

And yes, most people tend to use custom camera rigs in LW for the reasons you mentioned.

Cheers,
Mike
To further that, on the subject of reducing render time when rendering twice: much of LW's render time, in many cases, is sucked up by loading files and moving things around the scene. While the moving-around-the-scene operations are unavoidable, I believe that with a two-camera setup the loading can be. So what would be cool is if LW could pause at the end of the L camera's render, then switch to the R camera's render and continue without closing ScreamerNet and reloading everything for the other eye. With exrTrader stuffed between the camera render saves, of course.
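As a sketch of the loop order being asked for (plain Python with hypothetical stand-in functions, not the real ScreamerNet or exrTrader API), the scene loads once and the camera switch happens inside the per-frame loop:

# Load once, render both eyes per frame. load_scene, set_active_camera,
# render_frame and save_buffers are hypothetical stand-ins.

def load_scene(path):
    print(f"loading {path} (paid once, not once per eye)")
    return {"path": path}

def set_active_camera(scene, camera):
    scene["camera"] = camera

def render_frame(scene, frame):
    return f"{scene['camera']} frame {frame}"

def save_buffers(image, eye):
    print(f"saved {image} as eye '{eye}'")

scene = load_scene("shot010_stereo.lws")
for frame in range(1, 4):
    for camera, eye in (("CameraL", "L"), ("CameraR", "R")):
        set_active_camera(scene, camera)
        save_buffers(render_frame(scene, frame), eye)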

Then again, one argument against doing it this way is that if you render the L eye first, you can begin to set up the composites and flows and all that jazz while the right eye is coming off the stack. Once it's available, it's pretty straightforward in Fusion to duplicate the flow and get the stereo comp together.

Still, saving the time on the network and all the reloads is a solution that, if found, is worth money.

Lightwolf
03-07-2010, 02:32 AM
Gamma space/linear workflow? ok thats great, but its not worth the time it took to see it and frankly there are things more important like what I have suggested above.
I beg to differ. I think it's a prerequisite for anything remotely close to a professional production in the 21st century, and long overdue.

Here's a nice example to see how the math can screw up in a very simple image processing function - and the same applies to 3D renders as well:
http://www.4p8.com/eric.brasseur/gamma.html
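The core of that page's argument, worked through in a few lines (plain Python; the 2.2 exponent is the usual approximation of the sRGB curve): averaging two pixels in gamma-encoded space gives a visibly darker result than averaging in linear light.

# Why scaling (averaging) in gamma space goes wrong.
# Uses the common 2.2-gamma approximation of sRGB.

GAMMA = 2.2

def to_linear(v):   # gamma-encoded [0, 1] -> linear light
    return v ** GAMMA

def to_gamma(v):    # linear light -> gamma-encoded [0, 1]
    return v ** (1.0 / GAMMA)

black, white = 0.0, 1.0

# Naive: average the encoded values directly (what broken scalers do).
naive = (black + white) / 2.0   # 0.5 encoded is only ~22% linear light

# Correct: decode, average in linear light, re-encode (~0.73 encoded).
correct = to_gamma((to_linear(black) + to_linear(white)) / 2.0)

print(f"naive:   {naive:.3f} encoded -> {to_linear(naive):.3f} linear")
print(f"correct: {correct:.3f} encoded -> 0.500 linear")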

Cheers,
Mike

Captain Obvious
03-07-2010, 03:45 AM
I beg to differ. I think it's a prerequisite for anything remotely close to a professional production in the 21st century, and long overdue.
Especially considering that a linear workflow, once you've understood it, makes things easier! While Cityscape certainly does not employ a linear workflow, I at least gamma correct my renders, and it makes a HUGE difference in the quality of output and the time spent on lighting. What the Cityscape lighters used to do is futz around with ambient light, shadow-less lights, shadow color, etc. etc. etc. With gamma correction, it's possible to light an interior with just a physical sun/sky and radiosity. Saves us time.

Here's a nice example to see how the math can screw up in a very simple image processing function - and the same applies to 3D renders as well:
http://www.4p8.com/eric.brasseur/gamma.html
I tried to rescale that image in a few different applications, and funnily enough, Aperture was basically the only one that processed the image in linear space! I wouldn't have expected Apple to do a better job with stuff like that than Adobe. Edit: read further down the page. Haha, it seems Aperture 3 has gone back to the incorrect method! :D

To further that, on the subject of reducing render time when rendering twice: much of LW's render time, in many cases, is sucked up by loading files and moving things around the scene. While the moving-around-the-scene operations are unavoidable, I believe that with a two-camera setup the loading can be. So what would be cool is if LW could pause at the end of the L camera's render, then switch to the R camera's render and continue without closing ScreamerNet and reloading everything for the other eye. With exrTrader stuffed between the camera render saves, of course.
Lightwave already does that, at least with BNR. A scene that renders on LWSN/BNR will load up once and render frame one, then keep the scene loaded and render frame two, etc.

Cageman
03-07-2010, 04:02 AM
Lightwave already does that, at least with BNR. A scene that renders on LWSN/BNR will load up once and render frame one, then keep the scene loaded and render frame two, etc.

Yeah... it seems there is a lot of power to gain depending on which network render controller one is using. We use Muster at work, which does things differently. It is a packet-based solution, where each render node receives a pre-defined number of frames (a packet), renders those, empties its memory and then looks for more frames to render. This particular behavior makes some things you can do in LW not behave correctly. There have been cases where a procedural animation would be reset for each packet, while with native ScreamerNet things worked, because the scene file is not removed from memory until the job is completely finished. It has been a long time since I stumbled across this issue, though.

Multi-camera rendering would be nice to have, though... Just as djlithium states, it would become very, very powerful for texture baking en masse, especially when you have a renderfarm waiting for you. :D

Let's fog this as a feature request for LWHC.

Captain Obvious
03-07-2010, 04:52 AM
Multi-camera rendering would be nice to have, though... Just as djlithium states, it would become very, very powerful for texture baking en masse, especially when you have a renderfarm waiting for you. :D
I normally just set up a bunch of surface-baking cams and use the camera switcher plugin to switch the baking camera each frame. Send it off to the farm as an animation and you're good to go.
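The trick boils down to mapping each frame number to one baking camera, roughly like this (a Python sketch of the bookkeeping, not the switcher plugin's actual code; the camera names are example placeholders):

# Frame -> baking-camera bookkeeping behind the trick.
# Camera names are hypothetical placeholders.

baking_cameras = ["Bake_Body", "Bake_Head", "Bake_Props", "Bake_Env"]

def camera_for_frame(frame):
    """Frame 1 bakes with camera 0, frame 2 with camera 1, and so on,
    so rendering frames 1..N as an 'animation' yields one map per camera."""
    return baking_cameras[(frame - 1) % len(baking_cameras)]

for frame in range(1, len(baking_cameras) + 1):
    print(f"frame {frame:02d} -> {camera_for_frame(frame)}")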

Cageman
03-07-2010, 06:21 AM
I normally just set up a bunch of surface-baking cams and use the camera switcher plugin to switch the baking camera each frame. Send it off to the farm as an animation and you're good to go.

Sure... but what if you need to bake animated textures or lights over a series of frames for 20 objects?

:)

Captain Obvious
03-07-2010, 09:33 AM
Sure... but what if you need to bake animated textures or lights over a series of frames for 20 objects?

:)
That's a bit trickier :P

Lightwolf
03-07-2010, 09:39 AM
I tried to rescale that image in a few different applications, and funnily enough, Aperture was basically the only one that processed the image in linear space! I wouldn't have expected Apple to do a better job with stuff like that than Adobe. Edit: read further down the page. Haha, it seems Aperture 3 has gone back to the incorrect method! :D
Completely OT: I tried it in Fusion, and you get a grey image if you just scale the image as is. If you change the gamut, scale, and then change the gamut back again, you get the right result (if you use one of the scaling methods that interpolates, and you render in a mode that actually uses that method; non-HQ doesn't by default).
Which isn't surprising really.
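That gamut round trip corresponds to something like this (a sketch with NumPy and Pillow, using the 2.2-gamma approximation rather than the exact piecewise sRGB curve; the file names are placeholders):

# Decode to linear, scale, re-encode: the round trip described above.

import numpy as np
from PIL import Image

GAMMA = 2.2

img = np.asarray(Image.open("input.png").convert("RGB"),
                 dtype=np.float64) / 255.0

linear = img ** GAMMA                 # gamut change: encoded -> linear

# Scale in linear light; a naive 2x box downscale as the interpolation.
h, w, _ = linear.shape
small = (linear[0:h//2*2:2, 0:w//2*2:2] + linear[1:h//2*2:2, 0:w//2*2:2] +
         linear[0:h//2*2:2, 1:w//2*2:2] + linear[1:h//2*2:2, 1:w//2*2:2]) / 4.0

encoded = small ** (1.0 / GAMMA)      # gamut change back: linear -> encoded

Image.fromarray((encoded * 255.0 + 0.5).astype(np.uint8)).save("output.png")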

Cheers,
Mike

Captain Obvious
03-07-2010, 10:31 AM
Which isn't surprising really.
No, that's exactly what I would expect it to do, and it's also what Lightwave and Shake do. I'd be really pissed off if my compositing application tried to manage colors without me explicitly stating how to do it.

djlithium
03-09-2010, 08:40 AM
Especially considering that a linear workflow, once you've understood it, makes things easier! While Cityscape certainly does not employ a linear workflow, I at least gamma correct my renders, and it makes a HUGE difference in the quality of output and the time spent on lighting. What the Cityscape lighters used to do is futz around with ambient light, shadow-less lights, shadow color, etc. etc. etc. With gamma correction, it's possible to light an interior with just a physical sun/sky and radiosity. Saves us time.


Lightwave already does that, at least with BNR. A scene that renders on LWSN/BNR will load up once and render frame one, then keep the scene loaded and render frame two, etc.

Sure, linear workflow is neat and all that, but I don't think it trumps other things that should have been addressed in the last cycle. It's very important, yes, but it should have been paired with several other improvements that have been on the list for almost a decade.

As for BNR, yeah, it does that OK, but it's got to be the slowest controller on the planet. For the amount of time BNR sucks up starting up and shutting down nodes and loading things across, I could have rendered the job already using, say, Smedge or Muster.

djlithium
03-09-2010, 08:41 AM
Sure... but what if you need to bake animated textures or lights over a series of frames for 20 objects?

:)

A 'camera name to scene file' break-out LScript.

Lightwolf
03-09-2010, 08:45 AM
Sure, linear workflow is neat and all that, but I don't think it trumps other things that should have been addressed in the last cycle.
The thing is, though, colour management needs to be added at the core level because of the OpenGL preview.
A baking camera with a batch option could at least be written by a third party if there is enough demand for it.

Having said that, the fact that the baking camera can't be scripted also shows the limits of the current code base. So, in that sense, it would make sense to overcome that first, as opposed to including yet another closed-off feature that may or may not cover certain needs.

Cheers,
Mike

Captain Obvious
03-09-2010, 10:16 AM
A baking camera with a batch option could at least be written by a third party if there is enough demand for it.
Yes, definitely. Such a tool already exists, in fact, but it is unfortunately proprietary.

Cageman
03-09-2010, 08:59 PM
A 'camera name to scene file' break-out LScript.

Exactly, and as such I have put up a feature request on FogBugz.

djlithium
03-10-2010, 02:22 AM
I still think a nice sequential Surface Baking Camera system that works over the network would be really kick ***. The two options for the cameras would be 'render as normal sequence', as set in your save-image options, and 'save image as UV map name', with each camera in the scene picking up the name of the UV map it's working with.

djlithium
03-10-2010, 02:25 AM
The thing is, though, colour management needs to be added at the core level because of the OpenGL preview.
A baking camera with a batch option could at least be written by a third party if there is enough demand for it.

Having said that, the fact that the baking camera can't be scripted also shows the limits of the current code base. So, in that sense, it would make sense to overcome that first, as opposed to including yet another closed-off feature that may or may not cover certain needs.

Cheers,
Mike

Let's not turn this into a CORE vs. classic LW debate, please. But yes, this is something that is needed both now and later: now being Lightwave 9.6.1, and later being whatever it becomes.

Lightwolf
03-10-2010, 03:18 AM
Let's not turn this into a CORE vs. classic LW debate, please.
I didn't even mention Core ;)

Cheers,
Mike

3Dfool
03-13-2010, 07:08 PM
This stereo camera http://colm.jp/plug/stereo.html can render two views simultaneously, and it's very fast. Someone should be able to write a similar plugin that takes into account multiple cameras that you set up in Lightwave.

geo_n
03-15-2010, 07:34 AM
This stereo camera http://colm.jp/plug/stereo.html can render two views simultaneously, and it's very fast. Someone should be able to write a similar plugin that takes into account multiple cameras that you set up in Lightwave.

I just tried it. It cuts the render time nearly in half!! This is a big deal for LW. I don't think Max has this, so I showed it to my TD and he was shocked.
Would a similar plugin to render multiple camera angles at the same time be possible, or is it only fantasy? Imagine rendering GI scenes with multiple cameras at once!

Skywatcher_NT
03-15-2010, 10:47 AM
Just tried it also. Gave me some ideas :thumbsup:
I just tested them, and it is possible to do it with standard Lightwave without any plugins.
And the best thing!!!
It's even possible to do the multiple-cam-angle thing you talked about.
And even better..... render time is almost the same as with one cam. :dance:

:lightwave

geo_n
03-15-2010, 07:51 PM
Just tried it also. Gave me some ideas :thumbsup:
I just tested them, and it is possible to do it with standard Lightwave without any plugins.
And the best thing!!!
It's even possible to do the multiple-cam-angle thing you talked about.
And even better..... render time is almost the same as with one cam. :dance:

:lightwave

How's that done with a standard cam, rendering multiple cams in one pass?

Skywatcher_NT
03-16-2010, 03:12 AM
Just use CCTV. Link polys to the L and R cams, or whatever cams you like.
Make two (or more) polys at the resolution of the cams (e.g. 752x576 mm, so one mm per pixel).
Put them side by side, and point the actual render cam at them with double the resolution on the horizontal axis.
Then you get a render with the two cams together.
Crop in comp and done.
There's lots of stuff to test still (passes???, mattes... etc.).
But it's a start. It's not limited to two cams, but then your final render output gets bigger.
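The layout arithmetic, plus the 'crop in comp' step, looks like this (a Python sketch with Pillow; the per-camera resolution comes from the post above, and the file names are placeholders):

# Side-by-side CCTV layout arithmetic, plus the "crop in comp" step.

from PIL import Image

cam_width, cam_height = 752, 576
num_cams = 2                          # L and R; more cams also work

# The render cam frames all the polys at once, so its resolution is:
render_width = cam_width * num_cams   # 1504
render_height = cam_height            # 576
print(f"render at {render_width}x{render_height}")

# Crop each camera's strip back out of the combined frame.
combined = Image.open("combined_frame.png")
for i in range(num_cams):
    box = (i * cam_width, 0, (i + 1) * cam_width, cam_height)
    combined.crop(box).save(f"cam{i}_frame.png")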

Have fun.

geo_n
03-17-2010, 01:45 AM
I've never used the CCTV shader before, and I'm not getting the result. Proton has a tutorial and I watched it, but my CCTV scene doesn't render the other cams on the poly; just one cam is rendered.
Does it work with GI scenes?

Skywatcher_NT
03-17-2010, 03:02 AM
Yes, it does GI. I'll take a look at your file when I have some time.

Skywatcher_NT
03-17-2010, 04:10 AM
Here it is :beerchug:

The polys for the cams have to be centered in Modeler or it won't work.
Just place them in Layout.
I also changed your cams to the Perspective type.
The CCTV plug also needs a scale of 50 set to work (don't know why yet).

Enjoy:thumbsup:

geo_n
03-17-2010, 08:48 PM
Here it is :beerchug:

The polys for the cams have to be centered in Modeler or it won't work.
Just place them in Layout.
I also changed your cams to the Perspective type.
The CCTV plug also needs a scale of 50 set to work (don't know why yet).

Enjoy:thumbsup:

Thanks a lot. It's not in the manual that the polys have to be centered. I'll play with the scene file :D

evolross
05-12-2010, 10:31 PM
Just use CCTV...
Now that's thinking outside the box... wow.

I have a related question... is there any way in Lightwave or with plugins to have multiple image-sequence export modules and then possibly also have multiple camera setting modules in a single pass? Not necessarily multiple cameras, just camera settings.

After Effects allows this exact feature: you render and compute once, but save out multiple image-format sequences with a vast array of image sizes, compression settings, etc. I suppose this is a compositing package's function, but it could be useful inside of Lightwave.
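A sketch of that fan-out as a post-render step (Python with Pillow; the sizes, formats and file names are arbitrary examples, not an After Effects or Lightwave feature):

# One rendered frame, fanned out to several sizes/formats after the fact.
# Output sizes, formats and names are arbitrary example choices.

from PIL import Image

frame = Image.open("render_0001.png")   # placeholder source frame

outputs = [
    ("full_0001.png",  frame.size,                             "PNG"),
    ("half_0001.jpg",  (frame.width // 2, frame.height // 2),  "JPEG"),
    ("thumb_0001.jpg", (320, 180),                             "JPEG"),
]

for name, size, fmt in outputs:
    frame.resize(size, Image.LANCZOS).convert("RGB").save(name, fmt)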

Another amazing feature would be render presets! (Also in After Effects.) This would be so helpful in Lightwave, so you wouldn't have to go back and re-set limited region, resolution, AA, motion blur, GI, compositing color-pixels, etc. each time you switch between a preview and a final render. So annoying, and so many ways to screw up a final render.