PDA

View Full Version : FR: Overhauled Better Render Buffers Please !!!



MrWyatt
02-03-2013, 10:23 AM
Hi to the LW dev team. After staying away for a long time and having thought I'd never get back to upgrading LW, I decided to give NT one last chance and upgraded to 11 a few days before 11.5 came out. I am really happy about the feature additions, yet at the same time I feel disappointed again over some features that still to this day haven't been addressed.

Let's face it, guys: LW's render buffers suck. There is no way to sugar-coat it. They suck, plain and simple. Yes, I know of DP's nodal pixel and image filters, but c'mon, the basic buffers should be there, and they aren't, or they don't make sense.

We have had SSS shaders/materials since which version now? 9? 8 maybe?
Now we are on version 11.5 and still no SSS buffer. Really, did it not occur to anyone that this could be useful, much needed, essential even?
Then we have some buffers that seem obsolete, like Diffuse Shading and Specular Shading, since we have Shaded Diffuse and Shaded Specular. Kind of confusing, really.
Then, for some odd reason I don't understand, we have had radiosity since what, version 6? Yet we don't have an indirect illumination buffer.

So here is a list of buffers I'd like the LW dev team to think about implementing, in order to give things like the much-touted "Compositing Buffer Export" any real relevance.

1. Unify the shading buffers please.
A. Raw RGB
B. Diffuse Direct (direct illumination)
C. Diffuse Indirect (GI )
D. Translucent
E. Reflection
F. Refraction
G. Specular
H. SSS
I. Luminous Shading

2. Usable utility buffers please.
A. AO that doesn't require the frame to be rendered twice (really, whose idea was that?).
B. Normals (World space)
C. Normals (Camera space)
D. World Position Buffer. (Every renderer now supports this and it is widely used in production these days, so just do it already)
E. Object Position Buffer.
F. Motion Vectors that reflect the convention you find nowadays in modern compositors such as Nuke and Fusion.

3. Custom buffers please.
Every pipeline is a little different, and so are the needs for custom buffers. Please allow the user to add custom buffers as needed and feed anything they want into them. How many renderers out there allow this? Let's see: any RenderMan-compliant renderer, mental ray, V-Ray, Arnold (as far as I've heard), just to name a few.
Just look at the things that are possible with DP's nodal pixel/image filters. I want that, straight out of the box and maybe not so tedious to handle.

I know I come across as a bit ranty atm, but I have requested things like these so often and none of it ever got implemented in LW. On the other hand, I bitched about the lack of render outputs in modo on the Luxology forum, and all of my requested buffers (3 or 4) got implemented beautifully in 601. I often hear that NT is starting to listen to the userbase. I have yet to find that out myself, because every time I ask for something as trivial as the above (and it should be trivial, as all of the data I am asking to be exposed in a buffer is there anyway and gets computed anyway, so what is the big deal of writing that data into a buffer, for heaven's sake), I get a couple of "who would ever need this, NT give us muscle sim and Vue-like vegetation instead" answers, and that's it. I really hope NT takes this to heart, or even just under consideration, but I wouldn't be surprised to find out by LW 14 that none of the above features have made it into the software.

Please prove me wrong. In that case I would be so happy to admit I was wrong.

cheers

Guillaume

Doctor49152
02-04-2013, 09:50 AM
+1 for everything he said.

>>2a. AO that doesn't require the frame to be rendered twice (really, whose idea was that?).

My problem with the AO is that it uses the scene's ambient intensity (which I usually set to 0% or leave at the default 5%), and this produces extremely dark AO renders, making them useless for my work. I then have to change the ambient intensity to 100% and re-render a separate AO pass. This makes the 2x rendering even more pointless and the frame buffer export totally useless for me.

So if we have to render the AO out twice, we should be able to set the ambient intensity for the second pass and override the initial setting for the first pass.

Right now this is my only real beef with LW.

mummyman
02-04-2013, 10:24 AM
Their new AO export method also doesn't work on my render farm. It requires radiosity to be cached, which works fine when rendering locally / still frames. But on a long animation on a render farm... I haven't gotten it to work properly yet. Sticking to exrTrader/Shadermeister for now.

Pavlov
02-04-2013, 06:05 PM
+1
I'd start with the simpler and universally needed things anyway, like AAs and colored Surface and Object IDs.
As you say, any engine provides these out of the box, and they are essential for any post-processing operation.

Paolo

sukardi
02-04-2013, 06:35 PM
+1

I don't know how difficult it is to do, but unless all of these are implemented in LightWave soon, it risks becoming obsolete for studios of any size.

Things are moving really fast, and I hope we don't have to wait until 12 for at least some of these to be implemented...

MrWyatt
02-05-2013, 04:29 AM
+1

I don't know how difficult it is to do, but unless all of these are implemented in LightWave soon, it risks becoming obsolete for studios of any size.

Things are moving really fast, and I hope we don't have to wait until 12 for at least some of these to be implemented...

Actually, I am having a really hard time getting my head around how NT could tout the Compositing Buffer Export as such a cool feature, when the underlying tech of subpar render buffers, at least for me, makes it absolutely useless, because I still have to create extra scenes for the buffers I need using Shadermeister (love it, and I can see it staying useful even when we finally get GOOD buffers).

erikals
02-05-2013, 04:59 AM
regarding AO rendering twice,

is there any way to fix that?
using DP buffers or such...?

MrWyatt
02-05-2013, 05:05 AM
regarding AO rendering twice.
is there any way to fix that?

Yes.














by NT implementing it differently (aka properly)

;)

sorry couldn't resist.

erikals
02-05-2013, 05:06 AM
maybe this could work... (?)
http://forums.newtek.com/showthread.php?123925-Baking-Occlusion&p=1201913&viewfull=1#post1201913



anyway, you might save time rendering twice actually,
if you use this DP denoiser method >
http://forums.newtek.com/showthread.php?102616-DPont-s-Denoiser-Node-anyone-tried-it



(not saying NT shouldn't fix it though...)

MrWyatt
02-05-2013, 05:17 AM
As many here might know, I'm not a friend of half-assed workarounds that give NT a free pass not to improve LW's subpar tools. I'm a big friend of bugging devs to fix the root of the problem instead of me having to fix the symptoms. Please take no offense at this reply. I am sure your workaround gets you there, but we have got to face the facts: we shouldn't be in the situation of discussing workarounds in the first place.

NT has to fix this BS excuse of a buffer system. Period.

And I'm not going to cut them some slack until they start not only to listen, but to implement features like this properly.

mummyman
02-05-2013, 07:56 AM
As many here might know, I'm not a friend of half-assed workarounds that give NT a free pass not to improve LW's subpar tools. I'm a big friend of bugging devs to fix the root of the problem instead of me having to fix the symptoms. Please take no offense at this reply. I am sure your workaround gets you there, but we have got to face the facts: we shouldn't be in the situation of discussing workarounds in the first place.

NT has to fix this BS excuse of a buffer system. Period.

And I'm not going to cut them some slack until they start not only to listen, but to implement features like this properly.

Tell us what you really think! LOL Well said. I heard that NT does actually like to hear the bitching about the program. Hopefully they are still taking these into consideration. I figured that now, with 11.5, the depth channel thing would be fixed for me. So I tried using the composite buffer to spit out a depth channel on a 240-frame sequence. In the middle of the shot... the depth channel just blinked. Very weird. I've never seen that happen before. Seems random. Other shots... it works fine. Still buggy from my POV. Good luck!

geo_n
02-05-2013, 12:03 PM
3. Custom buffers please.
Every pipeline is a little different, and so are the needs for custom buffers. Please allow the user to add custom buffers as needed and feed anything they want into them. How many renderers out there allow this? Let's see: any RenderMan-compliant renderer, mental ray, V-Ray, Arnold (as far as I've heard), just to name a few.
Just look at the things that are possible with DP's nodal pixel/image filters. I want that, straight out of the box and maybe not so tedious to handle.


This one is a biggie: rendering out per light into custom buffers. I thought it was possible with third-party tools, saving alongside the main EXR, but the SDK is not exposed to do this, as I've been told. Used DPFE instead and saved out each light separately. :D
In V-Ray this is all easy and free when you render a frame. Extensive integrated buffers. Not bad, really. People say that with other packages you have to send the scene out to an external renderer, and that it sucks, and that you need to buy a third-party renderer.
Truthfully, when they MDD out from Maya, LightWave IS AN EXTERNAL RENDERER. :D
The compositing buffer export seems broken too, outputting files either into the project root folder or creating a "Render" folder. :thumbsup:
LW 11.5.1

Lightwolf
02-05-2013, 07:06 PM
F. Motion Vectors that reflect the convention you find nowadays in modern compositors such as Nuke and Fusion.

The convention is perfectly fine (and Fusion accepts it without a hitch). It's RSMB that's "broken" (mainly due to the non-float legacy).

Yes, and I stick to that. :p It would be a lot easier, more sensible and safer if RSMB supported proper float-based values out of the box.

Cheers,
Mike

MrWyatt
02-06-2013, 12:59 AM
The convention is perfectly fine (and Fusion accepts it without a hitch). It's RSMB that's "broken" (mainly due to the non-float legacy).

Yes, and I stick to that. :p It would be a lot easier, more sensible and safer if RSMB supported proper float-based values out of the box.

Cheers,
Mike

Well, in the case of Fusion I stand corrected. I don't use it, so my "facts" failed me there. But I can tell you that they don't work well in Nuke either, and I don't use RSMB, btw.
The best motion vectors for Nuke are a single RGB channel (R = positive and negative values for x motion, G = positive and negative values for y motion, B = simply 0). Sure, I can take LW's Motion X and Motion Y buffers and build a working motion vector channel from those inside Nuke by shuffling channels around and doing simple math on them, but my point is: I don't want to do that.
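The channel shuffle described above can be sketched in a few lines of numpy (the arrays here are made-up stand-ins for LW's Motion X / Motion Y buffers, not data from a real render):

```python
import numpy as np

# Toy stand-ins for separate "Motion X" / "Motion Y" float buffers
# (signed pixel displacement); real buffers would be full-frame.
motion_x = np.array([[1.5, -2.0],
                     [0.0,  3.25]], dtype=np.float32)
motion_y = np.array([[-0.5, 1.0],
                     [ 2.0, 0.0]], dtype=np.float32)

# Nuke-style motion layer: one RGB set where R = signed x displacement,
# G = signed y displacement, B = 0.
motion_rgb = np.stack(
    [motion_x, motion_y, np.zeros_like(motion_x)], axis=-1
)  # shape (height, width, 3)
```

In Nuke the same thing amounts to a Shuffle/Merge per frame; the point is that a renderer writing the combined layer directly makes this step unnecessary.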
Modo, for example, introduced motion vectors in 501. That implementation was limited to the 8-bit RSMB version, which I don't like much myself because it forces you to guesstimate the maximum pixel movement in the shot and normalizes the vectors to that value, eurgh.
Then I did some bitching and moaning, especially about the fact that Modo's renderer couldn't output negative values (which is important for true floating-point motion vectors), and asked for motion vectors that work better with Nuke. In 601 they implemented it. Now you can choose whether you want the motion vectors clamped and remapped (RSMB style) or unclamped (Nuke style).
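For illustration, the clamp-and-remap that the 8-bit RSMB-style output forces on you can be sketched like this (the 0.5-centred 0..1 mapping and the guessed `max_displacement` reflect my reading of that convention; the helper itself is hypothetical):

```python
import numpy as np

def remap_rsmb(vectors, max_displacement):
    """Map signed float motion vectors into a 0..1 range, RSMB style:
    0.5 means no motion, +/- max_displacement maps to 1.0 / 0.0.
    Anything beyond the guessed maximum is clamped -- exactly the lossy
    step that an unclamped float ("Nuke style") output avoids."""
    normalized = vectors / (2.0 * max_displacement) + 0.5
    return np.clip(normalized, 0.0, 1.0)

v = np.array([-8.0, 0.0, 4.0, 12.0], dtype=np.float32)
remapped = remap_rsmb(v, max_displacement=8.0)
# -8 maps to 0.0, 0 to 0.5, 4 to 0.75; 12 overshoots the guess and clamps to 1.0
```

That clamp is why guessing the maximum wrong loses data, while unclamped floats keep the true displacement.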
All I am asking is for LW to dust off its old-school way of doing motion vectors. If Fusion still needs two RGB channels as input for motion blur, then I guess the Fusion way is just as antiquated as LW's way of writing them to buffers.

(ducks for cover)

;)

Lightwolf
02-06-2013, 01:53 AM
Well, in the case of Fusion I stand corrected. I don't use it, so my "facts" failed me there. But I can tell you that they don't work well in Nuke either, and I don't use RSMB, btw.
The best motion vectors for Nuke are a single RGB channel (R = positive and negative values for x motion, G = positive and negative values for y motion, B = simply 0). Sure, I can take LW's Motion X and Motion Y buffers and build a working motion vector channel from those inside Nuke by shuffling channels around and doing simple math on them, but my point is: I don't want to do that.
That's not a problem of the buffer format then, but an exporting issue.


Then I did some bitching and moaning, especially about the fact that Modo's renderer couldn't output negative values (which is important for true floating-point motion vectors), and asked for motion vectors that work better with Nuke. In 601 they implemented it. Now you can choose whether you want the motion vectors clamped and remapped (RSMB style) or unclamped (Nuke style).
I suppose the only implementation work they did was to optionally remove their conversion to RSMB style; float is what you'd get natively anyhow.

All I am asking is for LW to dust off its old-school way of doing motion vectors. If Fusion still needs two RGB channels as input for motion blur, then I guess the Fusion way is just as antiquated as LW's way of writing them to buffers.
Seriously though? I thought Nuke was a little more advanced than that when it comes to reading EXRs. Having to read an extra channel just because (causing I/O and memory overhead), and a restriction on channel naming (something that is explicitly allowed and suggested by the EXR developers), seems a little antiquated. ;)


(ducks for cover)

;)
You better :D

Cheers,
Mike