PDA

View Full Version : Point p(xyz) world positions pass



Eagle66
09-20-2011, 01:00 PM
As in the title: is it possible natively in LW 10 to render a 32-bit floating point world position pass for generating volumetric 3D fog or relighting a scene in a comp application like Nuke or Fusion?

:help:

Lightwolf
09-20-2011, 01:03 PM
You can either use Denis' Nodal Pixel Filter (free -> http://dpont.pagesperso-orange.fr/plugins/nodes/DP_Filter.html ) to create one, or our (commercial) shaderMeister (see below).

Native without any third party plugins, no.

Cheers,
Mike

Eagle66
09-21-2011, 12:39 PM
Thanks.

Which Node has p(xyz) world position?
http://dpont.pagesperso-orange.fr/plugins/nodes/DP_Filter.html#NodePixFilt

Node Pixel Filter?

Sensei
09-21-2011, 12:58 PM
Spot Info > World Position..

Greenlaw
09-21-2011, 01:40 PM
We started using this in Fusion about a week ago. What it does is intriguing, though we're still figuring out how best to apply it in our pipeline and for our current production.

You might already know this but you also need to export your Lightwave camera via FBX and import that into Fusion for this to work properly. There's a couple of videos on this subject on Eyeon's YouTube page.

Lightwolf
09-21-2011, 01:51 PM
You might already know this but you also need to export your Lightwave camera via FBX and import that into Fusion for this to work properly.
Until I finally find the time to reverse engineer it from the camera matrix that's exported as metadata in EXRs.... ;)
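For the curious, pulling a camera pose out of such a matrix is straightforward once you know the layout. Here's a minimal Python sketch: the row-major packing with translation in the last column, and the `decode_camera_matrix` name, are assumptions for illustration, not what LightWave actually writes into the EXR header.

```python
import math

def decode_camera_matrix(m):
    """Extract camera position and forward axis from a 4x4
    camera-to-world matrix, given as a flat list of 16 floats in
    row-major order with the translation in the last column.
    (Hypothetical layout -- check the header of a real render.)"""
    position = (m[3], m[7], m[11])   # translation column
    forward = (m[2], m[6], m[10])    # +Z axis of camera space
    n = math.sqrt(sum(c * c for c in forward))
    forward = tuple(c / n for c in forward)  # normalize
    return position, forward
```

With the pose decoded, a comp package's camera could be aligned to the render without a separate FBX export.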

Cheers,
Mike

Greenlaw
09-21-2011, 05:35 PM
Sweet. ;)

Eagle66
09-23-2011, 02:08 PM
There's a couple of videos on this subject on Eyeon's YouTube page.

Yes, this one
http://www.youtube.com/watch?v=gcSo0nmS40Q

It was the Reason for my Question :)
Fusion has a "Z to World Position" tool, but it's simpler if the 3D app has a WPP...

3DGFXStudios
11-04-2011, 05:17 AM
Can somebody show how to do this with the DPnodes? A node setup would be nice :D

Greenlaw
11-04-2011, 09:44 AM
I started using this feature myself a couple of weeks ago. I had a shot where I needed to quickly 2D track bullet impacts along a wall in Fusion, but because of the camera move this got a lot more complicated than I expected. Jon Sadonsky, another compositor here, showed me how to pin my elements to a 'pixel' in the rendered image, which gave me a rock-solid tracking result that actually moved in 3D space.

Sorry, I'm not sure how the setup really works though--it's alien science to me. Somehow it involves embedding certain auxiliary channels from LW plus the scene's camera position and lens data in the metadata of the rendered exr file. When you load the frames in Fusion, a Fusion camera reads the metadata from the file and automatically aligns itself to where the LightWave camera was when it rendered the image. Combined with the auxiliary channels, this somehow gives you the position of every pixel in a rendered frame and allows you to attach things to them in three dimensions. Pretty mind-boggling. This setup requires custom tools from our programmer Mike Popovich. Very impressive!

It would be really cool if LW could do this natively in a future release. This technique is incredibly useful.

G.

Sensei
11-04-2011, 09:55 AM
Pretty mind-boggling.

Not at all.
Spot World Position = Ray Origin + Ray Length * Ray Direction
The Z-buffer is just the Ray Length, modified (normalized to the 0.0 to 1.0 range).
Ray Direction can be easily calculated from the Width, Height and Camera FoV/Pixel Aspect.
The one case with not enough data would be cameras like the Surface Baking Camera, which doesn't have a single Ray Origin for the entire rendered image.
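Sensei's formula can be sketched in a few lines of Python. Everything here is illustrative: `world_position` is a hypothetical helper, and it assumes an unrotated pinhole camera looking down +Z, square pixels, and a depth value that is the true ray length rather than planar Z.

```python
import math

def world_position(x, y, depth, width, height, h_fov, cam_pos=(0.0, 0.0, 0.0)):
    """WorldPos = RayOrigin + RayLength * RayDirection, for pixel (x, y).
    Assumes an unrotated pinhole camera at cam_pos looking down +Z;
    h_fov is the horizontal field of view in radians."""
    # Half-extent of the image plane at distance 1 from the camera
    half_w = math.tan(h_fov / 2.0)
    half_h = half_w * height / width  # square pixels assumed
    # Pixel center mapped to the image plane, y flipped so up is +Y
    px = (2.0 * (x + 0.5) / width - 1.0) * half_w
    py = (1.0 - 2.0 * (y + 0.5) / height) * half_h
    # Normalized ray direction through this pixel
    length = math.sqrt(px * px + py * py + 1.0)
    ray_dir = (px / length, py / length, 1.0 / length)
    # If "depth" is planar Z instead of ray length, divide it by ray_dir[2] first
    return tuple(o + depth * d for o, d in zip(cam_pos, ray_dir))
```

For a normalized Z-buffer you would first rescale the depth values back to world units before feeding them in.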

Greenlaw
11-04-2011, 11:24 AM
Not at all.
Well, it boggles my tiny 'non-technical' brain anyway. :)

One potential issue we suspect with the embedded LW camera data is that it may not motion blur properly with Fusion's camera. This hasn't been an issue for us yet, but if it does become one, we will probably just bake the camera path in Fusion using the metadata and then disable the metadata. (Or just import an FBX, which IMO is usually preferable because screen updating is a lot faster with an actual motion path.)

G.

P.S., Thanks for the explanation!

3DGFXStudios
11-04-2011, 12:52 PM
Not at all.
Spot World Position = Ray Origin + Ray Length * Ray Direction
The Z-buffer is just the Ray Length, modified (normalized to the 0.0 to 1.0 range).
Ray Direction can be easily calculated from the Width, Height and Camera FoV/Pixel Aspect.
The one case with not enough data would be cameras like the Surface Baking Camera, which doesn't have a single Ray Origin for the entire rendered image.

Thanks for the explanation, but I still can't figure out how to set this up using nodes. I've never seen such a pass, so I don't know how it should look.
Can you show it to us?
:hey:

Sensei
11-04-2011, 12:53 PM
You should ask Mike, he is a full-time compositor after all.. :)

3DGFXStudios
11-04-2011, 01:00 PM
You should ask Mike, he is a full-time compositor after all.. :)

haha yes, but then I'd have to buy his shaderMeister plugin and use the preset I found ;)
Not that that's a problem, but then I'd have to update the render nodes again :D and I'm a bit lazy sometimes, especially on a Friday ;)