colour sampler shader?

chunderburger

Active member
Quoted from the XSI guide. Is this possible?
I was thinking this would be neat for baking down sections of nodal networks into a map. Insert it in the tree, pick a texture projection for baking, and then render. Then switch off the tree above the node and use the generated texture.

-------------------------------------------------------------------------

The color sampler shader is a lightmap shader that samples an object’s surface and writes the result to a texture file. This is conceptually similar to RenderMap’s [surface baking cam] surface color map, but works in a different way.

Rather than using a virtual camera to sample an object’s surface from a specified distance, the color sampler shader evaluates the object’s render tree directly. Whatever portion of the render tree is connected to the color sampler’s Input port is computed and written to a texture file.

The color sampler shader itself must be connected to the Material node’s Lightmap port in order to generate the texture.
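
In rough terms, the bake described above boils down to a loop over the texels of the target map. Here is a minimal Python sketch of the idea; eval_input_branch and uv_to_surface_point are hypothetical stand-ins for what the renderer does internally, not XSI API:

    WIDTH, HEIGHT = 512, 512

    def bake(eval_input_branch, uv_to_surface_point):
        # For each texel, find the surface point the UV projection maps
        # it to, evaluate the connected render tree branch there, and
        # store the result.
        image = [[(0.0, 0.0, 0.0)] * WIDTH for _ in range(HEIGHT)]
        for y in range(HEIGHT):
            for x in range(WIDTH):
                u = (x + 0.5) / WIDTH          # texel centre in UV space
                v = (y + 0.5) / HEIGHT
                point = uv_to_surface_point(u, v)  # inverse of the projection
                if point is not None:              # texel covered by the mesh
                    image[y][x] = eval_input_branch(point)
        return image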



Advantages of using the Color Sampler shader:

• Because the color sampler shader evaluates the render tree directly, you don’t have to worry about other objects getting in the way of a virtual camera. Nor do you have to worry about introducing distortion the way you might when setting RenderMap’s camera-distance value too high.

• You can quickly generate a texture from any shader or branch in the render tree by simply changing what is connected to the color sampler’s input.

• The color sampler shader can easily be set to output texture sequences rather than still images.

• Using the color sampler shader involves very little setup. You need only make a few render tree connections and set a few parameters.

• Depending on the material being evaluated, the color sampler shader can be faster than RenderMap.
 

Attachments

  • Untitled-1.jpg

Lightwolf

obfuscated SDK hacker
You still need UVs for this, right? I suppose that is where it gets tricky with LW's current system.
I don't see an obvious way of adding something like that, but maybe Denis (dpont) might be able to think of something; he knows how to hack nodes ;)

Cheers,
Mike
 

Karmacop

I am Jack's cold sweat
I think it should be fairly easy ... hmm ... you better not have just made me stay up all night! :p
 

duke

A BUG PLANET.
The problem would be if you want to use it for SSS and so on like in XSI. I don't think you can tell LightWave to evaluate the file you're writing out and pull it back in at the same time, so you'd have to do a pre-pass which just renders out the lightmap.
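
A rough sketch of that pre-pass flow, with save_image and load_image passed in as hypothetical helpers (this is just the control flow, not any actual LightWave call):

    def two_pass_render(bake_fn, render_fn, save_image, load_image, path):
        # Pass 1: bake the lightmap and write it to disk first.
        save_image(bake_fn(), path)
        # Pass 2: only now read the finished file back in for the beauty
        # render, e.g. to feed an SSS shader that needs the complete map.
        return render_fn(load_image(path))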
 

Karmacop

I am Jack's cold sweat
I stayed up some of the night before realising it's a big job :p

I started out trying to create a node that would save samples to a buffer ... so essentially you feed any output into a buffer node that is saved as an image. The node will get called at every sample, so it will need to do anti-aliasing etc. by itself ... and some other stuff. So it's more complex than I'd first thought, but I'm going to keep working at it.
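
The accumulation idea might look something like this conceptual Python sketch (not LW SDK code): the node is fed once per sample, splats each sample into a texel bin, and averages at the end, which is the anti-aliasing it has to do for itself:

    class BufferNode:
        def __init__(self, width, height):
            self.w, self.h = width, height
            self.sums = [[[0.0, 0.0, 0.0] for _ in range(width)]
                         for _ in range(height)]
            self.counts = [[0] * width for _ in range(height)]

        def add_sample(self, u, v, color):
            # Called once per render sample: accumulate into a texel bin.
            x = min(int(u * self.w), self.w - 1)
            y = min(int(v * self.h), self.h - 1)
            for c in range(3):
                self.sums[y][x][c] += color[c]
            self.counts[y][x] += 1

        def resolve(self):
            # Average the accumulated samples; untouched texels stay black.
            out = []
            for y in range(self.h):
                row = []
                for x in range(self.w):
                    n = self.counts[y][x]
                    row.append(tuple(s / n for s in self.sums[y][x])
                               if n else (0.0, 0.0, 0.0))
                out.append(row)
            return out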
 

dpont

Member
Maybe this is not exactly what you want, but I made a kind of colormap with the SB camera, set up with "Camera Position" instead of "Smoothed Normal" to get proper specular. Apart from the problem of UV map resolution (the body UV is smaller than the head), you can see here a normal render and the colormap mapped and output in "Diffuse Shading".

colormap.jpg

Denis.
 

dpont

Member
...But you mean baking this in a node tree directly, without the SB camera?
Is it just to have a multi-baking process in one render?

Denis.
 

duke

A BUG PLANET.
Denis, it's pretty much exactly what you have set up for the custom buffers, with these differences:

- It's all in the surface node editor.
- The colour_sampler itself doesn't HAVE to save to file; it can simply write to a temporary file used for rendering, though I'm not sure LightWave can do this.
- The colour_sampler has 2 main inputs:
1) an image input, which is the "lightmap" itself - it writes the data to this.
2) a sample input - whatever is plugged in here is what's sampled/written to the image. You can see both of these in action in the original image CB posted.
- The image you've plugged in can also be plugged into the input of another node, but because the colour_sampler is calculated first, that image already has the data by the time the other nodes are evaluated.
- Because the colour_sampler is writing to a file or virtual file, it can calculate at any resolution, which is where you get the speed.
- The colour_sampler bakes the data down according to the projection of the image you've plugged in, so it could bake to a front projection or a UV map or whatever.

It's really hard to explain this, so I might post some screenshots of examples in XSI when I get home.
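
In the meantime, the evaluation-order point above might be sketched like this (all names here are illustrative; none of this is LW or XSI API):

    def evaluate_tree(colour_sampler_bake, downstream_nodes, image):
        # The colour_sampler is computed first, filling the image input
        # (at whatever resolution that image happens to have)...
        colour_sampler_bake(image)
        # ...so by the time the rest of the tree runs, any node reading
        # the same image already sees the baked data.
        for node in downstream_nodes:
            node(image)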
 

gerardstrada

New member
If I understand correctly, we can do almost the same thing with the new Extra Buffer node from Denis Pontonnier and the Surface Baking Camera.

I made a test with the ZBrushHead scene (LW Content):



here is the Displacement map:



Let's say we want to save a normal map and an AO map:



We render with SBC and save these maps with the Get Extra Buffer node:



Then, we get the occlusion map:


(notice this is a different UV map from the Displacement map)

and the normal map:



In this case, subdivision is set to 20 (Per Object Level). Compare the details with this version, with SubD at 5:



and both maps were saved at the same time :)
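
The "both maps at once" part could be modelled as one bake pass that evaluates several buffers per surface hit and writes each to its own image. A conceptual Python sketch; ao_at and normal_at are hypothetical stand-ins for whatever fills the extra buffers during the SBC render:

    def bake_buffers(width, height, uv_to_point, buffers):
        # One pass over the UV map; every named buffer gets its own
        # image, all filled from the same surface hits.
        images = {name: [[(0.0, 0.0, 0.0)] * width for _ in range(height)]
                  for name in buffers}
        for y in range(height):
            for x in range(width):
                p = uv_to_point((x + 0.5) / width, (y + 0.5) / height)
                if p is None:
                    continue
                for name, fn in buffers.items():
                    images[name][y][x] = fn(p)  # e.g. AO and normal together
        return images

    # e.g. bake_buffers(1024, 1024, uv_to_point,
    #                   {"occlusion": ao_at, "normal": normal_at})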



Gerardo
 

duke

A BUG PLANET.
Yeah, but I think in this case the advantage is being able to write these out first and read them back in for use.
 