PDA

View Full Version : Displacement vs. Normal Maps... I Need Clarification!



evolross
02-08-2011, 07:36 PM
I was watching a mini-tutorial in LW about using displacement maps and normal maps. I understand what each one is and how they work, but I still have a few questions...

1. Are normal maps only used as a surface rendering effect (similar to bump)? The video I watched showed placing a normal map node in the node editor and sending the vector into the "Normal" input on the surface node. Is it possible to drive actual geometry displacement with a normal map?

2. If normal maps are a rendering effect only, do they have the same limitations as bump maps? (e.g. no shadow casting, a flat look at grazing viewing angles).

3. I understand that a normal map is multi-colored because it displays the polygons' normal direction, but do they include height/distance information? If not, how do normal maps achieve the protrusions and convex-looking features I've seen in renders that use normal maps? Will these protrusions and convexities cast a shadow (see question 2)?

4. Is it common to use both a normal and displacement map at the same time over the same geometry?

Thanks, I'm just asking because I honestly don't know the above answers. And I'm asking in reference to using the maps in LightWave.

warmiak
02-09-2011, 09:24 PM
1. Are normal maps only used as a surface rendering effect (similar to bump)? The video I watched showed placing a normal map node in the node editor and sending the vector into the "Normal" input on the surface node. Is it possible to drive actual geometry displacement with a normal map?


Yep, normal maps are simply collections of normal vectors (RGB = XYZ).
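
To make the "RGB = XYZ" idea concrete, here's a tiny Python sketch (nothing LightWave-specific, just the math) showing how one 8-bit texel decodes back into a unit normal:

# Decode one normal-map texel: each channel is remapped from 0..255 to -1..+1.
# A texel of (128, 128, 255) decodes to roughly (0, 0, 1), i.e. a normal
# pointing straight out of the surface.
def decode_normal(r, g, b):
    x = r / 255.0 * 2.0 - 1.0
    y = g / 255.0 * 2.0 - 1.0
    z = b / 255.0 * 2.0 - 1.0
    # Renormalize to clean up 8-bit quantization error
    length = (x * x + y * y + z * z) ** 0.5
    return (x / length, y / length, z / length)

print(decode_normal(128, 128, 255))  # ~ (0.0, 0.0, 1.0)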

When you render a surface, for every pixel you need to know its orientation relative to whatever light you are using to shade it.
Without a normal (or bump) map, the direction a pixel faces is interpolated from the averaged normals of the face's vertices - this is why heavily tessellated models tend to look much better.
Ultimately you would want a 1:1 vertex-to-pixel ratio, but that would create extremely polygon-heavy meshes, so the way around it is a separate texture map that supplies a normal for each pixel. (That near 1:1 ratio of vertices to pixels is actually what gets used when generating normal maps - ZBrush and similar apps pretty much work that way, baking a dense mesh's normals into a texture.)
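
Here's a rough illustration of that shading step in Python - just a plain Lambert (diffuse) term, and the example normals are made up - to show where the per-pixel normal actually plugs into the lighting math:

# The lighting math is the same whether the normal was interpolated from the
# vertices or read out of a normal map; only the source of the vector changes.
def lambert(normal, light_dir, albedo=(0.8, 0.8, 0.8)):
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))  # cosine of the angle to the light
    intensity = max(n_dot_l, 0.0)  # surfaces facing away from the light get nothing
    return tuple(c * intensity for c in albedo)

face_normal = (0.0, 0.0, 1.0)      # averaged/interpolated from the vertices
mapped_normal = (0.3, 0.0, 0.954)  # hypothetical perturbed value from a normal map
light = (0.0, 0.0, 1.0)

print(lambert(face_normal, light))    # full brightness
print(lambert(mapped_normal, light))  # darker, as if the surface were tilted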

These maps only provide normals (the directions pixels are facing), not actual positions, so they only affect the color of a pixel and not its placement - which is why normal maps always look flat when viewed from the side.

There is a technique called parallax mapping which gets around this by using an additional texture map (a height map) that actually contains per-pixel height information and is capable (depending on implementation) of producing not only a convincing illusion of displacement but also localized self-shadowing.
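
The simplest form of that idea is just a UV offset driven by the height map and the view direction. A rough Python sketch (the relief-mapping node linked below goes well beyond this; this is only the most basic variant):

# Shift the texture lookup toward the viewer in proportion to the height value,
# so raised texels appear to slide over lower ones when seen at an angle.
def parallax_offset(u, v, height, view_dir, scale=0.05):
    vx, vy, vz = view_dir  # view vector in tangent space: x, y along the surface, z out of it
    u_shifted = u + (vx / vz) * height * scale
    v_shifted = v + (vy / vz) * height * scale
    return u_shifted, v_shifted

# A high texel seen from a glancing angle gets sampled noticeably off its
# original UV; a texel at height zero is not moved at all.
print(parallax_offset(0.5, 0.5, 1.0, (0.7, 0.0, 0.3)))
print(parallax_offset(0.5, 0.5, 0.0, (0.7, 0.0, 0.3)))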

PS.
http://dpont.pagesperso-orange.fr/plugins/nodes/nodes/ReliefMapping.html#ReliefMap

Danner
02-10-2011, 02:03 AM
You talk about displacement maps, bump maps and normal maps, but there is also normal displacement (which displaces the geometry along its normal vectors, driven by a map).
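
In general terms (not LightWave's exact implementation), normal displacement pushes each vertex along its own normal by an amount read from the map - roughly like this Python sketch:

# Displace a vertex along its normal by a (grayscale) map value.
def displace_along_normal(position, normal, map_value, amount=1.0):
    px, py, pz = position
    nx, ny, nz = normal
    d = map_value * amount
    return (px + nx * d, py + ny * d, pz + nz * d)

# A vertex on the +Y side of a unit sphere, pushed outward by a map value of 0.2
print(displace_along_normal((0.0, 1.0, 0.0), (0.0, 1.0, 0.0), 0.2))  # -> (0.0, 1.2, 0.0)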