
View Full Version : Normal Maps - How?



Lewis
10-26-2006, 05:17 AM
Hi !

It may seem like a stupid question for guys who work in the game market, but I wonder: how do you make normal maps in LW? Can I make a normal map in Modeler somehow? What the heck is a normal map, really? Is it just a "better" bump map, or what?

Thanks

colkai
10-26-2006, 05:29 AM
There was (is?) a normal map shader and an example of how to create one in layout / modeller. I must admit, it's something I haven't quite wrapped my head around doing.
What it does, though, is give a much better look to your model than bump mapping, though I think it's limited, as I think lighting has an impact, so as the light rotation/position changes, I would imagine the normal-mapped object would end up looking odd.

EDIT:
Ahh, good old google:
http://amber.rc.arizona.edu/lw/normalmaps.html

Lewis
10-26-2006, 05:43 AM
Hmm, I know there is something in the NODE editor about normal maps, but I wondered: can I make one in Modeler somehow? This link/example is for LW 7.5, and that is simply a workaround if I understood it right, since LW 7.5 didn't support normal maps natively? Also, what would I gain with normal maps on simple objects such as, say, a dumpster or a sink or a chest of gold..? I get that for human and animal models (although I don't yet understand the process of making them) they would be very good, but what if the object is made from solid, non-deformed material? I.e. why would I make a highly detailed dumpster of 5000 polygons and then optimize it to 1500 to apply a normal map, if I can model it at 1500 anyway and just use UV maps for color/specular/bump?

thanks for link

colkai
10-26-2006, 06:07 AM
As far as I know, I think this is the way most folks still do it, but admittedly, it's a grey area for me as well. Like with many things, it's something I keep promising myself I'll look into further. :)

Sensei
10-26-2006, 07:35 AM
A normal vector is an offset in all three dimensions x, y, z. It can be normalized or unnormalized. A vector is normalized when its length, sqrt( (x*x)+(y*y)+(z*z) ), is equal to 1.0, in which case it's just a direction, without amplitude.

To make life easier, programmers use image maps for storing normals (otherwise they would have to invent a special normal map file format), in which case the red channel is x, the green channel is y, the blue channel is z, or a similar combination. But non-HDRI images usually have constant amplitude, and a 24/32-bit pixel has 8 bits per channel, which gives values between 0...255. This must be converted to a floating-point value by dividing by 255, with 0 mapped to 0.0 and 255 to 1.0, or with 0 mapped to -1.0, 128 to 0.0 and 255 to 1.0 (that's the ZBrush case, if I am not mistaken).
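
For illustration, a minimal sketch of that channel decoding, assuming the 0 -> -1.0, 128 -> 0.0, 255 -> 1.0 convention mentioned above (the exact mapping and channel order vary between tools):

import math

def decode_normal_pixel(r, g, b):
    """Convert 8-bit RGB channel values (0..255) into a normal vector in the -1..1 range."""
    x = (r / 255.0) * 2.0 - 1.0
    y = (g / 255.0) * 2.0 - 1.0
    z = (b / 255.0) * 2.0 - 1.0
    # Renormalize to length 1.0, since 8-bit quantization drifts off the unit sphere.
    length = math.sqrt(x * x + y * y + z * z)
    if length > 0.0:
        x, y, z = x / length, y / length, z / length
    return x, y, z

# A pixel of the typical "flat" normal-map blue (128, 128, 255) decodes to roughly (0, 0, 1).
print(decode_normal_pixel(128, 128, 255))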

An example of such a vector is a Morph vertex map in relative mode. The base point is the start point, and x+map_x, y+map_y, z+map_z gives the target point after applying the map.

A polygon normal vector is calculated from points 0, 1 and the last one. That's why the Flip tool reverses polygon sides: it's done by swapping point index 0 with the last, 1 with last-1, and so on. You should have a tool in Modeler that moves a polygon along its polygon normal vector. Because 3 vertices are needed for the calculation, 1- and 2-point polygons can't have a normal vector, and that's why they're invisible in the Perspective camera, or other cameras that ray-trace. The Classic camera doesn't ray-trace from the camera position (but surfaces do, if they are reflective or refractive, or if shaders want it); it does rendering similar to the SasQuatch/Fiber Factory pixel filters, in other words a post-process effect.
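
As a rough sketch of where a polygon normal comes from: a cross product of two edges built from the first, second and last points, so reversing the point order flips the result (the exact vertex choice and handedness here are illustrative, not necessarily LW's internal formula):

import math

def polygon_normal(points):
    """Unit normal from the first, second and last vertices of a polygon."""
    (x0, y0, z0), (x1, y1, z1), (xn, yn, zn) = points[0], points[1], points[-1]
    ax, ay, az = x1 - x0, y1 - y0, z1 - z0      # edge 0 -> 1
    bx, by, bz = xn - x0, yn - y0, zn - z0      # edge 0 -> last
    nx = ay * bz - az * by                      # cross product a x b
    ny = az * bx - ax * bz
    nz = ax * by - ay * bx
    length = math.sqrt(nx * nx + ny * ny + nz * nz)
    return (nx / length, ny / length, nz / length)

quad = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
print(polygon_normal(quad))         # (0.0, 0.0, 1.0) for this winding order
print(polygon_normal(quad[::-1]))   # (0.0, 0.0, -1.0) after "flipping" the point order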

A point normal vector is calculated from the normals of all the polygons that use the point. You should have a tool in Modeler that moves a point along its point normal vector. A normal map is a complete set of such point normal vectors.
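
And a minimal sketch of a point normal built that way, as a plain average of the unit normals of the adjacent polygons, renormalized afterwards (real implementations may weight by polygon area or angle):

import math

def point_normal(adjacent_polygon_normals):
    """Average the unit normals of every polygon that shares the point."""
    sx = sum(n[0] for n in adjacent_polygon_normals)
    sy = sum(n[1] for n in adjacent_polygon_normals)
    sz = sum(n[2] for n in adjacent_polygon_normals)
    length = math.sqrt(sx * sx + sy * sy + sz * sz)
    return (sx / length, sy / length, sz / length)

# A point shared by the top and front faces of a box ends up with a 45-degree normal.
print(point_normal([(0.0, 1.0, 0.0), (0.0, 0.0, -1.0)]))   # ~(0, 0.707, -0.707)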

Now, what are these normal maps? They're images where directions, or directions and amplitudes, are stored that say how much a given point should be moved. Normal maps are applied like any other image, and a UV map is used to restrict which 2D image area belongs to which 3D polygon. If you don't subdivide the geometry before applying a normal map, only the points that already exist are moved, and the rest of the normal map is not used, giving a poor effect; but the higher you set the subdivision, the more values from the normal map are read, like it happens with surface image maps, and then added to the point positions. It's basically point_x + normal_map_red, point_y + normal_map_green, point_z + normal_map_blue... Or, if the normal map is normalized and has no amplitude, point_x + normal_map_red * amplitude, point_y + normal_map_green * amplitude, point_z + normal_map_blue * amplitude..
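
A minimal sketch of the arithmetic in that last sentence (the names are illustrative, and this follows the "move the point by the stored vector times an amplitude" reading used in this post):

def displace_point(point, map_value, amplitude=1.0):
    """Add a decoded normal-map vector, scaled by an amplitude, to a point position."""
    px, py, pz = point
    nx, ny, nz = map_value          # already decoded to the -1..1 range
    return (px + nx * amplitude, py + ny * amplitude, pz + nz * amplitude)

print(displace_point((1.0, 2.0, 3.0), (0.0, 1.0, 0.0), amplitude=0.05))   # (1.0, 2.05, 3.0)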

Now, what's a bump map? It's an image where just amplitudes are stored, without directions. The directions are taken dynamically during rendering from Spot Info -> smoothed normal vector, which is then multiplied by the bump map amplitude, and that gives the normal vector.. A bump map can be a source for baking a normal map, but after that it's completely tied to the given object and UV; nothing can be modified, otherwise everything will be destroyed..

bryphi7
10-26-2006, 02:32 PM
Yes, you can bake and apply object-space normal maps in Layout...

bryphi7
10-26-2006, 04:16 PM
Here is a zip file with two scenes...
1. normtest.lws is set up to bake a normal map and an AO map from the high-res mesh to the low-res mesh.
2. nmapon.lws is the low-res mesh with the maps applied...

palpal
10-26-2006, 04:51 PM
I use a normalmap plug-in for LW... It is all in the Steve Warner tutorials... Look what I did in a day, after not understanding anything in the morning... http://www.spinquad.com/forums/showthread.php?t=13559

Yours PAL

Lewis
10-26-2006, 05:39 PM
Thanks guys, now it's clearer what normal maps are (especially after Sensei's technical explanation - thanks), but I think I'll skip them until I can render/make them in Modeler. I don't want to go in/out of Modeler/Layout to check/make normal maps. Maybe in the next LWs we will see some major improvements in Modeler and the possibility to link a renderer into it :).

cheers

Silkrooster
10-26-2006, 05:57 PM
I think normal maps are here to stay. That said, for the time being there are third-party programs that will create normal maps, such as Genetica: http://www.spiralgraphics.biz/
I haven't looked over what lightwave can do with them yet. :o
Silk

Lewis
10-27-2006, 03:21 AM
Yes, I'm sure normal maps will be a standard feature from now on, but I think it's too complicated to make them currently in LW :(. This Genetica looks nice; I'll download the demo to check it out properly, thanks.

bryphi7
10-27-2006, 03:28 AM
It really is easy to do in LW! I don't know about Genetica, but I don't think it will let you make a normal map from a high-res mesh to a low-res mesh, which is the whole point of making normal maps... ZBrush, Microwave, or the bake cam are your best bets...

colkai
10-27-2006, 03:46 AM
Lewis, Bryan is dead right, it's real easy.

I played about yesterday and I can tell you, you can make them in modeller and they are so durned easy I don't understand why I had such a problem with it.

Quick example, take a cube, subdivide with fractal and jitter it a few times so it's suitably bumpy.
Use a poly reduction tool (I use the PLG Simplify Mesh one) to create a low-res version. Or just create another cube and subdivide it once, maybe.

With the low poly in the foreground layer, create an atlas UV map for it.
Put the high-res model in the background, run the plugin "NormalMapcreate", accept the defaults, and save the image wherever ya like.

Save the object then load into layout, slap the TB_NormalMap shader on the low poly model, load in the generated NormalMap image under the AtlasUV you created. Render.

So simple, plus the "TB" shader can handle deformations.

bryphi7
10-27-2006, 03:51 AM
Save the object then load into layout, slap the TB_NormalMap shader on the low poly model, load in the generated NormalMap image under the AtlasUV you created. Render.

That is what the normal input in nodal is for...deforms fine too.

colkai
10-27-2006, 03:57 AM
Doh, durned 2-minute edit.

Meant to say, the high- and low-poly objects need to have different surface names, as the low-poly one will have the shader applied to the generated UV.


Quick example results:

colkai
10-27-2006, 03:57 AM
That is what the normal input in nodal is for...deforms fine too.
Ok, you knew I was gonna say this but...
Example?

Must admit, ain't touched nodes yet.

bryphi7
10-27-2006, 04:01 AM
look about 10 posts back...

colkai
10-27-2006, 04:05 AM
:twak: :bangwall: :bangwall:

Ok, colour me suitably embarrassed hehe....

Dave Jerrard
10-29-2006, 02:29 PM
Sensei, excellent description except for one thing:

Because 3 vertices are needed for the calculation, 1- and 2-point polygons can't have a normal vector, and that's why they're invisible in the Perspective camera, or other cameras that ray-trace. The Classic camera doesn't ray-trace from the camera position (but surfaces do, if they are reflective or refractive, or if shaders want it); it does rendering similar to the SasQuatch/Fiber Factory pixel filters, in other words a post-process effect.

Single- and two-point polygons are points and lines without thickness, which is why they can't be ray traced. Even in the Classic Camera, these will not show up in reflections, refraction or raytraced shadows (they will cast shadow maps, though, but those aren't raytraced). When the Classic Camera draws them, it does exactly that - it draws a pixel over the point where an SPP would be in the render, or a line connecting points in the render, and it draws these using the user-specified thickness in pixels. These are drawn after any real polygons are rendered for those points, if any. If the points aren't directly seen by the camera, then they don't get drawn over. A point isn't directly seen in reflections, ray traced transparency/refraction, or shadows.

In fact, since the lines drawn are straight and their thickness is defined in pixels, they're a difficult matter to handle in any of these situations - how do you draw a reflection of a line in a reflective or refractive ball, how thick is it, and how do you curve it? If the reflections and refractions are the same thickness, they could appear thicker than the line itself. The new ray tracing cameras have this same problem. You can think of these as rendering everything through a refractive transparent polygon. Polygon edges are no longer limited to rendering with straight edges - they can be curved - so a new method needs to be developed to do these that works for the new cameras. I think a gradient could do this if it was able to detect distance to Polygon Edges for all the current edge types. I'd even add an option for Distance to Patch Edge, which would be handy for subpatch wire renders like we see all over this forum, without having to create a multitude of different surfaces.

Again, the rest of that description was very informative.



Now, what's a bump map? It's an image where just amplitudes are stored, without directions. The directions are taken dynamically during rendering from Spot Info -> smoothed normal vector, which is then multiplied by the bump map amplitude, and that gives the normal vector.. A bump map can be a source for baking a normal map, but after that it's completely tied to the given object and UV; nothing can be modified, otherwise everything will be destroyed..

Normal maps should calculate faster than bump maps too. As you mentioned, normal maps have all the surface normal information embedded in them, but bump maps have to figure this out on the fly. This is done by comparing the spot being rendered with six adjacent spots, so effectively LW has to prerender six additional spots to obtain their bump amplitude to figure out which way the normal of the real spot should be perturbed. Back when I first started using LW on the Amiga on a whopping 7MHz system, you really felt the impact of using bumps. They were SLOW. If you could do without them, you did. :)

With the speed of today's machines, this stuff is handled so fast it's hard to see the small fraction of time these take to calculate, especially when we're throwing things like Radiosity and SSS into things now. A half second for a bump is easily lost in the general system overhead load on a CPU - drive access probably slows things down a lot more now.


He Who Loves Seeing Technical Stuff Expressed In Layman's Terms.

Lewis
10-29-2006, 03:22 PM
Lewis, Bryan is dead right, it's real easy.

use a poly reduction (I use the PLG simpify mesh one), to create a low res version. Or just create another cube and subdivide it once maybe.

With the low poly in the foreground layer, create an atlas UV map for it.
Put the high res model in the background, run the plugin "NormalMapcreate" and accept the defaults, saving the image where ever ya like.

Save the object then load into layout, slap the TB_NormalMap shader on the low poly model, load in the generated NormalMap image under the AtlasUV you created. Render.



I guess those plugins are 3rd party, since I don't find any plugin/button/command named "NormalMapcreate" in my LW9?? How do I do it OUT of the BOX then??

Sensei
10-29-2006, 03:31 PM
Even in the Classic Camera, these will not show up in reflections, refraction

But reflection and refraction are the most common ray-tracing operations, and exactly the same ones used by shaders, nodals and the LW9 cameras.. We have the following LW ray-tracing functions:

ray-cast - checks the distance from a starting position in a given direction to the nearest geometry; returns -1.0 if the ray goes outside of the scene geometry..

ray-shade - calls ray-cast and, if it didn't return -1.0, checks at the surface hit position what the color, diffuse, specular etc. are; useful to learn what a channel value is at that position.. We're using it to extract additional buffer data in VirtualRender if the user wants them.

ray-trace - calls ray-shade and flattens the color to just RGB.. That's what ACT used when it was a 3rd party plug-in, not included in LW9..

There are a few more for lights, and some low-level ones introduced in LW9, that I leave without comment..
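
A conceptual sketch of that layering in plain Python - these are not the actual LW SDK calls, and the scene/surface structures are made up for illustration:

def ray_cast(scene, origin, direction):
    """Return the distance to the nearest hit, or -1.0 if the ray leaves the scene."""
    hits = [poly.intersect(origin, direction) for poly in scene.polygons]
    hits = [d for d in hits if d is not None and d > 0.0]
    return min(hits) if hits else -1.0

def ray_shade(scene, origin, direction):
    """Call ray_cast, then evaluate the surface channels at the hit position."""
    distance = ray_cast(scene, origin, direction)
    if distance < 0.0:
        return None
    hit_point = tuple(o + d * distance for o, d in zip(origin, direction))
    surface = scene.surface_at(hit_point)
    return {"color": surface.color, "diffuse": surface.diffuse,
            "specular": surface.specular, "distance": distance}

def ray_trace(scene, origin, direction, background=(0.0, 0.0, 0.0)):
    """Call ray_shade and flatten the result to a single RGB value (simplified here)."""
    shaded = ray_shade(scene, origin, direction)
    if shaded is None:
        return background
    r, g, b = shaded["color"]
    return (r * shaded["diffuse"], g * shaded["diffuse"], b * shaded["diffuse"])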



Normal maps should calculate faster than bump maps too. As you mentioned, normal maps have all the surface normal information embedded in them, but bump maps have to figure this out on the fly. This is done by comparing the spot being rendered with six adjacent spots, so effectively LW has to prerender six additional spots to obtain their bump amplitude to figure out which way the normal of the real spot should be perturbed.

Only the image map or procedural texture is evaluated 6 times to get the so-called "gradient".. 2 points for the X axis, 2 points for the Y axis, 2 points for the Z axis.. It's a very easy calculation, plus a vector normalization which needs sqrt().. Procedural textures can return a pre-calculated gradient, instead of leaving this task to LW, which might be much faster, if implemented properly..
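
A minimal sketch of that six-sample gradient, assuming a scalar bump function sampled at +/- a small delta along each axis (a central difference), then normalized:

import math

def bump_gradient(bump, x, y, z, delta=0.001):
    """Central-difference gradient of a scalar bump texture: 2 samples per axis, 6 total."""
    gx = bump(x + delta, y, z) - bump(x - delta, y, z)
    gy = bump(x, y + delta, z) - bump(x, y - delta, z)
    gz = bump(x, y, z + delta) - bump(x, y, z - delta)
    length = math.sqrt(gx * gx + gy * gy + gz * gz)
    if length == 0.0:
        return (0.0, 0.0, 0.0)
    return (gx / length, gy / length, gz / length)

# Example with a simple procedural "bump" that rises along X.
print(bump_gradient(lambda x, y, z: math.sin(x), 0.0, 0.0, 0.0))   # ~(1, 0, 0)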


Back when I first started using LW on the Amiga on a whopping 7MHz system, you really felt the impact of using bumps. They were SLOW. If you could do without them, you did. :)

Well, the 68000 had very slow higher-level math instructions.. e.g. IIRC a simple 16-bit by 16-bit multiplication took 70 CPU cycles, and dividing 32-bit by 16-bit took 140 cycles.. Remember there were just 7,060,000 cycles per second.. That's only about 100 thousand integer multiplications per second! Floating-point math had to be emulated with integer instructions.

I don't think LW used the current way of calculating even bump maps; there had to be game-like tricks to speed everything up..

Sensei
10-29-2006, 03:35 PM
I guess those plugins are 3rd party, since I don't find any plugin/button/command named "NormalMapcreate" in my LW9?? How do I do it OUT of the BOX then??

It's a free plug-in (but it shouldn't be)! http://amber.rc.arizona.edu/lw/normalmaps.html

Lewis
10-29-2006, 03:40 PM
Yes, Sensei, I found it later after I posted (it's rather a set of a few plugins). I just don't like to build my workflow around plugins (especially free ones), since it can always go "down" when the next version ships and older plugins stop working for whatever reason (new core, new LScript, new plugin handler...) :(. Is there another solution/way to do normal maps with the default Modeler 9.0, out of the box?

thanks

Sensei
10-29-2006, 03:59 PM
Well, not directly..
But I just realized that it might be possible to create a nodal expression to do such an operation, together with the surface-baking camera, but setting this up won't be a trivial task..

Dave Jerrard
10-29-2006, 04:04 PM
But reflection and refraction are the most common ray-tracing operations, and exactly the same ones used by shaders, nodals and the LW9 cameras.. We have the following LW ray-tracing functions:

ray-cast - checks the distance from a starting position in a given direction to the nearest geometry; returns -1.0 if the ray goes outside of the scene geometry..

ray-shade - calls ray-cast and, if it didn't return -1.0, checks at the surface hit position what the color, diffuse, specular etc. are; useful to learn what a channel value is at that position.. We're using it to extract additional buffer data in VirtualRender if the user wants them.

ray-trace - calls ray-shade and flattens the color to just RGB.. That's what ACT used when it was a 3rd party plug-in, not included in LW9..

There are a few more for lights, and some low-level ones introduced in LW9, that I leave without comment..

But all of these can only see geometry that has a thickness. Particles and polylines have no thickness. It takes a post-process effect in the Classic Camera to draw these. This should be possible with the Perspective Camera, since it could be handled the same way with a filter, but when you get into cameras that have curved distortions, these things need to be rendered a different way. A surfacing technique would work, and another way would be to create virtual geometry for them. Either method would then allow these to appear in any raytraced effects. You then have to worry about their thickness due to perspective effects, which can be a whole new can of worms. Hopefully the team is working on these now. Edges and particles are too important a feature to ignore with the new cameras.






Only the image map or procedural texture is evaluated 6 times to get the so-called "gradient".. 2 points for the X axis, 2 points for the Y axis, 2 points for the Z axis.. It's a very easy calculation, plus a vector normalization which needs sqrt().. Procedural textures can return a pre-calculated gradient, instead of leaving this task to LW, which might be much faster, if implemented properly..

Right. I should have made a distinction. I was referring to just image maps before.




Well, the 68000 had very slow higher-level math instructions.. e.g. IIRC a simple 16-bit by 16-bit multiplication took 70 CPU cycles, and dividing 32-bit by 16-bit took 140 cycles.. Remember there were just 7,060,000 cycles per second.. That's only about 100 thousand integer multiplications per second! Floating-point math had to be emulated with integer instructions.

I don't think LW used the current way of calculating even bump maps; there had to be game-like tricks to speed everything up..

Like I said, it could be slow. On a stock A1000, a simple reflective ball on a checkerboard at 640x400 would take about 8 hours. I was doing wineglass renders in Sculpt that would take upwards of 50 hours for a low-res HAM image (320x400). Most LW renders at that time, even without raytracing (which wasn't available until version 2.0), would take about 20-40 minutes. Once I got a 68030 (36MHz) and a math co-pro, images that took LW 8+ hours were suddenly done in under 20 minutes. Most renders took about 10 minutes. These still didn't have any real antialiasing though. At the time, the Antialiasing setting was just rendering the image at double resolution. I get a chuckle every time I hear someone complaining that they can't afford to have a render take more than a couple of minutes a frame. They obviously never had to deal with the old days. :)


He Who Remembers When It Could Take Over 30 Seconds To Compress A JPEG.

bobakabob
10-30-2006, 11:17 AM
:twak: :bangwall: :bangwall:

Ok, colour me suitably embarrassed hehe....

Colkai,

Thanks for the normal map info - very useful. Btw, you're going to love nodes :). Check out Splinegod's tuts in 3D World and Proton's tutorials on the LightWave site - a great introduction to a very powerful feature.

colkai
10-31-2006, 04:39 AM
Colkai,
Thanks for the normal map info - very useful.
You're more than welcome. :D


Btw, you're going to love nodes :). Check out Splinegod's tuts in 3D World and Proton's tutorials on the LightWave site - a great introduction to a very powerful feature.
Hoping to do just that, this is highly technical stuff but I'm sure, like most things, once I wrap my head around it, I'll wonder why I had so much trouble wrapping my head around it. :p

Dodgy
10-31-2006, 04:57 AM
Note that the normal map node in LW is tangent space, and not object space, so you must set that toggle in the plugin.

I for one hope NT add some sort of easy method for baking tangent space normal maps or a shading node is made for this as it's a pain having to go out of LW and the modeler plugin would be much more powerful if you could make it a layout shader.