
View Full Version : Bump map from greyscale in node editor?



jrandom
02-14-2010, 12:36 PM
This one strikes me as bizarre since it's usually one of the easier things to do in 3D programs.

In the node editor: I want to take a greyscale output (built from mixing 3D texture nodes; either color or alpha channels, I'm flexible) and use that final output as a bump map, but I can't seem to find any way to go from greyscale color/alpha to a vector-format bump map.

This has to be possible, but I'll be darned if I can figure it out.

jrandom
02-14-2010, 02:29 PM
Example: I'd like to take the node structure shown in the screenshot below, and use that greyscale output as a bump map.

Sensei
02-14-2010, 04:57 PM
If I recall correctly, DPont made a plug-in that converts color and/or scalar output to a bump vector.
Such a conversion is always slow (even when a texture/image node has a bump vector output directly), because it requires calculating 6 spots for each bump vector returned.
So anyone who cares about speed should consider converting bumps to normal maps, which are free of such calculations.

jrandom
02-14-2010, 05:03 PM
If I recall correctly Dpont made plug-in converting color and/or scalar to bump vector.

I'm a fan of the DPont lights, but I didn't know about this other plugin. I shall take a look, thank you!


... So, anyone caring about speed, should consider converting bumps to normal maps, which are free from such calculations.

Wouldn't generating normal maps from 3D textures luma also incur the performance hit, or am I missing something obvious... :confused:

Sensei
02-14-2010, 05:12 PM
By generating a normal map, I meant baking it to an image file on disk.
In LW v9.x you have a special Baking Camera for this task (owners of older LW versions had to buy the Microwave plug-in: http://www.evasion3d.com/mw_lw_prodinfo.html).
This way the normal vector is read from the image file and used as-is, without any CPU-heavy calculations. That can be crucial if a procedural texture, or even the whole node tree, takes a long time to render.
If you're doing work for games, there is also no other option: procedural textures and the like must be removed by baking them to textures.

On my website http://www.trueart.eu you can find a couple of tools that help with the baking process: SurfaceBakingLights and the Node Library's Normal Pass node.

jrandom
02-14-2010, 05:19 PM
The DPont Scalar->Bump doesn't appear to work with anything I throw at it (at any scale), although it does slow the render down substantially.

jrandom
02-14-2010, 05:27 PM
I. Am. A. Moron. :thumbsup:

I've been using the IFW2 nodes (a heck of a deal at $80) and the Combiner Node has a Bump output. I tried using this earlier but nothing happened.

On a whim, I hooked it back up and set the bump scale to 1,000,000 and it worked! Turns out I need ridiculous scaling on my bump map for it to show up.

Not only does this allow me to convert greyscale to bump, but it does it an order of magnitude faster than the DPont scalar-to-bump node.

Edit: They actually run at the same speed -- for some reason the DPont node ran really slowly the first time, but it now runs at the same speed as the IFW2 bump conversion.

jrandom
02-14-2010, 06:39 PM
I'd like to thank you all for putting up with my silly questions and helping me find my way through this enormously complex program.

As a result, I get to make things like this!

*bounce* *bounce* *bounce*

IRML
02-14-2010, 06:53 PM
I can't understand why bump isn't a scalar input, or why NewTek doesn't provide a scalar-to-bump node

jrandom
02-14-2010, 06:56 PM
I can't understand why bump isn't a scalar input, or that newtek don't provide a scalar to bump node

From what I can tell, Bump is implemented as a vector, so it's very similar to a normal map in that regard. I'm with you on that second one, though -- you'd think that greyscale-to-bumpmap would be a standard node, along with the ability to mix bump maps (which one can do via the Composite node in the IFW2 node collection, also used in the above rendered image to make the bumps in the lowland areas less than in the highlands).

Lightwolf
02-14-2010, 07:07 PM
I can't understand why bump isn't a scalar input, or that newtek don't provide a scalar to bump node
Basically because a bump is a gradient/slope.

In 2D terms, think of the cross section of a mountain. A single scalar is thus the elevation at any point on the mountain.
However, the bump is based on the slope, which in turn is basically defined by the elevation before and after the actual point you're looking at.
Well, 3D bumps actually have a slope on all three axes, but the principle is the same.
The problem with the scalar is that it basically returns the elevation at the point. Nothing is known about the surroundings.

Having said that, I think Denis has a node in the DPKit that can produce a bump. Mind you, to do it, it needs to evaluate the upstream scalar (and everything hooked up to it) 4-6 times, which is why it's faster to use native bump outputs when possible.
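Those 4-6 evaluations are just finite differences. A toy Python sketch (not DPont's actual code; the `height` function stands in for whatever scalar the upstream node tree outputs):

```python
# Sketch: turning a scalar "elevation" field into a bump vector
# via central differences. Six extra samples per spot -- this is
# why a scalar-to-bump node re-evaluates the upstream tree.

import math

def height(x, y, z):
    # Stand-in for the upstream node tree's scalar output.
    return math.sin(x) * math.cos(z)

def bump_vector(x, y, z, eps=1e-4):
    # Central difference on each axis: 6 evaluations of height().
    dx = (height(x + eps, y, z) - height(x - eps, y, z)) / (2 * eps)
    dy = (height(x, y + eps, z) - height(x, y - eps, z)) / (2 * eps)
    dz = (height(x, y, z + eps) - height(x, y, z - eps)) / (2 * eps)
    return (dx, dy, dz)

print(bump_vector(0.0, 0.0, 0.0))  # slope is steepest along x here
```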

Cheers,
Mike

jrandom
02-14-2010, 07:11 PM
... Having said that, I think denis has a node in the dpkit that can produce a bump. Mind you, to do it it needs to evaluate the upstream scalar (and everything hooked up to it) 4-6 times. Which is why it's faster to use native bump outputs if possible.

Yep, it's noticeably slower. The reason I needed it is that the procedural texture I used to make the craters outputs the inverse of what I need (the Pebbles IFW2 node) and I have no idea how to invert a bump channel.

(I also used the technique of adding various Pebble nodes together to get the craters-upon-craters look -- I don't even know if this is possible using the bump channel.)

Lightwolf
02-14-2010, 07:13 PM
Yep, it's noticeably slower. The reason I needed it is that the procedural texture I used to make the craters outputs the inverse of what I need (the Pebbles IFW2 node) and I have no idea how to invert a bump channel.
Inverting the vector should work (multiply by -1). That basically just flips the "virtual" direction, as defined by the slopes, into the opposite direction in 3D space.
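As a quick sketch of that multiply-by-minus-one idea (toy code, not an actual LW node):

```python
# Sketch: inverting a bump vector by multiplying each component by -1.
# This flips the slope direction, turning ridges into grooves.

def invert_bump(v):
    return tuple(-c for c in v)

print(invert_bump((0.2, -0.5, 0.1)))  # -> (-0.2, 0.5, -0.1)
```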

Cheers,
Mike

jrandom
02-14-2010, 07:24 PM
Inverting the vector should work (multiply by -1). That basically just flips the "virtual" direction, as defined by the slopes, into the opposite direction in 3D space.

Handy thing to know. :) Doesn't help me out in this case, but now any bump map I need to invert that isn't built up of multiple layers is now within my grasp for speedy rendering!

(I really need to study up on my vector math. I've forgotten most of what I've ever learned about it.)

Sensei
02-15-2010, 04:27 AM
In the simplest case a polygon is flat: its normal vector points in one direction regardless of where on the triangle you are (a box, for example). This is called the Geometric Normal in the Spot Info node.

In a slightly more complex case, the normal vectors at the vertices of a polygon point in directions calculated from all the normal vectors of the polygons the point is attached to: they're basically added together and normalized. That's called the Smoothed Normal in the Spot Info node. Normal vectors inside a triangle are interpolated from the vertex normals.

These normal vectors must always be normalized, which means the length of the vector equals 1.0. That's essential, because later this normal is used to calculate shading (diffuse = -DotProduct(normal_vector, light_direction)).

If somebody uses a bump image, a bump vector is calculated from the raw image RGB data. This must be done when the ray hits the polygon (because that's the moment when the spot size has a known value), and that is what slows down rendering.

This bump vector is not normalized: its length is not necessarily 1.0.

The bump scalar visible to the user in the main Surface Editor window just multiplies the bump vector. That's why setting it to 100000% won't change the rendered image a bit unless you have some texture: multiplying (0,0,0) by 100000% always gives 0.

The renderer subtracts the bump vector from the normal vector, normalizes the result, and then does -DotProduct(normal_vector, ray_direction). In the LWSDK this value is present as shaderaccess/nodalaccess->cosine. In the application it is visible to the user as Incidence Angle in Spot Info/Gradient.

As you can imagine, a bump vector with length less than, for example, 0.1 will modify the normal vector (with constant length 1.0) by a very small amount; such a bump will be barely visible.
But a bump vector with length >1.0 will change the normal vector dramatically.
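The steps above can be sketched in Python (toy vectors; a sketch of the described arithmetic, not LW's actual renderer code):

```python
# Sketch of the shading steps described above: subtract the bump
# vector from the normal, re-normalize, then take a dot product.

import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def shade(normal, bump, light_dir):
    # Perturb the normal by the (unnormalized) bump vector.
    perturbed = tuple(n - b for n, b in zip(normal, bump))
    perturbed = normalize(perturbed)
    # diffuse = -DotProduct(normal_vector, light_direction)
    return -sum(n, l_) if False else -sum(n * l for n, l in zip(perturbed, light_dir))

normal = (0.0, 1.0, 0.0)   # surface pointing up
light = (0.0, -1.0, 0.0)   # light shining straight down

print(shade(normal, (0.0, 0.0, 0.0), light))  # no bump: full diffuse, 1.0
print(shade(normal, (0.5, 0.0, 0.0), light))  # bump tilts the normal, < 1.0
```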

jrandom
02-15-2010, 09:36 AM
It also appears that LightWave takes the size of the object into account (unlike other 3D programs I've used), which is why my bump map did not initially show up on my 900km-diameter object.

Elmar Moelzer
02-15-2010, 01:22 PM
Well, scalar bump maps work fine in the classic, layered Surface Editor, and they are very fast too. So I don't know what Antti has done there, but IMHO it ain't no good.

Lightwolf
02-15-2010, 01:25 PM
Well scalar Bump maps work fine in the classic, layered Surface Editor and they are very fast too. So I dont know what Antti has done there, but IMHO it aint no good.
The classic surface editor computes 6 samples as well (which is equivalent to using scalar nodes) or uses the direct bump gradient if the texture layer can return it (which is optional).
However, the evaluation loops in the texture layers are just quicker, nodes have a higher overhead.

Cheers,
Mike

IRML
02-17-2010, 09:56 AM
Basically because a bump is a gradient/slope.

In 2D terms, think of the cross section of a mountain. A single scalar is thus the elevation at any point on the mountain.
However, the bump is based on the slope, which in turn is basically defined by the elevation before and after the actual point you're looking at.
Well, 3D bumps actually have a slope on all three axes, but the principle is the same.
The problem with the scalar is that it basically returns the elevation at the point. Nothing is known about the surroundings though.

All the bump maps I've ever used have just been greyscale elevation maps, so I don't see why the bump input is a vector when the displacement is only going in one direction

You say a bump is based on the slope? I'm guessing this is so you don't see the steps of individual pixels being displaced in the bump map? But that doesn't explain to me why the input isn't scalar; surely LightWave should evaluate this slope after the values are connected to the bump input, not before?

am I misunderstanding what a bump map does? is it more complicated than I thought?

Sensei
02-17-2010, 10:13 AM
all the bump maps I've ever used have just been greyscale elevation maps, so I don't see why the bump input is vector when the displacement is only going in one direction

It goes in all directions.

Bump maps and normal maps do not actually displace anything; they modify the spot's normal vector. This normal vector is then used in a dot-product operation with the light direction, giving a diffuse scalar (the same as in rendered image buffers).



you say a bump is based on the slope? I'm guessing this is so you don't see the steps of individual pixels being displaced in the bump map? but that doesn't explain to me why the input isn't scalar, surely lightwave should evaluate this slope after the values are connected to the bump input, not before?

That's what DPont's scalar-to-bump node does.



am I misunderstanding what a bump map does? is it more complicated than I thought?

Yes. Read my post #15; it describes in technical detail how a 3D ray-tracing renderer works.

IRML
02-17-2010, 10:25 AM
I know it's not a proper displacement; it's just the only word I could think of to describe it.

I'm afraid your post number 15 goes way over my head. My point is that I thought a bump map only displaced in one direction (the direction of the normal), so I couldn't understand why it needed a vector input.

Lightwolf
02-17-2010, 10:51 AM
am I misunderstanding what a bump map does? is it more complicated than I thought?
Hm, let me try this from a different angle.

I was actually thinking about what diagrams to draw, so here's a (verbal) description (one more attempt before I will draw diagrams ;) ).

Assume you have a modelled hemisphere. If you look at the normals in Modeler you'll see how they fan out at the sides, while the ones at the top point upwards, as those on a normal plane would do.
Now, if you look at it from the top, you see the kind of shading that a bump map should fake onto a plane (in this simple case). It does that by taking the normals and bending them to look like they do on the hemisphere.
A normal map would just store the difference between the vertical normal of the plane and the bent normal on the hemisphere (or just the bent normal) and use that during the render.
A bump map in this case is the equivalent of rendering the hemisphere unshaded from the top view, with the high areas white and the low areas black (or vice versa) -- which, as you've stated, gives you a displacement or elevation map.

Now, the question is, how does LW take that elevation map and turn it into a vector to bend the normal (more precisely: how is the bump map converted into a normal map on the fly) when rendering/shading?

That's where the slopes come in. Look at the cross section of the hemisphere and select a point. Now select the two points on either side, draw a line through them. That's an approximation of the slope at the initial point you picked. And the slope in turn tells you by how much to bend the normal on this axis.
Do that for all three dimensions (actually, 2D is enough for simple image maps, but 3D is needed for procedurals) and you've got a 3D slope that can modify the normal.
And, as you've seen, you need to evaluate the points/pixels around the current point/spot to get the slope.
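For an image map, the same neighbour-sampling idea works per pixel. A toy sketch of turning a small greyscale height map into per-pixel normals (my own illustration, not LW's actual conversion; clamping at the borders is an assumption):

```python
# Sketch: deriving a per-pixel normal from a greyscale height map by
# looking at the pixels on either side (the "slope" described above).

import math

def normal_from_height(hmap, x, y, strength=1.0):
    w, h = len(hmap[0]), len(hmap)
    # Clamp at the borders so every pixel has neighbours.
    left = hmap[y][max(x - 1, 0)]
    right = hmap[y][min(x + 1, w - 1)]
    up = hmap[max(y - 1, 0)][x]
    down = hmap[min(y + 1, h - 1)][x]
    # Slopes on each axis, scaled by a user bump strength.
    dx = (right - left) * strength
    dy = (down - up) * strength
    # Bend the straight-up normal (0, 0, 1) by the slopes.
    length = math.sqrt(dx * dx + dy * dy + 1.0)
    return (-dx / length, -dy / length, 1.0 / length)

heights = [
    [0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0],
]
print(normal_from_height(heights, 1, 1))  # peak pixel: slopes cancel, normal points straight up
print(normal_from_height(heights, 0, 1))  # left of peak: normal tilts away from the peak
```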

Cheers,
Mike

IRML
02-17-2010, 11:02 AM
Hm, let me try this from a different angle.

I was actually thinking about what diagrams to draw, so here's a (verbal) description (one more attempt before I will draw diagrams ;) ).

Assume you have a modelled hemisphere. If you look at the normals in modeler you'll see how they fan out at the sides, while the ones at the top point upwards, as those on a normal plane would do.
Now, if you look at it from the top you see the kind of shading that a bump map should fake onto a plane (in this simple case). It does that by taking the normals and bending them to look like they do on the hemi-sphere.
A normal map would just store the difference between the vertical normal of the plane and the bent normal on the hemisphere (or just the bent normal) and use that during the render.
A bump map in this case is the equivalent of rendering the hemisphere unshaded from the top view, with the high areas being white, the low areas being black (or vice versa) - which, as you've stated, gives you a displacement or elevation map.

Now, the question is, how does LW take that elevation map and turn it into a vector to bend the normal (more precisely: how is the bump map converted into a normal map on the fly) when rendering/shading?

That's where the slopes come in. Look at the cross section of the hemisphere and select a point. Now select the two points on either side, draw a line through them. That's an approximation of the slope at the initial point you picked. And the slope in turn tells you by how much to bend the normal on this axis.
Do that for all three dimensions (actually, 2D is enough for simple image maps, but 3D is needed for procedurals) and you've got a 3D slope that can modify the normal.
And, as you've seen, you need to evaluate the points/pixels around the current point/spot to get the slope.

Cheers,
Mike

Interesting, I thought the whole idea of normal maps was that bump maps couldn't bend the normal in this way, so normal maps were better quality. I guess I was wrong.

But still, what you've just described, and the fact that bump needs a vector input, suggests to me that this is all done within the node that supplies the bump input. So I still don't really understand why bump can't be a scalar input that gets evaluated afterwards, like you say, and why something like a procedural couldn't output normal vectors instead.

Elmar Moelzer
02-17-2010, 11:09 AM
However, the evaluation loops in the texture layers are just quicker, nodes have a higher overhead.

Considering that I have been using bump maps in LW since version 5 and have never noticed a significant slowdown with them, I suppose this means that nodal bump maps are A LOT slower?
If so, what does that mean for CORE?

Lightwolf
02-17-2010, 11:11 AM
interesting, I thought the whole idea for normal maps was that bumps couldn't bend the normal in this way, so normal maps were better quality, I guess I was wrong
Well, first of all they're faster (and easier to use in realtime rendering).
You can also create a normal map from any kind of bump, but not vice versa (normal maps can have negative components that point in the "other" direction).


but still, what you've just described and the fact that bump needs a vector input suggests to me that this is all done within the the node that supplys the bump input, so I still don't really understand why bump can't be a scalar input that gets evaluated like you say afterwards, and for something like a procedural could output normal vectors instead
The bump output is a vector, just like the normal is. Three values.
There's also a bump strength input in LW, but that's just a multiplier.

"Afterwards" from a single, scalar, value in a nodal tree means evaluating everything that is upstream 4-6 times.
In my example, the scalar value that is passed on in your node tree would be the elevation of the current point. And that's not what you want or need to compute the slopes (well, it could be used, but it wouldn't be sufficient alone).
And LWs nodal system is mainly designed to evaluate the values once per settings that defines how a spot is rendered (Denis was the first to break that limitation with his image blur and bump map creation nodes - both of which add extra evaluation of the upstream nodes at different points on 3D space).

Cheers,
Mike

Lightwolf
02-17-2010, 11:12 AM
If so, what does that mean for CORE?
It means that they better optimize the heck out of their nodal connections. :D

Cheers,
Mike

Sensei
02-17-2010, 11:34 AM
Considering that I have been using bump maps in LW since version 5 and I have never noticed a signifficant slowdown with them, I suppose that this means that Nodal Bump Maps are A LOT slower?
If so, what does that mean for CORE?

Computers have changed a lot between the LW v5 era and current LW v9.x or CORE; speeds have increased 10,000% or more.
Also, LW v9.0 received a kd-tree for the first time.
In versions before LW v9.x, a lot of time was spent ray-tracing objects seen in reflections and refractions, so the influence of shading was much less visible. Imagine that ray-tracing takes 10 sec of a rendered image, while bump method 1 takes just 0.5 sec and bump method 2 takes 1.0 sec: the totals are 10.5 and 11 sec (a difference of about 4.7%). But once developers speed up ray-tracing to 1 sec, method 1 takes 1.5 sec total and method 2 takes 2 sec (a 33% difference).

Lightwolf
02-17-2010, 11:36 AM
Computers in LW v5 times and current LW v9.x or Core changed a lot. Speed changed 10,000% or more..

Well, Elmar was comparing the current layered texturing system to the current nodes.

Cheers,
Mike

Edit: Ooops, I almost missed it... 10.000! Time to have a beer :D

Sensei
02-17-2010, 11:41 AM
Edit: Ooops, I almost missed it... 10.000! Time to have a beer :D

I will never catch you unless I kill you.. :p

Lightwolf
02-17-2010, 11:44 AM
I will never catch you, unless I will kill you.. :p
You'll have to catch me first to kill me ... and it looks like you still need a few years for that :D

Cheers,
Mike

Elmar Moelzer
02-17-2010, 01:46 PM
It means that they better optimize the heck out of their nodal connections.

Yeah, lets hope the best for that.
There is actually growing concern on my team about the possibility that the nodal basis of CORE could make VoluMedic slower once we port it.
Right now it really flies. I have not done a real side-by-side comparison, but I think that in some situations we are faster than polygon rendering. I should do one some day; that would be interesting.
Still, even if we were only half as fast, we would be pretty fast for a volume renderer...
I would hate to lose that.

Sensei
02-17-2010, 02:25 PM
In VoluMedic you don't edit surface parameters like in the Surface Editor's Node Editor, so how would it be affected? I don't see any connection..

TrueHair Previewer, FPrime, Kray, and TrueInfinitePlane are affected; these call Node Editor functions and leave the shading to LW or third-party nodes.

Myagi
02-17-2010, 03:02 PM
interesting, I thought the whole idea for normal maps was that bumps couldn't bend the normal in this way, so normal maps were better quality, I guess I was wrong

Normal maps are more accurate, and can have more drastic (sharper) normal changes from pixel to pixel, so you were on the right track.

An attempt to explain it in a less techie fashion: consider that in order to determine how a normal is perturbed by a greyscale bump map, you need a number of neighboring pixels to calculate the "slope", whereas from a normal map you only need a single pixel at any given spot to determine the perturbed normal (as described in earlier posts).

Since the bump map requires neighboring pixels, it intuitively (hopefully? :) ) suggests that it doesn't contain equally detailed information as a normal map (which has no dependency on adjacent pixels) at the same resolution.

Lightwolf
02-17-2010, 03:09 PM
Since the bump map requires neighboring bump map pixels it intuitively (hopefully? :) ) suggests it doesn't contain equally detailed information as a normal map (which has no dependency on adjacent pixels) at the same resolution.
Well, the point is that with neighbouring pixels you can only cover a limited range of slopes.
A graphical representation would be a circle (the centre being the spot being sampled) and a line connecting two points on either side of the circle. The line in this case is the slope (with the normal perpendicular to it).
A bump map requires each point to stay on its side; a normal map would allow them to cross to the other side (and thus allows a full 360 degrees).
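That limit can be shown numerically: with the usual slope-to-normal construction, the component along the base normal is always positive, so a slope-derived normal can approach but never pass 90 degrees, while a normal map can simply store any direction. A sketch (the formula is my own illustration of the idea, not LW's code):

```python
# Sketch: a slope-derived normal can never point "backwards",
# because its component along the base normal stays positive.

import math

def normal_from_slope(dx, dy):
    length = math.sqrt(dx * dx + dy * dy + 1.0)
    return (-dx / length, -dy / length, 1.0 / length)

# Even an extreme slope only approaches 90 degrees, never passes it.
for slope in (0.0, 1.0, 100.0, 1e6):
    assert normal_from_slope(slope, 0.0)[2] > 0.0

# A normal map, by contrast, can simply store a backwards-facing normal:
stored = (0.0, 0.0, -1.0)  # legal in a normal map, unreachable from a slope
```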

I should really draw the stuff :D

Cheers,
Mike

Myagi
02-17-2010, 03:35 PM
Well, the point is that you can only cover a range of slopes with neighbouring pixels.
A graphical representation would be a circle (the centre being the spot being sampled) and a line connecting two points on either side of the circle. The line in this case is the slope (and the normal perpendicular to it).
A bump map requires either point to stay on it's side, a normal map would allow them to cross to the other side (and thus allows for a full 360).

True, I was trying to keep it simple :). And perturbed normals of >=90 degrees are a bit theoretical, for real-time rendering at least, since lights might get culled below the face plane and pixel normals no longer get lit correctly (it already happens to some degree, but excessive use of extreme normals would just make it even worse).

Lightwolf
02-17-2010, 03:38 PM
True, I was trying to keep it simple :).
I'm afraid my explanation is only simple if you actually draw it ... it's harder to explain.

Cheers,
Mike

Myagi
02-17-2010, 03:46 PM
I'm afraid my explanation is only simple if you actually draw it ... it's harder to explain.

Perhaps explain it as something more familiar to non-techies, like imagining a bump map as a vertex displacement map of a simple grid mesh (i.e. simple terrain). That gets the general idea of the limitations across.

Lightwolf
02-17-2010, 03:47 PM
perhaps explaining it as something more familiar to non-techies, like imaging a bump map as a vertex displacement map of a simple grid mesh (ie. simple terrain).
Gee, and I thought a simple circle was easy enough :D

Cheers,
Mike

Myagi
02-17-2010, 03:48 PM
it's so mathematical ;)

Lightwolf
02-17-2010, 04:15 PM
it's so mathematical ;)
D'oh, what was I thinking... Obviously it's a disc!

Cheers,
Mike *grins*

Elmar Moelzer
02-18-2010, 12:56 AM
In VoluMedic you don't edit surface parameters like in Surface Editor's Node Editor, so how it would affect it? I don't see any connection..

Well, we do have two texture editors (with stackable layers) at the moment, for Color and Opacity, with the option for more in the future.
All other parameters, like Luminosity, Diffuse, Specularity, etc., are just mini-sliders right now.
Of course every additional supported texture reduces performance, so we have to be careful with that, but we have a few ideas to make that a little more interesting in the future. Don't let yourself get confused, though: you don't need that many texture controls in VoluMedic to make things look good. It is hard to compare to a normal LW object.