View Full Version : Spot Definition?



Surrealist.
01-15-2007, 02:11 PM
Does anyone have a definition for "spot" as it's used in the documentation for the Node Editor? Is this the same thing as a pixel?

The vast majority (almost all) of connections in the Node Editor are evaluated per spot therefore if the incoming color connection has a variance in the shader or texture values that is affecting the color channel (i.e. a pattern of some kind) then that varied color value will be supplied to the color input it is connecting to. Simply put, if you connect the color output of a red and green marble texture to a color input then that input will receive a red and green colored marble pattern - evaluated on a spot by spot basis.

Thanks in advance.

Sensei
01-15-2007, 02:16 PM
It's a direct use of a word that is in the LW SDK. A spot is the place that is currently being evaluated by a shader, node or texture.

Surrealist.
01-15-2007, 02:28 PM
It's a direct use of a word that is in the LW SDK. A spot is the place that is currently being evaluated by a shader, node or texture.

Pardon the feedback loop, but what is meant by "place"? What place? How much space - pixels, whatever? Defined by what?

Thanks:)

Sensei
01-15-2007, 04:02 PM
At any one time, evaluation is done with just a single value or a triple of values - a color or a 3D position vector. But with anti-aliasing it's smoothed - several rays hitting the same location with small (VERY small) differences - that's the spot size. A spot can be thin and long or look like a circle - that depends on the angle between the ray and the surface it hits. I don't know why you want to know this; it's only important for 3rd-party developers writing their own renderers, nodes and shaders.

Lightwolf
01-15-2007, 04:10 PM
In the case of the node docs you mention, a spot is basically just that, a point on your surface that is being evaluated.

LW casts one (or more) rays per final image pixel into the scene of 3D objects. Once a ray hits a surface, it needs to evaluate the surface properties at the point where it intersects said surface. That point of intersection is (to put it simply) the spot.
If you use AA (anti-aliasing), LW casts more than one ray per final pixel, resulting in one spot evaluation each (since the rays point in slightly different directions, the intersections, and thus the spots, will be at slightly different locations).
Also, when rendering reflections and refractions, LW needs to compute the surface values where the reflected/refracted surface is hit as well. This can result in more than one spot being evaluated per ray.
As far as the nodes are concerned, the spot is a location in 3D space - the location where the ray has hit the surface. A spot also has a few other properties (such as a size) that are, as Sensei wrote, really only interesting to developers (and not accurate either ;) ).
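To make that concrete, here's a little C sketch of the idea (heavily simplified, and not actual LW SDK code - the one-plane "scene" and the checker shader are invented purely for illustration): one ray per pixel, and each hit produces a spot that gets evaluated.

#include <stdio.h>

typedef struct { double x, y, z; } Vec3;

/* Hypothetical scene: a single plane at z = 5. */
static int intersect_scene(Vec3 origin, Vec3 dir, Vec3 *hit)
{
    if (dir.z <= 0.0) return 0;              /* ray never reaches the plane */
    double t = (5.0 - origin.z) / dir.z;     /* parametric distance to the plane */
    hit->x = origin.x + t * dir.x;
    hit->y = origin.y + t * dir.y;
    hit->z = 5.0;
    return 1;
}

/* The "spot" is simply the hit position handed to the surface evaluation. */
static Vec3 evaluate_surface(Vec3 spot)
{
    /* a trivial procedural: a checker pattern derived from the spot position */
    int check = ((int)(spot.x + 100.0) + (int)(spot.y + 100.0)) & 1;
    Vec3 color = { (double)check, 0.0, 1.0 - check };
    return color;
}

int main(void)
{
    Vec3 eye = { 0.0, 0.0, 0.0 };
    for (int px = 0; px < 4; px++) {          /* four pixels of one scanline */
        Vec3 dir = { (px - 1.5) * 0.25, 0.0, 1.0 };
        Vec3 spot;
        if (intersect_scene(eye, dir, &spot)) {
            Vec3 c = evaluate_surface(spot);  /* one evaluation per spot */
            printf("pixel %d: spot (%.2f, %.2f, %.2f) -> RGB (%g, %g, %g)\n",
                   px, spot.x, spot.y, spot.z, c.x, c.y, c.z);
        }
    }
    return 0;
}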

I hope that helps.

Cheers,
Mike

Surrealist.
01-15-2007, 04:53 PM
Thanks to both of you for the help. So I gather that a spot is subpixel, but could be many pixels, yes? Is it an area? Or is it just the intersection of a ray, or many rays? I gather you are trying to say that where a ray intersects is the "spot". In that case it is just that: the intersection. Yes?

But that is a good question: why do I need to know this? Seriously. They took the time to write a whole paragraph in the doc, so how does it apply to me? That is the question.


I am connecting a color output and I know it is going to affect the color of a node by connecting it to the input. Simply put. But then they add: on a spot by spot basis. Why? They could have just said that if you connect a red output to a color input, the result will be red. Duh. But they didn't. They said: on a spot by spot basis. So this tells me there is something more they were trying to say. This was not just an aside to programmers at the expense of my understanding.

So why then are they making note of this? I really want to understand nodes so any more of a contextual explanation would be great.

Example: does that mean that if I connect a color output (BG and foreground) into a brick pattern, the colors will then be evaluated based on the "spots" that that pattern makes (grout and brick) - or should I say puts forth to be evaluated, as a kind of filter? I mean, I know this is what happens - the BG is the grout and the FG is the brick - but is this what they mean by on a spot by spot basis?

And sorry to be so **** pedantic but, hey, they started it! :)

Sensei
01-15-2007, 05:35 PM
They started it to avoid more confusion - there is a node called "Spot Info". There would really be no sense in calling it "Pixel Info" or anything else; it is exactly a "spot".

A little repetition:
rendering without AA - a spot is a single ray.
rendering with e.g. Enhanced AA - a spot is 9 rays smoothed together.

Imagine that the 9 rays are traveling through the 3D world so close together that they're like a tube, almost parallel. Then they hit a plane. Each of them is evaluated separately by the node editor, and the renderer merges them together using the AA Reconstruction Filter you currently have chosen.
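In rough C terms it looks like this (just a sketch - the shading function and the 3x3 sub-pixel jitter are invented for illustration, and a plain box average stands in for the reconstruction filter):

#include <stdio.h>

/* hypothetical per-spot shading: brightness varies with position */
static double shade_spot(double x, double y)
{
    return (x + y) * 0.5;
}

int main(void)
{
    const int rays = 9;                   /* e.g. Enhanced AA: 9 rays per pixel */
    double sum = 0.0;
    for (int i = 0; i < rays; i++) {
        /* the rays through one pixel differ only by a tiny sub-pixel offset */
        double x = 0.50 + (i % 3) * 0.001;
        double y = 0.25 + (i / 3) * 0.001;
        sum += shade_spot(x, y);          /* one spot evaluation per ray */
    }
    printf("pixel value = %f\n", sum / rays);   /* merged/filtered result */
    return 0;
}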

KevinL
01-15-2007, 05:42 PM
This helped me a great deal. I got more of an intrinsic understanding of the concept of nodes from this one question and the replies to it than I've had previously. You never know which seemingly useless question is a keeper.

Thanks folks.
Kevin

Wonderpup
01-15-2007, 05:49 PM
I'm also struggling with this one. I think the problem is that the word 'spot' implies a very specific point, but in this context it's meant in a more abstract way - so I'm taking it to mean an arbitrary point on a given surface at a given time, rather than a precise, measurable point. So 'spot info' is more about an evaluation method than about units of measurement - I think!

Weepul
01-15-2007, 06:36 PM
At risk of being redundant, here's another attempt to word it clearly.

A "spot" is a position on an object that is being sampled via one of LW's raytrace functions, be it one fired from a camera or from a reflection on another object. The information revealed by that "spot" includes things like its position, its color, its reflectivity, its illumination, the surface normal at that position, and so on. This information is used to determine what the surface looks like at that position.

A spot is a theoretical, infinitesimally small point in space that lies on an object's geometry. A spot exists at that position if a ray hits the object - the spot is not an entity of its own, and if you fire enough rays, there could be infinitely many spots on the object, just as position is a continuous and infinitely fine set of coordinates.

However, it can be thought of as having an approximate size, based on factors I'm not sure about at all, since I've not looked into it. :D The way I can think to describe it, though, would be to think of a non-antialiased image. Each pixel gets one ray, and therefore evaluates one spot. However, that information gets spread over the entire pixel; if you project the size of the pixel out from the camera, that size as it would cover the object can be thought of as the approximate "size" of the spot.

Surrealist.
01-15-2007, 07:49 PM
OK. Thanks, guys. This has been helpful. So a spot is just a point on an object that is getting evaluated by one or more rays. Fine. Thank you. I can think with that.

Now, let's put this into the realm of practice.

I can say that a spot is RED because the values of the surface attributes at that point are 255, 0, 0. OK, fine. Got it. Now, there could be a godzillion spots all over the object - I don't know, it doesn't matter - but let's just say there are enough of them to tell us that the surface is a "something" that has various attributes, and these things are being evaluated by rays. OK, fine, even if my understanding is purely theoretical at this point. That is good enough for me.

Well, now, this is where my understanding breaks down - or seems to. The same could be true for an object with or without the node editor - though even at this point my understanding of how LW works has risen, so this is good.

However, here it is again:


The vast majority (almost all) of connections in the Node Editor are evaluated per spot therefore if the incoming color connection has a variance in the shader or texture values that is affecting the color channel (i.e. a pattern of some kind) then that varied color value will be supplied to the color input it is connecting to. Simply put, if you connect the color output of a red and green marble texture to a color input then that input will receive a red and green colored marble pattern - evaluated on a spot by spot basis.

And another place here:


Color outputs always supply three channel data evaluated per spot, in the format red, green, and blue (RGB) regardless of what other inputs are connected to the node or what other settings you may have set for your scene.


So if my understanding of this is correct, and it may well be at this point, then all they are saying is that the node takes the location information (spot information) as well as the color information.

So let's say, theoretically, that there are 10 spots on the object. Spots 1, 3, 6, and 9 all have a value of X,X,X on the color channel because of a pattern from a procedural texture that has colored them in those locations (spots).

The information about where this color is, as well as the actual color data, is passed on through the output. And the input of a node, if connected to that output, is then affected in those areas, and that is added to the node's data. How it is added depends, I guess, on the blending mode, but that is another subject.

Whereas in layers, the spots remain intact from layer to layer. So a brick pattern laid over a crumple pattern is just that: a merging of two textures that retain their own "spot data". The spot data of one does not influence the other.

Here is an example of what I am talking about. I set up some nodes to affect a brick texture. There is a bump node plugged into the bump input of the brick. This gets carried through all the way to the texture on the bump channel, pretty much, I suppose, as it would in a layer setup, because it is not affecting anything else in the chain.

Then I set up another procedural and fed it into the position data of the brick texture, and it randomized the pattern of the brick. (The only reason I did it as a bump layer node was so I could scroll through and try different procedurals rather than load, then connect and disconnect, texture nodes. I don't know of any better way yet.)

So if this were a layer setup, the spot data of the procedural would not affect the pattern of the brick. In fact, there is no way to connect it there in a layer setup.

Am I right in this? Is this the idea they are trying to get across?

Thanks again for all the help so far.

Node setup, brick pattern, brick pattern with procedurals:

faulknermano
01-15-2007, 09:53 PM
Then what is the spot in relation to the displacement nodes? Does this mean render-time displacement?

Surrealist.
01-15-2007, 10:42 PM
I sure hope you aren't asking me...:)

Lightwolf
01-16-2007, 02:03 AM
Hm, alright, let me elaborate a bit more in terms of nodes.
I suppose I know why they have written the documentation like this.
Basically, the whole node set-up of a surface (as an example) gets evaluated once per spot. So any of the inputs/outputs can be different for every spot.
In the case of texture layers, some values are only evaluated once per frame - for example, the position of textures.
This gives you a lot more power in the node editor: you could, for example, change the position of a texture at every spot (using another node that depends on "Spot Info", say) - something that is not possible using the traditional approach.
The downside is that the nodes need to calculate more data per spot than texture layers do, making nodes a bit slower to evaluate.
In terms of displacement, the spot would be the current vertex being displaced.
Trivia:
As Weepul mentioned, a spot does actually have a size. Think of a single final image pixel being projected into the scene as a cone (just like the camera frustum visible in Layout, but starting from a single pixel). This cone will cover a larger area the further away from the camera it is. Once this cone intersects an item, the intersection (likely elliptical) will have a certain position and size (as well as a shape depending on the angle of intersection).
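If you want to see the magnitude in numbers, here's a back-of-the-envelope C sketch (the FOV, resolution and distances are made up, and a real renderer tracks this more carefully than a single cone angle):

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

int main(void)
{
    double fov   = 60.0 * M_PI / 180.0;  /* horizontal field of view             */
    double width = 640.0;                /* image width in pixels                */
    double dist  = 10.0;                 /* distance from camera to the hit      */
    double theta = 70.0 * M_PI / 180.0;  /* angle between ray and surface normal */

    double pixel_angle = fov / width;    /* cone angle of one pixel */
    double diameter    = 2.0 * dist * tan(pixel_angle * 0.5);

    /* a grazing hit stretches the (roughly elliptical) footprint */
    printf("approx. spot size: %.4f x %.4f scene units\n",
           diameter, diameter / cos(theta));
    return 0;
}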

Cheers,
Mike

LightFreeze
01-16-2007, 05:00 AM
Another way to think of how nodal works in comparison to layers:
in layers, the final result at any spot is the combination of the layers stacked on each other, whereas with nodal the final result is the combination as you go from left to right through the tree.
So if you think of a simple left-to-right branch as a layer stack, then you could copy a layered texture with a single branch. The real power comes when you start to use attributes from one branch to influence another branch.

Hope I'm not confusing you with this.

Dave Jerrard
01-16-2007, 06:56 AM
When your brain stops bleeding, you'll understand. :)


A spot is just the place you're rendering at a given time during the render. If you're rendering a ball, and the ball is centered in the frame, and the render is half finished - meaning the center of the ball is currently being rendered - then the spot being rendered is the center: where the current pixel is rendering. Another way to think of it is to use a text analogy. As you type, the position of the character you're currently typing is the spot. As you keep typing, you place a character in a spot, then move to the next spot and place another character. The spot is simply the position you're working on at the moment.

In 3D, the same thing happens. You render a pixel: a ray is fired out, and if it hits a surface, it reads the data from that spot on the surface where it hit. This is where the renderer finds out everything it needs to know about what it hit - what surface it is, how far away it is, the angle of the surface at that point, what the surface properties are, how much light is falling on it, weight map values, where it's moving, if at all, etc. You can actually see many of these values when you look at the various buffers that LightWave generates.

Once these things are known, the process of figuring out the color is done. The surface is necessary to figure out which surface settings to use for the rendering process. The distance is necessary for fog and other distance-related attributes. The angle is necessary for reflections, refractions, and Fresnel calculations. It's also necessary for shading. For textures, the position of the texture is derived from Object Spot (X, Y, Z coordinates in relation to the object's coordinates), or World Spot (X, Y, Z coordinates in relation to the world coordinates) in the case of World Coordinate textures. This is necessary in order to render the correct part of a texture for each spot.

All this stuff is then mixed with the various texturing, lighting, reflections, etc., to come up with a final color for the current pixel. As mentioned before, there can be more than a single spot evaluated for a pixel - anything using antialiasing is calculating several spots very close together, and an average value of those spots then becomes the value for that pixel.

It's the basis of rendering. Before, all this was done behind the scenes, where the user never had to worry about it. You didn't need to know what info was there unless you were writing a shader or texture. With nodes, you can use this information for more creative stuff. For example, you can access the normal info and modify it with another node to create high-quality bump effects. Most of the time, you won't need to worry about it. Probably the most common use I have for it is the Polygon Side info. This lets me know if a ray is hitting the front or the back side of a polygon that has the Double Sided option turned on. Using that, I can assign a different set of surface attributes to each side, without having to create a second, flipped polygon with a different surface.

As I said, you don't need to worry about it unless you plan to use it for special purposes. Most textures will rarely ever need it. The most common use would probably be the Polygon Side info, which is extremely useful for doing glass, where you can have a different refractive value depending on which side you're looking at. There are several threads around that talk about that.
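In bare-bones C terms, that side test boils down to the sign of a dot product (a sketch, not LW code - the IOR values are just an example):

#include <stdio.h>

typedef struct { double x, y, z; } Vec3;

static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

int main(void)
{
    Vec3 normal  = { 0.0, 0.0, -1.0 };  /* polygon facing the camera     */
    Vec3 ray_dir = { 0.0, 0.0,  1.0 };  /* ray travelling into the scene */

    /* front hit: the ray and the normal point towards each other */
    int front = dot(ray_dir, normal) < 0.0;

    double ior = front ? 1.33 : 1.0;    /* e.g. entering vs. leaving water */
    printf("%s side hit, using IOR %.2f\n", front ? "front" : "back", ior);
    return 0;
}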

He Who Hasn't Used Spot Info For A While Now.

serge
01-16-2007, 09:43 AM
Thanks to all the guys explaining 'spot', and thanks to Surrealist for bringing it up. :) I got a better understanding of the definition of 'spot', but I would like to see some examples in a scene for a better practical view.

Maybe this is a good example:
If you take a look at this thread (http://www.newtek.com/forums/showthread.php?t=61655) you'll see that MrJack, for experimental reasons I guess, has connected 'Object Spot' to 'Normal' on the SSS node. It does have quite a drastic effect. Can somebody explain what's going on here? And does it make any sense to connect these two?

Also, Dave Jerrard, I read your explanation about 'Object Spot' and 'World Spot' in relation to rendering textures. I still don't get this one. Could you (or someone else) give a practical example of the difference between a texture being rendered in terms of 'Object Spot' and 'World Spot'?

Lightwolf
01-16-2007, 09:55 AM
Could you (or someone else) give a practical example of the difference between a texture being rendered in terms of 'Object Spot' and 'World Spot'?
Easy. Object coordinates move with the object; the pivot point of the object is 0,0,0 (X,Y,Z). So if you map to an object using object/local coordinates, the mapping (and thus the texture) moves with the object.
World coordinates use the origin of the scene as 0,0,0 - if you map using world coordinates and move the object, the texture will stay fixed in space, and the item will seem to move through the texture.
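A trivial C sketch of that difference (the stripe "texture" and the translation-only object transform are stand-ins, purely for illustration):

#include <stdio.h>

typedef struct { double x, y, z; } Vec3;

/* trivial 1D "texture": stripes every metre along X */
static int stripe(double x) { return ((int)(x + 1000.0)) & 1; }

int main(void)
{
    Vec3 object_pos  = { 1.5, 0.0, 0.0 }; /* object moved 1.5 m along X       */
    Vec3 object_spot = { 0.3, 0.0, 0.0 }; /* hit point, relative to the pivot */

    /* world spot = object spot pushed through the object's transform */
    Vec3 world_spot = { object_spot.x + object_pos.x,
                        object_spot.y + object_pos.y,
                        object_spot.z + object_pos.z };

    printf("object-space stripe: %d (texture sticks to the object)\n",
           stripe(object_spot.x));
    printf("world-space stripe:  %d (object moves through the texture)\n",
           stripe(world_spot.x));
    return 0;
}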

Cheers,
Mike

Surrealist.
01-16-2007, 10:14 AM
Thanks to Dave and Mike - your explanations are shedding much light. And LightFreeze, no, you are not confusing things. :)

I too would like more practical examples, but the double-sided glass seems to make sense.

So, OK, new to my understanding then is that spot data is something that has always been there, but we now have access to it in nodes.

Did my brick example have anything to do with spot data? I guess that is the question before moving along. Also, LightFreeze pretty much echoed this with his branch example.

Any other examples anyone could give would be great. It did seem that there was a reason for bringing it up in the manual.

serge
01-16-2007, 10:27 AM
Easy. Object coordinates move with the object; the pivot point of the object is 0,0,0 (X,Y,Z). So if you map to an object using object/local coordinates, the mapping (and thus the texture) moves with the object.
World coordinates use the origin of the scene as 0,0,0 - if you map using world coordinates and move the object, the texture will stay fixed in space, and the item will seem to move through the texture.

Cheers,
Mike
Mike, you're talking about the placement of a texture in object or world coordinates - I get that. But the 'Spot Info' here seems to be something different. For example, I texture an object. I can move the object around and the texture remains the same on the object (unless I check 'World Coordinates'). Now, if I connect 'Object Spot' (Spot Info node) to 'Position' on the texture node, I would expect no difference (I mean, the spot is still being evaluated in object coordinates); however, there is one: the texture position is way off when I do this.

Surrealist.
01-16-2007, 10:30 AM
Interesting...

Surrealist.
01-16-2007, 03:32 PM
Color (type: color):

Outputs three channel color information in R, G, B format, evaluated on a per spot basis.

Alpha (type: scalar):

Outputs single channel alpha (grey) information evaluated on a per spot basis

Just a couple more references from the manual.

Sensei
01-16-2007, 03:47 PM
Surrealist, a spot is not a thing. It doesn't exist in the same kind of space as pixels in an image. It's not stored somewhere and just read by the renderer. Every Spot Info is unique, calculated and supplied at the moment a ray hits a surface.

Lightwolf
01-16-2007, 03:52 PM
Now, if I connect 'Object Spot' (Spot Info node) to 'Position' on the texture node, I would expect no difference (I mean, the spot is still being evaluated in object coordinates); however, there is one: the texture position is way off when I do this.
Ah, but the "texture position" input of a node is the position of the texture in relation to the object pivot. At 0,0,0 the texture is effectively centred at the pivot (let's talk local/object coordinates only).
Now, what you are doing is this: for every spot that is being evaluated, you tell the texturing node to use the spot position as the centre of the texture itself. So a spot at 0.5,0,0 would tell the texture to offset itself to 0.5,0,0 - and would then evaluate the resulting colour.
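A toy C sketch of what that wiring does (assuming, for illustration only, that a texture is sampled at spot minus centre - the sine "pattern" is made up):

#include <math.h>
#include <stdio.h>

/* hypothetical 1D pattern: a sine wave along X */
static double pattern(double x) { return 0.5 + 0.5 * sin(x); }

/* the Position input shifts where the pattern is sampled */
static double eval_texture(double spot_x, double centre_x)
{
    return pattern(spot_x - centre_x);
}

int main(void)
{
    for (double spot = 0.0; spot < 2.0; spot += 0.5) {
        double fixed_c  = eval_texture(spot, 0.0);  /* centre at the pivot      */
        double per_spot = eval_texture(spot, spot); /* centre = spot, per spot  */
        printf("spot %.1f: fixed centre %.3f, per-spot centre %.3f\n",
               spot, fixed_c, per_spot);
    }
    return 0;
}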

Cheers,
Mike

Dave Jerrard
01-16-2007, 05:17 PM
Mike, you're talking about the placement of a texture in object or world coordinates - I get that. But the 'Spot Info' here seems to be something different. For example, I texture an object. I can move the object around and the texture remains the same on the object (unless I check 'World Coordinates'). Now, if I connect 'Object Spot' (Spot Info node) to 'Position' on the texture node, I would expect no difference (I mean, the spot is still being evaluated in object coordinates); however, there is one: the texture position is way off when I do this.

That's because you're reading the position of the spot that you're rendering and inputting that as the texture's position. This causes the texture to have a different position for every ray that sees it. With a bit of fancy math, you can use this for some interesting effects, like this.

[attachment 40947]

Here, the center of the Anisotropic shader is shifted along the X and Z directions, and quantized at regular intervals. The result is a grid pattern, with the specular highlight radiating out from the center of each square. Previously, doing this would take thousands of polygons, all UV-mapped, and BRDF set up to use those UV maps. If you wanted to change things, that meant a fair deal of Modeler work again. With nodes, changing the size of the grid pattern is as easy as changing a couple of values in the node map.
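The quantizing itself is just this kind of math (a sketch - the grid size is arbitrary, and this isn't the actual node setup shown below):

#include <math.h>
#include <stdio.h>

int main(void)
{
    double grid   = 0.5;                  /* size of one grid square */
    double spot_x = 1.37, spot_z = -0.82; /* some spot being shaded  */

    /* snap the spot to its grid cell and use the cell centre instead */
    double cx = (floor(spot_x / grid) + 0.5) * grid;
    double cz = (floor(spot_z / grid) + 0.5) * grid;

    printf("spot (%.2f, %.2f) -> cell centre (%.2f, %.2f)\n",
           spot_x, spot_z, cx, cz);
    return 0;
}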

[attachment 40948]

He Who Didn't Come Up With This Node Setup.

Surrealist.
01-16-2007, 05:18 PM
Oh boy! I think we need some pictures and examples at this point.

Still no response on my brick pattern. Is this relevant to spot data?

EDIT: OK Dave! - mind reader! :D

theo
01-16-2007, 06:22 PM
This thread needs to be a sticky in the node library... too much knowledge being bandied about for it to be buried by time.

serge
01-17-2007, 09:19 AM
Mike and Dave, thanks for the explanation. I understand now.

Dave and Sensei, you both implied that Spot Info is something the average LW artist shouldn't be bothered with (except for Polygon Side), but Dave's example above (thanks for that) shows that Spot Info can be very useful and a big time-saver in some cases. Therefore, wouldn't you think that the manual should elaborate more and become more understandable for the average user, who might benefit from the spot data?

Or is it better to conclude that yes, it can be very useful, but without some programming knowledge and a good understanding of math it'll never really be accessible to the average user? If that's the case, then it might be good to mention this in the manual. Because until now I assumed that Spot Info data is something I might need often in my node networks, but just don't understand well enough yet. Like Surrealist, I also got confused by all this mentioning of "per spot basis".

theo
01-17-2007, 09:47 AM
Dave and Sensei, you both implied that Spot Info is something the average LW artist shouldn't be bothered with (except for Polygon Side), but Dave's example above (thanks for that) shows that Spot Info can be very useful and a big time-saver in some cases. Therefore, wouldn't you think that the manual should elaborate more and become more understandable for the average user, who might benefit from the spot data?

This is the EXACT reason why gaps need bridging when dealing with the abstract (creative) and concrete (logic) worlds. These two mindsets do not bridge themselves well... Frankly, I think that as time unfolds there will be a new kind of talent needed to meet this gathering divide.

So... this thread has become an analogous bridge of sorts.

Surrealist.
01-17-2007, 10:27 AM
Yes, that example was very helpful in understanding what a spot is. That is, until I looked at the node setup and thought I was back at square one again. But maybe not so. Consider this:

I had come to understand that spot data is part and parcel of the way nodes work - or, for that matter, the way the render engine works - and that spot data is being evaluated, as the manual keeps saying, all the time.

Look again at the larger context:


If the incoming connection is a vector type output then XYZ or HPB etc., will supply the values for the RGB respectively. This means that if the incoming vector type is a Position for example, then the X value of that vector will be used to define the red channel of the color. Y to green, and Z to blue and so on in a similar way for HPB if the vector being used is a rotation and so on.

The vast majority (almost all) of connections in the Node Editor are evaluated per spot therefore if the incoming color connection has a variance in the shader or texture values that is affecting the color channel (i.e. a pattern of some kind) then that varied color value will be supplied to the color input it is connecting to. Simply put, if you connect the color output of a red and green marble texture to a color input then that input will receive a red and green colored marble pattern - evaluated on a spot by spot basis.

This is telling me that spot data is part and parcel of how the information is being transferred from node to node. That is, patterns and so on - which are evaluated on a spot by spot basis - are transferred to other nodes.

So, to water it back down to my brick example - and to a lot of the examples in the manual, actually - you can have a pattern affect other things.

In this case rotation:


We are using the checkerboard texture in two very specific and distinct ways. Often while creating networks you’ll notice that nodes can function for more than one purpose within the network. Here the Checkerboard2D is being used to colorize the Wood2 texture and at the same time rotate the wood texture in such a way that each “block” looks like a different wood texture.

Using the checkerboard to rotate the blocks is the most interesting aspect of this example.

So the reference to spot data in the above example seems relevant here too.

Therefore, any time you are manipulating attributes such as rotation, size, color, or position from node to node, you are able to do so because these outputs can be evaluated on a spot by spot basis.

What does that mean? It means, much like Dave's example - fancy as it is - that we can manipulate just about any attribute on a very low level: the level of the actual render engine, the spot data - where something is being evaluated. And we can shift the location - the spot where it is being evaluated - by connecting the output of one node, which is being "evaluated on a spot by spot basis", to another.

Whereas with layers, all you can do is maintain the spot data from layer to layer and thus line things up, with alpha blending and other means, in a stack, one on top of the other. One layer can only influence another in a unidirectional, almost linear way. Yes, layers and gradients can be manipulated by other data, but from layer to layer the information remains locked to its own coordinates - if you will.

So if you put a gradient - which is being driven by a bump parameter, and I can assume that would be spot data - on top of another procedural texture, that gradient cannot change the procedural. That is, the spot evaluation stops at the gradient. And I know that may not even be technically correct; it is just the way I am trying to understand it from a usage point of view.

That's my take on it.

Lightwolf
01-17-2007, 12:43 PM
This is the EXACT reason why gaps need bridging when dealing with the abstract (creative) and concrete (logic) worlds. These two mindsets do not bridge themselves well... Frankly, I think that as time unfolds there will be a new kind of talent needed to meet this gathering divide.

The problem is, in this case the technical term is more concrete than the abstract, creative term (which might be easier to understand).
(i.e. the position of the shading vs. the spot)

However, from looking at the questions here, the problem is not the wording itself, but the underlying concept.

I wholeheartedly agree with Dave here: mastering your tools goes beyond finding out what happens immediately when you press a button. It does take time and patience, though.
Nonetheless, there are great artists out there who haven't the foggiest about how a renderer works and still produce gorgeous imagery. (This goes the other way too; most people who could code a renderer produce awfully bad images ;) ).

As for meeting the divide, that's what a TD is (afaik and imho) supposed to do: understand both sides and actually speak both languages.

Then again, if you work directly with clients who haven't a clue about your work (be it CG, HTML or whatever), then you know how hard it can be.

Cheers,
Mike

Wonderpup
01-17-2007, 01:01 PM
I also feel that there is a perhaps understandable reluctance on the part of those who work with and understand these technologies (i.e. programmers) to have their work presented in ways they might find simplistic. So there may be a tendency to design manuals with one eye on your peers rather than on the intended end users.

Lightwolf
01-17-2007, 01:35 PM
I also feel that there is a perhaps understandable reluctance on the part of those who work with and understand these technologies (i.e. programmers) to have their work presented in ways they might find simplistic. So there may be a tendency to design manuals with one eye on your peers rather than on the intended end users.
That would be sad indeed. However, there is a difference between making technical points more understandable and dumbing them down. The former should happen, but it also requires an effort from the reader. (Remember, it took the developers years to understand some of the finer points themselves; if those could be explained in a sentence, it would be the holy grail indeed ;) ).

As in this thread: explaining what a spot is takes a few words. Explaining how it may be used and what consequences it has for our work...

Cheers,
Mike
P.S. And yes, I think stuff should be explained for those who want to know, and it should use the proper terminology as well. Then again, only a few people seem to read manuals...
I've given a few day-long courses explaining the basics of how rendering works to budding CG artists - and I've heard plenty of "Oh, now I get why xxxx happened back at that job...".

Surrealist.
01-17-2007, 01:41 PM
OK, here is an example:

I turned the contrast to 100 percent just to illustrate.

The 1st image is just the Turbulence node. The 2nd is the alpha from the Planks 2D plugged into the Frequencies of the Turbulence node.


The vast majority (almost all) of connections in the Node Editor are evaluated per spot...

So the output of the Planks 2D is evaluated per spot (location, intersection, place - whatever you want to call it) - and merged with the frequency.

That's my take on it.

Lightwolf
01-17-2007, 01:55 PM
That's my take on it.
Pretty much spot on (if you excuse the pun).

Basically, the main difference is: texture layer parameters are usually changeable per time slice (if animated); node parameters can be changed from one single evaluation (of a spot) to the next.
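Or, as a minimal C sketch of the contrast (both parameter functions are invented purely for illustration):

#include <stdio.h>

typedef struct { double x, y, z; } Spot;

/* a layer parameter can only be a function of time */
static double layer_param(double t) { return t * 0.1; }

/* a node input can also depend on the spot being evaluated */
static double node_param(double t, Spot s) { return t * 0.1 + s.x; }

int main(void)
{
    Spot a = { 0.2, 0.0, 0.0 }, b = { 0.9, 0.0, 0.0 };
    double t = 1.0;                       /* one time slice */
    printf("layer: %.2f and %.2f (same for every spot)\n",
           layer_param(t), layer_param(t));
    printf("node:  %.2f and %.2f (can differ per spot)\n",
           node_param(t, a), node_param(t, b));
    return 0;
}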

Cheers,
Mike

pixelranger
01-17-2007, 02:03 PM
So, one could replace "spot" with "sample", or "sample position" (if one thinks of samples as in a ray's hit/evaluation, like mental ray's samples)?
So spot info is info about any surface sampled at any ray's hit/evaluation?

Surrealist.
01-17-2007, 02:05 PM
EDIT: Thanks, Mike. That helps. And, well, the first pun of the day, so you are forgiven. :)

Another.

Turbulence with a little randomness from another.

Second, plugging the Alpha output into Scale, Rotation and Position.

Lightwolf
01-17-2007, 02:17 PM
So, one could replace "spot" with "sample", or "sample position" (if one thinks of samples as in a ray's hit/evaluation, like mental ray's samples)?
So spot info is info about any surface sampled at any ray's hit/evaluation?
Bingo. Hey, we might eventually make it to a clear, concise, one-sentence definition after all ;)

Hm, the only minor difference might be (but this is nitpicking)... the spot information is the "input" for the shader, which will then return an "output" - the sample. So "sample position" is fine; "sample" is a teeny-weeny bit off. But I am seriously nitpicking here, sorry ;)

Cheers,
Mike

Lightwolf
01-17-2007, 02:19 PM
EDIT: Thanks, Mike. That helps. And, well, the first pun of the day, so you are forgiven. :)

You just wait :p

Cheers,
Mike

Surrealist.
01-17-2007, 02:29 PM
Basically, the main difference is: texture layer parameters are usually changeable per time slice (if animated); node parameters can be changed from one single evaluation (of a spot) to the next.

Interesting. OK, so not in time necessarily then, but in, ah... space, if you will.

OK, what I mean is: a layer can be animated through time, which is of course technically changing the spot data. That is, this spot now has less or more of this or that attribute - through time. But nodes, though they can obviously be animated, can also morph through, er, the chain, I guess. I mean, they can be manipulated, but the final output would be constant at a given time slice (if not animated); to get there, though, it has been manipulated in other spot-related ways first. Like each node is a dial that twists and tweaks spot data.

Whew!

Think I better quit while I am ahead. :)

Giacomo99
01-17-2007, 09:50 PM
This thread MUST be distilled and put in the LightWave manual. At the very least, it should be made sticky.

Surrealist.
01-17-2007, 10:15 PM
Cheers to that - I have a much better understanding now. :)