View Full Version : Spot size



Dan Ritchie
07-30-2017, 01:23 PM
Can anyone explain "Spot size" and how it is calculated?

MonroePoteet
07-30-2017, 03:10 PM
I'm no expert, but it appears to be a means of letting the nodal network provide less detail the farther an object is from the camera. To paraphrase: the "spot size" gets larger the farther the spot is from the Camera.

Attached is a sample scene which uses a Gradient to set the Surface color based upon the Spot Size, with the node setup:

[Attachment 137528: node setup screenshot]

Using VPR to see the Surface changes, the Camera is then moved towards the Big Ball (stretched to 0.2 in X to make an ellipse) over the 120 frames. When far away, the color is white (i.e. the largest spot size in the Gradient), migrating down the Gradient until at frame 120 all spots are essentially 0 (zero) in size (red). The following are renders, with the last 3 digits of each file name being the frame number:

[Attachments 137537, 137531, 137532, 137533, 137534, 137535: rendered frames]

Interesting.

mTp

RebelHill
07-30-2017, 03:50 PM
A spot is basically a pixel in the render... spot size is the size/space viewed within that spot. So if you have a 1m item, that is so far away that it occupies only 1 pixel when rendered, then the spot size is 1m, etc.
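RebelHill's 1 m / 1 px example can be put into numbers. Here's a minimal sketch assuming an idealized pinhole camera; the function name and parameters are hypothetical, not LightWave's actual internals:

```python
import math

def spot_size(distance_m, h_fov_deg, image_width_px):
    """Approximate world-space width covered by one pixel
    at a given distance from an idealized pinhole camera."""
    # width of the whole frame, in meters, at that distance
    frame_width = 2.0 * distance_m * math.tan(math.radians(h_fov_deg) / 2.0)
    return frame_width / image_width_px
```

With a 90-degree horizontal FOV and a 1000-pixel-wide render, one pixel covers 1 m at a distance of 500 m, matching the idea that a 1 m object squeezed into a single pixel has a spot size of 1 m.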

Dan Ritchie
07-31-2017, 01:15 PM
I suspect spot size is the value used for mip-mapping. Basically, (I think) it is used to choose which level of the mip map to sample. That would mean it's not just a function of distance: oblique angles on an object would select a higher mip-map level whether they were close or far away.
But how is it calculated? In screen space, object space, uv space?
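If spot size really is the mip-selection value, the mapping from spot size to a mip level would look roughly like this. This is a hypothetical sketch, not LightWave's confirmed formula: each mip level doubles the texel size, so the level is the log2 ratio of the spot size to the base texel size.

```python
import math

def mip_level(spot_size, texture_px, surface_size):
    """Hypothetical mip selection: compare the spot size to the
    world-space size of one level-0 texel (same units for both)."""
    texel_size = surface_size / texture_px  # world size of a base-level texel
    return max(0.0, math.log2(spot_size / texel_size))
```

For a 1 m surface carrying a 1024-pixel texture, a texel is 1/1024 m: a spot the size of one texel gives level 0, and a spot twice that size gives level 1.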

MonroePoteet
08-01-2017, 08:04 AM
I can confirm that oblique angles change the spot size: using the scene I posted previously, substitute a cube for the sphere at a fixed distance from the camera rotating around Y and the oblique sides have a larger spot size.
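That matches the geometry: when a surface tilts away from the camera, a pixel's footprint stretches across it by roughly 1/cos of the incidence angle. A minimal sketch, illustrative only and not LightWave's measured behavior:

```python
import math

def oblique_spot_size(base_spot, incidence_deg):
    """Stretch a head-on pixel footprint across a tilted surface.
    incidence_deg is the angle between the view ray and the surface normal."""
    return base_spot / math.cos(math.radians(incidence_deg))
```

At 60 degrees of incidence the footprint doubles, which would explain the larger spot sizes on the cube's oblique faces even at a fixed distance.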

mTp

Dan Ritchie
08-02-2017, 02:56 PM
Well, I see it does increase with oblique angles as well as distance. But I see no indication that it is linked to texture coordinates; otherwise, it would get brighter where polygons are small and scrunched up.

Normally, in OpenGL, when calculating which mip map level to use for a given pixel, you take the gradient between the current pixel's UV coords and those of the pixel to the left (or another neighboring pixel). Something like...

float2 dx = ddx(iUV * iTextureSize.x); //U gradient between this and a neighboring pixel, scaled to texels
float2 dy = ddy(iUV * iTextureSize.y); //V gradient between this and a neighboring pixel, scaled to texels
float d = max(dot(dx, dx), dot(dy, dy)); //squared length of the larger gradient (skips the square root)
return 0.5 * log2(d); //0.5*log2(d) = log2(sqrt(d)), undoing the missing sqrt; each mip level has 1/4 the texels of the last

This relates texture coordinates to screen coordinates and gives an estimate of which mip map level to use. However, it has some problems. Since the gradient doesn't change much across a polygon, the levels appear faceted. There's also a big seam where the UV map wraps from 1 back to 0. I've seen these artifacts in LightWave's OpenGL preview, but not in the final renders.
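The ddx/ddy snippet can be mirrored on the CPU with finite differences between neighboring pixels' UVs, which also makes the wrap-seam artifact easy to reproduce. A sketch, with names and layout of my own choosing rather than from any particular renderer:

```python
import math

def lod_from_uvs(uv, uv_right, uv_down, tex_w, tex_h):
    """CPU analogue of gradient-based mip selection: finite
    differences between a pixel's UV and its right/down neighbors."""
    # UV deltas scaled into texel units
    dx = ((uv_right[0] - uv[0]) * tex_w, (uv_right[1] - uv[1]) * tex_h)
    dy = ((uv_down[0] - uv[0]) * tex_w, (uv_down[1] - uv[1]) * tex_h)
    # squared length of the larger gradient; 0.5*log2 undoes the missing sqrt
    d = max(dx[0] * dx[0] + dx[1] * dx[1], dy[0] * dy[0] + dy[1] * dy[1])
    return 0.5 * math.log2(d)
```

A UV step of two texels per pixel gives level 1, as expected. At the wrap seam, where u jumps from roughly 0.999 back to 0, the apparent gradient is huge, so the formula picks a far-too-blurry level, which is exactly the seam artifact described above.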

I'm still trying to understand the nature of all this. There's not much to go on out there.