## Rotation matrix order?

1. Since there doesn't appear to be a standard LWSDK function that takes the Scale/Position/Rotation vectors for generating a transformation matrix, I'm going to have to roll my own. (This is for the standard transforms found in 3D nodes.)

In what order are rotations applied? XYZ? ZYX? Some crazy order I'm unaware of?

Secondly, in what order are the transforms to be applied when building the final transform matrix? I'm guessing it's Rotation -> Scale -> Position, but I'm not entirely certain of that.

2. Empirical research (with the checkerboard 3D procedural node) would suggest:

Scale -> Rotate[1] -> Position

[1] Rotation order: H -> P -> B

In case you care about the crude empirical procedure (to check whether I had a brainfart somewhere):

Base test config: XYZ scaling of 40 mm, all other values at 0.

If you scale something non-uniformly (like 40 40 10 mm) and then rotate it, you can see in the preview that the non-uniform scaling rotates along with the texture rather than staying axis-aligned, which suggests that of the two, scaling is applied first.

If you set a uniform scale and manipulate position along one axis, say Z, you get a feel for how far the texture visually shifts when you scroll 10 mm. Now set Z scaling to 25% or so of the original value and scroll the position 10 mm again: the visual shift still appears to be the same amount, i.e. unaffected by scale. That suggests scaling is also done before translation.

That leaves the order between translation and rotation. Reset all values again, scroll position back and forth on Z, and note where the local Z axis lies in the preview. Rotate H by 45° or so and visually track where the local Z axis would now be. Scroll the Z position back and forth again and note that it no longer offsets the texture along the local Z axis, which suggests that translation is done last.

That also makes sense, because scale -> rotate -> translate is a pretty standard order in general, afaik.
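The scale -> rotate -> translate order worked out above can be sketched for a single point. This is a minimal illustration, not LWSDK code; a lone heading rotation stands in for the full HPB rotation:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Scale -> Rotate -> Translate applied to one point (forward transform).
// Only a heading (Y-axis) rotation is shown; the full HPB rotation would
// slot into the same place.
Vec3 transformPoint(Vec3 p, Vec3 scale, double heading, Vec3 position)
{
    // 1. Scale, componentwise
    const Vec3 s = { p.x * scale.x, p.y * scale.y, p.z * scale.z };

    // 2. Rotate about the Y axis by the heading angle
    const double c = std::cos(heading);
    const double sn = std::sin(heading);
    const Vec3 r = { s.x * c + s.z * sn, s.y, -s.x * sn + s.z * c };

    // 3. Translate
    return { r.x + position.x, r.y + position.y, r.z + position.z };
}
```

Swapping steps 2 and 3 (or 1 and 2, with a non-uniform scale) gives visibly different results, which is exactly what the preview tests above are distinguishing.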

Rotation order in LW is H -> P -> B (an empirical test on the checkerboard node confirms it). However, that's not X -> Y -> Z in terms of axes: H is around the Y axis, P is around the X axis, and B around the Z axis, so the order would be Y -> X -> Z. I'm not sure whether the vector you get from the rotation input is an HPB vector or axial rotations (PHB order), though, so you'd have to check that to see whether it's x -> y -> z or y -> x -> z.
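Under the Y -> X -> Z reading, "H -> P -> B" can be sketched as three successive axis rotations applied to a vector. This is just one interpretation, using the standard right-handed rotation matrices; LW's actual handedness and sign conventions would still need verifying:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Standard right-handed rotations about each axis.
Vec3 rotY(Vec3 v, double a)   // heading
{
    const double c = std::cos(a), s = std::sin(a);
    return { v.x * c + v.z * s, v.y, -v.x * s + v.z * c };
}

Vec3 rotX(Vec3 v, double a)   // pitch
{
    const double c = std::cos(a), s = std::sin(a);
    return { v.x, v.y * c - v.z * s, v.y * s + v.z * c };
}

Vec3 rotZ(Vec3 v, double a)   // bank
{
    const double c = std::cos(a), s = std::sin(a);
    return { v.x * c - v.y * s, v.x * s + v.y * c, v.z };
}

// "H -> P -> B" read as: heading (Y) first, then pitch (X), then bank (Z).
Vec3 rotateHPB(Vec3 v, double h, double p, double b)
{
    return rotZ(rotX(rotY(v, h), p), b);
}
```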

3. I'll give it a shot. Thank you!

4. I'm really starting to wish I'd paid attention in algebra.

So if I'm going to combine the HPB rotation matrices, I'd multiply them in the order (H*P)*B? (Where H, P, and B are the 3x3 rotation matrices I've constructed.) I remember that matrix multiplication doesn't commute, so it's order-sensitive.
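One point worth separating out: matrix multiplication is associative, so (H*P)*B and H*(P*B) are the same matrix; the order-sensitivity is about swapping factors, e.g. H*P versus P*H. With column vectors, the rightmost factor in H*P*B is the one applied to the vector first. A small sketch of both facts (`mul`, `rotYMat`, and `rotXMat` are made-up helper names using the standard right-handed axis matrices):

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>;

// Plain 3x3 matrix product: note mul(a, b) != mul(b, a) in general.
Mat3 mul(const Mat3& a, const Mat3& b)
{
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

// Standard right-handed rotation about Y (heading).
Mat3 rotYMat(double a)
{
    const double c = std::cos(a), s = std::sin(a);
    return {{ {   c, 0.0,   s },
              { 0.0, 1.0, 0.0 },
              {  -s, 0.0,   c } }};
}

// Standard right-handed rotation about X (pitch).
Mat3 rotXMat(double a)
{
    const double c = std::cos(a), s = std::sin(a);
    return {{ { 1.0, 0.0, 0.0 },
              { 0.0,   c,  -s },
              { 0.0,   s,   c } }};
}
```

Comparing `mul(rotYMat(h), rotXMat(p))` against `mul(rotXMat(p), rotYMat(h))` for nonzero angles shows the two products genuinely differ.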

5. If (and I do mean if) I got my math right, applying HPB rotation to a vector V results in vector R:

Rx = Vx * (cos^2 θ + (-sin θ)^3) + Vy * (sin θ * cos θ + (-sin θ)^2 * cos θ) + Vz * (-sin θ * cos θ)
Ry = Vx * (-sin θ * cos θ) + Vy * (cos^2 θ) + Vz * (sin^2 θ)
Rz = Vx * (sin θ * cos θ + (-sin θ)^2 * cos θ) + Vy * (sin^2 θ + -sin θ * cos^2 θ) + Vz * (cos^2 θ)

That's without factoring out anything in the multiplications. No idea if it's right.

Edit: Completely forgot to separate out the HPB angles, so the above only works if you're rotating the same amount on all three axes. I don't math well.

6. Attempt #2 gets me this monster. Should keep the FPU busy.

Using the original 3D texture coordinate Vxyz, and using the negatives of all the angles θxyz (otherwise the rotations go the wrong way -- texture transforms are all backwards; so θx is the negative of the amount given in the P rotation input), our rotated texture coordinate R is:

Rx = Vx * (cos θy * cos θz + -sin θy * -sin θx * -sin θz) + Vy * (cos θy * sin θz + -sin θy * -sin θx * cos θz) + Vz * (-sin θy * cos θx)

Ry = Vx * (cos θx * -sin θz) + Vy * (cos θx * cos θz) + Vz * (sin θx)

Rz = Vx * (sin θy * cos θz + cos θy * -sin θx * -sin θz) + Vy * (sin θy * sin θz + cos θy * -sin θx * cos θz) + Vz * (cos θy * cos θx)

Again, I won't know if I messed up on the math until I actually put it into code, but it's a start.
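One cheap sanity check for formulas like these, independent of any convention choices: a pure rotation must preserve vector length. A sketch that transcribes the expanded terms verbatim (tx, ty, tz being the already-negated angles) so that property can be probed:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

// Verbatim transcription of the expanded Rx/Ry/Rz terms.
Vec3 rotateExpanded(Vec3 v, double tx, double ty, double tz)
{
    const double cx = std::cos(tx), cy = std::cos(ty), cz = std::cos(tz);
    const double sx = std::sin(tx), sy = std::sin(ty), sz = std::sin(tz);

    Vec3 r;
    r.x = v.x * (cy * cz + -sy * -sx * -sz)
        + v.y * (cy * sz + -sy * -sx * cz)
        + v.z * (-sy * cx);
    r.y = v.x * (cx * -sz)
        + v.y * (cx * cz)
        + v.z * (sx);
    r.z = v.x * (sy * cz + cy * -sx * -sz)
        + v.y * (sy * sz + cy * -sx * cz)
        + v.z * (cy * cx);
    return r;
}

double length(Vec3 v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }
```

Length preservation is necessary but not sufficient (it won't catch a wrong rotation order, only a non-rotation), so it complements rather than replaces a visual check in the preview.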

7. The above math appears to work. Huh. Go figure.

Code:
```cpp
// ==================================================================
// Coordinate Transforms
// ------------------------------------------------------------------
inline LWTypes::LWDVector Transform (const LWTypes::LWDVector spot_position,
                                     const LWTypes::LWDVector translation,
                                     const LWTypes::LWDVector rotation,
                                     const LWTypes::LWDVector scale)
{
    // Note: negative/inverse transform amounts are used -- otherwise the
    //       texture transforms will be opposite of the intended direction.

    // Scale
    const double scaled_x = spot_position.X() / scale.X();
    const double scaled_y = spot_position.Y() / scale.Y();
    const double scaled_z = spot_position.Z() / scale.Z();

    // Rotate
    const double rx = -rotation.P();
    const double ry = -rotation.H();
    const double rz = -rotation.B();

    const double cos_rx = cos(rx);
    const double cos_ry = cos(ry);
    const double cos_rz = cos(rz);

    const double sin_rx = sin(rx);
    const double sin_ry = sin(ry);
    const double sin_rz = sin(rz);

    const double neg_sin_rx = -sin_rx;
    const double neg_sin_ry = -sin_ry;
    const double neg_sin_rz = -sin_rz;

    const double rotated_x = scaled_x * (cos_ry * cos_rz + neg_sin_ry * neg_sin_rx * neg_sin_rz) +
                             scaled_y * (cos_ry * sin_rz + neg_sin_ry * neg_sin_rx * cos_rz) +
                             scaled_z * (neg_sin_ry * cos_rx);

    const double rotated_y = scaled_x * (cos_rx * neg_sin_rz) +
                             scaled_y * (cos_rx * cos_rz) +
                             scaled_z * (sin_rx);

    const double rotated_z = scaled_x * (sin_ry * cos_rz + cos_ry * neg_sin_rx * neg_sin_rz) +
                             scaled_y * (sin_ry * sin_rz + cos_ry * neg_sin_rx * cos_rz) +
                             scaled_z * (cos_ry * cos_rx);

    // Translate
    return LWTypes::LWDVector( rotated_x - translation.X(),
                               rotated_y - translation.Y(),
                               rotated_z - translation.Z() );
}
```
I hope this can save other coders some of their valuable time.

I decided not to simplify anything in the rotated_* calculations as I've found it's generally not wise to try and outsmart the compiler. Since everything in there is const, the compiler's optimizer should be able to make the best decisions as to how to structure the whole thing.

8. I missed all the action, it seems. I would've suggested that you actually make a matrix class first and do the muls in code. When everything works and you need to squeeze out some additional performance, then you can write the optimized matrix generation where you have HPB concatenated. Such a class has a good chance of coming in handy for future projects too. You can easily apply any combination of transforms that way, and switch things around. Aaaanyway, good to see that you got it working.
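For anyone taking that route, a minimal sketch of such a matrix class (made-up names, standard right-handed axis matrices; not LWSDK code):

```cpp
#include <array>
#include <cmath>

struct Vec3 { double x, y, z; };

// Minimal 3x3 matrix class: build one matrix per transform, then
// concatenate with operator* in whatever order you're testing.
class Mat3
{
public:
    std::array<std::array<double, 3>, 3> m{};

    static Mat3 identity()
    {
        Mat3 r;
        r.m[0][0] = r.m[1][1] = r.m[2][2] = 1.0;
        return r;
    }

    static Mat3 rotY(double a)   // heading
    {
        Mat3 r = identity();
        const double c = std::cos(a), s = std::sin(a);
        r.m[0][0] = c;  r.m[0][2] = s;
        r.m[2][0] = -s; r.m[2][2] = c;
        return r;
    }

    static Mat3 rotX(double a)   // pitch
    {
        Mat3 r = identity();
        const double c = std::cos(a), s = std::sin(a);
        r.m[1][1] = c;  r.m[1][2] = -s;
        r.m[2][1] = s;  r.m[2][2] = c;
        return r;
    }

    static Mat3 rotZ(double a)   // bank
    {
        Mat3 r = identity();
        const double c = std::cos(a), s = std::sin(a);
        r.m[0][0] = c;  r.m[0][1] = -s;
        r.m[1][0] = s;  r.m[1][1] = c;
        return r;
    }

    Mat3 operator* (const Mat3& o) const
    {
        Mat3 r;
        for (int i = 0; i < 3; ++i)
            for (int j = 0; j < 3; ++j)
                for (int k = 0; k < 3; ++k)
                    r.m[i][j] += m[i][k] * o.m[k][j];
        return r;
    }

    Vec3 operator* (const Vec3& v) const
    {
        return { m[0][0] * v.x + m[0][1] * v.y + m[0][2] * v.z,
                 m[1][0] * v.x + m[1][1] * v.y + m[1][2] * v.z,
                 m[2][0] * v.x + m[2][1] * v.y + m[2][2] * v.z };
    }
};
```

With this, trying a different rotation or transform order is just rearranging factors, e.g. `Mat3::rotY(h) * Mat3::rotX(p) * Mat3::rotZ(b)` versus some other concatenation.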

9. I considered it, but really needed to stretch my math-legs. This made for a good exercise.

(aaaand I see the forums don't seem to work with .jpg rotation settings correctly...)
