Averaging matrices?


#7

It would also be interesting to see what the speed difference is between what Denis has and just averaging the vectors, since operations like slerp are known to be slow. Test both and see what speed differences you get.

Note that what should be faster isn’t always. Add, subtract, multiply and divide are done in MAXScript, whereas a function like distance runs in C++. For instance, I wanted a faster alternative to the distance function, which uses a square root and should therefore be slow. So in MAXScript I wrote what is called a taxicab distance; it isn’t as accurate, but if you only need to compare distances it should be faster than the true distance. However, because the math was being done in MAXScript, it turned out to be far slower than the distance function that is written in C++.
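For anyone who hasn’t seen the taxicab trick, here is a quick Python sketch of the idea (Python rather than MAXScript, just to show the math; the function names are made up for illustration, and this says nothing about the MAXScript-vs-C++ timing issue above):

```python
import math

def euclidean(a, b):
    # true distance: needs a square root
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))

def taxicab(a, b):
    # "taxicab" (Manhattan) distance: no square root, but a different metric,
    # so it only loosely approximates the Euclidean ordering of distances
    return sum(abs(ai - bi) for ai, bi in zip(a, b))

p, q, r = (0, 0, 0), (3, 4, 0), (1, 1, 1)
# here both metrics agree that q is farther from p than r is
assert (euclidean(p, q) > euclidean(p, r)) == (taxicab(p, q) > taxicab(p, r))
```

Note the hedge in the comment: the two metrics can disagree about which of two points is closer, so the shortcut is only safe when a rough comparison is acceptable.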


#8

why are slerp and distance slow? 100,000 operations take 0.2 sec on my machine. Is that slow?


#9

Just as far as math functions go, they would be slower than others. Like I said, that is relative to other operations in MAXScript, however, and they could still be faster than plain addition in MXS.


#10

I think what denisT might be implying is that he can’t envision a situation in which the tiny difference in speed between the two methods would become a practical concern.

As for myself, I came by the normal-averaging method more or less accidentally, and simply found it an easier process to visualize and understand.


#11

could you please use the normal-averaging method to average these two matrices, and show how you do it?
matrix3 [-1,0,0] [0,1,0] [0,0,-1] [0,0,0]
matrix3 [1,0,0] [0,1,0] [0,0,1] [0,0,0]


#12

That is where it will break down of course. I’m not trying to say that yours is not the better mathematical solution.


#13

sorry, maybe I’m wrong, but honestly the normal-averaging method for transform matrices doesn’t make any sense to me.


#14

It worked well for what I needed at the time. What you have is definitely more accurate. I should do a speed test on the two, as yours could be faster.


#15

Blending matrices by linearly interpolating the basis vectors is not accurate for any weighting other than 50%.

In a nutshell, you are interpolating along the line between two points on the unit sphere and then projecting the interpolated point out to the surface of the sphere (when you re-normalize), instead of interpolating along the arc of the sphere between the points.

The key is to linearly interpolate the angular difference between the two rotations, not a line that connects the endpoints of unit vectors that are separated by that angle.

While the interpolated point moves at a constant (linear) rate along the line between the points (this point actually lies in the interior of the sphere), the resulting projection doesn’t move at a constant rate across the surface of the sphere.

Doing this 3 times (once for each axis) will distort your basis vectors and produce a matrix that is no longer orthonormal, and orthonormalizing a matrix is something to avoid if at all possible!

So if you need anything other than an exact 50/50 (‘average’) blend you’ll need to do it the way Denis suggested. 50/50 works because the projection onto the unit sphere of the midpoint of the line between the points is also the midpoint of the arc connecting the points.
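Biddle’s point is easy to see numerically. Here is a small Python sketch in 2D (unit vectors on a circle rather than quaternions, purely to illustrate the geometry; the function names are my own): lerp-then-normalize and slerp agree at the 50% midpoint but disagree at every other weight.

```python
import math

def lerp_normalize(a, b, w):
    # blend the endpoints along the chord, then project back to the unit circle
    x = (1 - w) * a[0] + w * b[0]
    y = (1 - w) * a[1] + w * b[1]
    n = math.hypot(x, y)
    return (x / n, y / n)

def slerp(a, b, w):
    # interpolate along the arc itself: constant angular rate
    theta = math.acos(a[0] * b[0] + a[1] * b[1])
    s = math.sin(theta)
    ka, kb = math.sin((1 - w) * theta) / s, math.sin(w * theta) / s
    return (ka * a[0] + kb * b[0], ka * a[1] + kb * b[1])

a, b = (1.0, 0.0), (0.0, 1.0)   # 90 degrees apart
# the two methods agree at the midpoint...
mid_n, mid_s = lerp_normalize(a, b, 0.5), slerp(a, b, 0.5)
# ...but at w = 0.25 the lerp+normalize result lags behind the
# constant-rate arc (about 18.4 degrees instead of 22.5)
q_n, q_s = lerp_normalize(a, b, 0.25), slerp(a, b, 0.25)
```

At w = 0.25 the slerp point sits exactly a quarter of the way along the 90-degree arc (22.5°), while the projected chord point sits at atan(1/3) ≈ 18.4°, which is the distortion described above.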

If you are at all worried about speed then maintain the quat for each matrix so that you don’t have to continually call rotationpart() and recreate it just to slerp it.

I’m sure there must be a good diagram of this situation out there for the googling.

.biddle


#16

what is the function for the averaging matrices? is it something like:


fn averageMatrix m1: m2: =
(
	row1 = (m1[1] + m2[1])/2
	row2 = (m1[2] + m2[2])/2
	row3 = (m1[3] + m2[3])/2
	row4 = (m1[4] + m2[4])/2
	matrix3 row1 row2 row3 row4
)

?

what has to be normalized?


#17

Yeah, something like that. Mine looped through any number of transforms to get the solution. I find each vector needs to be normalized, or small errors start popping up.


#18

The rotation portion of the matrix (first three rows) must be normalized, otherwise the scale will be off.

This average trick only works when both incoming matrices are orthonormal (i.e. the first three rows define vectors that are each of unit length and all at right angles to each other).

You cannot get the correct average this way if the transforms are scaled.


fn averageMatrix m1: m2: =
(
	row1 = normalize ((m1[1] + m2[1])/2)
	row2 = normalize ((m1[2] + m2[2])/2)
	row3 = normalize ((m1[3] + m2[3])/2)
	row4 = (m1[4] + m2[4])/2
	matrix3 row1 row2 row3 row4
)

EDIT: and of course you need to check for degenerate normals by ensuring that the length of (m1[i] + m2[i]) is greater than some suitably small value.
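That degenerate case is exactly what denisT’s two example matrices above trigger: their X axes are exact opposites, so the sum is the zero vector and cannot be normalized. A quick Python sketch of the check (the helper name and the fallback behaviour are my own, just for illustration):

```python
def safe_average(v1, v2, eps=1e-6):
    # summing two unit vectors that point in opposite directions gives a
    # zero-length vector, which cannot be normalized
    s = tuple(a + b for a, b in zip(v1, v2))
    length = sum(c * c for c in s) ** 0.5
    if length < eps:
        return None  # degenerate: the caller must pick a fallback axis
    return tuple(c / length for c in s)

# the X axes of the two example matrices are exact opposites:
assert safe_average((-1, 0, 0), (1, 0, 0)) is None
```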


#19

Correct, it will end up as a skewed transform. However, you can run orthogonalize on it as well to make sure that it is square.


#20

but if we normalize the first three rows it will kill the scale entirely


#21

That’s entirely true.

It’s a shortcut, and only useful for a subset of transforms.


#22

fine. so if we want to average scales, we have to normalize the vectors before and after the sum, calculate the average scale, and pre-scale the final matrix.

it’s a matter of performance…
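For one row, that recipe looks something like this Python sketch (the helper name is hypothetical, and this still only handles the 50/50 case, per biddle’s caveat above):

```python
import math

def average_rows_with_scale(r1, r2):
    # recover each row's scale, average the directions on the unit
    # sphere (50/50 only), then re-apply the averaged scale
    s1 = math.sqrt(sum(c * c for c in r1))
    s2 = math.sqrt(sum(c * c for c in r2))
    d = [a / s1 + b / s2 for a, b in zip(r1, r2)]
    n = math.sqrt(sum(c * c for c in d))
    avg_scale = (s1 + s2) / 2
    return tuple(avg_scale * c / n for c in d)

# same direction scaled by 2 and by 4 averages to scale 3:
assert average_rows_with_scale((2, 0, 0), (4, 0, 0)) == (3.0, 0.0, 0.0)
```

Which shows where the performance concern comes from: two extra square roots and a division per row, on top of the normalization already needed.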


#23

…and I’d never do it that way either (averaging the vectors).

I prefer to maintain separate components. Something like:

struct  PosRotScale
(
	mPos,
	mRot,
	mScale,
	
	fn applyBlend prs1 prs2 w =  (
		invw = (1 - w)
		mScale = prs1.mScale * invw + prs2.mScale * w
		mRot = slerp prs1.mRot prs2.mRot w
		mPos = prs1.mPos * invw + prs2.mPos * w
		),

	fn fromMatrix3 m = ( mPos = m.translation; mRot = m.rotationpart; mScale = m.scalepart; m ),
		
	fn asMatrix3 = ( translate (rotate (scalematrix mScale) mRot) mPos ),

	fn dump = ( format "Pos: %\nRot: %\nScale: %\nasMatrix3:%\n" mPos (mRot as eulerangles) mScale (asMatrix3()) )
)

m1 = PosRotScale()
m1.fromMatrix3 ((eulerangles 0 50 0) as matrix3)

m2 = PosRotScale()
m2.fromMatrix3 (transMatrix  [10,20,30])

m3 = PosRotScale()
m3.applyBlend m1 m2 0.25

m3.dump()

A more ‘complete’ solution, where memory is traded for speed, would store the matrix too (and perhaps maintain its inverse as well, if you find that you are inverting it a lot).


#24

and we have to check that each pair of vectors is not parallel or oppositely directed, so that’s three more dot products.


#25

Yep, this is why I went with the simple averaging of vectors as it did what I needed it to do at the time.

