View Full Version : Comparing vectors spoilt by rounding errors

Cableguy
08 August 2003, 09:11 AM
As I program using vectors I'm finding most of my bugs are caused by rounding errors in the vector arithmetic, and now I've stumbled on one I don't know how to resolve - but it seems it shouldn't be too hard if I had a better grasp of vectors...

if (vnorm(vectorA) == vnorm(vectorB))

How can I alter the above expression to account for the 0.000001 rounding errors I'm getting on either side of the comparison? Like: [0.698740,0.132414,0.703014] vs. [0.698741,0.132415,0.703014] - the two are really the same, but they're not matching because of that tiny difference.

I'm stumped because I can't just compare the lengths and account for the difference; I'm after the directions of the vectors (hence the use of the vnorm() function). How's it done?
playmesumch00ns
08 August 2003, 10:17 AM
You just have to check that the lengths of the vectors are the same to within some epsilon value. This goes for any floating point comparison really. Your value of epsilon can be anything you want, say 0.0001 perhaps; it all comes down to what you're doing.

So you just replace

a == b

with

(a < b + epsilon) && (a > b - epsilon)

I don't know much about Coffee (is that what you're using?) but you basically want a function something like

bool isParallel(vector A, vector B)
{
    vector nA = vnorm(A);
    vector nB = vnorm(B);

    float epsilon = 0.00001;

    if (nA.x < nB.x + epsilon && nA.x > nB.x - epsilon)
        if (nA.y < nB.y + epsilon && nA.y > nB.y - epsilon)
            if (nA.z < nB.z + epsilon && nA.z > nB.z - epsilon)
                return true;

    return false;
}
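For reference, here's the same component-wise test sketched outside COFFEE (Python here, with vnorm reimplemented since that name is C4D's):

```python
import math

def vnorm(v):
    """Unit-length copy of v (assumes v is non-zero)."""
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def is_parallel(a, b, eps=1e-5):
    """True when the normalised vectors agree component-wise within eps."""
    na, nb = vnorm(a), vnorm(b)
    return all(abs(x - y) < eps for x, y in zip(na, nb))
```

Note that, like the COFFEE version, this treats opposite-pointing vectors as different directions, since their normalised components differ in sign.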

aurora
08 August 2003, 02:21 PM
Something playmesumch00ns stated needs repeating: "This goes for any floating point comparison really."
Floating point can be a real pain in the butt sometimes. In fact I once worked on a program with a few fp variables wherein 0 never equaled 0. My epsilon was 0.0000001 to get it to work right!

Cableguy
08 August 2003, 03:03 PM
I was hoping for a shorter solution, but I guess there isn't one; the method playmesumch00ns offered works fine, so thanks!

My problem with rounding errors hasn't gone completely away though, it seems dependent on the scale of the object. The epsilon value of say 0.000001 works at one size but not at another, where a value of 0.001 may be needed. (But the 0.001 value is wrong in cases where 0.000001 is appropriate.)

Has anyone got a standard routine they use to calculate the optimal epsilon value based on the size of the object? Seems to me that's the only way to go, hardcoding it won't cover all cases. I'm not sure how to calculate it.
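One common answer to this (a sketch, not COFFEE - shown in Python here) is a *relative* tolerance that scales with the magnitudes being compared, plus an absolute floor for values near zero, much like Python's own math.isclose:

```python
def almost_equal(a, b, rel_tol=1e-6, abs_tol=1e-9):
    """Tolerance scales with the larger magnitude, so one setting works
    for tiny objects and 10000000-unit objects alike. abs_tol catches
    comparisons near zero, where a relative tolerance alone would
    effectively demand exact equality."""
    return abs(a - b) <= max(rel_tol * max(abs(a), abs(b)), abs_tol)
```

With this there's no need to hand-tune epsilon per object size: the allowed error grows automatically with the values involved.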

playmesumch00ns
08 August 2003, 03:48 PM
It is a bit long-winded I'm afraid, but that's the way it goes. This sort of thing is a lot easier in an OOP language where you can just overload the == operator for any given type.

It would be very hard to generate an epsilon automatically for any case, for exactly the sort of problems you're encountering. The problem comes down to errors in a chain of calculations getting magnified by some process or other (in this case it would seem to be object size). This is down to the software you're using, and if you're getting errors as large as 0.001, well that's pretty bad!

Basically I'd advise just sticking with 0.001 and see how that works. You just have to decide how accurate you need the comparison to be!

stew
08 August 2003, 10:19 PM
In any case, it's very useful to define epsilon as a global constant and to declare your own functions isequal(V1, V2) and iszero(x), both using epsilon.

astrofish
08 August 2003, 02:10 PM
FWIW, a shorter version which is independent of orientation (the error region is spherical rather than a cube) is:

if (vlen(vnorm(A) - vnorm(B)) < epsilon) {
    ...
}

Note that by the time it's been expanded to assembly it will probably be more complex than the previously suggested method though. You could also compare epsilon squared (pre-calculated) against the squared length and save yourself a square root.
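That squared-comparison trick might look like this (a Python sketch, not astrofish's actual code):

```python
import math

def same_direction(a, b, eps=1e-5):
    """Spherical tolerance test with the final square root removed:
    instead of vlen(vnorm(A) - vnorm(B)) < eps, compare the *squared*
    distance against a pre-squared epsilon."""
    la = math.sqrt(sum(c * c for c in a))  # the lengths still need a sqrt each
    lb = math.sqrt(sum(c * c for c in b))
    d2 = sum((x / la - y / lb) ** 2 for x, y in zip(a, b))
    return d2 < eps * eps
```

Only the outer vlen() sqrt is saved; the two normalisations still each cost one.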

Regarding floating point errors. Yes, you've got to be really careful. You've got to decide on a case by case basis how equal is equal, depending upon what you are doing.

Unless you know exactly what you are doing, you really shouldn't use standard == with floating point numbers.

It's also important to always do calculations in the same way. For example, suppose you want to calculate a point F at a fraction f along edge AB.
This is typically done by: F = A + f*(B-A).
If you do the same thing from the other end, you've got: F = B + (1-f)*(A-B).
Mathematically these are identical, but with floating point you probably won't get the same result.
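A quick numerical experiment (Python, random inputs purely for illustration) shows how often the two forms disagree:

```python
import random

# Compare F = A + f*(B - A) with F = B + (1 - f)*(A - B) over many
# random inputs: mathematically identical, but rounded differently.
random.seed(42)  # fixed seed so the run is deterministic
mismatches = 0
max_diff = 0.0
for _ in range(10000):
    a = random.uniform(-1000.0, 1000.0)
    b = random.uniform(-1000.0, 1000.0)
    f = random.random()
    f1 = a + f * (b - a)          # traversing the edge from A
    f2 = b + (1.0 - f) * (a - b)  # traversing the same edge from B
    if f1 != f2:
        mismatches += 1
    max_diff = max(max_diff, abs(f1 - f2))

# The differences are tiny (a few ulps) but frequently non-zero,
# which is exactly what cracks a shared edge between two polygons.
```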

This particular example can easily crop up, for example if you've got two adjacent polygons and you want to split a common edge. If you're not careful then the extra vertex position calculated for the two polys will not be the same since adjacent polys will usually traverse edges in opposite directions...

Similar issues arise when deciding which polygon of a meshed surface a particular point belongs to. If you're not careful then you can get cracks between polys...

Cheers - Steve

Cableguy
08 August 2003, 03:06 PM
Thanks guys!

I really do need to have a different epsilon value for different scale of object, setting it to the lowest common denominator creates new errors where a smaller value would work because the margin of error defined by the epsilon is too high then.

Unless someone comes up with a better idea, I'll solve this by finding the optimum epsilon at two extremes of scale, then calculate a linear ratio to get the value roughly right at each runtime.

astrofish
08 August 2003, 04:16 PM
Actually, since you are comparing _normalised_ vectors, I'm a bit surprised that the scale of the original objects is that significant. Hmm...

Cheers - Steve

Skidbladnir
08 August 2003, 04:39 PM
You might be able to reduce the rounding error by reducing the number of operations, i.e. get rid of those evil vnorms.

if (dot(A, B) >= (1 - epsilon) * sqrt(dot(A, A) * dot(B, B))) parallel = true;
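A sketch of that test in Python (assuming dot() is the usual dot product): it checks that the cosine of the angle between A and B is at least 1 - epsilon, with neither vector normalised.

```python
import math

def is_parallel_dot(a, b, eps=1e-6):
    """Skidbladnir's test: dot(A,B) >= (1-eps)*|A|*|B|. One sqrt remains,
    but there's no per-component vnorm on either side."""
    dot_ab = sum(x * y for x, y in zip(a, b))
    return dot_ab >= (1 - eps) * math.sqrt(
        sum(x * x for x in a) * sum(y * y for y in b))
```

Note that anti-parallel vectors fail this test outright, since their dot product is negative.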

Cableguy
08 August 2003, 06:02 PM
Thanks Skidbladnir, that will come in useful when I optimise my code.

Astrofish, I wasn't talking about the comparison expression in particular, I was talking in general. I've actually just narrowed it down somewhat and found what causes the biggest errors I get - a bit of code that uses the PointLineDistance() function - C4D Coffee - with a vnorm()-ed vector as one of the operands. At the upper extreme of scale, where the object is x=10000000 units big, I get an error of 0.008!

But no matter really, I'll stick to the scale that works. And avoid programming vectors if I can help it! :wise:

Katachi
08 August 2003, 02:53 AM
Originally posted by Cableguy
Thanks Skidbladnir, that will come useful when I optimise my code.

Astrofish, I wasn't talking about the comparison expression in particular, I was talking in general. I've actually just narrowed it down somewhat, have found what causes the biggest errors I get - a bit of code that uses the PointLineDistance() function - C4D Coffee - with a vnorm()-ed vector as one of the operators. At the upper extreme of scale, where the object is x=10000000 units big I get an error of 0.008!

But no matter really, I'll stick to the scale that works. And avoid programming vectors if I can help it! :wise:

what do you actually need this for (8 decimal places of floating point... I don't even know if COFFEE can really handle this)?

Btw. when optimizing code you'd better avoid sqrt functions, for they take time. Find another solution, especially when using COFFEE.

Best
Samir

Cableguy
08 August 2003, 01:14 PM
Originally posted by Designer
what do you actually need this for (an 8-floating point... I don't even know if COFFEE can really handle this)?

Hi Samir. I get the precision of Coffee from the Console printout, where it routinely uses the 0.000000 format. However, it's actually even more accurate than that because I've noticed it sometimes not matching values that were equal up to those dot-six figures. That's what made me settle on the 0.000001 epsilon to begin with, until I noticed the scaling problem.

I'm using this stuff for a plugin - deforming a hi res mesh with a lo res mesh, like subdivision nurbs, but 'post facto' as it were. Need it primarily to deform faces, coz I hate bones, ffd, etc. After a lot of time and effort in this I thought I'd find a way of releasing it commercially, but now I won't bother as I've run out of time to fix the scaling issue. Might reconsider in the future, for now it's back to the art for me! :thumbsup:

playmesumch00ns
08 August 2003, 09:04 AM
Why have you found it necessary to have an object that's like 10000000 units big? This is always going to throw up problems for exactly the reasons we've already gone into. A rounding error of 0.01 on a scale of 10000000 is pretty good, after all it's only 1 part in a billion. If you keep the scale of your scene more sensible I think you will have fewer problems.

Cableguy
08 August 2003, 11:22 AM
Well I played with scale for two reasons - to better understand the way rounding errors happened and to make the plugin commercially viable. I think it's quite easy to fix, just need to calculate the scale before setting the epsilon, something I might do when I have the time.

Oreo
08 August 2003, 11:14 AM
Well, first off I'd like to say I'm really new to programming and just trying to start to get into programming for 3D apps, so I'm no expert by far.

I don't know if this would work in COFFEE, however I solved a C++ problem this way and I know COFFEE is somewhat similar.

The way to find the value your float epsilon needs to be is by dividing by a third value and comparing, such as in the code below:

{
    Xa = 0.698740;   // one value you gave
    Xb = 0.698741;
    TempXa;
    TempXb;
    iEpsilon = 10.0; // starting value for the float epsilon search
    do {
        iEpsilon *= 0.1;
        TempXa = Xa % iEpsilon;
        TempXb = Xb % iEpsilon;
    } while (TempXa == TempXb);
    iEpsilon *= 10;  /* This sets it back to the last value at which Xa and Xb
                        still compared equal, because on the final pass they differ */
}

I don't know if this would work, but I thought I'd give my idea.

CGTalk Moderation
01 January 2006, 10:00 PM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.
