
View Full Version : plugin design - WHEN...?


daniel_arz
07-27-2005, 07:08 PM
Hi guys,
I have a general question regarding software design. I'm wondering when to choose to write a member function within a function. I'm writing a Maya API plugin which, within the API guidelines, defines its own compute function. Within this function I have about 30 lines of code that will be iterated. I'm wondering if it would be a good idea to convert these lines into a separate function. The reasons I'm not sure whether to do it or not are:
1) The function would require 9 arguments, which seems like a lot
2) Three of those arguments would be pointers to values calculated within the function
3) Speed is of the essence

One nice thing about writing the function is that the compute function looks very clean. Having the function also makes it a lot easier to reuse in a different context.

D :shrug:

Siladar
07-28-2005, 04:56 PM
Do you need to use the function elsewhere?

If speed is of the essence then you probably don't want the overhead of calling a function. Several pushes onto the stack... pops back off... global references... blah blah blah...

No reason to put it into a function unless you need it elsewhere. Nobody else is going to see your code, right? And 30 lines of code isn't all that much for a loop.

WTB

nurcc
07-29-2005, 07:41 PM
If it really bothers you, then go ahead and break it into a separate function, but mark it with the inline keyword. A good optimizing compiler should be able to inline it for free, basically.
You could verify this with some profiling.

Although looking back on the amount of work in the above paragraph, it'd have to really bother you.
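If you do go that route, the shape of it would be something like this (just a sketch, names invented):

inline void doOneStep( double in, double scale, double& accum )
{
    // stand-in for your 30 lines; "inline" is only a hint, and a decent
    // optimizer will usually inline a small function like this anyway
    accum += in * scale;
}

void computeLoop()   // stand-in for the loop inside compute()
{
    double result = 0.0;
    for ( int i = 0; i < 100; ++i )
        doOneStep( double(i), 0.5, result );
}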

daniel_arz
07-29-2005, 10:11 PM
"Do you need to use the function else where?

No, I don't need the function anywhere else. The compute function is the only one called during playback.

"If speed is of the essence then you probably don't want the overhead of calling a function. Several pop's onto the stack....pop's back off....global references...blah blah blah...."

I understand.

"Although looking back on the amount of work in the above paragraph, it'd have to really bother you."

I guess it doesn't really bother me. I thought breaking it up into a separate function would have adhered to the modular philosophy. But it might be overkill in this instance.

Thank you for the responses guys.

daniel_arz

gga
07-30-2005, 11:54 AM
Hi guys,
I have a general question regarding software design. I'm wondering when to choose to write a member function within a function.

Never. Create another member function, not a function with another function inside.


1) The function would require 9 arguments, which seems like a lot

It probably is.


2) Three of those arguments would be pointers to values calculated within the function

Then pass a single pointer to a struct that contains the 3 results. The function then fills in the members of the struct.
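Something like this (just a sketch, names invented):

struct IterResults
{
    double a;
    double b;
    double c;
};

// The helper takes its inputs plus one pointer to the results,
// instead of three separate out-pointers.
void doIteration( double in0, double in1, IterResults* out )
{
    // ...your 30 lines would go here; these just stand in for them...
    out->a = in0 + in1;
    out->b = in0 - in1;
    out->c = in0 * in1;
}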


3) Speed is of the essence

Don't optimize. You haven't tested the function yet, so you don't know if speed is an issue.

Another possibility is to do multiple loops. Basically, you try to keep no function call inside the iteration. That is, you take your 30 lines and create two functions that are each called just once from your compute. Each one of those functions iterates through whatever it needs and calculates something.




// Instead of...
void function()
{
    // 30 lines of code
}

void compute()
{
    for (int i = 0; i < 100; ++i)
        function();
}

// You do...

void function_a()
{
    for (int i = 0; i < 100; ++i)
    {
        // 15 lines of code
    }
}

void function_b()
{
    for (int i = 0; i < 100; ++i)
    {
        // 15 lines of code
    }
}

void compute()
{
    function_a();
    function_b();
}

daniel_arz
07-30-2005, 08:12 PM
Never. Create another member function, not a function with another function inside.

Provided they are both members of the same class, correct?

Then pass a single pointer to a struct that contains the 3 results. The function then fills in the members of the struct.

I'll do that. I don't know why I didn't think of that.

Don't optimize. You haven't tested the function yet, so you don't know if speed is an issue.

Is there a way to gauge the speed of a function? Some sort of macro, perhaps? Or would comparing the number of lines in the functions be enough?

Another possibility is to do multiple loops. Basically, you try to keep no function call inside the iteration. That is, you take your 30 lines and create two functions that are each called just once from your compute. Each one of those functions iterates through whatever it needs and calculates something.

I'll give that a try. Thanks, Gonzalo.

d

gga
08-01-2005, 02:28 PM
Is there a way to gauge the speed of a function?


Couple of ways. What you want is called profiling.

The most common one is to call a function to get the system clock (see clock() on unix systems and GetTickCount() on Windows) once before the beginning of the code you want to test and once after the function is done. The difference between the two times is the time your function took (usually given in milliseconds).
If your function finishes in less than a second or so, you probably want to add an additional loop to make sure the time taken is significant (5+ secs). And you should also do several test runs to make sure that the timings you get are accurate (sometimes the CPU may lag due to some other process).
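In code, the idea looks roughly like this (just a sketch -- the Windows GetTickCount() version; clock() works the same way on unix):

#include <windows.h>   // GetTickCount()
#include <cstdio>

void codeToTest()
{
    // stand-in for the code you actually want to measure
    volatile double x = 0.0;
    for ( int i = 0; i < 1000000; ++i )
        x += i * 0.5;
}

int main()
{
    const int runs = 1000;                    // repeat until the total is a few seconds
    DWORD start = GetTickCount();
    for ( int i = 0; i < runs; ++i )
        codeToTest();
    DWORD elapsed = GetTickCount() - start;
    std::printf( "total: %lu ms, per run: %f ms\n",
                 (unsigned long)elapsed, double(elapsed) / runs );
    return 0;
}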

All compilers also support the ability to compile your code for profiling and debuggers can often run the code measuring the time of all functions. The way this works varies from compiler to compiler, so you need to scan the manual. Note, however, that for dll/so libraries, this may not work quite as advertised, as the main application is not compiled with profiling. I have never tried profiling a plugin this way so I'm not sure if it would work.

Finally, Alias has a library and mel command that will automatically profile the speed of your node by running it several times. Unfortunately, this library is not shipped with maya so developers don't really have a way to use it.

daniel_arz
08-01-2005, 08:08 PM
Couple of ways. What you want is called profiling.

The most common one is to call a function to get the system clock (see clock() on unix systems and GetTickCount() on Windows) once before the beginning of the code you want to test and once after the function is done. The difference between the two times is the time your function took (usually given in milliseconds).
If your function finishes in less than a second or so, you probably want to add an additional loop to make sure the time taken is significant (5+ secs). And you should also do several test runs to make sure that the timings you get are accurate (sometimes the CPU may lag due to some other process).

All compilers also support the ability to compile your code for profiling and debuggers can often run the code measuring the time of all functions. The way this works varies from compiler to compiler, so you need to scan the manual. Note, however, that for dll/so libraries, this may not work quite as advertised, as the main application is not compiled with profiling. I have never tried profiling a plugin this way so I'm not sure if it would work.

Finally, Alias has a library and mel command that will automatically profile the speed of your node by running it several times. Unfortunately, this library is not shipped with maya so developers don't really have a way to use it.

Thank you for your reply gga.
I used timeGetTime() with timeBeginPeriod() and timeEndPeriod() set to 1 for maximum precision. It printed the results in milliseconds. I'm not really sure how well it worked, because some frames printed out 0 while others were in the range of 13 to 14 milliseconds. Huge discrepancy if you ask me. Could the function run faster than 1 millisecond in one frame and take 14 milliseconds on the next? I then added GetTickCount() and printed those results as well. They were off by no more than 1 millisecond. In any case I would run the same code for different versions of the function, so the results would only need to be relative; accuracy is not that important. However, the zeros were still kind of bothering me, so I looked to MEL for the solution. It just so happens that there is a command called dgTimer that does everything I needed and much more.
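For reference, the timing scaffold I used looked roughly like this (sketch only; it needs winmm.lib and the real compute code in the middle):

#include <windows.h>
#include <mmsystem.h>   // timeGetTime/timeBeginPeriod -- link against winmm.lib
#include <cstdio>

int main()
{
    timeBeginPeriod( 1 );                  // request 1 ms timer resolution
    DWORD start = timeGetTime();
    // ...the compute code under test would go here...
    DWORD elapsed = timeGetTime() - start;
    timeEndPeriod( 1 );                    // restore the previous resolution
    std::printf( "%lu ms\n", (unsigned long)elapsed );
    return 0;
}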

I would still like to understand the huge discrepancy in values. Hopefully in the future I can return to profiling, but for now, back to the API.

D

Robert Bateman
08-02-2005, 11:00 PM
Don't bother trying to profile stuff using timers, that's of no use whatsoever. Get hold of VTune or CodeAnalyst and profile it properly - that will tell you if you need to do anything with much more measurable certainty.

I'd suggest splitting the compute into multiple functions, I do it all the time to ease readability. Function call overhead is really pretty insignificant in this instance, especially when you compare it with what Maya has done to call compute...

Always aim to make it readable, maintainable and simple. If (and only if) the profiler says you have a problem, optimise. Optimised code is far harder to maintain than the un-optimised equivalent - also bear in mind that the bottlenecks in Maya 6.5 may not be the same as those in Maya 8...

daniel_arz
08-03-2005, 11:06 PM
Thank you for your reply Robert.

Don't bother trying to profile stuff using timers, that's of no use whatsoever. Get hold of VTune or CodeAnalyst and profile it properly - that will tell you if you need to do anything with much more measurable certainty.

I will give those a try. Have you tried the dgTimer command in Mel?

I'd suggest splitting the compute into multiple functions, I do it all the time to ease readability. Function call overhead is really pretty insignificant in this instance, especially when you compare it with what Maya has done to call compute...

I understand what you mean. Maya API can be an entirely different animal, correct? I think in the case of the Maya API, since we are working on an abstracted layer anyway, it might be more advantageous to lean towards readability and maintainability rather than optimization. I'd like to profile the two versions of the same code and see how big the timing difference is.

Always aim to make it readable, maintainable and simple. If (and only if) the profiler says you have a problem, optimise. Optimised code is far harder to maintain than the un-optimised equivalent - also bear in mind that the bottlenecks in Maya 6.5 may not be the same as those in Maya 8...

In that case, what do you think of a function that takes 9 arguments?

daniel_a

Robert Bateman
08-04-2005, 02:03 AM
Have you tried the dgTimer command in Mel?

nope.

I understand what you mean. Maya API can be an entirely different animal, correct? I think in the case of the Maya API, since we are working on an abstracted layer anyway, it might be more advantageous to lean towards readability and maintainability rather than optimization. I'd like to profile the two versions of the same code and see how big the timing difference is.

You always get a trade-off between speed and flexibility. Maya took the flexibility route, XSI for instance took the speed route. That's mainly why Maya's so much easier to develop for, but then XSI can handle about 10x the geometry of Maya (at the expense of its API).

If the compute function is heavy, the best optimisation with Maya is to think about how to split the node so that the DG does not need to compute as often or as much. i.e., rather than...

compute( var )
{
    if ( var == result )
    {
        // compute A
        // compute B
        result = A + B;
    }
}

if you make A and B output attributes on the node, you can do:

compute( var )
{
    if ( var == A )
    {
        // calculate A
    }
    else if ( var == B )
    {
        // calculate B
    }
    else if ( var == result )
    {
        result = A + B;
    }
}

when the result is requested, it'll simply add the two vars. A and B will only be re-computed if the upstream graph of A or B is dirty. Basically the DG works on lazy evaluation, so if you get the attributeAffects set up in a clever way, you can often cut out the actual amount of computation required. That's pretty much the only way to optimise the compute func...
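The initialize() side of that is roughly this (sketch only -- attribute names invented, and the usual creator()/typeId plumbing left out):

#include <maya/MPxNode.h>
#include <maya/MFnNumericAttribute.h>
#include <maya/MFnNumericData.h>

// Hypothetical node: aInput drives aA and aB, which in turn drive aResult.
class MyNode : public MPxNode
{
public:
    static MStatus initialize();
    static MObject aInput, aA, aB, aResult;
};

MObject MyNode::aInput, MyNode::aA, MyNode::aB, MyNode::aResult;

MStatus MyNode::initialize()
{
    MFnNumericAttribute nAttr;
    aInput  = nAttr.create( "input",  "in", MFnNumericData::kDouble, 0.0 );
    aA      = nAttr.create( "outA",   "oa", MFnNumericData::kDouble, 0.0 );
    aB      = nAttr.create( "outB",   "ob", MFnNumericData::kDouble, 0.0 );
    aResult = nAttr.create( "result", "r",  MFnNumericData::kDouble, 0.0 );

    addAttribute( aInput );
    addAttribute( aA );
    addAttribute( aB );
    addAttribute( aResult );

    // Requesting aResult only recomputes aA/aB when something
    // upstream of them has been dirtied.
    attributeAffects( aInput, aA );
    attributeAffects( aInput, aB );
    attributeAffects( aA, aResult );
    attributeAffects( aB, aResult );
    return MS::kSuccess;
}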

In that case, what do you think of a function that takes 9 arguments?

I suggest that any more than 5 is bad because you won't remember what they are next week ;) Either package the args in a struct and pass a reference (or pointer), or just use member variables of the class. The difference is that with a struct you will need 2 pointers (1 to this, 1 to the struct) and with member variables that simply drops to 1 (this). You do know that member variables are fine within an MPxNode-derived class? They work the same as in normal C++ classes, it's just that they can't form part of the DG.
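Roughly, the two options look like this (sketch only, names invented):

struct ComputeArgs
{
    double a, b, c;                 // inputs (nine of them in your case)
    double outX, outY, outZ;        // results the helper fills in
};

class MyNode                        // stand-in for your MPxNode-derived class
{
public:
    MyNode() : m_a( 0.0 ), m_b( 0.0 ), m_outX( 0.0 ) {}

    // Option 1: one reference instead of nine separate arguments.
    void doIteration( ComputeArgs& args )
    {
        args.outX = args.a + args.b;
        args.outY = args.b + args.c;
        args.outZ = args.a * args.c;
    }

    // Option 2: plain member variables; only 'this' gets passed around.
    void doIterationUsingMembers()
    {
        m_outX = m_a + m_b;
    }

private:
    double m_a, m_b, m_outX;
};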

(fyi __fastcall will only ever work with 5 args max...).

mujambee
08-04-2005, 04:56 PM
I would still like to understand the huge discrepancy in values. Hopefully in the future I can return to profiling, but for now, back to the API.
D

The standard OS timer has an accuracy that varies among different systems, but I seem to recall it was in the 10ms range the last time I tried to use it (PIII 650MHz). One possible explanation is that your system has an accuracy of 13 ms, which means that if your function started and finished in the same 13ms slice, it reads 0, but if it started in one slice and finished in the next, you read 13. This would mean that your function took something in the range 0 to 13 the first time and 1 to 26 the second.

Try timeGetDevCaps and see what it says.
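Something like this will print the timer's resolution range (sketch; link against winmm.lib):

#include <windows.h>
#include <mmsystem.h>
#include <cstdio>

int main()
{
    TIMECAPS tc;
    if ( timeGetDevCaps( &tc, sizeof(tc) ) == TIMERR_NOERROR )
        std::printf( "timer resolution: %u to %u ms\n",
                     tc.wPeriodMin, tc.wPeriodMax );
    return 0;
}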

daniel_arz
08-04-2005, 10:04 PM
If the compute function is heavy, the best optimisation with Maya is to think about how to split the node so that the DG does not need to compute as often or as much. i.e., rather than...

when the result is requested, it'll simply add the two vars. A and B will only be re-computed if the upstream graph of A or B is dirty. Basically the DG works on lazy evaluation, so if you get the attributeAffects set up in a clever way, you can often cut out the actual amount of computation required. That's pretty much the only way to optimise the compute func...

I think I understand why you chose the flexible approach. The first reason is probably because Maya has chosen that route, but also, if your code is written so that different variables are computed depending on which attribute has its dirty bit set, then it makes sense to compartmentalize the compute function into more basic parts and then use those parts within the different if (plug == ...) statements, correct? It would make a complex process more manageable.

or just use member variables of the class... and with member variables that simply drops to 1 (this). You do know that member variables are fine within an MPxNode-derived class? They work the same as in normal C++ classes, it's just that they can't form part of the DG.

Do you find yourself using member variables because of your method of optimizing the compute function?

(fyi __fastcall will only ever work with 5 args max...).

Yeah, 9 seems like too much. I will try to break the function into two parts.

daniel

daniel_arz
08-04-2005, 10:07 PM
The standard OS timer has an accuracy that varies among different systems, but I seem to recall it was in the 10ms range the last time I tried to use it (PIII 650MHz). One possible explanation is that your system has an accuracy of 13 ms, which means that if your function started and finished in the same 13ms slice, it reads 0, but if it started in one slice and finished in the next, you read 13. This would mean that your function took something in the range 0 to 13 the first time and 1 to 26 the second.

Try timeGetDevCaps and see what it says.

Wouldn't timeBegin/EndPeriod(1) set the timer precision to 1 millisecond? I will try timeGetDevCaps. I'll also look into CodeAnalyst and VTune.

daniel
