I’m working on a C# library for 3ds Max, and everything works correctly. If I want to increase performance, is there an easy way to write some custom C++ functions and call them from C#?
Let’s say I need a function that rapidly sets the transform for all objects in the scene.
Calling a C++ function from C#
You can wrap the C++ as if it were native C#, compile it with the C++/CLI option set, and then call it from C#.
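For illustration only (not from the reply above; every assembly, class and function name here is hypothetical): a C++/CLI wrapper assembly is simply referenced and consumed like any other .NET class, while plain C exports can alternatively be called through P/Invoke.

using System.Runtime.InteropServices;

static class NativeTransforms
{
    // Option A: a C++/CLI wrapper assembly (e.g. a hypothetical MaxNativeWrapper.dll)
    // is added as a project reference and used like a normal .NET type:
    //     var tools = new MaxNativeWrapper.TransformTools();
    //     tools.SetAllTransformsZ(10.0f);

    // Option B: a plain C export called via P/Invoke; the native side would be
    //     extern "C" __declspec(dllexport) void SetAllTransformsZ(float offsetZ);
    [DllImport("MaxNativeTools.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern void SetAllTransformsZ(float offsetZ);
}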
Did you try using unsafe pointer reads/writes in C# to improve the performance? You can achieve a pretty solid speedup.
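As a minimal sketch of that idea (my example, not from the reply; it needs the project’s “allow unsafe code” option): shift the Z components of a tightly packed float array of translations without bounds checks, then write the results back through the SDK in a normal loop.

static unsafe void OffsetZ(float[] xyz, float offset)
{
    // xyz is laid out as [x0, y0, z0, x1, y1, z1, ...]
    fixed (float* p = xyz)
    {
        for (int i = 2; i < xyz.Length; i += 3)
            p[i] += offset;
    }
}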
Single Threaded Sub-Systems of 3ds Max
3ds Max’s reference system and node evaluation system are both single threaded. Thus trying to work with these sub-systems from multiple threads is unsafe and can lead to issues such as race conditions or crashes.
According to this, you can get and modify TMs in parallel, but you cannot set transforms in parallel.
...
static public void SetSceneTransforms(int variant)
{
    var global = GlobalInterface.Instance;

    // collect the direct children of the scene root
    var nodes = new List<IINode>(global.COREInterface14.RootNode.NumberOfChildren);
    for (int i = 0; i < global.COREInterface14.RootNode.NumberOfChildren; i++)
    {
        nodes.Add(global.COREInterface14.RootNode.GetChildNode(i));
    }

    if (variant < 0) return;

    var rand = new Random(12345);
    var interval = global.Interval.Create();
    int t = global.COREInterface.Time;

    if (variant == 0)
    {
        // do all the work in a single thread
        for (int j = 0; j < nodes.Count; j++)
        {
            var tm = nodes[j].GetNodeTM(t, interval);
            tm.Trans.Z += (float)(rand.NextDouble() * 10);
            nodes[j].SetNodeTM(t, tm);
        }
    }

    if (variant == 1)
    {
        // do all the work in a single thread with ref messages disabled, then invalidate the TMs
        global.DisableRefMsgs();
        for (int j = 0; j < nodes.Count; j++)
        {
            var tm = nodes[j].GetNodeTM(t, interval);
            tm.Trans.Z += (float)(rand.NextDouble() * 10);
            nodes[j].SetNodeTM(t, tm);
        }
        global.EnableRefMsgs();
        for (int j = 0; j < nodes.Count; j++)
        {
            nodes[j].InvalidateTM();
        }
        return;
    }

    if (variant == 2)
    {
        // collect and modify the TMs in parallel, set them in a single thread
        // (note: System.Random is not thread-safe; a per-thread RNG would be safer here)
        var TMs = new IMatrix3[nodes.Count];
        Parallel.For(0, nodes.Count, j =>
        {
            var tm = nodes[j].GetNodeTM(t, interval);
            tm.Trans.Z += (float)(rand.NextDouble() * 10);
            TMs[j] = tm;
        });
        for (int j = 0; j < nodes.Count; j++)
        {
            nodes[j].SetNodeTM(t, TMs[j]);
        }
    }

    if (variant == 3)
    {
        // collect and modify the TMs in parallel, set them with ref messages disabled, then invalidate
        var TMs = new IMatrix3[nodes.Count];
        Parallel.For(0, nodes.Count, j =>
        {
            var tm = nodes[j].GetNodeTM(t, interval);
            tm.Trans.Z += (float)(rand.NextDouble() * 10);
            TMs[j] = tm;
        });
        global.DisableRefMsgs();
        for (int j = 0; j < nodes.Count; j++)
        {
            nodes[j].SetNodeTM(t, TMs[j]);
        }
        global.EnableRefMsgs();
        for (int j = 0; j < nodes.Count; j++)
        {
            nodes[j].InvalidateTM();
        }
    }
}
test script
(
    tea = for i = 1 to 10000 collect Teapot pos:(random -[1000,1000,0] [1000,1000,0])
    for i = 0 to 3 do
    (
        t1 = timestamp(); hf = heapfree
        SetSceneTransforms i
        format "% undo on Time: %sec. Mem: %\n" i ((timestamp()-t1)/1000 as float) (hf-heapfree)
        redrawViews()

        t1 = timestamp(); hf = heapfree
        undo off SetSceneTransforms i
        format "% undo off Time: %sec. Mem: %\n" i ((timestamp()-t1)/1000 as float) (hf-heapfree)
        redrawViews()
    )
)
my timings
0 undo on Time: 0.13sec. Mem: 64L
0 undo off Time: 0.146sec. Mem: 64L
1 undo on Time: 0.06sec. Mem: 64L
1 undo off Time: 0.062sec. Mem: 64L
2 undo on Time: 0.138sec. Mem: 64L
2 undo off Time: 0.14sec. Mem: 64L
3 undo on Time: 0.058sec. Mem: 64L
3 undo off Time: 0.059sec. Mem: 64L
There is practically no difference between the parallel and single-threaded variants; the gain comes from disabling ref messages (variants 1 and 3).
I think we can set the transform for a billion scene nodes pretty quickly using pure MXS. The only slowdown in this case is the MXS loops.
What performance improvement do you expect to get by using C++ with C#? I don’t see any… except when you want to do some matrix algebra. But that’s a different story.
I mostly avoid jumping into C++, as I’m not good at it. But yes, sometimes it becomes necessary to do matrix calculations for vertex operations and rigging. Setting transforms was just a simple example to illustrate my question.
I just updated my code with DisableRefMsgs, but now I have a problem with complex rigs (mixed biped and bones).
I’m not an animation guy, so I can’t really tell you how to tackle that. But I assume you should set/invalidate the TMs of all dependents as well (since none of them receive a TM-change message when their parent’s TM changes). I’m not sure whether any performance improvement survives after that.
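For example, something along these lines could walk the hierarchy below each modified node and invalidate the cached TMs (a sketch built only from the calls already used above; whether this is actually enough for biped/CAT rigs is exactly the open question):

static void InvalidateSubtreeTMs(IINode node)
{
    // invalidate this node's cached TM, then recurse into all of its children
    node.InvalidateTM();
    for (int i = 0; i < node.NumberOfChildren; i++)
        InvalidateSubtreeTMs(node.GetChildNode(i));
}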
To demonstrate what I want to achieve, I created a C# project (demo and source code are attached). This tool saves the pose of all objects in the scene to an XML file; you can then modify the objects and blend back to the saved pose using a slider. It works on simple scenes, but my goal is to make it compatible with all Max rigs (Biped, CAT and custom bones) and also to increase the performance.
@denisT, do you think we should switch to C++ for this? TestProject(2020).rar (3.2 MB)
I played with it just a little, and it is clear that you need to somehow hash the TMs so you can check whether two TMs are already equal and therefore don’t require any blending, in which case you simply skip the SetNodeTM call. Check your PM for the source files.
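For illustration, the skip could look roughly like this (my sketch, not the attached sources; Matrix3String is the helper shown further below, and the stored-hash lookup is hypothetical):

static void ApplyPose(Dictionary<string, string> storedHashes, List<IINode> nodes, int t, IInterval interval)
{
    foreach (var node in nodes)
    {
        var tm = node.GetNodeTM(t, interval);
        string saved;
        if (storedHashes.TryGetValue(node.Name, out saved) && saved == Matrix3String(tm))
            continue; // the transform already matches the saved pose, nothing to blend or set
        // ... otherwise compute the blended TM here and call node.SetNodeTM(t, blendedTm)
    }
}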
<Object Name="Bip001 Spine">
  <Transform Hash="[5.90464E-08,-0.0007963198,0.9999997][-1.402106E-06,-0.9999997,-0.0007963198][1,-1.402059E-06,-6.01629E-08][-73.05801,-21.80891,19.57084]">
    <Row0 X="5.90464E-08" Y="-0.0007963198" Z="0.9999997" />
    <Row1 X="-1.402106E-06" Y="-0.9999997" Z="-0.0007963198" />
    <Row2 X="1" Y="-1.402059E-06" Z="-6.01629E-08" />
    <Row3 X="-73.05801" Y="-21.80891" Z="19.57084" />
  </Transform>
</Object>
The code I used to make the ‘hash’ (you can implement something more performant):
static string Point3String(IPoint3 pt)
{
    return string.Format("[{0},{1},{2}]", pt.X, pt.Y, pt.Z);
}

static string Matrix3String(IMatrix3 m3)
{
    return string.Concat(Point3String(m3.GetRow(0)), Point3String(m3.GetRow(1)), Point3String(m3.GetRow(2)), Point3String(m3.GetRow(3)));
}
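If the string concatenation ever shows up in profiling, one possible alternative (a sketch, not from the thread) is to build a numeric hash straight from the float components; like any hash it can collide, so an exact comparison of the rows remains the safe final check.

static int Point3Hash(IPoint3 pt)
{
    unchecked
    {
        int h = 17;
        h = h * 31 + pt.X.GetHashCode();
        h = h * 31 + pt.Y.GetHashCode();
        h = h * 31 + pt.Z.GetHashCode();
        return h;
    }
}

static int Matrix3Hash(IMatrix3 m3)
{
    unchecked
    {
        int h = 17;
        for (int r = 0; r < 4; r++)
            h = h * 31 + Point3Hash(m3.GetRow(r)); // rows 0-2: rotation/scale, row 3: translation
        return h;
    }
}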
I saw your code. Yes, it will remove the unnecessary set-transform calls, thanks!
Another point: DisableRefMsgs also increases the performance impressively.
But the loading issue still exists with the biped. Please find the attached .fig file and load it on your biped to see the problem I’m talking about (rotate the clavicle bone). TestFigure.fig (14.7 KB)
Yes, the bones didn’t change their positions when I moved the slider. Maybe you need to use .SetBipedTM from IBipMaster; just guessing.
Maybe you didn’t exit the figure mode?
For me the problem is that when I rotate the clavicle, for example, the entire arm moves back and forth randomly during blending.
I couldn’t find it in either IBipMaster or IIBipMaster:
public static Autodesk.Max.IBipMaster GetIBipMasterInterface(IINode node)
{
    // query the biped master interface (id 0x9167) from the node's TM controller
    // and marshal the native pointer to the managed wrapper
    var ptr = ((INativeObject)node.TMController.GetInterface((InterfaceID)0x9167)).NativePointer;
    return (Autodesk.Max.IBipMaster)Autodesk.Max.Wrappers.CustomMarshalerBipMaster.GetInstance(string.Empty).MarshalNativeToManaged(ptr);
}

public static IIBipMaster GetIIBipMasterInterface(IINode node)
{
    // the newer IBipMaster12 interface can be obtained directly through the global interface
    return globalInterface.IBipMaster12.GetBipMaster12Interface(node.TMController);
}
Oh, that… yes, I didn’t exit the mode.
Maybe that’s because you set the TMs in arbitrary order? I don’t know whether it makes any difference if we set the TMs starting with the children and go up the hierarchy to the parents. That’s a question for somebody more knowledgeable about animation.
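If somebody wants to experiment with that, collecting the nodes in a deterministic hierarchy order is easy with the calls already used above (my sketch, not from the thread; the list comes out parent-before-child, so reverse it to try the children-first order):

static List<IINode> CollectHierarchyOrder(IINode root)
{
    // depth-first walk from the scene root: every parent is added before its children
    var result = new List<IINode>();
    var stack = new Stack<IINode>();
    for (int i = 0; i < root.NumberOfChildren; i++)
        stack.Push(root.GetChildNode(i));
    while (stack.Count > 0)
    {
        var node = stack.Pop();
        result.Add(node);
        for (int i = 0; i < node.NumberOfChildren; i++)
            stack.Push(node.GetChildNode(i));
    }
    return result;
}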
SetBipedTM is in the IIBipDriver9 interface.
It should be present in IBipMaster9+ (AD renamed classes and interfaces because of this master/slave bullshit), and in newer versions it became IBipDriver9.
But is it really a good idea to store poses as world coordinates in the first place? .SetBipedTM expects a world TM.
If you move the whole rig just a tiny bit, then all your TMs must update, since all of them now differ from what is stored in the XML, even though the pose remained exactly the same.
It is not ideal, but it’s the only method that comes to my mind for now. To move the entire biped or rig we would move the root object, but any other mechanism is welcome.