Anybody here own a Perception Neuron Kit from Noitom? The kind folks over at Noitom have sent me a 32 PN Kit to evaluate a possible Cinema 4D plugin. If there are enough of you out there, I will look into it and could release it for R17 upwards.
I should have added a “Would like a Kit but don’t currently have one” option in there as well. So please just post here if both the Kit and the Plugin interest you. Thanks.
I’m interested in buying one but don’t see it happening.
I have one.
I recently acquired one, but what would the plugin do? At the moment, I import the capture as FBX or BVH, and the more pressing question is how to (comfortably) retarget and smooth out the motion. It might be nice to connect the suit “live” when working in a team, but for a single person mocapping, you can’t sit at the machine and wear the suit at the same time anyway.
I agree with Cairyn. It wouldn’t hurt to see the mocap live in Cinema, but the real issue in C4D is retargeting, cleaning and IK/FK blending.
So it’s what comes AFTER the raw capture that’s a problem.
During capture, you have to have Perception Neuron’s Axis Neuron software open anyway, so you see the mannequin in real time.
My current workflow involves:
- Capturing the raw data in the Axis Neuron software, exporting as BVH with a T-pose.
- Sending it to IKinema/bvhacker for retargeting, cleaning, zeroing, key reduction, etc. (depending on the final use I might record straight to iClone, but I like the auto-cleaning in IKinema).
- Importing into iClone for animation layering, chaining, correction, blending, etc. It uses Autodesk’s HumanIK, so you can easily add IK corrections on top of FK motion. In effect it’s a cheaper alternative to MotionBuilder.
- Exporting the final processed FBX or BVH motion into C4D for rendering.
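A pipeline like this lives and dies by the BVH interchange step. As a rough, hedged illustration (pure Python, a hypothetical minimal parser, not the actual Axis Neuron format handling), here is how little code it takes to pull joint names and rest offsets out of the HIERARCHY block of a BVH export:

```python
# Minimal sketch: extract joint names and offsets from a BVH HIERARCHY
# block, the kind Axis Neuron exports. Illustration only -- a real
# parser must also handle CHANNELS, End Sites, and the MOTION section.

SAMPLE_BVH = """HIERARCHY
ROOT Hips
{
    OFFSET 0.0 0.0 0.0
    CHANNELS 6 Xposition Yposition Zposition Zrotation Xrotation Yrotation
    JOINT Spine
    {
        OFFSET 0.0 10.0 0.0
        CHANNELS 3 Zrotation Xrotation Yrotation
        End Site
        {
            OFFSET 0.0 15.0 0.0
        }
    }
}
"""

def parse_joints(bvh_text):
    """Return a list of (joint_name, (x, y, z) offset) tuples."""
    joints = []
    pending = None  # name of the joint whose OFFSET we haven't seen yet
    for line in bvh_text.splitlines():
        tokens = line.split()
        if not tokens:
            continue
        if tokens[0] in ("ROOT", "JOINT"):
            pending = tokens[1]
        elif tokens[0] == "OFFSET" and pending is not None:
            joints.append((pending, tuple(float(v) for v in tokens[1:4])))
            pending = None  # End Site offsets are skipped this way
    return joints

print(parse_joints(SAMPLE_BVH))
# [('Hips', (0.0, 0.0, 0.0)), ('Spine', (0.0, 10.0, 0.0))]
```

The hierarchy part is trivial; the real work in any importer is mapping the MOTION rows back through each joint’s channel order.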
So for a mocap solution to be really useful in C4D, it would need these three things:
1) Stream/import data onto a standard rig that allows you to layer IK adjustments on top of the FK motion in the C4D NLE
2) Bake/filter/optimize keys (the NLE can do some of that; some plugins do it better)
3) Retarget that processed motion to custom rigs (preferably with presets for the most common standards like Daz, Mixamo, MoBu…)
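Point 2 is the most mechanical of the three. As a rough sketch of the idea (a single float channel, linear interpolation only; a real tool would work per curve and respect tangents), key reduction just drops any key that interpolation between its neighbors would reproduce within a tolerance:

```python
# Sketch of key reduction: drop any key whose value linear
# interpolation between the last kept key and the next key would
# reproduce within `tol`. Greedy one-pass decimation, illustration only.

def reduce_keys(keys, tol=0.1):
    """keys: list of (time, value) tuples; returns a reduced list."""
    if len(keys) <= 2:
        return list(keys)
    kept = [keys[0]]
    for i in range(1, len(keys) - 1):
        t0, v0 = kept[-1]
        t1, v1 = keys[i]
        t2, v2 = keys[i + 1]
        # value a straight line from (t0, v0) to (t2, v2) predicts at t1
        predicted = v0 + (v2 - v0) * (t1 - t0) / (t2 - t0)
        if abs(predicted - v1) > tol:
            kept.append(keys[i])  # key carries real information, keep it
    kept.append(keys[-1])
    return kept

# A nearly linear ramp with one genuine direction change at t=3:
curve = [(0, 0.0), (1, 1.0), (2, 2.02), (3, 3.0), (4, 2.0)]
print(reduce_keys(curve))
# [(0, 0.0), (3, 3.0), (4, 2.0)]
```

Mocap bakes a key on every frame, so even a crude filter like this can cut the key count dramatically before hand-editing.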
Unfortunately, that’s a lot of work for a single developer and a relatively small market at the moment.
Firstly, you can think of it as not having to go through a separate application to do the motion capture: you could record your mocap directly in Cinema 4D, and create and manage multiple captures. You could also drive an existing rig in your scene, so you can see directly how the retargeting is going to work with your character and test out the best way to capture the motion for your particular character.
Essentially, if you would like a full system for dealing with and managing mocap data in Cinema 4D, then any issues you currently have could be addressed within this same plugin, streamlining the workflow and ironing them out. That would include tools for retargeting and smoothing out the capture data.
So feel free to list all the pain points you currently have in a mocap workflow and let me know what an ideal solution would be for you.
I myself want to use it to drive all sorts of characters, not just humanoid characters, so live control would be essential to figuring out the best way to control the rig.
You could also wear the suit while in VR and see the character you are controlling directly in front of you, allowing you to perform alongside another character you have previously captured. You could essentially be an actor in the scene, side by side with your existing characters, eventually acting out a full scene with a full cast.
Thanks for the overview of your workflow Eric, really helpful. Agreed it is a lot of work, but I will still look to see if there is something that can be done, even if it’s just a first step integration.
That is basically a valid idea. But here’s the catch (remember I just started this kind of stuff, so you may have more experience, but this is what I currently do):
I do not only own a Perception Neuron; I also own a Rokoko Smartsuit Pro. So the plugin would work with only half of my suits. (At least, the direct mocap part would; if the mocapping and the post-processing are sufficiently decoupled, at least the latter would still be accessible.)
When in the garden for mocapping, I have only the laptop with me. Running a full C4D there is possible, but I don’t know yet whether it’s practical. (At the very least, it would require a specialized layout; at worst, I would need additional monitors, my 3D mouse, and other stuff for a full setup.) This may not be an important point, as the native suit applications also need a certain minimum setup, but it’d be good to keep in mind what you can do in the field vs. what you can do when sitting in front of your full machine.
The native applications already do some processing on the raw mocap data. Like, the Perception Neuron can set the floor contact points, and this influences the way the rig is evaluated. If I understand the APIs correctly, you would get the raw data and need to replicate everything that the native applications can do (otherwise using your plugin would mean I lose some functionality). Sure, that’s also a chance to make it better but… you are racing against the native producers of the suit.
Using an interchange format like FBX decouples the hardware and raw-processing side from the C4D functional side. That sounds a lot like “oh, that developer guy has funny ideas about modularization,” but it does make sense, because you can use the C4D-side tools with any hardware that supports the interchange format, not just the hardware whose API you support. If the plugin (let’s call it MCP, for mocap cleanup plugin) works with FBX input and just fine-tunes the mocap, I can use it with any suit I own now or may own in the future. With a hardware-specific plugin that gets its data from the API, I will be bound to this specific manufacturer (or even this specific hardware; I do remember what a PITA it was when 3Dconnexion changed their API). Changing the hardware would mean I lose the plugin and all its functionality.
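The decoupling argued here is a plain interface-segregation pattern. A minimal sketch under assumed names (MocapSource, BVHFileSource, LiveSuitSource are all hypothetical, not a real SDK): the cleanup code consumes one interface, so file importers and vendor-specific live readers stay interchangeable:

```python
# Sketch of hardware-agnostic mocap input: cleanup code depends on a
# MocapSource interface, never on a specific suit's SDK. All class
# names here are hypothetical illustrations, not a real API.

from abc import ABC, abstractmethod

class MocapSource(ABC):
    @abstractmethod
    def frames(self):
        """Yield one frame of joint data at a time."""

class BVHFileSource(MocapSource):
    """Frames parsed from an interchange file (BVH/FBX)."""
    def __init__(self, parsed_frames):
        self._frames = parsed_frames
    def frames(self):
        yield from self._frames

class LiveSuitSource(MocapSource):
    """Would wrap a vendor SDK (Neuron, Rokoko, ...) behind the same
    interface; stubbed here with canned data."""
    def __init__(self, stream):
        self._stream = stream
    def frames(self):
        yield from self._stream

def clean(source):
    # Cleanup logic only ever sees MocapSource; dropped frames
    # (None) are discarded regardless of where the data came from.
    return [f for f in source.frames() if f is not None]

print(clean(BVHFileSource([{"Hips": (0, 90, 0)}, None])))
# [{'Hips': (0, 90, 0)}]
```

Swapping suits then means writing one new source class, not losing the whole cleanup toolchain.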
It is a valid point that it would be cool to control other things than just a humanoid skeleton and see the results live on the screen. I did consider e.g. controlling an ostrich (no, make that a velociraptor, it’s 20% more cool) neck and beak with your arm and hand. And yes, VR may be a solution to the feedback issue if you can feed the VR from C4D. I am actually curious what could be done with that.
The best way of course would be if mocap hardware producers would support a standardized API, but I don’t know when we’ll be there.
(And then there are the standard questions whenever looking into a plugin, as basic cost, ROI, long-term support, etc.)
No problem. I’d be glad to help you in any way possible with this project if it becomes a reality.
In my view, mocap is more like video than animation: it’s about shooting, directing and editing clips.
And unlike traditional animation with complex rigs, key poses, curves and dopesheets, this is something motion designers and generalists using C4D would understand very well if they had the right tool.
So instead of trying to catch up with Maya in terms of hand animation, which is a bit of a lost cause because of entrenched habits, I’m actually of the opinion that Maxon should really do something to make mocap easy in C4D.
It’s a much better fit for their core audience (generalists, broadcast, medical and architecture). It’s a quick and effective solution to get minutes of convincing animation done, instead of mere seconds.
Between the likes of Mixamo, the motion libraries you find for games, and the affordable mocap suits, it’s never been so easy to get your hands on motions or create them yourself. With VR this is going to become even more widespread.
Now we need an easy way to mix it all in C4D.
Yeah, if Maxon released a groundbreaking Mocap Module that made it easy to work with and retarget Mocap data, that would be fantastic. It would definitely need to incorporate a more automated weighting system similar to what SideFX released recently with Houdini.
Cineversity made an in-depth series, like 8 years ago, on how to adapt any IK/FK-style rig to a retarget rig so that you can animate on top of the mocap. It was done by Jon Ware, a former tech support member who went on to be a pipeline developer for R&H and then MPC.
It was a little different from mine, which was used by Sky Sports and which I created based on EA Sports’ motion retargeting rig; that approach can likewise be applied to most typical custom bipedal rigs. Both of these could then be set up as templates for the auto-rigger Character Object to aid in resizing and placement when adapting to different characters.
In other words, it’s all there, and has been capable for years. People just need to actually work on rigging in C4D. Like in every software, it’s the people making the rigs that make the usability.
My current work around for Mocap with Cinema 4D is to use Maya.
- I model the characters in Cinema 4D, but I rig and skin them in Maya using HumanIK (by the way, the skinning in Maya is also WAY better than in Cinema 4D).
- Then I export them back to Cinema 4D.
- Then I import a mocap file (from Mixamo, for example) into Cinema 4D and use the retarget tag. As both the mocap file and the character have the exact same skeleton, the retarget tag works flawlessly.
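The reason the identical-skeleton case works so well is that retargeting degenerates to copying rotations joint-for-joint by name. A hedged sketch of that idea (the joint names and mapping table below are illustrative, not C4D’s actual retarget tag internals):

```python
# Sketch of name-based retargeting: per-joint rotations transfer from
# a source skeleton to a target skeleton through a naming map. With
# identical skeletons the map is the identity; with differing naming
# conventions (Mixamo vs. Daz vs. custom) you need a table like this.
# Names are illustrative only.

NAME_MAP = {              # source joint -> target joint
    "mixamorig:Hips": "Hips",
    "mixamorig:Spine": "Spine",
}

def retarget(source_rotations, name_map=NAME_MAP):
    """Copy per-joint rotations onto the target's joint names,
    silently dropping joints the target doesn't have."""
    return {name_map[j]: rot
            for j, rot in source_rotations.items()
            if j in name_map}

mocap_frame = {"mixamorig:Hips": (0, 45, 0), "mixamorig:Spine": (10, 0, 0)}
print(retarget(mocap_frame))
# {'Hips': (0, 45, 0), 'Spine': (10, 0, 0)}
```

The hard part of real retargeting is everything this skips: differing rest poses, bone lengths, and roll axes, which is exactly why matching the skeletons up front makes the retarget tag “flawless.”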
I have 54 completed minutes of a solo 3D animated Marvel Comics-based fan film, all rendered in Maxon Cinema 4D. It has dozens of unique characters, both human and alien, in multiple environments, both spacefaring and terrestrial, lip-syncing in multiple languages from contrived alien dialects to Arabic to the Queen’s English.
My workflow is similar to EricM’s, except that I do not have a mocap suit for use with my iClone Pro suite (I wish I did).
I use the highly versatile Daz Genesis people, with animation data retargeted to them from realtime iClone avatars with one single click in the 3DXchange app.
For dynamic ragdoll events I use iClone 3DXchange to import the NaturalMotion Endorphin “stick people” and retarget iClone HumanIK animation to them, again with one click, using a saved retargeting template created with iClone 3DXchange.
I send the BVH back over to Endorphin to merge and layer on those unique behavioral events in that old, but thus far irreplaceable, program that exports Daz- and iClone-compatible BVH.
I send all of my externally created motion data as BVH back to Daz, to add lip-sync and facial animation with either the old Mimic 3 Pro application from Daz, with its phoneme strength editor and replacement palette, or the Mimic Live plugin, also from Daz.
The Daz Studio non-linear motion clip system (aniMate2) puts even the current R19 C4D motion clip system to utter shame, as its clips store morph data such as eye blinks, looped breathing, and any of the other thousands of body morphs available for the Daz figures. It has reversing and mirroring, and it has no need for a pivot object to reorient the character when disparate self-made clips are joined in the timeline.
I use the Daz Optitex cloth simulation software, with a $7 USD script to convert my own clothing meshes to the ridiculously expensive Optitex proprietary format, to create my dynamic clothing simulations, exportable as .obj/MDD to C4D, LightWave 2015, Maya, MODO, etc. I find Optitex much more stable than the recently added dForce cloth simulation system in Daz Studio.
I use a program called MorphVOX Pro that goes far beyond simple pitch changing to completely and convincingly morphing my voice, to create my own voice dialogs for my characters.
I export as .obj/MDD from Daz Studio on Windows to an ancient version of C4D R11.5 on macOS using Keith Young’s Riptide Pro.
My entire pipeline is transferable to my Windows seat of LightWave 3D, and I am in the process of migrating by 2019.
ATM, iClone Pro with 3DXchange is $250 USD. Daz Studio is freeware, plus another $100 USD or so for the optional spline graph editor, dope sheet, and other add-ons. MorphVOX Pro was around $40 USD. Toss in Riptide Pro and you still have a complete, scalable, and portable character animation solution for less than $500 USD, with no monthly subscription fees or yearly MSA (excluding Endorphin, of course, which is discontinued anyway).
I applaud the OP for his effort to bridge C4D with popular mocap hardware… truly it is a nice start. However, to be perfectly blunt, Maxon Cinema 4D’s native character animation solutions for both independent and team projects involving multiple diverse characters (auto lip-sync, dynamic cloth, etc.) are so far behind lower-costing external solutions that the ability to bring in live mocap is the least of the shortcomings.
The same can be said for just about every toolset. Houdini could do very complex simulations years ago, but it was much more difficult then than it is now.
With every toolset available, there’s an overall “ease of use” meter that slowly increases over time as the tools are improved. And on each meter there’s a “breakthrough” line which, when crossed, makes the entire process incredibly easy to use for a much larger audience. The classic case for me is particles. Cinema’s Thinking Particles could be used, and had a wide range of functionality, but they were still a bit too complicated for many users. Then X-Particles arrived, crossing that “breakthrough” line and opening up particle usage to a much wider audience.
One could say the same for Steadicams & the Movi.
That’s the sort of thing that many of us are looking for when it comes to mocap. Is it technically possible now? Yes. Is it difficult enough that we’d rather focus our time on other things unless we absolutely have to? For the most part, yeah.