I really like Houdini - and I’ve still barely scratched the surface of it - but trying to get Houdini and Cinema 4D to work together can be challenging depending on what you need to create.
Essentially, a Houdini Digital Asset is a highly customizable user-created rig object. In C4D, it’d be like taking a detailed rig with all kinds of objects, placing it under a Null, creating a bunch of Xpresso controls, and then saving the whole thing to disk.
When using assets within Houdini itself, you can then create as many instances of the rig as you want, adjusting all of those exposed parameters as needed. You can even dive into the internal structure of the rig, make an adjustment or a fix (or even create a few more user controls), re-save it, and the fix will ripple across all the other instances of the rig. But as the above user mentioned, that doesn’t work with the Cinema integration.
One of the primary reasons I’ve stayed away from the Cinema bridge is that Maxon handles all the Houdini Engine integration inside Cinema, and it only works with the specific Houdini builds that Maxon designates - and Maxon only updates those supported builds on their own schedule, regardless of when SideFX releases new Houdini builds. Houdini 16.5 was released on November 7, 2017. And yet, here we are 5+ months later, and the latest build supported by Maxon is still 16.0.633, which was released on June 7, 2017. By comparison, the Houdini Engine bridges for Maya, Unreal, and Unity are developed by SideFX themselves, and new versions ship with each and every daily/weekly build.
A few things to be aware of:
- Motion blur is a challenge (or non-existent). When geometry is generated by a Houdini asset inside Cinema, you essentially get a mesh object in your viewport without any keyframe or C4D transform data attached to it. As you scrub your timeline, you get updated meshes in your viewport, but C4D has no internal understanding of what the next frame will bring or what the previous frame contained, so you can’t use traditional transform-based motion blur on the mesh. The only way to get motion blur would be by accessing the point velocities of the mesh - and Cinema doesn’t expose those (AFAIK), so 3rd party render engines have to add that support themselves through different methods. (Off the top of my head, I don’t even know if any 3rd party engines natively support point-based motion blur direct from Houdini assets.)
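For context, velocity-based motion blur boils down to estimating how fast each point is moving from the cached meshes of adjacent frames. This is just a conceptual sketch in plain numpy - not any actual Houdini or Cinema API - and it assumes the asset keeps point counts and ordering stable between frames (which deforming-but-not-changing-topology geometry does, and dynamically generated geometry often doesn’t):

```python
import numpy as np

def point_velocities(points_prev, points_next, fps=24.0):
    """Estimate per-point velocities (units/sec) by central difference
    between the previous and next frame's cached point positions.
    Assumes point count and ordering are identical across frames."""
    dt = 2.0 / fps  # time elapsed between the prev and next frame
    return (points_next - points_prev) / dt

# Hypothetical example: one point moving 0.5 units per frame along X, at 24 fps
prev_pts = np.array([[0.0, 0.0, 0.0]])
next_pts = np.array([[1.0, 0.0, 0.0]])
vel = point_velocities(prev_pts, next_pts)  # 12 units/sec along X
```

This is exactly the kind of data a render engine needs per point to stretch samples over the shutter interval - and why topology changing between frames (no stable point correspondence) breaks it.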
- The engine is also a challenge if you’re generating lots of objects. Cinema can only handle a certain number of objects in the viewport before slowing down massively, so the only way to really work with a large object count is to bake everything into a single large mesh - which, depending on the polygon count, can be prohibitively time-consuming to generate.
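Baking many objects down to one mesh is conceptually just concatenating the point arrays and offsetting each mesh’s face indices by the points accumulated so far - which is also why the cost scales with total polycount. A rough sketch of the idea (plain numpy, not the actual Cinema or Houdini Engine API):

```python
import numpy as np

def merge_meshes(meshes):
    """Combine a list of (points, faces) pairs into one big mesh.
    Each mesh's face indices are offset by the number of points
    already accumulated so they keep referencing the right points."""
    all_points, all_faces, offset = [], [], 0
    for points, faces in meshes:
        all_points.append(points)
        all_faces.append(faces + offset)  # reindex into the merged point array
        offset += len(points)
    return np.vstack(all_points), np.vstack(all_faces)

# Two hypothetical triangles with identical local geometry
tri = (np.zeros((3, 3)), np.array([[0, 1, 2]]))
points, faces = merge_meshes([tri, tri])
# faces -> [[0, 1, 2], [3, 4, 5]]
```

One merged object keeps the viewport’s object count at 1, at the price of rebuilding (and re-uploading) the whole combined mesh whenever anything changes.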
- Houdini is fantastic with VDBs - both SDFs & fog volumes. But Cinema doesn’t support either of them natively, so you can’t generate smoke from a Houdini asset inside Cinema and have it readily available for rendering with a 3rd party render engine. You’d have to adjust your Houdini asset so that the fog volume is automatically written to disk, and then load those files back into Cinema using your 3rd party render engine’s loader. You’d also have to make adjustments inside the Houdini asset so that the resulting fog volumes are positioned in the correct transform space, and line up with the other objects in your scene once the smoke is reimported into Cinema via the 3rd party render engine.
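That transform-space fix essentially means applying the asset’s world matrix to the volume’s placement before writing it out, so the VDB already sits in the right spot when the render engine reloads it. A minimal sketch of the math with a 4x4 homogeneous transform (the matrix values here are hypothetical, and any real pipeline would also have to account for unit-scale and axis conventions between the two apps):

```python
import numpy as np

def to_world_space(local_origin, world_matrix):
    """Move a volume's local-space origin into world space so the
    VDB written to disk lines up on reimport. world_matrix is a
    4x4 row-major homogeneous transform (values hypothetical)."""
    p = np.append(local_origin, 1.0)   # promote to homogeneous coordinates
    return (world_matrix @ p)[:3]      # back to a 3D position

# Hypothetical: the asset is translated 10 units up in Y
M = np.eye(4)
M[1, 3] = 10.0
origin = to_world_space(np.array([0.0, 0.0, 0.0]), M)  # -> [0, 10, 0]
```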
These are just a few things I’ve run into in my own tests, and they only scratch the surface.
TLDR: In general, I’ve just found it a lot easier to work in each program natively and bake out point/mesh/camera data via Alembic for re-importing into the other program. It tends to be faster and all-around easier.
What I’d really like to have is a Dynamic Link between the two - so that I can have the two programs open at the same time, and have geometry & scene data flowing from one program to another fluidly without baking, but that’s a pipe dream that’ll never happen (and even if it does, it’ll probably be too buggy or just too slow to be practical.)