Project Neutron node flow


#1

Pretty interesting stuff with the new node system demo. I’m curious about how the system is supposed to flow though. In this example he shows the flow going backwards? The Distribution Op connects to the Sphere that connects to the scene?

In the next example, this is reversed, where, more logically, the Sphere connects to the Distribution Op.

Can anyone shed some light on that? Cheers.


#2

More details will come later, but in the first case matrices are created (Distribution) and then get a geometry assigned (Sphere); in the other case a geometry is created (Sphere) and then multiplied (Distribution).
Neutron is flexible like that. The advantage of being able to do both is that in the first setup the Distribution can influence individual sphere parameters, while in the second the sphere can influence the distribution. Of course you can change both later on.
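A tiny sketch may make the two orders concrete. This is not the Neutron API, just a made-up mini node system in Python where a "Distribution" produces matrices (reduced here to positions) and a "Sphere" is just a geometry label; both connection orders produce the same set of instances:

```python
# Toy illustration (NOT Neutron's API): two node orders, same scene.
from dataclasses import dataclass

@dataclass(frozen=True)
class Instance:
    position: tuple  # a "matrix" reduced to a position, for brevity
    geometry: str    # the geometry assigned to that matrix

def distribution(count):
    """Case 1, step 1: create matrices with no geometry assigned yet."""
    return [Instance(position=(i, 0, 0), geometry=None) for i in range(count)]

def assign_geometry(instances, geo):
    """Case 1, step 2: assign a geometry to each existing matrix."""
    return [Instance(i.position, geo) for i in instances]

def multiply_geometry(geo, count):
    """Case 2: start from a geometry and multiply it over a distribution."""
    return [Instance((i, 0, 0), geo) for i in range(count)]

case1 = assign_geometry(distribution(3), "sphere")  # Distribution -> Sphere
case2 = multiply_geometry("sphere", 3)              # Sphere -> Distribution
assert case1 == case2  # same result either way
```

The practical difference is only which node "owns" the iteration, which is what decides whether the distribution can vary per-sphere parameters or the sphere can drive the distribution.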


#3

Think of it like adding ingredients to a cake. You watch Jamie Oliver make a sponge cake: he puts flour in a bowl, adds some eggs, then puts some sugar in. You then watch a Gordon Ramsay video, and he's mixing eggs and sugar, then sifts some flour over the top. Both end up with the same cake.

Ultimately, as long as you mix the same ingredients together, you will mostly end up with the same thing. It isn't that the nodes are flowing backwards; it's just that the ingredients of sphere geometry and matrix of positions were added in a different order.

Sometimes doing it in a different order can make your life easier down the road in different ways. If you mix the eggs and sugar, you could reuse that mixture for other baked goods. If you create the matrix of positions first, you could reuse that matrix for other distributed objects.


#4

Thank you, both.


#5

So, Neutron node flow is more a piece of cake than mixing acid and water, where the order actually matters :wink:


#6

This is looking really good! The speed looks good!


#7

Trying to wrap my brain around this nodal system. Why does this approach give better viewport performance over the drag and drop modifiers we use now?


#8

The short answer to your question: It doesn’t :wink:

The long answer: It wasn’t the viewport.

And now: my answers are wrong.
It is simply not that easy. What many boiled down to “the viewport is slow” is actually a combination of several things. Like always in life…
There were more oversimplifications floating around, e.g. “C4D is not multi-threaded”. Actually it is, and has been for a long time. But there is multi-threading, and then there is multi-threading.

Take for example the process of building a house:

One worker will take a year. Single threaded.

Now, let's multi-thread the job and assign the work to two workers. One can build walls, the other the roof. Let's assume both start at the same time. The second will not only be unable to do his job until the first one is finished, he's also running around uselessly at the building site, partly blocking the first from doing his job properly. Finishing the house will take even longer than a year in this scenario.

The situation will improve if they know about each other and the second stays at home until the first is finished. Now the house will again be finished in about a year (leaving aside the few hours the second needs to drive to the site after he gets the call that the walls are finished). Yet it's still not the big improvement we had hoped for by multi-threading the job (and paying two workers).
The situation will further improve if both can do both (walls and roof). Will the house be done in half a year? No, it won't. Both workers need to spend extra time coordinating with each other. The improvement in build time will actually depend on how good a team they are.

Let's assume workers with good team skills. Will the time needed reduce further if we hire more workers?
With four? Yes, probably.
Sixteen? Maybe.
A thousand workers? I highly doubt the house will ever be finished, because none of the workers is able to move anymore on that small building site.

Why am I writing this?
Because it may help visualize the issues one faces when trying to multi-thread software. It's all about the communication of the workers and their dependencies: knowing which jobs are independent, minimizing the amount of communication needed, reducing the time one worker waits for another to finish his step.
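The house analogy can be sketched in a few lines of code. This is just an illustration of dependency-aware scheduling (nothing to do with C4D internals): a scheduler that knows the dependency table can run independent tasks in parallel while dependent ones wait.

```python
# Sketch: schedule tasks in parallel only when their dependencies are done.
# The task names and DEPS table are invented for the house analogy.
from concurrent.futures import ThreadPoolExecutor

# task -> tasks it depends on ("roof" needs "walls", etc.)
DEPS = {
    "foundation": [],
    "walls": ["foundation"],
    "plumbing": ["foundation"],  # independent of the walls
    "roof": ["walls"],
}

def schedule(deps):
    """Run tasks in dependency order, batching independent ones together."""
    done, batches = set(), []
    with ThreadPoolExecutor() as pool:
        while len(done) < len(deps):
            # everything whose dependencies are satisfied can run in parallel
            ready = [t for t in deps
                     if t not in done and all(d in done for d in deps[t])]
            list(pool.map(lambda t: t, ready))  # stand-in for the real work
            batches.append(sorted(ready))
            done.update(ready)
    return batches

print(schedule(DEPS))
# [['foundation'], ['plumbing', 'walls'], ['roof']]
```

Walls and plumbing land in the same batch because the scheduler knows they don't depend on each other; without that knowledge, the safe fallback is to run everything sequentially (or worse, to let workers block each other).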

Now, back to C4D.
Of course the aged viewport isn't the fastest. But it is only a smaller part of the actual issue. Still, from Maxon's Tech Preview (and also the last release) we know they have also tackled the viewport and seem to be moving forward there as well. This is great in itself. But the actual bottleneck is the object handling and the lack of (real) dependencies within it. Without C4D knowing the actual dependencies (like the workers in the above example not knowing what the others are doing), it had to update/recalculate more than was actually needed (a change to the roof architecture also triggered the other worker to rebuild the walls), and it is also not properly able to parallelize the calculations needed for the update (assign a proper number of workers to the job).

This is where Neutron seems to pick up. An object handling based on a node graph is inherently aware of all dependencies. Based on this, only the stuff really needed will be recalculated, and it can be done in parallel to a far greater extent.

So, to wrap this up: in my view it is actually the new object handling demoed with Neutron that will be the reason for such great speedups.
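The "only recalculate what really changed" idea is the classic dirty-flag pattern on a dependency graph. Here is a minimal sketch, assuming an invented three-node graph (sphere, distribution, scene), not Neutron's actual implementation:

```python
# Toy dependency graph with dirty flags: a change only recomputes the
# nodes downstream of it. Node names are hypothetical.
class Node:
    def __init__(self, name, inputs=()):
        self.name, self.inputs = name, list(inputs)
        self.dirty = True  # everything needs a first evaluation

    def mark_dirty(self):
        self.dirty = True
        for n in ALL:  # propagate to every node that depends on this one
            if self in n.inputs:
                n.mark_dirty()

    def evaluate(self, log):
        if self.dirty:
            for i in self.inputs:
                i.evaluate(log)
            log.append(self.name)  # stand-in for the expensive recalculation
            self.dirty = False

sphere = Node("sphere")
distribution = Node("distribution")
scene = Node("scene", inputs=[sphere, distribution])
ALL = [sphere, distribution, scene]

log = []
scene.evaluate(log)  # first pass computes everything
print(log)           # ['sphere', 'distribution', 'scene']

log.clear()
sphere.mark_dirty()  # edit only the sphere
scene.evaluate(log)
print(log)           # ['sphere', 'scene'] -- distribution is untouched
```

Because the graph knows the dependencies, editing the sphere never touches the distribution; an object handling without real dependency information has to conservatively recompute both.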

But there is so much more to such a system. E.g.:
The communication of the workers.
How fast can one specialist take over the position of another to pick up the next task on the same wall.
To leave the analogy and use more technical IT terms: The memory management (because it was mentioned by Maxon around R16).
… etc, etc…
Just to name a few.

And none of these things is a simple matter of just implementing it; it needs to be done right. To use another analogy: a soup needs salt and spices. But just adding arbitrary amounts of both to water will probably not result in a good soup. And the relationship isn't linear: half a spoon of salt and half a spoon of spices makes a reasonable soup. A full spoon of each results in an even better soup. But one and a half spoons of each is actually meh. So you stick to one spoon each. Until one day, by accident, you find out that just a quarter spoon of salt together with a full spoon of spices is really yummy!
In IT this is called an optimization problem. And it can be incredibly hard to find a good mixture of ingredients. And good results on one end may block the view of better solutions on the other end.
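The soup can even be written down as a toy optimization problem. The objective function below is entirely made up; it only exists to show why tuning two interacting knobs together can find an optimum that tuning one knob at a time would miss:

```python
# Toy two-parameter search illustrating the soup analogy.
# soup_score is an invented objective with an interaction term between
# the two "ingredients" (it peaks at salt=0.25, spice=1.0).
def soup_score(salt, spice):
    return -(salt - 0.25) ** 2 - (spice - 1.0) ** 2 \
           - 2 * salt * (spice - 1.0) ** 2

grid = [i / 4 for i in range(9)]  # 0.0, 0.25, ..., 2.0 spoons
best = max(((s, p) for s in grid for p in grid),
           key=lambda sp: soup_score(*sp))
print(best)  # (0.25, 1.0) -- a quarter spoon of salt, a full spoon of spices
```

A brute-force grid over two knobs is already 81 tastings; real systems have many interacting knobs (thread counts, cache sizes, batch sizes), which is exactly why "just tune it" is so hard.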

All of these parts together (threading, memory management, communication,…) are probably what Maxon calls the (new) core.

But you also need to understand that you cannot simply replace one of these things with a new one without needing to touch (maybe even break) other things. To use the house again: replacing the windows of an existing house with larger, better-insulated ones will most likely also cause quite some changes to the surrounding walls…

And now, this may explain why all of this was (or still is) taking so long.
Maxon was probably well aware of what users loved about their product, e.g. stability, compatibility (compare, for example, the long-term compatibility of the scene file format or plugins to competing products), and lots of different systems neatly interacting with each other. And all of this is supposed to migrate to something better without the user noticing or having any disadvantages.
In my mind, and from a software developer's perspective, it was inevitable that during such a process things pop up that seemingly do not fit that well (like e.g. the latest material system), and other things have to be broken, like e.g. plugin compatibility.

Ok, I leave it at this.
Last thing: of course this is still an oversimplification of what's going on. All of these thoughts are based on my experience as a software and C4D plugin developer (so I'd call them educated guesses). And while I have no doubt all of this is an issue for Maxon, most likely their situation is still a lot more complex and difficult.

I wrote all of this because I have the feeling parts of the community have a way too simplistic view of the actual problems of C4D and what it takes to solve them in the context of a complex application like C4D, a complex product which has also grown over decades.
One more thing about the above example on the difficulties of multi-threading: I did not mean to say Maxon was doing a bad job there. The software simply grew over decades into what it is today. Anybody who has worked on any job for more than ten years, please tell me you didn't pile up a bunch of corpses in the basement…

I hope this helps to at least imagine one possible version of this story.


#9

One more thing, which Maxon also tried to mention during the Tech Preview demo.

It is not an advantage over the drag-and-drop approach of the Object Manager, just like the drag-and-drop approach of the Object Manager is not the actual current issue. It is the technology below the Object Manager that is causing the issues. And so Neutron is rather a new base technology, which in the end will hopefully allow Maxon to build a new abstraction level on top, hiding the nodes (for those who don't want to see them) and providing a similar look, feel, and workflow to the current Object Manager.


#10

Not necessarily.


#11

I think I said inherently, not necessarily.
But would you like to elaborate?


#12

Well said, Andreas, I can only agree.
Neutron is geared towards clean dependencies with no ambiguity regarding execution order, one of the things that limits the old core's speed.


#13

Regardless of the improved efficiencies (which are always welcome), the most exciting part is moving towards a truly procedural system that lets you more easily break away from the cookie-cutter workflow that tends to plague a lot of C4D work, unless you tack on a bunch of plugins, drop in some code, or wrangle up some XPresso.

Next up - a world-beating particle system please…