R20 (multithread?) - what does it mean for users?

  2 Weeks Ago
Thanks Srek. I wonder if, at the speed hardware is going - much faster than software - the software core even needs to be always in revision.
 
  2 Weeks Ago
Originally Posted by Bullit: Thanks Srek. I wonder if, at the speed hardware is going - much faster than software - the software core even needs to be always in revision.

In my opinion, far from being "always in revision", MAXON has undertaken these efforts only in recent years to modernize the core. Everything I know about MAXON suggests they are very conservative in their approach to changing the way C4D works, or to doing anything that might hinder compatibility (or stability) for some part of their user base. Put another way, the modernization efforts we're talking about are arguably long overdue and inarguably necessary for this app to maintain a competitive edge (i.e. to continue delivering at the level customers expect). The way I understand it, from many years' worth of discussion on forums like these and from reading MAXON's own comments, it's not a case of "this year's revisions" and "last year's revisions" with respect to the core, but a case of the core revision being a 5- or 6-year effort from start to finish. And hopefully we're in "the home stretch", to borrow a horse racing term.

The other thing I would say is that we've basically hit a wall in terms of processor clock speeds, so I'm not sure I agree with the "[progressing] much faster than software" theory. The golden days of this year's processor being 100%, 50%, or even 35% faster than last year's are long gone. Moore's Law is basically dead (technically it never was a law, but back when the term was coined there was no foreseeable future where the rule wouldn't apply). Time flies.

Now, instead of substantially increasing clock frequency, Intel and AMD (and NVIDIA) add more same- or similar-frequency cores to their designs and make those designs less power-hungry per core. To use a simplistic analogy, instead of the individual core or "brain" getting substantially faster than last year's brain, they add more brains as the means of improving a processor design's compute power. Hence the general trend toward parallelism, and developers looking at where their apps - and their customers - can benefit from it.
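To make the "add more brains" point concrete, here's a minimal sketch (my own illustration, nothing to do with MAXON's actual code - Vec3, deformPoint and deformAllPoints are made-up names) of what developers have to do to benefit from those extra cores: split an independent per-element job across hardware threads instead of waiting for a faster single core.

```cpp
// Hypothetical sketch only -- not anything from the C4D SDK.
#include <algorithm>
#include <cstddef>
#include <thread>
#include <vector>

struct Vec3 { float x, y, z; };

// Some per-point work that does not depend on neighbouring points.
Vec3 deformPoint(const Vec3& p) {
    return { p.x * 1.1f, p.y, p.z };
}

void deformAllPoints(std::vector<Vec3>& points) {
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    const std::size_t chunk = (points.size() + cores - 1) / cores;
    std::vector<std::thread> workers;

    for (unsigned c = 0; c < cores; ++c) {
        const std::size_t begin = c * chunk;
        const std::size_t end = std::min(points.size(), begin + chunk);
        if (begin >= end) break;
        // Each thread owns a disjoint slice of the array, so no locking is needed.
        workers.emplace_back([&points, begin, end] {
            for (std::size_t i = begin; i < end; ++i)
                points[i] = deformPoint(points[i]);
        });
    }
    for (auto& w : workers) w.join();
}
```

The catch, as comes up below, is that a lot of what a 3D app does per frame isn't this conveniently independent.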

Unless there is a major breakthrough in materials science plus semiconductor microlithography that would "restart" Moore's Law and the doubling of clock speeds every 18-24 months, or unless microprocessors start to be made in a completely different way that allows the same, I think the way things are now is the foreseeable future. Maybe in 30 years all CPUs and storage tech will be based on crystalline or quantum technologies, or something else we don't yet know about... but for now this is what we've got, IMO.

 
  2 Weeks Ago
Originally Posted by Blinny:
Now, instead of substantially increasing clock frequency, Intel and AMD (and NVIDIA) add more same- or similar-frequency cores to their designs and make those designs less power-hungry per core. To use a simplistic analogy, instead of the individual core or "brain" getting substantially faster than last year's brain, they add more brains as the means of improving a processor design's compute power. Hence the general trend toward parallelism, and developers looking at where their apps - and their customers - can benefit from it.
The problem with this is that the whole multi-core thing is mainly to the advantage of server technology and virtualisation. Advances in multithreaded development aren't what many here seem to expect. The pressure for this isn't very high, since there aren't actually that many desktop applications that demand this level of power; we are talking mostly about web and data services, where many-core designs are most used. If desktop applications were the driving factor of CPU technology, the direction would still be higher clock rates and only 4-6 cores, instead of <3 GHz and >10 cores.
Let's face it, DCC applications are a very small niche for hardware vendors.
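To put a rough number on that (my own back-of-the-envelope, illustrative figures only): Amdahl's law says that if only a fraction p of a job can be spread across n cores, the best possible speedup is 1 / ((1 - p) + p/n).

```cpp
// Illustrative Amdahl's-law arithmetic, not a benchmark of any real application.
#include <cstdio>

double amdahl(double p, double n) {   // p = parallel fraction, n = cores
    return 1.0 / ((1.0 - p) + p / n);
}

int main() {
    // Assume, purely hypothetically, that 60% of a frame evaluation parallelizes.
    std::printf("6 cores:  %.2fx\n", amdahl(0.6, 6));    // 2.00x
    std::printf("16 cores: %.2fx\n", amdahl(0.6, 16));   // ~2.29x
    std::printf("64 cores: %.2fx\n", amdahl(0.6, 64));   // ~2.44x
}
```

Going from 6 to 64 cores barely moves the needle unless the parallel fraction is very high, which is exactly why many-core designs sell to server and virtualisation workloads rather than to desktop apps.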
__________________
- www.bonkers.de -
The views expressed on this post are my personal opinions and do not represent the views of my employer.
 
  2 Weeks Ago
Originally Posted by Srek: The pressure for this isn't very high, since there aren't actually that many desktop applications that demand this level of power; we are talking mostly about web and data services, where many-core designs are most used. If desktop applications were the driving factor of CPU technology, the direction would still be higher clock rates and only 4-6 cores, instead of <3 GHz and >10 cores.
Let's face it, DCC applications are a very small niche for hardware vendors.

Exactly. 95% of desktop users (or even 99%+) simply don't need faster processors, so what's the point in spending money developing faster single-core speeds for that 1%? Especially when there's oodles of money in the mobile market these days, as devices get smaller and people get new ones every year.
 
  2 Weeks Ago
Well, the recent AMD chips were a hit.
More importantly, GPUs will not stop getting faster, so maybe developing Cinema 4D code for the GPU will bring many benefits, even with the issues that Srek made known.
 
  2 Weeks Ago
Originally Posted by Bullit: Well, the recent AMD chips were a hit.
More importantly, GPUs will not stop getting faster, so maybe developing Cinema 4D code for the GPU will bring many benefits, even with the issues that Srek made known.

You're not wrong in the absolute sense. MAXON and others are building GPU acceleration into their apps in different ways. The ProRender stuff uses OpenCL and classical 3D acceleration technologies, but everything else we talk about generally comes back to the parallelism thing. GPU cores don't double in speed each revision either, so the trend is the same: AMD and NVIDIA keep adding more cores every cycle to perform a variety of specialized tasks more quickly. But as Srek was saying about CPUs, there are only a limited number of use-cases in 3D where this type of optimization can be fully applied (it's actually more applicable in the video world in many cases: debayering raw footage, certain types of special effects, etc.). The rest of the "GPU market" (games aside) is machine learning, mining (what a waste), protein folding and other highly specialized tasks that don't apply to most applications.
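To illustrate why some jobs map onto GPUs and most don't (a made-up contrast, nothing to do with ProRender's actual code): the debayer-style case is a flat per-pixel loop with no dependencies, while a lot of scene evaluation looks more like the second function, where every step needs the previous step's result.

```cpp
// Hypothetical contrast between GPU-friendly and GPU-hostile work.
#include <cstddef>
#include <vector>

// GPU-friendly: every pixel is independent, so thousands of cores can
// each take one pixel (the debayer / video-effect pattern).
void brighten(std::vector<float>& pixels, float gain) {
    for (std::size_t i = 0; i < pixels.size(); ++i)
        pixels[i] *= gain;               // no pixel depends on another
}

// GPU-hostile: a time-stepped dependency chain, like dynamics or an
// ordered deformer stack. Step i needs step i-1, so throwing more
// cores (CPU or GPU) at this loop as written doesn't help.
float simulate(float position, int steps, float dt) {
    float velocity = 0.0f;
    for (int i = 0; i < steps; ++i) {
        const float force = -position;   // force depends on the current state
        velocity += force * dt;
        position += velocity * dt;       // each step builds on the last
    }
    return position;
}
```

The first loop is why GPU renderers and video tools scale so well; the second is why "just put it on the GPU" doesn't apply to most of what a scene graph does each frame.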

So the hope with MAXON is that they've found a few areas where each of these specialized technologies can speed up parts of the C4D workflow in a big way, and the rest is good old-fashioned rewriting of code to be more efficient on a single thread/process, all else being equal. I'm sure much of the "core optimization" has been just that. In any case, I'm eager to see what R20 will bring. Hopefully we'll get some "sneak peeks" well in advance of SIGGRAPH.

 
  2 Weeks Ago
I am actually quite happy that MAXON is rewriting the core. I think a lot of other companies would have waited another 20 years. And I hope to see speedups in object handling (that would be the killer feature of the century for me).
For GPU: I think MAXON wants to stick to PC/Mac parity, which means they have to go the OpenCL route. I can understand that, because they are the biggest 3D player in the Mac world. On the other hand, the results they have presented with their GPU stuff so far are underwhelming. I have doubts that they will be able to speed up ProRender by factors, so it might just stay a token feature like Pyrocluster or Cloth.
For parallelism: I understand the problems of parallelizing sequential systems. What I do not understand is why MAXON doesn't give the user a bit more opportunity to use their cores at their own discretion. I could imagine a dropdown in every tag/deformer to define which core it runs on. That way you could probably set up two or three characters, each calculated on its own core, or distribute time-consuming XPresso setups to different cores. I am sure this can also lead to problems with synchronization, but I also think one could find some spots to implement it (for example deformers and such) - see the sketch below.
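To sketch the idea (purely hypothetical - Character and evaluateRig are invented stand-ins, not C4D SDK types): if two characters genuinely don't touch each other's data, their rigs could be handed to separate threads and joined before the frame is drawn.

```cpp
// Hypothetical illustration of user-assigned parallelism: two independent
// character rigs evaluated on separate threads.
#include <functional>
#include <thread>

struct Character { /* joints, deformers, XPresso, ... */ };

void evaluateRig(Character& c) {
    // evaluate this character's deformers/tags for the current frame
}

void evaluateFrame(Character& heroA, Character& heroB) {
    // Only safe because the user has promised the two rigs never read or
    // write each other's data -- the synchronization problem mentioned above.
    std::thread a(evaluateRig, std::ref(heroA));
    std::thread b(evaluateRig, std::ref(heroB));
    a.join();
    b.join();
    // the scene can now be drawn / handed to the renderer
}
```

The UI could be as simple as the dropdown described above; the hard part is that the application can't easily verify the "these don't interact" promise for the user.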
 
  2 Weeks Ago
What Srek said is that there are issues with virtualization and GPUs, and if the user is on a machine with a limited GPU it will cause problems. Nevertheless, an ideal codebase would work à la carte, so to speak: if there is a good GPU it uses the GPU, if not it uses the CPU. Of course this means more code to maintain, which might not be justifiable if the improvement is not substantial.
 