New Arnold but no GPU support.

  04 April 2017
Wow, the thread got choppy! New Arnold looks sweeeet, congrats to everyone involved. That standard hair shader looks insane! I still remember when Pepe first wowed us years ago with the early version of Arnold.

I think people are going crazy trying to pick 'the ultimate do everything' renderer. Pick the renderer which suits your needs. Arnold is the best full production renderer available right now, period. It does not have the speed in most cases to compete with Octane for motion graphics, quick jobs. I don't understand why anyone would be upset and pushy about either existing in their own space. Octane is insanely good at what it does well.

I'm working with Nvidia on Iray; hopefully we can begin to be part of the discussion in the next few years as the features and workflows broaden to meet more users' needs. As it stands, we're the most viable bi-directional path tracer available in C4D, which means more scientifically accurate renders for product and architectural scenes. But I would never try to convince someone that it's strictly BETTER than either Arnold or Octane. What are you creating?

Listen to Marcos Fajardo: https://www.youtube.com/watch?v=35morxCJOIQ#t=35m07s Obviously he's biased toward Arnold, but he is realistic about its viability in ALL areas. If you NEED the future now (GPU, bi-directional path tracing), use Octane or Iray or whatever other renderer. Otherwise, he makes a great case for Arnold.

Last edited by muckymouse : 04 April 2017 at 07:47 PM.
 
  04 April 2017
Originally Posted by muckymouse: Listen to Marcos Fajardo: https://www.youtube.com/watch?v=35morxCJOIQ#t=35m07s Obviously he's biased toward Arnold, but he is realistic about its viability in ALL areas. If you NEED the future now (GPU, bi-directional path tracing), use Octane or Iray or whatever other renderer. Otherwise, he makes a great case for Arnold.


Very cool link, thank you.
 
  04 April 2017
Originally Posted by vel0city: Very cool link, thank you.


Right? I love the way Marcos Fajardo talks about tech.
 
  04 April 2017
I'd rather have Arnold work on integrated denoising in the short term, but Marcos seems to be pretty much against it for philosophical reasons. I applaud their efforts to improve sampling noise distribution etc... but it's not a mutually exclusive situation. Even if you improve sampling speed by 50%, another 20% or 30% "for free" is always good.

RenderMan, Hyperion, V-Ray and Corona have it, and most other brute-force renderers are working on it. It's been production-tested on billion-dollar movies from Disney/Pixar... so I guess it's not just a fluke or a cheap cheat (even though everything is a cheat).

I've tried it a bit with V-Ray and especially with Corona, and it really is excellent for getting rid of that final noise in reasonable time. It makes a world of difference to have it right within the render settings versus an external app or plugin with tons of AOVs to manage.
 
  04 April 2017
Originally Posted by muckymouse: I think people are going crazy trying to pick 'the ultimate do everything' renderer.



The challenge is that every engine has a cost, in terms of both money (software licenses, hardware, render farms) and time (learning basic functionality, all the cases of "I know how to do this in render engine X but not in Y", advanced functionality to actually make good work with it, the quirks and gotchas that you find out by trial and error, not to mention plain and simple render time).

For a solo freelancer, these can add up to dramatic effect, and often get in the way of simply making work. So yes, we will spend a great deal of effort trying to figure out the best path forward: it's business 101.

Of course, our industry is like the Wild West, so picking a render engine feels an awful lot like choosing between VHS and Betamax. Solid at the moment, but a bust in the making - for reasons unforeseen.
 
  04 April 2017
Maybe I'm on the wrong path here but for me, the future is cloud rendering. As such, Arnold makes the most sense moving forward. Projects develop quickly with a bit of noise until they are ready for final rendering. Up the settings a bit and then upload. They then come back at an amazing quality/speed-to-cost ratio.

One of my biggest 3D technology rushes came recently when I tried Zync with Arnold on a high-refraction scene that was taking about 14 min. per frame on my cheese grater Mac. I sent it to Zync and 90 beautiful frames came back in an hour for about $25. Exciting stuff.

If I didn't have clients willing to eat the rendering cost then my opinion might be different.
 
  04 April 2017
About Arnold: I had a scene that kept eating memory until Cinema crashed because it was out of memory. Until Arnold 2.0 it was fine, but now I'm not sure what happened.

I converted all the textures and changed the Arnold Sky for an Arnold Skydome_light.

Edit: I found the solution - a material was corrupted and was eating the RAM of my PC.
__________________
demo videos
Close, open-relationship: C4D / Zbrush
Hate / love: Maya / Houdini
Former gf: XSI

Last edited by luisRiera : 04 April 2017 at 07:16 PM.
 
  04 April 2017
Originally Posted by Gearbit: If I didn't have clients willing to eat the rendering cost then my opinion might be different.


That's the problem I face, because it essentially requires payment for revisions. Most clients I work with want to agree on a fixed price up front and stick with it throughout production.

Sadly, I've actually wanted it this way as well - otherwise every creative decision needs to be financially approved, and too often we run into situations where we NEED to change X, Y and Z to save the project and present a quality product, but the client doesn't think so - and won't pay for it. The video suffers, I take the rap, and my portfolio withers.

Cloud render farms are only financially feasible with higher end clients who trust their creatives and understand that limiting revisions benefits everyone.
 
  04 April 2017
Originally Posted by Gearbit: Maybe I'm on the wrong path here but for me, the future is cloud rendering. As such, Arnold makes the most sense moving forward. Projects develop quickly with a bit of noise until they are ready for final rendering. Up the settings a bit and then upload. They then come back at an amazing quality/speed-to-cost ratio.

One of my biggest 3d technology rushes came recently when I tried Zync with Arnold on a high refraction scene that was taking about 14 min. per frame on my cheese grater mac. Sent to Zync and 90 beautiful frames came back in an hour at about $25. Exciting stuff.

If I didn't have clients willing to eat the rendering cost then my opinion might be different.



My projects iterate like bunnies so I prefer fast in-house GPU rendering.

But I agree w/your point that we will ultimately see cloud rendering built into all the engines. That option will become increasingly convenient and easy.
 
  04 April 2017
Originally Posted by LukeLetellier: That's the problem I face. Because it essentially requires payment for revisions. Most clients I work with want to choose a fixed price up front and stick with it throughout production.



For me, the revisions also go out with a bit of noise and at a smaller frame size. It's not until the final is absolutely final (it does happen!) that it is upscaled, up-res'd and sent off.
 
  04 April 2017
Originally Posted by NWoolridge: Not to be contentious, but this is a strongly stated claim that is not even remotely true. Writing software that runs effectively on CPUs is very different from doing the same for GPUs. The programming tools are different, the languages are different, the instruction set architecture is different, the debuggers are different, the memory constraints are different, and the algorithms have to be different to account for the differences in the number (by 2-3 orders of magnitude) and complexity of the cores.


I currently develop GPU algorithms for a living - real-time video processing on the GPU, with 2000+ line algorithms that handle 4K UHD video in real time. Here's what I can tell you about it:

- GPU cores - even with old HLSL/GLSL - can handle C code just fine. All of the floating-point math instructions you have on the CPU are on the GPU as well. IF/THEN statements, LOOPS and similar all work on the GPU too. A GPU core is thus not "an exotic beast" by a long shot; its instruction set is very similar to a CPU's. A lot of CPU algorithms are thus not particularly hard to port to the GPU. I know, because I ported my old CPU video processing algorithms written in C to the GPU myself.

- The higher number of cores (hundreds, thousands) on GPU changes nothing, unless your CPU algorithm did a poor job of using more than one core to begin with. You should be going "Hurrah - hundreds of fast worker cores to parallelize my shit on" not "OMG - too many cores, too many cores..."

- I regard people who pretend that GPU programming is "something extra-special" compared to CPU programming as nothing more than dishonest salesmen - "God GPU programming is hard! God the GPU is exotic! Please give us more money when we become so advanced that we run on the GPU."

Take a good, hard look at HLSL/GLSL. It's basically the C programming language, slightly simplified, running on programmable GPU cores.

I had no problem whatsoever going from C on the CPU to HLSL on the GPU - it took me 3 days to write my first working GPU video processing algorithm.
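
To make that concrete, here's a rough sketch of the kind of port I mean - illustrative only (the function names and numbers are made up, and I've written it as CUDA rather than HLSL just to keep everything in one C-like syntax). The loop body carries over to the GPU almost verbatim:

#include <cuda_runtime.h>

// CPU version: plain C, one thread walks every pixel.
void brighten_cpu(const float* in, float* out, int n, float gain)
{
    for (int i = 0; i < n; ++i) {
        float v = in[i] * gain;          // the same float math exists on the GPU
        out[i] = (v > 1.0f) ? 1.0f : v;  // branches work there too
    }
}

// GPU version: the outer loop becomes thousands of threads, one per pixel.
// The loop body itself is unchanged.
__global__ void brighten_gpu(const float* in, float* out, int n, float gain)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float v = in[i] * gain;
        out[i] = (v > 1.0f) ? 1.0f : v;
    }
}

int main()
{
    const int n = 3840 * 2160;                // one 4K frame, single channel
    float *in, *out;
    cudaMallocManaged(&in,  n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 0.25f;

    brighten_gpu<<<(n + 255) / 256, 256>>>(in, out, n, 3.0f);
    cudaDeviceSynchronize();

    cudaFree(in);
    cudaFree(out);
    return 0;
}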
 
  04 April 2017
Originally Posted by skeebertus: I currently develop GPU algorithms for a living - real-time video processing on the GPU, with 2000+ line algorithms that handle 4K UHD video in real time. Here's what I can tell you about it:

- GPU cores - even with old HLSL/GLSL - can handle C code just fine. All of the floating-point math instructions you have on the CPU are on the GPU as well. IF/THEN statements, LOOPS and similar all work on the GPU too. A GPU core is thus not "an exotic beast" by a long shot; its instruction set is very similar to a CPU's. A lot of CPU algorithms are thus not particularly hard to port to the GPU. I know, because I ported my old CPU video processing algorithms written in C to the GPU myself.

- The higher number of cores (hundreds, thousands) on GPU changes nothing, unless your CPU algorithm did a poor job of using more than one core to begin with. You should be going "Hurrah - hundreds of fast worker cores to parallelize my shit on" not "OMG - too many cores, too many cores..."

- I regard people who pretend that GPU programming is "something extra-special" compared to CPU programming as nothing more than dishonest salesmen - "God GPU programming is hard! God the GPU is exotic! Please give us more money when we become so advanced that we run on the GPU."

Take a good, hard look at HLSL/GLSL. It's basically the C programming language, slightly simplified, running on programmable GPU cores.

I had no problem whatsoever going from C on the CPU to HLSL on the GPU - it took me 3 days to write my first working GPU video processing algorithm.


Your work sounds like the "best case" for CPU to GPU compatibility. Pixel operations, video processing, etc. are what GPUs are designed for, and are relatively easily implemented.

Modern path tracers, on the other hand, are much more complex, and face some serious hurdles in being implemented on GPUs. Some of these have to do with lack of experience on the part of developers (it is a relatively new area after all), and some of them are improving as the technology matures. Some issues are:

- Having thousands of simultaneous threads is great for speed, but can be hell on cache access with many contending threads.

- Memory limitations, and speed issues for those renderers that can do "out-of-core" memory

- Cores optimized for shading (via fragment programs) are sometimes ill-suited to algorithms that would be simple on a CPU. For example, common actions like scatter operations (which are littered across many algorithms) can't be done as-is on a GPU; you must refactor the code (see the sketch after this list).

- The difficulty (due to concurrency issues) of creating bi-directional path tracers

So, I don't think I'm a dishonest salesman.
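
To illustrate the scatter point with a toy case - this is a made-up light-splatting sketch in CUDA, not code from any real renderer - the CPU version can simply write to whatever pixel a sample lands on, while the GPU version is only correct if every framebuffer write is atomic (which serializes under contention) or if the pass is refactored into a gather or a sort-by-pixel step. This is exactly the pattern a bi-directional path tracer hits when light subpaths splat their contributions onto the image.

#include <cuda_runtime.h>

// CPU scatter: each sample adds energy to whatever pixel it maps to.
// Trivially correct in one thread; a data race once thousands of threads do it.
void splat_cpu(const int* pixel_of_sample, const float* energy,
               float* framebuffer, int num_samples)
{
    for (int s = 0; s < num_samples; ++s)
        framebuffer[pixel_of_sample[s]] += energy[s];   // scattered write
}

// GPU version: the same scatter only stays correct with atomic writes.
__global__ void splat_gpu(const int* pixel_of_sample, const float* energy,
                          float* framebuffer, int num_samples)
{
    int s = blockIdx.x * blockDim.x + threadIdx.x;
    if (s < num_samples)
        atomicAdd(&framebuffer[pixel_of_sample[s]], energy[s]);
}

int main()
{
    const int num_samples = 1 << 20;
    const int num_pixels  = 1920 * 1080;
    int   *pix;
    float *e, *fb;
    cudaMallocManaged(&pix, num_samples * sizeof(int));
    cudaMallocManaged(&e,   num_samples * sizeof(float));
    cudaMallocManaged(&fb,  num_pixels  * sizeof(float));
    for (int s = 0; s < num_samples; ++s) { pix[s] = s % num_pixels; e[s] = 0.001f; }
    for (int p = 0; p < num_pixels; ++p)  fb[p] = 0.0f;

    splat_gpu<<<(num_samples + 255) / 256, 256>>>(pix, e, fb, num_samples);
    cudaDeviceSynchronize();

    cudaFree(pix); cudaFree(e); cudaFree(fb);
    return 0;
}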
 
  04 April 2017
@skeebertus
So you are telling us that big engine developers (including Arnold and V-Ray) are struggling to port CPU features to the GPU just because they are incompetent?
Arnold announced a GPU port years ago and is still nowhere near a final release; V-Ray RT (GPU), after years of development, is still not on par with the CPU engine (and therefore only a small percentage of users actually work with it, and only for small tasks); other engines like Corona clearly won't be ported to GPU in the foreseeable future ( https://corona-renderer.com/features/proudly-cpu-based/ ).
Are they all clueless developers, or is it just that writing code for video/image processing is far easier than porting a complex CPU render engine loaded with tons of features?
__________________
www.3drenderandbeyond.com
www.3dtutorialandbeyond.com
www.facebook.com/3drenderandbeyond
 
  04 April 2017
If you want GPU rendering, go with Redshift... they are miles ahead of the competition.
__________________
ArtStation
 
  04 April 2017
"And be careful with Youtube Arnold tutorials. Most of them completely miss the point of Arnold's energy conservation/physically accurate shaders and will lead you down a path that you will have to unlearn. As a guide, your diffuse, spec, transmission, coat, sss, emission values all need to be the sum of one (or below). So if your diff is .5, your spec can be .3 and your coat will then be .2 - anything above that goes against the recommended workflow for shaders.

Any tutorial where they use a value of 1 on the diffuse, then 1 on the specular, is wrong and will give unpredictable results, including noise and increased render times, as well as your shaders falling apart if you change the lighting."

Yes, this was true for Arnold 4, but in Arnold 5 it seems to work like this: "If all of the individual weights and colors are less than or equal to 1.0 then the Standard Surface shader is energy conserving. Unlike the old Standard shader, you don't need to worry about the sum of weights being less than 1.0 or manually enable Fresnel, which is always enabled."
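
If it helps, here's a throwaway sketch of the difference between the two rules (the weight names are just placeholders, not actual Arnold parameters; plain C++/CUDA host code so it's easy to read):

#include <cstdio>

// Old Standard shader (Arnold 4): the weights together had to stay at or below 1.0.
bool conserves_energy_arnold4(float diffuse, float specular, float coat)
{
    return (diffuse + specular + coat) <= 1.0f;
}

// Standard Surface (Arnold 5): each weight/color just has to be <= 1.0;
// the shader layers them and handles Fresnel itself.
bool conserves_energy_arnold5(float diffuse, float specular, float coat)
{
    return diffuse <= 1.0f && specular <= 1.0f && coat <= 1.0f;
}

int main()
{
    // The tutorial case above: diffuse = 1 and specular = 1 breaks the old rule
    // (the sum is 2.0) but is fine under the new one.
    printf("Arnold 4 rule: %s\n", conserves_energy_arnold4(1.0f, 1.0f, 0.0f) ? "ok" : "violated");
    printf("Arnold 5 rule: %s\n", conserves_energy_arnold5(1.0f, 1.0f, 0.0f) ? "ok" : "violated");
    return 0;
}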

Last edited by nesafarm : 04 April 2017 at 12:14 PM.
 