
Rendering character in parts (Maya)


germanager
01-28-2012, 09:13 AM
Hello guys,

I've been searching for this for a long time and finally decided to ask.

Imagine you have a character: a big reflective robot with a couple million parts, for example. And your PC is not able to render it all. How do you divide the character into parts (head, arms, mechanical parts), render them separately, and then combine them, so that the shadow from the head falls on the separate render of the armor and the reflection of the arm appears on the separately rendered leg?

My main problem is that I don't know how to transfer shadows, reflections, and bounce light from hidden objects onto visible ones.

Maybe you know of a good tutorial about it.

PS Most of the tutorials I've found so far talk about extracting render passes from the whole scene. In my case it's not possible to render the whole scene.

kanooshka
01-28-2012, 05:16 PM
In Maya it's pretty easy to have invisible objects still cast shadows and show up in reflections/refractions etc.: all you'd have to do is uncheck Primary Visibility in the shapes' render stats. It's easiest to select all the objects you want to be invisible and then change it in the Attribute Spread Sheet. I'm really not sure this would improve render time, though, because the objects are still being calculated; you're just not seeing them. If you want each part of the robot visible in reflections, it'll have to be calculated at some point. You may want to look into using reflection occlusion instead; it's not perfectly accurate, but it can be helpful.
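
If it helps, here is a minimal sketch of that idea with maya.cmds, assuming the standard render-stats attributes on shape nodes (primaryVisibility, castsShadows, visibleInReflections, visibleInRefractions). Run it in the Script Editor with the objects selected:

```python
# Minimal sketch: make the selected objects invisible to camera rays
# while still casting shadows and showing up in reflections/refractions.
import maya.cmds as cmds

shapes = cmds.ls(selection=True, dagObjects=True, shapes=True, noIntermediate=True) or []
for shape in shapes:
    cmds.setAttr(shape + '.primaryVisibility', 0)     # hidden from the camera
    cmds.setAttr(shape + '.castsShadows', 1)          # still casts shadows
    cmds.setAttr(shape + '.visibleInReflections', 1)  # still seen in reflections
    cmds.setAttr(shape + '.visibleInRefractions', 1)  # still seen in refractions
```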

It's important to figure out why exactly you're not able to render. Try disabling different features to find where the bottleneck lies. Start by disabling all lights and reflections and see if your scene can even be committed to memory. If the scene still won't render, one fix could be to combine chunks of geometry; the fewer individual objects there are, the faster it should be committed to memory.
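
A rough sketch of what combining chunks could look like in maya.cmds; 'robot_bolts_grp' is just a made-up group name for a pile of small parts:

```python
# Combine many small mesh chunks under one group into a single mesh
# to cut down per-object overhead. The group name is hypothetical.
import maya.cmds as cmds

chunks = cmds.listRelatives('robot_bolts_grp', children=True, type='transform') or []
if len(chunks) > 1:
    combined = cmds.polyUnite(chunks, ch=False, name='robot_bolts_combined')[0]
```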

If your scene is committing to memory fine, then try turning your reflection limit down to 1 or 2 to start. Then keep increasing settings until you find your issue or the difference between settings is unnoticeable.
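
For example (a sketch assuming the Maya software renderer's raytrace quality settings; mental ray has its own equivalent controls, and 'robot_metal_BLN' is a made-up shader name):

```python
# Drop the global raytrace reflection limit to test whether reflections
# are the bottleneck.
import maya.cmds as cmds

cmds.setAttr('defaultRenderQuality.reflections', 1)
# Per-material limits also exist, e.g. on a Blinn shader (hypothetical name):
# cmds.setAttr('robot_metal_BLN.reflectionLimit', 1)
```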

Maybe it would be better not to split up the robot into parts but instead split the render into render layers: direct lighting in one layer, reflections in another, etc. But the first thing would still be to figure out why it's not rendering.
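
Something like this in maya.cmds, with made-up object and layer names, is one way to set that up:

```python
# Sketch: put the robot into two render layers instead of splitting the model.
import maya.cmds as cmds

robot_parts = cmds.ls('robot_*', transforms=True)  # hypothetical naming convention
direct_layer = cmds.createRenderLayer(robot_parts, name='directLighting', makeCurrent=False)
refl_layer = cmds.createRenderLayer(robot_parts, name='reflections', makeCurrent=False)
# Per-layer overrides (e.g. disabling reflections in the direct layer) are then
# added as layer overrides in the Attribute Editor or with editRenderLayerAdjustment.
```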

-Dan

germanager
01-29-2012, 12:25 AM
Thanks a lot, Dan

Well, it renders, but I'm looking at breaking it into parts to save on the render time of a single frame, so that in case of a mistake I would only need to re-render the arm, for example.
Yeah, I was thinking that for the reflections you'd have to render the whole scene anyway. I'm just wondering how they do it in movies, or whether they only separate foreground from midground from background, etc.
I guess render passes for the whole character would be the only way to go.

molgamus
01-30-2012, 05:47 PM
If your character is a very dense polygon mesh or simply a lot of individual objects, then this sounds like a memory issue to me too. You are asking for a general solution for rendering a lot of reflective objects, right? The hypothetical scenario is your robot with a lot of parts, I would guess made of different kinds of metals.

In my experience, reflective objects work well with raytracing renderers like mental ray. The drawback of the renderers I've used is that they usually cache the whole scene's geometry in memory, unlike REYES renderers like PRMan and 3Delight, which discard all polygons that are not seen by the camera. Raytracing is something Pixar went to great lengths to avoid, and they spent a lot of effort getting around that way of rendering, because when RenderMan was made, raytracing was (and still is) expensive.

For both kinds of renderers there are ways to get past these limitations. Mental ray allows you to export geometry in a memory-friendly format, .mi files. REYES renderers such as PRMan and 3Delight can export RIB archives that contain your geometry. Animated characters could have their own RIB archive, and the environment could be exported in one or several archives depending on the geometry. If you were to make a forest, you could have one tree per RIB.

In your case, with the dense robot mesh, I would make one RIB archive for the head, one for each arm and leg, and one for the torso (I'm assuming we are talking about a humanoid robot). This way the renderer will only read the arm into memory if it is in view or is seen in reflections.
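
Just to illustrate the idea (this is not how 3Delight for Maya actually exports things; the file names and bounding boxes are made up), a master RIB fragment that pulls in per-part archives on demand could be generated like this:

```python
# Sketch: write a master RIB fragment where each robot part is a delayed
# read archive, so the renderer only loads a part when a ray actually needs it.
# Bounds are [xmin xmax ymin ymax zmin zmax] in object space (placeholder values).
master_rib = """
Procedural "DelayedReadArchive" ["robot_head.rib"]  [-0.5 0.5  1.6 2.0 -0.5 0.5]
Procedural "DelayedReadArchive" ["robot_torso.rib"] [-0.6 0.6  0.8 1.6 -0.4 0.4]
Procedural "DelayedReadArchive" ["robot_arm_L.rib"] [-1.2 -0.5 0.8 1.6 -0.3 0.3]
"""
with open('robot_master.rib', 'w') as f:
    f.write(master_rib)
```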

From what I've heard and seen, the Arnold renderer handles geometry very well for being a raytracer. But I have not tried it yet.

germanager
01-30-2012, 09:32 PM
molgamus, thank you!

Very useful information.
I was trying to learn 3Delight but it's very limited on tutorials and explanations. I'll try it again, though.
I searched for Arnold; there are some impressive renders on the web, but it's not released yet.

molgamus
01-31-2012, 12:12 PM
Yes, 3Delight is quite a handful to get into. There are some tutorials at 3Delight.com that explain the basic concepts of rendering with 3Delight for Maya: http://www.3delight.com/en/index.php?page=3DFM_tutorials

If you stick to Maya default shaders and Maya lights, your scenes should render fine, as long as you apply the correct geometry attributes.

germanager
01-31-2012, 12:31 PM
molgamus, I've seen them all, to be honest. They are quite basic. Though I'm starting to dig deeper into the shading stuff.

One of the biggest newbie problems there is shadows. Depth-map shadows look extremely unnatural when blurred (blurred evenly across the edge), while raytraced shadows take too long to calculate.

molgamus
02-01-2012, 12:06 AM
I usually stick with shadow maps; as long as a shadow is cast, the illusion works, even though it is a little too filtered close to the shadow-casting object. For the parts of the shadow that are further away, it is indeed too sharp to be realistic. But if you use global illumination and ambient occlusion, which are pretty quick to calculate on point clouds in 3Delight, you can fill that part in with light. If the light source is very big, raytraced shadows would be preferable, even though they are expensive.
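
For reference, a small sketch of the two setups on a Maya spot light shape, using the standard shadow attributes; 'keyLightShape' is just a placeholder name:

```python
# Cheap depth-map shadow (uniform filter) vs. raytraced soft shadow
# whose penumbra widens with distance.
import maya.cmds as cmds

light = 'keyLightShape'  # hypothetical light shape name

# Depth-map shadow:
cmds.setAttr(light + '.useDepthMapShadows', 1)
cmds.setAttr(light + '.dmapResolution', 1024)
cmds.setAttr(light + '.dmapFilterSize', 3)

# Raytraced soft shadow (more expensive):
# cmds.setAttr(light + '.useDepthMapShadows', 0)
# cmds.setAttr(light + '.useRayTraceShadows', 1)
# cmds.setAttr(light + '.lightRadius', 0.5)  # bigger radius = softer shadow
# cmds.setAttr(light + '.shadowRays', 8)     # more rays = less noise
```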

If you are versed in RSL, you could write a light shader that gets around this problem while maintaining speed and staying physically plausible, trading away a little bit of accuracy.

Maybe Maxwell or V-Ray would be better suited for these kinds of raytracing tasks, if they can manage the vast amount of geometry.

CaptainObvious
02-01-2012, 09:57 AM
In all honesty, the most cost-effective method might be to just purchase more memory for your computer... 16 gigs of RAM goes for less than $200 now, doesn't it? The cost of more memory might be lower than the cost of the additional work, since splitting everything up and comping it all can take a fair bit of time.

I'm going to assume that you're rendering in mental ray, since you didn't specify. It should be possible to do memory cycling in Maya/mental ray, where it dynamically removes parts of the model from memory in order to render everything. It will take longer to render, but it won't fail. I haven't really used mental ray a lot, but that's what I typically do in modo. By splitting everything up into small parts, the render engine is able to load only the parts it needs for each bucket.

Additionally, since you're talking about millions of parts... you're using instancing, right? Most ray tracers, mental ray included, are capable of rendering billions and billions of polygons by copying individual meshes multiple times. Something like a robot generally has lots of repeating elements; things like nuts and bolts and such. As long as they're identical, you can use instancing to ensure that memory usage isn't much higher than when you're rendering just one of them.
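
A small sketch of that in maya.cmds, scattering one bolt mesh as instances so the geometry is only stored once ('bolt_geo' and the positions are placeholders):

```python
# Create instances of a single bolt mesh; each instance shares the same
# shape node in memory, only the transforms differ.
import maya.cmds as cmds

positions = [(1, 0, 0), (2, 0, 0), (0, 0, 1), (0, 0, 2)]
for i, (x, y, z) in enumerate(positions, start=1):
    inst = cmds.instance('bolt_geo', name='bolt_geo_inst{0}'.format(i))[0]
    cmds.move(x, y, z, inst)
```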

Rendering it part by part with some stuff hidden will not help you, because as long as an object is casting shadows or shows up in reflections, it still needs to exist in memory. So if you want the robot's arm to cast a shadow on the torso, both the torso and the arm need to be in memory at the same time, and rendering it in two passes will not use less memory than rendering it all at once.

I don't necessarily think switching to a different render engine will help you, either. As long as you're using ray tracing, the same basic problem will still occur. Some render engines are more memory efficient than others, but generally the difference is marginal. If render engine A lets you render five million triangles with a given amount of memory, perhaps engine B lets you render ten million. A big difference, sure, but not always enough to matter, especially not compared with instancing and such...

molgamus
02-01-2012, 11:31 AM
I agree with you, adding more memory is a cheap solution. However, there are cases where this method is not applicable. Maybe all the memory slots on the motherboard are occupied, or maybe you find yourself having to work on a laptop with 2GB of RAM.

I find it very interesting to solve a problem like this, because even if you have lots of computers at your disposal you can still hit the ceiling. How did they manage the robots in Terminator Salvation, for example? Lots of metal with glossy reflections, etc.

I found that 3Delight was used on that movie, but not which shots. My guess is that different parts of the bot were split into RIB archives which were then loaded on demand. I assume they made different versions of the robots, from low-poly to high-poly. Depending on the distance from the camera, a different RIB archive would be read into memory. Possibly also some use of point clouds for ambient occlusion and color bleed. But this is just me speculating!

earlyworm
02-01-2012, 02:25 PM
Both ILM and Rising Sun Pictures worked on Terminator Salvation. ILM uses PRMan and RSP uses 3Delight.

When dealing with large amounts of polygons, you typically want to use displacement, object instancing or procedural loading of geometry (such as RenderMan read archives or mental ray proxies) in order to work with the geometry. When dealing with reflections, you'd also probably want to use short raytrace distances and reflection environment maps for everything else.
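
As a rough illustration of the environment-map part (standard Maya nodes, but the shader name is made up and the exact setup depends on your renderer):

```python
# Drive a shader's reflected color with an environment texture so distant
# objects don't have to be raytraced (or even kept in memory) for reflections.
import maya.cmds as cmds

env = cmds.createNode('envBall', name='robot_reflEnv')
cmds.connectAttr(env + '.outColor', 'robot_metal_BLN.reflectedColor', force=True)
# With the environment map providing the far reflections, the raytraced
# reflection contribution can be limited to nearby geometry.
```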

Renderers like PRMan and 3Delight also have other tricks for saving render time, such as baking out point clouds, which can be used raw or converted into brick maps (which are mipmapped) to store data for calculating occlusion, indirect diffuse and blurry reflections.

germanager
02-05-2012, 10:40 PM
thanks a lot, guys

I can't upgrade to 16 GB because my board supports a max of 8. My system is previous-generation; it's not that bad, but definitely outdated. I want to upgrade, but maybe not this year.

After reading your comments I realize there is a lot I need to learn :)

molgamus
02-07-2012, 12:30 AM
The last few days I've been looking closer at V-Ray and Arnold. I'm sure they are very competent rendering engines, but as I see it now, the RenderMan-compliant renderers have the advantage. In the past, good-looking raytracing was very expensive; today you can get by with the computing power of a regular workstation.

Then again, computers are constantly getting faster, but the time to render a frame doesn't seem to follow. The creators of Katana explain it well, I think: in the '90s it was a huge challenge to make Stuart Little. The fur, even on such a small creature, was probably a huge thing to render. Today, not so much.

I haven't been in this business for a very long time, but the trends I've spotted are: more geometry, larger textures, more detailed light data (e.g. lightstage capture) and bigger effects. I don't think it's ever going to stop, but in essence it's about managing lots of data quickly enough for art direction. This is why I favor the RenderMan renderers; they seem to handle lots of data more efficiently. Point clouds and brick maps were already mentioned, plus delayed RIB archives and the ability to program procedural geometry/particles/instances. I'm aware that these features have counterparts in MR/V-Ray/Arnold. RenderMan trades some accuracy for speed, while the raytracers tend to go for hardcore photorealism at a higher price.

Excuse my late night ramblings, I hope my point got across anyway.

Edit: 100th post! cake anyone? :D

CaptainObvious
02-08-2012, 07:05 AM
The best choice of render engine really depends on what you're doing and what your requirements are. In architectural visualisation (which is what I do), being able to handle large datasets is important, but I suspect less important than it is in film effects work. What's more important is being able to quickly calculate very realistic lighting and shading effects, and this is where ray tracers really shine. It's not happenstance that V-Ray dominates this segment of the industry rather than Pixar's RenderMan.

In areas where images and animations are produced by very small teams or even single individuals, ease of use and ready-made solutions are much more important than they are in a large studio with technical directors and dedicated R&D. One of the reasons Maxwell is used in the visualisation business is that you don't need to know the first thing about how rendering works in order to use it. Sure, it's a bit on the slow side, but throw enough CPU power at it and it will still render your print-res still in an hour or two. There's no mucking about with anti-aliasing settings or anything, and it will never produce weird errors because of sampling problems. You just press 'go' and wait for it to look good enough.

CGTalk Moderation
02-08-2012, 07:05 AM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.