Why would you want to render a 7-second frame over a network?
Why not? What if you had, say, 2000 frames?
Distributed rendering is NOT for animations. If you have 2000 frames and, say, 20 machines, you'd submit a Backburner job that renders 100 frames locally on each machine.
Distributed rendering is generally for GI/FG scenes that take quite a while to render. A 1-hour render on 4 CPUs? You'll get it down to around 15 minutes if you add another 3×4 CPUs. Generally I use distributed rendering for frames of up to 5 minutes in render time.
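The speedup arithmetic above can be sanity-checked with an Amdahl-style estimate. This is only a sketch: the 10% serial fraction (scene translation, the GI/FG prepass) is an assumed figure, not a measurement, and the function name is illustrative.

```python
# Rough DBR speedup estimate: the serial part (scene translation, photon/FG
# prepass) doesn't scale with more machines; the bucket rendering does.
def estimated_render_time(base_minutes, base_cpus, total_cpus, serial_fraction=0.1):
    """Amdahl-style estimate: serial work stays fixed, parallel work scales."""
    serial = base_minutes * serial_fraction
    parallel = base_minutes * (1.0 - serial_fraction)
    return serial + parallel * base_cpus / total_cpus

# A 60-minute frame on 4 CPUs, adding three more 4-CPU slaves (16 total):
t = estimated_render_time(60.0, 4, 16)
print(f"{t:.1f} min")  # prints 19.5 min; ideal 4x scaling would give 15.0
```

So the "around 15 minutes" figure assumes near-perfect scaling; any serial overhead pushes the real number up.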
Distributed rendering works on only one frame at a time in Max. And I just checked in 2010: it renders/calculates FG only for the first frame, and then on the second or third frame I get either an “unknown error” or “Slave 1 Died”… Does anyone know why this happens in DBR? I still haven’t found any workaround for DBR failure…
The scene I’m working on has a heavy grass setup made with Hair and Fur, which I’ve set to render as geometry, with BSP2 (since it also contains a few MR proxy trees).
You’re the one who asked why anyone would render a 7-second frame over the network. Being an FX artist, I render quick frames over distributed all the time…
You can do whatever you like if you’re cool with that.
Still, consider that linking, say, 20 machines to a single frame means that if just one of them fails, you lose the whole frame while all the others keep working on it anyway. Distributing frames instead of tiles means you lose only the frames assigned to the failed machine, something you can monitor and re-queue in your job schedule for re-rendering.
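The tiles-versus-frames tradeoff can be made concrete with a toy calculation. The 2% per-machine failure rate is purely illustrative, not measured from any farm.

```python
# Tiles: a frame survives only if every machine survives, so one flaky
# slave can sink the whole frame. Frames: a failure costs only the frames
# assigned to that machine, and those can simply be re-queued.
p_fail = 0.02       # assumed per-machine failure chance during a job
machines = 20

p_frame_lost_tiles = 1 - (1 - p_fail) ** machines
print(f"tile mode: {p_frame_lost_tiles:.1%} chance the frame is lost")  # ~33%

# frame mode: expected fraction of frames needing a re-render pass
print(f"frame mode: {p_fail:.1%} of frames need re-rendering")
```

Even a small per-machine failure rate compounds quickly once 20 machines all have to finish the same frame.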
MR resets the VFB to black by default. I found some tutorials showing how to turn this off via the preferences (2009), but Max 2010 doesn’t have that checkbox anymore.
Is it possible to keep the VFB in Max 2010?
Thanks in advance.
During DBR, when most buckets have been completed and only a few remain, MR does not redistribute the available CPU resources to speed up the finish.
For example: I have three dual quad-core workstations in DBR, and I frequently frown at my buckets inching their way to the end while many of my cores sit idle. This becomes a real problem at extreme sampling values.
Why doesn’t MR distribute given tasks equally over available CPU resources? Am I missing a hidden switch?
Thank you very much.
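As far as I know there is no hidden switch: mental ray hands each bucket to a thread and never splits a running bucket, so the idle tail is bounded by the cost of the last buckets in flight. The usual mitigation is a smaller bucket size. A toy scheduling model (random bucket costs, nothing measured from mental ray) shows why more, smaller buckets shrink that tail:

```python
# Toy model of the end-of-render tail: buckets of uneven cost handed
# greedily to the earliest-free core. More, smaller buckets get the
# makespan closer to the ideal (total work / cores), at the cost of
# some per-bucket overhead.
import random

def makespan(costs, cores):
    """Greedy list scheduling: each bucket goes to the earliest-free core."""
    finish = [0.0] * cores
    for c in costs:
        i = finish.index(min(finish))
        finish[i] += c
    return max(finish)

random.seed(1)
cores = 24                      # three dual quad-core machines
total_work = 1000.0             # same total render work in every case
for n_buckets in (100, 400, 1600):
    weights = [random.random() for _ in range(n_buckets)]
    scale = total_work / sum(weights)
    costs = [w * scale for w in weights]
    print(f"{n_buckets:5d} buckets: makespan {makespan(costs, cores):7.1f}"
          f"  (ideal {total_work / cores:.1f})")
```

In practice that means dropping the bucket size (e.g. 64 → 32) when extreme sampling makes individual buckets expensive, trading a little overhead for far less idle time at the end.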
A weird problem occurs when I render a high-poly object with relatively complex materials.
There are some 1.2 million faces and five Blend materials. Two of them use four different 2048×4096 8-bit grayscale maps, and three use the same but at 4096×4096. There are also two 24-bit half-resolution maps for each of them, plus some other smaller maps for certain parts.
When rendering, MR starts warning that it can’t start thread X because there is not enough storage available to process that command. Sometimes there is also a warning that there isn’t enough memory to allocate filter tables for the bitmaps.
The trouble is that the page file is only at 1.57 GB and there is still almost 1.6 GB of free RAM. I’m running 32-bit Max 2009 on 32-bit Windows XP with 4 GB of RAM (only 3.25 recognized).
Does anybody have a clue what the problem might be?
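For what it’s worth, a quick back-of-the-envelope on the decompressed bitmap memory helps frame the problem. The sizes come from the post above, but the per-material map counts and the half-resolution dimensions are my reading of it, so treat the total as an order-of-magnitude figure:

```python
# Back-of-the-envelope for decompressed bitmap memory. Map counts and the
# assumed 2048x2048 size for the "half resolution" 24-bit maps are my
# interpretation of the post, not confirmed numbers.
MiB = 1024 * 1024

gray_2k = 2 * 4 * (2048 * 4096 * 1)    # two materials x 4 maps, 8-bit
gray_4k = 3 * 4 * (4096 * 4096 * 1)    # three materials x 4 maps, 8-bit
rgb_half = 5 * 2 * (2048 * 2048 * 3)   # assumed: 2 half-res 24-bit maps each

total = gray_2k + gray_4k + rgb_half
print(f"bitmaps alone: ~{total / MiB:.0f} MiB")  # prints ~376 MiB
```

Roughly 376 MiB of textures, plus 1.2M faces, the BSP tree, and filter tables, all inside a 32-bit process that tops out near 2 GiB of address space. Fragmentation can exhaust that address space long before physical RAM or the pagefile look full, which would explain errors appearing while the system still reports free memory.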
I have a studio lighting scene set up. It’s pretty much identical to Jeff’s setup in his Gnomon video. The scene is lit and my subject’s shaders/materials are all finished. Problem is: everything was going well, I haven’t changed the lighting at all while applying materials, and now that I’m ready for final render tests all my reflections are completely blown out. I’ve checked all my photometric light and exposure settings over and over, and nothing there has changed.
I’m completely stumped, as I must have overlooked something as simple as the tick of a checkbox.
Any ideas? Cheers.
Do you have a glow effect on? Do the highlights get blown out after the render finishes? That’s the only thing I can think of.
I know exactly how you feel; it happens in so many 3D tasks. On one of my latest jobs, during distributed bucket rendering Mental Ray reported errors that some of my maps (HDR and GIF ones) were not recognized. There were no problems rendering solo, but once DBR was turned on those maps did not appear at all. I had rendered hundreds of frames solo, turned on DBR when one of the other machines became free, and all of a sudden there was this unexplained illumination change in the new frames.
I’ve spent hours trying to understand a sudden illumination change on an object. The fact of the matter is, I might have figured it out more easily if there weren’t so many reflections in the scene.
You may also want to check any unhidden objects that are not actually in front of the camera but still show up in reflections.
Open the message window to check for any “stupid”(*) errors.
(*) Stupid errors are errors that do not interrupt the rendering process. They seem to have no effect on the scene, but they actually do.
Consider hiding objects one by one, in a rational order, to see if you can find a clue.
Also check the jitter and lock-samples settings to see whether their state has been changed by mistake.
May God be with you if global illumination or final gathering is on; let’s hope you don’t need to render everything from the beginning.
Thanks Nezih, but I’m pretty sure it’s a global setting, as it’s affecting the entire scene rather than specific objects.
I’ve since set up a new lighting scene using a photo background and Jeff’s production shader setup. This is working out a lot better, and I’ll definitely revisit a photometric studio setup once I’ve done some more research. :)
I had an animation of a sea where I used the Ocean (lume) shader in the bump slot to produce waves. It worked fine until I used distributed rendering with Backburner: the machines couldn’t generate the waves consistently, the result flickered all over, and in the end I had to fall back to a plain Noise map to generate the waves.
I wonder if there’s something I missed that would help me with this.
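A hedged guess at the cause: the shader’s animated waves may not be evaluated identically on every slave if any per-frame value depends on machine-local state (wall clock, accumulated steps, an unseeded RNG). The general fix for this class of flicker is to derive every animated value purely from the frame number. A sketch of the principle (Python, not MaxScript; the function names are illustrative):

```python
# Principle: any value that all render nodes must agree on should be a pure
# function of the frame number, never of machine-local state.
import hashlib

def wave_phase(frame, frequency=0.35):
    """Deterministic phase for a given frame: identical on every machine."""
    return frame * frequency

def per_frame_seed(frame, salt="ocean"):
    """Stable RNG seed derived from the frame number alone."""
    digest = hashlib.sha256(f"{salt}:{frame}".encode()).hexdigest()
    return int(digest[:8], 16)

# Two "machines" rendering frame 42 compute identical values:
print(wave_phase(42), per_frame_seed(42))
```

If the shader offers no way to lock its animation to the frame, an explicitly keyframed or phase-animated map (like the Noise workaround above) is the deterministic substitute.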
This morning I woke up to a very strange rendering bug. I had kicked off a multi-frame render overnight, which was supposed to render the same model from various angles by rendering select out-of-sequence frames from an animated model/camera setup. Upon reaching frame 0 (which was supposed to render after successfully rendering frames 60, 70 and 50, in that order), it came up with an error telling me it can’t find UV1 coordinates for any of the objects in the scene. Having done a few tests since, I can confirm this only happens when frame 0 is rendered out of sequence. All other frames can be mixed up just fine, and likewise if frame 0 is rendered first it all works fine as well.
The workaround is easy (render frame 0 first!), but how can this happen?
(This post was deleted at the request of the original poster.)
That out-of-sequence rendering bug has happened to me, too, but a long time ago, with a scene that I used a lot back then (about five years ago). It happened sometimes when I rendered frames out of sequence, not sure if it was connected to just frame 0 or if it could happen with any frame. And it claimed that UVW coordinates were missing, even though they clearly were not. I never could figure out why that happened, but as I had objects in the scene that were imported, from CATIA, via .IGES format, my best guess was that something was weird with those objects. Not that I knew for sure, really, or at all, but I felt I needed to assign blame to someone or something!
But anyway, you are not the only one to have experienced this bug. I think this was while using the standard MAX scanline renderer, not Mental Ray, though I’m not sure… so it could be more MAX-related than Mental-related.
Hmm very interesting! I’ve tried rendering the same scene in scanline and the bug doesn’t happen then. Odd…
I dug out my old files and checked what renderer I used. It was four years ago, and I seem to have switched from using the scanline renderer to using Mental Ray at that time - the older versions of that file use the scanline renderer, while the newer ones use Mental. I think I remember now that I needed Mental for its speed, and that this pesky bug was something I had to live with - small price to pay. I think I remember discussing this with my MAX vendor, but that they didn’t really use Mental themselves, so they didn’t really know.
So it was probably with Mental the bug appeared, not with the scanline renderer as I assumed in my previous post. Sorry!
I see that I used MAX Standard materials back then, and also the Multi/Sub material, always with Standard in all slots, and a couple of Raytrace materials, in that scene. I wonder if the bug has to do with the materials used? Nowadays I pretty much use Arch&Design and not much else, never Raytrace, and never Multi/Sub. And I never get that bug.
I’ve done some further testing, and the bug also appears if I apply a single A&D material with a single bitmap texture to the whole model. It also occurs with a single Standard material. So the bug is definitely in the Mental Ray renderer’s handling of UV coordinates.