It becomes very tricky when you need to use V-Ray for animations. It's fine when only the camera is animated, but when almost everything in the scene is animated it is nearly impossible to render flicker-free with GI.
These days V-Ray is becoming popular in industries beyond ArchViz, so my question is: is there any solid, production-proven example of how this wonderful renderer can be used for animation with GI, where not only the characters but almost everything is animated?
I'd ask all you helpful people to please discuss this for the community.
What I usually do is place a V-Ray ambient light with a VRayDirt map inside it, and tweak it to my needs. That is basically global AO in V-Ray, which is important for diffuse shadows and for even, global lighting.
Of course I don't stop there; it's only a first step. I then place lights as needed, and I think it's not a bad approach for animations (and it renders very fast).
You could also consider rendering in passes, and then compositing in another app (Nuke, After Effects, Combustion…).
Has anyone tried the spherical harmonics (SH) engine inside V-Ray for Maya? There is very little information on this particular topic on the V-Ray help site.
I attended the SH workshop by Weta at SIGGRAPH 2010, but it looks like the SH implementation in V-Ray is quite simple and may not be the same. Anyway, has anyone actually tried baking and using GI with spherical harmonics on moving objects?
I’ve been a big supporter of VRay for many years, trying to push it on studios wherever I work, but it is tough to push when doing full GI renders with animation.
Lately, for simple scenes, I've adopted a camera-animation-only workflow for V-Ray, and have added both Octane and Maxwell Render into my workflow for the elements in motion. Successfully compositing the two is always a challenge, but with a little camera-projection work you can usually find a solution. Alternatively, you can render scenes with no blurry reflections, and again, with a few projection-mapping tricks you can breathe life back into the shots.
Also, VRay is still a gorgeous looking render engine without GI. You’ll be surprised what you can accomplish without it in some cases.
It's very simple… brute force for the first bounce and light cache for the secondary. It works perfectly!
When I was at Blur I used this constantly on characters, and it made my life much, much happier compared to the mental ray pipeline we used years ago. The only thing is that you need to know how the DMC sampler works if you plan to use brute-force GI.
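For reference, that setup might look like the following in V-Ray for Maya. This is a hedged sketch: the `vraySettings` attribute names and the enum values are assumptions from memory and vary between V-Ray versions, so verify them in your build before relying on it.

```python
# Brute force first bounce + light cache secondary, as described above.
# ASSUMPTION: attribute names and enum values below are from memory
# and may differ in your V-Ray for Maya version -- double-check them.
import maya.cmds as cmds

cmds.setAttr("vraySettings.giOn", 1)             # turn GI on
cmds.setAttr("vraySettings.primaryEngine", 2)    # assumed: 2 = brute force
cmds.setAttr("vraySettings.secondaryEngine", 3)  # assumed: 3 = light cache
```

This is a configuration fragment that only runs inside Maya with the V-Ray plugin loaded.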
Could you tell us a bit more about that, please? How can you project reflections for moving scenes?
As far as I know, brute force works quite fast for exteriors (plus what's been said about light cache for the secondary bounce; won't that flicker?). For interiors, you could use area lights to simulate GI (tuned against test frames rendered with GI), but that will only work to some degree, as you will get some dark shadows. Maybe you could render just the characters with GI (for example, a single brute-force bounce plus area lights that simulate GI), simulate GI for the surroundings, and use V-Ray's matte options to render the characters and the surroundings separately.
Interpolated global illumination will always be prone to flickering, but there are workarounds. For one thing, the simpler the lighting is to solve, the less flickering you will notice, so use direct lights to fill in the GI as much as possible, and cache GI where possible. V-Ray has a built-in feature for interpolating the GI from frame to frame; this can reduce GI flickering significantly, at the cost of a slight time blur of the indirect lighting, which is not always noticeable. You can also remove flickering in post using motion/velocity vectors, if you have access to Nuke, Fusion, or something similar. There is a Nuke method posted here. If you use Fusion rather than Nuke, I could upload the node network I built that does pretty much the same thing; it works pretty well at removing all kinds of noise.
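The motion-vector idea can be sketched outside of Nuke or Fusion. Here is a toy one-scanline example, assuming integer per-pixel motion vectors and a simple exponential blend; real compositing tools add sub-pixel warping and occlusion rejection on top of this.

```python
# Toy temporal GI denoiser: reproject the previous frame along motion
# vectors and blend it with the current frame to suppress flicker.

def temporal_filter(prev_frame, curr_frame, motion, blend=0.7):
    """prev_frame/curr_frame: lists of pixel values (one scanline).
    motion[i]: integer offset to where pixel i was in the previous frame.
    blend: how much of the reprojected history to keep."""
    out = []
    for i, curr in enumerate(curr_frame):
        src = i + motion[i]                 # position in the previous frame
        if 0 <= src < len(prev_frame):      # valid history: blend it in
            out.append(blend * prev_frame[src] + (1 - blend) * curr)
        else:                               # disocclusion: fall back to current
            out.append(curr)
    return out

# A constant signal with per-frame noise: the filter pulls the noisy
# current frame back toward the stable history.
prev = [1.0, 1.0, 1.0, 1.0]
curr = [1.4, 0.6, 1.2, 0.8]
print(temporal_filter(prev, curr, [0, 0, 0, 0]))
```

With `blend=0.7` the noisy values are pulled most of the way back toward the stable history, which is exactly the time blur the frame-interpolation feature trades for stability.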
Edit: There are also optical-flow-based noise removal tools, for example RE:Vision Effects' DE:Noise.
Adaptive settings become irrelevant after a certain point, because you are moving the adaptive sampling from the shader level to the image-sampler level. When I use BF my DMC settings are usually pretty high, like 1 min and 50 max, and the only thing you need to tweak at that point is the image-sampler noise threshold, since every sampler in the scene that uses subdivs (shadows, reflections, refractions, etc.) is casting only one ray.
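The min/max/threshold interplay can be illustrated with a toy adaptive loop. This is a hypothetical stand-in, not V-Ray's actual sampler: always take the minimum number of samples, then keep sampling while the running mean is still changing by more than the threshold, up to the maximum budget.

```python
# Toy DMC-style adaptive sampler: the threshold, not the max subdivs,
# decides how many samples most pixels actually take.
import random

def adaptive_sample(shade, min_samples, max_samples, threshold):
    """Average repeated calls to shade() until the running mean moves
    by less than threshold between samples (after min_samples), or the
    max_samples budget is exhausted. Simplified illustration only."""
    total, n, mean = 0.0, 0, 0.0
    while n < max_samples:
        total += shade()
        n += 1
        new_mean = total / n
        if n >= min_samples and abs(new_mean - mean) < threshold:
            mean = new_mean
            break
        mean = new_mean
    return mean, n

random.seed(0)
# Noisy shading function: mean 1.0, some variance, like a glossy lobe
# evaluated with a single ray per sample.
value, taken = adaptive_sample(lambda: random.gauss(1.0, 0.2), 1, 50, 0.001)
print(value, taken)
```

Lowering the threshold forces more samples toward the 50 max; raising it lets pixels bail out early, which is why the threshold is the main knob once min/max are fixed.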
Isn’t that horrendously slow? When you’re only taking one secondary sample per primary sample, you get really bad stratification, and you need significantly more samples per pixel. It also increases the risk of completely missing features, since the adaptive system is based on local contrast. If no samples in a given area of pixels hit anything, then it won’t know that it should sample it more, which makes it a bad method for resolving small geometric detail.
My knowledge about practical use of V-Ray is not as deep as it could be, but generally speaking it’s better to take fewer higher quality samples, rather than more lower quality ones.
If you have a scene with a single point light and a single area light, and the area light needs all 50 subdivs to resolve properly, with this method you’ll have to fire all 50 subdivs worth of primary samples, resulting in 2500 camera rays per pixel, plus 2500 shadow rays to the point light, plus 2500 shadow rays to the area light, for a total of 7500 rays per pixel. If you instead set the sampler to, say, 2:8, you’ll get a total of 2628 rays and pretty much the same quality, since it’s unlikely that you’ll need more than 64 primary samples in the first place.
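The arithmetic above can be checked directly, under the simplified model the post uses: N subdivs means N² rays, a point light costs one shadow ray per camera ray, and the area light gets a fixed subdivs² shadow-ray budget.

```python
# Rays-per-pixel count for the two sampler configurations discussed,
# using the post's simplified model (subdivs squared = ray count).

def rays_per_pixel(aa_subdivs, point_lights, area_light_subdivs):
    """aa_subdivs**2 camera rays, one point-light shadow ray per camera
    ray, plus a fixed area_light_subdivs**2 shadow-ray budget."""
    camera = aa_subdivs ** 2
    point = camera * point_lights
    area = area_light_subdivs ** 2
    return camera + point + area

# All 50 subdivs pushed through the image sampler:
print(rays_per_pixel(50, 1, 50))  # 2500 + 2500 + 2500 = 7500
# Adaptive 2:8 sampler, area light still resolved at 50 subdivs:
print(rays_per_pixel(8, 1, 50))   # 64 + 64 + 2500 = 2628
```

This reproduces the 7500-versus-2628 comparison: almost all of the savings come from not multiplying the camera and point-light rays by the area light's quality requirement.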
Using the adaptive anti-aliasing as a brute force method for clearing up render noise is easy, but it’s not a very fast method.
Actually, this method of relying completely on the DMC sampler works very well and produces fast results. As I said, we used it on many projects while I was at Blur, so it's tested in production. The only thing is that sometimes, with DMC set to 1/50, I would set the BF subdivs to 100 or 150, which basically gives 4 or 9 glossy-reflection samples per shading sample (non-adaptive). That was for situations where we had highly reflective surfaces and reflection quality was the priority over image sampling.
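The 4-or-9-samples figure follows from how V-Ray relates material subdivs to the image-sampler rate: roughly, material subdivs are divided by the max AA subdivs and the result is squared. A small sketch of that relationship (simplified; the real sampler is adaptive):

```python
# Why subdivs of 100 or 150 against a DMC max of 50 yield 4 or 9
# glossy rays per shading sample, in the simplified subdivs model.

def glossy_rays_per_shading_sample(material_subdivs, max_aa_subdivs):
    """Material subdivs divided by the AA subdivs, then squared.
    Simplified model that ignores adaptivity."""
    per_axis = material_subdivs // max_aa_subdivs
    return per_axis ** 2

print(glossy_rays_per_shading_sample(100, 50))  # 2 per axis -> 4 rays
print(glossy_rays_per_shading_sample(150, 50))  # 3 per axis -> 9 rays
```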
I remember a presentation DD did about their work on Tron, and they basically used the same techniques, relying completely on the DMC sampler to solve noise problems.
When I started doing these experiments I was also a little bit sceptical about the final results and render time, but after trying it over and over again it became my primary workflow. However, this was only used for moving objects, while static ones were rendered as a separate pass with different settings, because GI was cached for those.
So do you precalculate the light cache for that, or compute it per frame? Also, how do you compensate for the GI from the static environment objects that you render as a separate pass? How do you get that GI bounce into the scene with only the characters in it?