Vray for Animations


Hey all. I have a friend who is a PRMan expert trying to push my studio to adopt a PRMan workflow instead of Vray. One of the points he came up with is that Blur couldn’t stand Vray and had already moved on to another renderer.

Now, Oglu’s 3dworldmag link suggested otherwise, with the Elder Scrolls trailer only a few months ago. Can anyone shed some light on whether he’s just bluffing or it’s true?


You really shouldn’t buy a renderer because Blur or any other studio uses it, though.

That said, I know Blizzard uses PRMan; maybe he meant them?


I’m not buying Vray because of Blur; Vray is my favourite engine regardless :open_mouth: I’m just curious about his statement, that’s all.


From what I know it’s not true.

Best regards,


Everything on this page is rendered with vray if you are looking for animation examples:


Blur has a very robust pipeline built around Max. Switching from Vray to PRMan would not be easy for them.

The main lighting leads there are also very much in favor of keeping the number of passes to a minimum and doing as much as possible in-camera. I could see them giving Arnold a try at some point, but not PRMan.


I’ve used Vray for animations every day for the last 8 years.

Moving cameras are generally pre-calculated with IRR/LC, then animated elements are done with the BF/LC method.

If it’s a heavily animated scene it will be BF/LC all the way; it’s fast when set up correctly.

Sounds like you just need to learn how to use Vray, tbh; it’s really quite straightforward.
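For what it’s worth, that rule of thumb can be sketched in plain Python. This is only an illustration of the decision logic described above; the function, field and engine names here are made up for the sketch and are not actual V-Ray parameters or API calls.

```python
# Illustrative sketch of the GI-engine choice described above.
# None of these names are real V-Ray settings.

def pick_gi_engines(camera_moves: bool, heavy_animation: bool) -> dict:
    """Pick primary/secondary GI engines for a shot.

    - Heavily animated / deforming scenes: Brute Force + Light Cache,
      recomputed per frame (no cached prepass to flicker).
    - Camera-only (fly-through) shots: precalculate Irradiance Map +
      Light Cache once, then render the animation from the saved caches.
    - Still camera: a single-frame IRR/LC solution is enough.
    """
    if heavy_animation:
        return {"primary": "brute_force", "secondary": "light_cache",
                "prepass": None}
    if camera_moves:
        return {"primary": "irradiance_map", "secondary": "light_cache",
                "prepass": "precalculate once, render from saved caches"}
    return {"primary": "irradiance_map", "secondary": "light_cache",
            "prepass": "single frame"}
```

So a fly-through gets the cached IRR/LC prepass, while anything with deforming geometry falls through to BF/LC.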


I’ve never heard of Blur using PRMan, and even if it was true, if his best argument is that, I would doubt his “expert” status, or at the very least his motives.
As far as I know, until the end of 2012 (no news after that) they were still using VRay extensively.

PRMan has its place, but from what I read of your studio in the farm thread in HW, if I had to take a wild, uneducated guess (not knowing much about what you guys are doing or how) I’d say its place is unlikely to be there.

I’d sooner consider Arnold, if you have reason not to use VRay, than PRMan in your place.


Would you mind enlightening me on some of your settings? I’m quite interested.



Guys, this video shows some very interesting possibilities of Spherical Harmonics in real time in the viewport. I don’t know what render engine he is using, but it’s very interesting and gives a glimpse of what can be achieved with SH.

Does anyone have any extra light to shed on these mysterious Spherical Harmonics? Particularly in Vray.


AFAIK Vray for Maya has SH; the Max version doesn’t.


They aren’t mysterious; they were introduced in the late 1700s by Laplace, and they have been used in rendering for quite a few years.

It boils down to replacing some expensive parts of the contribution equations (or others) with a cheaper, more easily computed and interpolated chunk in spherical function space (frequencies modulated in such a space, more like).

For non-deforming objects it can also result in very efficient caching/look-up even as the point of view changes.

It’s like asking “tell me more about this mysterious ray tracing thing!”, or path tracing, or unified sampling models, or various partitioning/sorting techniques, or whatever else you have.
It’s been around for quite a while, it’s a relatively complex mathematical subject, and when available it’s entirely transparent to the user; it’s just one of many tricks available to speed things up in some situations, or parts. Implementations, and what they cover, can differ wildly.
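To make the “cheaper, interpolated chunk” idea concrete, here is a minimal plain-Python sketch of SH projection: a directional function is turned into a handful of coefficients (bands 0 and 1 only, via uniform Monte Carlo sampling of the sphere), after which evaluating it is just a small dot product. This is a from-scratch illustration of the mathematics, not code from any renderer.

```python
import math
import random

def sh_basis(x, y, z):
    # Real spherical harmonics, bands l=0 and l=1 (4 coefficients total).
    return [0.282095,           # Y_0^0  (constant term)
            0.488603 * y,       # Y_1^-1
            0.488603 * z,       # Y_1^0
            0.488603 * x]       # Y_1^1

def project(f, n=20000, seed=0):
    # Monte Carlo projection over the sphere: c_i = integral of f * Y_i.
    rng = random.Random(seed)
    coeffs = [0.0] * 4
    for _ in range(n):
        z = 2.0 * rng.random() - 1.0          # uniform direction on the sphere
        phi = 2.0 * math.pi * rng.random()
        r = math.sqrt(max(0.0, 1.0 - z * z))
        x, y = r * math.cos(phi), r * math.sin(phi)
        fx = f(x, y, z)
        for i, b in enumerate(sh_basis(x, y, z)):
            coeffs[i] += fx * b
    weight = 4.0 * math.pi / n                # 1 / pdf of uniform sampling
    return [c * weight for c in coeffs]

def reconstruct(coeffs, x, y, z):
    # The cheap lookup: a dot product replaces the original function.
    return sum(c * b for c, b in zip(coeffs, sh_basis(x, y, z)))

# Clamped cosine "light from above": a classic diffuse-lighting term.
cos_lobe = lambda x, y, z: max(0.0, z)
c = project(cos_lobe)
```

With only two bands the reconstruction is a blurred version of the original function, which is exactly why SH suits smooth, low-frequency lighting and not sharp shadows.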


What I mean is not to ask what SH is mathematically, but about its implementation in a render engine, be it Vray, mr, Arnold or PRMan. I know it’s used in real-time stuff, but I’m still not able to see its role: how can one use it in a workflow to calculate GI and other lighting calculations? It was very widely used and tested on Avatar, but after that I don’t see it being implemented in any render engines, or if it is then maybe I don’t know about it. What I have seen in Weta’s SIGGRAPH presentation on SH looks very future-proof, but what I see in Vray is just like another baking engine.

It is like knowing what ray tracing is by simplifying it conceptually rather than mathematically, and then using it in a workflow without going into too much detail; just understanding how I can get what I want from this old-but-new stuff :slight_smile:


But that’s what I’m saying, you shouldn’t care that much, it’s transparent to you.

Do you care whether a rendering engine uses regular grids, octrees, B-trees, BVHs, or something else for its scene data? Unless the system is badly implemented, and you therefore need knowledge of it just to get it to work (see MRay’s horrible BSPs for years), it’s the kind of stuff you don’t want to care about unless you’re on the technical end of things, because you have no control over it. It’s how something is done, that’s it.

Does an engine use SH to condense, simplify, or defer recomputation when you move the camera? Or does it have some very smart way of aligning and accessing data proprietary to that engine? Or does it do something else again, such as using the GPU to brute-force the lot?
Do you care, as long as recomputation when you move the camera is instantaneous?

As I said, SH isn’t some magic trick, or a mystery. It’s one of many means to an end that can be used in different places, to different extents.
It should be transparent to you as a user, and therefore time’s better spent elsewhere.

Unless you have a genuine interest in the technology and the fundamentals, in which case you SHOULD read about the mathematical and CS sides of it, it’s irrelevant whether an engine uses it or not.

Like many things for a while it was hyped, and SH found its way to the lips of many who don’t really know or care for it, much like radiosity did at some point. It doesn’t deserve stardom status, it’s just one of many options. Study it from the basics if you want to understand it, or leave it alone IMO.

There’s hardly anything you can do about it even if you understand it anyway. If you feel like studying, your time is much better spent understanding some basics of the sampling models and the rules for optimizing your renders, not how a particular calculation is condensed in a way you can’t affect at all.


That’s correct! I was so interested in SH because I was looking at it as a lighting option, or a helping tool, for rendering animations with deforming surfaces. I would be very interested to know if there is any other way to get GI flicker-free; it doesn’t matter if it is SH or something else, since I don’t want to spend my time learning all the maths behind it. That’s the only reason I started this thread.


SH excels as a model at precomputation and then quick retrieval of elements that greatly shorten the computation of contribution and energy (replacing entire parts of an equation by simply looking up, based on lights and camera, the value resulting from a simpler formula mapped to a spherical space).
Deforming objects are one of those cases where, most of the time, a full recompute will become necessary, making them less than ideal. But again, it depends on what, where and how you use SH, which is just a mathematical concept, nothing more.

They also don’t explicitly address flickering, but being often used as a precomputation element to be re-used, they often implicitly bring more consistency in temporal sampling, AKA less flicker. So does sample preservation and several other things though.
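The “precompute once, reuse every frame” point is easy to demonstrate with a toy sketch in plain Python. The irradiance numbers and sample counts here are invented for illustration; the point is only that a fresh noisy estimate per frame varies (flicker), while a cached value reused across frames cannot.

```python
import random

TRUE_IRRADIANCE = 0.75  # made-up "ground truth" for the toy example

def noisy_estimate(frame, samples=16):
    # Fresh Monte Carlo estimate each frame: right on average, but the
    # error changes from frame to frame, which reads as flicker.
    rng = random.Random(frame)
    return sum(rng.uniform(0.0, 2.0 * TRUE_IRRADIANCE)
               for _ in range(samples)) / samples

# Precompute once at high quality (think: project into SH coefficients,
# or bake a cache), then reuse the same value on every frame.
cached = noisy_estimate(frame=0, samples=4096)

per_frame = [noisy_estimate(f) for f in range(1, 25)]   # 24 animated frames
reused = [cached] * 24

flicker_reused = max(reused) - min(reused)              # exactly 0.0
flicker_noisy = max(per_frame) - min(per_frame)         # nonzero spread
```

The cached version is temporally stable by construction, which is the implicit flicker benefit mentioned above; it is also exactly what goes wrong when the cached data is invalidated by deformation.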

Again, you seem to confuse a mathematical model with the consequences of some of its implementations.

If you wonder about flicker, as I mentioned in my first post, you are better off studying the basics of sampling and understanding where it comes from first and foremost :slight_smile:

It’s perfectly possible to have an SH-based model inherit flicker from somewhere else, or even produce it, and just as possible to have brute-force approaches enforce consistency to the point they won’t produce any in the final frames.


The problem with SH is that we need detailed geometry. Big “faces” on a big surface give us undetailed shadows, for example. We have to subdivide the geometry to get detailed shadows.
And this is difficult to apply on some productions.


This isn’t actually related to spherical harmonics, as such; it’s a question of where you generate and store the samples, not the samples themselves. If you calculate lighting on each vertex in the mesh, you need a lot of geometry to get detailed lighting. But you could theoretically store the SH samples at arbitrary positions on the mesh, for example via UV coordinates or barycentric coordinates on each polygon or whatever. Edit: Or even store them in 3D space and use a “volumetric” approach.
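A tiny plain-Python sketch of that decoupling, with made-up values: store lighting samples at the corners of a triangle (or at any barycentric positions you like) and interpolate between them, so the lighting detail no longer depends on the vertex count of the mesh.

```python
def bary_interp(a, b, c, u, v):
    # Value at barycentric coordinates (u, v, w), w = 1 - u - v,
    # interpolated from values a, b, c stored at three sample points.
    w = 1.0 - u - v
    return u * a + v * b + w * c

# Three stored lighting values (illustrative numbers), one per sample point:
light = bary_interp(0.2, 0.9, 0.4, u=0.5, v=0.25)
```

The same scheme works whether the stored values are scalars or whole SH coefficient vectors; you interpolate the coefficients and evaluate afterwards.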


The information thus far has been very useful re: Vray animation. Hopefully it doesn’t get sidetracked. Thanks to all contributors.


This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.