View Full Version : Is GI actually used in production environments?


mr_wowtrousers
04-09-2005, 07:22 AM
Heya, I am still new at CG stuff but love lighting/rendering. Of course, at first I got very excited with Max's light tracer, radiosity, mental ray etc., but was stunned by how slow they all were. Now, obviously larger production houses have more grunt on hand than I have in my bedroom (ooo-errr!) but is GI/radiosity basically a brute-force approach more realistically suited for stills at this point in time?

I have been learning to set up more effective light rigs with the standard lights, and of course the results are quicker and often give you more flexibility to make quick changes.

So, my question is: "Is GI used in actual, real production environments, or is it just a flashy technique at this point in time?"

I am going to focus more on using basic lights as I think that will give me a better grounding in dressing a scene, but would be interested in hearing opinions or experiences.

Cheers, big ears.

ThirdEye
04-09-2005, 09:17 AM
It depends on the production; it's been used even on large ones (I hear Shrek used a proprietary GI, but don't quote me on that), but you also often hear "hey, we couldn't even afford raytracing, never mind radiosity!"

playmesumch00ns
04-09-2005, 11:01 AM
GI is being used more and more in feature film vfx. As mentioned, Shrek 2 used GI passes rendered on low-res geometry that were then rolled into the final render.

I can't speak for other productions, but here we're using GI for lighting some environments. This is then baked into textures for final rendering. We've also got lots of specular reflections to do (a river), which we're raytracing.

Another guy here on a different project is using GI on a character with a clever technique to keep render times low.

I think it'll be a while before TDs regularly just turn GI on and let it render. Since most films are rendered with PRMan, it's taken a while for GI to become a feasible technique: until the latest release, raytracing in PRMan was pretty much unusable for production.

rendermaniac
04-09-2005, 11:49 AM
Sometimes you need to use GI, but it's best if you can limit it. Most productions tend to use ambient occlusion - usually baked into textures. You can also do that with GI if the lighting or objects aren't changing too much. Also, for refractive glass and the like you do need raytracing, as there isn't much choice - you can fake one level of refraction with maps, but more than one doesn't work.
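
For anyone who hasn't tried it, the shader side of ambient occlusion is tiny. A bare-bones sketch in RSL (using PRMan's occlusion() shadeop; the sample count and names are just illustrative):

/* Minimal ambient occlusion shader - a sketch, not production code. */
surface simpleOcclusion(float samples = 64;)
{
    normal Nf = faceforward(normalize(N), I);
    float occ = occlusion(P, Nf, samples);
    Ci = (1 - occ) * Cs;
    Oi = Os;
    Ci *= Oi;
}

Bake that out once and you can re-use the result as a texture for as long as the objects don't move much.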

Simon

Andrew W
04-09-2005, 12:40 PM
With PRMan's ability to bake out lighting data into pointclouds and from there to maps (or brickmaps), I think we'll see more GI turning up in future productions. Brickmaps for SSS are certainly being used in productions now, and I see no reason why the same technology can't be leveraged for cheapish one-bounce GI.
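
To make that concrete, the baking side is only a couple of calls in a shader. A sketch, assuming the RIB declares a DisplayChannel "color _irradiance" (file and parameter names illustrative):

/* Sketch: bake one-bounce indirect diffuse into a point cloud. */
surface bakeIndirect(string bakefile = "indirect.ptc"; float samples = 64;)
{
    normal Nn = normalize(N);
    color irr = indirectdiffuse(P, Nn, samples);
    bake3d(bakefile, "_irradiance", P, Nn, "_irradiance", irr);
    Ci = irr * Cs;
    Oi = Os;
}

Run brickmake on the resulting .ptc and the data can be looked up at render time with texture3d().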

My 2 pence.

A

PS I see rendermaniac and playmesumch00ns are working on Saturday too. Ah deadlines, how we love 'em... Busy old Soho.

mr_wowtrousers
04-09-2005, 01:06 PM
Thanks for the responses. I was just reading about occlusion passes but have not tested it out yet. Good point about baking the lighting into the textures. I assume this would be very useful for backgrounds etc.

So much to learn . . . so few hours in the day!

Andrew W
04-09-2005, 01:12 PM
Thanks for the responses. I was just reading about occlusion passes but have not tested it out yet. Good point about baking the lighting into the textures. I assume this would be very useful for backgrounds etc.

So much to learn . . . so few hours in the day!

Well, provided your geometry doesn't deform, you can bake things like ambient occlusion, which is view-independent, into a map for re-use. Subsurface scattering etc. can't be treated in this way, though.

In most instances ambient occlusion is the most useful of the "fancy new techniques", which I guess is why it is now pretty ubiquitous in production.

Andrew

mr_wowtrousers
04-09-2005, 01:38 PM
Thanks again Andrew. I checked out your page and tutorials. Certainly some interesting stuff there. It's no wonder I don't have much time to do stuff on weekends when there is so much to take on board!

lkruel
04-09-2005, 05:56 PM
The writer of PRMan, Rob Cook, came to my school last month, and we asked him whether Pixar would start using GI, and why they aren't using it now. He brought up a good point:

GI isn't really directable yet: you press render and the renderer does its thing. If you want the character's eyes to be in shadow, or you want his hands to be lit differently, you don't really have that option unless you make bounce cards, etc. He said that they're working on making it more controllable and more artist-friendly, so it might happen in the future, but the technology isn't that mature yet.

rendermaniac
04-09-2005, 10:49 PM
Hi Andrew

I am posting from home! PS I know what show you are on - I am on the "other" one ;) Anders - is there any more information about Shrek's GI stuff?

I have heard that using GI for things like Shrek makes rendering take slightly longer, but makes overall setup shorter, as you need simpler lighting rigs - the GI does most of the fill lighting for you.

Most vfx houses use prman, and it has only been recently that GI has become usable with brickmaps, as Andrew said (and I mean REALLY recently!). Shrek and Robots are different - for a start they are animation houses, not vfx, but mainly because they both have their own proprietary renderers. Blue Sky's CGI Studio seems really nice for all the reflections in Robots (even if the story sucked majorly).

I have used prman raytracing for production work (Sahara - can you guess where ;) ).

Simon

popol
04-09-2005, 11:13 PM
About Shrek: http://www.tabellion.org/et/paper/siggraph_2004_gi_for_films.pdf

avedis
04-10-2005, 02:04 AM
Hi all

Good question to ask, Matt - thanks to everyone that posted to the question. I actually wanted to follow up on one thing. Andrew, you had mentioned brickmaps - are these equivalent to cache files like ambient occlusion caches, which if I understand correctly are a form of point cloud? Since you can view cached occlusion (.icf) files with ptviewer, is that considered a visual representation of a point cloud?

Thanks

Avedis

jeremybirn
04-10-2005, 02:26 AM
The short answer is yes. Things like GI and raytracing, which used to be used mainly for people creating stills or low-res output, are now being widely used in feature film productions.

But, in terms of your skills, learning how to light in a "simulated radiosity" style (adding your own bounce lights) will remain a necessity for a long, long time, especially when lighting elements that will be comped with live-action plates and matte paintings, where there aren't any other surfaces for the renderer to bounce light off.

-jeremy

mr_wowtrousers
04-10-2005, 07:39 AM
But, in terms of your skills, learning how to light in a "simulated radiosity" style (adding your own bounce lights) will remain a necessity for a long, long time, especially when lighting elements that will be comped with live-action plates and matte paintings, where there aren't any other surfaces for the renderer to bounce light off.

-jeremy

Yes, I am in the process of discovering that. I have been learning how to render passes and layers and having control of your lights is obviously very important (something I learned from your book btw).

tweeeker
04-10-2005, 10:52 AM
Andrew, you had mentioned brickmaps - are these equivalent to cache files like ambient occlusion caches, which if I understand correctly are a form of point cloud? Since you can view cached occlusion (.icf) files with ptviewer, is that considered a visual representation of a point cloud?

Brickmaps are probably best thought of as the 3d equivalent of Pixar's 2d texture format. In general brickmaps are superior to the old-style icf cache files written by prman. Both brickmaps and pointclouds are a way of storing 3d data (any data too, not just GI or ambient occlusion), although because brickmaps are derived from pointclouds, I guess it's correct to say that brickmaps are a form of pointcloud.

Brickmaps are better because they can be compressed, are tiled (and therefore support load-on-demand) and can also be filtered. prman pointclouds have none of these qualities, making them less than great for production.

As far as viewing goes, you're right that ptviewer provides a visual representation of 3d data in the same way as an image viewer offers a representation of 2d images. You can't view brickmaps with ptviewer though; for that there's an additional program that ships with prman called (wait for it...) brickviewer.

T

rendermaniac
04-10-2005, 05:14 PM
popol - thanks for the link! The other reason people don't use GI much is that prman needs some serious work on area lights (it doesn't have them) and photons (which need a lot of work). Unless you want to grapple with using Mental Ray alongside prman, it's a lot of extra effort. Mental Ray falls down in areas where prman really shines (motion blur, depth of field, displacement...).

Simon

ngrava
04-11-2005, 07:23 AM
Yikes! I hate to be the voice of disagreement here but, well ahh... I really have to disagree. ;) First off, I have this feeling that PRMan users have a skewed perspective on GI. I'm not sure if it's an issue with the fact that it's a shader operation and not part of the render pipeline, or whether it's something else that I'm just not aware of, but it's really, really slow. I'm not positive because I don't use RenderMan anymore, but I think the issue is that the shaders are interpreted like a scripting language during the render. This doesn't normally cause an issue, but when we're talking about shooting millions of rays bouncing around the scene... Also, I'm pretty sure PRMan's GI is based on photon mapping, and in my opinion this is one of the ugliest methods out there. I'm actually quite surprised by this because everything else about PRMan is top notch. Anyway, with PRMan, I'd bet you're better off with ambient occlusion.

The second thing is, we use Vray here at work and it rocks so hard! I haven't done a project that I didn't use GI on in two years and I'm loving it. That being said, I am only working on TV commercials so it's not like I'm doing 4K renders with GI. :D I'll have to disagree with Rob Cook because in my opinion the opposite is true. To me, GI gives you so much more 'real world' control because you're not always trying to fake and cheat everything. And lighting with bounce cards and flags is the way it's usually done on sets anyway. Doing things like lighting hands and eyes separately (probably two of the most common things we used to do, oddly enough, when I worked at Vinton Studios) is something you usually have to do because the render engine doesn't work like it does in real life.

To be fair, I can understand this perspective but I feel it's only temporary. It's kind of funny because you end up with hundreds of lighting TDs out there with all this experience lighting things for this fake environment. You give them GI and suddenly they have all this light bouncing around and filling in corners and... it takes so long! "What is this?!" They can't rely on their old tricks to make the scene look good anymore. You can't light with dozens of specialty lights anymore. With GI you have to take a totally different approach to lighting. Less is more. You have to think of things as a whole, not separate things on separate layers. Everything affects everything else, and that's part of the wonder of GI. Often I see people who are not used to GI saying things like, "GI is a crutch for people who don't know how to light." I'm sure there is some truth to that, but once you learn to light in a 'real world' way, so much more is possible. I know because I came from this perspective and learned how to use GI from lots of trial and error. I felt the same back then. Now, I can't imagine going back. I'm not trying to say that my lighting is somehow superior to the guys who work at Pixar, but it's getting better. In my mind, GI is the future and we should start learning all there is to learn about it.

-=GB=-

Andrew W
04-11-2005, 08:34 AM
I would agree that PRMan's photon mapping needs more work, but it is getting better, and brickmaps are a very powerful way of storing view-independent data. Still, I think the usefulness of GI in movie effects production is going to be limited for a while, if only because of the complexity of the datasets we deal with. If you have a render which can max out 4Gb of RAM without using raytracing, and where you're already using every efficiency trick you can think of (which happens a fair bit), you don't want to turn on GI whatever renderer you use. I'm sure GI will be used more and more - Matte World Digital have been using it for years (I think they use Radiance) - but for the time being the machines "canna take any more, Captain".

A

cpan
04-11-2005, 09:27 AM
PRMan should have a new GI method - not photon mapping, but a VRay/mental ray-style multibounce method - because photon mapping has a lot of limitations and it's slower than other GI renderers.

mr_wowtrousers
04-11-2005, 12:04 PM
In my mind, GI is the future and we should start learning all there is to learn about it.

-=GB=-

I am all for GI. I am not all for sitting on my duff for 4 days while my computer struggles to do a second or two of animation :eek: I am definitely learning more about it (picked up Final Render as part of the Turbo Toolkit) but I am aware of its limitations in terms of time at the moment, hence the question. It seems that most small productions/student reels etc don't use it as much because it is so computationally and time intensive. As I am looking to do a final year project, I am trying to learn as much as I can about scene optimisation, rendering in layers, fake GI etc so I can get the look I want while keeping rendering times to a manageable level.

Andrew W
04-11-2005, 12:58 PM
so I can get the look I want while keeping rendering times to a manageable level.

is the key phrase here. No one cares tuppence what technique you use in production provided you render something that the client likes by the time your producer agreed they would have it.

A

ngrava
04-11-2005, 03:48 PM
Again, if you're using PRMan then I would agree that using GI would take way too long. However, I use GI in Vray and it's way faster than you guys are letting on. Most of the time my frames are no longer than 15-20 minutes. It's true, I'm using fast machines with dual processors on a 10-machine render farm, but I don't think many other studios would use less. The trick is to know your limitations and start with GI. In other words, lighting a scene without it and then turning it on as a way of "sweetening" would be the wrong approach. The main thing I see people do that slows a scene down is adding specialty lights. The issue is that every time you add a light it multiplies the render time by an order of magnitude. Instead, try lighting with one or two lights and using bounce cards where you would normally use small fill lights. Most of the time I don't even use bounce cards, because something in the environment throws enough bounced light on the character that the scene looks better without them. This usually helps to create more "natural" lighting and renders really fast in Vray. Another trick is to not entirely enclose your character in the scene. If the scene is going to be inside a room, don't include the walls that you can't see (much like a movie set). What this does is reduce extraneous bouncing and allow some rays to terminate early.

Here's another tip: instead of using TurboSmooth or MeshSmooth, use Vray displacement set to 'smooth' with the displacement amount set to 0.0. This will force it to use the 'Loop' subdivision-surface method to dynamically create view-dependent, smooth geometry on the fly (like Pixar sub-ds or patches) and save a ton of memory in the process. Renders like butter. ;)

By the way, the "datasets" I deal with in commercials are pretty complex as well. It's the frame size that's different here when compared to movies. I still have to render with as much geometry as I can get away with, I still have to use 4K maps, and I still have to render with high anti-aliasing.

cpnichols
04-11-2005, 03:57 PM
Everything effects each other and that's part of the wonder of GI. Often times I see people who are not used to GI saying things like, "GI is a crutch for people who don't know how to light." I'm sure there is some truth to that but once you learn to light in a 'real world' way, so much more is possible.
-=GB=-

Hey GB... Great points. Well said. I agree with pretty much everything you said, as a strong Prman user and Vray user. Talk about talking to people with blinders on.... it is a shame because they are so smart too. And until they wake up and smell the coffee, Pixar will see no reason to make major changes to their software. Maybe it is because their software is so old that even small changes involve herculean efforts (hence $$$$).

I just wanted to touch on one of the points you made with respect to learning GI lighting. It is actually a little hard to make the leap artistically. But it is far more effective. And saying that "GI is a crutch for people who don't know how to light" would be like saying photography is a crutch for those who don't know how to paint. I mean that simile on many levels. And to be honest, when trying to do photorealism, I would rather use photography than painting.

tweeeker
04-11-2005, 06:21 PM
Talk about talking to people with blinders on.... it is a shame because they are so smart too. And until they wake up and smell the coffee, Pixar will see no reason to make major changes to their software. Maybe it is because their software is so old that even small changes involve herculean efforts (hence $$$$).

Erm, I think it's pretty clear that the majority of prman users on this board are aware that its GI should/could/will be faster. VRay looks great from what I've read and seen, but as I'm sure you're aware, any non-programmable renderer is totally useless for high-end production. And a non-programmable renderer tied to 3d studio max is even more useless than that.

Also, just to clarify, prman's GI is not *based on* photon maps. Photon maps are available in addition to a method similar to (but not the same as) final gathering.

T

cpnichols
04-11-2005, 07:03 PM
Erm, I think it's pretty clear that the majority of prman users on this board are aware that its GI should/could/will be faster. VRay looks great from what I've read and seen, but as I'm sure you're aware, any non-programmable renderer is totally useless for high-end production. And a non-programmable renderer tied to 3d studio max is even more useless than that.
T

Well, just as Prman is trying to catch up in the GI game, Vray and Brazil (I'm going to lump them together for the sake of argument with respect to Prman) are busy working on their Maya/standalone ports. They are also well ahead in terms of programmable shaders etc... people have written shaders for Brazil and Vray. Vray's SDK is going strong and is being used a great deal... links available on request.

Plus I would say that 3dsmax proved very useful to companies like the Orphanage that successfully used 3dsmax and Brazil on Day, as well as on other movies that were recently done. Oh wait... let's not forget how Pixar used Brazil on The Incredibles. I suspect that they did not use Brazil inside of Maya. I would consider those high-end productions.

tweeeker
04-11-2005, 07:51 PM
Well, just as Prman is trying to catch up in the GI game, Vray and Brazil (I'm going to lump them together for the sake of argument with respect to Prman) are busy working on their Maya/standalone ports. They are also well ahead in terms of programmable shaders etc... people have written shaders for Brazil and Vray. Vray's SDK is going strong and is being used a great deal... links available on request.

Plus I would say that 3dsmax proved very useful to companies like the Orphanage that successfully used 3dsmax and Brazil on Day, as well as on other movies that were recently done.

I'm glad to hear VRay's moving along - any links you have regarding its progress (especially its SDK) would be great.

Of course you are right, 3d max in the right hands is capable of great things; my comment was a little over the top... All I'm saying is I don't see much evidence of prman users (at least on this board) having the 'blinders' on. With many prman users actively working in production environments, they're merely talking about what they know.

As Andrew said, when you have renders hitting 4gb and taking 4 hours+ without raytracing, it is often hard to believe that another piece of software could do it with full GI in a similar period of time. That's not to say it's not true of course, just that many of us will have to wait for standalone versions of these programs before we can find out.

T

rendermaniac
04-11-2005, 09:52 PM
There are a couple of cases where raytracing is more practical than traditional map based techniques. The first is refraction as I mentioned before. The other is ray traced shadows - especially when you have several thousand or more objects - because shadow maps become completely impractical. Unfortunately they also tend to pop a lot if you don't crank the samples right up.

For most of us the benefits of using Maya with prman far outweigh any shortcomings with GI. But it would be really nice to get more natural fill lighting with fewer lights. It definitely does add something to the look.

Simon

ngrava
04-12-2005, 08:23 AM
Erm, I think it's pretty clear that the majority of prman users on this board are aware that its GI should/could/will be faster. VRay looks great from what I've read and seen, but as I'm sure you're aware, any non-programmable renderer is totally useless for high-end production. And a non-programmable renderer tied to 3d studio max is even more useless than that.
Ok, I just don't understand what you mean by this. Lots of high-end productions have been done with non-programmable (as you put it) renderers. They come out looking fine as far as I can tell. I understand the power of programmable shaders, but you don't have to write shaders for everything all the time. That's just silly, not to mention wasteful. That's like saying you can't navigate a file system without a command prompt (well, maybe some of you can't ;)) And both Vray and Max are very programmable. So far I haven't needed to create any kind of custom shaders for my project, but if that were the case, you can bet I'd be able to do it. Another thing that kind of irks me is that all that programmability of RSL is useless if you don't know how to program. Unless of course you're using Slim, in which case you're basically doing what you'd be able to do in Max/Vray anyway.

And finally, Max is absolutely not a useless second-rate 3D program! There is absolutely no practical reason why you would need to use Maya over Max. In fact I can think of many reasons why I would use Max instead of Maya, most having to do with ease of use and ever-expanding options.

Also, just to clarify, prman's GI is not *based on* photon maps. Photon maps are available in addition to a method similar to (but not the same as) final gathering.

T

Thanks for clearing that up. I wasn't sure about that, but from what I've seen photon mapping seems to end up in most examples of PRMan GI. By the way, Final Gathering is a really ambiguous term and just refers to some method of collecting samples and interpolating them within a given radius. Irradiance caching in Vray is an example of this too. Is the PRMan method different from this?

jeremybirn
04-12-2005, 11:45 AM
There is absolutely no practical reason why you would need to use Maya over Max.

I hope we can take these sweeping, blanket statements with a grain of salt. I haven't found a major program yet that doesn't have some unique advantages to some groups of users. But, the issue of which renderer something is a plug-in for isn't really the point - film studios generally don't want renderers that are directly tied as a plug-in to any commercial animation package. As renderers progress, they should develop or adopt a standard scene description that can be used by anyone who wants to run it, no matter what software creates or edits the scene description.

By the way, Final Gathering is a really ambiguous term and just refers to some method of collecting samples and interpolating them within a given radius.

Generally, when someone mentions Final Gathering, I think of Mental Ray's implementation. Two years ago, I heard of one other renderer implementing something called Final Gathering that sounded similar, so it isn't an MR-only thing, but the MR-based idea of FG provides two things: raytraced hemispheric sampling for a simple, single-bounce GI solution, combined with a sampling of photons in any photon mapped GI which will smooth out the blotchy appearance that often occurs in photon maps. So, you can use FG by itself as a simple GI pass, or in conjunction with GI or caustics to smooth out and enhance the results.

-jeremy

StephanD
04-12-2005, 12:17 PM
Whoa!

With each and every post of yours, Jeremy, I learn so much. Thanks for being there :)

Andrew W
04-12-2005, 12:57 PM
film studios generally don't want renderers that are directly tied as a plug-in to any commercial animation package.

I would go slightly further and say that a studio can't be tied to a specific 3D package. We use Maya, Houdini and some XSI here at FS-CFC, and the beauty of all these packages is that you can get something exported as RIB, combine it with other RIB (either generated procedurally or from one of the other packages) and render it all together, in our case through PRMan. For large-scale production you cannot tie yourself to one package, because you need the option to feed multiple inputs together to get one output.

I believe Splutterfish are developing a standalone version of Brazil that will parse RIB. This I would like to see very much. I very much hope Vray does the same. It would be a huge bonus to the VFX industry.

I understand the power of programmable shaders but you don't have to write shaders for everything all the time. That's just silly, not to mention wasteful

I don't agree that hand-coding a shader is wasteful. By writing something yourself you know exactly what has been put in and can eliminate anything that is unnecessary. Many pre-built shaders have parameters you don't use all the time; for example, if you dial the specularity down to 0 on a shader, I presume the shader will still execute a specular() call and then multiply the result of that function by zero. Result: wasted computer cycles.
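
To illustrate with a contrived fragment (nobody's actual production shader):

/* A generic plastic-style shader pays for specular() even at Ks = 0: */
Ci = Cs * (Ka * ambient() + Kd * diffuse(Nf))
   + Ks * specular(Nf, -normalize(I), roughness);
/* With Ks = 0, specular() still loops over every light and the result
   is then multiplied away - hand-trimming the call removes that cost. */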

A friend of mine working on a project that will remain nameless cut the render times by a third simply by removing several calls in a shader that weren't in common usage on that show. The investment you make up front in coding something by hand generally pays dividends in production by saving you time on every sample of every shaded point of every frame of every render that uses that shader. On large-scale productions these savings are massive.

We don't write shaders by hand just to be obtuse, we do it because it is necessary to get the job done on time with minimum aggro.

All the best,

Andrew

cpnichols
04-12-2005, 03:37 PM
Generally, when someone mentions Final Gathering, I think of Mental Ray's implementation. Two years ago, I heard of one other renderer implementing something called Final Gathering that sounded similar, so it isn't an MR-only thing, but the MR-based idea of FG provides two things: raytraced hemispheric sampling for a simple, single-bounce GI solution, combined with a sampling of photons in any photon mapped GI which will smooth out the blotchy appearance that often occurs in photon maps. So, you can use FG by itself as a simple GI pass, or in conjunction with GI or caustics to smooth out and enhance the results.

-jeremy

Now correct me if I am wrong, but the words "Final Gather" can be used to describe a bunch of methods that all sort of accomplish the same goal. I recently wrote a review of Turtle and it also uses a method called Final Gather. From what I can tell, it collects a series of light samples where the rays originate from the camera (as opposed to the light source, as in photon mapping), shooting hemispheric samples out for every pixel of the camera (depending of course on whether you are over- or under-sampling). For efficiency reasons it may use an adaptive method (such as Monte Carlo sampling) to accelerate the process. All of these methods - Vray's irradiance map, and even Prman's irradiance cache method that we use for AO - can be called Final Gather. Even 3dsmax had a method called "regather" for refining the radiosity solution, which is essentially the same principle as I just described.

I could of course be wrong and Mental Ray has patented the name.

playmesumch00ns
04-12-2005, 04:04 PM
Monte Carlo is not an acceleration method. Monte Carlo is a numerical method of integrating a function that is difficult or impossible to integrate analytically, such as the "global" irradiance at a shading point.

In terms of rendering this means shooting a lot of rays to find out the irradiance at the end of each one then averaging the result.

It's this process which is now usually described as "Final Gathering" in raytracers.
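
In prman terms this is essentially a gather() loop. A sketch (sample count illustrative; a real shader would cache the results rather than brute-force every shading point):

/* Brute-force hemispherical irradiance - "final gathering" in a few
   lines of RSL. Sketch only. */
color irr = 0;
color hitCi = 0;
float hits = 0;
normal Nn = normalize(N);
vector hemi = vector Nn;
gather("illuminance", P, hemi, PI/2, 256, "surface:Ci", hitCi)
{
    irr += hitCi;
    hits += 1;
}
if (hits > 0)
    irr /= hits;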

AFAIK, mental ray's final gather does irradiance caching at the "sampler" end and also caches the direct illumination at the "samplee" end, thus making it rather quick.

PRMan's problem with "Monte-Carlo, Hemispherical Irradiance Sampling", as I suppose we could call it, is that while mental ray caches results at the end of rays, therefore limiting the number of calculations to be done, prman has to re-evaluate the shader at the end of every ray.

Worse still, prman's shader evaluation is incredibly slow because the shader is interpreted. It amortizes this cost during scanline rendering because it shades grids of about 300 points all at once. But during raytracing it only evaluates points 3 at a time, so the shader VM introduces a huge overhead. In my experience just adding an if() statement to a shader can add several minutes to a render when doing GI.

The simple fact is that PRMan does everything else so well, having fast GI isn't really an issue. I honestly can't see other renderers catching up to prman on its strengths (fully customizable and capable of rendering huge amounts of heavily displaced, super-motion-blurred geometry in the blink of an eye) any time soon. And by the time they do, the GI in prman will probably be useable anyway.

cpnichols
04-12-2005, 04:59 PM
Monte Carlo is not an acceleration method. Monte Carlo is a numerical method of integrating a function that is difficult or impossible to integrate analytically, such as the "global" irradiance at a shading point.

OK... good point... I should have used the term optimization rather than acceleration, which mostly refers to methods such as kd-trees and octrees.



The simple fact is that PRMan does everything else so well, having fast GI isn't really an issue. I honestly can't see other renderers catching up to prman on its strengths (fully customizable and capable of rendering huge amounts of heavily displaced, super-motion-blurred geometry in the blink of an eye) any time soon. And by the time they do, the GI in prman will probably be useable anyway.

Vray's displacement and motion blur are incredibly fast. And from what I have seen of the new Mental Ray, their motion blur has improved a great deal as well, to where it is now only slightly slower than Prman's... I'm sorry you don't see them catching up to prman. You may want to take a look at them.

MCronin
04-13-2005, 12:46 AM
Ok, I just don't understand what you mean by this. Lots of high-end productions have been done with non-programmable (as you put it) renderers. They come out looking fine as far as I can tell. I understand the power of programmable shaders, but you don't have to write shaders for everything all the time. That's just silly, not to mention wasteful. That's like saying you can't navigate a file system without a command prompt (well, maybe some of you can't ;)) And both Vray and Max are very programmable. So far I haven't needed to create any kind of custom shaders for my project, but if that were the case, you can bet I'd be able to do it. Another thing that kind of irks me is that all that programmability of RSL is useless if you don't know how to program. Unless of course you're using Slim, in which case you're basically doing what you'd be able to do in Max/Vray anyway.

Programming shaders in RSL is easier and will always be easier than using any other renderer's SDK, for the simple fact that you don't really have to deal with libraries at the same level, allocating memory, or any of the other overhead that comes with using a compiled language like C++. It's part of the reason why, despite the proliferation of Mental Ray recently, so many studios are still spending the extra money on prman. If you want a little test, sit down and write a simple diffuse shader for whatever renderer you are using. It's like two minutes, two or three lines of code, in RSL. It will take you significantly longer with a C++ compiler. Imagine you were working on a project with a compositor who needed a much more complicated shader output into separate passes. You could finagle it in any piece of software, either by sifting through the SDK or by massaging a collection of the canned shaders in your software into giving you the passes you need, but I can guarantee cramming everything you need into one shader is a lot easier to do in RSL.
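
If you want to try that test, the RSL version really is just a couple of working lines (illustrative):

/* The whole "simple diffuse" shader in RSL: */
surface simpleDiffuse(float Kd = 1;)
{
    normal Nf = faceforward(normalize(N), I);
    Ci = Cs * Kd * diffuse(Nf);
    Oi = Os;
    Ci *= Oi;
}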

And finally, Max is absolutely not a useless second-rate 3D program! There is absolutely no practical reason why you would need to use Maya over Max. In fact I can think of many reasons why I would use Max instead of Maya, most having to do with ease of use and ever-expanding options.

There are plenty of practical reasons not to use Max; right off the top of my head, not wanting to have a Windows machine anywhere in the building is a pretty good one, but that's beside the point. Max works great for you, but you are going to have to understand that what's working for you, or working for the Orphanage, just isn't going to fly for everyone, and isn't ever going to meet the needs of many artists no matter how many plugins you tack on or how robust the SDK is.

tweeeker
04-13-2005, 07:00 PM
Also, in addition to the points playmesumch00ns mentioned, another thing mental ray is much better at than prman is interpolating the irradiance samples. prman simply interpolates linearly between each sample; mental ray, on the other hand, averages samples rather than using them directly, which gives far smoother results with fewer rays. This is particularly evident in mental ray 3.4.

T

cpnichols
04-13-2005, 07:18 PM
Also, in addition to the points playmesumch00ns mentioned, another thing mental ray is much better at than prman is interpolating the irradiance samples. prman simply interpolates linearly between each sample; mental ray, on the other hand, averages samples rather than using them directly, which gives far smoother results with fewer rays. This is particularly evident in mental ray 3.4.

T

Yep... Vray does the same thing. In fact they offer several different methods and parameters when it comes to averaging samples.

playmesumch00ns
04-14-2005, 10:57 AM
This is turning into a bit of a "my renderer's better than your renderer" argument.

Going back toward the original question: GI isn't used more in film vfx because prman's not very good at it yet, so the time it takes to produce a good result with GI is much greater than the time required for a skilled lighter to light a scene photorealistically without it.

So the second question is why not use a renderer that's better at GI? Like mental ray, VRay, Brazil etc?

Aside from the reasons why PRMan's inherently better suited to film production than any other renderer (rendering a pretty GI interior at PAL res has absolutely nothing to do with rendering a high-detail, 100,000-strong battling army at 2k), it's a matter of time and money more than anything else.

Personally, I'm lobbying the powers-that-be here for us to have development copies of renderers like VRay and Arnold to try and roll them into our pipeline. However, evaluation and development will take a long time, and there are plenty of more pressing issues to deal with first.

In short, prman gets the job done, and it looks fantastic because there's a bunch of highly skilled people here to make it work. There's simply no pressure to include dynamic GI in every shot. Yet. I anticipate that by the time we absolutely can't make a film without dynamic GI on all the characters and environments, PRMan will probably be fast enough anyway.

azazel
04-14-2005, 02:02 PM
GI makes sense for set/background lighting, especially if it's pre-rendered, augmented with some matte painting etc - just have a look at the Orphanage's work on Hellboy, where they used Brazil's GI (at least that's what was said in some articles around the net).

cpnichols
04-14-2005, 04:49 PM
So the second question is why not use a renderer that's better at GI? Like mental ray, VRay, Brazil etc?

I think it is just a matter of time. In fact, it may be happening right now, but since most productions can take a year or more, you may not see the results in theaters for a few years.


Aside from the reasons why PRMan's inherently better suited to film production than any other renderer (rendering a pretty GI interior at PAL res has absolutely nothing to do with rendering a high-detail, 100,000-strong battling army at 2k), it's a matter of time and money more than anything else.

Talk about a "my renderer's better than your renderer" argument.... those renderers are perfectly capable of doing that. The reason you have not seen it may be the same reason I pointed out in my first answer.

Anyway... I just urge you guys to look into it. I really think things will be changing for all of us, and in fact I would say that it already has changed, but the results are not out there yet.

pokoy
04-14-2005, 05:56 PM
Sorry, I have to jump some posts back because of something Jeremy B. mentioned: a 3d format which stores all the scene data that is relevant for rendering - just like the RIB format, but more "sophisticated" and aiming towards future features like GI, SSS and so on.

Is anyone aware of such a file format in development? Just think of PDF for 2d files, which has become a standard in the pre-press process...

Andrew W
04-14-2005, 06:16 PM
Sorry, I have to jump some posts back because of something Jeremy B. mentioned: a 3d format which stores all the scene data that is relevant for rendering - just like the RIB format, but more "sophisticated" and aiming towards future features like GI, SSS and so on.

Is anyone aware of such a file format in development? Just think of PDF for 2d files, which has become a standard in the pre-press process...

I think RIB works just fine. There are plenty of RenderMan-compliant renderers that can do SSS and GI. There's nothing inherent in the RIB architecture which makes it unsuitable for GI or SSS; in fact it's pretty much independent of these technological niceties, and that's one of its strengths. RIB was designed to be to 3D what PostScript is to print (I think it's even defined in those terms in "Advanced RenderMan"), and inasmuch as it is pretty widely supported by a number of vendors this seems to work. In a perfect world (for me) all standalone renderers would parse RIB; it would make life much easier... But we can all dream, can't we?

Andrew

rendermaniac
04-14-2005, 09:34 PM
It seems like a lot of the better renderers are at least based on RIB - e.g. Houdini's Mantra. And aren't the shaders of one of those "new" renderers you mentioned based on shading language?

Unfortunately the key words here are "based on". They all have their specific quirks. OK, when you get to specific renderers they all have their own extensions, but you can throw a basic RIB file at them and get an image out.

Gelato's API (it's more than just a file format) looks superior to RIB. Unfortunately it's not completely production-tested, and of course no one wants to be the guinea pig.

Simon

playmesumch00ns
04-15-2005, 09:00 AM
In a perfect world (for me) all stand alone renderers would parse RIB; it would make life much easier... But we can all dream, can't we?

God, can you imagine? Only one file format to have to work with? That would be a wonderful life. Everyone all over the world, holding hands and singing in unison

Andrew W
04-15-2005, 09:10 AM
God, can you imagine? Only one file format to have to work with? That would be a wonderful life. Everyone all over the world, holding hands and singing in unison

Stop it! I'm getting all teary-eyed. <sniff sniff>

rendermaniac
04-15-2005, 01:20 PM
God, can you imagine? Only one file format to have to work with? That would be a wonderful life. Everyone all over the world, holding hands and singing in unison

...until someone wants to add their latest must-have feature to it ;)

Simon

playmesumch00ns
04-15-2005, 02:31 PM
...which would probably be along the lines of:


Attribute "user" "string my_cats_name" [ "Spot" ]


edit: RIB Syntax Error... :)

jeremybirn
04-16-2005, 02:15 AM
Sorry, I have to jump some posts back because of something Jeremy B. mentioned: a 3d format which stores all the scene data that is relevant for rendering - just like the RIB format, but more "sophisticated" and aiming towards future features like GI, SSS and so on.

Is anyone aware of such a file format in development? Just think of PDF for 2d files, which has become a standard in the pre-press process...

I agree with Andrew - RIB is great as a standard, and it works well with renderers such as PRMan that support features like GI, SSS, and so on.

I was looking back through this thread to see what had been said earlier, and I saw someone else posted something about brickmaps being ideal for an irradiance cache that might be used in SSS, GI, etc. Brickmaps are a data format that can be computed at render time like a shadow map, or baked ahead of time and stored like a texture, but they are not a scene description format; they are more of a flexible data structure that can store photon maps, voxel data, or other data represented as points in 3D space.

-jeremy

ThirdEye
04-17-2005, 07:18 PM
Talk about a "my renderer's better than your renderer" argument.... those renderers are perfectly capable of doing that. The reason you have not seen it may be the same reason I pointed out in my first answer

do you really think Vray is as fast as Prman in terms of displacement??

cpnichols
04-18-2005, 03:38 PM
do you really think Vray is as fast as Prman in terms of displacement??

Aaaaah... yeah. I would say that with straightforward small displacements, Vray is at least as fast as prman. But when Prman starts to get into displacement bound issues, Vray can outperform Prman. Also, the moment you get into a situation in Prman where you need to turn on "trace displacement"... forget Prman - Vray really shines. Things like doing displacement with GI... it is almost brainless. You start doing things that you never thought were possible in Prman.

Keep in mind that Vray is a very young program and Prman is a very mature program. Each has its ups and downs. I mean, Prman is STILL struggling to figure out how to become a true multi-threaded program... and it is in BIG trouble if it can't do it before the dual-core processors come out. And while Vray and Brazil and the rest are building shading languages and scene description file formats, Prman has a TON of TDs and shader writers trained around the world in Prman... how are they going to convince them to change? Most of them have been using that file format for over 10 years. Most of them would prefer to wait and see Prman "improve" itself, but seeing how long it is taking to add multi-threading, they may have to wait a long time, at which point other people will be trained in other programs, and Prman TDs may go the way of the COBOL programmers and the dodo birds... ok... a SLIGHT exaggeration... ;)

chainsaw
04-19-2005, 01:21 AM
prman is a great brand name and has a lot of talented users, but as render engines go it's becoming outdated. I guess slowly we'll see prman losing its position in the production pipeline.

MartinGFoster
04-19-2005, 01:43 AM
Back to the original question of this thread: we have our own proprietary in-house renderer that is a scanline/raytrace/GI hybrid, and we do mostly feature film VFX but also animated, all-CG work. We generally don't do full GI because it's just too slow. We have done multi-bounce GI (diffuse reflections), but it almost killed the studio (and our control), so we avoid it with creative workarounds whenever possible.

We raytrace when we have to but avoid it too, if possible. We do use some forms of HDRI-based lighting - sometimes raytraced for specular passes, sometimes shadow-map based for single-bounce diffuse passes. I wouldn't call that GI, but I suppose some people would. It's more of a coloured-occlusion-pass type of thing.

If anything, we are moving more towards lighting in the comp, with extensive and creative use of specialized layers/mattes - hundreds of light-specific layers adjusted in the comp.

We do use Houdini's Mantra sometimes because it's pretty flexible and open, and we have a site license. We have a bunch of PRMan licenses, but not enough to fill the farm, so we don't really use it except on rare, special occasions. I wish we did, but it's a matter of economics; also, owning the code for your in-house renderer and having a team working on improving it as shows require new features can have some distinct benefits.

jeremybirn
04-19-2005, 03:50 AM
One thing that I hear repeated almost as a truism is the idea that GI = a loss of control. Of course, I see where that impression comes from: turn on GI in MR and watch it splatter all over, running a grand simulation of everything... but if you have your own software (it's called Hues, right?) there's no reason you can't say exactly how much control you want in your implementation, what relationships the TDs set up and how they shape and control them. There's nothing intrinsically less controllable about something like an indirect diffuse bounce than about a shadow or a reflection map; it's all a question of what controls you want to give yourself.

-jeremy

cpnichols
04-19-2005, 04:27 AM
If anything, we are moving more towards lighting in the comp, with extensive and creative use of specialized layers/mattes - hundreds of light-specific layers adjusted in the comp.

Oh YEAH!!! The relationship between lighting and compositing is blurring a lot... in a good way. Lighters and compers are starting to become one. Multi-pass rendering has a huge role in this, and GI should not be discounted as part of that exact process. So basically I would say I agree with you up to the part where you say "with hundreds of lights." In fact, controlling GI in comp is exactly the way it needs to go, and it also needs to go way beyond the whole AO thing.

playmesumch00ns
04-19-2005, 09:00 AM
One thing that I hear repeated almost as a truism is the idea that GI = a loss of control. Of course, I see where that impression comes from: turn on GI in MR and watch it splatter all over, running a grand simulation of everything... but if you have your own software (it's called Hues, right?) there's no reason you can't say exactly how much control you want in your implementation, what relationships the TDs set up and how they shape and control them. There's nothing intrinsically less controllable about something like an indirect diffuse bounce than about a shadow or a reflection map; it's all a question of what controls you want to give yourself.

-jeremy

I think I'd disagree with a small part of what you're saying, Jeremy. Physical simulation (GI and specular raytracing) is inherently less controllable than "old-skool" methods.

Sure you have different types of controls, but compared to the precision of a spotlight cone, or a reflection card, physical simulation is a blunt knife.

That's not to say you can't do plenty of cool stuff with GI, you just have to learn a different way of working. Like placing invisible bounce objects in your scene to throw coloured light around, that sort of thing.

I'm curious, cpnichols, as to how you'd control GI in comp?

jeremybirn
04-19-2005, 01:28 PM
Shadows, GI, reflection maps, and raytraced reflections can all be considered types of "simulation" - but any lack of control or "blunt knife" feeling is a fault of the software. Make a list of the ways that you consider spot light cones or reflection cards to be more controllable than an indirect diffuse bounce - you probably won't list any specific kind of control that couldn't be implemented, only controls that aren't implemented in a specific program.

-jeremy

playmesumch00ns
04-19-2005, 02:38 PM
Ok, I can list controls that would be very hard to implement :)

The main thing I was thinking of was this: you place a spotlight, its cone defines exactly where the light's going to go. With GI, the light goes everywhere.

Actually, as I'm typing this I'm thinking one could just place an occluder in the way to cast a shadow where you didn't want the GI to go, then tie the occluder to the object whose bounce light you don't want in that specific area.

Mind you that doesn't sound as easy as just changing the angle of a light. Not to mention the fact that you'd have to shoot more rays to do it.

Hmm... but then again you could use another light to drive the GI placement, or use some concept of volumes...

Okay you've made me realise some cool stuff... I'm going off to code it up now :)

cpnichols
04-19-2005, 03:08 PM
That's not to say you can't do plenty of cool stuff with GI, you just have to learn a different way of working. Like placing invisible bounce objects in your scene to throw coloured light around, that sort of thing.

I'm curious, cpnichols, as to how you'd control GI in comp?

Yes, you would have to learn to light differently. The analogy I have used many times before is that you will need to learn to light more like a photographer and less like a painter. And photographic methods can get things to be photoreal faster than painting methods.

But back to your specific question: what I mean by controlling it in comp is that you can output a number of passes. For example, right out of the box in Vray you have diffuse, specular (if there is any), reflection, refraction, shadow, light intensity, indirect illumination, caustics, etc... and that is just lighting; you also have mattes through object ID, material ID, and even velocity, normal, etc... Just being able to split the lighting out like that, manipulate it and put it back together in comp gives you a lot of power. Too much color bleed? Desaturate the indirect light pass... etc... Anyway, you get the general idea.
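
As a rough sketch of the recombination (my summary - most of these elements recombine additively, though the exact pass math varies by renderer and setup):

beauty ~= diffuse + indirect illumination + specular + reflection + refraction + caustics

So a grade applied to a single element - say, desaturating the indirect illumination pass - carries straight through into the rebuilt beauty.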

playmesumch00ns
04-19-2005, 04:00 PM
Oh right, I thought you meant some way of relighting the GI in comp.

I recently wrote a system for relighting scenes in Shake using spherical harmonics. You write a bunch of data out of renderman and recombine it in Shake. Then you can relight your scene using distant light sources and hdri environment maps in real time, with full soft shadows and diffuse interreflection.

Of course, the amount of data you have to generate is quite large (at least 9 coefficient images per frame), and generating them in prman is quite slow. But it's a lot of fun to drag light sources around in Shake and see the shadows and GI in your scene (doesn't matter how complex, because it's just a bunch of pixels) update in real time.
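
For anyone curious, the recombination itself is simple in principle (my summary, not necessarily the exact pipeline): each relit pixel is

relit(x) = sum over i = 1..9 of L_i * C_i(x)

where C_i(x) are the nine per-pixel coefficient images baked out of the renderer and L_i are the spherical harmonic projection coefficients of the distant lighting environment. Moving a light only changes the nine scalars L_i, so the comp just re-weights and sums nine images, which is why it can update interactively.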

Andrew W
04-19-2005, 04:11 PM
But it's a lot of fun to drag light sources around in Shake and see the shadows and GI in your scene (doesn't matter how complex, because it's just a bunch of pixels) update in real time.

Shake doing something in real time? Shake doing an A over B in almost real time would be a start...

A

playmesumch00ns
04-19-2005, 04:15 PM
Hehe. Okay, "as close to real time as Shake allows" then. It works in real time if I do it in my own OpenGL window using fragment shaders :)

I'm going to spend some more time on the relighting thing I think: those new 3d controls in Shake 4 could be pretty cool!

rendermaniac
04-19-2005, 05:33 PM
prman is a great brand name and has a lot of talented users, but as render engines go it's becoming outdated. I guess slowly we'll see prman losing its position in the production pipeline.

User base is definitely in prman's favour. Also, one of the main reasons people have been using it is that you can buy it together with RenderMan Artist Tools (RAT) and Alfred and get a shading/lighting pipeline and render farm running pretty quickly.

MTOR is a bit flakey, but it is updated with the latest prman features fairly regularly and it's pretty easy to write new SLIM templates. This power and simplicity make the whole package very attractive. Solutions such as Mayaman also exist, but they are always trying to keep up with prman features whereas the RAT people are working with the prman developers. Also with Mayaman you still need to set up a renderfarm yourself.

That said, you can just change the shader compiler and renderer and use a different RenderMan-compliant renderer - although most people don't, as they have already bought prman licenses with RAT so they may as well use them.

Mental Ray seems to be aiming for this market - hence RenderMan for Maya going along the same route as Mental Ray. I think this has only been possible because Alias recently changed Maya to support third party renderers internally.

Anyway this is getting a little off topic ;)

Simon

rendermaniac
04-19-2005, 05:34 PM
...which would probably be along the lines of:


Attribute "user" "string my_cats_name" [ "Spot" ]


edit: RIB Syntax Error... :)

I meant like adding a new Xdefug primitive type etc, but then I realised you can probably do this with Procedural primitives anyway!

Simon

cpnichols
04-19-2005, 05:39 PM
Oh right, I thought you meant some way of relighting the GI in comp.

I recently wrote a system for relighting scenes in Shake using spherical harmonics. You write a bunch of data out of renderman and recombine it in Shake. Then you can relight your scene using distant light sources and hdri environment maps in real time, with full soft shadows and diffuse interreflection.

Of course, the amount of data you have to generate is quite large (at least 9 coefficient images per frame), and generating them in prman is quite slow. But it's a lot of fun to drag light sources around in Shake and see the shadows and GI in your scene (doesn't matter how complex, because it's just a bunch of pixels) update in real time.

Yeah... I was reading that PDI might have done something like this on Shrek. Not sure. But as you said, it takes a huge amount of data and a long time to get that data written in the first place.

MartinGFoster
04-19-2005, 05:49 PM
One thing that I hear repeated almost as a truism is the idea that GI = a loss of control. Of course, I see where that impression comes from: turn on GI in MR and watch it splatter all over, running a grand simulation of everything... but if you have your own software (it's called Hues, right?) there's no reason you can't say exactly how much control you want in your implementation, what relationships the TD's set up and how they shape and control them.
-jeremy

It was more a case of the speed of multi-bounce diffuse illumination that stops us using it - or at least makes us very careful where we use it. Combine it with motion blur and it can be a showstopper. We have come up with all manner of optimizations, but it's still inherently very expensive.

I think we could probably write all manner of controls, and perhaps we have. Control is probably less of an issue than speed. The most obvious control I can think of would be a false-colour rendering to isolate 1st, 2nd, 3rd and 4th generation bounces for use as a matte element, so you could at least colour-correct the bounces in the comp. But if it costs too much to generate your layers you get far fewer iterations with supervisors, and, boy, do they like iterations. Several a day typically, if possible.

The Rhythm and Hues renderer is called "wren" but they do have a modelling package called "And" with a compositing package called "Icy". "And" has largely been replaced by Maya for modelling. We have an animation/light setup package called "Voodoo". The general philosophy is to write their own software for the large "bulk" stages of the pipeline. Animators, lighters, compositors are the numerically large positions at the company. So the software they use is where you see more in-house software development focused.

Saturn
05-08-2005, 09:02 AM
Oh right, I thought you meant some way of relighting the GI in comp.

I recently wrote a system for relighting scenes in Shake using spherical harmonics. You write a bunch of data out of renderman and recombine it in Shake. Then you can relight your scene using distant light sources and hdri environment maps in real time, with full soft shadows and diffuse interreflection.

Of course, the amount of data you have to generate is quite large (at least 9 coefficient images per frame), and generating them in prman is quite slow. But it's a lot of fun to drag light sources around in shake and see the shadows and gi in your scene (doesn't matter how complex because it's just a bunch of pixels) update in real time.

Do you have any docs or papers related to relighting in comp using spherical harmonics?
thanks

el_diablo
05-08-2005, 12:33 PM
http://www.cs.virginia.edu/~gfx/pubs/vdrt/

I found this paper, but I think there are many more recent papers on relighting techniques, and some even use modern GPUs.

playmesumch00ns
05-09-2005, 08:18 AM
That's the sort of thing. The original stuff all used a spherical harmonics basis, but everyone's started switching to wavelets now because they give you better shadow fidelity.

Most of it's aimed at doing realtime relighting of 3d geometry, but doing it for a 2d image is quite an easy extension.

playmesumch00ns
05-09-2005, 08:20 AM
Yeah... I was reading that PDI might have done something like this on Shrek. Not sure. But as you said, it takes a huge amount of data and takes a long time to get that data written in the first place.

From what I can gather of PDI's paper, they actually held render data in what they called a "deep image buffer" or somesuch. Their lighting tool allowed an artist to light a scene, then recalculate the GI pass every now and again as needed. Basically like an IPR but with GI included. Looks like a very good workflow though.

Ministry
05-21-2005, 01:55 PM
hi,
well, baking textures can work for scenes where there aren't many moving objects. But that's not the case in most scenes, right? So how do you really handle it then?

thank you

playmesumch00ns
05-23-2005, 08:26 AM
If your scene is all dynamic, moving objects then you don't use GI!

jeremybirn
05-23-2005, 11:00 AM
If your scene is all dynamic, moving objects then you don't use GI!

Or you don't bake the GI into texture maps, anyway.

-jeremy

playmesumch00ns
05-23-2005, 01:40 PM
Jeremy, are you using indirect diffuse at all on Cars?

gga
06-08-2005, 04:42 AM
I believe Splutterfish are developing a standalone version of Brazil that will parse RIB. This I would like to see very much. I very much hope Vray does the same. It would be a huge bonus to the VFX industry.

Andrew

Hell, no. RIBs and SL, but particularly RIBs, need to die the most painful death asap. Tomorrow if possible.

RIB is a format that is ill-suited to all of the following:
- IPR. RIBs are an all or nothing proposition. You describe the whole world or nothing. Want to do IPR? No such luck. You have to develop your own api/message system, because ribs don't support it. Why am I developing my own api if I'm already using another one?
- Multi-frame rendering. I want to take advantage of geometry coherence in my scenes. How am I going to do that with WorldBegin/WorldEnd?
- Multi-camera and multi-pass rendering. Again, no such thing. You need to keep re-spitting and re-spitting the scene over and over again, even if no objects or anything changes but the shaders. As my renders become more real-time, and renderers are not tightly coupled with the animation package, you waste time and time again re-scanning and re-spitting the scene. Sometimes more than rendering.
- Oh yes, and the beauty of the rib stack... who in hell came up with that one? That person owes me several days of my life. Let's see...
if I want to verify whether an object is being lit by a certain light I have to count the Illuminate statements in a hierarchy of rib code. Huh?
What else is wrong with the stack, you ask? Well, attributes. Some attributes get passed thru the hierarchy like "matte" but others don't. Oh yes, and this can change depending on how you generate the rib... the more hierarchical it is, the more attributes get passed down. Yikes!

Okay, enough with RIBs. What's wrong with SL?
Well, for starters, shaders cannot be chained. Huh? In this day and age. So wait a second, if I want to chain shaders I have to once again forget about the ri "standard" and come up with my own invention, or cough up money and be content with the stuff mtor offers? What if I want to use xsi or 3dmax or cinema4d or silo or...
Speed and raytracing is another concern. But honestly, speed, I really don't give a damn about anymore. With each cpu being twice as fast and multiple cores on the horizon, speed is not my top priority. Ease of use is. Besides, if speed becomes an issue, I can hopefully always go down to C if I have to. Oh wait... rsl's C api is... almost non-existent.
Also, looking at the language syntax really... do I really want to be stuck with this? In the 21st century? Let's see:
- object orientation -NO-
- classes -NO-
- parameter inheritance -NO-
- iterators -NO-
- namespaces -NO-
- dynamically typed -NO-
- syntax based on... C? C!!!??? I mean REALLY! Years ago it was the best language around and made sense.
But the whole point of a shading language is to make it EASY and ACCESSIBLE. Why have people learn C? Why do I need to type semicolons and brackets everywhere? Or type color X; float Y; everywhere -- why can't the compiler guess the type automatically? Do you *honestly* believe that within this next decade, when virtual machines rule the programming world and compilers target multiple platforms, C (or C++ for that matter) will remain as popular a language as it has been?
The way you make a shading language popular is that you don't base it on C but base it on the simplest language you can think of. And when I think of that, I think of BASIC. Yes, good old plain basic.
Of course, since basic is kind of dead, you base it on its descendants. Thus, if you think rsl is the way, you *really*, *really* need to take a look at python and ruby. I find python not so good as a glue scripting language for a number of reasons, but as a syntax framework for a modern shading language it would be very, very interesting. But don't just bind python. That has already been done (see Stan Winston's mray shading language by Jim Rothrock, for example) and it is no good, as those languages are not fully thread safe. Be creative. Copy the simplicity of Ruby's C binding api, too. That way, adding new functions to the language would be easy and a pleasure instead of having to work with the limited api that rsl offers.
And if you feel like taking a big challenge... make the language dynamically typed. Just leave the shader parameters statically typed but all else... try to guess the types as you compile the shader. Then, and only then, we have the shading language for the 21st century.

gga
06-08-2005, 05:23 AM
But back to your specific questions, what I mean by controling it in comp is that you could output a number of passes. For example, right out of the box in Vray you have: diffuse, specular (if there is any), reflection, refraction, shadow, light intensity, indirect illumination, caustics, etc... and that is just lighting, you also have mattes through Object ID, material ID, and even velocity, normal, etc... But just being able to split lighting out like that to manipulate it and then put it back together in comp gives you a lot of power. Too much color bleed? Desaturate the indirect light pass... etc... Anyway, you get the general idea.

That has been done for a decade or two now. And quite frankly, imo, the practice is slowly dying and bound to die completely within a couple of years.
Why? Because it takes too damn long to do so. You need to keep your renderer and 2d package open and keep switching between both. You also waste tons of disk space on images that could be used for something else.
Comping renders makes sense when the comp lets you avoid re-rendering layers, or lets you do effects you could not do otherwise, or not as easily or as well (and no, color corrections are out, but blurs and matting elements are still in).
Modern renderers now support the not so novel idea of IPR and as such you avoid going to the comp at all. You can keep doing all of those tweaks (sans blurs) within the 3d package itself. Much faster. Then, going to the comp is just doing a+b, if you ever go to the comp at all.
Outside of PDI, which has the best and only production-ready IPR of any house, IPR has remained kind of out of the realm of film rendering (well, Tron used it extensively in the cycle race, but since then...). Mainly this was because of lack of memory and lack of ipr functionality in prman. But this will dramatically change come 8 GB of ram on a 64-bit machine and come new render engines for film work (mainly mentalray for now).
For supervisors that want to see iterations each and every day or more often (and that happens at a lot of big houses), having a pipeline that forces you to go thru a complex comp to show render results is just a big waste of time. Mind you, I'm not too keen on supervisors asking for things each and every day. I think better results come out of showing stuff every couple of days.
The only pluses for a company in having an artist comp their own renders are that: a) they can save on a compositor and b) the artist can feel he owns the work more. Point b) is a big thing for some (I like to comp my renders when possible), but point a) tends to lead to a sweat-shop atmosphere if you are required to do multiple iterations each day, or one per day. Also, with a full time compositor and with elements requiring very little tweaking out of the render stage, I would argue the company benefits more, imo, as iterations can be done faster. You also get the benefit of two pairs of eyes, which is always good. With renders requiring fewer or no tweaks, you also tend to avoid 3d people complaining that their renders look very different from what ends up in the final comp.

playmesumch00ns
06-08-2005, 08:46 AM
- Multi-frame rendering. I want to take advantage of geometry coherence in my scenes. How am I going to do that with WorldBegin/WorldEnd?


Rib archives


- Multi-camera and multi-pass rendering. Again, no such thing. You need to keep re-spitting and re-spitting the scene over and over again, even if no objects or anything changes but the shaders. As my renders become more real-time, and renderers are not tightly coupled with the animation package, you waste time and time again re-scanning and re-spitting the scene. Sometimes more than rendering.


Again, Rib archives


- Oh yes, and the beauty of the rib stack... who in hell came up with that one? That person owes me several days of my life. Let's see...
if I want to verify whether an object is being lit by a certain light I have to count the Illuminate statements in a hierarchy of rib code. Huh?
What else is wrong with the stack, you ask? Well, attributes. Some attributes get passed thru the hierarchy like "matte" but others don't. Oh yes, and this can change depending on how you generate the rib... the more hierarchical it is, the more attributes get passed down. Yikes!


The rib format is flexible enough to handle just about anything. You can have a renderer add features by simply using options and attributes. It's great.

And if you don't like it, there's nothing to stop you doing your own binding either like the CG toolkit python binding.

As for SL well sure it's old, it was designed many moons ago. But just because it's old doesn't mean it's bad.

Do you really want all those features you listed? Would they really help you? What's so painful about the way it's done now? If the shading language was going to change at all, I'd rather it be pure c++ and not interpreted. And yes I do *honestly* believe c/c++ will still rule the roost in ten years time.

tweeeker
06-09-2005, 08:16 PM
There's been some great points about rib and sl made there. Some I agree with, some I don't, none the less all are interesting.

In general my belief is that it's the relative simplicity of rib & sl that keeps it in favour at the majority of studios and shops. Mental Ray is fantastic, its architecture and design certainly more sophisticated than prman's, but I'd rather use rib and sl (almost) every time. When the going gets really tough, renderman's naivety can be problematic. However for day to day work, the simplicity for me is certainly a good thing.

Without a doubt I think there's aspects of prman that should be better developed than they are...

A proper IPR tool is badly needed. As gga said, there's more pressure than ever to turn shots around quickly. Tweaks need to take a few seconds, not hours, which is almost guaranteed if the whole thing has to be run by a compositor as well:D . Anyway, perhaps with the advent of dual core cpus and larger memory capacities this is something we'll see sooner rather than later.

Cpu speeds are increasing rapidly, but network and server speeds less so. For medium size scenes (and there's plenty of them - a wide of a single character maybe, or integrating small elements into plates) the constant rib write/shader write is incredibly inefficient. The number of times I see the same bunch of rib and shaders get written over and over drives me nuts! Procedurals need to be used more by both mtor and mayaman. It needs to be possible for prman to use maya's already memory-resident data structures wherever possible, not just for the sake of saving resources, but also to prevent the total waste of re-allocating almost duplicate data to memory, or even worse disk (in the case of ribs).

It's kind of ironic that PRMan's little brother, RenderMan For Maya, is likely to do much of this. The idea that it's rib or nothing for 'high-end users' makes no sense.

Oh yes, and of course there's rib archives, and we use them massively every day BUT:- prman can chew thru a 100mb rib in a minute or two on a modern dual cpu box. 50 dual cpu farm machines get a frame each and that's 10gb (100mb per prman process) of data to come off the server (not to mention textures etc). Not a problem, but it takes waaayyyy longer than a minute to get 10gb+ of data off most server systems, so now io is taking longer than rendering... Anyways, the frames finish, prman exits. Next 50 frames - all the same again. 10gb of data... prman starts... prman exits... and again... Just pointless. Quite often all that's changing is a f****** 4x4 camera matrix! It's taken us quite some time to come up with systems to combat these issues, but really it shouldn't be necessary.

Two words - Multi-threading (or is that one and a half?)

As for SL, I actually like it. While the shadeop dso stuff is basic, I can do what I need to do with minimal fuss, which is more than I can say for my endeavours with mental ray. That's not to say that a proper c++ api to complement RSL wouldn't go amiss. For me I'd be happy if there was an Rx equivalent of every SL shadeop for use in dso's, and all global variables (such as P) were also visible to dso's without explicitly having to pass them. I also think it needs to be possible to define custom data structures in rib that can be passed to dsos. At the moment we write way too many 'cache' files here, there and everywhere, which just causes more book-keeping work.

Low level shader chaining is a must though. For a standalone renderer, prman is way too tied to a maya/slim, maya/mayaman workflow.


hmm, just noticed this has gotten a touch off-topic.... ah well

T

gga
06-10-2005, 05:48 AM
Rib archives.

No cigar. They don't allow geometry coherence. Once again, they are an all or nothing proposition. Also, rib archives give you roughly a 25% hit in render times. No good. Rib archives need to be used as a last and only resort (crowd shots).
Another issue I forgot: ribs suffer from the need for geometry coherence for motion blur, as for motion blur you are forced to repeat the object definition multiple times. This is problematic for any geometry type that is not coherent (isosurfaces, blob meshes, etc). Yes, I know you have blob primitives and whatnot. But I don't want any constraint in the format. If tomorrow someone comes out with a new primitive type that merges blob modelling with subd modelling, as is bound to eventually happen, the renderer format needs to be easy to adjust for it. Having a flexible mesh format would easily allow for that.
Another concern with the rib format is how matrices and transforms are sent. Again, these are sandwiched somewhere in the rib stack. Not a good idea, imo.
Finally rib files, like mi files currently too, suffer from the issue that objects are defined in their entirety, which is, quite frankly, pretty silly. If I take any average shot, for example, uv coordinates change neither between shutter opening/closing nor *gasp* within the whole frame range, even if I have moving objects. Why then am I spitting uvs over and over again, or sending the connectivity of the mesh over and over and over again? This is time wasted in rendering and translating. The main reasons this has worked so far are that: a) people rendered a frame at a time per cpu, and b) rendering times often dwarf the parsing/translation time. Point a) is probably at the point where it can start changing, while for point b) we will still be waiting some more.
A good render format needs to be aware of coherence and be able to just define objects by just sending a full object definition for the first frame and then just point clouds with their corresponding velocity information for motion blur for any subsequent frame.
The interesting aspect of doing this is that it makes the format also more suitable for sending across the net and as a modeling format, too.


Again, Rib archives.

No cigar again. To allow arbitrary change of materials with rib archives and remain flexible, each and every single object in your scene has to be spit out separately as a rib archive. You are talking lots and lots of archives there. It *can* be made to work. I'd rather not have to deal with that scenario.


If the shading language was going to change at all, I'd rather it be pure c++ and not interpreted. And yes I do *honestly* believe c/c++ will still rule the roost in ten years time.
If you want C++, it is already here. You have Lightwave, vray, mental ray, cinema4d and probably others all offering C++ apis or compatible ones (of those, vray and cinema4d are the most c++ of them all, btw, while lightwave and mray are C but have C++ bindings). At least if you are talking cpu-bound. GPUs are another issue altogether, but you already have libSh if you want to go that route.
C++ is bound not to die but to slowly and surely vanish. Oh... sure. 20 years from now it will still be with us, very much like COBOL, 6508 and 68000 assembler. But the people that will actually *want* (or put another way, *need*) to code in it will be much, much, much fewer than now. The analogy I make is that albeit I can code in both 6508 and 68000 assembler, which was pretty useful in the last decade, it doesn't mean I have to or want to code in them anymore, unless I *really* have to. Knowing C++ will remain an asset in the same way as knowing assembler is, but it won't be a critical thing. That being said, for the next 5 years at least, you will keep seeing C++ code appearing everywhere, because in order to get to the point of making C++ obsolete, virtual machines and the like still need a lot of improvement.

In terms of other features... yes and maybe. How desperate am I for them? Depends.
- Shader linking (today). Depending on monolithic shaders or shader recompiling at render time is not acceptable.
- Inheritance (of at least shader parameters across multiple shaders) (today). That way shader libraries can be maintained easily, even with monolithic shaders.
- Syntax based on agile languages (maybe). Let me get back to you on this one after Siggraph.
- Dynamically typed (wishful thinking). Or perhaps I should say statically typed on compilation, similar to templates in C++? Shaders can grow pretty complex, and a library of shaders quickly becomes something no one quite understands. And I think you hardly ever do casts. Having shaders be dynamically typed would reduce needless clutter enormously. It would also avoid the need for identical functions overloaded on type, as you have in SL so that they work on floats and colors, for example.
- Iterators and namespaces (not critical, experimental). Not critical, but it could be useful to have each function written by an artist live in a namespace. Iterators would be critical to have if you place the SIMD functionality in the shader writer's hands instead of the renderer's. In some ways, SL's new grid iteration functions are akin to that. But since when rendering you are iterating thru lights, triangles, buckets, samples, cameras, etc., I think having a consistent and generic iterator syntax would be good.
- Classes (yes and no). While I don't give much of a damn about classes themselves, the idea of inheriting/replacing functions I kind of like, as a lot of shaders are just, say, phong with a different specular calculation. Having shading networks is nice, but having the ability to glob them together into a single entity is also needed. Phenomena is a good idea that has been implemented poorly so far. My issue with phenomena is that to me it does not belong in the .mi description but should be part of the shader construct/language, so that a compiler can try to optimize the stuff away pre-rendering.

playmesumch00ns
06-10-2005, 09:42 AM
No cigar. They don't allow geometry coherence. Once again, they are an all or nothing proposition. Also, rib archives give you roughly a 25% hit in render times. No good. Rib archives need to be used as a last and only resort (crowd shots).


I'm not quite sure what you mean by geometry coherence? What's more coherent than a big chunk of geometry that's exported once and only loaded when needed.

From the point of view of exploiting frame-to-frame coherence, I don't believe that's the renderer's job at all. You're essentially asking it to open two files to render one frame. Every time you render an image you've first got to open the "reference frame" to see what everything is, then open the actual frame to see where it is and how it's changed. What if you've got a huge spaceship that goes in and out of shot? You still have to load it for every frame just to see that it's not there! If you want to do it yourself it's fairly trivial to export the mesh data yourself, then load it up in a procedural and apply whatever frame-to-frame offsets you want before exposing it to prman.

If you want a new primitive type, just write a procedural that loads the data and tessellates it into polygons, subdivs or nurbs patches for prman to handle. Adding a new surface type to a renderer entails changing the scene description no matter what renderer you're talking about; I don't see how it could be any other way!?
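
For instance (a minimal RIB sketch - archive name, program name and bounds are made up):

# defer loading heavy geometry until its bound is actually hit
Procedural "DelayedReadArchive" [ "ship.rib" ] [ -10 10 -5 5 0 40 ]
# or generate geometry on demand from an external program
Procedural "RunProgram" [ "./tessellate_blobs" "frame 42" ] [ -1 1 -1 1 -1 1 ]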

PRMan has lots of problems, to be sure, but I think RIB and SL are two of the strongest things about it. SL is starting to show its age, but it does exactly what it's supposed to. The only thing I would like that's not there would be the ability to iterate over the grids properly like you can in gelato. I'd love to be able to iterate over the mesh itself, similar to mental ray's lightmap shaders, but of course, with a reyes renderer that's not going to happen without doing a texture oven sort of thing.


tweeeker: you can fix the network lag on rib archives/textures/anything else by distributing everything to your render boxes.

Using MToR is incredibly slow: its rib export is a pile of poo, basically. Geometry should be cached per-object and only re-exported if the object changes.

I think the main problem with all this geometry-coherence-exporting-blah stuff is mtor. There's nothing inherent to prman stopping you from making a better system. In fact, we have done and are in the process of doing exactly that here.

I also have good reason to believe a couple of the things you're after aren't too far away either.

I agree it'd be very nice to have an Rx call for everything in SL. I also wish I could just iterate over grids in the shadeop so I could do proper irradiance caching without having to do a sort of feedback loop.

Shader chaining is probably the only thing I think it's seriously lacking, but is export-time compilation really that much of an arse? Just because Slim's awful doesn't mean it can't be done well.

rendermaniac
06-10-2005, 05:13 PM
The shader chaining issue (which is a good one) has come up a few times on cgtalk now. So is anyone actually going to put a suggestion on the Pixar Beta forum? (I know several of you have access). Otherwise I will!

I do know someone who wrote their own renderer with a shading language in Python. I never actually used it myself, but it looked good. Most likely if you were going that route, you'd have it as a module written in C++ and use an embedded interpreter to bind it all together. I would really like to see this as I really like scripting languages. It would be really cool to extend the language just by writing a new python module, or bringing in a crazy one from the web. (not that having an embedded web server in a shader sounds like too bright an idea... but at least you could do it).

The frame to frame coherence thing is a trickier one. At the moment the best option you have is inline archives. This reduces network traffic (a lot!) at the expense of keeping more data around. I think it only keeps the high level definition around, not micropolygons, but that seems to be a good system. If there are more details then I would like to know, as inlines have saved my arse on several occasions!

There is Ian Stevenson's diff trick if you really want to get a bit of coherence. However, as this is not done in the renderer, it still ends up getting the full RIB file to parse; you just rib gen less (if you have a dedicated exporter) and send less IO.

Ironically, RenderMan for Maya is definitely the right direction. If they added Maya DAG nodes for RIB archives and run procedurals, plus RIB archiving for batch rendering (you do not want to be starting up Maya, as currently happens with batch), then they would have a really neat tool. Shader chaining would make it really good. It does feel crippled to protect RAT - I know I would want to go for RfM if it did everything RAT does (half of which RAT does not do well).

IPR is definitely an issue which it would be great to see resolved. Wasn't this problem resolved even with Wavefront Explore? (so I've heard - before my time ;) ).

Shader chaining is the only really critical point for me - the rest I can easily live with.

Simon

tweeeker
06-10-2005, 10:02 PM
I guess we all mean a slightly different thing by frame coherence here. All I was meaning was I'd like to be able to run a persistent prman process that can hold on to things like inline archives between renders. That way instead of sending 1 frame at a time to farm boxes I can send 5 or so and not beat up the network by constantly requesting the same resources. In the near future, when 8gb ram is the norm, this will be more important than it is now. Also, as raytracing becomes more common, the usefulness of the classic reyes 'throw it away when you're done' approach diminishes. If you're gonna hold onto geometry for raytracing, you may as well hold onto it between frames, no? In any case this is totally different to mental ray's incremental change feature. I think rib archives and procedurals work pretty well here, but it would be nice if it were possible to share uv data (or whatever) between primitives.


tweeeker: you can fix the network lag on rib archives/textures/anything else by distributing everything to your render boxes.


That's pretty much what we do. Almost everything gets fed to prman through a runprogram, which amongst many other things takes care of caching assets to local storage. Works ok, except that because prman typically runs as 2 separate processes (well 3 actually:) ) it becomes very tricky to do reliable file manipulation operations. Only the commanding prman process is really in a position to do that. The other option is to send EVERYTHING to the farm boxes before rendering begins, but this tends to lead to loads of stuff being sent that may never get seen or used.

I agree though, there's nothing fundamentally wrong with prman, it just needs to be a tad more flexible. The tools for the most part are what need to be smarter. I must admit though, the maya api isn't overflowing with stuff to help detect changes to particular node data. For example, on complex scenes, just parsing the dag takes a long time for mtor & mayaman. But why not have the plugin analyze the dag whenever there's idle cpu time? So when you actually hit render, it's already figured out what's new, what needs re-ribbing and what can be ignored.

rendermaniac - you're right, this should be said on the beta forum, although I'm sure the Pixar guys are aware of all this stuff. At the end of the day, it all boils down to the same old time/cost equation.

T

playmesumch00ns
06-13-2005, 09:54 AM
I mentioned the whole shader chaining thing to a pixar honcho in person. I basically said "This stuff in gelato's really cool. Any chance we're going to see the Ri Spec updated to handle this sort of thing?". The general feeling I got was that they don't see anything wrong with the standard and they're certainly not going to spend any time on that at the moment. Mind you a few years ago the official line from pixar was that they saw no need to have raytracing :)

The stuff they are going to do addresses some of the issues that people have raised here though I won't go into the specifics on a public forum.

Having said all that, the impression I got was that they're serious about pushing RenderMan development forward, so if enough people clamour for things like shader chaining, maybe we'll start to see it in like prman 15 or something. :)

gga
06-14-2005, 07:09 AM
The shader chaining issue (which is a good one) has come up a few times on cgtalk now. So is anyone actually going to put a suggestion on the Pixar Beta forum? (I know several of you have access). Otherwise I will!

I do know someone who wrote their own renderer with a shading language in Python. I never actually used it myself, but it looked good. Most likely if you were going that route, you'd have it as a module written in C++ and use an embedded interpreter to bind it all together. I would really like to see this as I really like scripting languages. It would be really cool to extend the language just by writing a new python module, or bringing in a crazy one from the web. (not that having an embedded web server in a shader sounds like too bright an idea... but at least you could do it).


Python's current interpreter is not a good fit for a render engine because the interpreter is not multi-threaded. Lua's VM, however, might be. Of course, Lua as a language is nowhere near as nice to use as ruby or python, though. I guess I could do this in a week or so for mray once my compiler arrives.

The frame to frame coherence thing is a trickier one. At the moment the best option you have is inline archives. This reduces network traffic (a lot!) at the expense of keeping more data around. I think it only keeps the high level definition around, not micropolygons, but that seems to be a good system. If there are more details then I would like to know, as inlines have saved my arse on several occasions!


Well, the whole idea is to also try to keep the microtriangles around. With 8GB of ram, most scenes will fit in memory just fine in their tessellated state. Even with 4GB you should be able to get at least one creature with some additional props in memory.
Currently with prman this is problematic due to their caching being too aggressive, while in mental ray it is problematic due to the geometric cache being not aggressive enough and also being shared with texturing (albeit a shader that reads, say, tiffs with its own cache easily solves that second problem).

Ironically, RenderMan for Maya is definitely the right direction. If they added Maya DAG nodes for RIB archives and run procedurals, plus RIB archiving for batch rendering (you do not want to be starting up Maya, as currently happens with batch), then they would have a really neat tool. Shader chaining would make it really good. It does feel crippled to protect RAT - I know I would want to go for RfM if it did everything RAT does (half of which RAT does not do well).


I agree. But Pixar would have a political nightmare on their hands with customers that use mtor, in terms of having to re-learn how to do things. Plus, by moving to the maya dag, they also run the risk that customers may decide to stick with what comes built-in with the package (ie. mental ray).
As a technology company, they'd also start to rely on alias for their gui, and alias is already in bed with mental images, which might not be the best move. Plus, if the pipeline becomes too good, you also create new competitors that may start making cg movies with an almost-as-good pipeline as pixar's.
Not a good move just yet. Better to wait until alias/mental ray kick their butt before going that way. It may still be better to make mtor really strong in all departments by caching stuff within the mtor/slim translator more aggressively... if such a thing can be done. Of course, in order to make mtor/slim strong, in my opinion, tcl would have to go the way of the dodo... which is also a headache.
Another option, of course, would have been to make an alliance with another 3d package instead. Particularly if the other 3d package allowed access to its data thru its api in much faster ways than maya does.

IPR is definitely an issue which it would be great to see resolved. Wasn't this problem resolved even with Wavefront Explore? (so I've heard - before my time ;) ).


Not really. mental ray is the first product to deliver a production-ready ipr, at least by today's standards (and I'd honestly say that only once you are using a 64-bit machine/anim program... so not quite yet).
TDI was, however, the first commercial solution with a working ipr. TDI's main issue was that it was a product that came way before its time; machines were nowhere near as speedy as they are now, and thus their IPR was really not of that much use.
TDI's IPR, while production-ready in its day, would not be fully production-ready by today's standards. It used progressive refinement of the frame but lacked support for micropolygon displacements, hair, gi and geometry on disk. TDI's renderer was also mainly a polygonal renderer (no subds, and very rough nurbs support).
If you want to see the closest thing to TDI's IPR these days, check Steve Worley's FPrime for Lightwave. It is a pretty amazing preview renderer for what it offers.
I still think progressive refinement is the way to go, compared to, say, mray's ipr -- albeit, unlike fprime, I would do progressive refinement only on secondary rays and leave primary rays to be rendered with a fully antialiased scanline cache all the time. Another big issue with most IPRs is that they tend not to support motion blur (interestingly enough, Pixar's irma is one ipr I know of that handles that very well -- at least sans raytracing. Of course, the rest of irma is... well... very shaky).

gga
06-14-2005, 07:54 AM
I guess we all mean a slightly different thing by frame coherence here. All I was meaning was I'd like to be able to run a persistent prman process that can hold on to things like inline archives between renders. That way instead of sending 1 frame at a time to farm boxes I can send 5 or so and not beat up the network by constantly requesting the same resources. In the near future, when 8gb ram is the norm, this will be more important than it is now. Also, as raytracing becomes more common, the usefulness of the classic reyes 'throw it away when you're done' approach diminishes. If you're gonna hold onto geometry for raytracing, you may as well hold onto it between frames, no? In any case this is totally different to mental ray's incremental change feature. I think rib archives and procedurals work pretty well here, but it would be nice if it were possible to share uv data (or whatever) between primitives.


yes, that's what I mean, too. But you'd be surprised: that's *exactly* what mental ray already does (assuming you do have the memory for it, it will cache all tessellations). I think bsp generation is the only big thing that is not kept (but don't quote me on that if using a large bsp). One thing I think is still wrong in the current mray design is that, if I understand correctly, the geometry cache is given the same importance and address space as the texture cache, which means mray's cache would only really shine once you have a terabyte of memory and can keep all textures in ram too.
The stuff that sucks about mray's incremental update is that you have to fully re-define the object between ipr changes or frame updates. Would be great if you could just send the new vertex positions and be done with it. Something like:

incremental object "AA"
vertex update `binarydata`
end object

I agree though, there's nothing fundamentally wrong with prman, it just needs to be a tad more flexible. The tools for the most part are what need to be smarter. I must admit though, the maya api isn't overflowing with stuff to help detect changes to particular node data. For example, on complex scenes, just parsing the dag takes a long time for mtor & mayaman. But why not have the plugin analyze the dag whenever there's idle cpu time? So when you actually hit render, it's already figured out what's new, what needs re-ribbing and what can be ignored.


The maya api is okay method-wise. It does offer pretty decent methods to find what needs updating. Just check my mrLiquid on sourceforge to see how.
If anything is wrong with maya's api, it is that its primitive types (mesh, nurbs, etc) send their data very slowly, imo. I've recently heard a theory on the most likely reason why this may be so. Anyway, it would be great to have a way to access the data as just a pointer to an array, without any C++ classes or methods in between. Then your renderer api could be made compatible with that array too, so sending the data around would be just a memcpy.
Also, no need to analyze the dag with idle time. Just analyze the branch whenever you make a connect/disconnect. You have to do that for IPR anyway...
Parsing the shader dag is also slow only in renderman, since you are creating and concatenating an sl shader on the fly (unless you pre-compiled it before). The issue of doing shader concatenation also becomes worse if your renderer tries to support both the cpu and the gpu at the same time, as the headache doubles. In those cases, you start considering that moving to a shading language without a silly pre-processor - one that could run either interpreted or compiled, or create its bytecode on the fly - could be a good idea. Once you start thinking along those lines, you start considering all sl languages designed so far as flawed, because the preprocessor is already in the way.

playmesumch00ns
06-14-2005, 09:09 AM
I agree. But Pixar would have a political nightmare on their hands with customers that use mtor, in terms of having to re-learn how to do things. Plus, by moving to the maya dag, they also run the risk that customers may decide to stick with what comes built-in with the package (ie. mental ray).
As a technology company, they'd also start to rely on alias for their gui, and alias is already in bed with mental images, which might not be the best move. Plus, if the pipeline becomes too good, you also create new competitors that may start making cg movies with an almost-as-good pipeline as pixar's.
Not a good move just yet. Better to wait until alias/mental ray kick their butt before going that way. It may still be better to make mtor really strong in all departments by caching stuff within the mtor/slim translator more aggressively... if such a thing can be done. Of course, in order to make mtor/slim strong, in my opinion, tcl would have to go the way of the dodo... which is also a headache.
Another option, of course, would have been to make an alliance with another 3d package instead. Particularly if the other 3d package allowed access to its data thru its api in much faster ways than maya does.

Or they could just make a better product and sell loads of licences before everyone gets hacked off with MToR and writes their own solution anyway? Do you honestly believe that they're not going to do that?

One thing I think is still wrong in the current mray design is that, if I understand correctly, the geometry cache is given the same importance and address space as the texture cache, which means mray's cache would only really shine once you have a terabyte of memory and can keep all textures in ram too.

How can a caching scheme shine when you've got everything in memory? If everything's sitting comfortably in ram then a cache is redundant! mental ray blows with heavy textures.

In those cases, you start considering that moving to a shading language without a silly pre-processor - one that could run either interpreted or compiled, or create its bytecode on the fly - could be a good idea.

If you're going to use an interpreted language at all it's much better to have it precompiled. I'd much rather the shader compiler tell me I've got a syntax error than have to wait for the render to start up just so it can tell me I've missed a bracket out somewhere! And what if I boot off a sequence render and leave for the night?

gga
06-14-2005, 09:11 AM
I'm not quite sure what you mean by geometry coherence? What's more coherent than a big chunk of geometry that's exported once and only loaded when needed.

From the point of view of exploiting frame-to-frame coherence, I don't believe that's the renderer's job at all.


Yep, it is.

You're essentially asking it to open two files to render one frame. Every time you render an image you've first got to open the "reference frame" to see what everything is, then open the actual frame to see where it is and how it's changed.


Err... no. You need to open a single file per frame. The first frame gets the full object description; following frames get point clouds. You just move away from the idea of rendering a frame per cpu. Of course, if the renderer is kept in memory with the anim package, you just don't open anything. You just pass memory data around.
Doing it thru procedurals is a pain in the neck and you also take the 10-25% hit of rib archives. No good.


If you want a new primitive type, just write a procedural that loads the data and tesselates it to polygons, subdivs or nurbs patches for prman to handle. Adding a new surface type to a renderer entails changing the scene description no matter what renderer you're talking about, I don't see how it could be any other way!?


I take it you never did blobs before 3.9, did you? The problem you run into is that the mesh description as presented in prman assumes the mesh is coherent from shutter open to shutter close. As blobs, for example, split and can create different topologies between shutter samples, you run into problems that are not so easily solved due to that rib limitation. You also keep running into the procedural slowdowns of prman.
Having the mesh format be based on vertex velocities, as in mental ray, instead of repeating an identical topological mesh twice means that any topology changes can be handled much more easily (particularly in something like blobs, where velocity is obtained from the field).
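In RIB terms the constraint looks like this (a toy one-triangle sketch, numbers invented):
MotionBegin [ 0 1 ]
    # both motion samples must share exactly the same topology...
    PointsPolygons [ 3 ] [ 0 1 2 ] "P" [ 0 0 0  1 0 0  0 1 0 ]
    # ...only "P" may differ, so a blob that splits mid-shutter cannot be expressed
    PointsPolygons [ 3 ] [ 0 1 2 ] "P" [ 0 0.2 0  1 0.2 0  0 1.2 0 ]
MotionEnd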
With mental ray, currently, you don't really need to depend on mental images to add a new primitive. You can, for example, pass binary data into the mi file without the renderer messing with it, and then access it in any shader (particle rendering, for example, is done that way in maya2mr). Within a geometry shader, you can then create a mesh, nurbs, etc. as you see fit, as in prman, but without the topology limitations of prman, as you deal with just motion vectors. Or if those primitives are no good, you can do the stuff completely within a volume shader and figure out your own way of doing ray intersection/scanline shading with your primitive type. That means you can add a new geometry type any time you want, relatively easily, just by coding a shader.
The volume approach within prman is also tricky, as the dso support is limited in giving you stuff like where your camera is or other attributes. And the mesh approach in prman is limited due to the topology issues. Both are done much better in mental ray, imo.
The only thing I consider prman superior is that procedurals or rib archives can be nested in a hierarchy, which you currently cannot do in mray.

Shader chaining is probably the only thing I think it's seriously lacking, but export-time compilation is that much of an arse. Just because Slim's awful doesn't mean it can't be good.

Okay, I'll make the bet that you cannot do much better than slim/mayaman without recoding prman. And that if you try to go that way, you'll end up with a big headache on your hands once you try to add gpu support :)

gga
06-14-2005, 12:11 PM
Or they could just make a better product and sell loads of licences before everyone gets hacked off with MToR and writes their own solution anyway? Do you honestly believe that they're not going to do that?


Well, beating something like mtor or maya2mr is not that easy. Mainly because you need time to write it and debug it.
I have not seen rfm, but as Simon says, I was betting rfm could eventually be just that replacement, as it is new code, assuming it added the stuff that Simon mentioned. To sell loads of licenses, personally I'd pursue an alias/rfm bundle.
May I ask why you want to write your own rib translator, then?


How can a caching scheme shine when you've got everything in memory? If everything's sitting comfotably in ram then a cache is redundant! mental ray blows with heavy textures.


Yes, I didn't word it properly. The way I understand it is that mray uses a single cache for geometry and textures. As far as mray is concerned, you are dealing just with ram. Putting more stuff in memory is beneficial when you multithread across machines, as the load gets shared. I believe the engine does not currently pay much attention to what type of data the ram holds; it just flushes whatever was used least recently in the tile. That's what I don't like about it. Personally, I'd want the geometry cache handled quite separately from textures, so that you could just lock the geometry cache, or give it most of the ram and leave, say, 30mb for textures.
How long ago did you last use mray, btw? I don't use mray's map format personally, but you do have to set things up in certain ways for mipmapping to kick in with the map format. First, you need to use map files. Other formats don't work.
map files need to be mipmapped correctly beforehand (quite frankly, this is documented pretty poorly, as you really have to use "imf_disp -r -p" to do so. Without that you are loading textures fully into memory each time). If you are using tiffs or other formats, you are not mipmapping, even if the file is mipmapped. After you do that, for the .map's mipmap to work you also need to spit the texture line in the mi file with both the local and the filter keyword so mipmapping kicks in. Without that, mray still reads the full map.
There's also elliptical filtering, which I don't recommend, which in addition to the above also requires setting up a proper shading network with a shader that supports elliptical filtering.
I also would not vouch for maya's file shader node, as I'm not sure how it reads textures in (they seem to do their own filtering, like quadratic now, which is non-standard).
If that's too much work, you can also use mipmapped tiffs thru a tiff shader. I gave away a similar shader that uses portions of Pixie's source code, which may or may not be the most solid thing. That does have the benefit of using tiffs across the board, which I like better than mray's map format, as then I can keep the texture cache separate and also switch between renderers much more easily.

If you're going to use an interpreted language at all it's much better to have it precompiled. I'd much rather the shader compiler tell me I've got a syntax error than have to wait for the render to start up just so it can tell me I've missed a bracket out somewhere!

In a language such as python, you don't have brackets. Just tabulation :)
But joking aside, the question seemed valid. And quite frankly, I don't see the issue.
Most guis have shaderballs to show you what the shader looks like. In order for the gui to show you what your shader looks like, it has to have run your shader on a ball (and thus implicitly compiled/parsed it without syntax errors). SLIM kind of works like that, albeit it forces you to do the compilation manually. Maya's api is pretty horrible at dealing with gui issues (ie. no api other than mel), so it is no surprise maya2mr does not use shader balls at all.
That being said, if you are not a fan of shader balls and say you turn them off... there isn't much of an issue either.
Properly written, shaders would get compiled as soon as the bounding box of the object is hit, and the compilation would obviously happen before a single pixel of the current tile is done or, worst case scenario, as soon as they are needed. Thus, a single crop around the object would give you your fast syntax check. So... I assume you are concerned about the case where the object with the shader is outside the view and part of a reflection, or is seen thru many other transparent objects, making the render slow?
But even that is not a big issue. One thing about what I proposed is the fact that you can render multiple cameras, as in mray. Thus, for those bad cases, you can just set a one-pixel render camera facing the object you are testing to render first. When you can render multiple cameras with a single scene description, hitting "render" can mean running two or more renders, not just one as you are used to in renderman.

playmesumch00ns
06-14-2005, 12:16 PM
I take it you never did blobs before 3.9, did you? The problem you run into is that the mesh description as presented in prman assumes the mesh is coherent from shutter open to shutter close. As blobs, for example, split and can create different topologies between shutter samples, you run into problems that are not so easily solved due to that rib limitation. You also keep running into the procedural slowdowns of prman.
Having the mesh format be based on vertex velocities, as in mental ray, instead of repeating an identical topological mesh twice means that any topology changes can be handled much more easily (particularly in something like blobs, where velocity is obtained from the field).

What about if you just add the vertex velocity vectors onto the current vertices?

gga
06-16-2005, 06:46 AM
What about if you just add the vertex velocity vectors onto the current vertices?

And create another mesh, you mean? Yes, that's what you would do. But that's the thing. If the mesh topology changes during the shutter, you end up with a surface that is not quite correct. You are basically forced to treat mesh topology the same way you treat surface shading: evaluating it only at, say, shutter open.
Or you could evaluate and spit the mesh twice and then shade its opacity based on the shutter time of the ray (making sure the renderer does not shade just at shutter open).
With velocity vectors you still have the same issue, but implementing that second option, which I call shutter objects, is a tad easier and less memory-heavy.
With volumetric rendering you can avoid the issue altogether, as you can just evaluate the thing on a per-sample basis.

Anyway, going back on topic, I think the main thing that will make GI much more used in production will be improving IPR to the point where it is production ready.

playmesumch00ns
06-16-2005, 09:14 AM
Anyway, going back on topic, I think the main thing that will make GI much more used in production will be improving IPR to the point where it is production ready.

I agree! I think we're going to see a lot more focus on interactive lighting tools over the next couple of years.

floze
06-16-2005, 10:19 AM
Have these links been posted yet?

http://www.cs.virginia.edu/~rw2p/s2005/

http://graphics.stanford.edu/papers/allfreqmat/

Somehow I feel there's a rift opening between 'traditional' GI and GI obtained by hardware techniques like the method of spherical harmonics. Some of you surely remember those papers from waay back in 2002; I mean, compare them to the quality obtained today:

http://www.cs.wpi.edu/~emmanuel/ISPs/cliff_isp/sh_shading.htm

http://research.microsoft.com/~ppsloan/shbrdf_final17.pdf

I wonder why everyone claims it's not production-ready, if you look at the allfreqmat stanford paper above. Maybe someone should finally just write a proper application/implementation so we can go nuts with it..?

playmesumch00ns
06-16-2005, 11:03 AM
The relighting methods (spherical harmonics or wavelets) are not methods of obtaining GI. They are methods of compressing the light transfer function so that you can dynamically relight a scene with soft shadows and diffuse inter-reflection in real time.

You still have to compute the GI in the first place, and this phase is generally quite slow because you've got to do extra work to compress the function down into the coefficients of your chosen basis.
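
To give a flavour of the baking half, here's a minimal RSL sketch (shader and AOV names invented; a real transfer bake would fold occlusion/GI into the coefficients rather than output just the raw basis functions):

surface bake_sh9(
    output varying float sh0 = 0; output varying float sh1 = 0;
    output varying float sh2 = 0; output varying float sh3 = 0;
    output varying float sh4 = 0; output varying float sh5 = 0;
    output varying float sh6 = 0; output varying float sh7 = 0;
    output varying float sh8 = 0;)
{
    /* first nine real spherical harmonic basis functions, evaluated at N */
    normal Nn = normalize(N);
    float x = xcomp(Nn);
    float y = ycomp(Nn);
    float z = zcomp(Nn);
    sh0 = 0.282095;
    sh1 = 0.488603 * y;
    sh2 = 0.488603 * z;
    sh3 = 0.488603 * x;
    sh4 = 1.092548 * x * y;
    sh5 = 1.092548 * y * z;
    sh6 = 0.315392 * (3 * z * z - 1);
    sh7 = 1.092548 * x * z;
    sh8 = 0.546274 * (x * x - y * y);
    Ci = Cs; /* beauty output is irrelevant for the bake */
    Oi = Os;
}

The comp side is then just a per-pixel dot product, out = sum(L[i] * sh[i]), with the nine L[i] coming from projecting the (movable) lights or hdri environment into the same basis.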

Plus the storage requirements are huge. For a 2k image using 5 levels of spherical harmonics (assuming you're doing the relighting per-pixel) you need over 318MB of data per frame. The processing times and storage requirements for the wavelets method are even more obscene.
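(5 levels means 25 coefficients, so presumably something like 2048 x 1556 pixels x 25 single-channel float images x 4 bytes = ~318 million bytes per frame.)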

I've already implemented a relighting tool using spherical harmonics (I think I mentioned that in this thread) and I'll be spending some time in the next couple of months developing it further. This stuff's going to be a useful tool in the future, but I don't ever see it becoming mainstream, just because of the huge overhead required.

One day of course we'll all have very fast 8-core cpus and dozens of GB of ram. But when that day comes around, we'll be able to do the GI in something approaching real time anyway!

floze
06-18-2005, 12:27 AM
http://graphics.stanford.edu/papers/allfreqmat/glossy_allfreq.pdf

The Haar wavelet implementation they explored looks more interesting than obscene to me; the method is capable of view and light variation at high frequencies. Though the frame updates take longer than with spherical harmonics, the overall compression and memory performance seems to be way better.

dmaas
06-19-2005, 05:16 PM
Regarding additions to the shading language - imho RenderMan SL is properly treated as a low-level language on top of which you can put shader builders (like Slim). I agree with the Pixar guys that it's not a good idea to cram more features into the shader compiler, when they could easily be implemented "on top of" the existing language.

That said - and this is a little hypocritical - I would like to have C-like structs in SL, so that I could package up a bunch of parameters I'm going to send to an SL function. Listing them out makes it a pain to add new parameters, since they have to be written in at least three places (the main shader parameter block, the call to the function, and the function itself).
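
Something like this toy sketch (names invented - the same parameters show up three times):

/* 1: the function's own parameter list */
color my_spec(normal Nn; vector V; float Ks; float roughness; color tint)
{
    return tint * Ks * specular(Nn, V, roughness);
}

surface metal(
    /* 2: repeated in the main shader parameter block */
    float Ks = 0.5;
    float roughness = 0.1;
    color tint = 1;)
{
    normal Nn = normalize(N);
    vector V = -normalize(I);
    /* 3: ...and repeated again at the call site */
    Ci = Cs * diffuse(Nn) + my_spec(Nn, V, Ks, roughness, tint);
    Oi = Os;
}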

Also, I don't like how AOVs must be listed in the shader's main parameter block. They should just act as if the renderer declared them invisibly. (and what is up with RiDisplayChannel and point clouds? Why the heck do I have to tell DisplayChannel what variables I'm going to bake?)

Regarding incremental RIB - this is really interesting. Since at some point the renderer is going to have to construct a "scene graph", it would make sense to offer a way to manipulate it directly. Perhaps this should start out purely as a C++ API, since it would be less useful if you had to do all the work of serializing the scene update to RIB. Any program that wants to use the incremental API would probably prefer a direct C++ interface anyway.

Slowness of geometry parsing hasn't been a big problem for me, except when I'm forced to use MTOR :)... if your only experience is with Maya then you are probably used to slow geometry emission. I use a custom system that reads Lightwave objects at render time via a RunProgram procedural, and it's not a huge bottleneck (even with motion blur).
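For reference, the RIB side of a RunProgram procedural is a one-liner; something like this (helper name and object path invented here, and the bound is the object's bounding box):

Procedural "RunProgram" ["readlwo" "objects/ship.lwo"] [-1 1 -1 1 -1 1]

When the bound becomes visible the renderer pipes the argument string to the helper's stdin, and the helper writes RIB for the object back to stdout.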

A lot of the time it seems like the geometry bottleneck is just sprintf("%f") (or presumably atof() on the renderer side). Using binary RIB helps a lot.

Incidentally, I was experimenting the other day and found that you don't have to emit secondary primitive data like ST coordinates at each motion step; PRMan lets you omit the data on calls after the first. (whether this is designed behavior or just a bug, I don't know).
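In RIB terms, something like this seems to be accepted (a single quad translated over the shutter, with "st" supplied only at the first motion sample):

MotionBegin [0 1]
PointsPolygons [4] [0 1 2 3] "P" [0 0 0 1 0 0 1 1 0 0 1 0] "st" [0 0 1 0 1 1 0 1]
PointsPolygons [4] [0 1 2 3] "P" [0 0 1 1 0 1 1 1 1 0 1 1]
MotionEnd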

tweeeker
06-19-2005, 10:32 PM
Regarding additions to the shading language - imho RenderMan SL is properly treated as a low-level language on top of which you can put shader builders (like Slim). I agree with the Pixar guys that it's not a good idea to cram more features into the shader compiler, when they could easily be implemented "on top of" the existing language.


To me, having something as fundamental as networked shaders implemented 'on top of' (i.e. not designed into) the shading language is a really bad thing, and it holds back the development of RenderMan as a widespread standard.

The first problem is that it's not possible to share shaders without giving away your source. Sure, you can hand out a compiled shader, but in all honesty, 99.9% of the time it's unusable because it's not going to be quite right for anyone. Look how the mental ray shader community is booming. This is largely because shader writers can wrap up their hard work into a DLL knowing that their source is safe. Users are happy because they know they can connect the output of said shader into ANY existing mental ray shader. That's simply way more elegant than RenderMan. Of course you could write a DSO shadeop and distribute that, which has been done before, but then you lose all of SL's benefits, like all its handy functions, plus you have to deal with the general awkwardness of the DSO implementation.

Say I have a cool subsurface scattering routine that I'm happy for people to use, but I don't want to distribute the source. What base model should I use in my compiled shader: Lambert? Blinn? Oren-Nayar? Does the user need textures for color? Diffuse? What about AOVs? Specular AOVs, shadow AOVs, matte AOVs; am I going to have to define all those too? All I wanted was to share my subsurface shader, but without source it's no use to anyone. Oh well, best forget it. :shrug:

The other problem, on a less charitable note. :) At work we have dozens and dozens of RenderMan-ready assets that are already ribbed, along with compiled shaders. In total we're talking probably tens of thousands of compiled shaders. The compositor decides he/she needs another AOV of whatever variable, or the art director decides the blinn shader needs tweaking just so... Total nightmare. Recompiling shaders on that scale is a real issue. And what about stuff that's already using those shaders?

mental ray, on the other hand, is much more straightforward. Update your DLL (yes, one DLL!), apply the right version info, and away you go. Don't get me wrong, I love SL as a language, but compared to mental ray, RenderMan's approach simply isn't sufficient any more. Especially so for an expensive production renderer.

I really don't see what's wrong with something along the lines of:

Shader "checker"
"uniform float version" [1.0]
"uniform color check1" [0 0 0]
"uniform color check2" [1 1 1]
checkerpattern

Surface "standardblinn"
"uniform float version" [1.0]
"uniform color basecolor" checkpattern.outcolor
blinnshader

where both 'checker' and 'standardblinn' are defined in some kind of compiled library.

That said - and this is a little hypocritical - I would like to have C-like structs in SL, so that I could package up a bunch of parameters I'm going to send to an SL function. Listing them out makes it a pain to add new parameters since they have to be written in at least three places (the main shader parameter block, the call to the function, and the function itself).


Totally agree.

Also, I don't like how AOVs must be listed in the shader's main parameter block. They should just act as if the renderer declared them invisibly.

Not sure how that would work. I think it's necessary to qualify AOVs with 'output' so that the results of those variables are stored with the micropolygon along with Ci & Oi. Then the hider can composite the AOV. If the AOV was never known to the shader, how would the renderer know which variables to hold onto once the shader has finished executing?

I use a custom system that reads Lightwave objects at render time via a RunProgram procedural, and it's not a huge bottleneck.

That sounds cool. I think the issue for me is that if Maya's running and all this geometry data is already resident in memory, what's the point in exporting it all to a RIB file (probably on a server) just to re-read it all back into memory again? Even on small to medium scenes with 300-500MB of data, that's an awful lot of wasted file/network IO. I remember way back when the idea was that prman would be built into modelling environments; not sure why that didn't take off. One thing is certain though: if prman is going to compete with the likes of mental ray and V-Ray (workflow-wise) in the years to come, IPR and efficient scene transfer are needed.

T

gga
06-20-2005, 07:44 AM
it's not possible to share shaders without giving away your source.


Well, not giving away source also creates complications. For example, 3D vendors (and particularly Alias) that use mray are usually guilty of not documenting the message-passing data that they use in their own custom shaders, as if it were some kind of trade secret or super duper code.
This creates problems in sharing shaders between, say, XSI and Maya because, for example, their light diffuse/specular message passing is handled differently. And even within just Maya you are in a blind alley when it comes to handling Maya's built-in render passes, particle data in your own shaders, and light diffuse/specular (well, for the last one Alias added an API, which is painfully bad and slow).
Without the source code, you are usually better off replacing most vendors' code from scratch.


mental ray, on the other hand, is much more straightforward. Update your DLL (yes, one DLL!), apply the right version info, and away you go.

That method works well as long as there's a single person doing so. But it can be madness once you have multiple people trying to hack the code, unless the code is under revision control. And it breaks once a user wants a specific version of the shader for a particular shot or similar.
It is still much better to have the shaders compiled individually. Again, maya2mr also makes this painful, as it uses a single file (maya.rayrc) to load all shaders instead of emitting the proper link lines into the .mi file.


I really don't see what's wrong with something along the lines of:

Shader "checker"
"uniform float version" [1.0]
"uniform color check1" [0 0 0]
"uniform color check2" [1 1 1]
checkerpattern

Surface "standardblinn"
"uniform float version" [1.0]
"uniform color basecolor" checkpattern.outcolor
blinnshader

where both 'checker' and 'standardblinn' are defined in some kind of compiled library.


The problem is that basecolor in your example is no longer uniform once it is connected. Thus, all your shaders would need varying parameters. Not good. To do that, your render core really has to throw away the idea of uniform/varying being specified by the user, and handle it internally by itself.


Not sure how that would work. I think it's necessary to qualify AOVs with 'output' so that the results of those variables are stored with the micropolygon along with Ci & Oi. Then the hider can composite the AOV. If the AOV was never known to the shader, how would the renderer know which variables to hold onto once the shader has finished executing?

Well, the RIB already knows which variables are being emitted, so there's no need for that info in the shaders. It's useless there.

That sounds cool. I think the issue for me is that if Maya's running and all this geometry data is already resident in memory, what's the point in exporting it all to a RIB file (probably on a server) just to re-read it all back into memory again?

You are probably not going to get away from that completely. Even if your 3D package comes with its own renderer, for efficiency reasons the data is usually stored differently for animation than for rendering. You can obviously get away from the idea of saving files, but not from the need to transfer data from the animation package to the renderer.

playmesumch00ns
06-20-2005, 08:10 AM
http://graphics.stanford.edu/papers/allfreqmat/glossy_allfreq.pdf

The Haar wavelet implementation they explored looks more interesting than obscene to me; the method is capable of view and light variation at high frequencies. Though frame updates take longer than with spherical harmonics, the overall compression and memory performance seems to be much better.


For the quality of the image you get, yes the memory performance is better than for spherical harmonics. But it's far too heavy to be of practical use.

rendermaniac
06-20-2005, 04:20 PM
That method works well as long as there's a single person doing so. But it can be madness once you have multiple people trying to hack the code, unless the code is under revision control. And it breaks once a user wants a specific version of the shader for a particular shot or similar.
It is still much better to have the shaders compiled individually. Again, maya2mr also makes this painful, as it uses a single file (maya.rayrc) to load all shaders instead of emitting the proper link lines into the .mi file.


This is a limitation of the maya2mr setup though, isn't it? You do have the option of compiling separately, and you have the option of including source if you want to (the dirtmap shader is a good example of both).


The problem is that basecolor in your example is no longer uniform once it is connected. Thus, all your shaders would need varying parameters. Not good. To do that, your render core really has to throw away the idea of uniform/varying being specified by the user, and handle it internally by itself.


I have absolutely no problem losing uniform/varying. Most of the scenes I've been seeing lately take advantage of shared grids (which seriously still need more work), where everything needs to be varying anyway. Uniform doesn't offer that much of an advantage to me, and it's a real pain when you can't connect it in Slim. Plus I'd have thought a good compiler should be able to do this sort of optimisation, shouldn't it?

Strings could still be tricky, as REYES relies on them being uniform. And it would be useful sometimes to be able to connect up string attributes.

Versioning is something I don't think should be supported by the language; it belongs outside it. This is something the translator should handle, e.g. by using different search paths.

Simon

gga
06-20-2005, 07:14 PM
This is a limitation of the maya2mr setup though, isn't it? You do have the option of compiling separately, and you have the option of including source if you want to (the dirtmap shader is a good example of both).


Yes, maya2mr. You can obviously always give source code away. Problem is usually not with users that write a single shader but with shaders that come from the vendors themselves.
The issue with maya2mr is that it relies on a custom startup file called maya.rayrc to find all shaders used in the scene when rendering, and it will load all of them even if they are not used (for the GUI, it relies on an environment variable to find them, which is also different from the standard standalone mental ray variables). All of that is silly.
The proper way to write a converter is to have it always rely on the same environment variable, both for the GUI and for rendering, and to load only the shaders that are actually used in the scene. The way maya2mr works is just laziness and bad coding on Alias' part.

tweeeker
06-20-2005, 07:45 PM
Well, not giving away source also creates complications. For example, 3D vendors (and particularly Alias) that use mray are usually guilty of not documenting the message-passing data that they use in their own custom shaders, as if it were some kind of trade secret or super duper code.

I agree, but for the most part that's Alias being awkward. It's still advantageous to have the option of distributing compiled yet useful shaders. How many times have we heard 'I have a RenderMan shader that I'd like to connect in SLIM...'?


That method works well as long as there's a single person doing so. But it can be madness once you have multiple people trying to hack the code, unless the code is under revision control.


OK, I didn't mean it's a good idea to have *everything* in a single library. But the general gist with RenderMan networks is that stuff gets compiled on the fly, with duplicate shaders here, there and everywhere, which makes it harder to maintain.


The problem is that basecolor in your example is no longer uniform once it is connected. Thus, all your shaders would need varying parameters. Not good. To do that, your render core really has to throw away the idea of uniform/varying being specified by the user, and handle it internally by itself.


Sorry about that, it was a bit late to be reinventing SL. :) Those parameters should have been varying for the most part, although I guess it would be fine to push a uniform output into a varying input. I honestly don't find the uniform/varying qualifiers too big a deal. And like Simon said, for grid merging, more things wind up varying these days anyway.


Well, the RIB already knows which variables are being emitted, so there's no need for that info in the shaders. It's useless there.

Hmm, fair enough. So why do we need outputs? (..perhaps for future shader networking :D )


You are probably not going to get away from that completely. Even if your 3D package comes with its own renderer, for efficiency reasons the data is usually stored differently for animation than for rendering.

You wouldn't want to get away from writing RIBs completely. For farm machines and the like it's essential; apart from anything else, it would be too expensive getting Maya on every farm box. But for tweaking and re-rendering, where time is a bigger factor, the current workflow is a bit crap.

floze
06-20-2005, 11:06 PM
The issue with maya2mr is that it relies on a custom startup file called maya.rayrc to find all shaders used in the scene when rendering, and it will load all of them even if they are not used (for the GUI, it relies on an environment variable to find them, which is also different from the standard standalone mental ray variables). All of that is silly.
The proper way to write a converter is to have it always rely on the same environment variable, both for the GUI and for rendering, and to load only the shaders that are actually used in the scene. The way maya2mr works is just laziness and bad coding on Alias' part.
What about the artsy-fartsy shader manager? To be honest I (luckily) was never forced to use it, but couldn't it be a way to load/unload shaders scene-wide as they're actually needed?

dmaas
06-20-2005, 11:57 PM
The way you solve the uniform vs. varying issue is to make everything varying, and then have the renderer "optimize" the shader code on each grid, propagating uniform values and constants where possible. This has the additional benefit that any parts of the shader that don't affect the final color get optimized away. (If you're like me, you have a lot of large multi-part shaders where only a few of the parts actually run on each primitive, so this could amount to a significant render-time saving, assuming the overhead of performing the optimization could be amortized via caching, or disabled for trivial shaders.)
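To make that concrete, here's a trimmed-down sketch of the kind of shader I mean (names and texture paths invented). If dirt_opacity happens to be zero everywhere on a grid, a specializing renderer could drop the whole conditional block, texture lookup and all:

surface multilayer(
    uniform string basemap = "base.tx";
    uniform string dirtmap = "dirt.tx";
    varying float dirt_opacity = 0)
{
    color Csurf = color texture(basemap);
    /* a specializing renderer could prove dirt_opacity is 0
       across this grid and dead-code the whole block */
    if (dirt_opacity > 0) {
        color Cdirt = color texture(dirtmap);
        Csurf = mix(Csurf, Cdirt, dirt_opacity);
    }
    normal Nf = faceforward(normalize(N), I);
    Oi = Os;
    Ci = Oi * Cs * Csurf * diffuse(Nf);
}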

BillSpradlin
06-21-2005, 04:35 AM
Great thread, tons of nice information and it hasn't gotten ugly, something of a rarity on these boards lately.

tweeeker
06-21-2005, 08:23 PM
Versioning is something I don't think should be supported by the language; it belongs outside it. This is something the translator should handle, e.g. by using different search paths.

Maybe. Although if you take a Maya plugin as an example: a Maya plugin always has a version, and a scene that uses the plugin always records which version of the plugin it needs. Why is this any different from RIBs (and therefore archives) being more aware of which version of a shader they need?

The real problem I have with the RenderMan text-preprocessing approach is that because it's just a bunch of cleverly constructed #defines, it's not really possible to update a sub-portion of the code and recompile outside of the shader builder. All the #defines must be declared correctly in the actual .sl file so that unique variable names are created properly for future defines (hope that makes sense). So, more often than not, the only way to rebuild the shader is from the source network in Slim or mayaman, which on a large scale is time-consuming, if not impossible. If all I wanted to do was add a new AOV and modify whatever function sets its value, that's an awful lot of work, certainly disproportionate to the task. The way I understand it, mental ray's 'properly designed' networking approach doesn't suffer the same limitation.

T

rendermaniac
06-22-2005, 09:33 PM
Maybe. Although if you take a Maya plugin as an example: a Maya plugin always has a version, and a scene that uses the plugin always records which version of the plugin it needs. Why is this any different from RIBs (and therefore archives) being more aware of which version of a shader they need?


Good point - I didn't think about it that way. Of course, if versioning is part of the language then this would force all translators to use it. Not that that's necessarily a bad thing.


The real problem I have with the RenderMan text-preprocessing approach is that because it's just a bunch of cleverly constructed #defines, it's not really possible to update a sub-portion of the code and recompile outside of the shader builder. All the #defines must be declared correctly in the actual .sl file so that unique variable names are created properly for future defines (hope that makes sense). So, more often than not, the only way to rebuild the shader is from the source network in Slim or mayaman, which on a large scale is time-consuming, if not impossible. If all I wanted to do was add a new AOV and modify whatever function sets its value, that's an awful lot of work, certainly disproportionate to the task. The way I understand it, mental ray's 'properly designed' networking approach doesn't suffer the same limitation.

Absolutely. I personally don't like using the preprocessor if I can help it. There just isn't enough type checking or run-time control.

I'm not sure I'd go for structs - I asked about this ages ago on c.g.r.r myself. But you would really need pointers to get it to work. RSL already has output variables, which let you pass by reference, and that's good enough for me.
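For example, a trivial sketch (function and map names invented); the 'output' qualifier is what gives you the pass-by-reference behaviour:

void add_layer(string mapname; float opacity; output color accum)
{
    /* 'output' lets the function write through to the
       caller's variable - effectively pass by reference */
    accum = mix(accum, color texture(mapname), opacity);
}

/* in the shader body: add_layer("dirt.tx", 0.5, Csurf); */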

It would be really neat to have different shader language bindings, like we do for RIB. Then you could get as complicated as you want. And it would give you Python bindings etc. Is this possible with mental ray?

Simon

tweeeker
06-22-2005, 10:36 PM
Absolutely. I personally don't like using the preprocessor if I can help it. There just isn't enough type checking or run-time control.


Exactly.


I'm not sure I'd go for structs - I asked about this ages ago on c.g.r.r myself. But you would really need pointers to get it to work. RSL already has output variables, which let you pass by reference, and that's good enough for me.


It's a tricky one, I guess. I agree that introducing a pointer-like concept to RSL would be a little out of sorts. Perhaps if SL structs were always pass-by-value, that might do away with the need for pointers. They wouldn't be quite so powerful, but as Dan said, often we're just looking to group a bunch of parameters to help make more maintainable code. The bottom line is, if you want to pass very complex data into a shader (arrays of fluid grid data, for example), it's not really possible to do it through RIB. The only option is to dump out secondary files and have a shadeop read them back. That's not the end of the world, it just overcomplicates things quite a bit.


It would be really neat to have different shader language bindings, like we do for RIB. Then you could get as complicated as you want. And it would give you Python bindings etc. Is this possible with mental ray?

Haven't a clue about that... I'm sure gga has the answer though. :)

T

gga
06-23-2005, 12:31 AM
I'm not sure I'd go for structs - I asked about this ages ago on c.g.r.r myself. But you would really need pointers to get it to work.

Huh? Why would you need pointers? You can have structs and classes without any pointers. Just look at any modern scripting language.


It would be really neat to have different shader language bindings [snip]. Is this possible with mental ray?

It is possible, and some people have already done it for some companies (exposing most of the C API to a scripting language is relatively easy; the only functions that are hard to expose are those that return void*, which are usually the most powerful ones in mray). The main problem is that doing just that is not too efficient.
The main reason prman can get away with using a scripting language for shading is that the language is evaluated in a SIMD fashion, so the cost of calling each function is paid not per sample but per every 256 points being shaded or so. As you start raytracing, you need to evaluate shaders in a similar way, using ray differentials to consider a group of polygons rather than just the single intersection hit, so that the cost of using a shading language does not become too big. The problem, of course, is that with any form of global illumination you also end up with tons of incoherent rays, which pretty much throws any SIMD shading approach out the window, performance-wise, compared to a standard C++ API.
That being said, you could easily adapt any scripting language as the shading language itself. What you would have to do is give the scripting language its own color, float, integer, boolean, etc. classes that represent not a single element but an actual array of elements, corresponding to all the elements being shaded. The renderer could then fill those classes with as many elements as possible to amortize the cost of having to call a function for each operation. With the current version of mental ray, you cannot really integrate a scripting language in such a way, afaik.

playmesumch00ns
06-23-2005, 09:18 AM
I'm still having trouble seeing exactly what structs would add to the shading language. As much as anything else, the dot operator's already taken! :)

Binding a scripting language for shading just wouldn't work unless it was SIMD-aware. Even if you operate on arrays instead of single variables, that only handles the uniform case elegantly. And then what do you do about conditionals? You'd essentially be doing what the shader VM does already (simulating a parallel processor), except with the added overhead of the scripting language's VM!

Adding OO-style functionality to SL (or any other shading language) is all well and good in theory, but it basically comes down to adding overhead to the single most time-critical part of the render pipe.

If it changes in any way, I'd rather it go pure C++.

dmaas
06-25-2005, 08:02 AM
The reason I want structs is the following:

before structs:

void do_layer(string texturename; string space;
              string projection; point center;
              vector size; float opacity; etc...)
{
    // do something
}

surface mysurf(string texturenames[]; string spaces[];
               string projections[]; point centers[];
               vector sizes[]; float opacities[]; etc...)
{
    // called once per layer i
    do_layer(texturenames[i], spaces[i], projections[i],
             centers[i], sizes[i], opacities[i], etc...);
}


after structs:

struct layer_params {
    string texturename; string space; string projection;
    point center; vector size; float opacity; etc...
};

void do_layer(layer_params params)
{
    // use params.texturename, params.center, etc
}

surface mysurf(layer_params layers[])
{
    // called once per layer i
    do_layer(layers[i]);
}

(this is paraphrased from one of my big shaders which has 17 identical layers with 33 parameters for each layer... I'm sure the major studios have seen worse ;) )

It is possible to emulate structs using preprocessor hacks, so handling them in the shader compiler is not strictly necessary. But that's like saying C++ classes and templates are not necessary since you can also emulate them with the preprocessor.
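For the curious, the emulation amounts to something like this (a made-up two-field "struct", and assuming your preprocessor does cpp-style token pasting; the real ones have dozens of fields, which is exactly when it gets unmanageable):

#define LAYER_PARAMS(n) \
    uniform string texturename##n = ""; \
    uniform float opacity##n = 1
#define LAYER_ARGS(n) texturename##n, opacity##n

void do_layer(string texturename; float opacity; output color accum)
{
    if (texturename != "")
        accum = mix(accum, color texture(texturename), opacity);
}

surface mysurf(LAYER_PARAMS(1); LAYER_PARAMS(2))
{
    color c = 0;
    do_layer(LAYER_ARGS(1), c);
    do_layer(LAYER_ARGS(2), c);
    Ci = c;
}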

With regards to language bindings, I like Houdini's idea of using the same language for shading that the animation system uses for expressions and scripts. This would avoid the need to shoehorn data through the narrow shading language interface. For instance, imagine accessing the position of an object from a shader just like you would in a Maya expression, instead of having to dump a named coordinate system into the RIB.

I can also imagine an "adaptive" shading language compiler that would attempt SIMD optimization where possible, but fall back to a more general interpreter when it sees something it can't optimize. That way you get both the flexibility of a general-purpose language and SIMD speed for trivial shaders... and it would be a neat way to write procedural primitives - an analog of SL that could generate geometry.
