Proof of concept: Maxtor


#101

No, you misunderstand it. If the camera moves or the objects move/rotate/scale, all you are doing is changing their transformation, *not* the geometry. You do *not* need to check Re-evaluate Geometry for this. The only time you check it is when you want *all* of the objects in your scene to be re-evaluated in terms of topology.

You can also flag a single object as 'dirty' by adding a MaxToR_Geometry modifier and toggling any switch. This will cause it to be re-evaluated alone.

I have a fairly intelligent caching mechanism set up, so by default all objects are deemed static. That is, they don’t change shape on a per-frame basis, although they are still allowed to transform (move/rotate/scale). This lets the system export them once and re-use that data whether you render a different camera view or a whole animation.

If you have a deforming body, like a character, you assign him a MaxToR_Geometry modifier and flag him as “Deformable Geometry”. This tells MaxToR that he does change shape frame-by-frame and will thus be re-evaluated every frame. This also allows you to have deformation motion blur. (Which means he will be evaluated N times per frame, where N is the number of motion steps). Note, this is also cached, so if Re-evaluate Geometry is off, it will re-use this data for all subsequent renders, be it from a different camera angle, etc.

The benefit of this is that the system only evaluates the minimum of what is needed, and no more, saving a lot of precious time.
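To make it concrete, here’s a minimal sketch of the kind of RIB layout this caching boils down to. The archive file names are made up for illustration; they are not MaxToR’s actual output.

    FrameBegin 1
      WorldBegin
        AttributeBegin
          # per-frame transform, cheap to re-emit for every render
          ConcatTransform [1 0 0 0  0 1 0 0  0 0 1 0  5 0 0 1]
          # static geometry: written to disk once, re-used for every frame/camera
          ReadArchive "cache/teapot_static.rib"
        AttributeEnd
        AttributeBegin
          # deformable geometry: one archive per frame (and per motion step)
          ReadArchive "cache/character_deform.0001.rib"
        AttributeEnd
      WorldEnd
    FrameEnd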

Hope that makes sense.

Hi Baothebuff,

Yeah, the Curves implementation is very preliminary. I'm seeing if I can collaborate with Joe Alter to have native RiCurves export from his hair system in Max, as right now, exporting a large number of splines is highly inefficient. BTW, you probably don't want to be raytracing hair/fur. I'd use deep shadows instead.

Re: raytracing blur. Sure, that should be a trivial change. I just picked 0.01 or so as a reasonable default, but I guess no blur makes more sense.

#102

Can you elaborate on the RiCurves export from Shave and a Haircut? It sounds interesting. Right now, aren’t you exporting each spline as an RiCurve? Why is that inefficient?

About the raytraced shadows and hair, I was just testing the features and getting to know RenderMan. On that note, how would you actually render hair in RenderMan, like Sulley in Monsters, Inc.? RiCurves?


#103

Ok, minor update:

  • Basic Raytraced shadows now have no blur.
  • There is a shader pack available for Aqsis now.
    It’s only been tested with the SVN version of Aqsis as they needed to add an extra feature for the shaders to compile. But it may work with the stable build.

Right, that’s the problem: I’m converting each hair to a spline first. This step is hugely inefficient and unnecessary. Try setting your hair count to 500,000 and see how long it takes to convert it to splines. With the proper API, I could get the hair data straight out of Shave’s engine as a Procedural plugin. Either that, or have it generate a hair archive (DRA) for me to read at render time, without having a million splines in the scene.
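In RIB terms, either route avoids ever putting splines in the Max scene. Something along these lines; the file name, plugin name, parameter string and bounds here are hypothetical, just to show the shape of it:

    # Option A: a pre-generated hair archive, loaded lazily when its bound is hit
    Procedural "DelayedReadArchive" ["shave_hair.rib"] [-10 10 -10 10 0 25]

    # Option B: a procedural DSO that pulls curves out of the hair engine at render time
    Procedural "DynamicLoad" ["hairgen" "density=500000"] [-10 10 -10 10 0 25]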

> About the raytraced shadows and hair, I was just testing the features and getting to know RenderMan. On that note, how would you actually render hair in RenderMan, like Sulley in Monsters, Inc.? RiCurves?

Definitely, I’d use RiCurves. They are highly efficient and render really fast in RenderMan. They are typically rendered as a strip of micropolygons along the curve’s length and are highly flexible in their specification. Unfortunately, 3ds Max splines allow very little control over this (you can only set a constant width (thickness) over the entire spline), which is why integration with Shave would be so advantageous, as it lets you specify root/tip thickness, etc.
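For example, here’s a rough sketch of the difference in RIB (coordinates and widths are arbitrary): a curve with a single constant width, which is all a Max spline can drive, versus one with per-vertex widths that taper from root to tip, which is what a hair system could supply.

    # constant thickness along the whole curve (Max spline style)
    Curves "cubic" [4] "nonperiodic"
        "P" [0 0 0  0 0 1  0 0.3 2  0 0.6 3]
        "constantwidth" [0.02]

    # tapered: wide at the root, thin at the tip (hair system style)
    Curves "cubic" [4] "nonperiodic"
        "P" [1 0 0  1 0 1  1 0.3 2  1 0.6 3]
        "width" [0.03 0.005]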


#104

Thanks for the explanation. I understood perfectly. :thumbsup:

One little request.

Is it possible for MaxToR to remember which rollout tabs are open/closed, so I don’t have to keep opening and closing them every time I close it? And also for it to remember the image ratio setting.

Maybe also settings to change the bucket size.


#105

I can’t install Beta software on my workstation but I just went through the manual. Wow. Fantastic work.

MaxToR_Color is obviously basic, while MaxToR_Shaders allows advanced surfacing for those familiar with RenderMan. Is your goal for MaxToR to mimic scanline (i.e. rendering Max materials)? Or will complex surfacing have to be accomplished with RenderMan shaders?

Thanks.


#106

Hi fez,

Thanks for taking the time to look at the manual :slight_smile:

The MaxToR Color modifier is designed only to provide the Cs and Os primitive attributes to RenderMan. Without it, the exporter uses the object’s wire color.
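In the RIB stream that boils down to something like this (the values are just an example):

    AttributeBegin
        Color   [0.8 0.2 0.2]   # Cs -- per-object surface color
        Opacity [1.0 1.0 1.0]   # Os -- per-object opacity
        Sphere 1 -1 1 360
    AttributeEnd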

The MaxToR Shader modifier lets you attach a surface and a displacement shader to your object. You can either handwrite them or use a graphical authoring tool to create them. MaxToR does not currently translate Max’s material network into RenderMan shaders. It’s doable, but a very complex task, and not that useful in production, as many studios handwrite their shaders.
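A rough sketch of the attribute block such a modifier emits; "plastic" is just the standard surface shader used as a stand-in, and the displacement shader name, its parameter and the bound value are made up:

    AttributeBegin
        Surface "plastic" "Ks" [0.6] "roughness" [0.15]
        Displacement "my_displace" "amplitude" [0.05]
        Attribute "displacementbound" "sphere" [0.05] "coordinatesystem" ["shader"]
        Sphere 1 -1 1 360
    AttributeEnd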

There have been many additions to MaxToR since I published the manual, so it doesn’t cover the bleeding-edge developments.

Baothebuff,

Your little request is not as little as you may think :P.

However, it is implemented in the latest version of MaxToR, plus some more.

In addition to your request, you can now configure renderer-specific options. These are quite advanced settings that should only be tweaked by experienced users, as you can easily degrade rendering performance by misconfiguring them.

Note: There have been a number of changes to this release, and I suggest you delete your old lastsettings.ini file and let MaxToR create a fresh one.

Here’s a screenshot of some new stuff:

Keep in mind that these settings are renderer-specific, thus not all renderers will support all the features. Consult your renderer’s manual for details on what is supported.


#107

Great update!
Could you explain a bit about gridsize?

I’m under the impression that gridsize should be equal to bucket size / shading rate and not any smaller. Would it be wise to have a checkbox that adjusts the gridsize automatically with regard to bucket size and shading rate, so that you don’t have to mess with it? You could then uncheck it if you know what you’re doing. Sort of like a simple/advanced checkbox.

I think all the raytrace visibility parameters should be on by default, and only turn off when you add the modifier.

The reason I’m saying this is because I was trying to make an AO shader, and after a long time I realized that visibility diffuse must be on for it to work using the gather() function. So if you have a scene full of stuff, it would be unwieldy to go and turn on the visibility object by object.

On a side note, here are the first 2 shaders I wrote: 1 surface shader with AO and 1 displacement. Woot!


#108

Hi,

AFAIK, you should not need to adjust the grid size unless you find that the renderer is using too much memory and you want to keep it at a steady memory footprint. The grid size does not have to exactly cover one bucket; it rarely does, and the default is just a good approximation. The default MaxToR uses is the same as 3Delight’s, and I take it they’ve struck a good balance between performance and memory consumption, so unless you really know what you’re doing, I’d say leave it alone :P. It shouldn’t cause any trouble.
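For reference, those UI settings correspond to the usual REYES options in the RIB; the numbers below are illustrative defaults, not MaxToR’s:

    Option "limits" "integer[2] bucketsize" [16 16]   # pixels per bucket
    Option "limits" "integer gridsize" [256]          # max micropolygons shaded in one grid
    ShadingRate 1                                     # micropolygon area in pixels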

> I think all the raytrace visibility parameters should be on by default, and only turn off when you add the modifier.

> The reason I’m saying this is because I was trying to make an AO shader, and after a long time I realized that visibility diffuse must be on for it to work using the gather() function. So if you have a scene full of stuff, it would be unwieldy to go and turn on the visibility object by object.

Personally, I think all raytracing should be off by default, as it is a costly operation, and if all visibility options were on by default, an unsuspecting user would get remarkably slow render times without realising he is tracing every object unnecessarily.

What I recommend you do is assign an instanced modifier to your whole scene. The easiest way to do this is to select all the objects you want and assign the modifier in the modifier panel. This will instance it across all your selected objects, and when you change the parameter once, all of them will change.
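The attribute that toggle ends up emitting per object is roughly the following; the exact parameter spelling can differ between renderers, so treat this as a sketch:

    AttributeBegin
        # make this object visible to diffuse rays, so gather()-based AO can see it
        Attribute "visibility" "integer diffuse" [1]
        Sphere 1 -1 1 360
    AttributeEnd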

> On a side note, here are the first 2 shaders I wrote: 1 surface shader with AO and 1 displacement. Woot!

Good stuff! I commend you for going in and actually writing some RenderMan shaders :slight_smile: Did you render this in 3Delight?


#109

Hah, I forgot about the whole instancing thing. I know RenderMan is not a raytracer; it’s just that I’m used to using MR. :smiley:

Yeah, it’s 3Delight. I’ve always wanted to program shaders, or at least learn, but in the past mental ray’s been too hard to grasp, with my limited C++ background anyway. Now that I’ve got a chance to mess with RenderMan, writing RSL seems to be a lot easier. Although I wish there were some way to quickly create shaders without programming, like the built-in max-to-mental ray connection, and only program the shaders that are really unique or non-standard. Something like a “RenderMan for Max”. ATM, MaxToR is great for TDs but not really artist-friendly for someone who’s used to Max. But it IS a huge step in the right direction. :thumbsup:


#110

RenderMan can raytrace nowadays, as you are aware, but not nearly as fast as dedicated raytracers, like VRay, MR or Brazil.

> Yeah, it’s 3Delight. I’ve always wanted to program shaders, or at least learn, but in the past mental ray’s been too hard to grasp, with my limited C++ background anyway. Now that I’ve got a chance to mess with RenderMan, writing RSL seems to be a lot easier. Although I wish there were some way to quickly create shaders without programming, like the built-in max-to-mental ray connection, and only program the shaders that are really unique or non-standard. Something like a “RenderMan for Max”. ATM, MaxToR is great for TDs but not really artist-friendly for someone who’s used to Max. But it IS a huge step in the right direction. :thumbsup:

Yeah, that’s kind of why I decided to name it after MtoR :). It didn’t translate Maya’s Hypershade either, but was really powerful for TDs. RfM, on the other hand, while it can translate the Hypershade, doesn’t output RIBs (among other things), so you lose a hell of a lot of flexibility in a pipeline. LiquidMaya doesn’t either, but it was still used on a number of feature films. (BTW, RfM Pro is essentially the replacement for MtoR, I think.)

I guess at some stage I might try to implement Max->RM shader conversion; I have some ideas on how to attack it, but it’s not my highest priority ATM.

Thanks for all the useful feedback though, it is well appreciated :slight_smile:

EDIT: There is a minor camera blur bugfix in the latest release; I missed it when fixing transform and deform blur. It’s nothing too major, but if you plan on using a lot of camera blur, I recommend you get the update.


#111

Hey man, I see big steps here! Thumbs up!
I’m sorry I can’t be helpful, but I’m really busy right now :frowning:
Anyway, I know it’d be a pain in the ass, but I think you should make translation of (at least the standard) Max materials a priority, because the average Max user is not a great TD or technically-minded expert; I mean, one of the reasons you use Max is that it’s immediate and user-friendly. OK, MtoR doesn’t export the Hypershade, but Pixar also sells Slim in the RAT package (now RenderMan Studio); this means that in MaxToR I must use hand-coded shaders.
Take into consideration what Animal Logic’s MayaMan does; it’s damn cool. OK, maybe I’m making silly comparisons here, but I hope you catch what I’m trying to say.

Apart from that, man, you’re doing a marvellous job and I hope to be more useful in the future! See ya :smiley:


#112

Well, nothing’s stopping you from using other graphical authoring tools, like Sler or Shaderman. BTW, Shaderman.Next is coming very soon and will be open source, so I’m looking to contribute to it; it should be a great little tool for making shaders :slight_smile:

EDIT:

Wow, I just tried Sler, and I wholeheartedly recommend it. It’s free, supports PRMan, 3Delight, Aqsis and Pixie, and is a really powerful and user-friendly node-based shader editor.
I checked out the latest CVS, but you can try the alpha release.

EDIT 2: Yeah, apparently the CVS is way better than the 0.1 release.

EDIT 3: It’s not quite production-ready yet; I’m trying to bring it to a more usable state. I’ll send the author a few patches.


#113

I know very well that there are several pieces of software that can help you build your own shading networks, but that’s not the point I was making.
Anyway, no problem. I know you’re doing a stunning job, and I didn’t want to decry your massive work in any way.

Keep it up! :applause:


#114

Hey scorpion, awesome stuff! I just noticed this thread (I need to leave the mxs forum more often). Great work, keep it up, man. I recently bought the RenderMan books and was getting interested in coding an exporter for Max, so this is super cool.


#115

kage_maru, I understand your point. I agree it would be nice to have native Max shader translation in MaxToR, and sometime in the future I’d like to have it implemented, but there’s only so much I can do with my time.

I mentioned Sler and Shaderman as an alternative to Slim (which as you mentioned is what Pixar provides with RAT). Of course, they are not as good yet, but I’m working to improve them. I may even start my own branch.

d3coy, thanks for the feedback man :slight_smile: Give it a spin if you like :wink:


#116

On that note, do you have any idea how to set up Shaderman? I can’t get the preview to work with 3Delight.

MaxToR is coming along great, with a lot of new features. As I was messing with AO and GI in my shader, the trace depth became ever more important, but I didn’t know how to activate it in the RIB, so it’s good that it’s in there now. I didn’t know about “trace” “int maxdepth”; I kept trying “trace” “int maxdiffusedepth” and that didn’t work. I think what you need to think about now is reordering the render UI as more things get implemented, so that things are easier to find and grouped coherently, e.g. you could group the renderer settings with the sample settings, each in its own box under the same rollout. I’m only suggesting that because I find myself having to look through all the rollouts to find something I thought should be in another one. I hope that makes sense. :smiley:
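For anyone else hunting for it, the option in question looks like this in the RIB (the depth value is just an example):

    Option "trace" "integer maxdepth" [4]   # caps ray recursion for gather()/trace() calls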

EDIT: Bug? When you have a light that is turned off but has shadows checked, it still tries to create a shadow map and gives an error. (It only happens if you haven’t generated the .sm file already.)


#117

I assume you’re using the old version?

What was the error/problem you were facing? Shaderman 0.7.0.0 isn’t the best editor around, but for now I think it’s the best free tool out there. (I’m working on developing a Sler spin-off.)

Here’s some tips on making your experience a little more comfortable:

Set up some user environment variables:
SMANHOME = /path/to/shaderman

Make sure %DELIGHT% points to your 3Delight installation.

Set up your paths like so:

Go to your %SMANHOME% dir and recompile all the shaders with 3Delight, except sttexture_air.sl:


    cd %SMANHOME%\shaders
    shaderdl *.sl
    shaderdl sttexture.sl
 

(Note: we compile sttexture.sl again to overwrite the one produced by sttexture_air.sl.)

And you should be fine.


#118

While I agree that the UI should be grouped coherently, I disagree that the Sampling settings should be under the same rollout as the Renderer settings. They are two distinctly different areas of control. Furthermore, the latter are more advanced settings that should rarely be touched and are highly implementation-specific (barely any renderers support bucket-order, for example). Conversely, the Sampling settings are very often tweaked (they control image quality) and are more common in that pretty much all renderers implement them.
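To illustrate what I mean by the Sampling group being universal, these are plain RI calls that essentially every renderer honours (values are just typical defaults):

    PixelSamples 4 4              # anti-aliasing samples per pixel (x, y)
    PixelFilter "gaussian" 2 2    # reconstruction filter and its width
    ShadingRate 1                 # shading quality; smaller = finer micropolygons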

One valid suggestion is that perhaps the raytracing settings should be grouped under their own rollout; however, I couldn’t justify it, as there are only two settings to configure.
That may change in the future.

Besides, most other renderers group their settings in a similar way to mine (Sampling gets its own category, etc.). I looked at a number of other renderers’ UIs when designing mine (Liquid, RfM, VRay, mental ray, etc.) and borrowed a few ideas.

> EDIT: Bug? When you have a light that is turned off but has shadows checked, it still tries to create a shadow map and gives an error. (It only happens if you haven’t generated the .sm file already.)

You mean it doesn’t actually render the shadow map, just tries to pre-process it, right? Well spotted; it was a minor bug, but a bug nonetheless. I’ll hold off on releasing a new version for such a minor fix, though, as most users won’t come across it.


#119

Thanks for explaining Shaderman; it still doesn’t work for me, though. Good luck on your Sler spin-off. Looking forward to it.

I was basing what I said on the mental ray and finalRender UIs. They have similar layouts, and yours seems to follow that, but a few things seem out of place when going back and forth between MaxToR and MR/FR/scanline. E.g., wouldn’t it be better if exposure were grouped with the camera options? I guess grouping the samples and renderer settings works well for a raytracer, but maybe it’s not as good an idea for RenderMan. But I’m sure that can all be addressed once the features are finalized.


#120

No, it isn’t. 3Delight has faster subdivision surfaces and faster displacement, and is up to 200% faster than PRMan at raytracing. This is comparing the PRMan 13.5 beta with the 3Delight 7.0 beta.

Cheers,

Moritz