View Full Version : About Micro polys
09 September 2002, 07:44 PM
Just to know... Is RenderMan the only renderer that uses micropolygons?? It's marvelous!
09 September 2002, 07:51 PM
What are micropolys?
What is the difference between normal polygons & micropolygons??
09 September 2002, 08:48 PM
I think finalRender Stage-1 has micro-poly displacement in some form or another. I imagine that mental ray does too (but I don't know for sure).
09 September 2002, 11:44 PM
finalRender stage1 will have it
VRay has it
Brazil will have it
09 September 2002, 12:24 AM
You guys are great, don't bother answering Jone's question at all. . . and since I can't, I'll guess.
Jone, micropolys are a way for a good renderer to create "microbumps", the things that are on everything, making reflections blurry and softening specular highlights. . .
how's my BS answer?
09 September 2002, 12:46 AM
Micro-poly displacement is a way to displace geometry in the renderer. Think of it as bump maps that actually generate geometry. A bump-mapped perfect sphere will look bumpy, but it's a fake, and the silhouette will still be a perfect circle. A displaced bumpy sphere will have a bumpy silhouette. It's a great way to add detail without actually having to model it.
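A toy sketch (Python, with a made-up sine bump pattern) of why bump mapping leaves the silhouette untouched while displacement does not:

```python
import math

def silhouette_radius(theta, bump_amp, displaced):
    """Radius of a unit circle's outline at angle theta.

    Bump mapping only fakes the lighting: the geometry (and hence
    the silhouette) stays a perfect circle. Displacement moves the
    actual surface points, so the silhouette becomes bumpy too.
    """
    bump = bump_amp * math.sin(8.0 * theta)  # toy bump pattern
    return 1.0 + bump if displaced else 1.0

# Bump-mapped sphere: the outline is still a perfect circle.
print(silhouette_radius(0.3, 0.1, displaced=False))  # 1.0
# Displaced sphere: the outline itself deviates from the circle.
print(silhouette_radius(0.3, 0.1, displaced=True))
```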
google around for some examples.
09 September 2002, 12:49 AM
er :) a little BS? :D
Jeez, don't people google these days?!?
type in "what are micropolygons"
The RenderMan renderer initially dices the geometry into tiny flat surfaces. These are called micropolygons. The actual size of these micropolygons can be controlled by a combination of the Shading Rate and the Pixel Sampling.
Displacement Shaders actually change the geometry of the object after it is diced into micropolygons, but before the shading is applied. Displacement Shaders are assigned on a per-object basis. In ShadeTree you must have the Shader Type set to 'displacement' in the Render Options Editor.
Before the renderer calculates the shading of an object, the surface of that object is divided up into tiny individual flat surfaces, called surface elements or micropolygons. Each element is a single plane. This allows the renderer to calculate a normal for that tiny area of the surface. The size of this area can be controlled by the Shading Rate in the Render Options Editor. If you decrease the Shading Rate, more, smaller micropolygons are generated and the rendered surface fits the geometrically defined surface more closely. The purpose of a Displacement Shader is to tell the renderer the new position for each micropolygon.
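A rough back-of-the-envelope sketch (Python; the simple area division is an assumption for illustration, not PRMan's actual dicing logic) of how the Shading Rate controls micropolygon count:

```python
import math

def micropoly_count(screen_area_pixels, shading_rate):
    """Rough number of micropolygons a surface dices into.

    Each micropolygon covers at most `shading_rate` pixels of area,
    so halving the shading rate roughly doubles the count.
    """
    return math.ceil(screen_area_pixels / shading_rate)

# A patch covering 10,000 pixels at a shading rate of 1.0:
print(micropoly_count(10_000, 1.0))   # 10000
# Decreasing the shading rate generates more, smaller micropolygons:
print(micropoly_count(10_000, 0.25))  # 40000
```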
The Reyes Image Rendering Architecture is roughly the algorithm used by Pixar behind Toy Story, The Abyss, and Terminator 2. Ray tracing isn't the technique used for complex models. When ray tracing arbitrary surfaces that reflect or refract, a ray in any pixel on the screen may generate a secondary ray to any object in the model, which then needs to be accessed from the database. As models become more complex, accessing any part of the model at any time becomes expensive and dominates rendering time. Therefore, the ray tracing algorithm is not suitable for rendering complex environments.
In order to parallelize rendering, it is necessary to define a common representation of basic geometric objects. The Reyes algorithm turns every geometric primitive into micropolygons, which are flat-shaded, subpixel-sized quadrilaterals. This process is called dicing, and the result is a two-dimensional array of micropolygons called a grid.
Before dicing can be done, the primitives must be split into pieces with a reasonably small bound. This process is called splitting. After that, each piece is diced into a grid of micropolygons. Shading and visibility calculations are done only on micropolygons, which can be vectorized. Parallelism can be exploited if the calculations on each micropolygon are independent of each other. Although this is not fully achievable, texture maps can be used to approximate non-local calculations and make the processing of each micropolygon independent of the others.
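The split-then-dice loop described above can be sketched like this (Python; representing a primitive only by its screen-space bound, and the fixed grid limit, are made-up simplifications):

```python
def split_and_dice(bound_size, max_dice_size=16):
    """Reyes-style split-then-dice sketch.

    A primitive is represented only by its screen-space bound in
    pixels. While the bound is too large to dice directly, the
    primitive is split in half (splitting); once small enough, it
    is diced into one grid of roughly bound_size x bound_size
    micropolygons. Returns the list of grid dimensions produced.
    """
    if bound_size > max_dice_size:
        half = bound_size / 2  # split into two smaller primitives
        return (split_and_dice(half, max_dice_size)
                + split_and_dice(half, max_dice_size))
    # Dice: one grid for this small-enough primitive.
    return [(int(bound_size), int(bound_size))]

# A 64-pixel bound splits twice (64 -> 32 -> 16), giving 4 grids:
print(split_and_dice(64.0))
# A 10-pixel bound is already small enough to dice directly:
print(split_and_dice(10.0))  # [(10, 10)]
```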
Now if I had typed all that, I would have used needless energy & required more food, which would have wasted power & added to the pollution in our environment.
Save our Environment! Search on Google first! :D
09 September 2002, 02:47 AM
I love VRay's displacement. Really quick and a high amount of detail.
09 September 2002, 07:57 AM
Thanx! Chris & Tumerboy - u guys are Great!
okay.. maybe i didn't realize that there is a google option..
next time i will.. but thanx for good inf.:rolleyes:
09 September 2002, 12:38 PM
Micropolygons were "invented" by a team working at ILM that created the initial RenderMan specification and then its first implementation, PhotoRealistic RenderMan (PRMan for short).
The reason was that until then renderers had "immediate mode" interfaces that required the animation software to deliver the geometry in simple primitives, like polygons only. The problem with polygons is that the animation program usually doesn't know what resolution the geometry must have to satisfy the requirements of the image to be rendered (camera's POV, motion blur, etc.).
The result was bad, linear silhouette edges on geometry that was supposed to be (or at least look) curved, and these simply weren't acceptable for feature film work -- at least not for the team at ILM at that time.
Most animation software was polygon based then. It was clear that high quality images required higher-order surfaces, and hence RMan is built around higher-order primitives. Polygons are supported but rarely get used. Instead, the renderer is sent a description of the primitive (e.g. a true sphere, a NURBS patch, a subdivision surface) and creates the geometry used for rendering that primitive itself, based on the constraints set by the user through the ShadingRate and image size (spatial resolution) and motion information (temporal resolution). The geometry the renderer ultimately creates from this is the micropolygons.
The usual ShadingRate for production is 1, which means that no micropolygon will have a size (area, not edge length) greater than one pixel [edge length is guaranteed to be less than or equal to sqrt(ShadingRate)]. Regardless of how close you move to any curved silhouette, it will always look smooth and round. Additionally, displacement shaders can alter the micropolygon vertex positions, resulting in -- in theory -- unlimited resolution for the patterns a shader generates. You can zoom in and a clever shader can create appropriate shading and displacement patterns for any zoom level -- however close. This is theory, of course; in reality most shaders will generate detail as required by the constraints of the respective shot -- not more and not less.
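The bracketed edge-length bound works out as follows (a minimal Python sketch of the area-to-edge relationship):

```python
import math

def max_micropoly_edge(shading_rate):
    """Upper bound on a micropolygon's edge length in pixels.

    A micropolygon's area is at most ShadingRate pixels, so for a
    roughly square micropolygon the edge length is bounded by
    sqrt(ShadingRate).
    """
    return math.sqrt(shading_rate)

print(max_micropoly_edge(1.0))   # 1.0  (the usual production setting)
print(max_micropoly_edge(0.25))  # 0.5
```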
MAX or Maya's displacement capabilities don't even come close to the quality delivered by RMan renderers. Brazil currently utilizes MAX displacements and hence this is true for this renderer too.
The term "displacement mapping" has literally been abused by the marketing departments of a|w, discreet, etc., since the renderers in these apps can't really do "displacement mapping" as defined by the people who originally came up with the term.
Here is an interesting thread on this subject on c.g.r.r:
Almost all RMan compliant renderers out there use micropolygon-diced geometry at rendering time. Exceptions are (to my best knowledge) BMRT, Entropy and AIR. BMRT & Entropy use real primitives like spheres if no displacement shaders are attached, I believe. And AIR has some controls that resemble those found in Maya or MAX for controlling higher-order-surface-to-polygon conversion, which makes it seem logical to guess that it uses triangles whose size is determined by e.g. silhouette edge curvature constraints.
Free RMan compliant renderers that use micropolygons are AQSIS and 3Delight.
A funny thing to consider is this: those renderers have been available for years, some even for over a decade.
While many people believe a feature like GI is obligatory nowadays, if you work on a feature film you'll quickly learn that a renderer's robustness and features like truly programmable shading, true displacement mapping or 3D motion blur are of much greater importance.
All those features have been in those renderers since they were initially released, while some of the hyped newcomers like Brazil or fR still lack the most basic of them.
09 September 2002, 08:15 PM
The only renderer that is not RMan compliant and has micropoly rendering is Houdini's Mantra renderer, which supports a shading language very similar to RMan's.
I wanted to add a few problems micropoly renderers have:
- primitives that cross the clipping plane are hard or impossible to dice;
- transparency is quite a hit on render times because hidden-surface removal can't skip primitives behind each other;
- displacement bounds are sometimes hard to set correctly and may cause the clipping problem.
But I agree with Mauritius: for film-production quality they are unbeatable :thumbsup:
09 September 2002, 03:12 PM
Regarding that last post:
- PRMan 11 is said to finally solve that "eyesplits" problem. There are several workarounds; here we use an animated trim curve on NURBS geometry, and booleans if everything else fails.
- "transparency" can be tuned with a special option that assumes micropolygons at or above a certain opacity to be 100% opaque. This can speed up complex scenes with lots of partly transparent geometry a lot.
- if your shader is well done and you know what it's actually doing, displacement bounds are never a problem to determine.
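The opacity-threshold idea from the second point can be sketched like this (Python; the threshold value, the front-to-back "over" accumulation, and the function name are assumptions for illustration, not PRMan's actual option):

```python
def cull_behind_opaque(opacities, opacity_threshold=0.96):
    """Hidden-surface sketch for a stack of transparent micropolygons.

    `opacities` is a front-to-back list of per-sample opacities.
    Samples are composited front to back; once the accumulated
    coverage reaches the (hypothetical) threshold, the stack is
    treated as fully opaque and everything behind it is culled,
    instead of shading every partly transparent layer.
    Returns the samples that were actually kept.
    """
    kept = []
    accumulated = 0.0
    for opacity in opacities:
        kept.append(opacity)
        # Standard front-to-back "over" accumulation of coverage.
        accumulated += (1.0 - accumulated) * opacity
        if accumulated >= opacity_threshold:
            break  # treat as opaque; skip everything behind
    return kept

# Four layers, but the third already pushes coverage past the
# threshold, so the fourth is never shaded:
print(cull_behind_opaque([0.5, 0.9, 0.3, 0.2]))
```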
09 September 2002, 11:50 PM
My brain hurts.
09 September 2002, 02:48 PM
Thank you for your high-end replies!!:thumbsup:
09 September 2002, 11:54 AM
I've been stuck in this forum (Lighting and Rendering) for two days and can't seem to get enough of it :thumbsup:... my parents are wondering what I'm straining my eyes to see :D
hmmm... maybe I should get out more ;) (MAX, Maya forums)
thanx everybody... this is better than porn... sometimes hehehe.
01 January 2006, 04:00 PM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.