Catmull-Clark, And The SUB-D Myth?


I’d like to see if we can get some closure on this subject. It’s been quite a controversy, and regardless of its actual impact on modeling, I’d like us to finally resolve the issue.

LW, MAYA, C4D, XSI, Mirai, Nendo, Max, et al. all have some form of Sub-D modeling. But it has come to many people’s attention that they differ wildly in their implementation. Phrases like “yeah, but Max doesn’t support TRUE Catmull-Clark surfaces” abound.

Since I see these phrases so frequently, I can no longer differentiate between what is true and what is myth. Some of you may know that a group of CGTalkers and I are creating an all-encompassing Sub-D FAQ that lists and explains methods, terminology, and theory. For my research and documentation to continue, I would really like to get your take on the matter.

This is a bit dry, I know, but what good is a huge CG forum if we can’t talk about esoteric modeling methodologies? :smiley:

Let me start with a quote:

"What we know today as “Sub-D” was originally described in “Recursively generated B-spline surfaces on arbitrary topological meshes” by E. Catmull and J. Clark (Computer-Aided Design 10(6):350-355, November 1978).

The algorithm was developed by Ed Catmull, a co-founder of Pixar Animation Studios, together with Jim Clark.

When a polyhedron is subdivided with the Catmull-Clark method, a new vertex (called a “face point”) is placed at the center of each original face, a new “edge point” is placed on each original edge, and new edges are added to connect each new edge point to its adjacent new face points.

The positions of the vertices are calculated as follows:

The face points are positioned as the average of the positions of the face’s original vertices;
The edge point locations are calculated as the average of the center point of the original edge and the average of the locations of the two new adjacent face points;

The old vertices are repositioned according to the equation:

(Q + 2R + S(n - 3)) / n

Q is the average of the new face points surrounding the old vertex,
R is the average of the midpoints of the edges that share the old vertex,
S is the old vertex point, and
n is the number of edges that share the old vertex. "
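To make the quoted vertex rule concrete, here is a minimal Python sketch (my own illustration, not from the paper; the helper names are mine) that applies it to one corner of a unit cube:

```python
# Reposition an old vertex with the Catmull-Clark rule:
#   new = (Q + 2R + (n - 3) * S) / n
# Q: average of the new face points around the vertex
# R: average of the midpoints of the edges sharing the vertex
# S: the old vertex position; n: number of edges sharing it (the valence)

def average(points):
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(3))

def reposition_vertex(S, face_points, edge_midpoints):
    n = len(edge_midpoints)  # vertex valence
    Q = average(face_points)
    R = average(edge_midpoints)
    return tuple((Q[i] + 2 * R[i] + (n - 3) * S[i]) / n for i in range(3))

# Corner (1,1,1) of the unit cube: valence n = 3.
S = (1.0, 1.0, 1.0)
face_points = [(0.5, 0.5, 1.0), (1.0, 0.5, 0.5), (0.5, 1.0, 0.5)]    # adjacent face centers
edge_midpoints = [(0.5, 1.0, 1.0), (1.0, 0.5, 1.0), (1.0, 1.0, 0.5)]

print(reposition_vertex(S, face_points, edge_midpoints))  # the corner gets pulled inward
```

For this corner, n = 3 makes the S term vanish, and each coordinate works out to 7/9, i.e. the sharp corner is pulled toward the cube’s interior, which is exactly the rounding-off you see when you subdivide a box.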

So what’s the deal? What is “TRUE Catmull-Clark”? Spline-based? XDUGEF (hehe)?

What do you think? I know, long boring post…but I really think we need more elaboration…

MODS: I beg that this topic not be moved to the modeling section as I’d really like a wide survey of the CGTalk community. But of course, it’s your call.




I don’t have an answer, but you need not be afraid of it being moved. In my opinion, threads like this are essentially what the general discussion forum should be all about.

Theory and the like.
I hope people will come up with an answer so the damned Sub-D FAQ of yours can finally see the light.



Yes, stop posing with your guns and get back to work on that faq 3dz :blush:


If I recall correctly, Catmull-Clark and Doo-Sabin refer to the math involved in generating the surface more than to the implementation.

i.e. application X may convert everything to curves and create surfaces based on that,

while application Y just tessellates endlessly to satisfy the camera distance.

I could be horribly, totally wrong though.
Hehe, take it all with a grain of salt, I say.

I’d certainly love some PRMan and mental ray experts to come in and discuss it, though, and tell me just how close to wrong I am.


Assuming N is the number of times the algorithm runs:

As N tends to infinity you approach a limit surface. This limit surface is smooth everywhere (C2, except at extraordinary vertices, where it is C1) and is what some people call a “true” Catmull-Clark surface.

Some 3D packages (Max / LightWave) implement the algorithm to a finite N, which is then rendered as a normal poly object.

Because of the way Reyes and RenderMan-compliant renderers (PRMan, Entropy, etc.) work, they can effectively render the limit surface: the surface is diced to a sub-pixel level, so as far as the resolution of the output is concerned, the limit surface is what is rendered.

Maya is somewhere in between: a tessellation factor controls the N used at render time, I think, though in the viewport a fixed number is used.

Furthermore, Maya’s subds have extra properties that let you use hierarchical modelling techniques. I don’t think this is described by the original paper; it seems to be an addition of Alias’s.

Lastly, Pixar owns a patent on a technique for creasing subd edges. I think this is described in later Pixar literature rather than the original paper. It’s not something the finite-N renderers can really implement.
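The finite-N versus limit-surface distinction is easiest to see in the curve analogue. Below is a sketch of my own (the masks are the standard uniform cubic B-spline refinement rules, nothing package-specific): each subdivision step refines a closed control polygon, and a tracked control point converges to the known limit position (p[i-1] + 4*p[i] + p[i+1]) / 6, the 1-D analogue of the limit surface.

```python
# Curve analogue of "subdivide to the limit": cubic B-spline refinement.
# Each step replaces vertex p_i with (p_{i-1} + 6 p_i + p_{i+1}) / 8 and
# inserts the edge midpoint (p_i + p_{i+1}) / 2; iterating forever yields
# a C2 limit curve, which finite N only approximates.

def subdivide(poly):
    out = []
    n = len(poly)
    for i in range(n):
        prev, cur, nxt = poly[i - 1], poly[i], poly[(i + 1) % n]
        # vertex point: old vertex i lands at index 2*i of the new polygon
        out.append(tuple((a + 6 * b + c) / 8 for a, b, c in zip(prev, cur, nxt)))
        # edge point
        out.append(tuple((b + c) / 2 for b, c in zip(cur, nxt)))
    return out

def limit_position(poly, i):
    # Known limit stencil for cubic B-splines: (p_{i-1} + 4 p_i + p_{i+1}) / 6
    prev, cur, nxt = poly[i - 1], poly[i], poly[(i + 1) % len(poly)]
    return tuple((a + 4 * b + c) / 6 for a, b, c in zip(prev, cur, nxt))

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
target = limit_position(square, 0)  # (1/6, 1/6) for this square

poly = square
for _ in range(8):
    poly = subdivide(poly)  # original vertex 0 stays at index 2 * 0 = 0
print(poly[0], target)      # the finite-N position closes in on the limit
```

The gap between the finite-N position and the limit shrinks by roughly a factor of four per step, so a renderer that dices finely enough (or evaluates the limit stencil directly) sees the smooth curve, while a fixed small N leaves visible faceting.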

I probably got some of the above wrong, but I think it’s fairly accurate.


So let me see if I can understand this…

Max and LW are using the same core math, but it’s the output that is different. Sorta like a poor man’s version. The final output is polygons, which are smoothed via a given integer multiplier. In Max, for example, you apply a MeshSmooth modifier to your control-cage-level polygonal model. MeshSmooth runs the Catmull-Clark algorithm on the mesh, subdividing it N times. It then unifies the mesh’s smoothing groups, and you have a mesh that is subdivided, but only at the geometric level.

Sorta like pixels vs. vectors? Max and LW use whole numbers to subdivide a mesh, but the mesh has no other “intelligence”. A “true” Catmull-Clark smoothing system has a theoretical control surface that is completely resolution-independent?

With “true” Catmull-Clark, the math is evaluated in more of a floating-point way. The smoothness is consistent no matter the distance from the camera, because the control surface is being subdivided at a sub-pixel level. I wonder if similar technology is what lets RenderMan-compliant software produce such amazing displacements.

If that is how it works, it makes complete sense.

Thanks for the input, guys. Anyone else? Do I understand it?




You’re on the right lines 3DZ.

Basically, a mathematically defined smooth surface, such as a NURBS or subdivision surface, looks smooth no matter how close you get to it, because it is mathematically smooth: what we mathematicians refer to as C2 continuity.

A polygonal surface has C0, or positional, continuity, which means that points on the surface change smoothly into one another, as opposed to having gaps in the model.

C1 continuity means that the tangents of points on the surface change smoothly into one another. The tangents (or consider the normals if you prefer) on a polygonal surface change abruptly when you get to the edge of a polygon.

A NURBS or subdivision surface (NURBS are one type of B-spline) has C2 continuity (for subdivision surfaces, everywhere except at extraordinary vertices). This means that the curvature of the surface (the second derivative) changes smoothly over the surface. This is the mathematical definition of the word smooth.

So what does this mean? Well, at a mathematical level, C2 surfaces are smooth no matter how close you get, whereas a C0 polygonal surface can only fake being smooth by increasing the density of the mesh and blending the normals from polygon to polygon.
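A quick numeric illustration of the C0/C1/C2 distinction (my own sketch, not tied to any package): two adjacent segments of a uniform cubic B-spline curve, evaluated at their shared joint, agree in position, tangent, and curvature, which is exactly the C2 property described above. A polyline would only pass the first check.

```python
# C2 continuity check for a uniform cubic B-spline curve.
# A segment over control points P0..P3 evaluated at t in [0, 1] is:
#   B(t) = ((1-t)^3 P0 + (3t^3 - 6t^2 + 4) P1
#           + (-3t^3 + 3t^2 + 3t + 1) P2 + t^3 P3) / 6
# Adjacent segments share three control points; at the joint their value,
# first derivative, and second derivative all coincide.

def spline_eval(P, t):
    b = [(1 - t) ** 3, 3 * t**3 - 6 * t**2 + 4, -3 * t**3 + 3 * t**2 + 3 * t + 1, t**3]
    return sum(w * p for w, p in zip(b, P)) / 6

def spline_d1(P, t):  # first derivative of the basis
    b = [-3 * (1 - t) ** 2, 9 * t**2 - 12 * t, -9 * t**2 + 6 * t + 3, 3 * t**2]
    return sum(w * p for w, p in zip(b, P)) / 6

def spline_d2(P, t):  # second derivative of the basis
    b = [6 * (1 - t), 18 * t - 12, -18 * t + 6, 6 * t]
    return sum(w * p for w, p in zip(b, P)) / 6

ctrl = [0.0, 2.0, 1.0, 4.0, 3.0]    # any 1-D control polygon
left, right = ctrl[0:4], ctrl[1:5]  # two adjacent segments
for f in (spline_eval, spline_d1, spline_d2):
    # end of 'left' segment meets start of 'right' segment: C0, C1, C2
    assert abs(f(left, 1.0) - f(right, 0.0)) < 1e-12
```

The same check run on the third differences would fail: a cubic B-spline is C2 but generally not C3, which is why C2 is the usual bar for "mathematically smooth" in this context.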

AFAIK Maya adaptively tessellates the subdivision surface at render time to give a smooth appearance (since Maya, like most renderers, only ever handles triangles).

What makes the REYES algorithm (i.e. RenderMan) so great is that it was designed for rendering smooth surfaces. PRMan dices an object into tiny “micropolygons” of a size you can control roughly by altering the shading rate parameter. A shading rate of 1 makes all the micropolygons roughly 1 pixel in size. What’s more, different chunks of the same object are diced separately, so you get the optimal trade-off between quality and memory/speed on one object.

This micropolygon dicing is also the reason why PRMan’s displacements are so scrumptious. To get fine displacements out of the Maya renderer, for example, you have to increase the tessellation of an object to an insane amount that cripples the renderer. PRMan, by contrast, does fine displacement with barely any increase in render time.
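A back-of-envelope sketch of the shading-rate idea (a deliberate simplification of my own, not PRMan’s actual dicing code): pick a grid resolution for a patch so that each micropolygon covers roughly `shading_rate` pixels of raster area.

```python
import math

# Simplified Reyes-style dicing estimate (illustrative only):
# choose a grid so each micropolygon covers ~shading_rate pixels.

def dice_resolution(raster_area, shading_rate=1.0):
    """Return (nu, nv) grid dimensions for a roughly square patch."""
    micropolys = raster_area / shading_rate      # target micropolygon count
    side = max(1, math.ceil(math.sqrt(micropolys)))
    return side, side

# A patch covering 100x100 pixels at shading rate 1.0 -> ~10,000 micropolygons.
print(dice_resolution(100 * 100))        # (100, 100)
print(dice_resolution(100 * 100, 4.0))   # coarser: (50, 50)
```

Because the dice rate is driven by on-screen area, a distant object automatically gets a coarse grid and a close-up gets a fine one, which is why displacement detail scales with the camera instead of with a fixed tessellation setting.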

AFAIK mental ray now renders micropolygons too. Yummy!


thx for the headache guys!!


Originally posted by stal3fish
thx for the headache guys!!

I agree, the math that goes into these programs is really amazing.
I know that to those who know it, it seems simple, but it spins my brain every time I read this.



Wonderful info. Now I understand how it works. I also understand where all the rumors and myths come from.

I really appreciate everyone taking the time to respond to such a boring post. I’ve finally gotten some answers to a problem that has been plaguing my brain for years.

Thanks a ton.




Interesting read, this. A lot of people don’t realize (appreciate?) the immense amount of math under the hood of our favourite 3D programs.

So tell us more :slight_smile:


I don’t know what frightens me more… the math itself, or the fact that I may have understood some of that…


Another aspect, particularly important when dealing with subdivision surfaces, has to do with setup and culling. In some renderers, taking Max as an example, you provide the renderer with an already-subdivided surface, which requires the renderer to do much more work before it can even start drawing the surface. For example: you take a cube in Max, and in order to render that cube as a subdivided sphere, you have to subdivide it to N. If N is 4, you’ll be giving the renderer over 6,000 points and over 1,500 faces that need to be set up and culled before the surface can be drawn. And even then, you’ll see faceting if the camera gets close enough to the model.

In other renderers, you just give the renderer the cube (8 points, 6 faces) and tell it to render as a subdivision surface. The renderer can set the cube up, immediately disregard the back face(s), subdivide, cull, and dice the resulting surface. It sounds like more work, but it goes faster because the renderer has less setup and culling to do.
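Those cube numbers follow directly from the refinement rules: each quad becomes four quads, each edge splits in two plus four new edges appear per face, and a new vertex appears per edge and per face. A small sketch of the bookkeeping (my own, assuming an all-quad cage like the cube):

```python
# Count vertices/edges/faces of a quad mesh after k Catmull-Clark levels.
# Per level: V' = V + E + F   (old vertices + edge points + face points)
#            E' = 2E + 4F     (each edge splits; 4 new edges per quad face)
#            F' = 4F          (each quad becomes four quads)

def subdivision_counts(V, E, F, levels):
    for _ in range(levels):
        V, E, F = V + E + F, 2 * E + 4 * F, 4 * F
    return V, E, F

V, E, F = subdivision_counts(8, 12, 6, 4)  # cube, N = 4
print(V, E, F)   # 1538 3072 1536
print(4 * F)     # 6144 quad corners -- presumably the "over 6,000 points"
```

So at N = 4 a cube carries 1,536 quads and 1,538 unique vertices (6,144 face corners, which is presumably where the "over 6,000 points" figure comes from), versus the 8 points and 6 faces a Reyes-style renderer starts from when it dices on demand.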

