Pixar's OpenSubDiv enters open beta


THREAD CLOSED
 
Old 08 August 2012   #46
Originally Posted by Cessen2: It does seem a little odd to me that Pixar has decided to use the Microsoft Public License instead of a BSD variant. The rest of the industry has pretty well standardized on BSD variants for open source library code.

Of course, it's up to Pixar what license they use, and I'm not suggesting that they are doing something wrong (the Microsoft Public License is actually a really nice license, IMO). It just strikes me as a strange choice to break with the rest of the industry.

For the record, the Microsoft Public License is compatible with the GPL v3, but not with the GPL v2:
http://www.opensourcelegal.org/?page_id=532

Blender is licensed under the "GPL v2 or any later version" terms. So it may be possible to use MPL code in Blender thanks to the "or any later version" clause. But I'm not a lawyer, so grain of salt and all that.


OpenSubdiv was created in part by Charles Loop of MSR (Microsoft Research: http://research.microsoft.com/en-us/um/people/cloop/) and thus contains some Microsoft tech, which is why the MPL license was used.

The research paper: http://research.microsoft.com/en-us...oop/tog2012.pdf

Last edited by MAK : 08 August 2012 at 11:24 PM.
 
Old 08 August 2012   #47
Originally Posted by cgbeige: GoZ in ZBrush and Mudbox support Maya creases:

https://vimeo.com/13115604


I'm aware they do now. But they didn't back in 2006, which is when we were first hoping to implement this workflow.
 
Old 08 August 2012   #48
Originally Posted by mister3d: I don't have Maya, but I'd wait until it's officially implemented, unless you're a programmer.

I mean, make use of the code. I thought of it as an API or library with handy functions for converting geometry cages into subdivided versions of themselves. But it seems I might come back to this thread in a while and giggle at myself for asking, since clearly there's something I'm not getting.
__________________
The Pirate
Duplicator Series

Last edited by marcuso : 08 August 2012 at 09:49 AM.
 
Old 08 August 2012   #49
From the blenderartists forum:

''Just figured I'd pipe in here. While at the Blender booth on the Expo floor at SIGGRAPH, I had the pleasure of talking with the presenter in the demo vid posted at the end of page 4. He actually came to the booth seeking us out... in part because of concerns raised in this thread (yes, they read this thread). He and the rest of the OpenSubdiv team are very interested in seeing their code integrated into Blender. As already mentioned, they're in touch with Ton and Nicholas to work something out, so progress is being made.

Also, note that this is a somewhat early stage of OpenSubdiv. Apparently it can be optimized and made even faster... like an order of magnitude faster than what was in the demo.
''
 
Old 08 August 2012   #50
Originally Posted by marcuso: Hah! Yes, apparently. But so how do you get into using it?


I second that question...


1. Does anybody know how to use it in Maya? https://github.com/PixarAnimationStudios/OpenSubdiv

2. Does it require a RenderMan engine, or could mental ray and Maya Software render meshes subdivided this way?
 
Old 08 August 2012   #51
I suspect there are very few programmers in this particular field on this site capable of doing anything useful with this offer. As I understand it, it isn't tied to any particular software; it is merely a set of equations translated into code and optimized for performance using various GPU standards. There's still the task of implementing the code in whatever software you plan to use it with.

In the examples, there is ready-to-compile source for a Maya plugin so that one may try it out for oneself, but it's most likely only something to get potential developers up to speed in creating a proper plugin for Maya and other software.

Surely it won't be long before people start posting their solutions for Maya and others, and shortly thereafter some giant will make it native in their software.

I was mostly curious as to how I could play around with it in smaller sandbox pet projects, as I've been getting into this type of coding lately.
 
Old 08 August 2012   #52
Originally Posted by RickToxik: 2. Does it require a RenderMan engine, or could mental ray and Maya Software render meshes subdivided this way?


For a renderer to make use of this code, it too would have to implement it, as most renderers are capable of subdividing meshes on their own, using potentially different sets of algorithms; this is what I believe earlyworm was talking about earlier (no pun intended). Maya may subdivide a mesh according to one set of rules, whilst RenderMan does it differently.

Someone please correct me if I'm wrong.
 
Old 08 August 2012   #53
Technically (I suppose) the object is not 'subdivided in realtime'; what presumably happens is that it only recalculates the positions of the C-C verts.

I suppose that converting from a given software's mesh data arrays to Pixar's subd method creates a delay, depending on how dense the mesh is.

Anyway, this is all good news.
I just wish Pixar were not so dependent on Maya and AD products.
__________________
may not be following this thread.
.
 
Old 08 August 2012   #54
Originally Posted by earlyworm: The main difference between Pixar SubDs and other implementations is in the way it subdivides the geometry - both in terms of geometry and UVs. Pixar has patents on its method of subdividing geo, which has resulted in major inconsistencies in dealing with subdivision surfaces in a CG pipeline, as other developers can't match the method used by Pixar.

So Maya, Max, Softimage, etc will all produce subdivided geo in their own unique way, Mudbox and ZBrush will both do something else, and none will match what Pixar's PRMan does or what any other renderer (mental ray, vray, etc) will do. The renderer is the most important part in this equation, as its output is what you see on-screen.

On top of this, not all software applications support the same SubD features. I remember a while back we wanted to use creases on SubD surfaces - the benefits were obvious, as it meant you didn't have to spend time cutting and adjusting additional edges to produce nice-looking hard-surface objects - also, less geometry made rigging and rendering quicker. Now, Maya did support creases, and it was easy to visualise the creasing effect inside Maya, transfer that info to the renderer, and get approximate results. The problem was that neither ZBrush, Mudbox nor Cyslice supported creases, which meant we couldn't sculpt or extract displacement from those objects. So we were back to cutting additional edges into our models.

I've only seen one VFX studio deal with this correctly and not through artistic hacks - it did so by implementing its own SubD library (not all countries recognise software patents), which allowed it to match Pixar's method of subdividing geo. That way it didn't matter what software application you did the subdivision in; the results would be identical to those of the renderer.

A few practical benefits of this were: a better representation of the model inside Maya when modelling; rendered hair/fur didn't appear to float above the skin, as the fur was generated from the same limit surface; and textures didn't show artifacts (distortion, seams), as texture artists were painting on geometry and UVs which matched those of the renderer.

The fast GPU implementation and SubD features (creases, etc) presented here are certainly cool pluses - but they're not the main reason why this is good news - the winner on the day here is pipeline consistency.


Quoted for agreement. This is the big win: every 3d app can finally subdivide geometry the same way, which means we will finally be able to move info around without issues and artifacts.

- Neil
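earlyworm's point about creases is visible in the subdivision rules themselves. As a toy sketch (plain Python, made-up function names, not OpenSubdiv's API): Catmull-Clark's smooth rule places a new edge point at the average of the edge's endpoints and the two adjacent face points, the infinitely sharp rule uses the plain midpoint, and a fractional crease sharpness blends between the two - which is why every app in the chain has to agree on the blend.

```python
# Toy illustration of semi-sharp crease blending for a Catmull-Clark
# edge point. Hypothetical names; not OpenSubdiv code.

def smooth_edge_point(v0, v1, f0, f1):
    """Smooth C-C rule: average of the two edge endpoints and the
    two adjacent face points."""
    return tuple((a + b + c + d) / 4.0 for a, b, c, d in zip(v0, v1, f0, f1))

def sharp_edge_point(v0, v1):
    """Infinitely sharp rule: plain midpoint of the edge."""
    return tuple((a + b) / 2.0 for a, b in zip(v0, v1))

def creased_edge_point(v0, v1, f0, f1, sharpness):
    """Blend the two rules by the (clamped) crease sharpness."""
    s = min(max(sharpness, 0.0), 1.0)
    smooth = smooth_edge_point(v0, v1, f0, f1)
    sharp = sharp_edge_point(v0, v1)
    return tuple((1.0 - s) * sm + s * sh for sm, sh in zip(smooth, sharp))
```

If two packages disagree even slightly on this blend (or on which sharpness values they accept), the subdivided meshes - and any displacement sculpted on them - stop lining up.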
 
Old 08 August 2012   #55
So if I understand correctly this will result in a new standard for model transfer like .obj or .fbx? Not only will it be interchangeable but also incredibly more efficient. I wonder if it will end up being integrated into games...

I know the cars in GT5 were modeled using subdivision surfaces then baked down, with a ton of time spent optimizing. I guess this way they could just render the control cages dynamically in the engine and subdivide them as the camera gets closer to each model? I'm guessing a lot of memory would also be saved, which could be spent on texture data?

In the Need for Speed titles they made mid-poly models for the cars, and a hell of a lot of work went into crafting the cages so they reflect well. Seems like this would speed up content creation a lot, with near-unlimited levels of detail, perhaps even sub-pixel mesh density? Maybe the lighting would choke the engine, I don't know too much about these things... but I'm pretty convinced we are nearing the stage where technical limitations of hardware compromising visuals will simply cease to exist. Everything is going to get a whole lot more dynamic; the interactive possibilities in such a situation would be endless.

Last edited by conbom : 08 August 2012 at 07:09 PM.
 
Old 08 August 2012   #56
Originally Posted by conbom: So if I understand correctly this will result in a new standard for model transfer like .obj or .fbx? Not only will it be interchangeable but also incredibly more efficient. I wonder if it will end up being integrated into games...


Not exactly. But, potentially, if you have a poly cage in 3dsmax and subdivide it, then go to Maya and subdivide the same poly cage, you'll get identical subdivs. This is really important for transferring assets between 3d apps and paint programs, because if you paint on a subdivided mesh in a paint app and then try to use the map in your 3d app, you'll get horrible artifacts if the 3d app isn't subdividing its mesh the same way. And finally getting usable creases between platforms will be super useful.

- Neil
 
Old 08 August 2012   #57
Quote: I suspect there are very few programmers in this particular field on this site capable of doing anything useful with this offer.


That is probably true: the primary targets for this code release are the major CG software vendors, along with the studios that are doing serious software development.

One of the motivations for this project was that 35 years after CC subdivision was published and 15 years after it was generalized, not a single authoring application supported the full feature-set, and worse, most of that code behaves differently - sometimes in minor ways, but often significantly so. I think this qualifies the algorithm as "gnarly". You could have the bestest renderer ever implementing these features (PRMan) and it still wouldn't matter, because your tool-chain doesn't give you a way to author content to the full extent of the spec. Earlyworm's post is spot-on.

The real kicker is when you try to add deforming sculpted assets to this sauce (Mudbox / ZBrush): unless the maths are *exactly* right, displacement simply isn't going to happen. Displacement texturing brings a lot of problems to the table that we have mostly been able to ignore or filter away with color textures so far. Do some experimentation with tangent-space normal mapping to get a small taste of the problems (or ask John Carmack). The early work shown here very conveniently sidesteps many of these problems: no deformations, object at the origin, REYES + Ptex filtering... A nice solution for background props, but definitely not ready for hero creature work.

OpenSubdiv solves some of these problems and is one of several necessary steps, following the early Ptex work, towards fully realizing the potential of this new generation of sculpting tools (Mari / Mudbox / ZBrush / 3DCoat).

Quote: Technically (I suppose) the object is not 'subdivided in realtime'; what presumably happens is that it only recalculates the positions of the C-C verts.


We may be arguing semantics here... but the only way to recalculate certain verts' positions is to apply the subdivision algorithm for each level of subdivision. So effectively, the object is re-subdivided from scratch for each frame (and in the case of the demo, the entire mesh is subdivided, as the code is not using the adaptive part of the algorithm yet). What makes this possible is a set of tables built by analyzing the topology of the mesh: you can move the control vertices around "freely", but you can't add or remove a vertex without modifying these tables. This isn't useless for modelling apps though, as there finally is a "reference" implementation out there to compare to numerically, and frankly, we may be able to push the CPU code to a point where it's a viable solution for a modeling app. The fact that it matches PRMan to 99.999% (precision threshold is 1e-6) is icing on the cake. Once the maths match between apps, making the data (and file formats) portable becomes trivial by comparison.
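The split between topology analysis (once) and evaluation (every frame) can be sketched with a deliberately simplified scheme - plain Python with invented names, nothing from the actual OpenSubdiv API: analysis emits, for each refined vertex, the indices and weights of the control vertices it depends on, and animating the cage then just re-runs the cheap weighted sums.

```python
# Hypothetical sketch of the table-driven approach described above,
# not OpenSubdiv's real data structures. Each refined vertex is stored
# as (control indices, weights); editing topology would invalidate
# the tables, but moving control points does not.

def build_tables_for_edge_midpoints(edges):
    """Toy 'topology analysis': each refined vertex is the midpoint of
    one control edge. Real Catmull-Clark tables are richer, but have
    the same shape (indices + weights, fixed per topology)."""
    return [((i, j), (0.5, 0.5)) for (i, j) in edges]

def apply_tables(tables, control_points):
    """Per-frame step: weighted sums over the control cage."""
    refined = []
    for indices, weights in tables:
        refined.append(tuple(
            sum(w * control_points[i][axis] for i, w in zip(indices, weights))
            for axis in range(3)))
    return refined

# Build once per topology...
tables = build_tables_for_edge_midpoints([(0, 1), (1, 2)])
# ...then animate the cage freely; only apply_tables runs per frame.
frame1 = apply_tables(tables, [(0, 0, 0), (2, 0, 0), (2, 2, 0)])
frame2 = apply_tables(tables, [(0, 0, 0), (4, 0, 0), (4, 2, 0)])
```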

Last edited by shehbahn : 08 August 2012 at 11:38 PM.
 
Old 08 August 2012   #58
Originally Posted by conbom: So if I understand correctly this will result in a new standard for model transfer like .obj or .fbx?


It's a common set of rules on how to subdivide geometry and its UVs. You'll still need to use OBJ, FBX or Alembic to transfer geometry between 3d applications.

Most 3D applications and renderers have the ability to subdivide geo - but they all use a different set of rules - this results in geometry which is often similar looking but not the same.

Especially when texturing and sculpting, you want to paint/sculpt on geo/UVs which match what the renderer is going to do with them at render time. In fact it doesn't really matter what 'rules' you use to subdivide the geometry, just as long as you're using the same 'rules' throughout the pipeline.

This is an old blog post I did way back which kind of demonstrates the problem... http://earlyworm.org/2008/subd-uv/
 
Old 08 August 2012   #59
Originally Posted by shehbahn: We may be arguing semantics here... but the only way to recalculate certain verts' positions is to apply the subdivision algorithm for each level of subdivision. So effectively, the object is re-subdivided from scratch for each frame (and in the case of the demo, the entire mesh is subdivided, as the code is not using the adaptive part of the algorithm yet). What makes this possible is a set of tables built by analyzing the topology of the mesh: you can move the control vertices around "freely", but you can't add or remove a vertex without modifying these tables.

From quickly reading the paper (Feature-Adaptive GPU Rendering of Catmull-Clark Subdivision Surfaces), I understand that the tables are created on the CPU, for each subdivision level, but only once. Then the GPU does the rest for each frame, including tessellation.
From a topological point of view, the tables (connectivity) are what I consider the subdivision part, ignoring tessellation.

Yes, it's probably semantics or interpretation.

If I were to do this using script code, I would subdivide the mesh k levels only once (tables/indices generated), then I would only move the verts. That would be many times faster than recreating the tables each frame.
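That "tables once, verts every frame" idea might look like this in runnable script-style code (a hypothetical 1-D midpoint scheme for brevity, not Catmull-Clark and not OpenSubdiv's data structures):

```python
# Toy sketch: analyze topology to k levels once, then per frame just
# chain the precomputed (indices, weights) tables. All names invented.

def refine_topology(n_points):
    """One level of a toy 1-D scheme: keep every point, insert a
    midpoint between neighbours. Returns the (indices, weights) table."""
    table = []
    for i in range(n_points):
        table.append(((i,), (1.0,)))                 # existing vertex kept
        if i + 1 < n_points:
            table.append(((i, i + 1), (0.5, 0.5)))   # new midpoint
    return table

def build_k_levels(n_points, k):
    """Done once per topology: one table per subdivision level."""
    tables, n = [], n_points
    for _ in range(k):
        t = refine_topology(n)
        tables.append(t)
        n = len(t)
    return tables

def evaluate(tables, points):
    """Done per frame: apply each level's table in turn to the
    (possibly moved) control points."""
    for table in tables:
        points = [sum(w * points[i] for i, w in zip(idx, ws))
                  for idx, ws in table]
    return points
```

Note the per-frame cost is pure arithmetic; the topology analysis (the expensive part) never reruns unless a vertex is added or removed.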
 
Old 08 August 2012   #60
Quote: then I would only move the verts. That would be many times faster than recreating the tables each frame.


The tricky part is that in order to move the verts you have to iterate through each level of subdivision successively. This is where the '99 Stam paper was also leveraging tables: pre-compute the Catmull-Clark weights for all possible topological configurations, which takes out a lot of the calculations. Unfortunately, semi-sharp creases introduce a wrinkle into this scheme, as the number of possible configurations is now infinite, so some of the computations have to be taken back out of the tables (I believe Autodesk still owns some patents on this stuff...). We could also talk about stencils, which is yet another method... They all have their strengths & weaknesses, but what we are really trying to tackle with OpenSubdiv is sculpted assets with lots of very heavy displacement. I don't think there is anything yet that can deal with this kind of massive geometry, so while the theory isn't exactly revolutionary, the implementation certainly opens up a lot of new and exciting paths IMHO.
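For illustration, a stencil in this sense folds the per-level tables into a single weighted sum over the control cage, so a refined (or limit) point is computed in one pass with no intermediate levels. A toy sketch with made-up tables and names (not Pixar's weights or API):

```python
# Hypothetical stencil composition: fold two per-level
# (indices, weights) tables into one table mapping control points
# straight to the finer level.

def compose(level_a, level_b):
    """For each fine vertex, substitute level_a's weights into
    level_b's, accumulating weights per control-vertex index."""
    stencils = []
    for idx_b, w_b in level_b:
        acc = {}
        for i, wb in zip(idx_b, w_b):
            idx_a, w_a = level_a[i]
            for j, wa in zip(idx_a, w_a):
                acc[j] = acc.get(j, 0.0) + wb * wa
        items = sorted(acc.items())
        stencils.append((tuple(i for i, _ in items),
                         tuple(w for _, w in items)))
    return stencils

def apply_stencils(stencils, control):
    """One weighted sum per fine vertex; no intermediate levels
    are ever materialized."""
    return [sum(w * control[i] for i, w in zip(idx, ws))
            for idx, ws in stencils]
```

The trade-off hinted at above: stencils make evaluation a flat, parallel-friendly pass, but the tables grow denser with each composed level, whereas per-level tables stay sparse but must be applied sequentially.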
 
CGSociety
Society of Digital Artists
www.cgsociety.org
