Dicing nCloth for VERY high resolution near camera


#1

Hello Everybody.

I am struggling with a shot where a flag is close to camera and trails off into the distance. The flag is supposed to be HUGE and needs to have a lot of lovely detail flowing along it. Obviously this needs to be a high resolution simulation near the camera and lower resolution far from the camera, but how is that achieved?

I have tried slicing the mesh and adding a Weld nConstraint, but where the meshes join I see tension and artifacts.

Can anyone suggest a good method for splitting the mesh into portions, subdividing the nearest pieces, and joining them up again for the simulation?

I’m tearing my hair out (what little is left) and REALLY need some help.

Thanks

Dan.


#2

I don’t quite get your question.

- If you mean the resolution of the simulation itself: subdivide the flag mesh and simulate.

- If you mean the resolution of the flag texture: increase the texture resolution.

I assume you are rendering a video, so you don’t have to simulate at high resolution all the way to the end; you could split the process and switch to a low resolution simulation once the camera is far enough away.


#3

How dense does the geometry need to be for the close up sections?
Can you not simulate that much geometry for the whole flag, or are you just trying to optimize?

Because nCloth results can change so much with different mesh densities, I’m not quite sure of a way to simulate different resolutions and have them match when you bring them together.

Jake.


#4

I don’t remember if there is a way to put a subdivision surface before the nCloth node, but if there is, you could use distance to camera to tessellate the cloth.
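The distance-to-camera tessellation idea above could be sketched as plain logic like this. This is illustrative only, not Maya API: the function name, the reference distance, and the "one extra level per halving of distance" rule are all assumptions.

```python
import math

def subdivision_level(dist_to_camera, ref_distance=10.0, max_level=3):
    """Pick a subdivision level for a cloth patch based on camera distance.
    Assumption: add one level each time the distance halves below
    ref_distance, capped at max_level so the sim stays affordable."""
    if dist_to_camera >= ref_distance:
        return 0
    level = int(math.log2(ref_distance / dist_to_camera))
    return min(level, max_level)
```

You would then feed the chosen level into whatever subdivision step sits upstream of the nCloth node; the mapping here is just one reasonable falloff, not a rule from the software.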

I dug this up:

http://forums.cgsociety.org/showthread.php?t=126951

Also, you can subdivide the mesh itself in segments. Don’t use any constraints; just select parts of the mesh and subdivide them.

Remember also that there are two meshes you are dealing with here: each nCloth object is composed of the input mesh (pMeshShape) and the output mesh (outputCloth).

Have a look here:

http://help.autodesk.com/view/MAYAUL/2015/ENU/?guid=GUID-4004174F-8C3A-43CC-8B1C-D7DFD3EC8194

This may open the door to what you need.

You can edit the nCloth output mesh.

You can also use the output mesh as the source for a wrap deformer, which lets you use another mesh as the actual render cloth.

Just tossing a few things out that might lead to a resolution depending on what you need.


#5

You can just smooth faces instead of chopping and welding.
You will still run into some issues but it may work for your shot.
It will probably require more tweaking than if you just ran a really high res sim for the whole object, but give it a go. :wink:

[VIMEO]114515344[/VIMEO]

Attached is a test scene. Forces will react differently depending on number of polys in a mesh. You will have to play with all the Maps (Mass, Lift, Drag, Resistance, etc) to balance the dense vs low res areas better than my quick test.


#6

Well, this is wonderful.

Thank you all for your replies.

Howard, I’ve had a look inside your scene, and I was wondering why you triangulated the mesh. I wouldn’t have thought of doing that. Any particular reason?
I can still see some pulling and tension at the boundaries, but it’s a lot better than the results I was getting by splitting the mesh up and welding it back together.
Thanks very much for making that setup and video. I appreciate it!
I’ll have a play and see if I can use your method.

I did forget to mention that the camera is travelling across the flag, and also two objects will come into view and tear the flag along an edge loop, oh, and I also have to match a previs flag animation. But apart from that it’s quite simple, really.

Thanks again, everyone!

Dan


#7

Sometimes triangulating it will prevent the artifacts where the resolution changes, but either way I think (does Duncan have an answer?) you will always get some sort of artifacts when modeling like this… maybe a better modeler will have an answer.


#8

I’d suggest subdividing the input mesh near the camera as mentioned, and keeping it all one cloth. I don’t think one should need to triangulate, though, and keeping it mostly quads will result in a more even stretch resistance. The output mesh of the cloth is set to have a fixed quad flip for triangulation, so the quads are consistently triangulated the same way nucleus does them, and one doesn’t get the adaptive, shape-based triangulation that could flip during an animation. However, in the resolution-change regions one could get faces with more than 4 edges, so perhaps there could still be some problems in those regions that might be fixed by triangulation.

However, there will be a bit of a problem in that the denser regions will effectively act as if they have more mass and exert a disproportionate pull on the lower res portions. One could paint mass per vertex lower in the high res regions, although one would probably also need to paint drag per vertex lower as well, because the lower mass will result in more air drag. Also, the bend/stretch resistance will appear to be higher in the lower res regions. For stretch this is likely a non-issue, as one would just set the stretch resistance and substeps very high to keep the stretch minimal everywhere. However, if you want a little bend resistance then you might need to also paint the bend resistance per vertex lower in the lower resolution regions (although perhaps this won’t affect your simulation that much).
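A rough way to see the mass compensation described above: each linear subdivision splits a quad into four, so vertex density roughly quadruples per level, and per-vertex mass would need scaling down by about a quarter per level to keep mass per unit area even across the flag. A minimal sketch of that arithmetic (the factor is approximate, since boundary vertices are shared, and this is not a formula taken from nCloth itself):

```python
def per_vertex_mass_scale(subdiv_level):
    """Approximate factor to paint onto per-vertex mass in a region
    that has been subdivided subdiv_level times, so the region's mass
    per unit area stays roughly equal to the unsubdivided cloth.
    Each subdivision quadruples the face (and roughly vertex) count,
    hence the 1/4 per level. Illustrative assumption, not Maya API."""
    return 1.0 / (4 ** subdiv_level)
```

The same kind of per-level scaling would be a starting point for the drag map, which you would then refine by eye, as the posts above suggest.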


#9

Yep, I definitely had to play with some of the maps to compensate.

Pretty cool that it somewhat works!
:smiley:


#10

Thank you Duncan. Lots of help there.

Is there a difference in using a poly smooth instead of an ‘add divisions’ or is it the same thing?

Will having denser regions in a single mesh affect any nConstraints? Specifically, I will need a Tear constraint acting in the region of high-to-low mesh density (to tear along a specific edge loop), and an Attract to Matching Mesh constraint over the whole nCloth.

Thanks.


#11

First, don’t use a Tear constraint. Instead, pre-tear the vertices you want and use a Weld Adjacent Borders constraint.

And if you tear in the middle of the high-to-low area, you will definitely get artifacts; no way around that, I think.

I suggest you model the resolution mesh change in a different area than where the tear is if possible.


#12

Howard, you are RIGHT about that!

I’ve changed my method now. Tearing along an edge loop while simulating the cloth won’t work - the edge loop keeps moving around!

I have decided to sim the whole cloth (with enhanced resolution near the camera), then split the outputCloth mesh along the edge loop POST sim. Then I apply a bend deformer on the separated pieces to create the opening up of the cut.

This had better work, I’m running out of version numbers!

Thanks again for all your help.

Dan


#13

Tearing the output mesh the way you describe could work ok if you just want to fray some edges, but I’m not sure how well the deformer will work as the mesh deforms.

The tear constraint will be more efficient and controllable if, when you create it, you select all the edges on the cloth you wish to be tearable. Then set the glue strength to determine the break point (it may help to keyframe it).
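The keyframed glue strength could follow a simple ramp: full strength until the tear frame, then a fade to zero over a few frames so the tear opens gradually instead of popping. A hedged sketch of that curve shape in plain Python (glueStrength is the real constraint attribute, but the ramp parameters and fade behaviour here are illustrative assumptions, not how nCloth evaluates it):

```python
def glue_strength(frame, tear_frame, hold=1.0, broken=0.0, fade=5):
    """Value to key onto the tear constraint's glueStrength at a given
    frame: hold full strength until tear_frame, then ramp linearly to
    zero over 'fade' frames. The linear fade is just one plausible
    keyframe shape for a controlled tear."""
    if frame <= tear_frame:
        return hold
    t = min((frame - tear_frame) / float(fade), 1.0)
    return hold + (broken - hold) * t
```

In practice you would bake a few samples of this curve as keyframes on the constraint and adjust the fade length until the opening speed looks right.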

This will have much the same effect as the pre tearing of the verts that Howard suggests. The difference is that the tear constraint also sets up a merge vertex and smooth normals downstream of the output cloth mesh, so that the tear edge is smooth shaded. Note that the merge means that the topology of the input mesh no longer matches the output mesh, and also the topology of the final mesh then changes as it tears.

Note that with the topology change any additional constraints after a tear constraint should be created by selecting verts on the input mesh, not the output, as they may no longer match.


#14

Thank you, Duncan, for the advice.

I think splitting the mesh POST-sim will give a cleaner result for this particular shot.

Here is my method:

  1. Import alembic mesh from Softimage

  2. Duplicate, keeping input connections. Soften edges and merge any stray verts.

  3. Duplicate, keeping input connections. Deform with an animated lattice to move the mesh to where the cutting geometry is going to be.

  4. Duplicate, keeping input connections. Now smooth the mesh in areas of interest (using a SOuP group node for selections). Subdivide up to three times.

  5. Go to start frame and duplicate the mesh once more, but without input connections.

  6. Convert to nCloth and make an Attract to Matching Mesh constraint, choosing the mesh I made in step 4 (subdivided, deformed and smoothed, but still with the Alembic history attached).

  7. Simulate the nCloth

  8. Select the outputCloth and choose the edge loop to split and separate the mesh.

  9. Add some bend deformers and keyframe the opening of the ‘cut’

The most problematic area is the Attract to Matching Mesh nConstraint, which is producing some bouncing and stretching, but I guess that will disappear with tweaking of the damping and stretch resistance.


#15

The damp and motionDrag on the constraint can help with the bouncing. Also, the constraintStrength and substeps will have an effect (it depends on the desired tracking to the source mesh).

A somewhat simpler setup would be to just use input mesh attract on the nCloth instead of duplicating the mesh and creating a matching surface constraint, but I assume you likely have a reason for using that.


#16

Thanks again, Duncan.

The Input Mesh Attract is extremely helpful and a lot simpler than my previous method.
In fact it is probably the best way to do a shot like this.

Only experience will teach you the best way to approach a particular type of shot, I suppose. No matter how good your technical skills and understanding of the tools available, choosing the best route is the mysterious part of this job. I’m learning (slowly)!


#17

The input attract is used quite a bit and has gone through lots of revisions to help deal with things like wiggle and off by one step issues. There are not many cases where the attract to matching mesh would be required.