Texturing MPxSurfaceShape in Viewport 2.0


#1

Hello,

I coded a custom MPxSurfaceShape similar to the apiMeshShape example in the SDK, displayed in Viewport 2.0 with SubSceneOverride and GeometryOverride.

I want to implement texturing in Viewport 2.0, but I don’t know how to use the texture container (MTexture or MTextureManager, I guess) from SubSceneOverride/GeometryOverride, and there is no example of it inside apiMeshShape.

Could anybody post a simple example? What would be the Viewport 2.0 equivalent of the classic UV containers and the glTexCoord2f() calls used in the old Maya viewport?

Many thanks.


#2

So GeometryOverride is not meant to provide shaders; it purely overrides the geometry definition of a surface. The benefit of this approach is that Maya handles the shader support, so standard materials should just work out of the box.

SubSceneOverride is a nicer interface to work with. In the SubScene case you are required to provide both the geometry data (vertex/index buffers) and the shader as part of the render item. Here you would need to look at MShaderManager, or, if you have a shader in the scene, then in 2016 you should be able to use MRenderItem::setShaderFromNode() to link the shading pipeline of a surfaceShader to the render item.
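To make those two options concrete, here is a rough, untested sketch of what the shader half of an MPxSubSceneOverride::update() might do; the function name and the shadedItem/shaderNode/shapePath arguments are placeholders, not the actual apiMeshShape code:

// Sketch (untested): the two ways to give a SubSceneOverride render item a shader.
// "shadedItem", "shaderNode" and "shapePath" are hypothetical names.
#include <maya/MViewport2Renderer.h>
#include <maya/MShaderManager.h>
#include <maya/MHWGeometry.h>
#include <maya/MDagPath.h>
#include <maya/MObject.h>

using namespace MHWRender;

void setShaderOnItem(MRenderItem* shadedItem,
                     const MObject& shaderNode,
                     const MDagPath& shapePath)
{
    MRenderer* renderer = MRenderer::theRenderer();
    if (!renderer || !shadedItem) return;
    const MShaderManager* shaderMgr = renderer->getShaderManager();
    if (!shaderMgr) return;

    // Option 2 (Maya 2016+): let an existing surfaceShader node in the scene
    // drive the render item; Maya then handles textures and lighting for you.
    if (!shaderNode.isNull() &&
        shadedItem->setShaderFromNode(shaderNode, shapePath))
    {
        return;
    }

    // Option 1: a stock shader owned by the plug-in; every parameter
    // (including any textures) must be set by the plug-in itself.
    if (MShaderInstance* blinn =
            shaderMgr->getStockShader(MShaderManager::k3dBlinnShader))
    {
        shadedItem->setShader(blinn);
    }
}

This mirrors what the apiMeshShape code quoted later in the thread does, just stripped down to the two calls that matter.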

Texturing is part of the shader pipeline, so you would have to specify your texturing as part of your shader.

Of course, if you want full draw control over your surface and want to specify your texturing directly, then perhaps MPxDrawOverride is what you really want.


#3

Thanks for the reply!

I tried to do it for the SubSceneOverride with the apiMeshShape example. I created an MShaderInstance fTexturedShader similar to fShadedShader, an MVertexBuffer fUVBuffer alongside fPositionBuffer and fNormalBuffer, and an MIndexBuffer fTexturedIndexBuffer.

When updating geometry I added positions, normals and uvs:
MVertexBufferArray texturedBuffers;
texturedBuffers.addBuffer("positions", fPositionBuffer);
texturedBuffers.addBuffer("normals", fNormalBuffer);
texturedBuffers.addBuffer("uvs", fUVBuffer);
setGeometryForRenderItem(*texturedItem, texturedBuffers, *fTexturedIndexBuffer, &bounds);

When filling the buffers I did:
// Acquire vertex buffer resources
const MVertexBufferDescriptor posDesc("", MGeometry::kPosition, MGeometry::kFloat, 3);
const MVertexBufferDescriptor normalDesc("", MGeometry::kNormal, MGeometry::kFloat, 3);
const MVertexBufferDescriptor uvDesc("", MGeometry::kTexture, MGeometry::kFloat, 2);

fPositionBuffer = new MVertexBuffer(posDesc);
fNormalBuffer = new MVertexBuffer(normalDesc);
fUVBuffer = new MVertexBuffer(uvDesc);

float* positions = (float*)fPositionBuffer->acquire(meshGeom->vertices.length(), true);
float* normals = (float*)fNormalBuffer->acquire(meshGeom->vertices.length(), true);
float* uvs = (float*)fUVBuffer->acquire(meshGeom->vertices.length(), true);

// Fill vertex data for shaded/wireframe
int vid = 0;
int pid = 0;
int nid = 0;
int uvid = 0;
int uv_len = meshGeom->uvcoords.uvcount();
for (unsigned int i=0; i<meshGeom->vertices.length(); i++)
{
    MPoint position = meshGeom->vertices[i];
    positions[pid++] = (float)position[0];
    positions[pid++] = (float)position[1];
    positions[pid++] = (float)position[2];

    MVector normal = meshGeom->normals[i];
    normals[nid++] = (float)normal[0];
    normals[nid++] = (float)normal[1];
    normals[nid++] = (float)normal[2];
    
    if (uv_len > 0)
    {
        // If we are drawing the texture, make sure the  coord
        // arrays are in bounds.
        float u, v;
        int uvId1 = meshGeom->uvcoords.uvId(vid);
        if ( uvId1 < uv_len ) {
            meshGeom->uvcoords.getUV( uvId1, u, v );
            uvs[uvid++] = u;
            uvs[uvid++] = v;
        }
    }
    
    vid++;
}
fPositionBuffer->commit(positions); positions = NULL;
fNormalBuffer->commit(normals); normals = NULL;
fUVBuffer->commit(uvs); uvs = NULL;

To fill the index buffers I coded:
fTexturedIndexBuffer = new MIndexBuffer(MGeometry::kUnsignedInt32);
unsigned int* texturedBuffer = (unsigned int*)fTexturedIndexBuffer->acquire(3*numTriangles, true);
// Fill index data for textured
unsigned int base = 0;
unsigned int idx = 0;
for (int faceIdx=0; faceIdx<meshGeom->faceCount; faceIdx++)
{
    // Ignore degenerate faces
    int numVerts = meshGeom->face_counts[faceIdx];
    if (numVerts > 2)
    {
        for (int v=1; v<numVerts-1; v++)
        {
            texturedBuffer[idx++] = meshGeom->face_connects[base];
            texturedBuffer[idx++] = meshGeom->face_connects[base+v];
            texturedBuffer[idx++] = meshGeom->face_connects[base+v+1];
        }
        base += numVerts;
    }
}
fTexturedIndexBuffer->commit(texturedBuffer); texturedBuffer = NULL;

However, the plug-in does not work: when I plug a texture into the surfaceShader node, no shading is shown.
Is the code written properly, at least in the way the UV buffer and the index buffer are filled?

Many thanks!


#5

I don’t think the issue is with your geometry population. You didn’t share your MShaderInstance creation, so I can only guess. There are two ways of specifying a shader on a render item:

  1. You create an MShaderInstance from MShaderManager (via a file or existing shader)
  2. You use MRenderItem::setShaderFromNode() where you specify the exact surfaceShader node in the scene that you want to use to render the geometry.

It sounds like you’re trying to mix 1 & 2: getting an MShaderInstance but expecting it to behave like a surfaceShader node. When you use 1, your plug-in is responsible for setting up all of the MShaderInstance parameters (including textures); see the MShaderInstance class definition. When you use 2, you shouldn’t need to do anything to set up the textures.
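For option 1, a rough, untested sketch of binding a file texture to an MShaderInstance could look like the following; the parameter name passed in is a placeholder that you would have to discover (via MShaderInstance::parameterList()) for whichever shader you actually use:

#include <maya/MViewport2Renderer.h>
#include <maya/MShaderManager.h>
#include <maya/MTextureManager.h>
#include <maya/MString.h>
#include <maya/MStatus.h>

using namespace MHWRender;

// Bind a file texture to a plug-in-owned shader instance. "mapParam" is a
// hypothetical parameter name; query the real one with shader->parameterList().
bool bindFileTexture(MShaderInstance* shader,
                     const MString& filePath,
                     const MString& mapParam)
{
    MRenderer* renderer = MRenderer::theRenderer();
    if (!renderer || !shader) return false;

    MTextureManager* textureMgr = renderer->getTextureManager();
    if (!textureMgr) return false;

    // Load (or reuse) the texture; this is one of several acquireTexture() overloads.
    MTexture* texture = textureMgr->acquireTexture(filePath, 0);
    if (!texture) return false;

    MTextureAssignment assignment;
    assignment.texture = texture;
    return shader->setParameter(mapParam, assignment) == MS::kSuccess;
}

The texture stays owned by the texture manager, so remember to release it (MTextureManager::releaseTexture()) once the shader no longer needs it.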

And when you are adding code to your post, please remember to use the code tag so that the formatting is proper (the # icon in the toolbar).


#6

Hello Keilun,

About the MShaderInstance definition, the example in apiMeshShape does both: it checks whether there’s a surfaceShader plugged in, and otherwise assigns a stock shader of kind k3dBlinnShader (see the attached code). I need to ask you:

  1. If I use a surfaceShader node and plug it in with MRenderItem::setShaderFromNode(), are all the texturing/lighting parameters (UV mapping and normals) then specified inside the surfaceShader?

  2. Are such surfaceShaders defined by MPxShadingNodeOverride (MPxSurfaceShadingNodeOverride in my case, as I’m using MPxSurfaceShape)?

  3. If I use a stock shader instead, I don’t really know how. For example, right now a k3dBlinnShader is used. The API also lists k3dSolidTextureShader (“An instance of a stock solid texture shader for 3d rendering”) and k3dFloat2NumericShader (“An instance of a stock shader for drawing 2 float values per vertex for 3d rendering”). I don’t know whether these are what I’m looking for, nor how to use them, because the documentation doesn’t explain it. If, for instance, I use k3dSolidTextureShader, does that shader assume that I’m storing a buffer of UV coordinates and positions/normals, and look for it when rendering the surface?

Thank you very much!

	// Update shader for textured item
	if (fMesh->materialDirty() || (!fTexturedShader && !texturedItem->isShaderFromNode()))
	{
		bool hasSetShaderFromNode = false;
		
		// Grab shading node from first component of first instance of the
		// object and use it to get an MShaderInstance. This could be expanded
		// to support full instancing and components if necessary.
		MObjectArray sets, comps;
		if (node.getConnectedSetsAndMembers(0, sets, comps, true))
		{
			for (unsigned int i=0; i<sets.length(); i++)
			{
				MFnDependencyNode fnSet(sets[i], &status);
				if (status)
				{
					MPlug shaderPlug = fnSet.findPlug("surfaceShader");
					if (!shaderPlug.isNull())
					{
						MPlugArray connectedPlugs;
						shaderPlug.connectedTo(connectedPlugs, true, false);
						fLinkLostCallbackData.push_back(new ShadedItemUserData(this));
						if (connectedPlugs.length() >= 1 &&
							texturedItem->setShaderFromNode(
														  connectedPlugs[0].node(),
														  instances[0],
														  shadedItemLinkLost,
														  fLinkLostCallbackData.back()))
						{
							assert(texturedItem->isShaderFromNode());
							hasSetShaderFromNode = true;
							break;
						}
					}
				}
			}
		}
		if (!hasSetShaderFromNode)
		{
			if (!fTexturedShader)
			{
				fTexturedShader = shaderMgr->getStockShader(
														  MShaderManager::k3dBlinnShader);
			}
			texturedItem->setShader(fTexturedShader);
			assert(!texturedItem->isShaderFromNode());
		}
		
		fMesh->setMaterialDirty(false);
	}


#7

  1. Yes they are contained by the node. So suppose you grab an MObject for the node ‘blinn1’ in the scene. And you invoke MRenderItem::setShaderFromNode( blinnObj ). Then all of the shading will be handled by the implementation of blinn1. Since the implementation of the blinn shader is internal, there’s nothing you need to do. In the scene, you can add a file texture onto the blinn shader like you normally would and the shading is all handled internally.

If you specify a surface shader that is a plug-in shader, then it will invoke the plug-in shader when it comes time to render that render item.

  2. Per my answer in 1, the implementation depends on the node that you link to the render item.

  3. This is where it gets kind of painful. I don’t know if we document all of the parameters. That said, you could programmatically query all of the parameters of the stock shaders via MShaderInstance::parameterList and infer their usage (a sketch follows below). Perhaps there’s documentation on the stock shaders somewhere, but I have yet to find it.
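A minimal sketch of such a query, assuming you already hold an MShaderInstance obtained from MShaderManager, might be:

#include <maya/MGlobal.h>
#include <maya/MString.h>
#include <maya/MStringArray.h>
#include <maya/MShaderManager.h>

// Print the parameter names of a shader instance (e.g. a stock shader such as
// k3dSolidTextureShader) to the Script Editor so their usage can be inferred.
void dumpShaderParameters(MHWRender::MShaderInstance* shader)
{
    if (!shader) return;

    MStringArray params;
    shader->parameterList(params);
    for (unsigned int i = 0; i < params.length(); ++i)
        MGlobal::displayInfo(MString("shader parameter: ") + params[i]);
}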


The shader implementation for apiMeshShape looks fine to me, so perhaps the issue is in how you expect it to be used. The way apiMeshShape works is that it expects a shader to be assigned to the shape, which involves some understanding of how shader assignments work. At its most basic level, you need to:

  1. Create your surfaceShader (say blinn1)
  2. Create a shadingEngine (say blinn1SG)
  3. Connect blinn1.outColor to blinn1SG.surfaceShader. Now we have a shader that can be assigned.
  4. The shadingEngine is a special type of objectSet called a partition which ensures that members can only be a member of one partition at a time. The shadingEngine defines the set of objects or components that are to be shaded by the shader connected on the surfaceShader attribute.
  5. So now we want to assign the shader to the mesh. I’m only going to cover object assignment without instancing here to keep it simple.
  6. Every shape will have an inherited attribute called instObjGroups. This is an array attribute.
  7. Connect yourShape.instObjGroups[0] to blinn1SG.dagSetMembers[x]. I put x in there because the index doesn’t matter. So you could do something like:
connectAttr -na yourShape.instObjGroups[0] blinn1SG.dagSetMembers;

The -na flag means nextAvailable and works for destination attributes whose index does not matter, such as object set membership. (A C++ equivalent of this assignment is sketched after this list.)

  8. That connection means that your shape is now part of that object set.
  9. instObjGroups is an array and index [0] is always the original object path. If you want to work with instances, then you would connect an index above 0 corresponding to whichever instance you need. The instanceNumber is accessible off of the MDagPath class.
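If you prefer to do the same assignment from C++ inside your plug-in, a hedged sketch using MFnSet (the names sgObject and shapePath are assumptions) might be:

#include <maya/MFnSet.h>
#include <maya/MDagPath.h>
#include <maya/MObject.h>
#include <maya/MStatus.h>

// Object-level shader assignment: add the shape (by DAG path) to the
// shadingEngine set (e.g. blinn1SG). Components and instancing are not handled.
MStatus assignShadingGroup(MObject sgObject, const MDagPath& shapePath)
{
    MStatus status;
    MFnSet fnSet(sgObject, &status);
    if (status != MS::kSuccess)
        return status;

    return fnSet.addMember(shapePath);
}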

Component level assignments get a lot more complicated, so I’ll leave it at that for now. Give that a try.

  1. Create your shape,
  2. Assign the shader like above
  3. See if it works. Step into the code to make sure it’s going into the path you expect it to.

#8

Hello Keilun,

Thank you for your detailed answer. Indeed, the apiMeshShape example in the SDK works. I “solved” the problem by swapping the 2015 devkit for the new 2016 one. I had also posted this issue on the Autodesk forum (http://forums.autodesk.com/t5/maya-programming/texturing-mpxsurfaceshape-in-viewport-2-0/td-p/6246358).

However, I’m not happy. With this setShaderFromNode() command and the use of a surfaceShader, I feel like VP2 is “cooking” the scene on its own, opaquely, and I lose control of the mapping. For example, in that Autodesk forum thread you can see a picture of a cube with a checker texture. The texture is not mapped well: the grid of squares appears on only two faces of the cube, and on the other faces it’s distorted. I have no idea how to change it and map it uniformly.

With the legacy viewport it was great, because I could use all my OpenGL knowledge and I knew exactly what I was doing with every line of code. With SubSceneOverride I struggle with a lot of commands and buffer/index code, most of which is not well documented. However, it’s true that the renders now look very good.

I wonder whether the second part of your post allows for such control. How exactly do I use a shadingEngine? Does it involve the Maya interface and the Hypershade window? I’m a total Maya noob; I only know how to code C++ projects and launch them from the MEL command window, nothing else yet…

Many thanks!


#9

So the key thing to remember is that VP2 uses only programmable shaders in its pipeline, which is why you have to jump through more hoops than usual to get things up and running. If you want the level of control that you used to have with fixed-function OpenGL, then I would encourage you to look at writing your own hardware shader that achieves the effect you’re after. Once you have the shader, you can use MShaderManager to create an MShaderInstance from a Cg/GLSL/HLSL effect file and specify the parameters that you’d like. If you so desired, you could also augment the buffer streams that you specify from within your plug-in.
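As a rough, untested sketch of that route (the effect file name “myTexturedEffect” and the technique name “Main” are placeholders for whatever effect you author):

#include <maya/MViewport2Renderer.h>
#include <maya/MShaderManager.h>

using namespace MHWRender;

// Acquire a shader instance compiled from a custom effect file.
MShaderInstance* acquireCustomEffect()
{
    MRenderer* renderer = MRenderer::theRenderer();
    if (!renderer) return NULL;

    const MShaderManager* shaderMgr = renderer->getShaderManager();
    if (!shaderMgr) return NULL;

    MShaderInstance* shader =
        shaderMgr->getEffectsFileShader("myTexturedEffect", "Main");
    // Parameters declared in the effect (colors, textures, samplers, ...) can
    // then be set explicitly through MShaderInstance::setParameter().
    return shader;
}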

Using MRenderItem::setShaderFromNode really targets the user who wants to create a render item and let built-in Maya shaders drive it. So yes, in order to use it effectively, you would need to have a grasp of how to set up Maya shaders.

There are plenty of tutorials on how to set up materials in Maya, so I won’t detail that here. I would encourage you to run through one and then play around with the surface shader that you’ve bound to your render item so you get a feel for what’s possible. If it’s still not what you’re after, and you’d prefer to have everything baked into your plug-in, then I would go the route of providing your own programmable shader.


#10

Hello Keilun,

After significantly upgrading my debugging skills, I’m happy to have the amount of control I was looking for. I’m using an MPxGeometryOverride instance. I create data streams for normals and UV coordinates, following the example in apiMeshShape, and the textures are now properly mapped, just like before with OpenGL, and nicely rendered.

For the record, for whoever faces similar problems (never give up!), I post here some hints/tricks/open questions you may not have noticed:

  1. To launch an instance of your node, don’t do it by double-clicking a .mel file full of commands; write the commands inside Maya, in the Script Editor, instead. When I launched an apiMeshShape node with the .mel from the SDK, I just couldn’t change its shader node from the default Blinn, nor assign a texture. Then I simply typed
createNode apiMesh; 
createNode apiMeshCreator;
connectAttr apiMeshCreator1.outputSurface apiMesh1.inputSurface ;
sets -edit -forceElement initialShadingGroup |transform1|apiMesh1;

and the node was completely tunable, able to receive texture mapping, etc. I have no clue about the reason for this.

  2. If you configure a new shader for a given render item, you have to set it properly so that the requirements list detects it. Do not forget to pass a reference to its string identifier when setting it, i.e. if you have an MRenderItem “newItem” and want to attach a shader under the name “snew”, write
newItem->setShader(shader, &snew);
  3. If you have trouble rendering items (they simply don’t appear in the frame), take a look at how the MRenderItem was created, i.e. its RenderItemType. I discovered that a “DecorationItem” is usually not rendered (documentation: “A render item which should be considered to be part of the viewport UI (e.g. selection wireframe, components, etc.)”), while a “MaterialSceneItem” usually is (documentation: “A render item which represents an object in the scene that should interact with the rest of the scene and viewport settings (e.g. a shaded piece of geometry which should be considered in processes like shadow computation, viewport effects, etc.)”). I say “usually” because sometimes the rule does not apply (I found this in some cases when using SubSceneOverride to render the wireframe of the cube/sphere). I’d really appreciate some clarification on this, at least on what DecorationItem, MaterialSceneItem, NonMaterialSceneItem, etc. really mean.

  4. If you want a particular render item that is currently being rendered, say “vertexItem”, to disappear in subsequent frame renders, disable it:

vertexItem->enable(false);

However, this does not mean that re-enabling it will necessarily make it appear in the render again (which brings us back to point 3).

  5. Having worked with both GeometryOverride and SubSceneOverride, I still can’t figure out the real, functional difference between them, always in the context of handling MPxSurfaceShape and MPxNode nodes. The Autodesk documentation (http://help.autodesk.com/view/MAYAUL/2016/ENU//?guid=__files_GUID_75A7DB0B_A00E_4E3F_9BD7_436F9ED3B0DC_htm) says that “MPxSubSceneOverride falls in between MPxDrawOverride and MPxGeometryOverride with respect to the amount of control given to the implementation. MPxDrawOverride allows full control over the entire draw of the object, but as a result the implementation is entirely responsible for the draw. MPxGeometryOverride is relatively simple to work with; but as a result, only limited control of the draw is available.”
    I personally think that GeometryOverride, despite requiring longer and more sophisticated code, allows for more control than SubSceneOverride. I have still not played with MPxDrawOverride. Would you know why the documentation says that?

Many thanks!


#11

GeometryOverride provides you with the least control. The reason is that you’re only expected to provide the geometry buffers for the requested geometry requirements. You can insert additional render items into the list if you want, but managing the list is a real pain: you have no real index into it, and it already comes prepopulated with items from the standard Maya geometry pipeline. On top of that, you’re entirely at the mercy of the callbacks from Maya; if it doesn’t call you back when you were hoping it would, you have to find another way or you’re out of luck. Due to all of the overhead, GeometryOverride doesn’t perform as well when you have a lot of render items to add/manage.
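For reference, a bare, untested skeleton of that flow (class and member names are placeholders); the override only fills in the buffers Maya asks for, while shaders come from whatever material is assigned to the shape:

#include <maya/MPxGeometryOverride.h>
#include <maya/MViewport2Renderer.h>
#include <maya/MHWGeometry.h>
#include <maya/MDagPath.h>
#include <maya/MObject.h>

using namespace MHWRender;

class MyGeometryOverride : public MPxGeometryOverride
{
public:
    static MPxGeometryOverride* Creator(const MObject& obj)
    {
        return new MyGeometryOverride(obj);
    }

    virtual DrawAPI supportedDrawAPIs() const { return kAllDevices; }

    virtual void updateDG()
    {
        // Pull whatever data you need from the shape's plugs here.
    }

    virtual void updateRenderItems(const MDagPath& path, MRenderItemList& list)
    {
        // Optional: tweak or append render items; the list already contains
        // the items Maya created for the assigned materials.
    }

    virtual void populateGeometry(const MGeometryRequirements& requirements,
                                  const MRenderItemList& renderItems,
                                  MGeometry& data)
    {
        const MVertexBufferDescriptorList& descList =
            requirements.vertexRequirements();
        for (int i = 0; i < descList.length(); ++i)
        {
            MVertexBufferDescriptor desc;
            if (!descList.getDescriptor(i, desc)) continue;
            // Create and fill one buffer per requested semantic
            // (kPosition, kNormal, kTexture, ...), as apiMeshShape does.
            MVertexBuffer* buffer = data.createVertexBuffer(desc);
            // ... acquire()/fill/commit() ...
            (void)buffer;
        }
        // Index buffers are created per render item via data.createIndexBuffer().
    }

    virtual void cleanUp() {}

private:
    MyGeometryOverride(const MObject& obj) : MPxGeometryOverride(obj) {}
};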

SubSceneOverride provides you with “more” control in that you are responsible for defining the full render item list. That list is indexed and you can search it very quickly for additions/removals. You’re responsible for defining the geometry buffers and the shaders for each item. This interface is invoked per frame, and you can control when it calls update on your class, so you have more control than with the limited callbacks of the GeometryOverride interface.
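Again as a rough, untested sketch (class, item and flag names are placeholders): Maya calls requiresUpdate() each frame and only calls update() when you say so, and you own the whole render item list:

#include <maya/MPxSubSceneOverride.h>
#include <maya/MFrameContext.h>
#include <maya/MShaderManager.h>
#include <maya/MHWGeometry.h>
#include <maya/MObject.h>

using namespace MHWRender;

class MySubSceneOverride : public MPxSubSceneOverride
{
public:
    static MPxSubSceneOverride* Creator(const MObject& obj)
    {
        return new MySubSceneOverride(obj);
    }

    virtual DrawAPI supportedDrawAPIs() const { return kAllDevices; }

    virtual bool requiresUpdate(const MSubSceneContainer& container,
                                const MFrameContext& frameContext) const
    {
        // Only ask for an update when geometry or shader assignment changed.
        return fDirty;
    }

    virtual void update(MSubSceneContainer& container,
                        const MFrameContext& frameContext)
    {
        MRenderItem* shadedItem = container.find("myShadedItem");
        if (!shadedItem)
        {
            // Create the render item once; the container takes ownership.
            shadedItem = MRenderItem::Create("myShadedItem",
                                             MRenderItem::MaterialSceneItem,
                                             MGeometry::kTriangles);
            container.add(shadedItem);
        }
        // ... assign a shader and call setGeometryForRenderItem() here ...
        fDirty = false;
    }

private:
    MySubSceneOverride(const MObject& obj)
        : MPxSubSceneOverride(obj), fDirty(true) {}

    bool fDirty;
};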

DrawOverride is a full override, so you have to implement everything from scratch.