The biggest limitation, I think, is the relatively small texture cache, where all textures live while they’re in use. It doesn’t look like this will get much larger in next-gen hardware either. Normal maps can be compressed, but I’m not sure whether they stay compressed in the cache.
Hardware-accelerated displacement and tessellation methods are becoming more common, so I think we’ll see more of this in real-time apps like games. In that link, click on the grey alien picture for a methodology similar to the Neckling’s. Memory is still an issue, though. But displacement maps can be palettized or use cache-friendly compression like DXT, which is an advantage, and they also change the silhouette and self-shadow properly, two things normal maps can’t do.
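To illustrate why displacement changes the silhouette while a normal map can’t, here’s a minimal CPU-side sketch: each vertex is pushed along its normal by a height sampled from the displacement map. (This is a toy illustration, not the hardware path; real implementations tessellate and displace on the GPU, and the vertices, normals, and heights below are made up.)

```python
def displace(vertices, normals, heights, scale=1.0):
    """Offset each vertex along its normal by the sampled height.

    vertices, normals: lists of (x, y, z) tuples; heights: list of floats.
    Because the geometry itself moves, the silhouette and self-shadowing
    change -- a normal map only perturbs shading, not position.
    """
    out = []
    for (vx, vy, vz), (nx, ny, nz), h in zip(vertices, normals, heights):
        d = h * scale
        out.append((vx + nx * d, vy + ny * d, vz + nz * d))
    return out

# Two verts on a flat patch facing +Z; the second is raised by its
# height sample, so the actual geometry (and silhouette) changes.
verts   = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
norms   = [(0.0, 0.0, 1.0), (0.0, 0.0, 1.0)]
heights = [0.0, 0.5]
print(displace(verts, norms, heights))
```

Palettizing or DXT-compressing the height data just shrinks the `heights` source texture; the displacement math stays the same.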
We’re working on some real-time technology that blends displacement maps and lets you deform meshes per-pixel: if you shoot a character in the arm, the skin puckers inward, exposing the flesh underneath, and with more hits you eventually see bone. The surface is tessellated on the fly around the wound, so you get all the gory details. Cool stuff; can’t wait to show it. If you go to our video page, you’ll see a snake monster. He’s blending his whole body in the video, but it will be per-vertex based on collisions soon, then per-pixel after that. One step at a time.
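The per-vertex-based-on-collisions step could look something like this toy sketch: blend each vertex toward a “wound” target shape, weighted by its distance from the impact point. (This is my own hypothetical illustration, not their actual implementation; the function name, linear falloff, and data are all made up.)

```python
import math

def wound_blend(vertices, wound_targets, impact, radius):
    """Lerp each vertex toward its wound-shape target.

    Weight is 1.0 at the impact point and falls off linearly to 0.0
    at `radius`, so only geometry near the hit deforms.
    """
    out = []
    for v, t in zip(vertices, wound_targets):
        w = max(0.0, 1.0 - math.dist(v, impact) / radius)
        out.append(tuple(a + (b - a) * w for a, b in zip(v, t)))
    return out

# A vertex at the impact point puckers fully inward (to its target);
# a vertex outside the radius is untouched.
verts   = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
targets = [(0.0, 0.0, -1.0), (2.0, 0.0, -1.0)]
print(wound_blend(verts, targets, impact=(0.0, 0.0, 0.0), radius=1.0))
```

The per-pixel version would presumably push the same idea into the displacement map itself rather than the vertices, with on-the-fly tessellation supplying the extra geometry around the wound.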
Neato stuff Taron, thanks for sharing the how and the why. Looks very promising.