Sub-pixel/poly displacement - definitions


#1

Can anyone give me one?

A friend at work suggests that messiah’s displacement is ‘sub-poly’, not ‘sub-pixel’, in that you have to define the detail of the displacement via a sub-poly count.

I ask you: what is the difference between sub-poly and sub-pixel displacement?

Also, is this what TrueBump was all about?


#2

That’s true, messiah’s displacement is “sub-poly”.

Think of sub-pixel displacement as a kind of “automatic” sub-poly displacement. That is, the necessary subdivision level is determined on-the-fly and changes based on what’s being rendered and where. One of the advantages of this is that you generally don’t have to worry about setting a sub-level.

While messiah doesn’t do automatic sub-levels, it does afford you more direct control over the subdivision. We’ve approached our system as a kind of hybrid: bump is handled in conjunction with displacement for efficiency and to allow you to use lower sub-levels in most cases. The advantage of having a “static” sub-level is that the displaced geometry can actually be traversed via the API, allowing for some very interesting shading possibilities, and it’s possible for the sub-polys to be cached to further speed rendering. It’s also prone to fewer render errors and other oddities, and can be faster to render.

Note that messiah’s “Parametric-Low RAM” displacement is a variant of sub-pixel displacement in that the displacement is generated on-the-fly, and it will be the mechanism by which we ultimately build out full sub-pixel support. However, I predict that users will likely stick with “Pregenerated Polygons” because of the aforementioned benefits and others.
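
Very roughly, the difference between the two dicing strategies can be sketched like this (just a toy Python illustration of the idea, not our actual code; the numbers and helpers are made up):

```python
import math

def choose_sub_level_subpixel(projected_edge_pixels, max_level=8):
    """Sub-pixel style: pick the subdivision level on the fly so that each
    sub-poly edge ends up roughly one pixel long on screen."""
    if projected_edge_pixels <= 1.0:
        return 0
    # Each subdivision level halves the edge length, so we need ~log2(edge) levels.
    return min(max_level, math.ceil(math.log2(projected_edge_pixels)))

def choose_sub_level_subpoly(user_sub_level):
    """Sub-poly style ("Pregenerated Polygons"): the level is a fixed,
    user-chosen value, independent of where the polygon lands on screen."""
    return user_sub_level

# A polygon whose edges cover ~37 pixels needs 6 halvings to reach ~1 pixel:
print(choose_sub_level_subpixel(37.0))  # -> 6
print(choose_sub_level_subpoly(3))      # -> 3
```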

All I can tell you about TrueBump is that it looks virtually identical to displacement… but without actually displacing the model. I believe that Gary Chike did a side-by-side comparison of messiah’s bump & displacement using his zombie head model. In many cases there is little detectable difference, but the speed difference is definitely evident.

-lyle


#3

Thanks for clearing that up. :slight_smile:


#4

Hi,

Imilton,
What is the difference between Studio2’s Disp/Bump combo and just applying bump on top of displacement, as in LW or other renderers?
Lightwave and other rendering software seem to be capable of achieving that effect (I haven’t done a direct comparison yet)… by just putting the bump image on both displacement and bump mapping, but with the additional flexibility of controlling bump and displacement independently. I suspect Studio is doing some sort of normal correction or something like that, but it would be nice to know the official explanation.

But… I was expecting a little more (or less, depending on which way you look at it) with regard to memory consumption when using displacement.
In practice I can’t do much more with Studio’s displacement than I can with LW, maybe 30% more?… That’s still cool, but for the project I’m working on I need loads of actual geometric displacement, and the disp/bump combo just doesn’t cut it here; the overlap/parallax is vital… think underwater coral reef rock.

Render time in this quite extreme case is at a pleasing (for a still) 1 hour 30 mins (dual Opteron 248s) with memory usage at the limits: DOF, Motion Blur, Displacement, Radiosity, SSS… well, everything really! I got to a stage in my Studio2 extreme test project where individual scene components look great (btw, I’m extremely happy with the quality/flexibility of Studio’s renderer and node-based surfacing)… but I can’t add anything else to the scene because memory usage is way out there, due to heavy displacement :frowning:

I guess my expectations were a little higher; somehow I sorta got the impression that the feature was indeed Sub-Pixel Displacement.

Can someone help me upload an image to this forum? Thanks in advance :slight_smile:

Parametric vs Pre-generated:
In my case, experimenting with “Parametric-Low RAM” and “Pre-generated Polygons” didn’t make any difference to memory consumption when using displacement. Am I right in saying that this setting only affects un-displaced geometry?

Imilton, are you aware of a recent realtime technique for faking displacement mapping, called Parallax/Offset Mapping? Here are some of the best examples I’ve found: http://www.infiscape.com/doc/parallax_mapping.pdf

[http://farcry.filefront.com/file/Far_Cry_Polybump_Offset_Mapping_Example;26574](http://farcry.filefront.com/file/Far_Cry_Polybump_Offset_Mapping_Example;26574) If you got FarCry :)

[http://www.reallyslick.com/pictures/offsetmapping.jpg](http://www.reallyslick.com/pictures/offsetmapping.jpg)

[http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/011292.html](http://www.opengl.org/discussion_boards/ubb/Forum3/HTML/011292.html)  discussion on the matter

[http://www.delphi3d.net/download/uberbump.zip](http://www.delphi3d.net/download/uberbump.zip) Friggin’ great demo, the bumps even cast shadows!

Now, we can’t have realtime graphics doing much better bump mapping than every other off-line rendering software out there, now can we? :slight_smile:
Imagine this combined with coarse ordinary displacement. I tried to convince another developer to investigate the matter, without much success…
maybe you will listen :wink:

Btw, a wee bug you should be aware of… when Messiah reaches the memory limit it doesn’t politely inform you and stop rendering… on my 2GB RAM machine it gets to 1.6GB usage and crashes.
Also, sometimes Studio doesn’t release the memory if you stop a render in progress, which (it seems) causes instability down the line.

Cheers Serg


#5

I can confirm this behaviour.


#6

SergO: I can confirm most of your findings. I also can’t see any difference between Parametric/Pregenerated in the current release, other than the latter showing slightly fewer subdiv-seam artifacts.

I know what you mean about having “high expectations” - in the end, everybody wants to finally get rid of thinking about polygons… :slight_smile:

I have the problem with the memory crash too, mostly when trying to render quite large print-res images: they render fine, but saving the image seems to need more memory, and at that last moment messiah crashes. I am left with all the finished chunks on disk but no way of reusing them.
It would be cool if messiah could calculate the needed RAM before actually starting to render and give the user a warning if it will need too much.

On a related topic:
I found out recently that WinXP Pro has a switch in boot.ini that allows apps to use more RAM:
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS=“Microsoft Windows XP Professional” /fastdetect /3GB
Eyeon/Digital Fusion recently recommended this switch on systems with 2GB of RAM or more.

The second thing I found is an option in VC++ .NET that allows apps to use more memory too. The linker switch is called “/LARGEADDRESSAWARE”…

Now this got me thinking: could messiah use more memory with those options, even on 32-bit WinXP? I have 2GB of physical RAM and 4GB of virtual RAM. Since the problem is mostly the saving at the end of the render, I wouldn’t mind some heavy use of the swap file :slight_smile:
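
For what it’s worth, that linker switch just sets a bit in the exe’s PE header, so you can check whether any given .exe was built with it. A small Python sketch (the messiah path below is only a placeholder guess):

```python
import struct

LARGE_ADDRESS_AWARE = 0x0020  # IMAGE_FILE_LARGE_ADDRESS_AWARE bit

def is_large_address_aware(exe_path):
    """Return True if the PE file was linked with /LARGEADDRESSAWARE."""
    with open(exe_path, "rb") as f:
        data = f.read(4096)
    # Offset 0x3C of the DOS header points to the "PE\0\0" signature.
    pe_offset = struct.unpack_from("<I", data, 0x3C)[0]
    assert data[pe_offset:pe_offset + 4] == b"PE\0\0", "not a PE file"
    # The COFF Characteristics field sits 22 bytes after the signature.
    characteristics = struct.unpack_from("<H", data, pe_offset + 22)[0]
    return bool(characteristics & LARGE_ADDRESS_AWARE)

# Hypothetical install path, just for illustration:
print(is_large_address_aware(r"C:\Program Files\messiah\messiah.exe"))
```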

But I am no expert in this memory stuff so just take it as an idea…


#7

Thanks for the tip :slight_smile: I wasn’t aware it could benefit systems with less than 3GB.

The crashing I’m experiencing happens during the subdividing-for-displacement phase, before it starts painting pixels… memory goes up and up until it hits 1.6GB and bam… Reproducible every time here; pmG must know about it already… surely they did a how-high-can-you-go test.

The crashing you see with the print-res stuff sounds an awful lot like rendering print res with Lightwave…
Many times I’ve waited hours and hours for a LW print render, only for it to crash when displaying the final render, or to not save the image at all. I was hoping that wasn’t the case with messiah too.

Cheers

Serg


#8

Sorry, my post was unclear:
I have the same problem with crashes at high subdivisions (just not at 1.66GB, but near 2GB RAM usage). When I reduce the subdivision amount I can render, but then I get the other problem when saving large images.
Sorry for the confusion.

I also tried not showing the image (preview = text), with no success.

I can only assume that at the moment the chunks have to be glued together, the memory consumption is basically the same as if messiah had rendered to RAM from the start. It might work if the subdiv memory were released before the image is stitched into one file.

If my assumption is right, this behaviour makes the cool idea of disk chunks a bit useless… :shrug:

Are the disk chunks in a format that any tool can read? Lyle?

Would it be possible to “continue render” when some chunks are already there?
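
If the chunks do turn out to be plain image tiles that other tools can read, stitching them afterwards would be easy to do outside messiah. A rough Python sketch, assuming (purely hypothetically) horizontal strips named chunk_000.png, chunk_001.png, … laid out top to bottom:

```python
import glob
from PIL import Image

def stitch_chunks(pattern="chunk_*.png", out_path="final.png"):
    """Paste horizontal strip chunks (top to bottom) into one output image."""
    paths = sorted(glob.glob(pattern))
    # First pass: collect sizes without keeping the pixel data around.
    sizes = []
    for p in paths:
        with Image.open(p) as im:
            sizes.append((im.width, im.height, im.mode))
    width, _, mode = sizes[0]
    canvas = Image.new(mode, (width, sum(h for _, h, _ in sizes)))
    y = 0
    for p, (_, h, _) in zip(paths, sizes):
        with Image.open(p) as im:
            canvas.paste(im, (0, y))  # only one chunk decoded at a time
        y += h
    canvas.save(out_path)

stitch_chunks()
```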


#9

I’ve always thought that Parallax / Offset Mapping / PolyBump / DOT3 Bump Mapping are all just different names for, or slight variations on, ‘Normal Mapping’… or is ‘Normal Mapping’ just a different name for the technique too :hmm:


#10

Parallax and Offset Mapping are two names for the same thing; it’s a technique that gives the illusion of parallax/raised bumps.
Basically it looks like displacement mapping (but without the deformed silhouettes). See the jpg I linked here http://www.reallyslick.com/pictures/offsetmapping.jpg : you can see how some stones actually obscure adjacent pieces, so it’s kinda like an uber bump mapping, way better than the ordinary bump mapping we use now but not as good as displacement…
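
For the curious: the basic parallax/offset step is just a height-scaled shift of the texture lookup along the view direction. A tiny sketch of that one step (my own Python illustration, not taken from any of the demos above):

```python
def parallax_offset_uv(u, v, height, view_dir_ts, scale=0.04, bias=-0.02):
    """Shift the texture coordinate along the (tangent-space) view direction,
    proportional to the height sampled at the original coordinate.  Taller
    texels appear to slide toward the viewer, giving the raised-bump illusion."""
    vx, vy, vz = view_dir_ts          # normalized view vector in tangent space
    offset = height * scale + bias    # scaled and biased height value
    return u + offset * vx / vz, v + offset * vy / vz

# Example: a texel of height 0.8 viewed at a slightly grazing angle.
print(parallax_offset_uv(0.5, 0.5, 0.8, (0.6, 0.0, 0.8)))
```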

Check out the demo I linked, it’s really impressive. Makes you wonder why no one has it in software rendering.

Serg


#11

I am also having repeatable crashes when rendering large models (polys in the 10-20K range) at resolutions above 640x480 with displacement. It never fails to crash when it comes to the anti-aliasing portion of the rendering. And it doesn’t matter if I am rendering with Pre-gen or Param Low RAM.

Cheers,


#12

So what is the definitive word here for Messiah 2.0? Is their front page inaccurate in its claim that it features sub-pixel displacement, or is this thread regarding earlier versions of Messiah?


#13

messiah:studio 2.0b has sub-poly displacement that in a lot of cases can work/look like sub-pixel displacement, and future versions will go further in this direction.
I think the definitions in this area aren’t completely clear to everyone yet (even ZBrush uses sub-poly displacement but is most of the time referred to as sub-pixel displacement), so the wording on the homepage may just be a slip by the person who wrote it, not knowing the difference.


#14

Ahhh! So hm… a more technical question here then. With sub-poly displacement I assume that it subdivides the polygon, or part of the polygon, depending on how much geometry it will need. I.e., it will adjust the subdivision level on a poly-by-poly basis as opposed to a pixel-by-pixel basis. This would account for the larger memory hit compared to sub-pixel displacement, but it would still be immensely more effective than subdividing the entire object and just applying a normal displacement.

I wasn’t aware that ZBrush uses sub-poly actually. Thanx for the info.


#15

DocuWild:
I’m just a user, so I’m not inside the code :slight_smile:
Adaptive subdivision is something I hope to see in future versions - only subdivide where there is something happening in the displacement channel. But this needs quite advanced subdiv algorithms. Let’s see what pmG can come up with!

It seems that “real” resolution-independent sub-pixel displacement needs some kind of volumetric raymarching, like Hypervoxel Surfaces in Lightwave. That is quite slow, since the volume has to be sampled for geometry.
I wouldn’t be surprised if all apps end up using some kind of sub-poly displacement in the end; with adaptive resolution and better memory handling, that seems much faster/easier to do.
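
To illustrate what “sampling the volume for geometry” means, here is a toy sketch of marching a ray through a height layer until it dips below the heightfield (my own simplification, not how Hypervoxels or any particular renderer actually does it):

```python
def raymarch_heightfield(height_at, u, v, view_dir_ts, depth=0.1, steps=32):
    """March from the top of a height layer of thickness `depth` along the
    view ray, stopping at the first sample where the ray is below the
    heightfield.  Returns the (u, v) where the ray 'hits' the bumps."""
    vx, vy, vz = view_dir_ts
    # Advance the ray in equal slices of the layer's depth.
    du, dv = (vx / vz) * depth / steps, (vy / vz) * depth / steps
    ray_h, dh = 1.0, 1.0 / steps
    for _ in range(steps):
        if ray_h <= height_at(u, v):   # the ray has dipped below the surface
            break
        u, v, ray_h = u + du, v + dv, ray_h - dh
    return u, v

# Toy heightfield: a single bump in the middle of the tile.
bump = lambda u, v: max(0.0, 1.0 - 8.0 * ((u - 0.5) ** 2 + (v - 0.5) ** 2))
print(raymarch_heightfield(bump, 0.3, 0.5, (0.6, 0.0, 0.8)))
```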


#16

Read this:

http://www.cgtalk.com/showpost.php?p=1490961&postcount=15

-lyle


#17

This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.