We are all using 40-year-old technology

  11 November 2012
Originally Posted by jecastej: But for continuous surface animation and deformation used for character animation today, nothing beats polys. At least not in any general or commercial application I know of.

There are a few logical reasons for that...

1) Detailed 3D voxel objects take up a lot of memory and hard drive space. On a game console with only 512 MB of RAM and 5.6 GB DVD game discs, using voxels to build an entire 3D game is virtually impossible. You could have voxels as the basis for procedural 3D terrain (as in the very old NovaLogic game Comanche), or maybe a few voxels powering a fluid effect like smoke, but that's about it.
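To put rough numbers on that memory penalty, here's a back-of-the-envelope sketch (the resolution and bytes-per-voxel figures are illustrative assumptions, not anything from a real engine):

```python
def dense_voxel_bytes(resolution, bytes_per_voxel=4):
    """Memory for a dense cubic voxel grid of side `resolution`,
    e.g. 4 bytes/voxel for an RGBA color per voxel."""
    return resolution ** 3 * bytes_per_voxel

# A 512^3 RGBA grid alone already fills a 512 MB console's entire RAM:
print(dense_voxel_bytes(512) / 2**20, "MiB")  # 512.0 MiB
```

And 512³ is still coarse compared to the per-polygon detail games were shipping in 2012, which is why dense voxels were a non-starter on that hardware.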

2) 32-bit versions of 3D software could only address about 2–3 GB of RAM per process. So again, the memory penalty of using voxels for objects was a PITA to work around, and these applications focused on polygons, NURBS, and SubD surfaces instead. Those are much more memory-friendly, and GPUs are designed at the hardware level to render millions of polygons to the screen at interactive rates, not voxels.

3) Whatever is used in Games and Movie VFX - mostly Polygon, SubD, NURBS based stuff currently - determines the major tools that are built into 3D software like Maya, Max, Softimage, C4D.

Euclideon claims to be solving three voxel-related problems at the same time:

1) Fast rendering of tens of millions of voxels to the screen at a framerate of at least 25 FPS. (They claim they will hit 60 FPS on the CPU alone once their code is further optimized.)

2) Compressing voxel data to 5–20% of its normal size, while still being able to render and manipulate those voxels without constantly compressing and decompressing them.

3) Being able to work with huge voxel datasets, up to 140 terabytes in size (important for large-scale visualization of LIDAR-generated point-cloud scans, e.g. in urban planning).
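As a toy illustration of why point 2 is plausible: voxel datasets are mostly empty space, and even the crudest scheme, run-length encoding, collapses long runs of identical values. (Real systems use sparse octrees and similar structures; this sketch is just to show the intuition.)

```python
def rle_encode(voxels):
    """Run-length encode a sequence of voxel values into [value, count] runs."""
    runs = []
    for v in voxels:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return runs

# A mostly-empty scanline of 100 voxels: 90 empties, 5 solid, 5 empties.
row = [0] * 90 + [7] * 5 + [0] * 5
encoded = rle_encode(row)
print(encoded)                                   # [[0, 90], [7, 5], [0, 5]]
print(len(encoded) * 2, "values vs", len(row))   # 6 values vs 100
```

The sparser the scene, the better this kind of scheme does, which is exactly the regime LIDAR scans of cities sit in.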

If they succeed on these points, we may eventually see this tech creep into the major 3D DCC applications.

The companies making 3D software can create their own version of Euclideon's "Unlimited Detail" technology - maybe they are working on this right now, and we simply don't know about it yet.

Or in the worst case, companies like Autodesk can pay Euclideon a few million dollars and simply license their tech for inclusion in Maya, Max, Softimage, et cetera.

Either way, all we can do right now is sit back and wait for Euclideon to finish their work, and actually put their tech on the market...
  11 November 2012
Hey guys, the Cartesian coordinate system we use to define objects in 3D space is about 400 years old; we should do something about it!
Free rays for the masses
  11 November 2012
Originally Posted by jecastej: Sorry, I did not know about that and made a wrong assumption, as I don't hear too much about SubDs today. Do you know what hardware they handle SubDs on? Is it something they work with every day on regular machines? I guess they use them for character animation.

Good, I guess I am going to investigate the subject further, but first I am going to open Maya and find out how fast and flexible SubDs are on today's hardware.

This year at SIGGRAPH, Pixar released the open beta of its OpenSubdiv libraries and held a few demos (which you can watch online here at Autodesk's Area and here at NVIDIA's SIGGRAPH 2012 portal). You might not have heard much about SubDs until today, but you're sure to hear a lot more about them in the future if Pixar has its way. Manuel Kraemer, one of the presenters in the SIGGRAPH demos, says that practically everything at Pixar is modeled in SubDs, and because the OpenSubdiv demos were held at NVIDIA's SIGGRAPH booth, you can guess that NVIDIA graphics cards handle SubDs very well.

Support for SubDs in Maya and Mudbox is pretty good, but not up to the standards required by Pixar, and that is where OpenSubdiv comes in. If Maya, Mudbox, ZBrush, and other 3D software that support SubDs were to use the OpenSubdiv libraries, all subdivision surfaces would display onscreen exactly like the render output from RenderMan. It's this onscreen accuracy that promises to let animators and technical directors work faster, confident that what they see is closer to the final render than a low-poly mesh proxy.

Voxels are currently too resource-heavy for production work, which means they are too slow for character animation. You might see one or two Pixar characters modeled with voxels if the story requires it, but it's highly unlikely that Pixar will switch from SubDs to voxels for the whole cast.
  11 November 2012
Originally Posted by jecastej: Sorry, I did not know about that and made a wrong assumption, as I don't hear too much about SubDs today. Do you know what hardware they handle SubDs on? Is it something they work with every day on regular machines? I guess they use them for character animation.

Good, I guess I am going to investigate the subject further, but first I am going to open Maya and find out how fast and flexible SubDs are on today's hardware.

In feature film work, nearly everything is modeled as a subdivision surface using polygons. However, very few VFX studios will render polygonal models directly; even hard-surface models are rendered as SubD surfaces.

I think you may be getting confused by Maya's native subdivision surface geometry - those aren't really used at all.

One of the nice things about subdivision surfaces is that their topology can be described using the same information as a polygon model, so you can model with polygon modeling tools and just set an attribute on the model so that it renders as a subdivision surface. 3D applications will let you preview the rendered shape in the viewport (in Maya, select a polygon model and press the '1', '2', and '3' keys to see this in action).
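That workflow is cheap precisely because only the coarse polygon cage is stored; the subdivided detail is generated on demand. A quick sketch of how the face count grows under Catmull-Clark subdivision (past the first level, each level splits every quad into four; the base-quad count here is just an illustrative number):

```python
def catmull_clark_face_count(base_quads, levels):
    """Face count of an all-quad cage after `levels` of Catmull-Clark
    subdivision: each level splits every quad into four."""
    return base_quads * 4 ** levels

# A 1,000-quad character cage previewed at 3 subdivision levels:
print(catmull_clark_face_count(1000, 3))  # 64000
```

So a modest cage stands in for tens of thousands of render-time faces, which is why studios store the cage plus a "render as SubD" attribute rather than a dense mesh.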
  11 November 2012
So far, only a few cases have been mentioned in which volume rendering is more accurate than polygon rendering with regard to image output. I believe SSS was one of them, as it is an effect that currently tries to simulate what light does inside an object's volume without modeling what is actually in there. I can see a future for voxels in games, especially with regard to dynamic object damage. I also assume they may make for better simulation of muscles and soft tissues. For situations requiring accurate lighting simulation and modeling of internal structures, they will inevitably become more commonly used.

For still images, I define photorealism by an image's ability to either trick me or at least not call too much attention to itself. Polygons are quite capable of producing convincingly realistic images. It seems some feel that just being able to model light and surfaces is not enough, and that we need to move toward simulating every interaction at a molecular level. How quaint will our current tools seem if everything continues to move in that direction?
  11 November 2012
All of it is old, as all of it is just math, and that is thousands of years old.

The problem is that the way the math is implemented in the programs is seriously flawed to some degree, which is where you get into the debates.

Sitting around manually pushing points by hand is definitely old school 3d.

ZBrush is going in the right direction with digital clay. Screw the point-by-point approach and let the computer do what it does best: crunch the polygons for you while you just deal with different manipulators and effects. Then use that ancient math again to save out the final object at whatever level of detail you need.

The next step is being able to generate curves and polygons using the same approach, along with better methods of retopologizing the final mesh without having to manually re-skin it. Again, something better suited to CPU number crunching than to manual entry.
Well now, THAT'S a good way to put it!
  11 November 2012
I think everything has its place

We use fur and hair rendering for fur and hair instead of polygons.
We use particles for dust and smoke instead of polygons.
For fluids we use fluid simulation rendering instead of polygons.

All of those could theoretically be done with polygons, and might even produce a more realistic render.

A speck of dust is a rather complicated shape when viewed under a microscope; it would take several million polygons to represent it properly. But in most shots, an antialiased pixel is more than enough.

I don't suspect technology like Euclideon's Unlimited Detail engine would be any different.
  11 November 2012
Thread automatically closed

This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.