EIAS 7 Wishlist


#61

My work is likely different from others on this forum, so let me elaborate on this subject.
As an industrial designer I mainly produce stills for new products on extremely tight deadlines, sometimes only 24 to 48 hours.

I have many scenes in Animator with many millions of polygons, often with thousands of objects (many, many duplicates) and lots of texture maps.

These projects are unwieldy, to say the least. OpenGL chokes. Even software drawing mode is very slow in outline mode. I am always turning object groups on and off to improve screen redraw.

Rendering for me is pretty fast. I use GI for smaller projects and get render times of 5 to 15 minutes per still (I may produce as many as 30 stills for a project with multiple concepts so as a whole that’s still a lot of render time!). My large projects would take over an hour with GI per still, so I usually fall back on Phong and get good quality in under 5 minutes.

Increasing render speed would be great as I could use more of the “advanced” render features like GI. But slowdowns and workflow in Animator are a legitimate performance bottleneck.

I can appreciate the limits of MP/multi-threading, but this is how hardware companies are going to give us more power for the foreseeable future. Based on initial 3d benchmarks, the latest 3 GHz Xeon from Intel/Apple is roughly equivalent to a 3.2 GHz G5. Not bad, but that’s only about a 60% performance improvement over my dual 2 GHz G5 (we’re talking single-thread performance). This is WAY off from Moore’s law: only a 60% performance improvement over a 3-year time frame. However, there are now FOUR of those processors in a single workstation. That’s a lot of untapped potential power.
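To put rough numbers on the single-thread vs. multi-core trade-off described above, here is a simple Amdahl's-law sketch. The 1.6x per-core gain and the four cores come from the post; the parallel fraction is an assumed illustrative parameter, and the function name is invented:

```python
def aggregate_speedup(single_thread_gain, core_count, parallel_fraction):
    """Amdahl's-law estimate of overall speedup from a faster chip with more cores.

    single_thread_gain: per-core speedup vs. the old machine (1.6 for +60%).
    parallel_fraction: fraction of the workload that can run on all cores.
    """
    serial = 1.0 - parallel_fraction
    parallel = parallel_fraction / core_count
    normalized_time = serial + parallel        # time on the new cores, per unit of old work
    return single_thread_gain / normalized_time

# A single-threaded app only ever sees the 60% per-core gain:
print(aggregate_speedup(1.6, 4, 0.0))   # 1.6
# A mostly parallel renderer on four cores taps far more of the hardware:
print(aggregate_speedup(1.6, 4, 0.9))   # ~4.92
```

This is exactly the "untapped potential" argument: the same box is 1.6x faster for a single-threaded Camera, but nearly 5x faster for a renderer that keeps all four cores busy.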

My point is that the real challenge for software developers is to re-think their apps, maybe even consider re-writing major parts of them, to squeeze every ounce of performance from those extra processors. I understand that much of the calculation in 3D is linear, but being creative about how tasks are divided may yield some good performance improvements.

I will soon get back to the point of this forum with some more practical feature suggestions. Thanks to all for their input. I think this is a great discussion, and I hope Matt, Blair, Igors, and the rest of the EIAS Programming gang are following this and are inspired to make our favorite 3d App better and faster.


#62

By preview I was meaning ‘Snapshot’. However you bring up an interesting subject! I am very much in favour of this kind of feature.

But by pre-render do you mean a no frills snapshot? Perhaps add a render option to the linking editor 3d view? Then you could render one group at a time with no frills…

Thinking out loud,
Ian


#63

Well, I’d like to see better material previews. More consistent numerical data input. More feedback during rendering. And I’d like to see the product marketed - no one knows about it. This isn’t healthy.

Martin K


#64

Hi, Ian

We too :slight_smile: We’ve heard many times about a “better material preview”, but what is it? No one knows, and no app has an ideal preview. What do we do now to set up a material? Something like: edit - snapshot - edit - snapshot… many times. Often the scene needs to be simplified: turn off other objects and lights, set up additional cameras, etc.

It would be nice if the host did this routine work for us. For example: we are in the Texture Window, we press F7(?), and the host runs Camera with only the current object, simplified preview lights, and the current orbit view. Add some “variants”, for example:

  • render the whole material;
  • render only the current texture;
  • render the texture stack up to the current one

Another one: re-render only a selected/desired object into the previous snapshot. Yes, it will have jagged edges, etc. - it’s only a draft sketch. But IMO it shows well enough how the material looks in the scene.

And who knows, maybe this set of modest features would give much more than a “super” preview that promises “real-time” but ends up limited to 50K polys. Or to fractal noise with 5 or more octaves. Or by many other things.


#65

Hi, Igors,

That is actually exactly what I was thinking of! :slight_smile:
Ian


#66

Ola,

One of the best approaches to previewing, I think, is FPrime:
http://www.worley.com/fprime.html
or L-Pics from Pixar
http://www.vidimce.org/publications/lpics/

If we could have a window (the snapshot preview window) fed by Camera caches in a loop in the background, without launching the Camera window… without anti-aliasing, and making the best use of all the MP processors, we could have an interesting and almost real-time preview.
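The kind of background loop described here can be sketched as a running per-pixel mean that the preview window could redraw after every pass. This is only a toy illustration of progressive refinement (FPrime-style tools use far smarter sampling); `sample_pixel` and all other names are hypothetical:

```python
def progressive_preview(sample_pixel, width, height, passes):
    """Progressively refine a preview by averaging one noisy sample per pixel per pass.

    sample_pixel(x, y) returns one noisy estimate of a pixel's value; the
    running mean converges toward the true image as passes accumulate,
    which is why such a preview is usable almost immediately.
    """
    accum = [[0.0] * width for _ in range(height)]
    for n in range(1, passes + 1):
        for y in range(height):
            for x in range(width):
                # Incremental running mean: accum += (new - accum) / n
                accum[y][x] += (sample_pixel(x, y) - accum[y][x]) / n
        yield accum  # the preview window could redraw after each pass
```

In a real implementation each pass (or each stripe of rows) could run on a separate processor, which is where the "best of all MP processors" part would come in.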

Tomas Egger


#67

What’s the reason Renderama has that big an impact on local multi-camera rendering? The fact that it treats local cameras as networked ones? Or is it just slow at managing things?

If producing a multithreaded Camera is so complicated, perhaps it would be better to create a sort of specialized Renderama-like app, say “MetaCamera”, able to talk more directly to several local Camera copies and do tricks such as dividing the frame into four stripes. (Would that route produce less overhead than assigning different frames to each pooled Camera? Would it be faster for Renderama to address several networked MetaCameras, with each networked PC producing single frames via the striping technique instead of four at a time?) It could also know when there is a pending preview job and give it some priority even in the middle of an animation job - assigning one Camera to it, or pausing the main job to dedicate all Cameras to it…

For a striped image-task division technique, perhaps MetaCamera could be smart enough to pre-order things like precalculated shadows and cubic reflections so that they can be shared among the pooled Cameras: even if the others have to sit idle while one of them builds these, in the end a MetaCamera would be faster than every Camera doing it itself, I guess.
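The striping idea above can be made concrete with a small sketch of how a MetaCamera might divide a frame's scanlines among pooled Cameras. This is purely illustrative; `stripe_rows` is an invented helper, not an EIAS or Renderama API:

```python
def stripe_rows(height, workers):
    """Split a frame's scanlines into contiguous stripes, one per render process.

    Returns (start_row, end_row) pairs covering [0, height) with no gaps or
    overlaps; stripe sizes differ by at most one row so the load stays even.
    """
    base, extra = divmod(height, workers)
    stripes, start = [], 0
    for i in range(workers):
        rows = base + (1 if i < extra else 0)  # spread the remainder rows
        stripes.append((start, start + rows))
        start += rows
    return stripes

# Four pooled Cameras rendering a 1080-row frame:
print(stripe_rows(1080, 4))  # [(0, 270), (270, 540), (540, 810), (810, 1080)]
```

Equal-height stripes are the simplest division; a smarter scheduler might hand out many thin stripes on demand so a Camera stuck on an expensive region doesn't hold the others up.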


#68

Hi, Juanxer

Everything that is not very complicated was done in the previous 10-15 years :slight_smile: LOL
Yes, a multithreaded render is hard to implement, but at the same time it promises effective/attractive results. If Tomas and Jens ran a serious RT/GI scene on their quads and got a 3x or greater speed-up, they could only say “wow” :slight_smile: The multithreaded technique does not duplicate memory allocation, nor the loading of models, plug-ins, and shaders, for each render instance. The discussion of “what’s better: MP or network render” is obsolete IMO - now is the time when a 3d app should have both.


#69

I agree, Igors!
Well said.


#70

Interesting note about the rumored multi-threaded OpenGL for OS X.

I don’t know the limitations of this, or whether graphics cards are more of a bottleneck, but it is rumored to have a significant impact on games.

Perhaps this could bode well for increased redraw performance for Animator?

http://www.insidemacgames.com/features/tuncersblog.php?ID=106


#71

What I wish for is integrated sub-D modeling - maybe Silo/Nevercenter & EIAS should partner; that would also create many more users for both sides.
If not, EI should make their own - I think this may be one of the reasons potential new users pass it by.
But even still, it is a pain to go back and forth between apps as it stands.


#72

Ok… shameless plug here… Paralumino’s geometry series of plugins adds base-level, internal modeling capabilities to EIAS that really make life easier. Combined with Konkeptoine’s Encage, you have the foundation for what you’re looking for. Is it as powerful as a dedicated modeler? Alas, no. We admit that it’s not. However, we are constantly working to add new modeling tools. We have 3 more in the pipeline and plans to upgrade Trestle in the future.

Of course I’m being biased here, but now that I can generate geometry within EI, I’d have little need to go to an external program unless the shape is highly organic or has sophisticated modeling requirements.


#73

Yes, I’m aware of the plugs and I think it is a great solution for some things…and good luck with your new site/services BTW.

As much as I loved/hated EIM, it probably didn’t have much of a future anyway.
I really like Sub-D modeling, and it seems to be becoming the standard modeling approach in many, many apps.
Some packages have many different flavors of Sub-D tools and it can be really easy and fun to work with and yes it can also be a challenge depending on what you are trying to accomplish.

But anyway, like you stated, I’m looking for a level of control and integration with EI.

but this is a wishlist…?

thanx
k!


#74

I’m not a technical person, so I’m completely in the dark about what goes into the GI implementation in EIAS. What I do know - and I’m simplifying here :slight_smile: - is that:
-it makes my pictures look better
-it’s relatively easy to use
-it doesn’t take forever to render
-I use GI in almost all my work since upgrading to v6.5

but there is that noise in animation issue which (I learnt from another thread) is part and parcel of GI. Yup, I’m aware of ways to minimise it. But, looking at the PIXAR presentation here:

http://graphics.pixar.com/index.html

under the topic ‘Statistical Acceleration for Animated Global Illumination’, I wonder if EIAS 7.0 (or later) could implement PIXAR’s solution to the noise issue. Or is a similar solution already being implemented in EIAS 6.5’s GI? Am I completely misunderstanding the PIXAR video presentation on the topic?


#75

Hi, Aziz

We aren’t familiar with the PDF this link points to, so sorry if our considerations are a bit superficial. But from the description we’ve read, the situation looks very typical.

“The resulting animation has greatly reduced spatial and temporal noise, and a computational cost roughly equivalent to the noisy, low sample computation”

Aha, clear: less noise in animation and great speed. We have no grounds to say it isn’t so. However, look at what causes these improvements - in other words, what this technique is based on:

“We begin by computing a quick but noisy solution using a small number of sample rays at each sample location. The variation of these noisy solutions over time is then used to create a smooth basis. Finally, the noisy solutions are projected onto the smooth basis to produce the final solution.”

In practice and in simple words it means: this technique assumes a series of frames is in use (“over time”). Thus we cannot hope for speed-ups/benefits if we don’t have a “database of pre-rendered frames”. And thus we have a guaranteed bunch of problems of the sort “how to sync the database if something changes in the scene besides camera motion”.

Generally, the idea of “using data from previous/next frames” is very popular in theory, but… not in practice :slight_smile: Yes, great improvements, but… too complex and (more importantly) too unsafe for the user.
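The dependence on a series of frames can be illustrated with a much cruder stand-in than Pixar's smooth-basis projection: simple temporal averaging of per-pixel GI values. All names here are hypothetical, and this is not the paper's algorithm, only a minimal sketch of why the technique needs neighbouring frames at all:

```python
def temporal_smooth(noisy_frames, radius):
    """Average each frame's per-pixel GI values with its temporal neighbours.

    noisy_frames: list of frames, each a flat list of per-pixel values.
    Averaging over time trades a little temporal lag for much less
    frame-to-frame flicker - but it only works because a whole series of
    frames already exists, which is exactly the limitation noted above.
    """
    count = len(noisy_frames)
    smoothed = []
    for i in range(count):
        lo, hi = max(0, i - radius), min(count, i + radius + 1)
        window = noisy_frames[lo:hi]  # this frame plus its neighbours in time
        smoothed.append([sum(frame[p] for frame in window) / len(window)
                         for p in range(len(noisy_frames[i]))])
    return smoothed

# Three one-pixel frames with flickering GI values:
print(temporal_smooth([[1.0], [3.0], [2.0]], 1))  # [[2.0], [2.0], [2.5]]
```

Note the sync problem immediately: if an object moves between frames, naive averaging mixes values that no longer belong to the same surface point, which is why the real method (and any production version) is much more involved.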


#76

Ah ha, clear: “modeler = extruding vertices + Sub-D”. Animated statues, etc. We think that’s not a whole modeler yet, and maybe not even its main part. Brian is absolutely right with his phrase “COMPLETE set of modeling tools”. BTW: interactive vertex operations are also planned.


#77

tomas, i agree completely, btw. fprime changed my workflow completely and is one of the main reasons i use lightwave more and more for complex texturing/lighting tasks (especially for interior perspectives).

the reason is simple. the preview-and-tweak cycle to get a complex viz project done is quite slow in EIAS. this is partly due to the missing material previews, but also because camera is not multithreaded and thus does not use the full power of a quad processing unit for the hundreds of small preview renders a complex project involves. after all, at least if you are doing architectural stills, the scene setup (including lighting and texturing) is the most time-consuming part of the image creation process. even the fastest camera rendering speed cannot compensate for a lengthy scene setup process. in my experience the scene setup takes more than 3 times the rendering time (but this depends, of course, on how fast you are, what hardware you are using, etc…). anyway, there is a lot of potential for improvement in this area, imho.

of course, if you are doing animations, the sheer rendering speed of camera has again much more weight and makes those points appear less important!

i hope very much that the next version of EIAS may include some sort of improvement in this area - a sort of ‘instant’ raytrace/ GI preview tool like fprime would be THE killer feature for us viz guys :slight_smile:

cheers

markus


#78

A few more things -

I’d like to see the handling of large texture and image files improved - in particular, HDRIs.

On the PC, loading a 50-55 MB HDR into Imageviewer is very slow and will often crash the app (it’s been some time since I tried this, though). Loading a light gel is slow also.

Loading of large texture maps can be slow depending on size.

So my suggestion is: wherever we are waiting for a preview to be calculated - i.e., in the light gel tab or the texture editor window - why not have a cut-off point (file size) above which a texture or image preview would not be automatically calculated? Instead, a message could be displayed like “large image - click to preview”, with the preview activated at the user’s wish.
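The suggested cut-off could be as simple as a size check before the preview render is kicked off. The 20 MB limit and all names below are invented for illustration; the source only proposes that some configurable threshold exist:

```python
PREVIEW_AUTO_LIMIT = 20 * 1024 * 1024  # hypothetical 20 MB cut-off

def preview_action(file_size_bytes, limit=PREVIEW_AUTO_LIMIT):
    """Decide whether to compute a texture/gel preview automatically.

    Below the limit the preview is rendered immediately, as it is today;
    above it the UI would instead show a "large image - click to preview"
    placeholder and wait for the user to ask for it.
    """
    return "auto-preview" if file_size_bytes <= limit else "click-to-preview"

print(preview_action(5 * 1024 * 1024))    # auto-preview
print(preview_action(50 * 1024 * 1024))   # click-to-preview
```

A 50 MB HDR like the one mentioned above would fall on the click-to-preview side, so opening the light gel tab would no longer stall or crash the app just to draw a thumbnail.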

The other thing is Transporter,

i’m still unhappy with the handling of ZBrush .OBJs: large files crash the app, and the ones that do come through have the texture-alignment polygons triangulated - in other words, UV-mapped models never come through as 100% quads.

Also, Transporter always seems to be “recomputing vertex normals” whether the “process vertex normals” checkbox is selected or not. I thought they might be the same thing… maybe not, but anyway it would be nice to have the option of turning off the “recomputing vertex normals” step, because it slows down loading quite a bit.

Reuben


#79

EI Wish List

Given that EI’s greatest strengths are rendering and ease of use, I propose we take a solid stand in separating ourselves from the crowd in the area of ZBrush rendering for animation. I believe it’s apparent by now - with Pirates of the Caribbean and WETA MudBox - that the ZBrush 2.5D modeling and painting paradigm is HUGE! The real deal. Tops. The shiznizzle! We should focus on it, and EI should become a smart, fast, easy, and doable tool to render and mildly animate large ZBrush scenes with humongous polycounts. That’s what EI does well already anyhow.

This is my first request.

ZTool importer.

  1. Streamline the ZBrush “imports” to the point where they’re perfect and easy to handle - better than, and unlike, any app currently using ZB. This would be a “new” approach beyond what is already possible. Literally pull the ZTool import engine out of ZBrush - ZScripts, licensing, however or whatever it takes to achieve this level of perfection. The best-case scenario would be to simply import a ZTool directly into EI. No app other than ZBrush does this, so I don’t know how intense or challenging it would be. Undoubtedly, ZBrush imports ZTools with the maps and high/low subdiv proxy as “adjustable” depth, and RENDERs. That’s the perfect process. They do it. That ZTool-style import mechanism would be the goose that lays golden eggs for EIAS as a Hollywood film renderer.

EI would be completely unique as a Zbrush Render app if they had access to Pixologic’s “Magic Zbrush” button. “Load Ztool”. No placement or adjustment of textures whatsoever unless so desired. Just import the Ztool and pick your subdiv resolution, RENDER.

Geometry could be handled either by Encage, or EIAS could support an UBERNURB subdivision geometry. I suspect an import via Encage and the Subdiv tool would be more practical and resourceful than EITG writing another geometry type besides polygons. Automate this process to feed into Encage - if not their own non-ACIS uber-NURBS - without linking. Maybe Paralumino can do all this.

Textures would work like a FACT file with associated textures. They just go in the right window, with UV checked. No placement or editing. No setting up rotation (-180).
Maybe it could have an import box like FBX, where this happens once and is saved as a preset. EI would read each map file type (bmp, psd, etc.) as ZBrush exports it (color maps, disp, normals). Whatever the last settings/resolutions were is what EI would reference and import.

So 64-bit images - or was it 32-bit depth? - and zero-midpoint gray would all have to be supported in EI. We know how to do this manually; why should we redo it every time on a computer? I think we should nail it like we NAILED FBX!!

I just bartered a new MacPro from a ZBrush/FBX/Maya work…Bring it!!

I’m ready. Let’s do it.


#80

This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.