
View Full Version : Why do games render in REALTIME while 3d rendering takes forever?


Animals
12-29-2009, 03:19 PM
Hi, I have been thinking about this question forever. How come you can play superb-looking 3D games at more than 25 fps, while rendering just one frame of the same quality in 3ds Max would surely take more than 10 seconds... it would take minutes. There is no way the PC could render 25 frames per second in 3ds Max at the quality of a superb game (like Crysis, etc.).



Best regards
Kalle

scrimski
12-29-2009, 03:38 PM
What happened to the search function?



Nice, post order messed up again.

mister3d
12-29-2009, 03:39 PM
Most things in games are precalculated, though less and less. You can't do raytracing in games, especially complex effects like blurry reflections and so on. Game engines differ from high-end renderers in that they are aimed at speed, whereas high-end renderers aim at quality. And even though you can render teapots in 10 seconds, no game engine will be capable of rendering what a high-end renderer can in 5 minutes.

ambient-whisper
12-29-2009, 03:40 PM
Because games have optimizations to make each scene render as fast as possible. The game knows what to expect in each scene, so it can be optimized.

In a regular 3D package you are free to do anything you want and experiment as much as you want, and most 3D artists don't optimize as much as possible. In most jobs I've had, the texture size for a lot of textures has been something like 4K, even for smaller objects where 512 might be as much as that specific shot needed. Do we optimize each texture for every shot? No.

Games are also optimized for very specific hardware, while 3D software isn't: everyone's computer is different, so software developers have to make very general software that works on a much wider variety of hardware. Console game developers have what, two consoles to worry about?!

However, everything you render in a 3D software package is rendered at higher quality than in a game engine. It might look similar, but there's a ton of filtering going on that will slow down a render: anti-aliasing, raytracing, etc.

InKraBid
12-29-2009, 03:44 PM
First of all, they're completely different engines. In a game, the whole program is streamlined for the single task of graphical output of a given collection of models, shaders and textures, whereas in a 3D package you have a massive set of algorithms, shaders, modelling rules, scripts, calculations and so on. To put it simply, it's the difference between reading a book and writing a library of books.

Animals
12-29-2009, 03:47 PM
First of all, they're completely different engines. In a game, the whole program is streamlined for the single task of graphical output of a given collection of models, shaders and textures, whereas in a 3D package you have a massive set of algorithms, shaders, modelling rules, scripts, calculations and so on. To put it simply, it's the difference between reading a book and writing a library of books.


Thank you.

Concerning polygon and lighting optimization in games, I fully understand that. But still, if you make the simplest scene in 3ds Max with a single simple light, you will get vastly slower renders that take at the very minimum 1 second (and it is always more if you have a couple of simple boxes and meshes, etc.).

"First of all they're completely different engines": hmm, that makes sense, I think. So you mean that no matter how simple and crude I make a scene, I will never get close to the fast rendering of games. Somehow I believe 3D programs should really follow, partly, the path of video games. I see it is hard to realise, but seriously, every time I play games like Crysis (especially the intro and the faces), you have to think it should be possible to render that quickly in 3D packages too...

scrimski (http://forums.cgsociety.org/member.php?u=48634), it is very hard to search for such stuff, because it is not just a movie name or the name of a thing, and it is not a sentence either. I couldn't find anything anyway. I wanted to know why games render faster; sure, I know they are optimized, but that alone doesn't explain it.

ambient-whisper
12-29-2009, 04:12 PM
And each time your result would look like a videogame ;)

Also factor in that most games have some sort of pre-baked lighting in their textures. With a 3D package your shadows will always be far more accurate than what you get in a game engine. That alone will give you a huge difference in render times.

There are so many cheats going on in game engines it's crazy. Look at Uncharted 2, for example, and the realtime shadows they have going in that game. Often the shadows are extremely pixelated; nobody would ever get away with such low shadow settings in their professional work.

Animals
12-29-2009, 04:19 PM
Yeah, the 3ds Max render would look horrible but still take ages compared to the super-fast games :D So it's the whole engine behind the game that is the secret, I guess?

ambient-whisper
12-29-2009, 05:22 PM
Again, you seem to miss the fact that everything in games is pre-set to look the way it does. There is a ton of artistry behind games. Game engines don't make models look amazing by themselves, regardless of the engine's features or how fast it can display them. Anyway, I'm done with this topic.

scrimski
12-29-2009, 05:37 PM
Concerning polygon and lighting optimization in games, I fully understand that. But still, if you make the simplest scene in 3ds Max with a single simple light, you will get vastly slower renders that take at the very minimum 1 second (and it is always more if you have a couple of simple boxes and meshes, etc.)

Apples and oranges. Bake the light into a texture, apply that texture, delete all invisible faces and render again.


scrimski, it is very hard to search for such stuff, because it is not just a movie name or the name of a thing, and it is not a sentence either. I couldn't find anything anyway. I wanted to know why games render faster; sure, I know they are optimized, but that alone doesn't explain it.

These "zomg why does (enter software here) render so slow, answer please. Keep in mind I will ignore any relevant statement." threads pop up on a regular basis, always with the same result.

And by the way: using 'realtime' as a search term for threads in General Discussion, I found at least three results, and I remember many more threads about it.

sundialsvc4
12-29-2009, 05:53 PM
Games render in real time, of course, because they have to. Fortunately, they are never going to be displayed on a screen that is several stories tall. (And if the player spends too much time looking for flaws in the graphics, he gets blown to bits by an evil zorg. :))

The "rendering" that you speak of is really more a form of playback than true rendering, because much of the material that you see is pre-calculated. The game is selecting from banks of this pre-calculated material, wrapping it onto comparatively simple shapes, adding shadows and sending it out. The pre-calculations were the hard part. The game avoids them by the classic tradeoff of "speed vs. space," and of course special-purpose massively parallel hardware.
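The "speed vs. space" tradeoff can be sketched in a few lines of Python (a toy illustration with invented names and numbers; real engines bake lightmaps and visibility data, not 1D curves):

```python
import math

# "Offline" step: bake an expensive falloff curve into a lookup table once.
# (Toy example: real engines bake lightmaps and visibility, not 1D curves.)
BAKED = [math.exp(-0.5 * (i / 255.0) ** 2) for i in range(256)]

def falloff_baked(d):
    # Runtime: a table lookup, no transcendental math per query.
    i = min(int(d * 255), 255)
    return BAKED[i]

def falloff_exact(d):
    # What an offline renderer would evaluate directly every time.
    return math.exp(-0.5 * d ** 2)

# The baked value approximates the exact one at a fraction of the cost.
print(abs(falloff_baked(0.5) - falloff_exact(0.5)) < 0.01)  # True
```

The table trades memory for speed, and the error stays below what the eye notices in a moving game.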

As for the relative slowness of other types of rendering, that's entirely under your control. If you just "push the big fat Render button and waits for what ya gets," then, yeah, you can get very familiar with War and Peace before starting in on Gone With The Wind. But if instead you are resourceful and clever in how you go about it, you can achieve some dramatic improvements.

The secret incantation is: "node-based compositing."

Artbot
12-29-2009, 06:28 PM
The "rendering" that you speak of is really more a form of playback than true rendering, because much of the material that you see is pre-calculated......

That's exactly the point I was going to make. A game engine is not calculating the same amount of detailed information that a 3d renderer is. It may come to that one day, though I think that day is still very far away. There's simply no need for it when there are faster and cheaper "cheats" to get the desired performance.

But overall, comparing the two is a bit like wondering why your car will not go 200mph on the track like that F1 car next to you can. After all, your car has an engine and 4 tires - why can't it do the same thing? Specialization!

JohnnyRandom
12-29-2009, 06:29 PM
Yeah, the 3ds Max render would look horrible but still take ages compared to the super-fast games :D So it's the whole engine behind the game that is the secret, I guess?

Think about it this way, running DirectX in 3ds Max 2010 with AO/shadows/lighting in the viewport is pretty much like a hybrid game engine. :)

Tora_2097
12-29-2009, 06:52 PM
Also, when you launch a render from your 3D app, you are nowadays still mostly using the CPU only.
When you are playing your FPS shooter, you need a powerful graphics card as well. Try running Crysis on the CPU alone. ;)
If you look at tech like iray and Caustic's software, for example, which use GPUs as well, you can see the gains in speed can be tremendous.

Benjamin

Animals
12-29-2009, 07:38 PM
Also, when you launch a render from your 3D app, you are nowadays still mostly using the CPU only.
When you are playing your FPS shooter, you need a powerful graphics card as well. Try running Crysis on the CPU alone. ;)
If you look at tech like iray and Caustic's software, for example, which use GPUs as well, you can see the gains in speed can be tremendous.

Benjamin

Then using the GPU in a 3D application should help a lot.

BTW, so in theory, if we could make a scene in a 3D application where the polygon count is very low and shadows, lighting and textures are lowered to the same degree as in games, what would the result be? It would still render ultra slow, I would guess? But is that because only the CPU is working, or because you can never lower the graphics to such a degree (which I find hard to believe)?

mister3d
12-29-2009, 08:37 PM
Then using the GPU in a 3D application should help a lot.

BTW, so in theory, if we could make a scene in a 3D application where the polygon count is very low and shadows, lighting and textures are lowered to the same degree as in games, what would the result be? It would still render ultra slow, I would guess? But is that because only the CPU is working, or because you can never lower the graphics to such a degree (which I find hard to believe)?
It has been explained to you several times that high-end renderers use different algorithms and create far superior graphics. With a scanline renderer and prebaked lighting it will render very fast. Nobody needs 24 fps with game graphics; we need high-end graphics. Your game engine will NEVER render anything close to what most designers need.

neuromancer1978
12-29-2009, 08:49 PM
Read up on the history of game engines first before making generalizations and comparisons between game engines and 3D apps.

First off, read about Quake, since it is after all the first true 3D engine, and you will find out why game engines are able to do what they do. It has to do with the way the world is designed, the light baking, and how the engine renders the world. Granted, Quake is old, but it started the 3D first-person shooter genre. Why? Because the engine was good enough to render 3D worlds on 40-50 MHz PCs without a GPU. In fact, the GPU market really took off because of Quake.

Of course things have gotten complex and engines have gotten increasingly better at rendering more stuff at once.

3D apps are not designed this way, while they can in fact render at 30+ fps and use real time shaders, they are not designed to render these complex worlds in real time. The actual 3D engine of the app is meant only to power the viewport. The rendering engine itself is also not meant for speed - it is meant for detail and accuracy. Depending on the render engine there are variations on what it is capable of.

A realtime 3D engine is simply a rendering engine that is designed for speed, using tricks and methods that cut down render time to allow 30+ fps, and it can take advantage of other libraries for things like shaders, physics and AI.

http://en.wikipedia.org/wiki/Quake_engine
http://en.wikipedia.org/wiki/3D_engine

705
12-29-2009, 08:59 PM
The short answer: games (mostly) do not calculate things correctly.
By "correctly" I mean physically and realistically. Games are full of textures; they even texture some shadows (a bumped wall or a street road). But you don't notice, because your attention is on the enemy in front of you :D

Mostly, the realistic art/scenes made in games are pre-set, which means that if you change the POV a little bit and spend some time learning how they do it, you might notice the trick.

I used to take a realtime graphics class, and at the beginning of the course the lecturer said: "in the next 3 months, you'll learn how to lie in CG" :D

CHRiTTeR
12-29-2009, 09:18 PM
I hate this question. Every client that demands some 3D work asks this...

The short answer is: games don't render as accurately and are more restricted.

Artbot
12-29-2009, 10:47 PM
so in theory, if we could make a scene in a 3D application where the polygon count is very low and shadows, lighting and textures are lowered to the same degree as in games, what would the result be?

Why don't you just try it? Seems that only hands-on experience will convince you more than any number of experts here will.

Kanga
12-29-2009, 11:10 PM
Also, if your renders are taking ages in your 3D app, there is a good chance you don't know how to optimize your scene. I see people throwing billions of polys, giant textures with every effect known to God, plus a zillion omni lights into a scene; adding a couple of billion polys you never see with no backface culling, raytrace bounces up to infinity, not rendering in passes, and on and on. Not knowing your render engine also helps bring your render times to their knees.

Games are optimised for speed and good looks, and there is a world of technique, with lots of tricks, that goes into making that happen.

Some people do use game engines to make straight animation, and a lot of cut scenes are done with engines, but for things like feature films or even ads that quality doesn't hold up outside a game scenario.

Someone playing a game and someone just sitting and observing are in very different situations. Have you seen Avatar in the theatre? Try doing that with a game engine :)

Even though I think games are way more interesting than movies.

sundialsvc4
12-29-2009, 11:27 PM
Then using the GPU in a 3D application should help a lot.

BTW, so in theory, if we could make a scene in a 3D application where the polygon count is very low and shadows, lighting and textures are lowered to the same degree as in games, what would the result be? It would still render ultra slow, I would guess? But is that because only the CPU is working, or because you can never lower the graphics to such a degree (which I find hard to believe)?
First of all, Animals, get rid of the notion of "equals ultra-slow." That mental association is misleading you. Leave it behind.

The CPU is an ultra-fast general purpose processor. The GPU ("the graphics card") is a massively-parallel special purpose slave coprocessor linked directly to the video output bus ... usually with inadequate cooling. :rolleyes:

In my 3D package (Blender) there is very substantial support for using the GPU to do many things. Blender has a built-in game engine of impressive power, and you can leverage that power in many ways that are not related to games. I am quite sure that other 3D packages do the same. (However, please note that Blender's "world" is, and always has been, video, whereas other 3D packages are tailored for other things such as film. Every package has a target-audience. The GPU probably has nothing to offer a feature-film computation.)

But let me instead focus on "slow." Even my CPU-based methods are not "slow," because I break every shot down. Color, reflection, specular, shadow. Front to back (Z-depth). Moving/non-moving. And soon, I'm ready to begin a process that's very much like a multi-track audio "mixdown." A compositing node network, made up of repeated "stock" groups, combining maybe a hundred discrete inputs in a complex scene. Should any part need to be redone, only that part is revised. If I decide to toss something else for sweetness or what-not, no pain. "Running the mix" takes minutes if not seconds: "geek" that I am, it's automatic!

The upshot? Very satisfactory workflow times, through efficient use of available resources, with or without a GPU. (Gobs of experience as a computer programmer don't hurt neither.) It's a matter of thoroughly knowing your software and efficiently applying it.

grafikimon
12-30-2009, 12:12 AM
Unity has a free version. Why don't you grab it and do the opposite test? Do a render of a textured box in Unity and in your 3D software, and review what each render looks like. Try some complex models in both. See what happens with reflections on, and so on.

Many people more knowledgeable than me have already explained the rest. The best way is simply to compare them next to each other: take a screen grab and a render, and I'm sure you'll see where the cheats are. Like JPEGs, unless you know what you are looking for you never notice the defects.

CaptainObvious
12-30-2009, 01:11 AM
Another contributing reason worth mentioning is the fact that with games, there is less memory shuffling overhead. When you press "Render" in a 3D package, it usually exports the scene from the internal format to the one used by the renderer. Then the renderer parses the scene, allocates memory, tessellates geometry, creates acceleration structures and octrees and whatnot, loads image maps as needed, etc. Usually, this will happen once per frame regardless of what's in your scene.

With a game engine, on the other hand... all of this is already pre-calculated and saved to disk, or it pre-calculates while the level is loading. The game engine loads the whole thing up and then it keeps it in memory until you're finished.
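That difference can be sketched as a toy Python example (the functions here are invented stand-ins; a real renderer's per-frame setup is of course far more involved):

```python
def build_scene():
    # Stand-in for export, parsing, tessellation and acceleration structures.
    return list(range(1000))

def shade(scene):
    # Stand-in for the actual per-frame shading work.
    return sum(scene)

def offline_render(frames):
    # Offline-style: the scene data is rebuilt for every single frame.
    builds = 0
    for _ in range(frames):
        scene = build_scene()
        builds += 1
        shade(scene)
    return builds

def game_render(frames):
    # Game-style: built once at "level load", then reused every frame.
    scene = build_scene()
    for _ in range(frames):
        shade(scene)
    return 1

print(offline_render(25), game_render(25))  # 25 builds vs 1
```

The setup cost is paid 25 times in one case and amortized over the whole level in the other.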

Nightez
01-24-2010, 11:56 AM
First off read about Quake, since it is after all the first true 3D engine and you will find out why game engines are able to do what they do.
http://en.wikipedia.org/wiki/Quake_engine
http://en.wikipedia.org/wiki/3D_engine

What is your definition of a "true 3D" engine? Because Quake certainly wasn't the first game to use texture-mapped, fully 3D polygons.

Stone
01-24-2010, 12:17 PM
I think an easy-to-understand comparison is this:

game engines are realtime in much the same way that the viewports of your 3D program are realtime. You get a preview of a certain quality that can only do certain things.

And just like your viewport, a game engine slows down on stuff that gets too complex. It has a very specific purpose and does it very well, achieving its goal through hacks and compromises.

/stone

Hirni_NG
01-24-2010, 01:01 PM
I hate this question. Every client that demands some 3D work asks this...

The short answer is: games don't render as accurately and are more restricted.

Usually, mentioning that high-quality realtime graphics such as those seen in Crysis/CoD etc. come with a price tag of $5m+ helps with negotiating.

Laa-Yosh
01-24-2010, 06:31 PM
GPUs render with far lower precision, too. Colors aren't as accurate, shadows are fuzzy and buggy, textures are blurry, and there's almost no antialiasing at all. Lighting is calculated per vertex only, and then the values are interpolated for the per-pixel shading. And so on and so on.

Game engines make huge sacrifices and accept a lot of restrictions to be able to perform fewer calculations. Offline renderers are more flexible; they do more work, and that's why they're slower, even if you're only rendering a teapot. You'll find that the AA, the shadows and the shading on that teapot are all far, far beyond what a game engine can get you; that's why it takes longer to compute.
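The per-vertex shortcut described above can be sketched roughly like this (a toy 2D Python example with an invented distance falloff in place of a real shading model):

```python
# Toy 2D positions; the light sits above the middle of the edge.
def light(pos, light_pos=(0.5, 2.0)):
    # Invented distance falloff standing in for a real shading model.
    dx, dy = pos[0] - light_pos[0], pos[1] - light_pos[1]
    return 1.0 / (1.0 + dx * dx + dy * dy)

def gouraud(p0, p1, t):
    # Cheap: evaluate lighting at the two vertices, then interpolate.
    return (1 - t) * light(p0) + t * light(p1)

def per_pixel(p0, p1, t):
    # Accurate: evaluate the lighting at the interpolated position itself.
    p = ((1 - t) * p0[0] + t * p1[0], (1 - t) * p0[1] + t * p1[1])
    return light(p)

# At the midpoint of the edge, the interpolated value misses the highlight
# that the per-pixel evaluation catches.
mid_cheap = gouraud((0.0, 0.0), (1.0, 0.0), 0.5)
mid_exact = per_pixel((0.0, 0.0), (1.0, 0.0), 0.5)
print(mid_cheap < mid_exact)  # True
```

Two lighting evaluations per triangle edge instead of one per pixel is exactly the kind of saving that makes realtime possible and quality suffer.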

KevBoy
01-24-2010, 07:12 PM
I believe it has been fairly well stated: game engines pre-render beforehand. A typical level for Quake could take up to a day to precalculate collision, visibility, lighting and AI data.

Doom 3, however, drastically cuts this down to a precalculation pass of just a few minutes. Most of the lighting is in fact rendered in realtime, but it doesn't look as good.

Unreal Engine 3 is a recent engine that makes a good compromise: most lighting is pre-rendered, and a typical level takes around 30 minutes to calculate all of its components.

CHRiTTeR
01-24-2010, 10:58 PM
I believe it has been fairly well stated: game engines pre-render beforehand. A typical level for Quake could take up to a day to precalculate collision, visibility, lighting and AI data.

Doom 3, however, drastically cuts this down to a precalculation pass of just a few minutes. Most of the lighting is in fact rendered in realtime, but it doesn't look as good.

Unreal Engine 3 is a recent engine that makes a good compromise: most lighting is pre-rendered, and a typical level takes around 30 minutes to calculate all of its components.


A few minutes to precalc? I don't know, but last time I checked it takes quite some time to render a good normal map... I think rendering the normal maps for a complete Doom 3 level would take more than just a few minutes.
Also keep in mind you have to model both the high-res and the low-res mesh.

cowtrix
01-24-2010, 11:34 PM
This is kind of like asking a realist artist why he can't paint as fast as an impressionist.

BigPixolin
01-25-2010, 01:39 AM
Why are race cars faster than tractor trailer trucks?

R10k
01-25-2010, 02:29 AM
Why are race cars faster than tractor trailer trucks?

Yes, but each has wheels, right? So if we took the engine from a tractor-trailer truck and placed it in a race car, I'm guessing that in theory the race car would go at the same speed as the truck.

KillahPriest
01-25-2010, 02:50 AM
Yes, but each has wheels, right? So if we took the engine from a tractor-trailer truck and placed it in a race car, I'm guessing that in theory the race car would go at the same speed as the truck.

But if we took a space shuttle rocket and put it on that same truck....now we're on to something!

pjz99
01-25-2010, 03:09 AM
I don't see that anyone has come right out and said it: game graphics are universally awful compared to pretty much anything you can do with a serious render engine. The thing is, you accept this because you aren't paying attention to the flaws; you're busy interpreting motion and experiencing the game. Take a still frame from any game and examine it closely, and you'll see tons and tons of flaws. These flaws can be hidden well, and are sometimes hard to pick out, but they will always be there. Shadows and lighting will always be wonky, some objects will be obviously polygonal, and a great many objects will be things like a plane of polygons textured with a transparency map to look like a tree, etc. This is aside from the fact that in typical games the environment is tightly controlled: all reflections are mapped, all lights are fixed, and whatnot. Game graphics are just plain low quality, which allows a lot more speed.

Do you think that makers of 3DS Max and Maya and Cinema 4D and all the other big dog apps wouldn't like to give everyone high quality renders at 30 frames per second? :)

Hirni_NG
01-25-2010, 08:55 AM
Why are race cars faster than tractor trailer trucks?

This is the only thing I will say in the future when this topic comes up. (And it seems to come up a lot lately...)

derwonder
01-25-2010, 11:16 AM
First of all, they're completely different engines. In a game, the whole program is streamlined for the single task of graphical output of a given collection of models, shaders and textures, whereas in a 3D package you have a massive set of algorithms, shaders, modelling rules, scripts, calculations and so on. To put it simply, it's the difference between reading a book and writing a library of books.


Really? I always wondered about this too. So I guess that means you could actually convert Maya or 3ds Max into a game engine? lol. Basically you can play with the code, since it's right there in the MEL script slot.

TheThidMan
01-25-2010, 11:42 AM
Surely it's because games use the graphics card to render: a processor designed specifically for that task. 3ds Max/mental ray/whatever uses the CPU, a general-purpose and unwieldy processor designed to do everything.

EDIT: For example, you can get raytrace rendering cards. They don't render in realtime, but they're infinitely faster than any 8-core Xeon setup for raytrace rendering. Special-purpose processors will always be orders of magnitude more powerful than a general-purpose processor.

Wongedan
01-25-2010, 12:26 PM
Because:

3D rendering software is meant to allow infinite possibilities, while a realtime engine is constrained to a specific frame rate and hardware specification.

In a realtime engine everything has been optimized; the models have been compiled, or baked.

Rendering software, for example 3ds Max, still allows you to tweak the model freely and press the render button to see the result, and that really costs processing time. But as 3D editing software it is mandatory to have that, so you don't have to go through the evil process called: COMPILE.

For an easy example, try baking or painting all the shadows into your texture and look at it in your XSI, Maya or Max viewport. It looks pretty, doesn't it? But the problem is that you can't tweak the light physically anymore.

Or, for a next-gen look, try a realtime normal-map shader and see the result; compare it with applying a multi-layered material in your 3D software.

If you are lucky, the realtime engine looks as good as the rendered result, but pre-rendered 3D still gives you complete freedom to add as many details and custom materials as you like.

stew
01-25-2010, 04:36 PM
EDIT: For example, you can get raytrace rendering cards. They don't render in realtime, but they're infinitely faster than any 8-core Xeon setup for raytrace rendering. Special-purpose processors will always be orders of magnitude more powerful than a general-purpose processor.
...as long as you use them for that special purpose and nothing else. You can trace rays on a GPU, but that doesn't mean your GPU can do everything that mental ray does and more. With all the GPU raytracing popping up left and right, I haven't seen any of it come close to the flexibility we take for granted in CPU rendering. Try to find arbitrary programmable shaders, DSOs or texture caching. NVIDIA's OptiX SDK, which is probably one of the more flexible GPU raytracing solutions out there, doesn't even support MIP maps yet.

The future will be exciting, and I can't wait for production ready REYES implementations in CUDA or OpenCL, but it won't happen overnight.

spindraft
01-25-2010, 04:41 PM
Yeah, the 3ds Max render would look horrible but still take ages compared to the super-fast games :D So it's the whole engine behind the game that is the secret, I guess?

Bake the textures and set up a DX shader, and there you have your realtime teapot. It's the optimizing and finalizing of the materials that allows them to render quickly, because you're eliminating variables, resulting in fewer necessary calculations.

In layman's terms, a game engine renders faster because everything has already been rendered and set up specifically for it beforehand, so it has far less work to do.
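The baking idea can be sketched in Python (a toy example; the lightmap size and the lighting function are invented for illustration):

```python
SIZE = 4  # a tiny 4x4 "lightmap" for illustration

def compute_lighting(u, v):
    # Stand-in for an expensive offline lighting calculation per texel.
    return max(0.0, 1.0 - (u + v) / 2.0)

# Offline bake: evaluate the lighting once per texel and store it.
lightmap = [[compute_lighting(x / SIZE, y / SIZE) for x in range(SIZE)]
            for y in range(SIZE)]

def shade_runtime(albedo, x, y):
    # Runtime: no lighting math at all, just a lookup and a multiply.
    # The catch: move the light, and the whole map must be re-baked.
    return albedo * lightmap[y][x]

print(shade_runtime(0.8, 0, 0))  # 0.8 (the brightest texel: 0.8 * 1.0)
```

All the variables have been eliminated up front, which is exactly why the runtime cost collapses to almost nothing.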

ndeboar
01-25-2010, 04:42 PM
One of the things that annoys me in these discussions (sorry if this has been covered already) is that games involve a huge amount of pre-computation.

E.g., when I load up a level of COD: MW2, it takes a good minute or two to load. A lot of pre-processing happens at this stage. Also, loads of models have occlusion and other shader effects baked in, which had to be rendered (not in realtime) at some point.

And I've never seen a game run in realtime with proper edge anti-aliasing.

Nick

ambient-whisper
01-25-2010, 05:27 PM
The future will be exciting, and I can't wait for production ready REYES implementations in CUDA or OpenCL, but it won't happen overnight.

Until then, there's a bunch of fun tools to play with :)

http://furryball.aaa-studio.cz/

http://www.youtube.com/watch?v=lT-7bnPPVgg&feature=player_embedded

ZacD
01-26-2010, 12:26 AM
Game engines cheat and fake stuff more than mental ray/V-Ray does. You're limited in how complex shaders can be, and you may have to bake lighting (not with CryEngine 3, though; amazing stuff, btw). You are also limited in effects, particles, polycounts, texture sizes and animation. And of course there's the whole GPU thing.

sundialsvc4
01-26-2010, 02:39 AM
There's a huge difference between the two scenarios. All of the realtime rendering that you see being done in a game has been devised to be done in just that way. You have no easy way to know just how much material was prepared "off line" during the process of building all those game files.

"Non-real-time" rendering tasks, such as the "slow" ones you describe, obviously might be able to exploit the power of the graphics card. (Rest assured that programmers make full use of available hardware whenever they can.) But the graphics card is a special-purpose processor, and "CG rendering" is a general-purpose objective. It can only go so far.

If you are comparing the GPU to "the Big Fat Render Button," though, you really are making an unfair comparison. That's simply not how it's done. The beautiful scenes that you see in CG motion pictures do not "spring fully-formed like Venus from the digital surf." They're built-up in carefully designed layers. Some of those layers might have been generated using GPU-like hardware. Others are entirely the product of a CPU, or many CPUs. Regardless, they were developed in stages, not "all or nothing."

CryingHorn
01-26-2010, 07:03 AM
If you take a completely empty scene and render 100 frames at, let's say, 720p with mental ray or V-Ray or whatever, it will be far from realtime speed. Why? Because it still calculates a lot of things even when the scene is empty.



Now this raises a question: if realtime engines have so many optimizations, tweaks and other intelligent stuff to make things go faster with the best possible quality, why can't high-end rendering software include even a line in its code saying "skip all calculations if the bucket is empty"?

R10k
01-26-2010, 08:07 AM
...why a high end rendering software can’t put even the line in script "skip all calculations if bucket is empty"

Think about it for a second: how can it tell the bucket area is empty? Via calculations. Where do the cast rays go in a scene? If one area of your scene is empty, and a ray passes through that space to another area that isn't empty, do you 'skip all calculations' in the empty area then?

Maths people are (and I'll use a scientific term here) 'hella smart'. They've thought through this kind of stuff, because the tech behind your average rendering engine is really, really complex, and if something as simple as what you're suggesting were possible, I'm positive that'd be the first adjustment they'd make.
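A toy Python sketch of that point (the 1D "intersection test" is invented for illustration): even a bucket that ends up empty still pays for the ray tests that prove it is empty.

```python
def intersects(ray_x, obj_x, radius=1.0):
    # Invented 1D "intersection test"; it costs work whether or not it hits.
    return abs(ray_x - obj_x) <= radius

def render_bucket(bucket_rays, scene):
    tests = 0
    hits = []
    for r in bucket_rays:
        for obj in scene:
            tests += 1            # work is done even when nothing is hit
            if intersects(r, obj):
                hits.append(r)
                break
    return hits, tests

scene = [10.0]                    # one object, far away from these rays
hits, tests = render_bucket([0.0, 1.0, 2.0], scene)
print(hits, tests)                # [] 3 -- "empty" still cost three tests
```

Emptiness is a result of the traversal, not an input to it, so "skip if empty" cannot be a free check.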

Kabab
01-26-2010, 08:09 AM
I think people are seriously overestimating how much has to be baked offline...

If you target modern hardware, you don't have to bake anything...

A modern engine utilising DX11 can do high-quality shadow maps, SSAO, various post filters, displacement mapping and per-pixel lighting with thousands of dynamic light sources, without any baking.

Only if you target old hardware do you need to go down the bake path...

R10k
01-26-2010, 08:37 AM
If you target modern hardware, you don't have to bake anything...

I don't think I'd go to that extreme. Realtime shadow maps don't look that good (any type that currently exists), they have limitations, and they aren't practical for large scenes. SSAO looks okay, but baking high-detail AO maps will always look a lot better. Realtime displacement mapping is in its infancy (and hardly high-poly), and thousands of dynamic light sources? Yeah, not really ;)

It's all about level of detail. If you want it looking pretty average, and limited to bleeding edge hardware, sure, you don't need to bake anything. But, people do because that's how to get quality results, and extra speed is always handy when games need good ol' cpu power for more than purdy graphics.

InKraBid
01-26-2010, 09:04 AM
"hmm somehow I believe 3d programs should really follow partly the path of the videogames.."

The newer versions of 3D packages are doing just that, and give you the ability to see lighting, ambient occlusion and some materials in high quality -realtime- in your viewport (http://www.youtube.com/watch?v=_Bajid_n4oY). So you could make a preview movie that comes quite close to a game's graphics. -But- when you hit the render button, it engages the CPU and starts tracing every single ray of light, calculating how the light bounces and refracts in every material, and so on..


Kabab
01-26-2010, 09:07 AM
I don't think I'd go to that extreme. Realtime shadowmaps don't look that good (each type that currently exists), they have limitations, and aren't practical for large scenes. SSAO looks okay, but baking high detail AO maps will always look a lot better. Realtime displacement mapping is in its infancy (and hardly high-poly) and thousands of dynamic light sources? Yeah, not really ;)
Realtime shadow maps can look great! When you're targeting DX11, really good. Also they are fine in large scenes these days due to cascaded shadow maps.... The SSAO you can achieve via compute shaders looks heaps better as well; not quite raytraced levels, but still pretty nice... You can basically displace down to a triangle per pixel, or very close; I can't see why you would need more....

It's easy to do thousands of light sources; just look up deferred rendering. It has its drawbacks, but generally it's very good, and many new engines have gone down this path...

Good read here http://www.slideshare.net/repii/parallel-graphics-in-frostbite-current-future-siggraph-2009-1860503
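For anyone wondering why deferred rendering scales to many lights, here's a toy sketch (hypothetical values, Python standing in for shader code): geometry is rasterized once into a G-buffer, and the lighting cost then scales with pixels x lights rather than objects x lights.

```python
# Toy deferred-shading sketch (hypothetical values): rasterize geometry once
# into a G-buffer, then apply every light per pixel. Shading cost scales with
# pixels * lights, independent of how many objects produced those pixels.

def geometry_pass(pixels):
    """One rasterization pass: store position and albedo per pixel."""
    return [{"pos": p, "albedo": 0.8} for p in pixels]

def lighting_pass(gbuffer, lights):
    """Accumulate simple distance-attenuated light per pixel."""
    out = []
    for px in gbuffer:
        total = 0.0
        for light_pos, intensity in lights:
            d2 = sum((a - b) ** 2 for a, b in zip(px["pos"], light_pos))
            total += px["albedo"] * intensity / max(d2, 1e-6)  # inverse-square falloff
        out.append(total)
    return out

pixels = [(x * 0.1, 0.0, 0.0) for x in range(4)]
lights = [((0.0, 0.0, 1.0), 1.0), ((1.0, 0.0, 1.0), 2.0)]  # adding lights is cheap
gbuffer = geometry_pass(pixels)
print(lighting_pass(gbuffer, lights))
```

The drawbacks mentioned above (transparency, for one) come from exactly this structure: the G-buffer only stores one surface per pixel.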

colinbear
01-26-2010, 10:40 AM
Something that I don't think has been considered yet is the latency, or time it takes to write rendered data to disk. Even if your renderer could deliver 720p images at a relatively low 30fps, you are still talking about huge amounts of information that you need to push onto storage media.

1280 x 720 pixels, each of which is RGB at half-float (16 bits per channel) = about 5.27 MB per frame uncompressed (1280 x 720 x 3 x 2 bytes).

At 30fps that's about 158 MB/second of data bandwidth just to write blank frames, let alone actually render anything. That's already beyond what standard disk I/O on most consumer-level machines can achieve.
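The arithmetic checks out (a quick sketch; these are binary megabytes):

```python
# Quick check of the numbers above: a 720p half-float RGB frame, uncompressed.
width, height, channels, bytes_per_channel = 1280, 720, 3, 2  # 16-bit half floats

frame_bytes = width * height * channels * bytes_per_channel   # 5,529,600 bytes
frame_mb = frame_bytes / (1024 * 1024)
print(round(frame_mb, 2))      # 5.27 MB per frame

fps = 30
print(round(frame_mb * fps))   # 158 MB/s just to write blank frames
```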

All of this also assumes the rendered data exists in memory somewhere. With GPU rendering the frames exist in the 3D hardware's memory, so you first have to transfer them back across the bus into main memory prior to writing to disk. All of which adds up to less-than-realtime performance, and we haven't actually rendered anything yet, just transferred blank frames to disk.

In order to render anything we have to shift data in the other direction, transferring geometry, texture and shader data from disk into main memory, and then over to the graphics hardware (assuming it can fit in the available texture memory) before we can do anything with it.

so, irrespective of clever techniques and tricks that can be applied to generate good looking renders, there are some real considerations which make GPU acceleration of final-frame rendering less than trivial.

CryingHorn
01-26-2010, 10:44 AM
Think about it for a second- how can it tell the bucket area is empty? Via calculations. Where do the cast rays go in a scene? If one area of your scene is empty, and a ray passes through that space to another area that isn't empty, do you 'skip all calculations' in the empty area then?

Maths people are (and I'll use a scientific term here) 'hella smart'. They've thought through this kind of stuff, because the tech behind your average rendering engine is really, really complex, and if something as simple as what you're suggesting were possible, I'm positive that'd be the first adjustment they'd make.

That actually makes sense, thanks. For a completely empty scene it's still a bit of a mystery, but I understand what you are saying. I guess there could be more threshold parameters for accuracy. We have those for antialiasing, shadows, FG, GI, etc.; maybe there is also something else for that too.

stew
01-26-2010, 11:16 AM
A modern engine utilising DX11 can do high-quality shadow maps, SSAO, various post filters, displacement mapping, per-pixel lighting, and thousands of dynamic light sources without any baking.
...as long as it's not all simultaneously ;)
You can defer shading as much as you want; shadows for 1000 lights will still require 1000 shadow maps, which means 1000 shadow passes and 1000 * map size * depth bytes of VRAM.

Also, almost every single $COOL_REAL_TIME_EFFECT presentation I've seen at SIGGRAPH had a slide that said "by the way, this won't work with transparencies". In fact, if you want anything semitransparent in your scene, you can't use the deferred shading method shown in the linked presentation. The ability to render order-independent transparency at arbitrary depths is IMHO one of the most significant differences between offline rendering and OpenGL/DirectX rasterization. Certain methods exist, but they either have significant overhead (depth peeling) or depth limitations (e.g. the stencil-routed k-buffer).
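Back-of-the-envelope numbers for that shadow map cost (map size and depth format are my assumptions, not anything from a specific engine):

```python
# Back-of-the-envelope VRAM cost for one shadow map per light
# (map size and depth format are assumptions, not engine specifics).
num_lights = 1000
map_size = 1024        # assumed 1024x1024 texels per shadow map
depth_bytes = 4        # assumed 32-bit depth per texel

total_bytes = num_lights * map_size * map_size * depth_bytes
print(total_bytes / 2**30)   # 3.90625 GiB of VRAM, plus 1000 shadow passes
```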

Nightez
01-26-2010, 11:51 AM
Anyway, I'm quite shocked the thread starter hasn't noticed that there's a vast world of difference in quality between pre-rendered CGI and 'in game' graphics.

In summary we can say game engines pre-bake a lot of stuff and they don't employ any ray tracing. Ray tracing is what creates the realistic lighting and images that you see in movies. Games are not going to look as good as CGI until you have hardware capable of doing the calculations for ray tracing in real time.

For now you are not going to find a game with visuals that rival a big Hollywood movie like Avatar or Lord of the Rings. Even the original Toy Story probably has yet to be matched.

BoBoZoBo
01-26-2010, 02:46 PM
Apples to oranges, but people are working on changing that.
Not only are the render types completely different, but so are the language and the hardware that specific language is designed for.

What is your rendering pipeline? Which renderer/game are you comparing to each other? Are you using the CPU for renders or the GPU? Is it a top-end video card? A top-end processor?

Even if you have the best of each, a video game rendered on a GPU versus a still frame rendered on the CPU is no contest - the GPU will SMOKE the CPU.

This is mostly because GPUs are specialized to run game T&L/shading code, and the game is using specialized code specifically designed for the GPU. Mix that specialization in code with the number of cores in a typical GPU and you get a very fast render.

Rendering on a CPU is different. A CPU has to take many different types of code into account and is therefore much less efficient at certain types of calculations. Not to mention that current popular 4-core CPUs don't even come close to the 100+ cores a good GPU has.

Now, like I said, this landscape is changing fast. Other industries are seeing the value and speed of the GPU and are changing their code in order to take advantage of that processing speed. NVIDIA CUDA is an example of that. They, AMD and Intel are all working on real-time raytracing using the GPU and its programming philosophy. But multi-core programming is just coming of age.

In the end, you need to realize the difference between the two types of renders and be aware of your hardware/software configurations to optimize your render times.

teruchan
01-26-2010, 04:12 PM
Having worked in the games industry for years, when I went to TV and film I often used game techniques (i.e. baking lighting into a set) to speed up renders and get more complex effects that our farm may otherwise not have liked. It can be done with a little planning and effort, and it can be worth it...

Until the producer comes and says, "Yeah... can you move that light 2 inches to the left?"

imashination
01-27-2010, 01:00 AM
thousands of dynamic light sources without any baking.

By 'thousands' you mean 'usually one, occasionally 2-3 in special circumstances', right? If you show me any game which runs more than half a dozen lights in real time I'll be monumentally impressed.

moogaloonie
01-27-2010, 06:36 PM
Let's get to the heart of why this keeps coming up:

Almost every 3D package (model/render/animate) primarily uses a software renderer. Real-time 3D didn't used to be the domain of games exclusively. 3D packages tended to have the best 3D technology under the hood of any software you could buy. Then the GPU came along and people bought them to play Quake or improve the realism of their flight sims. 3D software continues to improve, but the code didn't keep pace with the hardware. Treating this like a stupid question, as nearly every professional seems to do, misses the obvious: that much of what we see rendered in real time appears to us to be as good as what we see rendered in software on the CPU, especially when down-rezzed to a phone or a youtube video.

The question is, when there used to be so many all-purpose packages trying to differentiate themselves and create a market, why are the two which probably touted the GPU's benefits the loudest no longer with us? I'm talking about the short lived "VR" modeler Merlin, and trueSpace which began the GPU migration with version 4 and by version 9 was entirely tied to DX9 for that reason.

And why haven't other programs attempted this is another good question. The author of Anim8or was/is an nVidia employee. It didn't stop him from writing a scanline renderer, and later a raytracer, from scratch when he should've been more than capable of getting the same results (of the former especially) using the GPU.

At the low end, programs like Daz Studio, Carrara, Poser, Pixels, etc. are always listing enhanced OpenGL previews. With almost every version the lighting fidelity, texture accuracy or shader approximation is improved to some degree. But rarely do these programs make any attempt at shadowing or rendering displaced or refractive materials...

I think there is a minimum level of quality at which point something becomes watchable. A 3D Playstation game just looks like interactive origami. A game like Mass Effect could pass for some cool new show on Syfy.

Before I shut up I think it's also important to remember that these people, of which I am one, don't expect our animation program to spit out frames at 120fps. We want to cut weeks down to days, or minutes to seconds even, by using the same methods that make our modern games look so nice. Aside from Blender there just really aren't enough options for doing that.

Anchang-Style
01-27-2010, 07:07 PM
Well, even though I guess everything was said and done before, I found a report on the matter some time ago that gives quite a nice view of it:
rasterization (the gaming method) vs. raytracing (high-end render software)
http://www.pcper.com/article.php?aid=334
Maybe this gives some insight into why mental ray and so on take a lot longer, but also why raytracing might be the way to go as soon as hardware is capable of doing it in realtime.

Nightez
01-27-2010, 07:32 PM
Let's get to the heart of why this keeps coming up:

Almost every 3D package (model/render/animate) primarily uses a software renderer. Real-time 3D didn't used to be the domain of games exclusively. 3D packages tended to have the best 3D technology under the hood of any software you could buy. Then the GPU came along and people bought them to play Quake or improve the realism of their flight sims. 3D software continues to improve, but the code didn't keep pace with the hardware. Treating this like a stupid question, as nearly every professional seems to do, misses the obvious: that much of what we see rendered in real time appears to us to be as good as what we see rendered in software on the CPU, especially when down-rezzed to a phone or a youtube video.

The question is, when there used to be so many all-purpose packages trying to differentiate themselves and create a market, why are the two which probably touted the GPU's benefits the loudest no longer with us? I'm talking about the short lived "VR" modeler Merlin, and trueSpace which began the GPU migration with version 4 and by version 9 was entirely tied to DX9 for that reason.

And why haven't other programs attempted this is another good question. The author of Anim8or was/is an nVidia employee. It didn't stop him from writing a scanline renderer, and later a raytracer, from scratch when he should've been more than capable of getting the same results (of the former especially) using the GPU.

At the low end, programs like Daz Studio, Carrara, Poser, Pixels, etc. are always listing enhanced OpenGL previews. With almost every version the lighting fidelity, texture accuracy or shader approximation is improved to some degree. But rarely do these programs make any attempt at shadowing or rendering displaced or refractive materials...

I think there is a minimum level of quality at which point something becomes watchable. A 3D Playstation game just looks like interactive origami. A game like Mass Effect could pass for some cool new show on Syfy.

Before I shut up I think it's also important to remember that these people, of which I am one, don't expect our animation program to spit out frames at 120fps. We want to cut weeks down to days, or minutes to seconds even, by using the same methods that make our modern games look so nice. Aside from Blender there just really aren't enough options for doing that.
Actually 3D packages are still ahead of the curve and have the more advanced tech. Most 3D rendering technology starts off in these high-end packages before eventually trickling down into games (when GPU manufacturers start to implement these features). It's only a matter of time until we start seeing games with real-time ray tracing & proper shadows, once the hardware becomes fast enough to handle the calculations in real time.

Games currently are very far from looking photo realistic.

moogaloonie
01-27-2010, 07:56 PM
Actually 3D packages are still ahead of the curve and have the more advanced tech. Most 3D rendering technology starts off in these high-end packages before eventually trickling down into games (when GPU manufacturers start to implement these features). It's only a matter of time until we start seeing games with real-time ray tracing & proper shadows, once the hardware becomes fast enough to handle the calculations in real time.

Games currently are very far from looking photo realistic.

I didn't say that application tech wasn't moving forward. Look at the rise of sculpting apps for an example. I meant that some of the early VR-style interfaces of programs like Caligari or Cinema4D, most still in wireframe, were more visually impressive than many games of that era.

Also, it's not so much about things being photo-realistic as it is about them being watchable. The average "A" game of today is better looking overall than the average rendered cinematic from an "A" game released ten years ago. To our then untrained eye, today's game graphics from a game like Forza, Gran Turismo or even Madden might pass for photoreal. I've heard of more than one person saying they've mistaken an NBA or NHL game for an actual broadcast at a glance. The first time I saw GTAIV was at a party and I even mistook that for a movie (from across the room) until the camera gave it away.

The rise in interest in machinima is a testament to the youtube generation's often being more interested in telling a story than rendering a perfect wineglass or a rusty nut and bolt.

ArtemisX
01-27-2010, 08:20 PM
Sorry, I haven't read the entire thread so this might already be out there; if not, it can't be missed -

http://www.game-artist.net/forums/scene-movie-competition/9540-scene-movie-results.html

this was a competition for producing a screen grab within a game engine

rules here: http://events.game-artist.net/scene_from_a_movie/rules.php

the winner was a shot based on Blade Runner, done with Cryengine2 (TY to Sebastian - it seems like an early intention was for it to be UT3, going by their submission threads). To be fair, there are a lot of good "single shot" renderings I've seen on cgsociety, but as a screen grab from Cryengine2 this is quite simply amazing.

Seems like a good example for the OP's question - and the basis of many arguments with my CG department at work (I'm an architect by trade).

http://www.game-artist.net/forums/highlightsimages/SFAM_Images/Blade1.jpg





http://www.game-artist.net/forums/highlightsimages/SFAM_Images/Blade2.jpg

edit: on the recent GPU info from Nvidia - the Fermi GF100 is reputed to be able to produce raytraced images at 0.5 frames a second (if I remember correctly) that look like this. If render engines were ever to be in doubt, it starts this year:

http://techgage.com/articles/nvidia/fermi/nvidia_fermi_raytracing.jpg

Bullit
01-27-2010, 08:40 PM
Unfortunately, 3D graphics applications were coded ages ago. Even then I never understood why 3D apps weren't biased towards Earth reality (units, forces, lights) from the start, since most scenes are supposed to be set on Earth. They are going to feel obsolete soon versus the hardware if they don't change.

They retain flexibility, and they also have the advantage of a head start: until recently, game engine builders had no clue how to expand their business into that space. Only lately are we seeing some limited openings in that area.

I am pessimistic unless people from outside the industry come in and change the game (pun not intended).

sebastian___
01-28-2010, 12:27 AM
Those screengrabs are done in Cryengine, not UT3. And before anyone asks, everything is real-time. No baking.

And I think everyone missed the point of the OP. Even if you rendered a scene with a few million polys, with no antialiasing, low-quality AO and some shaders in Mental Ray, it would still be much, much slower than an IDENTICAL picture in a real-time engine.

I mean - how fast do you estimate the following two would render in Mental Ray?

Two real time renders with no precalculations or baking :

http://img2.imageshack.us/img2/8282/aq7j1.jpg

http://img130.imageshack.us/img130/756/1zmglmcr.jpg

R10k
01-28-2010, 01:36 AM
I mean - how fast do you estimate the following two would render in Mental Ray?

Please tell me you're not seriously asking that question.

...it would still be much, much slower than an IDENTICAL picture in a real-time engine.

Oh my goodness. You are serious.

Look, here's the thing- and this is an important question that'll help explain it. Why are race cars faster than tractor trailer trucks?

sebastian___
01-28-2010, 02:39 AM
You are replying with a question to my question. But I asked first :)

R10k
01-28-2010, 03:37 AM
But I asked first

Yes, but this thread answered you first :)

Come on, it's easy to see why. Mental Ray is doing a lot in the background, even if you break its legs and make it produce an image like that. If on the other hand you let the Ferrari pick up speed, you'll quickly notice Cryengine fall over in the dirt and hurt itself, trying to go where it can't.

Cryengine is a game engine. It's built from shortcut technologies, because speed and cutting corners is what it needs to do. Mental Ray cuts corners too, but they're a concession, not a design mandate. Just because someone can make a game engine image look decent (if you stand waaay back from any surface and shrink down your final image to make it look more detailed), doesn't mean it can come even slightly close to what MR is doing - or can do. The very reason behind its construction (the way it works, the way its shaders are built, etc) is high-end output, not 'as fast as possible while looking good'. If you want that, you turn to a game engine. Everything from its GPU shaders to its rendering methods is designed for that.

sebastian___
01-28-2010, 03:52 AM
I agree. Mental Ray is capable of producing an image with a million times better quality.

Still, the OP is right. If an artist tried to build two scenes with identical results in both render engines, the render times would be very different.

Now we can extrapolate further from this. What if a programmer tried to match the Mental Ray quality but using the real-time engine? Writing new code so the focus would no longer be speed, but matching the quality? Would it still be faster than Mental Ray?

CHRiTTeR
01-28-2010, 04:59 AM
Yes, but who wants a movie with crappy CG like that when you can do it much better?

Your arguments are kind of pointless.

And regarding your question about a game engine with Mental Ray quality:
it would be just as fast/slow. That's why there is a difference in the first place.


You have a brain, right?

mister3d
01-28-2010, 05:21 AM
Now we can extrapolate further from this. What if a programmer tried to match the Mental Ray quality but using the real-time engine? Writing new code so the focus would no longer be speed, but matching the quality? Would it still be faster than Mental Ray?
No.
Programmers spend years trying to optimize for specific hardware. Realtime renderers like V-Ray RT are the best result you can get using the GPU for high-end rendering.

ArtemisX
01-28-2010, 10:00 AM
Yes, but this thread answered you first :)

Come on, it's easy to see why. Mental Ray is doing a lot in the background, even if you break its legs and make it produce an image like that. If on the other hand you let the Ferrari pick up speed, you'll quickly notice Cryengine fall over in the dirt and hurt itself, trying to go where it can't.

Cryengine is a game engine. It's built from shortcut technologies, because speed and cutting corners is what it needs to do. Mental Ray cuts corners too, but they're a concession, not a design mandate. Just because someone can make a game engine image look decent (if you stand waaay back from any surface and shrink down your final image to make it look more detailed), doesn't mean it can come even slightly close to what MR is doing - or can do. The very reason behind its construction (the way it works, the way its shaders are built, etc) is high-end output, not 'as fast as possible while looking good'. If you want that, you turn to a game engine. Everything from its GPU shaders to its rendering methods is designed for that.

Doesn't it depend on what the final image is for, and what the time constraints are? I see no point in working on an image - or increasingly a flythrough - to produce a render that could take a number of days when you have a deadline to hit. Especially if you could really be working on the final image (to be rendered live on a gaming platform) up to the last minute, with a render time that is a fraction of a second. There is a balance here that needs to be assessed between speed and quality, and it is increasingly becoming rather blurred.

At this point, which technology is likely to increase in quality faster: gaming-based rendering, or proper dedicated modelling & rendering software? Look back over the last 10-15 years: proper rendering has improved steadily from an obvious render to something that is hard to discern from a photograph, whereas gaming has gone from practically nothing, or at least basic 3D games like Doom, to something like those Crysis shots above. I know where my money would be right now.

Ultimately, come back in a year or two; if Mental Ray/Vray/Maxwell haven't really improved their game somehow, they'll be rather redundant. The production order will become plan-model-texture-play-screenshot.

mister3d
01-28-2010, 10:13 AM
Ultimately, come back in a year or two; if Mental Ray/Vray/Maxwell haven't really improved their game somehow, they'll be rather redundant. The production order will become plan-model-texture-play-screenshot.
But you have GPU versions of mental ray and vray? They utilize GPUs. Just wait 5 years and it will be almost realtime and much slimmer.

R10k
01-28-2010, 10:40 AM
I see no point in working on an image - or increasingly a flythrough to produce a render that could take a number of days when you have a deadline to hit.

Then don't. Seriously, I don't get why you've mentioned this. If you're happy to sacrifice image quality for the sake of speed, then don't render it. If you want things looking amazing, get your deadline extended. Problem solved.

Ultimately comeback in a year or two, if Mental Ray/Vray/Maxwell haven't really improved their game somehow, they'll be rather redundant. The production order will become Plan-model-texture-play-screenshot.

Yeah, I'm not convinced. The future is stuff like Avatar. And, if you read how that was done, a lot of it was a throwback to turning the detail level up to 11 and letting things render for a freakishly long time. I mean, sure, I agree there will be some GPU assisted overlap to help in some situations, but it's silly believing every production renderer will need to become a game engine hybrid to avoid becoming redundant.

The very thing that makes game/GPU based engines fast is their lack of accuracy. The thing that makes a proper renderer good is the extra accuracy. If you combine the two, you muddy the waters, meaning the potential for visuals like those in Avatar go flying right out of the window.

Hirni_NG
01-28-2010, 10:41 AM
Not sure if this has already been mentioned, but everyone who wants to work with GPU graphics in a production context can do so:

http://www.studiogpu.com/
http://furryball.aaa-studio.cz/

not to mention all the free graphics and game engines and mod tools for even the most sophisticated engines.

So if your render quality can deal with the limitations implied (and there are quite a few dealbreakers for certain requirements), it makes sense to use those renderers.

Heh, it is kind of funny that someone simply puts two images side by side and thinks he can show something new to the thousands of highly skilled programmers, of both game engines and renderers, as well as the thousands of researchers, which include the inventors of all the methods applied.

moogaloonie
01-29-2010, 07:58 PM
Not sure if this has already been mentioned, but everyone who wants to work with GPU graphics in a production context can do so:

http://www.studiogpu.com/
http://furryball.aaa-studio.cz/

not to mention all the free graphics and game engines and mod tools for even the most sophisticated engines.



Woooah... I don't have the budget for either of those. So there isn't one single 3D package that uses GPU rendering primarily? I just find that strange given how many 3D programs there used to be.

If 3D-Coat ever adds animation, it'd be perfect for some of the stuff I've been wanting to do.

http://www.youtube.com/watch?v=R-_vanvzAVE

Nice depth-of-field at the end there...

imashination
01-29-2010, 08:10 PM
http://img2.imageshack.us/img2/8282/aq7j1.jpg

Thats great, except its massively, hugely limited. You cant:

Add a glass vase in front which refracts the background through it.
Use any texture over 8000 pixels
Have any project larger than 2 gigs incl textures
Sub surface scatter through a volume
Render any vaguely realistic hair
Apply any realistic DOF
Have a project larger than the game engine's world size limit
Output multiple passes

Do we really need to go on, or can you still not understand why a full render engine, which is doing far more, will ultimately be slower?

Or let me put it another way: if it could be done, it would be. People are working on it, but it's years away from being a stable and feature-rich technology. There's no conspiracy to stop it happening.

sebastian___
01-29-2010, 11:02 PM
I think this render engine is great if you just happen to have an animation project which requires:
- not photo-realistic rendering, but close
- huge forests and massive vegetation density, and an incredibly easy, realtime way to build the forest / grass field and so on
- DOF, motion blur, soft shadows, skin shaders with SSS, a facial editor with extended morph editing; in Cryengine 3, global illumination with color bleeding (not Mental Ray quality though)
- modeling and building organic geometry in realtime - with deform or blur brushes - with two types/layers of realtime AO, after which you paint with textures and displacement
- fog, light rays, particles - dust particles glittering in the light rays, smoke, volumetric clouds
- camera animation - with the option to edit multiple camera cuts right in the editor's track view

If you need something else - then this editor is not for you.

But if you need exactly this - then you are in luck. Because you can make all this in real-time. And that can help you do better work, because you don't have to wait for a render. You see instantly what is wrong. Even color correction, contrast, gamma, levels and more are real-time.

But most of the points mentioned can be done:
- glass vase with refraction - possible.
- a texture over 8000 pixels - I don't think so
- you can have larger projects with a 64-bit OS
- there are two types of subsurface scattering shaders included (I don't know about the quality though)
- there is a hair shader, but the hairs would probably still have to be geometry
- there is a realistic DOF shader with bokeh in the works
- you can have larger projects - the engine does not have a world limit

You can output many types of passes.

Alpha pass, z-depth pass - and many others:

Alpha pass :

http://img193.imageshack.us/img193/8957/palmtreephotoc.jpg

post process DOF made with z-pass

http://img62.imageshack.us/img62/1944/m0100000.jpg
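For the curious, post-process DOF from a z-pass boils down to blurring each pixel by its distance from the focal plane. A toy 1-D sketch (hypothetical values, nothing like the actual engine shader):

```python
# Toy sketch of post-process DOF from a z-pass (1-D, hypothetical values):
# blur radius grows with distance from the focal depth in the z-buffer.

def dof_blur(colors, depths, focal_depth, strength):
    out = []
    for i, d in enumerate(depths):
        radius = int(abs(d - focal_depth) * strength)   # circle of confusion
        lo, hi = max(0, i - radius), min(len(colors), i + radius + 1)
        window = colors[lo:hi]
        out.append(sum(window) / len(window))           # box blur per pixel
    return out

colors = [0.0, 1.0, 0.0, 0.0, 0.0]
depths = [5.0, 5.0, 5.0, 20.0, 20.0]   # last two pixels far behind focus
print(dof_blur(colors, depths, focal_depth=5.0, strength=0.2))
# in-focus pixels pass through untouched; out-of-focus ones get averaged
```

Real implementations use a smarter kernel (bokeh shapes, depth-aware weights), but the z-pass-drives-blur-radius idea is the same.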

Kabab
01-30-2010, 12:06 AM
Please tell me you're not seriously asking that question.



Oh my goodness. You are serious.

Look, here's the thing- and this is an important question that'll help explain it. Why are race cars faster than tractor trailer trucks?
No, he has a point...

The answer is quite simple really... Crytek is 100% rasterization, no ray-tracing; obviously this maps to the GPU very well, so you get crazy fast speeds...

In mental ray you have the option to turn off ray-tracing and do a pure raster rendering, but this of course is pumped to the CPU, because until recently GPUs have not been able to do the wide variety of effects possible with a software rasterizer; this however has changed now, particularly with DX10/DX11 imho...

So the criticism is really: why can't we have mental ray etc render via the GPU when we turn off ray-tracing, but keep the normal workflow... Which is a valid point imho.

Alias tried something similar back in the day with the "Hardware Renderer", but I think it was immature and poorly implemented.

imashination
01-30-2010, 09:25 AM
But most of the points mentioned can be done:
- glass vase with refraction - possible.
- a texture over 8000 pixels - I don't think so
- you can have larger projects with a 64-bit OS
- there are two types of subsurface scattering shaders included (I don't know about the quality though)
- there is a hair shader, but the hairs would probably still have to be geometry
- there is a realistic DOF shader with bokeh in the works
- you can have larger projects - the engine does not have a world limit

You can output many types of passes.
Alpha pass, z-depth pass - and many others:

Alpha pass :

You cannot refract or reflect light in any game engine; none of them can do this. There are various "make it wobbly" and "render what's behind you" effects, but none of them can bend light in any useful way.

The render project needs to fit in the gfx card's onboard RAM. The average amount of RAM on existing system gfx cards is 256 megs. The average RAM on a brand new gfx card is 750 megs. The highest amount of RAM on an available gfx card is 2 gigs.

The only SSS shaders in game engines equate to a simplistic "however bright it is on the front, make it the same brightness on the back". This is what makes the vegetation look decent. But there's no way the cry engine is going to give you a decent glass of milk.

The hair systems look horribly CG, not even vaguely realistic

I'm not talking about alphas (and your image doesn't show an alpha pass, just materials with alphas in them). I'm talking about splitting out the image's reflections, specular etc.

Anchang-Style
01-30-2010, 10:39 AM
Still I think Crytek really gave a big boost to realtime graphics; sadly, in their games they don't show much creativity in using its possibilities. It would be interesting to see what Crytek could do with a hybrid engine... at least raytracing for light effects or something. I think with CE2 they really managed to push today's systems to the max and make it show (unlike the STALKER engine... is that the Chrome engine? which runs and looks like... oh well, you know what I mean). I don't want to say that UE3 is bad (even though the output differs widely in quality... just compare what was done in GoW1 and 2 with what Square Soft and Mistwalker did, who weren't able to get more than a better-PS2 look out of it), but their shadow system is just not perfect. What was my point again?
The Quake 4 experiment gives a nice example of what raytracing adds to the visuals.

But isn't there a realtime version of vray?

Hirni_NG
01-30-2010, 12:40 PM
No he has a point...
So the criticism is really why can't we have mental-ray etc etc render via the GPU when we turn off ray-tracing but keep the normal work flow... Which is a valid point imho..


As a matter of fact, mental ray can use OpenGL to accelerate scanline rendering and has been able to since 2005 or so. Also, one can write Cg shaders to perform GPU shading, though this has been deprecated because it no longer makes sense, with MetaSL and newer developments.

Also, the Gelato renderer, which is strongly GPU accelerated, has been around for years, though it is also deprecated because the technology it builds on is old.

It seems that most of the arguments come from the fact that people are not really up to date; GPU rendering is not the future, it is already here. There are several DirectX-based renderers, and pretty much every render company has announced GPU-accelerated renderers; Octane, VRay RT, Luxrender GPU, iray and fryrender are just those off the top of my head.

As for the Crysis pictures, it must be clear that the complete geometry has been painstakingly tuned and that all effects and methods used have been finely optimized, investing a high amount of manpower just to get this look. Let's take a look at some less optimal viewpoints of Crysis:

http://ve3dmedia.ign.com/ve3d/image/article/745/745255/new-crysis-dx10-screenshot-20061110001316019.jpg

http://img4.imageshack.us/img4/3442/crysis2009112719240338.jpg
The character in front shows loads of poly edges and rather low-quality shading. All the plants are clearly billboards with repeating textures and instances; the grass also shows the usual billboarding artifacts, though this is mostly disguised by the nonexistent variation of lighting in the shadows.

The simple fact that in a computer game the user takes control of the viewpoint, and that this viewpoint has to be delivered in around 20 milliseconds, does not really allow a comparison to productions where viewpoints are predetermined and render times can be arbitrary.

sebastian___
01-30-2010, 12:45 PM
You're right. Almost everything is faked in cryengine; a real refraction would be a raytraced one. Still, I just played in the viewport with the refraction, and fake as it is, it looks pretty good. At least on par with the other faked features like the fake AO and so on.

http://img707.imageshack.us/img707/3518/screenshot05161.jpg
http://img402.imageshack.us/img402/3280/screenshot05171.jpg


fake reflection
http://img119.imageshack.us/img119/6250/screenshot0097res780.jpg

The SSS is fake as well, but I think there are 2 or 3 different shaders: one for leaves, one for ice and snow, and one for skin. The skin shader is based on premade maps with SSS information. Still, it's very convenient. You also have many sliders, like melanin, SSS multiplier, rim power and so on.

As far as I know - in 3d movies they are also doing tons of cheats so they can finish on time.

Some sss skin shader shots

http://img706.imageshack.us/img706/5964/crysisf.jpg


http://www.hiphopgamershow.com/wp-content/uploads/2009/08/crysis-images.jpg

I was not talking about the hair in game. Of course it looks low-quality CG; it was made for a game. But the engine has a hair shader with hair-like specularity and so on. You could do much better hair with it.

About the alpha channel - that photo has a Crysis palm tree on top. I could composite that palm tree because it was saved with an alpha channel. Like this :
http://img36.imageshack.us/img36/5508/transparentalphagray.jpg

And again, all kinds of passes are possible. For the palm tree composite I used an AO pass and a shadow pass, plus the picture with the alpha.


Hirni_NG: the first one is a very old picture. It would be funny if it were a picture from the Blur rendering, which was rendered with a software renderer, probably Brazil. A few years ago Crytek commissioned Blur Studio to make a rendered previz for the upcoming Crysis.

The second one looks like medium quality. About the low-poly models: nobody is forcing you to use low-poly models. I posted here on cgsociety a picture with 500 mil polys. Even with that kind of poly count the render is still faster - the point of this thread.

mister3d
01-30-2010, 01:33 PM
Those are quality effects produced either with Vray or Mental Ray.
http://img190.imageshack.us/img190/8992/12498268.jpg (http://img190.imageshack.us/i/12498268.jpg/)
We need precise contact shadows and reflections, and other effects, more realistic than those the Crytek engine can produce. All those cheats have been available for a long time in raytracing engines and are at your disposal: for example baking GI, AO, baking shadows or using depth-map shadows, using interpolated reflections or reflection maps, post DOF and MB. Your point is "game engine", whereas it's GPU vs CPU. There are already GPU-accelerated renderers, and they will only improve with time. Wait a year or two and rendering will be blazing fast.

sebastian___
01-30-2010, 02:34 PM
If you need raytrace effects then you are forced to use a renderer like Mental Ray or Vray and so on. No argument here. But if your project/animation doesn't need to be photorealistic, working in real time can be amazing.
I can't wait for the times when I will work with a raytracing engine with the speed available now in cryengine.

I replied here to statements like :
"Real-time game engines obtain their speed because everything is baked".
"There's no advanced specular - no nothing ... no high poly ... no shaders - everything is flat ... with phong shading only ..."

EDIT:

Since you posted a caustics rendering, I will also post a fake caustics rendering.

http://img204.imageshack.us/img204/8754/screenshot05181.jpg

http://img208.imageshack.us/img208/8716/screenshot05201.jpg

mister3d
01-30-2010, 02:42 PM
If you need raytrace effects than you are forced to use a render like Mental Ray or Vray .. and so on. No argue here. But if your project/animation doesn't need to be photorealistic - working in real time can be amazing.
I can't wait for the times when I will work with a raytracing engine with the speed available now in cryengine.

There is a certain drawback with a game engine: if you want to push it further, it won't. And with mental ray I can lower the quality to something similar. The gain in speed is not as amazing (ok, even if it's 10 minutes vs 100, it doesn't matter much as it's still very fast in either case). And soon we will see GPU renderers or hybrids.

Hirni_NG
01-30-2010, 03:05 PM
Hirni_NG : the first one is a very old picture. It would be funny to be a picture from the Blur rendering which was rendered with a software render. Probably Brazil. A few years ago Crytek commissioned Blur Studio to make a rendered previz for the upcoming Crysis.

The second one looks like medium quality. About the low poly models. Nobody is forcing you to use low poly models. I posted here on cgsociety a picture with 500 mil poly. Even with that kind of poly count - the render is still faster - the point of this thread.

Unfortunately, poly count is not a good metric for rendering speed. In fact, CPU real-time raytracers do a better job at interactively displaying a huge number of polys.
Did you also take the export time, game engine startup and asset loading time into account in your calculations?

It should be evident that game engines simply cannot meet the flexibility requirements of a standard production, and that caustics produced by projecting an animated texture are simply not comparable to a full photon-traced or even path-traced solution.

sebastian___
01-30-2010, 03:19 PM
Well, it's funny. Beowulf used projected fake caustics. So a movie made by a studio with hundreds of people and tons of resources CAN USE faked methods,
but a single individual with no resources can't resort to such cheap tricks?

Yes, it's true, faked caustics are not the same as a real caustics render. But if big studios can use faked methods, then so can I.

DanielWray
01-30-2010, 03:28 PM
You know, for real-time engines you still have the time spent baking the data using 'offline' rendering engines.

Also try even coming close to something of this quality in a real-time engine and see how far you get;

http://fusedfilm.com/wp-content/uploads/2009/06/monsters_inc_028.jpg

Now the OpenCL and CUDA accelerated engines are a different matter altogether; they aren't running at 60fps, but they don't need to be. Still, there is a long way to go before you're getting shots like the one above out in a few seconds.


EDIT: The above example is rather simple, a single character. Try doing something along the lines of;

http://farm3.static.flickr.com/2587/4054882718_4fd243bb4f_o.jpg

or

http://www.slashfilm.com/wp/wp-content/images/zz476ae106.jpg

Now at 4k I can pretty much guess that any engine would crumble under the massive amounts of geometry, shaders, 'HUGE!' textures and lighting calculations required, not to mention the amount of Vram that would be required - way more than 2 gig.

sebastian___
01-30-2010, 03:57 PM
The Avatar picture wasn't necessary. As I already said, I don't believe the cryengine would be a good choice for photorealistic rendering. But how many 3d animated films produced have the quality of Avatar? How many have a highly stylized non-photorealistic look?

And do you think that Avatar picture was made in one shot? That the quality came straight from their renderer? They also use many cheats, compositing work and passes to get to that beauty.


And oh look. On the first page on cgsociety news :

- Weta Workshop adopts StudioGPU MachStudio Pro for TV production pipeline.

Weta worked on Avatar.

StudioGPU MachStudio have interesting features but judging by the pictures posted on their website - the cryengine shots look nicer.

DanielWray
01-30-2010, 04:16 PM
Yes, Yes I do know that Avatar's production would have taken many many passes to get the final result and it would have taken many different render passes and FX layers to achieve that.

But that's not the point: in a game engine you would need to have baked, or at least pre-calculated, a lot of stuff, which uses offline computing time. So you aren't saving any time, because the lower quality of a game engine such as CryEngine means you would have to fix more frames and correct issues, such as objects going through bounding boxes and a host of other problems that occur regularly in game engines.

Do you understand that a dedicated rendering engine, whether accelerated by the video card or not, will always produce better results (in the right hands, of course) than a game engine currently can?

Also look at a number of crysis screen shots and you'll see the flaws that occur.

sebastian___
01-30-2010, 04:35 PM
I understand and I agree (as I said before).

Still - I'm not saving time ? Precalculated stuff ?

Let me just describe two types of workflow:

First: I'm building a piece of a forest and a nice grass field with many flowers, some stones of different sizes, and maybe tons of leaves, green and yellow, scattered on the ground.
I'm doing this with Vue, or 3ds max and Vray. And it's an extremely slow process, with very slow and annoying viewport refreshes due to the thousands of objects (or more). Rendering a single frame is also incredibly slow. Goodbye tons of adjustments and picking a nice camera position in real time.

Or I'm doing the same thing in cryengine and it's super fun and super fast. I just paint with the brush, and with a single stroke I'm painting grass, stones, flowers and everything. I can adjust the AO and other stuff. The quality is not the same, though - still worth it.

DanielWray
01-30-2010, 05:09 PM
If it were as simple as that, studios would actually do it. I understand what you're getting at, but it simply isn't flexible enough for most needs.

Perhaps for low-budget animations where quality isn't so important and demands aren't all that great, then something like CryEngine could be used, but as I see it, it would be in very limited use.

CaptainObvious
01-30-2010, 08:02 PM
The second one looks like medium quality. About the low poly models. Nobody is forcing you to use low poly models. I posted here on cgsociety a picture with 500 mil poly. Even with that kind of poly count - the render is still faster - the point of this thread.
How are you going to render 500 million unique (ie, non-instanced) polygons on a GPU? I've never seen an engine that can handle that many. 500 million triangles might mean roughly a billion vertices (depending on topology, of course). Let's say a billion, to make it nice and simple. Suppose each vertex needs 32 bits of positional data for each axis. That's eleven gigabytes of memory for the vertices alone. I don't think any real-time engine allows for dynamic handling of memory, either. Some offline render engines are pretty good at that.
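That arithmetic checks out; as a quick sketch:

```python
# The memory math from the post above: a billion vertices, each storing
# x, y, z as 32-bit (4-byte) floats.
vertices = 1_000_000_000
bytes_per_vertex = 3 * 4
total = vertices * bytes_per_vertex
print(total / 2**30, "GiB")  # ~11.2 GiB -- the "eleven gigabytes"
```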

Besides, regarding polygon count... as long as you're not deforming geometry (and end up needing to refresh the acceleration structures), ray tracing scales significantly better with poly count than normal OpenGL or Direct3D rasterization does. I've rendered tens of millions of polygons in real-time in FPrime more times than I can count, even as the OpenGL viewport was struggling with it. As long as the resolution isn't too high and you're only tracing camera rays, it can actually be significantly faster than an OpenGL viewport of the same geometry.
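A sketch of why that scaling holds: with a bounding volume hierarchy (BVH), each ray does roughly log2(N) intersection tests rather than touching all N triangles (idealized figures; real traversal visits a small multiple of this).

```python
import math

# Idealized balanced BVH: per-ray work grows with the depth of the tree,
# i.e. logarithmically in the triangle count.
def bvh_tests_per_ray(num_triangles):
    return math.ceil(math.log2(num_triangles))

for n in (1_000_000, 10_000_000, 100_000_000):
    print(n, "tris ->", bvh_tests_per_ray(n), "tests/ray")
# 100x more triangles only adds a handful of tests per ray, while a
# rasterizer still has to process every triangle every frame.
```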

sebastian___
01-30-2010, 09:15 PM
This was the picture. Of course this is not real time, but it's still faster than a CPU render. The render was not realistic - no SSS shaders - but it was making a point. I don't know if the flowers are instanced or not, though.

http://img162.imageshack.us/img162/4991/lex4artdaisiesfieldwith.jpg

I noticed the video card memory is important in real-time rendering. For example, with a 768 MB video card I could render at a maximum resolution of 4800 - a little higher with a simple scene. With 1.5 GB the resolution could be increased to 9000 or more.

That would be a benefit and a limit of the real-time solution.
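Those caps are consistent with simple render-target arithmetic (assumed per-pixel byte counts; a real engine allocates several more buffers than this):

```python
# Memory for one HDR render target plus a depth buffer. The whole target
# has to fit in VRAM alongside the scene itself.
def target_mb(width, height, bytes_per_pixel=16 + 4):
    # 16 bytes for RGBA 32-bit float color, 4 bytes for depth
    return width * height * bytes_per_pixel / (1024 * 1024)

print(round(target_mb(4800, 4800)))  # 439 MB -- plausible on a 768 MB card
print(round(target_mb(9000, 9000)))  # 1545 MB -- right at the 1.5 GB limit
```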
It's like in a professional sound studio. You have two solutions. You use a software only editor with software real-time audio effects. You don't have a limit. You are limited by the CPU and memory. But the system could become sluggish and slow.

Or use a hardware solution. There's little pressure on the CPU - the system is very responsive - but you have a limit on how many effects you can use simultaneously. You want more effects - buy a bigger card / or more cards. Still - most sound studios go with the hardware solution while the individuals choose the software only solution.

imashination
01-30-2010, 10:24 PM
Well, it's funny. Beowulf has projected fake caustics. So - a movie made with a studio with hundreds of peolpe and tons of resources - CAN USE faked methods.
But a single individual with no resources can't resort to such low cheap ways.

Yes - it's true - a faked caustics is not the same to a real caustics render. BUt if big studios can use faked methods - then so can I.

You're 100% missing the point, and frankly I think you're doing it on purpose. Faking and cheating is not a problem. The problem is that using any of the realtime engines you've listed, the user has absolutely no choice in the matter.

If they want most of the effects mentioned, they have to pre-generate a rippling caustics animation in some other app, they have to prebake much of the shading into the textures. A full render engine gives you the options.

COD: MW2, the most expensive game ever made and arguably the one with the most impressive graphics yet seen, has realtime lighting and shadows, but much of what you see is prebaked into the textures. A quick example: go to the Estate multiplayer map, the one with the house up on a hill with the greenhouse next to it.

Inside this house is a wooden staircase with gaps between each slat that let light from above filter down through them. These are all hard-baked into the texture, as is much of the lighting in most levels. Yes, your gun flashes and casts a shadow, but like most games, you're limited to 1-2 realtime lights at most; rarely do you see more than this, because if you did, the draw speed would plummet.

sebastian___
01-30-2010, 11:00 PM
I think you missed where I mentioned 3 times at least that I fully recognize the difference between a real-time engine and a Mental Ray type renderer.

I get your point.
I'm just correcting some preconceived old notions about real-time engines and what they can do, as I have at least a year of extensive experience working with a real-time engine.

And there you go again with the baked and prebaked stuff. COD: MW2 and the Unreal engine may require tons of prebaking, but that's not true of all real-time engines.

With some engines the matter is simple: you have your textured model in 3ds max, and you export it. Just convert your textures to DDS format - a few minutes' operation (a drawback, I agree). And in the real-time engine you have your realtime shadows and ambient occlusion, and in newer versions low-quality global illumination with color bleeding. No baking. Now if you want, you can of course bake your shadows and GI into textures, but then you will lose some of the beauty of a real-time renderer.

You don't have to export the caustics; some renderers have that built in, including an extensive library of other (customizable) objects, vegetation and presets such as water, clouds, fire, sparks, waterfalls, smoke and so on.

EDIT: a fun feature is the real-time motion blur. You can activate object motion blur - an expensive type of motion blur - and you get motion blur right in the viewport while you work and move stuff. So if you rapidly move a wall or a brick, it moves with motion blur.

If those features are so hard to believe - I might make a video showcasing the realtime features in viewport

Kabab
01-30-2010, 11:30 PM
How are you going to render 500 million unique (ie, non-instanced) polygons on a GPU? I've never seen an engine that can handle that many. 500 million triangles might mean roughly a billion vertices (depending on topology, of course). Let's say a billion, to make it nice and simple.

Oh come on, even in full-blown production it's rare to render that many tris in one hit...

Let's not forget you can now do proper tessellation on the GPU as well, so you can hit very high poly counts with little overhead.
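For a sense of scale (toy arithmetic): each tessellation level splits every triangle into four, so amplification is exponential in the number of levels, and none of the amplified geometry has to live in system memory.

```python
# Triangle count after n levels of uniform 1-to-4 tessellation.
def tessellated_tris(base_triangles, levels):
    return base_triangles * 4 ** levels

print(tessellated_tris(10_000, 5))  # 10,240,000 -- a 10k mesh becomes ~10M tris
```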

sebastian___
01-31-2010, 12:17 AM
A crowd test I made. Notice the plant dynamics as the men pass through.

Link with longer movie (http://download.vam-online.com/plants-dynamics.mp4) - 2.4 MB

http://img190.imageshack.us/img190/7522/plantsdynamics.gif

cowtrix
01-31-2010, 08:42 AM
A pertinent video. Well, it gets pertinent about halfway through.

http://www.gametrailers.com/video/quake-con-08-rage/38003

Hirni_NG
01-31-2010, 12:30 PM
Well, it's funny. Beowulf has projected fake caustics. So - a movie made with a studio with hundreds of peolpe and tons of resources - CAN USE faked methods.
But a single individual with no resources can't resort to such low cheap ways.

Yes - it's true - a faked caustics is not the same to a real caustics render. BUt if big studios can use faked methods - then so can I.

Really? - Are you sure you are not a troll?


I get your point.
I'm just correcting some preconceived old notions about real-time engines and what they can do. As I have at least a year of experience extensive working with a real-time engine.


So you have created Machinimas using Crysis assets and the CryEngine Sandbox for about a year..


If those features are so hard to believe - I might make a video showcasing the realtime features in viewport

It seems that you have not understood some of the concepts behind these rendering techniques. There is a basic difference between instanced polys and unique polys, just as there are basic differences between caustic, shadow and motion blur techniques and their range of applicability, flexibility and accuracy.

CaptainObvious
01-31-2010, 03:34 PM
If your clients are happy with the quality you get out of a real-time engine, then by all means go for it. Saves you from having to build a render farm. But I can think of a million cases where the client goes "can we just do... this?" and you have to say no, because the engine doesn't have the capabilities.

sebastian___
02-01-2010, 12:34 AM
A troll ???? You like to say that word don't you ? :)


So you have created Machinimas using Crysis assets and the CryEngine Sandbox for about a year..


I just studied the engine for over a year, so I know technical details about the subject. I can also back them up with pictures and movies. Is something wrong with that?


A pertinent video. Well, it gets pertinent about halfway through.

http://www.gametrailers.com/video/quake-con-08-rage/38003

Interesting what John Carmack says there. True.

CaptainObvious : I was trying not to discuss here the applications for a real-time engine. I talked about that in the other topic. Here I was trying to point the capabilities of a real-time engine.

But it's true what you said. Maybe a million cases where the engine would not be good... and a few where it would be the most recommended.

CaptainObvious
02-01-2010, 12:44 AM
Here I was trying to point the capabilities of a real-time engine.
And what capabilities are those, exactly? Beyond the ability to render really really quickly, real-time engines are severely limited because of their extreme specialization. Effectively, you're trading generality for performance. Obviously, being able to render quickly is a great advantage, but when it's fine to spend five minutes, an hour or a day rendering a frame, why would you want to limit yourself to what the RT engine can do? There is nothing achievable with a game engine that you couldn't do better (if more slowly, perhaps) with offline rendering. The only thing you gain is speed.

sebastian___
02-01-2010, 06:57 AM
The only thing you gain, is speed.

I believe you can gain more than speed. It's difficult to explain - but I will try.

With real-time solutions you can sometimes obtain better pictures. And why? Because you can try so many more looks and designs. With a slow solution you sometimes give up due to the enormous amount of time required to build a city or a forest.

Another example: users of the crymod forum always asked for a way to have shadows enabled even for objects located at huge distances. For performance reasons the shadows were not visible at such distances, or were replaced with some fake blurry AO. You could adjust the shadow parameters, but only by entering numbers. I tried some combinations, but only when I built some sliders and could tweak those parameters in real time could I easily obtain the shadows. Here are the results:
http://img25.imageshack.us/img25/6716/aerialstitch.jpg

http://img218.imageshack.us/img218/9360/screenshot00761.jpg


Of course that's not the case when you have to make a render of a bottle of perfume or even a room. I would certainly use Vray or Mental Ray render for that. It would be silly not to.

I would also use Vray, MR or LT if I would have to use 3d objects composited with real footage.

Another good use for a real-time engine would be virtual walk-throughs of a building. Most of the time architects use realtime engines with lower quality than the following picture rendered in cryengine2:
http://img122.imageshack.us/img122/1662/14boqs51.jpg

earlyworm
02-01-2010, 10:12 AM
I don't know if some of this has been mentioned already, but here are some things to consider about software renderers versus hardware renderers (game engines, GPU rendering) for final renders.


AOV (Secondary outputs). With software renderers like PRMan and 3Delight you get these outputs practically for free. With hardware you'd probably have to write specific shaders for each output pass you wanted. And typically this is more than just alpha and depth.
Bit Depth. The renderer needs to be able to handle and output image formats greater than 8-bit.
Economics. This is touched upon in the latest FXguide podcast. There are lots of costs involved in maintaining a render farm. How many GPUs do you need on your farm to compete with your CPUs? How much does it cost to maintain those GPUs vs CPUs?
Research and Development costs. It takes a lot of effort and money to create the software and hardware pipelines for these sort of things.

That all said, hardware rendering and the technology used in game engines are being looked at for (and occasionally used in) animation and film vfx. It's just that for most places it isn't quite there yet for producing final images.
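The bit-depth item above can be illustrated with a toy quantization, no renderer involved: a shallow gradient that survives in float collapses into a few bands at 8 bits, which is exactly the banding a grading pass would expose.

```python
# Quantize a value in [0, 1] to 8 bits per channel.
def quantize_8bit(value):
    return round(value * 255) / 255

samples = [i / 1000 for i in range(0, 11)]   # a very shallow gradient
distinct = {quantize_8bit(v) for v in samples}
print(len(samples), "input shades ->", len(distinct), "after 8-bit quantization")
```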

CHRiTTeR
02-01-2010, 11:25 AM
What about normal maps and all that?


you still need to model both the high-res and the low-res model and project the difference into a normal map, which takes up quite some time. So it's not a timesaver at all.

With a robust offline renderer you don't have to worry that much about whether your engine can handle it. Just model high-res and you're ready to go.

ambient-whisper
02-01-2010, 12:00 PM
don't kid yourself. unless you are ILM or weta, where you can brute-force your way through, most studios find more efficient ways to get around problems.

normal maps are used in regular rendering as well. most times people opt for bump maps, but normal maps are better imo. as far as not having to worry about geometry in offline renderers.. sure you do. most often people render displacement anyway, and in the coming years there will be real-time displacement tech anyway. people speak about unique geometry being a big thing in regular renders, and yet what i see is more and more being done using instancing. vray, fryrender, modo are all pushing their new instancing tech. that's not much different from what is done in games (though obviously you can push the numbers higher in offline renders, but still).
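The instancing point, with rough illustrative byte counts: one shared mesh plus a small transform per instance, versus a unique copy of the geometry each time.

```python
# Memory for 10,000 placements of a 50k-triangle plant (figures invented).
def unique_copies_mb(tris_per_mesh, instances, bytes_per_tri=96):
    return tris_per_mesh * instances * bytes_per_tri / 1e6

def instanced_mb(tris_per_mesh, instances, bytes_per_transform=64,
                 bytes_per_tri=96):
    return (tris_per_mesh * bytes_per_tri
            + instances * bytes_per_transform) / 1e6

print(unique_copies_mb(50_000, 10_000))  # 48000.0 MB -- hopeless
print(instanced_mb(50_000, 10_000))      # 5.44 MB
```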

what a lot of people here fail to realize is that a lot is rendered in layers, and often the regular renders look like ass anyway before being fixed in comp. you could just spend the time to render in layers using a game engine and use a few good tricks to fool the eye (movies use enough fake solutions to problems anyway). plus, if you render using a game engine, you don't need it to render at 30fps or 60fps. games make significant cuts to render at that speed. but if you take the new crysis engine and push it to its limits (geometry, lights, complex shaders... optical effects with higher sampling), then even if it takes 10 seconds to render a single frame, it is still a huge improvement over waiting 20min-1hr per frame. sure your render will come out super clean and have that light bouncing where you need it, but for movies where a vfx shot might be heavy on motion blur and stuff, you won't notice that subtle bounce light anyway.

i worked on a few shows where we were trying to be purists about the technicalities of making everything proper, looking awesome at close up, and spending months on r&d. in the end what we ended up using was in no way representative of the previous effort, but that is what we realized after comparing the efficient model vs the pure, all-high-and-mighty version. there was no difference, except one took hours to render and the other one seconds. when the compositors are given your work and they crush colours, add layers, atmosphere, etc, then it really opens your eyes as to where you should spend your time.


I would recommend all the purists take a look at the link below if you can. he employs a ton of cheats to get to the final image. granted, a number of his cheats wouldn't work in some animated situations, but a lot of those tricks are there to show that there is no reason to wait hours for a render (especially when you are re-rendering and literally wasting hours upon hours). http://www.thegnomonworkshop.com/store/product/752/

very few people will be rendering avatar quality anyway so lets not compare to that.

there are two sides to the argument though, because it takes an equal amount of time to set stuff up regardless of the environment. you still need to model all those assets, but usually in games you have to take a second pass on everything so that it is more efficient for rendering. nothing is stopping you from using high-res geo for rendering foreground elements though. what you waste re-rendering with an offline renderer, you often waste making revised assets look and behave right in the game engine.. but my argument has been that game engines shouldn't be discounted, because there is a ton to be gained, and a lot of the tricks used in games can very well be used in movie work as well.

CHRiTTeR
02-01-2010, 12:13 PM
Yes, i understand that, but we clearly aren't talking about work comparable with ILM or weta...

And i think they don't fake their way around that much anymore these days, if you read recent articles. Brute-force ray tracing is gaining a lot in popularity and getting used more and more in big productions.

ambient-whisper
02-01-2010, 12:37 PM
they still fake plenty anyway. projected shadows, for example... the spherical harmonic lighting that simulates "GI" in the jungle scenes produces a nice look for the plants and the leaf scattering, but the method is still a complete fake, and has been used in games for a while now.

the main difference is scale. weta pushed it a bunch further than a regular game would be able to.

I like to read up on game stuff because it interests me, but I have only briefly worked in the game industry. I mostly spend my time on film, commercial, and tv stuff. There are a ton of useful tricks that you can learn from games. if our regular 3d apps had a switch to turn the viewport to render exactly what the unreal or crysis engine do, then I would use it without hesitating :) so long as i wouldn't need to convert my shaders.

R10k
02-01-2010, 02:02 PM
very few people will be rendering avatar quality anyway so lets not compare to that.

Avatar was mentioned by me, because the statement was put forward that everything will be going GPU, and 'old style' rendering is just a waste of time (literally). I don't think anyone's saying 'game engine' rendering has no use compared to raytracing and the like. This thread seems mostly about explaining why anyone would do things the 'slow' way. Avatar is a good example of why, even if almost no one will be able to afford that kind of time investment.

Jettatore
02-01-2010, 02:17 PM
You can set up realtime viewport shaders in many 3D applications. Softimage, for example, has a fairly robust set of realtime shading and lighting options. It has its limitations and is more for previewing your work as it would look in a game engine than anything else. Also consider that commercial games are not works in progress; they are finished products, and much of the construction is done via previews and trial and error, importing into the game engine and seeing how things look. I will agree, though, that being able to render in realtime, eliminating the need for previews and long render times, while quite a ways off, will be a great thing.

ambient-whisper
02-01-2010, 03:00 PM
You can set up realtime viewport shaders in many 3D applications. Softimage, for example, has a fairly robust set of realtime shading and lighting options. It has its limitations and is more for previewing your work as it would look in a game engine than anything else. Also consider that commercial games are not works in progress; they are finished products, and much of the construction is done via previews and trial and error, importing into the game engine and seeing how things look. I will agree, though, that being able to render in realtime, eliminating the need for previews and long render times, while quite a ways off, will be a great thing.
yes, but the issue is that you need to use specific nodes to display your shaders in the realtime viewer, and those shaders aren't compatible with mental ray. you also need a lot of knowledge of how to set those shaders up, which is quite different from how regular rendering happens in softimage. you can't use the mental ray sky stuff with the game engine.

what i'm saying is that currently the tools are very specific toward gaming or film, rather than having a simple switch and compatible shaders/tools between both methods of working. if something like that existed, i'm sure there would be a lot of cases where you would sacrifice some quality and push the real-time stuff to its limits to get faster rendering than in software, and when needed you would just use software rendering for the other shots. it's been a while, but is there even a way to output the realtime viewer stuff to frames anyway? say you are using directX shaders. granted, if a shader supported both methods, there would be a lot of overhead if your scene has a lot of shaders, with a lot of unused nodes internally. but if the setup could ignore, or dynamically delete with the push of a button, any unused nodes (say you are rendering in software for a shot and don't need the directX nodes that exist within your shader), then you would be fine.

anyhow. i'm babbling :)

Gelero
02-01-2010, 03:16 PM
Hey guys, sorry to jump into the middle of this healthy discussion, but I have a basic question.

Is the latest CryEngine (SDK) free to download, or do I have to buy it from Crytek?
Is it coded in C++? What kind of integration does it have with 3ds Max or Cinema 4D?


Thank you.

INFINITE
02-01-2010, 03:49 PM
A crowd test I made. Notice the plant dynamics as the men pass through.

Link with longer movie (http://download.vam-online.com/plants-dynamics.mp4) - 2.4 MB

http://img190.imageshack.us/img190/7522/plantsdynamics.gif

Why nobody has used this tech to create a 'Predator' Mod I don't know!? Great clip.

sebastian___
02-01-2010, 04:35 PM
You can download the Crysis demo for free. The SDK and the complete editor are included. I think the SDK and the plugins for 3ds max are a separate download. There are plugins for Maya and Sketchup, but only in the forums.

You can export objects from 3ds max with materials, texturing, animated objects, and objects with morphs for facial animation. The only thing missing is vertex animation export for cloth or realistic water dynamics simulation. The vertex animation export is actually included but seems non-functional.


Some drawbacks to this editor, probably common to other game editors too:
- For example, you don't have a "render" button. You have to type a command in the console, like this: "capture_frames 1". Many things work like that. A good thing is that you can bind these commands to sliders or keyframes for a bit more automation. You can even turn the command into a button.
But it's still not very user friendly. At least not for everyone.

The same goes if you want to export, for example, from frame 34 to frame 89. You have to assign that command to a keyframe in the TrackView editor. But most of this is documented.

On the other hand you have many realtime controls, buttons and sliders not encountered in other 3d programs - like color correction.

Gelero
02-01-2010, 06:03 PM
Thanks for your answer.

The whole render-via-command-line thing is not an issue. And it's a very nice feature to be able to grab specific frames and render them offline.

But my main goal is interactivity. I need the whole SDK package to build a preview solution offering real-time previsualization to my architecture clients. The workflow is about exporting textured layouts from MAX or C4D into the engine and walking around.

Is CryEngine a good alternative to that?

sebastian___
02-01-2010, 08:30 PM
The CryEngine 2 is almost ideal for that, at least compared to the other real-time previz solutions. And CryEngine 3 will offer even better quality.

Don't forget - you can actually use high-poly models, much higher than used in the game. I've seen architectural models exported straight from 3ds max without any optimizing / polygon reduction. It depends on your configuration, of course.

I'm not sure about the legal aspects of this. You can legally use the CryEngine for free, but only for non-commercial work.

And for more help read the nice documentation (with pictures and all) available on the net and visit the crymod forum.

Kabab
02-01-2010, 08:44 PM
AOV (Secondary outputs). With software renderers like PRMan and 3Delight you get these outputs practically for free. With hardware you'd probably have to write specific shaders for each output pass you wanted. And typically this is more than just alpha and depth.
Graphics cards are very good at doing this; many games create many secondary outputs at runtime and comp them together.

Bit Depth. The renderer needs to be able to handle and output image formats greater than 8-bit.
Most games can render in full float....
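The AOV point above can be sketched in a few lines: if the renderer writes diffuse, specular, and reflection as separate passes, the comp can rebalance them without a re-render. A toy single-pixel example with made-up values, not any particular renderer's actual output:

```python
# One pixel's secondary outputs (AOVs), written once by the renderer
passes = {"diffuse": 0.42, "specular": 0.10, "reflection": 0.05}

def comp(passes, gains):
    """Additive recombine: beauty = sum of gain-weighted passes.
    Passes without an explicit gain default to 1.0."""
    return sum(gains.get(name, 1.0) * value for name, value in passes.items())

beauty = comp(passes, {})                 # straight recombine of the render
relit  = comp(passes, {"specular": 2.0})  # hotter speculars, with no re-render
print(beauty, relit)
```

That per-pass freedom in comp is exactly why film pipelines want every output the renderer can emit, not just the final beauty.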

sundialsvc4
02-02-2010, 01:30 AM
Yes, indeed, what can be done "in real time with the GPU" is definitely "wonderful and becoming ever-more-so," but it is absurd to limit what can be done ... or even, apparently, contemplated ... to "just what can be accomplished in real time with that tool and with those techniques."

Unless, of course, you're writing a game.

Only in that very specialized(!) application of CG is "frame rate" a ruling constraint, unto which every other consideration in your universe must bow.

I'm not making a value-judgment here: the nature of the problem that you are trying to solve dictates what is and is not an "appropriate" or "desirable" plan of attack. Period.

If you are not writing a game, then you "frankly, my dear, don't give a :eek:" anymore about "frame rates." The GPU becomes a special-purpose "array coprocessor," capable of exploiting "massive parallelism" to solve a particular class of computational problems across large amounts of data at very high speed. Maybe. The GPU installed in any particular computer might be a great one, or it might suck, badly. It might actually be capable of the sustained throughput listed on the box, or it might (literally) melt its circuit board in the attempt. Also, the results produced by two different units might or might not be (arithmetically speaking) identical. That may be "no big deal" for a game, but it might be a real "deal-killer" for us. It depends. :shrug:

If your computer's GPU is a "decent" unit, and if the particular problem you are trying to solve is one that can be efficiently addressed by that hardware, then ... :beer: sweet!!

In that case, you will probably find that your favorite software already has some capability to exploit the GPU in that way. You might have to "dig" for it, and you won't find the software attempting to "do everything for you," but if you know what you are doing and know how to ask for it, you can probably get it.

But, here is how that GPU capability is going to be applied (in this context): in the manner of an auxiliary mathematics coprocessor. We have utterly no reason to be producing output to a video screen, no matter how stunning, if our stated objective is to produce (prodigiously large...) disk files.

And, here is how the capability of both the GPU and the CPU(s) will be applied: in layers. That is to say, in calculating the specific outputs of certain flavors of rendering, texturing, and compositing nodes. There is no requirement to do stuff "in real time" in this manner of work. But there is a definite requirement not to do the same thing twice if one can possibly avoid it (and to be able to generate exactly the same computational result each and every time, even years later). The job is broken down into a "data-processing production line," and the GPU might be able to contribute to that job to the extent that it can. ("All hands on deck.")
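The point about two units not producing arithmetically identical results is easy to demonstrate even on a CPU: floating-point addition is not associative, so the same values summed in a different order (as a parallel reduction on a GPU might do) can give a different answer. A minimal illustration:

```python
values = [1e16, 1.0, -1e16]

# Serial left-to-right order: the 1.0 is absorbed into 1e16 and lost to rounding
serial = (values[0] + values[1]) + values[2]

# A different reduction order, as a parallel sum might schedule it
parallel = (values[0] + values[2]) + values[1]

print(serial, parallel)   # same inputs, different results
```

For a game nobody notices; for a pipeline that must reproduce a frame bit-for-bit years later, reduction order has to be pinned down.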

Gelero
02-02-2010, 11:31 AM
Thank you sebastian___

CGIPadawan
02-02-2010, 11:16 PM
This is a great topic! I must say I agree with Martin Krol.


I read somewhere that the biggest damage to a timetable and a project in CG was normally Rendertimes, backjobs and Repeat Rendering.

So there is definite merit in trying to shorten this.

I also must say that with my own meager experience in CG, I have reached the same conclusion as Martin did in his professional experience: "Most of the time.. The Audience just won't see the difference."

I haven't been doing anything cutting edge (that will change in a bit). But as an audience member myself, I noticed some similarities, for example, between CG backgrounds in "Angels & Demons" and those in "Assassin's Creed 2"... And for all purposes, I was thinking that if you put one in the other, the assets could "trick" the eye once put through the same post-process to glue them together.

The balance however, is always in pushing more with what you have...And trying to do it with less. It's like the paradox of Grand Prix Racing - Finding more Aerodynamic Grip without Aerodynamic Drag.

In that way, the "High and Mighty" philosophy that Martin Krol refers to and the presumably "faster and dirtier" method employed by those seeking shortcuts... are actually mutually compatible, because it is the balance between them that attains the best result and the most efficiency.

sebastian___
02-07-2010, 10:59 PM
This is a pretty impressive project in CryEngine 2.

http://img34.imageshack.us/img34/480/linkcopyz.jpg

http://www.youtube.com/watch?v=IgRECA1V-WI

Proves again that almost anything can be made in a realtime engine. What matters is the concept and creativity. And keep in mind, this project was probably made with limitations so it could run in real time. But the quality and level of detail could be higher.

imashination
02-07-2010, 11:12 PM
Nice enough, but it does look unquestionably like a game engine

furryball
02-18-2010, 09:25 AM
Hi All,

I read your discussion about game engines in rendering - you also mentioned our renderer FurryBall (http://furryball.aaa-studio.cz/).

It was also my question before we started developing our GPU renderer - "Why do games render in REALTIME while 3d rendering takes forever? (http://forums.cgsociety.org/showthread.php?p=6344927#post6344927)"

Many of you are right that there are limits, and many effects are faked in "game engines", but on the other hand it gives you artistic freedom and big power. I'm personally an artist, not a programmer, and for me it's nice to make everything in realtime; who cares that something doesn't look 100% realistic. CGI animated movies are about stylization, not 100% correct refraction in a vase ;)

If you render a still image, you can wait, but if you need 150.000 frames for a feature movie, you can't wait 2 hours for a nice picture :beer:

BTW those render times are 15 seconds on a regular GeForce 285 (3 mil poly, about 3 GB of textures, 1280x720)
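Putting those numbers side by side makes the feature-film argument concrete. Back-of-the-envelope arithmetic for a single machine with no render farm, using the 150,000 frames, 2 hours per frame, and 15 seconds per frame quoted above:

```python
frames = 150_000                 # frames in a feature-length movie

def total_hours(seconds_per_frame):
    """Total machine time to render every frame once, in hours."""
    return frames * seconds_per_frame / 3600.0

cpu = total_hours(2 * 3600)      # 2 hours per frame in a software renderer
gpu = total_hours(15)            # 15 seconds per frame on the GPU

print(round(cpu), "hours vs", round(gpu), "hours")
```

300,000 machine-hours versus 625: the first is only feasible with a large farm, the second fits on a handful of workstations.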

http://www.aaa-studio.cz/720_Gallery/images/church1.jpg

Here are some fur samples - also GPU rendered

http://aaa-studio.cz/furryballforum/download/file.php?id=30


Here are 150.000 furs in realtime in our engine:

http://www.youtube.com/watch?v=lT-7bnPPVgg

Bye
Jan

sebastian___
02-18-2010, 05:48 PM
furryball : true ! :)

Is there a resolution limit for the FurryBall renderer ?
Can you disable the antialiasing in viewport for faster refresh ?
Bokeh Depth of field possible ?

furryball
02-18-2010, 05:54 PM
Is there a resolution limit for the FurryBall renderer ?
Limit is 8k, but for DX11 it will be at least 16k
Can you disable the antialiasing in viewport for faster refresh ?
Yes of course; you can even use just some features, or just 0.5 resolution. Look at the tutorials on our web. This church scene with 3 mil poly runs at about 15 FPS with lights, for example, on lower settings :beer:
Bokeh Depth of field possible ?
Yes, we will add this in the future, but if you use 16-bit it will produce a rectangular bokeh as a side effect :cool: and it looks nice.

furryball
02-18-2010, 06:08 PM
Sorry, I posted it twice :cry:

mister3d
02-18-2010, 07:32 PM
The fur looks really good. It's an interesting renderer.

CHRiTTeR
02-18-2010, 08:30 PM
Yes indeed!
Are my eyes cheating me or does that hair have GI?!!! :o

furryball
02-19-2010, 05:44 AM
Are my eyes cheating me or does that hair have GI?!!! :o
The hair render below has no GI - just lights. It's an old picture, but now in version 1.1 we have AO and color bleed, and it works of course with hair in real-time. So every hair bleeds and receives bleed.

CGIPadawan
02-19-2010, 11:22 PM
Does Furryball only work with Maya?

Are there plans to make it usable with Blender or other apps?

furryball
02-20-2010, 05:07 AM
Does Furryball only work with Maya? Are there plans to make it usable with Blender or other apps?
Yes, FurryBall works with Maya only - it's very closely bound to the Maya interface, so it's not possible to convert it easily to another app.

sebastian___
03-07-2010, 02:31 AM
Since we're talking here about real-time and I mentioned the depth-of-field bokeh, I've decided to post my realtime shaders. It's very fun to just walk around in real time, adjust the lens and snap photos. Virtual photos :)

Real-time bokeh shader wip with custom shapes like hexagon, circle and others.
I'm not sure if I'm allowed to post a link to another forum. So just google "sebastian bokeh shader"

Nevermind the low poly models

http://img714.imageshack.us/img714/972/helencircleshape.jpg

high res link http://img651.imageshack.us/img651/6267/helencircleshapehires.jpg

http://img715.imageshack.us/img715/226/dofbokehcircleedge.jpg

http://img41.imageshack.us/img41/1701/hexagonedge.jpg

http://img715.imageshack.us/img715/629/penta.jpg

http://img197.imageshack.us/img197/6/00045onlycropped.jpg

http://img221.imageshack.us/img221/593/00034cfx29dofmintresh03.jpg

furryball
03-08-2010, 07:55 AM
Sebastian,
nice work - do you render DOF-sized sprites with per-pixel alpha (where applicable) or is it some kind of pixel-shader-only post-process? How fast is it?

Vaclav Kyba

CGIPadawan
03-09-2010, 03:33 AM
This thread is fantastic!

I mean, imagine! We are approaching a point where you can do realtime renders... or baked FX/billboard FX..

And you can simply keep quiet about whether or not the work is "purist"... and nobody can tell the difference!

Brilliant! And a big cost saver!

CGIPadawan
03-09-2010, 04:16 AM
Yes - I believe that this is the future, CGIPadawan.

The main problem is that many, many people are prejudiced against "game engines" and rasterized graphics. :banghead: :banghead:

Well my friend, the true test is to one day take the results away from discussions labeled "Game Engines"... and see if the Prejudiced eyes can really catch the difference.

Actually I am directing a team right now with this vision: to approximate the cinematic experience while keeping render times extremely low. I can also understand the purists' concerns, because sometimes it can feel like we are intentionally short-changing the audience or "cheating". There were many arguments about it at the start! :P

But I remind them: "Look, the entire illusion that something is moving on screen is itself achieved by 'cheating'." And this is not about "hey, let's give them only half a cake". This is about "can we do something that actually LOOKS like a full beauty render, but was really achieved faster and more efficiently in ways that are not so software-reliant?"

I do still stick to that bottom line though. If everybody thinks a rasterized or billboard FX is actual volumetrics, etc., then you've succeeded.

furryball
03-09-2010, 06:53 AM
Yes - I believe that this is the future, CGIPadawan.

The main problem is that many, many people are prejudiced against "game engines" and rasterized graphics. :banghead: :banghead:

Genesis
03-09-2010, 02:19 PM
Realtime is not just game engines. There are apps like DeltaGen and Showcase that can comfortably move 3 million+ polys at a comfortable frame rate, with full textures, normal maps, raytracing, even some post effects.

Both of the above-mentioned apps are chasing pre-rendered quality (especially for products and vehicles) and doing a very good job of catching up with, and even surpassing, what most can do with pre-rendered.

Kzin
03-09-2010, 02:32 PM
try to render your game engine with the same AA settings you use for your offline rendering and you will see there is no realtime anymore. this is only one reason why game engines are so fast. texture resolution is another one. sure, try to use an 8k map, but with that your RAM is gone. and that's only one map; what do you do with all the other game content? lambert shading? oh wait, there is no lambert or phong shading at all, because it's way too slow for today's games. so with only "real" AA, one big texture map and phong shading, the whole realtime thing is gone. and i haven't even started to speak about bit depth, things like that, shaders with thousands and thousands of lines of code. come on, you can't be serious with this question.
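The texture-memory point is simple arithmetic: a single uncompressed 8k RGBA map already eats a quarter of a gigabyte before mipmaps. A quick check (uncompressed sizes; real engines use block compression such as DXT to shrink this considerably):

```python
def texture_mib(side, bytes_per_pixel=4, mipmaps=True):
    """Approximate memory for a square RGBA texture, in MiB.
    A full mip chain adds roughly one third on top of the base level."""
    size = side * side * bytes_per_pixel
    if mipmaps:
        size = size * 4 // 3
    return size / (1024 * 1024)

print(texture_mib(8192, mipmaps=False))   # 256.0 MiB for one flat 8k map
print(texture_mib(8192))                  # ~341 MiB with the mip chain
print(texture_mib(512, mipmaps=False))    # 1.0 MiB for a typical game texture
```

One film-resolution map costs as much memory as hundreds of game-resolution ones, which is why engines budget texture sizes so aggressively.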

Hirni_NG
03-09-2010, 02:40 PM
Realtime is not just game engines. There are apps like DeltaGen and Showcase that can comfortably move 3 million+ polys at a comfortable frame rate, with full textures, normal maps, raytracing, even some post effects.

Both of the above-mentioned apps are chasing pre-rendered quality (especially for products and vehicles) and doing a very good job of catching up with, and even surpassing, what most can do with pre-rendered.

Both Showcase and DeltaGen are domain-specific renderers targeted at prototyping single objects which do not require a large amount of memory and only need limited shading capabilities.

They are a good choice if you are prototyping cars, but they are not capable of rendering a frame of a movie scene, for example.

furryball
03-09-2010, 03:11 PM
try to render your game engine with the same AA settings you use for your offline rendering and you will see there is no realtime anymore. this is only one reason why game engines are so fast. texture resolution is another one. sure, try to use an 8k map, but with that your RAM is gone. and that's only one map; what do you do with all the other game content? lambert shading? oh wait, there is no lambert or phong shading at all, because it's way too slow for today's games. so with only "real" AA, one big texture map and phong shading, the whole realtime thing is gone. and i haven't even started to speak about bit depth, things like that, shaders with thousands and thousands of lines of code. come on, you can't be serious with this question.

Kzin, sorry, but you are the prototype of the prejudice I talked about before.
Yes, of course with AA and a 2k render there is no realtime anymore, but look at the final time (the church render, for example): 15 seconds. In a software renderer it would take 15-20 min at least!!!

We are preparing render comparisons - the same scene with FurryBall and mental ray. You will see the differences and times.

sebastian___
03-09-2010, 03:43 PM
try to render your game engine with the same AA settings you use for your offline rendering and you will see there is no realtime anymore.

True. But it's still way faster, as was mentioned. And there is one BIG BIG advantage. So OK, if you render at 10,000 pixels it's not realtime; it could even take 10 seconds. But if you work at a very small resolution in the viewport, then it's real-time at maybe 30 fps, depending on your hardware. The BIG advantage here is that the low-quality real-time viewport shows the exact same picture as the final rendering, only at low res and without antialiasing.

Meanwhile a program like 3ds max, even if it is now capable of showing you some crude shadows in the viewport, is far from the final render. You are forced to render to see a preview.

You wanna make some smoke or clouds? You are forced to render and maybe even simulate, while I can just paint with clouds, smoke and rays of light in real-time.

I just rendered a few pictures. The main problem here is the low-poly models. Since I work alone, I don't have time to model, so I have to use the in-game models. But at one point I used a single small tree branch with 50,000 polys. The editor seems to have no problem with such overkill models.
Real-time smoke, glow, vignette, bloom, color correction and more. And before anyone says it again - yes, I know they are all fake :)

furryball: The DOF shader is made with a "brute force" :) approach and I use 128 samples

http://img94.imageshack.us/img94/2685/rainingfireriverparticl.jpg

http://img38.imageshack.us/img38/8775/00010river1.jpg
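The "brute force" 128-sample DOF gather described above can be sketched as a per-pixel average whose radius comes from the circle of confusion. A toy 1-D version with a deliberately simplified CoC formula (this is an illustration of the general gather idea, not the actual shader):

```python
def coc_radius(depth, focus_depth, aperture, max_radius=8.0):
    # Circle-of-confusion radius grows as the point leaves the focal plane.
    # Hypothetical simplified formula, not a real thin-lens model.
    r = aperture * abs(depth - focus_depth) / max(depth, 1e-6)
    return min(r, max_radius)

def dof_1d(image, depth, focus_depth, aperture=8.0, samples=128):
    """Brute-force gather: average `samples` taps spread across each pixel's CoC."""
    out = []
    n = len(image)
    for i in range(n):
        r = coc_radius(depth[i], focus_depth, aperture)
        acc = 0.0
        for s in range(samples):
            offset = (s / (samples - 1) - 0.5) * 2.0 * r   # taps over [-r, r]
            j = min(max(int(round(i + offset)), 0), n - 1)  # clamp to the image
            acc += image[j]
        out.append(acc / samples)
    return out

img = [0.0] * 8 + [1.0] + [0.0] * 8                   # one bright pixel
sharp  = dof_1d(img, [10.0] * 17, focus_depth=10.0)   # scene at the focal plane
blurry = dof_1d(img, [20.0] * 17, focus_depth=10.0)   # scene behind it
print(sharp[8], blurry[8])                            # untouched vs spread out and dimmer
```

In-focus pixels have a zero radius, so every tap lands on the pixel itself and the image passes through unchanged; out-of-focus pixels get smeared across their CoC, which is where the sample count (and the cost) goes.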

Kzin
03-09-2010, 04:30 PM
i like all this realtime preview stuff, but it's way behind offline rendering in terms of quality. as someone mentioned, feed mr or prman such simple scenes, render with such low quality settings and judge again. the thing is, i can't understand how someone who works in this field can ask such a question, that's all. ;)

Genesis
03-09-2010, 06:40 PM
Hirni_NG,

Why not render frames from realtime the same way as pre-rendered? Passes that are comped. You won't get deformation or particles, but for hard, non-deforming objects you can get most passes out of something like Showcase or DeltaGen without problems. Rigs can also be transferred from Maya without problems, so the lack of animation tools is not an issue.

Both solutions handle much more than just one object. I have had more stable experiences with large datasets (poly and object count) in realtime apps than I have in Maya (and I am a big pre-rendered-over-realtime fan).

I am not saying realtime is a perfect solution, but it is a lot better than most people think it is. The results are not often seen, as these apps are mostly used for product development.

CaptainObvious
03-09-2010, 11:48 PM
Meanwhile a program like 3ds max, even if it is now capable of showing you some crude shadows in the viewport, is far from the final render. You are forced to render to see a preview.

You wanna make some smoke or clouds? You are forced to render and maybe even simulate, while I can just paint with clouds, smoke and rays of light in real-time.
That's why everyone and their grandmother has interactive preview rendering nowadays. If I want real-time rendering, I've got FPrime.

ambient-whisper
03-09-2010, 11:59 PM
blah blah blah blah...


EDIT:

err, i misread your post :D carry on :D

CGIPadawan
03-10-2010, 01:19 AM
try to render your game engine with the same AA settings you use for your offline rendering and you will see there is no realtime anymore. this is only one reason why game engines are so fast. texture resolution is another one. sure, try to use an 8k map, but with that your RAM is gone. and that's only one map; what do you do with all the other game content? lambert shading? oh wait, there is no lambert or phong shading at all, because it's way too slow for today's games. so with only "real" AA, one big texture map and phong shading, the whole realtime thing is gone. and i haven't even started to speak about bit depth, things like that, shaders with thousands and thousands of lines of code. come on, you can't be serious with this question.

This discussion is actually more focused on humanism and a very simple question:

"Will the audience see the difference?"

If you go by the numbers, of course 8k > 4k > 2k... but who is going to see the difference in a practical viewing situation? There is a saying in Chinese business: "Do not do something that your customer isn't paying for." People who buy tickets at the cinema are not saying: "And it had all better be 8k textures!"

Now, of course, there IS a difference, visually... the real question is: "Will it matter?"

There is reason to believe that one day (or even today) it will no longer matter. And is that a sad thing? Hardly. If it is true, then it means the art must diversify to achieve better results.

In terms of sound, one professor once opined that the plateau for digital sound quality was reached decades ago, so now the improved experience is achieved in other ways, not purely through better sound quality. The question of using alternative forms of graphical FX and rendering is similar: it's a question of whether we've hit a plateau that makes it possible to achieve a target audience experience without "adding more numbers" to the sheer basic quality of an image.

P.S.: The other issue is business- and timetable-related. Most people know that any project, personal or professional, can live and die by the time lost in renders and re-renders. Optimizing this stage determines whether you can achieve the best result in time, whether you must compromise, or at worst, whether deadlines have to be moved. Researching the limits of a "plateau of perception" and finding more efficiencies in rendering are therefore very important questions for the CG field.

sundialsvc4
03-10-2010, 04:26 AM
In a video game, "frame rate is not just an 'important' thing .. it's the only thing." Everything about a video-game production is subordinate to what the hardware can do in real time. There are very severe compromises that are made to achieve that.

Outside of that realm, the GPU is simply a special-purpose math coprocessor. And, maybe, a very good one. Maybe very useful. And, maybe not. Obviously, to the extent that it can accelerate the computation ... great!

Even so: every shot is going to be built up using compositing. The GPU, if it is used at all, is going to be "cranking out tracks." Those pieces of information are then going to be "mixed down" to produce the final versions of every shot. The hardware is never going to be expected to "just spew out a shot in real time" as it would for a video-game.

I recently listened to an amateur recording, mixed and mastered to a very professional quality, where the artist commented that there were sixty-seven audio tracks in the mix. The reason why things are done this way is so that changes can be made without re-doing anything that does not need to change. The quality of the work product (visual and/or audio as it may be) is kept very high because it can be finely adjusted. And the need to do that trumps everything else.

CGIPadawan
03-10-2010, 04:30 AM
In a video game, "frame rate is not just an 'important' thing .. it's the only thing." Everything about a video-game production is subordinate to what the hardware can do in real time. There are very severe compromises that are made to achieve that.

Outside of that realm, the GPU is simply a special-purpose math coprocessor. And, maybe, a very good one. Maybe very useful. And, maybe not. Obviously, to the extent that it can accelerate the computation ... great!

Even so: every shot is going to be built up using compositing. The GPU, if it is used at all, is going to be "cranking out tracks." Those pieces of information are then going to be "mixed down" to produce the final versions of every shot. The hardware is never going to be expected to "just spew out a shot in real time" as it would for a video-game.

I recently listened to an amateur recording, mixed and mastered to a very professional quality, where the artist commented that there were sixty-seven audio tracks in the mix. I've done single shots that had more than thirty direct and intermediate components. The reason why things are done this way is so that changes can be made without re-doing anything that does not need to change. The quality of the work product (visual and/or audio as it may be) is kept very high because it can be finely adjusted. And the need to do that trumps everything else.

Actually the method you describe is also one I subscribe to. The emphasis in my version of this workflow is on gaining efficiency per track (or per plate) for compositing purposes. There are many times where getting each plate alone takes forever when done brute-force style in 3D applications.

sundialsvc4
03-10-2010, 04:45 AM
Actually the method you describe is also one that I subscribe to. The emphasis on my version of this workflow is to gain efficiency Per-Track (or Per-Plate) for Compositor purposes. There are many times where trying to get each Plate alone takes forever when done simply brute-force style in 3D Applications.
You're right about "brute force," which is precisely why brute-force is unthinkable. As you know.

Maybe I've been lucky in that I have never had "enough" computer power. So, the name of the game is to be efficient.
Work out each shot thoroughly in very low resolution, using the real-time preview capabilities of my software (Blender). This is my "storyboard." I can't draw. But this is more than just a storyboard: this stuff will become "the finished shots."

Do a rough edit now. A low-resolution shot nevertheless perfectly matches a final shot in terms of f-stop, camera movement, blocking and staging, and much more. So you really can edit the show before you shoot it. The finished shots "drop in" to replace the low-res ones, and of course, they match perfectly. The process is just like real film editing. A lot of stuff winds up on the floor. But... it's "free." (Nothing is actually discarded.)

Do a complete shot breakdown. You know exactly what shots you need and exactly what frames you need from each one, and that's all you're actually going to do. Break it down: every moving and non-moving object; color, specularity, shadow. Basically, anything and everything that one could want to tweak. Everything is generated out to individual files.

Successively refine the shot in a "mix-down" to build up the final image. This becomes a compositing node network (or "noodle"), one per shot/setup. The final version of that "noodle" is the definition of how the shot is generated.
Blender does incorporate a built-in game engine so it is actually able to leverage the GPU already in many ways. There are definitely cases where the GPU has been able to reduce a complex stage from hours to seconds or minutes. But the workflow is the far more important part of making the process work. And the workflow would work equally well regardless of which tools one used.

furryball
03-10-2010, 05:43 AM
Hi All,
You want some tests? Here it is - a nice picture from our collaborator Carlos Ortega.
It's very hard to tune both renderers to be TOTALLY the same. (There are small differences in bump height, AO, eyes - it needs tuning.)

The MAIN difference is in the time - FurryBall is 60 times FASTER!!!

My question is - is the Mental Ray image 60 times BETTER???

You can see the full resolution image (right-click and view image).
These images are WITHOUT ANY postprocess!!

http://www.aaa-studio.cz/720_Gallery/bull.jpg

CGIPadawan
03-10-2010, 06:04 AM
You're right about "brute force" - which is precisely why brute force is unthinkable, as you know.

Maybe I've been lucky in that I have never had "enough" computer power. So, the name of the game is to be efficient.


Work out each shot thoroughly in very low-resolution, using the real-time preview capabilities of my software (Blender). This is my "storyboard." I can't draw. But this is more than just a storyboard: this stuff will become "the finished shots."
Do a rough edit now. A low-resolution shot nevertheless perfectly matches a final shot in terms of f-stop, camera movement, blocking and staging, and much more. So you really can edit the show before you shoot it. The finished shots "drop in" to replace the low-res ones, and of course, they match perfectly. The process is just like real-film editing. A lot of stuff winds up on the floor. But... it's "free." (Nothing is actually discarded.)
Do a complete shot-breakdown. You know exactly what shots you need and exactly what frames you need from each one, and that's all you're actually going to do. Break it down. Every moving and non-moving object; color, specularity, shadow. Basically, anything and everything that one could want to tweak. Everything is generated out to individual files.
Successively refine the shot in a "mix-down" to build up the final image. This becomes a compositing node-network (or "noodle"), one per shot/setup. The final version of that "noodle" is the definition of how the shot is generated.
Blender does incorporate a built-in game engine so it is actually able to leverage the GPU already in many ways. There are definitely cases where the GPU has been able to reduce a complex stage from hours to seconds or minutes. But the workflow is the far more important part of making the process work. And the workflow would work equally well regardless of which tools one used.


Hmmm... in many ways our workflows are similar. Very interesting. The only main difference is that I can draw, so the pre-production period spent on paper is longer for us. But the principle of "animatic becomes final scene" is the same.


I also use Blender. :)


To: Furryball,


That's the thing. You can tell the difference in instances where a "Mental Ray version" is done. But if you do not bother to do that, you only have the "FurryBall version," and your VFX leads can just focus on making sure the bump/displacement and other FX are optimized to look their BEST with the chosen output method.

Pinionist
03-10-2010, 07:57 AM
I think that having a renderer 60x faster than what we currently have (I speak for XSI users) would change things dramatically.

I don't think I'd need uber-pedantic accuracy when I have the render right in front of me, instead of waiting 20 minutes per frame even after a couple of hours of tweaking stuff.

mister3d
03-10-2010, 08:18 AM
There's already a hybrid preview renderer, "Quicksilver," coming out for 3ds Max 2011 (it ships with it). So it is already here. We'll see how it works.
http://area.autodesk.com/3dsmax2011/features

Kzin
03-10-2010, 08:55 AM
Hi All,
You want a test? Here it is - a nice picture from our collaborator Carlos Ortega.
It's very hard to tune both renderers to be TOTALLY the same. (There are small differences in bump height, AO and the eyes - it needs tuning.)

The MAIN difference is the time - FurryBall is 60x FASTER!!!

My question is - is the Mental Ray image 60 times BETTER???

You can see the full-resolution image (right-click and show image).
THESE images are WITHOUT ANY postprocessing!!

http://www.aaa-studio.cz/720_Gallery/bull.jpg

First, the render time of FurryBall is nice. But - yeah, there's a "but," of course - I would never render DOF in mental ray; it's way too slow, no way to use it in production. The quality of FurryBall's DOF is also way too coarse. Another thing: I would never use a bump map for the ground; for a real comparison you have to use displacement maps. I also miss motion blur - no way to render without it. For the mental ray render time, the settings would be interesting. And also, do a test with RenderMan; I think that will change the 60x drastically. ;)

furryball
03-10-2010, 09:10 AM
First, the render time of FurryBall is nice. But - yeah, there's a "but," of course - I would never render DOF in mental ray; it's way too slow, no way to use it in production. The quality of FurryBall's DOF is also way too coarse. Another thing: I would never use a bump map for the ground; for a real comparison you have to use displacement maps. I also miss motion blur - no way to render without it. For the mental ray render time, the settings would be interesting. And also, do a test with RenderMan; I think that will change the 60x drastically. ;)

First of all - this is NOT a test scene. This is Carlos's scene, made last year in MR (http://stroggtank.cgsociety.org/gallery/817515/), but for this test without the postproduction.

Do you render DOF in postprocess and motion blur in MENTAL???? How do you composite it?? Your Z-depth channel will not be blurred... :twisted:

Yes, in the next release we will have displacement and subsurface scattering - so you can stay tuned :shrug:

BTW, do you REALLY mean that RenderMan will "change it drastically"??? OK, it could be "just" 30x-40x :curious:

Hirni_NG
03-10-2010, 09:21 AM
Hirni_NG,

Why not render frames from real-time engines the same way as pre-rendered ones - as passes that are comp'ed? You won't get deformation or particles, but for hard, non-deforming objects you can get most passes out of something like Showcase or DeltaGen without problems. Rigs can also be transferred from Maya without problems, so the lack of animation tools is not an issue.

Both solutions handle much more than just one object. I have had more stable experiences with large datasets (poly and object counts) in real-time apps than I have in Maya (and I am a big pre-rendered-over-realtime fan).

I am not saying real-time is a perfect solution, but it is a lot better than most people think it is. The results are not often seen, as these apps are mostly used for product development.

There is a set of scenes that are entirely renderable in real time, and this set is growing as GPUs get faster. If there are only simple materials and simple light transport in the scene, and everything fits into GPU memory, it makes sense to use real-time rendering in renderers like MachStudio, FurryBall or Showcase.

If you put correct reflections/refractions, some caustics, some 8k textures, a model that needs hundreds of megabytes, some clouds for participating media, some specular color bleeding for a good measure of GI, and a glossy surface into a scene, and you want all those things rendered correctly and interacting in the correct way, you are out of luck with a GPU rasterizer.

FurryBall posted a very good comparison: the geometry is simple, the surfaces are mainly diffuse, and the light transport is simple, using only direct lights and AO - a perfect situation for real-time rendering, and the kind of feature list you find in AAA game titles.

Although I have to say the comparison with a 25-minute MR render is biased: since only AO is used, the scene could be fully path traced in 25 minutes. MR should likewise do only AO, and the DOF should also be filter-based, so with trivial tweaking the scene should be renderable in less than 5 minutes rather than 25. A rasterizer with GI capabilities like PRMan or 3delight might get the result even faster.

Kzin
03-10-2010, 09:47 AM
Do you render DOF in postprocess and motion blur in MENTAL???? How do you composite it?? Your Z-depth channel will not be blurred... :twisted:

BTW, do you REALLY mean that RenderMan will "change it drastically"??? OK, it could be "just" 30x-40x :curious:

The Z-depth will be blurred, because you render it with motion blur too - via the framebuffer in mental ray.

From my experience, I would say RenderMan could render this in 1 or 2 minutes.
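The post-process DOF being debated here - blurring the beauty pass by an amount driven by the Z-depth pass - can be sketched in a few lines. This is a toy illustration under simplifying assumptions (grayscale frame, box blur, no edge-bleed or occlusion handling), not any renderer's actual filter:

```python
import numpy as np

def defocus(image, zdepth, focal_z, max_radius=4):
    """Naive filter-based DOF on a grayscale frame: each pixel is
    replaced by the mean of a window whose radius grows with the
    pixel's distance from the focal plane (its circle of confusion).
    Edge bleeding and occlusion, which real compositors must handle,
    are deliberately ignored here."""
    h, w = zdepth.shape
    out = np.empty_like(image, dtype=float)
    # Per-pixel circle-of-confusion radius, clamped to max_radius.
    coc = np.clip(np.abs(zdepth - focal_z) * max_radius, 0, max_radius)
    for y in range(h):
        for x in range(w):
            r = int(round(coc[y, x]))
            y0, y1 = max(0, y - r), min(h, y + r + 1)
            x0, x1 = max(0, x - r), min(w, x + r + 1)
            out[y, x] = image[y0:y1, x0:x1].mean()
    return out
```

Pixels at the focal depth pass through untouched; the farther a pixel is from `focal_z`, the wider the averaging window - which is exactly why a motion-blurred Z-depth matters, as Kzin notes: the blur radius has to line up with the moving edges.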

sebastian___
03-10-2010, 10:21 AM
The thing is, I can't understand how someone who works in this field can ask such a question, that's all. ;)

I can try an explanation. Someone working in this field might want to make a short movie all by himself - a final product, or maybe just a previz with quality 100 times better than the previsualizations made for The Matrix and Superman Returns.

Pictures from the Matrix previs and Superman Returns:

http://img26.imageshack.us/img26/2899/matrixprev.png

http://img705.imageshack.us/img705/9374/supermanret.png

There's already a hybrid preview renderer, "Quicksilver," coming out for 3ds Max 2011 (it ships with it). So it is already here. We'll see how it works.
http://area.autodesk.com/3dsmax2011/features

I can't wait. Actually I'm very excited about Max 2011. But I doubt it will have real-time bokeh DOF. And real-time motion blur :)

And something to think about - the CryEngine is several years old now:
http://img193.imageshack.us/img193/2228/bokehcircleriver.jpg

sundialsvc4
03-10-2010, 01:26 PM
... Furryball is 60x faster! ...
It is easy to look at a tool like that and to compromise what you are doing just so that "the whole thing magically appears." All at once. Instant digital gratification.

And by all means, if that's good enough then "shrink (-wrap) it and ship it," along with an invoice.

But you are not limited to that. You can take what the GPU spits out very rapidly and use it as input to a more conventional compositing process. You can use a mixture of both GPU- and CPU-generated material, and you can fine-tune it - and you have still shaved hundreds of hours (that you don't have) off your workflow.

It should also be said, though, that even a purely CPU-based workflow ought to be done that way, too. A scene is composed of pieces; maybe, hundreds of them. Just as a pop song is composed of many tracks. It is assembled into final form through a multi-step digital process, just like a song. If the singer "clams" a note, it gets fixed by a Foley/ADR process, and the same thing is done with a digital scene. (Ever heard of a "garbage matte?" You can matte-out your mistakes, too, and there is no compromise of the finished material.)

"What, you say that shadow's a little too dark?" (He pushes a knob slightly...) "There, is that better? Good. A little more specularity on that rock? ... How's that? ..." :thumbsup:

So ... this viewpoint as to workflow definitely embraces the GPU for all it can do, and at the same time it makes much more efficient use of the CPU for what it can do. There is absolutely no reason to say "either/or." Since you have this amazingly powerful array processor at your fingertips, by all means use it. But, unless you are doing a real-time presentation, you're not limited to it.
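The pass-based "mix-down" described above - layers from anywhere, GPU or CPU, combined and re-tweaked in comp instead of re-rendered - can be sketched roughly like this. The pass names and gain knobs are illustrative, not any particular package's API:

```python
import numpy as np

# Illustrative render passes for one shot, as tiny arrays standing in
# for full-resolution layers (e.g. EXR channels). Any of these could
# come from a GPU renderer or a CPU renderer - comp doesn't care.
diffuse  = np.array([[0.5, 0.2], [0.8, 0.4]])
specular = np.array([[0.1, 0.0], [0.3, 0.1]])
ao       = np.array([[1.0, 0.7], [0.9, 1.0]])  # 1 = fully open
shadow   = np.array([[1.0, 1.0], [0.5, 1.0]])  # 1 = fully lit

def comp(diffuse, specular, ao, shadow, ao_gain=1.0, shadow_gain=1.0):
    """One node of a compositing 'noodle': occlusion and shadow are
    multiplied into the diffuse term, specular is added on top.
    Re-running this after nudging a gain replaces a full re-render."""
    occl = 1.0 - ao_gain * (1.0 - ao)          # dial AO strength
    shad = 1.0 - shadow_gain * (1.0 - shadow)  # dial shadow density
    return diffuse * occl * shad + specular

beauty = comp(diffuse, specular, ao, shadow)
# "That shadow's a little too dark?" Push a knob, not the render button:
lighter = comp(diffuse, specular, ao, shadow, shadow_gain=0.5)
```

Halving `shadow_gain` lightens only the shadowed pixels; nothing upstream is re-rendered, which is the whole point of the knob-twiddling workflow described above.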

CHRiTTeR
03-10-2010, 01:37 PM
FurryBall is good for cartoon and stylized stuff... but even those could use raytraced reflections from time to time. It would be nice if they made it hybrid and included some raytracing abilities.

Sure, it would lose most of its speedup, but then at least users would have more advanced features available when needed; right now it is too restricted. But some nice images made with it have been posted here, so it definitely has its uses.

Pinionist
03-10-2010, 01:37 PM
It should also be said, though, that even a purely CPU-based workflow ought to be done that way, too. A scene is composed of pieces; maybe, hundreds of them. Just as a pop song is composed of many tracks. It is assembled into final form through a multi-step digital process, just like a song.

That would be valid, but even then you have to spend time compositing these elements. What would be good is to have a blazingly fast rendering engine, which would let you "composite" the final image in one or at most 3-4 passes.

I've tried that - having many passes for a single shot out of 3D - but it was a bit tiresome, as I would prepare passes, then try to comp them, then figure out that I needed more of them, etc.

But then I saw FPrime and understood that what we 3D artists need is a fast raytracer.

CHRiTTeR
03-10-2010, 01:40 PM
I wouldn't say we always 'need' raytracing, but when we need it, it would be nice if it was there! Which isn't the case with FurryBall (as far as I know?) ;) :D

Pinionist
03-10-2010, 02:26 PM
I wouldn't say we always 'need' raytracing, but when we need it, it would be nice if it was there! Which isn't the case with FurryBall (as far as I know?) ;) :D

What I meant is not a raytracer per se, but a fast renderer overall.

Animals
03-12-2010, 08:45 PM
When I started this thread I was told directly that I shouldn't talk about this - that it has been discussed, that the graphics of games are just too awful compared to animations, that there is no way to do it, etc. etc. Well, my point was simple: look at the WHOLE thing generally. Yes, games are optimized, but come on - I know deep inside that animation could be rendered way faster by utilizing the methods of games. Just look at the newest games; at least animation rendering should be a little faster by now! And my point stands - just look at the number of posters here... wow, opinions are TRULY divided. But I don't want to fight with words; you are welcome to say and think whatever you want. I am instead going to spend some good money on rendering my animations on render farms in the end - that is the POINT for the industry leaders, isn't it?


regards
Kalle

DanielWray
03-12-2010, 08:53 PM
I still think you misunderstand some aspects of this debate a little bit.

But you are entitled to your own opinion and I'm in no way stopping or discouraging you from finding new work flows.

Bullit
03-12-2010, 09:04 PM
I agree with you, Animals. For a lot of jobs a game engine is good enough, especially those that need fast feedback and are part of a decision process:
Previz, for example. Architecture, furniture, clothing and hairdressing are other areas, since a game engine has a good ability to show diverse options fast.
Factory, medical and military training/work processes too.

mister3d
03-12-2010, 09:08 PM
I agree with you, Animals. For a lot of jobs a game engine is good enough, especially those that need fast feedback and are part of a decision process:
Previz, for example. Architecture, furniture, clothing and hairdressing are other areas, since a game engine has a good ability to show diverse options fast.
Factory, medical and military training/work processes too.
Yes, yes - and those things will render very fast even on an old computer, no need to change the engine... just use shadow maps and AO, place some lights to fake GI, use interpolated reflections, and voila! You have in-game graphics - cheap, and it looks it.
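The "lights to fake GI" trick mentioned above is literally just extra hand-placed lights standing in for bounce illumination - no rays are traced, so nothing actually bounces. A toy sketch (the function and its parameters are made up for illustration):

```python
import numpy as np

def shade(normal, key_dir, key_col, fill_lights):
    """Game-style lighting fake: one Lambert key light plus
    hand-placed fill lights standing in for bounced GI. The cost is
    a few dot products per shading point, which is why it runs in
    real time - and why it only looks right where the artist put
    the fills."""
    n = np.asarray(normal, dtype=float)
    n = n / np.linalg.norm(n)
    out = key_col * max(float(np.dot(n, key_dir)), 0.0)  # direct key
    for fdir, fcol in fill_lights:
        out += fcol * max(float(np.dot(n, fdir)), 0.0)   # fake bounce
    return out

# A wall facing away from the key gets nothing from it...
dark = shade([0, 0, 1], [0, 0, -1], 1.0, [])
# ...so a dim fill aimed back at it stands in for the missing bounce:
faked = shade([0, 0, 1], [0, 0, -1], 1.0, [([0, 0, 1], 0.2)])
```

A real GI solver would compute that 0.2 of bounced light itself, for every surface and every frame; here an artist guessed it once, which is the "cheap and looks correspondingly" trade-off.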

Animals
03-12-2010, 09:16 PM
I still think you misunderstand some aspects of this debate a little bit.

But you are entitled to your own opinion and I'm in no way stopping or discouraging you from finding new work flows.

I think you misunderstand me. I am not finding new ways - and even if I was, I wouldn't now. I am not an expert in programming or hardware engineering, and I wouldn't spend my time reading in-depth technical material on exactly how this or that can be improved at the level of writing code, etc. I am all for getting ideas for films, animations and stories. But I have wondered a lot about many things in this faulty world - and mind you, the basics of CG rendering are not the only faulty subject in the world. There are such subjects in the complicated mechanics of a car, the ultra-complicated mechanics of a helicopter, in the building of houses, and in the making of this and that... all is made with the industry leaders' profit as the highest priority. But if you don't believe it, you are welcome to do as you wish.


Kalle

Kzin
03-12-2010, 09:21 PM
When I started this thread I was told directly that I shouldn't talk about this - that it has been discussed, that the graphics of games are just too awful compared to animations, that there is no way to do it, etc. etc. Well, my point was simple: look at the WHOLE thing generally. Yes, games are optimized, but come on - I know deep inside that animation could be rendered way faster by utilizing the methods of games. Just look at the newest games; at least animation rendering should be a little faster by now! And my point stands - just look at the number of posters here... wow, opinions are TRULY divided. But I don't want to fight with words; you are welcome to say and think whatever you want. I am instead going to spend some good money on rendering my animations on render farms in the end - that is the POINT for the industry leaders, isn't it?


regards
Kalle

Set up your 3D scene the way, for example, Crysis does - with its geometry complexity, shading tricks and all that stuff - and you will be surprised how fast rendering can be. So I don't understand your point here. If you want such fast renderings and you are happy with such coarse render quality, then download the Unreal kit or the Crysis Sandbox editor and start working. But I am pretty sure your clients will laugh about it. ;)

Animals
03-12-2010, 09:32 PM
Set up your 3D scene the way, for example, Crysis does - with its geometry complexity, shading tricks and all that stuff - and you will be surprised how fast rendering can be. So I don't understand your point here. If you want such fast renderings and you are happy with such coarse render quality, then download the Unreal kit or the Crysis Sandbox editor and start working. But I am pretty sure your clients will laugh about it. ;)

I didn't mean to set up a scene specifically like that, because that will not work - the program itself wasn't designed to do it. There must be new software that can do it. Don't ask me which, but I am certain that humans are capable of rendering animations faster than the current speed, considering all the "coarse" game eye candy. Aah, I can't tell you exactly how to make that software - did I build and design the computer I am using right now? No.

simply folks:

look at games

look at animations

look at the rendering times for both

Something is wrong. Animation renderings should be much, much faster - but not as fast as games, of course. Combining the GPU/CPU and special software, for instance... I dunno.

mister3d
03-12-2010, 09:49 PM
I didn't mean to set up a scene specifically like that, because that will not work - the program itself wasn't designed to do it. There must be new software that can do it. Don't ask me which, but I am certain that humans are capable of rendering animations faster than the current speed, considering all the "coarse" game eye candy. Aah, I can't tell you exactly how to make that software - did I build and design the computer I am using right now? No.

simply folks:

look at games

look at animations

look at the rendering times for both

Something is wrong. Animation renderings should be much, much faster - but not as fast as games, of course. Combining the GPU/CPU and special software, for instance... I dunno.

:banghead: 12 pages didn't help. :)

Kzin
03-12-2010, 09:53 PM
I think you don't have a clue about production scene complexity. You look at games; they say they use SSS, and you think, "why does my SSS take so long to render while Crysis renders it in seconds?" The point is, Crysis doesn't use SSS - it uses tricks to fake it. For AO it's the same: it's coarse, I mean really coarse. You would not use such an AO solution in offline rendering (though you can - RenderMan's point-based AO is based on an Nvidia tech paper). Look at Mudbox: you can see there how coarse the real-time solution is. The problem is you can't use the settings in Mud that you would use in production renderings, because that would not be real time anymore - it would become unusable. Sure, you can make it faster by using the GPU, but for production rendering it's not that easy to use the GPU for certain tasks, because you have to load all the data into graphics memory, and there is one problem (bandwidth is another). Also, GPUs are in general not good for flexible render engines like RenderMan or mental ray. This is one reason mental images and Nvidia decided to develop iray - simply because it's impossible, or too hard, to bring mental ray directly to the GPU. But GPU rendering is off topic here. ;)

So, if you want it faster, use FurryBall, for example. I think it's the solution you are looking for.
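As a rough illustration of the "coarse vs. production AO" point above: Monte Carlo AO noise falls off as 1/sqrt(N) in the ray count, so halving the noise costs 4x the rays, per shading point, per frame. This toy model (my own sketch, not any renderer's code) abstracts the actual ray-vs-scene intersection test into a hit probability, so only the sampling cost is being modeled:

```python
import random

def ao_estimate(occluded_fraction, samples, seed=0):
    """Monte Carlo AO at one shading point: fire `samples` hemisphere
    rays, count hits, return openness (1 = fully open, 0 = buried).
    The scene test is abstracted to a hit probability; the point is
    that the estimate's noise shrinks only as 1/sqrt(samples)."""
    rng = random.Random(seed)
    hits = sum(rng.random() < occluded_fraction for _ in range(samples))
    return 1.0 - hits / samples
```

A game-style pass might take ~16 coarse samples per pixel and blur the result; an offline renderer might take 256+ unblurred - roughly 16x the ray budget for only 4x less noise, which is where much of the render-time gap lives.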

DanielWray
03-12-2010, 09:56 PM
There are huge differences between the two, and they target very different areas.

That's not to say animation can't become faster to render. GPU/CPU (mixed) processing is now coming along very well and can dramatically improve rendering speeds - some engines can render at 2-3 fps on a highly complex, fully illuminated and textured environment. It's very different from gaming technology, but it does, in one sense, use some of the ideas from game rendering, such as using the GPU for the rasterizing part.

I only object to the idea of using a game engine or something along those lines for production work, as it simply wouldn't fit into what I do; of course it will work for others, and I encourage that. But again, it brings me back to the first line: there are huge expectations from each area, and they both do what they do exceptionally well, but it's not something that can easily be combined. If it were, it would have been done a long time ago.

Animals
03-12-2010, 10:01 PM
There are huge differences between the two, and they target very different areas.

That's not to say animation can't become faster to render. GPU/CPU (mixed) processing is now coming along very well and can dramatically improve rendering speeds - some engines can render at 2-3 fps on a highly complex, fully illuminated and textured environment. It's very different from gaming technology, but it does, in one sense, use some of the ideas from game rendering, such as using the GPU for the rasterizing part.


Exactly what I mean - not to copy the game engine, of course; I didn't mean anything technical. You said what I said, and it is being done too - it dramatically improves the speed of rendering.

"in one sense use some of the ideas from game rendering, such as using the GPU for the rasterizing part."

That is what I have been trying to say: not copy exactly, but somehow incorporate it. Watches and computers seem very different, but the base is the same.

Animals
03-12-2010, 10:04 PM
Exactly what I mean - not to copy the game engine, of course; I didn't mean anything technical. You said what I said, and it is being done too - it dramatically improves the speed of rendering.

"in one sense use some of the ideas from game rendering, such as using the GPU for the rasterizing part."

That is what I have been trying to say: not copy exactly, but somehow incorporate it. Watches and computers seem very different, but the base is the same.

Kzin, I have seen FurryBall and it is looking good... so what is the conflict, really? It is being done, but it is being done very slowly (the technology, not the rendering). BTW, a good deal of misunderstanding is what I hate about forums.

DanielWray
03-12-2010, 11:11 PM
I believe the reason it's taken so long is that the physical techology has had to catch up.

If you look at CPU's released say, since the Nvidia 8x series, they started to incorporate the bases for a processor that could do more than just process what ever is required for graphics, vectors, pixel shaders (I've no idea bout this to be honest :) ) and started to open it up to general computing. Now with the latest technology it's finally allowed developers to take their knowledge and experience from the X86 architecture over to the GPU via OpenCL and the like. This all takes time, the developers have to understand the hardware and new API's, it also takes time to get funding for new technology. So yea in general it all takes a very long time, but look how long it's taken the CPU and the software which runs on it to the level that it's at now. It's taken a very long time and in comparison the GPU accelerated technology has really developed at a phenomenal rate.

CGIPadawan
03-12-2010, 11:46 PM
When I started this thread I was told directly that I shouldn't talk about this - that it has been discussed, that the graphics of games are just too awful compared to animations, that there is no way to do it, etc. etc. Well, my point was simple: look at the WHOLE thing generally. Yes, games are optimized, but come on - I know deep inside that animation could be rendered way faster by utilizing the methods of games. Just look at the newest games; at least animation rendering should be a little faster by now! And my point stands - just look at the number of posters here... wow, opinions are TRULY divided. But I don't want to fight with words; you are welcome to say and think whatever you want. I am instead going to spend some good money on rendering my animations on render farms in the end - that is the POINT for the industry leaders, isn't it?
regards
Kalle

Kalle, I think you are on to something there. I believe that in the end, a resolution will be attained not necessarily by the forced combination of "game engines," GPU calls and 3D applications (although that could be the start), but eventually, after a careful (and long) period of modifying the software to gather the strengths of each, a new way of working can emerge.

Again, I want to remind people that in the end, the bottom line is the audience experience. It would be wasteful for hobbyists or professionals to go beyond that in a manner that is excessive. I believe what Kalle has experienced is the beginning of an "Experience Plateau."

In the early-to-mid '90s, you could look at your PlayStation games and the FMVs and the difference was night and day, so this "realization" that a cross-working of technology/methods might be possible was not even being discussed. But like Kalle said... look at TODAY'S games, and there are instances where it seems it can work. Of course, there are still limitations, but what's clear is we're not in the '90s anymore.

The other big change is the scale of work in CG, and the deadlines... There are more and more CG houses around the world. The deadlines are getting shorter, and there is a need to push out quality output in less time and for less money, to get more bang for the audience. If one day a 3D app let you model and work in your workspace and, instead of OpenGL, switch on a "Crytek View" or something, so that you saw an extremely close "beauty render approximation" of what you were getting, that would be a BIG help. The same goes for render engines one day fully leveraging GPU power. It's just that all 3D apps started out not using GPUs and graphics cards so extensively, and that is why we are in this divide.

I think that is just what Kalle was trying to point out. It's not necessarily "game engines are better"; it's that he must have been thinking about the germ of an idea of how 3D applications and workflows could one day evolve.

Animals
03-13-2010, 12:37 AM
Kalle, I think you are on to something there. I believe that in the end, a resolution will be attained not necessarily by the forced combination of "game engines," GPU calls and 3D applications (although that could be the start), but eventually, after a careful (and long) period of modifying the software to gather the strengths of each, a new way of working can emerge.

Again, I want to remind people that in the end, the bottom line is the audience experience. It would be wasteful for hobbyists or professionals to go beyond that in a manner that is excessive. I believe what Kalle has experienced is the beginning of an "Experience Plateau."

In the early-to-mid '90s, you could look at your PlayStation games and the FMVs and the difference was night and day, so this "realization" that a cross-working of technology/methods might be possible was not even being discussed. But like Kalle said... look at TODAY'S games, and there are instances where it seems it can work. Of course, there are still limitations, but what's clear is we're not in the '90s anymore.

The other big change is the scale of work in CG, and the deadlines... There are more and more CG houses around the world. The deadlines are getting shorter, and there is a need to push out quality output in less time and for less money, to get more bang for the audience. If one day a 3D app let you model and work in your workspace and, instead of OpenGL, switch on a "Crytek View" or something, so that you saw an extremely close "beauty render approximation" of what you were getting, that would be a BIG help. The same goes for render engines one day fully leveraging GPU power. It's just that all 3D apps started out not using GPUs and graphics cards so extensively, and that is why we are in this divide.

I think that is just what Kalle was trying to point out. It's not necessarily "game engines are better"; it's that he must have been thinking about the germ of an idea of how 3D applications and workflows could one day evolve.

Yep, I never meant to literally use the game engines, but to somehow utilize whatever makes games render lightning fast. I am talking about the concept, and I can see that rendering can be greatly sped up without sacrificing quality. Of course, if you use a game engine directly, that is another story.

Maybe the fact that no such software exists today is what blocks the vision.

R10k
03-13-2010, 01:12 AM
I like how this thread has boiled down to rendering being slow because today's software blocks the vision of it being faster.

Someone grab the jar of magical coding dust so we can fix this problem.

sebastian___
03-13-2010, 02:59 AM
Yes, yes - and those things will render very fast even on an old computer, no need to change the engine... just use shadow maps and AO, place some lights to fake GI, use interpolated reflections, and voila! You have in-game graphics - cheap, and it looks it.

It will render very fast, yes. But you could render it even faster - like 30 fps. What would be the benefit?
Well, the benefit of adjusting your scene while looking at the final picture. I think that is a significant advantage. The only problem is you have to learn new software. That is, until 3D programs' viewports finally catch up with the technology and start to offer a coarse final-render picture.

But by the time programs like Maya and 3ds Max have improved viewport technology - like games offer today - games will also have advanced very much.

CGIPadawan
03-13-2010, 03:09 AM
It will render very fast, yes. But you could render it even faster - like 30 fps. What would be the benefit?
Well, the benefit of adjusting your scene while looking at the final picture. I think that is a significant advantage. The only problem is you have to learn new software. That is, until 3D programs' viewports finally catch up with the technology and start to offer a coarse final-render picture.

But by the time programs like Maya and 3ds Max have improved viewport technology - like games offer today - games will also have advanced very much.

Actually, yes. That would be a HUGE advantage and time saver. The other thing to note is that I, personally, do not believe the curve is infinite. Just as happened with audio and optical storage, there will be a plateau - an end after which further increase yields no more return.

There is reason to believe that time is fast approaching.

sebastian___
03-13-2010, 05:43 AM
Yes. In 1995 there was no such thing as real-time audio effects, at least on computers. The technology advanced, and by 2002 it was possible to have all sound effects in real time, across tens of audio tracks. You would hit Play and listen to the final, finished song before you hit Mixdown (the audio equivalent of the render button).

The song could even pass through mastering effects at the same time (the equivalent of compositing and color correction in graphics). And that was several years back.

The same thing will happen in 3D and 2D graphics.

And as for the quality evolution of the final product in songs: after 16-bit, 44.1 kHz audio (the audio CD launched in 1982), companies recently started to offer 24-bit audio at 96 kHz or even 192 kHz - but people no longer cared.

The majority of people are satisfied with 128 or 192 kbps MP3s, which actually sound a little worse than the standard CD.

Hirni_NG
03-13-2010, 11:15 AM
I like how this thread has boiled down to rendering being slow because today's software blocks the vision of it being faster.

Someone grab the jar of magical coding dust so we can fix this problem.

Yes, and apparently artists and users have far more insight into the problem of rendering than the people who know all the math behind it and write all those slow renderers and fast game engines.

Also, most of the discussion is moot, because anyone who wants realtime rendering as a final render output could buy a FurryBall license today - it is basically an interactive final-render viewport right inside Maya. I threw some heavier stuff at FurryBall and it worked as expected, so there is no reason not to use renderers like FurryBall or MachStudio when the scene can be rendered with them and the quality is sufficient.

CHRiTTeR
03-13-2010, 03:30 PM
I like how this thread has boiled down to rendering being slow because today's software blocks the vision of it being faster.

Someone grab the jar of magical coding dust so we can fix this problem.


best post in the whole thread :D

sundialsvc4
03-15-2010, 03:44 AM
Animals, honestly I do think that you are repeating the same things over and over again without really listening to the responses you are getting. :banghead:

The production demands of a game are over-arched by exactly one concern: frames per second. Nothing else matters. Any amount of material used in the game can be calculated ahead of time to produce the raw material that goes into this "must-be-realtime" presentation.

The production demands of "anything but a game" are very different. Now, frame rate doesn't mean squat. The finished imagery isn't calculated all-at-once, because it does not have to be. The demands placed on the final image are much, much higher. The production company now has many more choices as to what is the best solution for each problem. And it makes them. Over the lifetime of a project that might take four or five years to finish ... and not just "due to render times."

You can be certain that high-speed computational hardware (such as a GPU or its equivalent) is used to the extent that it can be. But, I think that you misunderstand the scope of the work and therefore seriously over-estimate its applicability to that work.

Simple example: "a live music show vs. a recording session." A live show is real-time. But it might use loops, sequencers, and even CDs of pre-recorded material. The over-arching constraint is: real time.

And yet: albums aren't recorded that way. Haven't been, since Les Paul made his wondrous invention. And for good reason. CG scenes in movies follow a similar workflow for the same good reason.

sebastian___
03-15-2010, 07:59 AM
Everything you said is true. However, in the future (soon enough) there will be professional 3d programs able to display everything - the final render - in realtime, the same way game editors today can display tons of things in realtime.

We are not there yet (for dedicated 3d apps), but everything indicates we are headed there. Maybe in 5 or 10 years...

CHRiTTeR
03-15-2010, 08:07 AM
If you look at how things have evolved, then it's a reasonable thing to think (and I also think it will be possible in x years).


HOWEVER, we have been so 'spoiled' by rapidly evolving technology that we start to think it's normal. But look at more recent events and you'll see that they are running into real problems making CPUs - and GPUs - faster.

You should keep that in mind and not rely too much on how things went in the past.
It's a good reference, but not a guarantee.

It's not as obvious as some people seem to think...
There are limits to how fast information can be transported and to how far mathematical formulas can be optimized. So they will have to discover new approaches - and I agree there are probably plenty of new things to discover - but you get the point...

CGIPadawan
03-15-2010, 08:27 AM
Actually this discussion also reminds me a bit about the AVATAR "Virtual Camera".

James Cameron mentions in many interviews that he asked for a camera and monitors that would show him the composited CG result, as close as possible to the beauty renders, while he was shooting the actors (both performance-captured actors and live actors to be composited), and that he wanted to see the same thing in stereo 3D output, all at once. He was told: "No, that's not possible. That's post-production, not something done in production."

But see, it IS possible; people just have to think a bit differently. And yes, it's going to take a LOT of time and effort.

Sure, sure, movies are different from games... yeah. But you know what? It's all CG. It's all mesh geometry, textures, lighting, rigs, FK, IK, SSS, AO... the people who make BOTH use the same terms these days.

It's not THAT different.
But it's not going to happen overnight either.

CHRiTTeR
03-15-2010, 08:40 AM
Well, James Cameron is always very proud to say he had that vision while everyone else told him it wasn't possible.

I wasn't there, of course, but I'm sure they told him it wasn't possible with the budget he had then. Which is true. And now, of course, it is possible... it only cost him $500 million. ;)

The technology to use 2 cameras and create a stereoscopic 3D movie has been there for quite some time.
Same thing with the previews.

It's not like he invented 'new technology', as far as I know.
He just had the cash/budget to use the technology the way he wanted to.

CHRiTTeR
03-15-2010, 09:34 AM
It's not THAT different.

No, they aren't THAT different, but they are still different enough that one requires a lot more to be calculated than the other. Very small differences can require far more complicated computations.
It's about computing power, and that's where the problem lies.

If computing power weren't a problem, we would all be using insane resolutions with unbiased renderers; we would all be able to render how the big bang evolves and watch the resulting universe unfold in realtime (if the theory and formulas are right, that is), throw in some variations just for fun, and zoom in at insane rates at any point.

You know, an apple and an orange are both fruit; they have similar components; it's all matter - so they aren't THAT different. But try turning one into the other.

Or let's play old-school alchemist and turn lead into gold... they aren't THAT different.

Now let's turn an apple into gold... both matter, not that different.

sundialsvc4
03-15-2010, 12:37 PM
Consider once again the "multi-track recording" analogy.

There are huge advantages to being able to record each component of a performance in a separate and isolated way, and then to "mix-down" the individual performances into a final deliverable product. In short, we have three steps: "capture," mixing, and mastering.

Or take conventional movies. An actor on the set "clams" a line. You loop it. Or a truck drives by at the wrong instant in an otherwise perfect scene. You can remove it, because you had a microphone outside in the street. You have a process that means you don't have to ship the "clam."

It goes without saying that you want to leverage the hardware as much as you possibly can, with the technology now available ... but the underlying process demands flexibility, not so much "real time." Therefore, while this pushes you to use GPUs as much as you can, it does not limit you to using only what a GPU can provide. Games, on the other hand, do live by that fundamental constraint.

Is there a way that the GPU can do this-or-that step instantly? Great! You'd be a fool not to use it. But... it's just a step.

If your rendering "takes forever," well, you are doing something wrong. You are wasting time. If you spent days rendering something and now you must throw it all away, you just threw away days that you cannot bill for, having made zero progress. "Enjoy eating soup." :buttrock:

It isn't simply a matter of "instantaneous vs. gobs-of-hours." The key difference is one of process, not the means by which computations are done. You don't want to record all those tracks all at once straight to a two-track cassette tape. This isn't Live Radio.

sebastian___
03-15-2010, 02:12 PM
I know what you are trying to say: that we don't do everything in realtime, in a single beauty pass, by choice. Even if we could do everything in realtime, we would still render in passes, piece by piece, because we get the benefit of compositing and fixing things in post.

But a theoretical realtime editor wouldn't negate that benefit. You could still render in passes. But you would have the tremendous advantage of finishing most things before rendering, because you would be looking at the final render while still building and adjusting the scene.

Even though we don't have that possibility yet, I could still feel what it would be like, just working on this scene. Everything realtime (though no realistic GI), everything moving: the leaves, the plants, the water. And I drop this dragonfly into the scene. At first I wanted to animate the wings in 3ds Max, but then I thought of a better way: parametric animation in CryEngine, with controls like speed, amplitude and offset, as you can see in the schematic attached. Then I hit simulate (or play) and I have object motion blur right in the viewport, BEFORE I hit render. So I can better tweak the wing animation - or the motion blur.

http://img194.imageshack.us/img194/5233/dragonflymblur.jpg

http://img14.imageshack.us/img14/6154/dragonflyfg.png

I think it will be great when we have that in every piece of software.

gruhn
03-15-2010, 02:57 PM
Except you don't have blurred wings, you have smeared background.

One of the issues is that people who make renderings that take a very long time are different from people who play games - they can see what they're looking at.

sebastian___
03-15-2010, 07:46 PM
Except you don't have blurred wings, you have smeared background.


Well, maybe the motion blur was too strong. Still, I believe the future is bright for realtime viewport technologies. I'm sorry I cannot transmit my enthusiasm to you.

http://img31.imageshack.us/img31/5233/dragonflymblur.jpg

CHRiTTeR
03-15-2010, 08:09 PM
No, the motion blur is not too strong; it's just wrong.

Take this example.
There are two planes: one in the background, mapped with a B/W checker map, and one in front, mapped with a red/blue checker map, 50% transparent and rotating (with motion blur).

http://img708.imageshack.us/img708/3295/wrongmoblur.jpg (http://img708.imageshack.us/i/wrongmoblur.jpg/)


Normally the background shouldn't be blurred, as it isn't moving.
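The distinction can be sketched in a few lines of Python (a toy 1D "renderer" made up purely for illustration - none of these names come from any real engine): a correct blur averages whole frames sampled across the shutter interval, so static background pixels come out untouched, while the cheap post-process smear filters the one finished frame and drags the background along with the object.

```python
# Toy 1D illustration of correct vs. hacked motion blur:
# a static checkered background with one moving grey "object".

def background(width):
    return [float(i % 2) for i in range(width)]  # static checker pattern

def render(object_pos, width=10):
    """Render one instant: the background with the object drawn on top."""
    row = background(width)
    row[object_pos] = 0.5
    return row

def correct_motion_blur(positions, width=10):
    """Offline-style blur: average several full renders taken at different
    times within the shutter interval. Pixels the object never covers
    come out exactly equal to the static background."""
    frames = [render(p, width) for p in positions]
    return [sum(f[i] for f in frames) / len(frames) for i in range(width)]

def screen_space_smear(frame, radius=1):
    """Game-style hack: blur the single finished frame along the motion
    direction. Cheap, but it smears static background pixels too."""
    out = []
    for i in range(len(frame)):
        lo, hi = max(0, i - radius), min(len(frame), i + radius + 1)
        out.append(sum(frame[lo:hi]) / (hi - lo))
    return out

blurred = correct_motion_blur([2, 3, 4])  # background pixel 0 stays 0.0
smeared = screen_space_smear(render(4))   # background pixel 0 gets smeared
```

The trade-off is the whole argument in miniature: the correct version costs one full render per shutter sample, while the smear costs a single cheap image filter.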


This is a good example of the quick hacks games have to use - and of the ignorance of a gamer thinking he knows how to do it better than the people who actually have to code the damn thing and know what they are talking about.

My suggestion would be to start a topic in the graphics programming forum and teach all the dumb programmers how to render Avatar in realtime using techniques from CryEngine, Unreal or whatever. Because apparently they are doing it all wrong!


good luck!

Tryn
03-15-2010, 08:59 PM
Sebastian, you're obviously very enthusiastic about this; you communicate that just fine. You have some very nice images in here, but they're all essentially the same - moodily lit forest shots with soft light and nice water. And that's great if that's your goal, or if you're just making images for fun, but it's far too limited for most commercial use.

anopheles
03-15-2010, 09:37 PM
This is a good example of the quick hacks games have to use - and of the ignorance of a gamer thinking he knows how to do it better than the people who actually have to code the damn thing and know what they are talking about.

This is because games are games and not intended for rendering movies. And I guess it is not too hard to implement a correct version of motion blur in a 3D game engine. Btw, the incorrect one looks better :)

I guess a realtime 3D engine for cinematic productions isn't that hard to realize; the problem is that there is simply no interest in using one.

CHRiTTeR
03-15-2010, 09:46 PM
Clearly I didn't make that image to show off how it looks. :rolleyes:

It's not hard to implement the correct version in a game engine, but it will be slow, as it requires a totally different approach ;)
That's the whole point!

Don't you think they would have used correct motion blur if it were possible?
Or do you think game-engine programmers don't do research, have no clue what motion blur is, and just code random stuff?

Do you actually read, or do you only look at the pictures and make stupid remarks about which one looks prettier, even when that's clearly not what they were intended to show?
This ain't the gallery section; this is a technical discussion.

CGIPadawan
03-16-2010, 12:28 AM
Clearly I didn't make that image to show off how it looks. :rolleyes:

It's not hard to implement the correct version in a game engine, but it will be slow, as it requires a totally different approach ;)
That's the whole point!

Don't you think they would have used correct motion blur if it were possible?
Or do you think game-engine programmers don't do research, have no clue what motion blur is, and just code random stuff?

Do you actually read, or do you only look at the pictures and make stupid remarks about which one looks prettier, even when that's clearly not what they were intended to show?
This ain't the gallery section; this is a technical discussion.

Well it LOOKS like the wings are flapping... That's all the audience will care about.

And yes, WE all know the difference between games and movies. I was just saying the paying audience wouldn't know the difference. The apples-and-oranges comparison is not really fair, because in that scenario an "audience" can taste the difference.

The main root of this discussion was that in many CG presentations, gamers and cinema-goers hardly see the difference anymore. So the comparison should be between two kinds of apples - and the possibility of cross-breeding them into a super-apple. :)

derMarkus
03-16-2010, 01:29 AM
Few things come to my mind when I read the last pages of this thread.

- a lot of computation for games is done in advance, so I think "realtime" is a little misleading here. Also, a lot of smart guys did a lot of work in advance to get the engines doing what they do. That's why the AAA engines are so expensive, I guess.
- saying that people don't care about the quality difference between high-end game content and movies is a bit like saying people don't like to drive big luxury cars because the smaller ones get them from A to B too. I think people will always favor the latest and best pictures/products. If that weren't the case, we could get away with DAZ 3D and Poser and have nice long weekends with our families.
- as said earlier, game engines fake a lot. That's exactly the opposite of the whole "unbiased" render trend we have in CGI today. Sure, realtime raytracing is on its way, but it will take a little more time before we are doing production-ready realtime SSS/GI/blurry/vector-displacement work that looks as good as or better than what we did before with the good ol' V-Rays, mental rays and Mantras.

Just my 2 cents; ignore anything that's already been said.

Tryn
03-16-2010, 01:50 AM
Well it LOOKS like the wings are flapping... That's all the audience will care about.

And yes, WE all know the difference between games and movies. I was just saying the paying audience wouldn't know the difference. The apples-and-oranges comparison is not really fair, because in that scenario an "audience" can taste the difference.

The main root of this discussion was that in many CG presentations, gamers and cinema-goers hardly see the difference anymore. So the comparison should be between two kinds of apples - and the possibility of cross-breeding them into a super-apple. :)

You can bet your pants they'll know the difference when it's output at cinema resolution, so I think the analogy is fair.

gruhn
03-16-2010, 02:17 AM
> This is because games are games and not intended for rendering movies.

This is true. But the subject of this thread presumes otherwise.

sebastian___
03-16-2010, 09:39 AM
Why are you so negative? Instead of criticizing the realism of the motion blur, you should be saying: WOW! Is that in the viewport? In realtime?

Of course it's not 100% realistic, unbiased, physically accurate, scientifically simulated and university-approved. I posted that picture to say: "look how it feels to have everything in realtime." Even if today's professional editors don't have that, look at how we will work in the future.

You could, for example, have that fake mblur in the viewport, then use a more advanced one for rendering. Would that not be useful? Would nobody like to work that way?

All kinds of "plugins" can be implemented easily in a game engine/editor. Look at the DOF bokeh shader, or at the alpha channel and z-depth I made.

EDIT: and I just realized that in the 3ds Max viewport you can also have a motion blur preview. Just press a button, wait 10 seconds while the program calculates the samples, and it will show you a (very low quality) picture with motion blur. But that is just a picture. The picture I posted is not just one frame: everything is moving at 30 fps, the wings are flapping, and the motion blur is on all the time.

CHRiTTeR
03-16-2010, 01:12 PM
It has been said more than once that it is a nice image; it's just not good enough for movies or high-res print. It would look much better rendered with a "slower" offline renderer.

If you want to hear how beautiful your scene is, you are posting in the wrong topic.
Like I said, this ain't the gallery section.

I'm sure if you show the audience both versions (correct and smeared) they will notice the difference, especially in action scenes where a lot of stuff is flying around at high speed. With the wrong version you would quickly get a screen full of smearing, while with the correct version you'd get good results without unwanted distortions.

Again, there are situations where you won't notice, and these techniques are actually used for movies in cases where it doesn't matter. But there are plenty of cases where it just won't work, and that's when you need the "slow" version.

But after reading your last post, I think you want to use something like the Crysis engine in Max's viewport for previewing purposes, not for final renders? Once you hit render, the offline renderer takes over and renders at far better quality?

But then again, many people have already tried to tell you that games use different techniques to display things. So if you want a nicer preview in your Max viewport, you will need to write separate shaders for the viewport to use.
That is, if you want game-quality preview in the viewport but Hollywood production quality when you hit the render button, you can do that, but you will need to create every shader twice:
one fast (hacky) version to describe the scene to the viewport renderer, and another (high quality) version for the final renderer. So you double the work needed.
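The "write every shader twice" idea can be sketched like this (an invented toy example in Python, not real viewport or renderer code): the same material exists as a one-sample preview version and a many-sample final version, and the only difference is how much work each is allowed to do per pixel.

```python
import random

def lambert(normal, light_dir):
    """Basic diffuse term shared by both shader versions."""
    ndotl = sum(n * l for n, l in zip(normal, light_dir))
    return max(0.0, ndotl)

def shade_preview(normal, light_dir):
    """Viewport version: one cheap evaluation per pixel, hard lighting."""
    return lambert(normal, light_dir)

def shade_final(normal, light_dir, samples=64, spread=0.1, rng=None):
    """Final version of the *same* material: jitter the light direction
    over an area light and average -> soft lighting, many times the cost."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(samples):
        jittered = [c + rng.uniform(-spread, spread) for c in light_dir]
        norm = sum(c * c for c in jittered) ** 0.5
        total += lambert(normal, [c / norm for c in jittered])
    return total / samples
```

Keeping the two versions in sync by hand is exactly the doubled workload described above; a system like MetaSL aims to generate both from one material description.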

For materials there is the DirectX material; for post-processing effects (like motion blur) there is the scene effect loader utility. So if you have an .fx shader that uses the same technique as Crysis, you'll get the same effect in your viewport. Many game studios use this to preview their models inside 3ds Max.
This has been available for quite a few releases already.
There are some good examples included with Max, by the way, and plenty of really good shaders available for free on the net.

You could write one yourself, which shouldn't be a big problem, since you already seem to know everything about it.


Now, with MetaSL this should become easier (if the rendering devs choose to support it, which is another issue/topic).
It's a universal shading language which can be compiled into various other shading standards.
So you can create materials in mental mill, and mental mill can compile them for you into the desired format (I don't know about scene effects like motion blur, but I suspect it should be possible, if the devs implement it).
But do not expect the automatically generated code to be as good/fast/efficient as handmade (better optimized) code, and don't expect the Max viewport to render the stuff as fast as a game, because - again(!) - games are optimized for simple scenes, while Max needs to be able to display heavy geometry and animation.
3ds Max already has MetaSL support built in, and mental ray supports it (I think the arch & design shaders use it), since mental images developed MetaSL.
I'd suggest you look up some info about that.

http://www.mentalimages.com/products/metasl.html



Yes, you can also preview the motion blur using camera passes. This is the slowest solution, but the quality is good if you use enough passes, and it's very nice to have in combination with the animation preview renderer.
As a matter of fact, the example I posted earlier was done this way.

mister3d
03-16-2010, 05:33 PM
It's time to close this thread, as it's outdated: http://www.randomcontrol.com/arion
Technology moves so fast that by the time you create a thread, the topic is already old. ;)

sundialsvc4
03-16-2010, 05:59 PM
It would just reopen again, and besides, the bit about motion-blur technique that just popped up is quite interesting.

This thread has, in some ways, already evolved into a discussion of how the techniques (and the compromises) of these two "highly-visible yet very-different" sub-disciplines of CG depart from one another. In my mind, that's worth continuing.

There are, indeed, many ways that CG is used: motion graphics, games, movies, television, architectural walk-throughs, forensics, and much more. And it is always a very big deal, always very important, that the right techniques be used. This is the true subject of this thread. Keep it open. Keep it going. :cool:

mister3d
03-16-2010, 06:03 PM
It would just reopen again, and besides, the bit about motion-blur technique that just popped up is quite interesting.

This thread has, in some ways, already evolved into a discussion of how the techniques (and the compromises) of these two "highly-visible yet very-different" sub-disciplines of CG depart from one another. In my mind, that's worth continuing.
I was just joking, that's why I put a smiley there.

sundialsvc4
03-16-2010, 06:04 PM
I was just joking, that's why I put a smiley there.
Yeah, but the mods around here are sometimes quick on the draw.

(And to be perfectly fair, sometimes they need to be.)

P.S. CHRiTTeR: I've read and re-read your post :argh: about five times now. :bowdown: Care to elaborate? A lot? "Take it away..."

sebastian___
03-16-2010, 07:42 PM
CHRiTTeR, I don't think you read my post carefully. I don't need anyone to congratulate me on the artistic aspect of my pictures.

I posted purely for technical consideration. As I said before:

Imagine a future where you have everything in front of you in realtime, including mblur - not just a static single-frame picture (rendered in 10 seconds) like in 3ds Max, but mblur at 30 fps, like your final movie. Soon we will have the final beauty pass - a static single frame - right in the viewport. But I dream even further: a complete animation, at final-render quality, right in the viewport. And working on my project I could imagine I'm in the future already :) because I have some of those capabilities now

About audiences noticing the difference: I will post a link to a movie (the images posted are from that movie)

And about what you said regarding preview stuff in the viewport - you didn't understand me. It's not about creating the same stuff twice, and not about old technologies already implemented in 3ds Max. With those you would never achieve the quality or the speed.

What I mean is a technology preview in the viewport using the same assets you are working on. The drawback would be that in the viewport you would have no antialiasing and a lower resolution,
but the image would otherwise be identical to the final render. Wouldn't that be nice?

mister3d: I worked with and studied the fryrender engine. Very good stuff. And now the same thing in realtime? Amazing indeed.

Slightly off-topic maybe - not about rendering, but other areas of 3d work: the CryEngine facial editor is also impressive, and "more realtime" (if I can say that) than other professional editors, I think :) I heard about work to connect it to a motion capture device for realtime feedback, like in Avatar. By the way, does anyone have pictures of the 3d previz they used for Avatar? I'd like to see the realtime quality they get with their million-dollar machines.

http://img715.imageshack.us/img715/7026/facialeditor.png

EDIT : the pic doesn't show - so here is the link http://img715.imageshack.us/img715/7026/facialeditor.png

Tryn
03-16-2010, 09:05 PM
Huh. Well, I thought this thread was asking the question in the topic, but now it's all "I have a dream..." and the future of realtime display.
The old technologies in Max are still there because they work. I think you're underestimating the amount of pre-computation and baking taking place in CryEngine that allows you to work in realtime. And to me, combining that with a high-quality renderer is counter-intuitive. Realtime rendering would be awesome, obviously (though it would give me less time to browse CGTalk, hehehe...), but not at the expense of the freedom to control every element of my render.

sebastian___
03-16-2010, 09:25 PM
Huh. Well, I thought this thread was asking the question in the topic, but now its all "I have a dream...." and the future of real-time display.

:)

I'm not gonna repeat myself about the lack of baking in CryEngine; instead I'm gonna ask: what do you think is baked there?

Tryn
03-16-2010, 10:50 PM
:)

I'm not gonna repeat myself about the lack of baking in CryEngine; instead I'm gonna ask: what do you think is baked there?

Baking is probably the wrong term, then - game tech is not my forte. I am sure there is a great deal of pre-computation going on - I can't see how it could possibly be otherwise (barring some sort of Faustian deal between Crytek and Santa Claus...).
My point was that you can't use the same computations for realtime as for final rendering. I can definitely see viewport display constantly improving, but not to the point where you see a copy of your final render with accurate light bouncing, physically correct soft shadows and reflections. There will have to be "cheats" in place for a while yet if we want to maintain the level of control we have now. I do a lot of compositing with photos, and the excuse "this is what the software gives me, I can't do anything about it" won't fly with my bosses, let alone our clients.

CHRiTTeR
03-16-2010, 11:20 PM
CHRiTTeR, I don't think you read my post carefully. I don't need anyone to congratulate me on the artistic aspect of my pictures.

I posted purely for technical consideration. As I said before:

Imagine a future where you have everything in front of you in realtime, including mblur - not just a static single-frame picture (rendered in 10 seconds) like in 3ds Max, but mblur at 30 fps, like your final movie. Soon we will have the final beauty pass - a static single frame - right in the viewport. But I dream even further: a complete animation, at final-render quality, right in the viewport. And working on my project I could imagine I'm in the future already :) because I have some of those capabilities now

About audiences noticing the difference: I will post a link to a movie (the images posted are from that movie)

And about what you said regarding preview stuff in the viewport - you didn't understand me. It's not about creating the same stuff twice, and not about old technologies already implemented in 3ds Max. With those you would never achieve the quality or the speed.

What I mean is a technology preview in the viewport using the same assets you are working on. The drawback would be that in the viewport you would have no antialiasing and a lower resolution,
but the image would otherwise be identical to the final render. Wouldn't that be nice?

mister3d: I worked with and studied the fryrender engine. Very good stuff. And now the same thing in realtime? Amazing indeed.

Slightly off-topic maybe - not about rendering, but other areas of 3d work: the CryEngine facial editor is also impressive, and "more realtime" (if I can say that) than other professional editors, I think :) I heard about work to connect it to a motion capture device for realtime feedback, like in Avatar. By the way, does anyone have pictures of the 3d previz they used for Avatar? I'd like to see the realtime quality they get with their million-dollar machines.

http://img715.imageshack.us/img715/7026/facialeditor.png

EDIT: the pic doesn't show - so here is the link http://img715.imageshack.us/img715/7026/facialeditor.png


I just told you that you can have the same motion blur in the Max viewport if you want to.
All you need is an .fx shader that applies it as a scene effect.

The current technology for doing that in Max isn't "old". It's the same tech games use today.
A lot of the assets you see in games are actually built in 3ds Max before they get imported into the game engine.
You do realize 3ds Max is the most used 3d software in the gaming industry, right?

What you fail to understand is that the Max viewport uses the same technology and principles as games. It is OpenGL and/or DirectX accelerated. The fact that you don't know anything about realtime shaders isn't a Max limitation; it's YOUR limitation.

I already told you to check out the DirectX material and the scene effect loader, and to read about MetaSL, as that is exactly what you are looking for.

Random Control's (the makers of fryrender) Arion looks nice indeed, but it doesn't give you final renders in realtime at 25-60 fps. And again, that's not a software limitation but a hardware limitation.

I don't see what's so special about that screenshot of the facial editor.
It's just a low-poly head with a nice texture on it and realtime control sliders.
The same thing can be done in Max.

The fact that you don't seem to know much about 3ds Max, or how 3d works in general, isn't a Max limitation.
This is all pretty basic stuff, you know.

If you prefer to make movies with the Crysis engine, then no one is stopping you.
But please stop acting like you know everything about the subject when you clearly don't.

People take the time to try and explain why, but you refuse to understand, because in your head you have this great vision that no one else shares. There are plenty of very smart people - who understand a whole lot more about it than you - who have been working on that vision FOR YEARS AND YEARS, but it ain't as simple as you think.

And let me tell you something else: when the time comes that you can render high-quality stuff in realtime, it will not be because of the Crysis technology you are showing here; it will be because computers have become fast enough to run the current high-quality renderers (RenderMan, mental ray, V-Ray, Brazil, finalRender, etc.) at blazing speeds, without the hacks games need.
So it will actually be games adopting HQ movie techniques, not the other way around.

sebastian___
03-17-2010, 12:19 AM
Of course there is heavy optimization - but "baking" isn't the right word.
Also, there is no "real" AO, only screen-space AO... and so on.
For example, objects not visible to the camera are disabled and made invisible - objects outside the view that cast shadows still contribute to the image, but objects on the other side of the mountain are not rendered at all. Game designers normally use low-poly objects for speed, but I imported some heavy-poly objects and the frame rate was unchanged. Although, if I filled the scene with high-poly objects, the frame rate would probably drop.
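That culling logic can be sketched as a tiny visibility pass (a made-up 1D example in Python; real engines do this with view frustums and bounding volumes, not single coordinates): keep what the camera sees, keep nearby shadow casters, drop everything behind the mountain.

```python
def cull(objects, view_min, view_max, shadow_reach):
    """Toy visibility pass: keep objects inside the camera's view range,
    plus out-of-view objects that cast shadows and are close enough for
    those shadows to still reach the view; skip everything else."""
    kept = []
    for obj in objects:
        in_view = view_min <= obj["x"] <= view_max
        near = view_min - shadow_reach <= obj["x"] <= view_max + shadow_reach
        if in_view or (obj["casts_shadow"] and near):
            kept.append(obj["name"])
    return kept

scene = [
    {"name": "tree",    "x": 5.0,  "casts_shadow": True},   # on screen
    {"name": "tower",   "x": 12.0, "casts_shadow": True},   # off screen, shadow reaches in
    {"name": "rock",    "x": 13.0, "casts_shadow": False},  # off screen, no shadow
    {"name": "far_hut", "x": 40.0, "casts_shadow": True},   # behind the mountain
]
```

With a view of [0, 10] and a shadow reach of 5, only the tree and the tower survive the pass, which is why scene complexity outside the view barely touches the frame rate.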

I do a lot of compositing with photos, and the excuse of "this is what the software gives me, I can't do anything about it" won't fly with my bosses, let alone our clients.

I didn't understand that


The fact that you don't know anything about realtime shaders isn't a Max limitation, it's YOUR limitation.


Lol, now you really put me in my place :)

I studied this and asked about the shader / scene effects in Max, and I was told they are limited. No motion blur, screen-space AO, or other such effects are possible.

About the shaders for objects: yes, they are the same shaders used in games today, but with limitations. For example, objects can't interact with other objects - for reflections, say. And even if I had the same possibilities in the Max viewport, I certainly wouldn't have the same speed, nor the ability to render at 5,000-pixel resolution directly from the viewport. Nor the ultra-fast option of automatically filling an entire forest in 5 minutes with trees, grass, rocks, stones, and fallen leaves on the ground...

So you see, I did study the possibilities of a realtime 3ds Max viewport. I'm sorry I didn't give a more detailed answer before.

About the facial editor pic I posted: last time I checked, there were some limits to realtime character work in certain programs. For example, working with heavy rigged characters, with high poly counts and complex facial morphs, posed frame rate and interactivity issues. For this reason, 3D artists build low-poly characters without skinning just for animation - the joints are not connected to the skin, like in these two Avatar pics. The CryEngine editor, meanwhile, seems to suffer no slowdown even with skinned, high-poly characters carrying tons of morph joysticks and other controls. And it also gives a nice preview image.

http://img294.imageshack.us/img294/3204/avatar02d.jpg

http://img260.imageshack.us/img260/3548/avatar01d.jpg

CHRiTTeR
03-17-2010, 12:49 AM
check this out:
http://www.speedyshare.com/files/21473980/directx_blur.zip


All done in 2 minutes with shaders that are already included in the standard Max installation (and there is much better stuff out there).


Sure, it's just a quick example and it isn't real motion blur, but Crysis isn't using real motion blur either. That's the whole point.
Also notice how the background doesn't get smeared, so it is in fact more correct than the one you used. ;)
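For the curious, the idea behind this kind of fake post motion blur - blur only the moving object, leave the background sharp - can be sketched in a few lines (a toy 1D Python illustration of the principle, my own simplification, not the actual shader in the zip):

```python
def post_motion_blur(row, mask, velocity, samples=4):
    """Toy 1D post-process motion blur: average each masked pixel with its
    neighbours along the motion direction. Unmasked (background) pixels are
    left untouched, so the background doesn't get smeared."""
    out = []
    n = len(row)
    for i, (px, moving) in enumerate(zip(row, mask)):
        if not moving:
            out.append(px)  # background stays sharp
            continue
        acc = 0.0
        for s in range(samples):
            # Step along the motion vector, clamped to the image edges
            j = min(n - 1, max(0, i + int(round(s * velocity / samples))))
            acc += row[j]
        out.append(acc / samples)
    return out

row = [0.0, 0.0, 1.0, 1.0, 0.0, 0.0]
mask = [False, False, True, True, False, False]  # only the 'object' moves
print(post_motion_blur(row, mask, velocity=2))
```

A real implementation works in 2D on the GPU with a per-pixel velocity buffer, but the masking trick is the same.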

check these pages out for an example of some better shaders (including ones that use AO):

http://www.robg3d.com/shaders.html
http://www.bencloward.com/resources_shaders.shtml

also check this out:
http://www.lumonix.biz/shaderfx.html


We understand you perfectly fine.



Tryn
03-17-2010, 12:59 AM
I didn't understand that


Sorry, it was a bit of a ramble. What I meant was that I need precise control over my rendering settings. The images I produce have to stand up in a court of law, and they need to be backed up with evidence of how this or that was simulated. I need to have confidence that my tools are accurate, and that there are no 'cheats' or optimizations going on.

R10k
03-17-2010, 02:19 AM
I need to have confidence that my tools are accurate, that there are no 'cheats' or optimization going on.

Side note to the thread: Isn't having a separate specular control a cheat? I know what you're saying... I just tend to think of that one whenever people talk about accurate tools ;)

Tryn
03-17-2010, 03:08 AM
Side note to the thread: Isn't having a separate specular control a cheat? I know what you're saying... I just tend to think of that one whenever people talk about accurate tools ;)

It's a valid point, but we build materials to real world specifications wherever possible - Mental Ray materials in Max are very good at that.

CHRiTTeR
03-17-2010, 03:23 AM
In V-Ray you can lock the specular highlight to the glossiness value so you get physically correct results. When you create a new V-Ray material, it is set up like this by default.
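As a rough illustration of what such a "lock" amounts to (toy Python only, not V-Ray's actual API - the function and parameter names here are made up):

```python
def make_material(reflect_glossiness, lock_highlight=True, highlight_glossiness=None):
    """Toy model of the lock described above: when locked, the specular
    highlight glossiness simply follows the reflection glossiness, so the
    highlight and the blurry reflection stay mutually consistent instead
    of being two independently tweakable 'cheats'."""
    if lock_highlight or highlight_glossiness is None:
        highlight_glossiness = reflect_glossiness
    return {"reflect_glossiness": reflect_glossiness,
            "highlight_glossiness": highlight_glossiness}

print(make_material(0.6))  # locked: both values are 0.6
print(make_material(0.6, lock_highlight=False, highlight_glossiness=0.9))
```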

Tryn
03-17-2010, 03:38 AM
Unfortunately we're running Mental Ray here, but thanks for the tip.

R10k
03-17-2010, 03:53 AM
It's a valid point, but we build materials to real world specifications wherever possible - Mental Ray materials in Max are very good at that.

Sounds like you have it covered :)

In V-Ray you can lock the specular highlight to the glossiness value so you get physically correct results. When you create a new V-Ray material, it is set up like this by default.

Ah, interesting. I'm not a VRay guy but that's a good one to know.

Tryn
03-17-2010, 04:30 AM
Well, looks like this thread is about done. Someone mention the nazis so we can all go home :D
Seriously though, I learnt a lot. Thanks for sharing your wisdom CHRiTTeR! And thank you Sebastian for getting it started; what CryEngine can do is pretty awesome.

CHRiTTeR
03-17-2010, 04:51 AM
Here are a few nicer examples: realtime screengrabs from the 3ds Max viewport using game shaders.

http://www.laurenscorijn.com/wp-content/uploads/2010/01/FinalGrab001.jpg

http://www.laurenscorijn.com/wp-content/uploads/2009/08/Kaleb_chair.jpg

http://www.laurenscorijn.com/wp-content/uploads/2009/08/AlecTrack.jpg

http://www.laurenscorijn.com/wp-content/uploads/2009/08/SamChester_Goat_Posed.jpg

http://www.laurenscorijn.com/wp-content/uploads/2009/08/UBot_shot.jpg

http://skins.thanez.net/kriss/render.jpg

http://i39.tinypic.com/avinv9.jpg

http://www.psytraxx.de/gameartist/speedtexture40/stc40cannon_overtimebykyo.jpg

PhilipeaNguyen
03-17-2010, 05:18 AM
Rather than continue what seems to have turned into a virtual blood war, I think this can become a more educational conversation. With all the development going on in GPU-based offline rendering, I'm surprised that didn't come up somewhere. There has never been a time when game rendering approached the offline CG quality of any year until now; realistically, the Crysis engine could probably render at the quality of Toy Story 1.5 in realtime, and that's saying a lot by itself.

With new renderers (like unbiased light-transport engines) and constantly revised techniques for computationally heavy features like hair rendering, there is always going to be a gap - but film isn't interactive, and only one perspective is necessary. Now may be the time for offline renderers and asset-development pipelines to pick up a few tricks from the game industry: to save time, but also to free up development time for the elements where there is simply no way around a major compute hit. Things like efficient scene lightmap baking, with a smooth transition to realtime lighting as a baked background element moves into the foreground, could be huge.

furryball
03-17-2010, 08:54 AM
Just another quality and speed test of FurryBall:
(The image was rendered in mental ray first! It was then tuned in FurryBall to look the same as the mental ray version!)
The grass was made with fur in mental ray and with FurryBall hairs in FurryBall.
(Right click - full size)
http://www.aaa-studio.cz/720_Gallery/images/exec.jpg

sebastian___
03-17-2010, 10:08 AM
Why do you want to close the thread? We are having a civilized conversation about realtime solutions and about the advantages/drawbacks of realtime rendering vs the classical offline approach. Where is the blood? There's no blood here :)

Two of the users here mentioned closing the thread, while two or three others said it should stay open, as it's interesting. Maybe it's inconvenient for someone? :)

Others should post more pictures of other realtime solutions, because so far it has been just me, in what seemed like heavy advertising of CryEngine :) But it only seemed that way - I was in fact just backing up my points with pics.

CHRiTTeR, I have studied the realtime possibilities since 3ds Max 8. I also know those links (www.bencloward.com) and ShaderFX. Some of the answers about the limitations of shaders in the Max viewport come from the programmers of ShaderFX.
I also recognize some of the pictures posted of the 3ds Max realtime viewport. They are amazing nonetheless.

furryball, you should post more pictures showcasing features like AO, soft shadows, bokeh depth of field, advanced materials like a skin shader or SSS, (single-sheet) vegetation like leaves, and so on. Or an interior rendering with clay (gray) materials for a lighting study. And it seems you will soon be competing with Autodesk, since they will release their own GPU hardware renderer.

It seems there is a market need for rendering like CryEngine's, since Autodesk is willing to release a similar solution. It remains to be seen what features it will have and at what quality.

CHRiTTeR
03-17-2010, 01:35 PM
Well, clearly those statements weren't right, because, as I showed you, motion blur is possible in the Max viewport.
ShaderFX is only for materials, so no, you can't use ShaderFX for motion blur (as far as I know) - but you can write a motion blur post shader and use that in the viewport (like I showed you). I posted an example so you could see it with your own eyes, yet here you are ignoring it as if it didn't happen, saying everyone is doing it wrong and that you know everything about it, while every statement you make clearly shows you don't.

So it looks to me like you asked the makers of ShaderFX the wrong questions. Again, this is proof you don't know what you are talking about.
You admit yourself you had to ask someone else.

Also, the claim that Toy Story can be rendered in realtime today isn't quite true.
They said the same thing 10 years ago ;)
Toy Story has much better bitmap (texture) handling and filtering, better AA filters, much heavier geometry, more complex animation, good depth of field and motion blur, etc.
This may not seem important to you, but it makes all the difference between a good-looking movie and a crappy-looking one.
But yeah, Toy Story isn't the best-looking CG - it was the first movie to be 100% CG, and it's more famous for that than for its visual quality. No one ever said it was the best-looking CG ever; there are movies from that era with much better-looking CG.

FurryBall looks nice, though, but as you can see it isn't rendered in realtime, even if it uses the magical uber-CPU-killing GPU.
Realtime at 25 fps = final renders at 0.04 seconds per frame. So it's still 525 times too slow to be considered realtime.
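Spelling out that arithmetic (plain Python, just restating the numbers in this post; the 21 seconds is the FurryBall frame time quoted here):

```python
fps_needed = 25
frame_budget = 1 / fps_needed        # 0.04 s available per frame at 25 fps
render_time = 21                     # seconds FurryBall took for one frame
slowdown = render_time * fps_needed  # equivalent to render_time / frame_budget
print(frame_budget)  # 0.04
print(slowdown)      # 525
```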

And (as far as I know) it can't do good raytraced reflections/refractions, which are needed in lots of cases. And what about GI?
21 seconds is still quite long for a single frame like that. I think you can get very close to that with the scanline renderer or RenderMan? And 28 minutes is very long to render that in mental ray. I don't know at which resolutions they were rendered, but it looks to me like you didn't do a fair comparison and set the mental ray parameters far too high.

I'm not saying FurryBall isn't nice. It looks very good, but you should make fairer comparisons and be honest about limitations. It would help a lot to avoid all the misinformation out there caused by misleading marketing. Otherwise it isn't going to help sell your product to professionals, but rather give you a bad reputation.

Also, it is not because programmers suddenly know how to code better that everything renders faster. That is 98% down to faster hardware.

If the techniques used in slow offline renderers were bad, then why is it that every time hardware gets faster, more and more techniques from movies get used in games, and not the other way around, as this thread wrongfully suggests?
GPUs are being adjusted/updated so they can do things like raytracing.
If the techniques in games were better, it would surely be the other way around.

So I'm not saying games aren't getting closer to movie quality. I agree (as I also mentioned earlier), but a lot of things you said are just wrong.

The reason they didn't use CryEngine for the realtime previews of Avatar is that it would have taken a lot of extra time to write nice realtime shaders and build models that work in a gaming/realtime engine. In the end it would be a waste of money, because the audience is not going to pay to see how pretty the previews are, only the final renders. So the previews are just good enough to do what they were designed to do -> preview the animations and camera angles.

sebastian___, you keep on ignoring the facts. Every time I and others here who know something about the subject take the time to explain things to you, you ignore them and answer with claims that aren't quite correct.
That's why people want this thread closed: because it is very annoying!!!
If you are convinced you can do better, then go write your awesome shaders, or use CryEngine or whatever you like, and show us your realtime movie with Avatar quality.
Hell, I would even be impressed if you could get close to Toy Story or Terminator 2.

furryball
03-17-2010, 03:55 PM
FurryBall looks nice, though, but as you can see it isn't rendered in realtime, even if it uses the magical uber-CPU-killing GPU.
Realtime at 25 fps = final renders at 0.04 seconds per frame. So it's still 525 times too slow to be considered realtime.

And (as far as I know) it can't do good raytraced reflections/refractions, which are needed in lots of cases. And what about GI?
21 seconds is still quite long for a single frame like that. I think you can get very close to that with the scanline renderer or RenderMan? And 28 minutes is very long to render that in mental ray. I don't know at which resolutions they were rendered, but it looks to me like you didn't do a fair comparison and set the mental ray parameters far too high.


Sorry CHRiTTeR,
but I don't understand you... :banghead::banghead::banghead:
Yes, at 2K with 4x AA, FurryBall is NOT realtime, of course! But in the Maya window, at 1 sample, it is realtime.

The resolution of the final renders was of course the same, 1200x1600 (3x AA in FurryBall).

Are you serious that you can render this image in RenderMan, at this resolution, with GI, grass, DOF, and 4-sample AA, on a 4-core CPU, "very close" to 21 seconds??? :hmm::hmm::hmm: ... :curious:

CHRiTTeR
03-17-2010, 04:42 PM
My point is that I don't understand why you are posting an image that took 21 seconds to render to show how fast your renderer is, when this topic is about why realtime rendering is so fast and production rendering is slow. Can you please explain what those images add to this topic?

FurryBall looks nice, though, but as you can see it isn't rendered in realtime, even if it uses the magical uber-CPU-killing GPU.
Realtime at 25 fps = final renders at 0.04 seconds per frame. So it's still 525 times too slow to be considered realtime.

I don't know if it can be rendered that quickly in RenderMan. Obviously I don't have the scene, and I don't even own RenderMan; that is why I put a question mark there. I was asking, hoping someone who actually knows and owns RenderMan could tell me.
But I know it can render this type of scene VERY quickly, and I wouldn't be surprised if it could do it in 21 seconds or less.

I think you can get very close to that with the scanline renderer or RenderMan?

Why must I restrict myself to rendering it on a 4-core CPU? I thought FurryBall uses the GPU, not the CPU? Yes, the CPU costs more to buy, but your electricity bill will show a big difference if you render a lot on your 'cheaper' GPU. Especially entire movies. ;)
Still, 28 minutes in mental ray seems like a lot. What settings did you use?
The shadows are better in the mental ray version (it looks like FurryBall is using AO while mental ray is using real GI), and the DOF in FurryBall looks like simple z-buffer-based image blur, while mental ray is using sampled/raytraced DOF.
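For what it's worth, the difference between the two DOF approaches can be made concrete. A z-buffer post blur typically computes a per-pixel blur radius from depth using the thin-lens circle-of-confusion formula, then image-blurs by that amount; sampled/raytraced DOF instead traces many rays through the lens aperture. A toy sketch of the first step (illustrative Python, my own simplification, not FurryBall's or mental ray's actual code):

```python
def circle_of_confusion(z, focus_dist, focal_len, aperture):
    """Thin-lens circle-of-confusion diameter for a point at depth z.
    A z-buffer DOF effect blurs each pixel by roughly this radius instead
    of tracing rays through the lens. Units are arbitrary but must be
    consistent (e.g. millimetres)."""
    return abs(aperture * focal_len * (z - focus_dist)
               / (z * (focus_dist - focal_len)))

# A point on the focus plane gets no blur; blur grows away from it
print(circle_of_confusion(1000, 1000, 50, 25))  # 0.0
print(circle_of_confusion(4000, 1000, 50, 25))  # larger, roughly 0.99
```

The z-buffer approach is fast but breaks down at depth discontinuities (foreground edges bleeding over sharp background), which is exactly the kind of artifact ray-sampled DOF avoids.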

Tryn
03-17-2010, 08:00 PM
Then by all means carry on. I'm in a different time zone from most people, so threads can seem dead to me until I check CGTalk the next morning.
But I'll bow out, because I have nothing else to add. Thanks again to everyone who shared their knowledge.

icester
03-17-2010, 09:10 PM
THE ONLY REASON I POSTED WAS TO SPAM.

sebastian___
04-14-2010, 06:23 PM
A little late with the reply, but better late than never :)

Finally I found the time to render and upload this movie.

http://www.youtube.com/watch?v=4No43o7zkDQ

http://img263.imageshack.us/img263/7875/cryengineforestanimated.jpg

http://img683.imageshack.us/img683/6633/forestanimatedbokeh.jpg

I will mention again - this is not posted here for critique or to show my work. It's only a technical showcase pertinent to the discussion.

I could have rendered the dragonfly as a separate pass and simply applied a post motion blur effect for a more accurate result.

The only things composited are a few small white particles - which were also rendered with CryEngine. Also, I didn't apply any color correction, levels, or contrast adjustments.
In ten years of work I have never had a movie that didn't need (extensive) color correction. But maybe that is a coincidence (with this scene) and not necessarily CryEngine's merit.

teruchan
04-14-2010, 09:06 PM
My goodness! You're worried about someone critiquing your work?!

Tryn
04-14-2010, 09:21 PM
Nice work Sebastian. I don't have anything else to add to the thread but I just wanted to say that.

sebastian___
04-14-2010, 09:27 PM
teruchan: I don't know what you meant by that. All I said was that this is not WORK - it's just a test of a realtime bokeh shader.

EDIT: teruchan, maybe I misunderstood your post :)

yolao
04-20-2010, 12:28 PM
Amazing work, Sebastian!

I think using a game engine for an animated project is a great and valuable choice. Checking your work in the moment has enormous value. I believe some directors have already used this as previz for their movies, and with current technology you can use it not just for previz but also for final output.

sebastian___
04-21-2010, 03:50 AM
Another good use for this would be matte painting, or even better, concept art. It's very easy and fast to sketch a plate as a base for drawing.

CGTalk Moderation
04-21-2010, 03:50 AM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.