Is the Future of Rendering Game Engines? Using Unreal/Unity as a Primary DCC Tool

  03 March 2015
We've been playing with UE4 for a few months now and there's enormous potential there! The workflow from 3D app to UE is the real sticking point. As mentioned, though, Blueprints could be built to largely optimize each artist's pipeline to their specific needs. Also, the ability to create interaction, plus the use of VR headgear etc., provides many more opportunities than simply using it as a faster way to spit out frames. Until we get 32-bit and true multi-pass output from Matinee, post work is a bit challenging. That said, you can do a ton of real-time post-processing work right within UE4.
__________________
 
  03 March 2015
Originally Posted by zpanzer: That's pretty cool. I've been playing around with the engine myself and I'm wondering how you get such clean reflections? I always get very dithered/noisy reflections that look like they don't have enough samples or something. I've tried placing a post-process volume and enabling screen-space reflections, but it doesn't give me a nicer result.


You can set the SSR quality to 100 in the post-process settings, and then use the console command r.ssr.quality = 4, which will increase the quality further.
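
For anyone who would rather script this than type it into the console every time: the same cvar can also be set from C++ inside a UE4 project. This is just a minimal sketch, assuming UE4's standard console-variable API (HAL/IConsoleManager.h); dropping the line into ConsoleVariables.ini works too.

Code:
// Minimal sketch: raising screen-space reflection quality from C++ in a UE4 project.
// Equivalent to setting r.SSR.Quality to 4 from the console.
#include "HAL/IConsoleManager.h"

void SetHighQualitySSR()
{
    if (IConsoleVariable* SSRQuality =
            IConsoleManager::Get().FindConsoleVariable(TEXT("r.SSR.Quality")))
    {
        // 4 is the highest level this cvar exposes in UE4.
        SSRQuality->Set(4, ECVF_SetByCode);
    }
}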
__________________
The Z-Axis
 
  03 March 2015
We used a modified CryEngine to render 128 shots for Maze Runner, as well as other movies coming this year that I unfortunately can't name yet.
We use the engine on set, for previz and for final render. The company name is The Box Creative LLC and we're based in Venice, California.

Used Unreal as well, but we hit too many roadblocks after a while.

Our pipeline is designed entirely around using a game engine for rendering and we've been doing it for three years now (we presented it to a small professional group at SIGGRAPH 2011). This includes multiple "pre-rendered" game cinematics as well as movies and TV shows. It's all script-based and pretty automatic now, and it makes no difference to us whether we use Cry, Unreal or V-Ray; it's all the same. It's just that the game engine is much, much faster, especially on Maze Runner, where we needed to render at 6K resolution.

Haven't had time to do a BTS or making-of yet, but it is in the works. We have a pretty heavy movie title coming this year and we've been quite busy; after that, maybe.

I'm quite surprised to still see articles about this, though; or maybe it's just that the articles talk about "future" possibilities while many in the industry have been using it for years. I still believe we were the first to use it to render and do VFX for a live-action AAA movie, but I might be wrong.

Again: it is being used RIGHT NOW on multiple movies. And it has been for years.

I'm actually quite happy to see that people didn't even think it could be a game engine: they watch movies and TV shows rendered with a game engine without even seeing the difference.

Last edited by SuperXCM : 03 March 2015 at 08:50 PM.
 
  03 March 2015
Originally Posted by SuperXCM: Haven't had time to do a BTS or making-of yet, but it is in the works. We have a pretty heavy movie title coming this year and we've been quite busy; after that, maybe.


I can't wait for the making-of!

Last year on my country's 3D forum I was almost killed by old-timers saying that it can't be used in production: because Pixar doesn't use it, it cannot be used.

It's funny to see how many people just can't believe how fast it's progressing.

Originally Posted by SuperXCM: Used Unreal as well, but we hit too many roadblocks after a while.


What roadblocks? Was it the lack of dynamic GI? I suppose it was more likely a lack of support for some file formats, import and output?

BTW, did you use CryEngine or CineBox?
__________________
"If it's not real-time, it's a piece of shit, not state-of-the-art technology" - me

magic happens here... sometimes
go nodeway

Last edited by mantragora : 03 March 2015 at 09:31 PM.
 
  03 March 2015
Originally Posted by CGIPadawan: Actually, Falk answered it quite well by pointing out to me that UE4's lighting can occur in three modes, ranging from static to fully dynamic.

In addition, yes, there is a push for more GPUs/parallel GPUs, but for Unreal 4 it is not yet supported.

https://answers.unrealengine.com/qu...upport-sli.html


Actually, the fact that it's not SLI-friendly doesn't mean parallel processing is not supported.
Unreal's and the other game engines' rendering is done on the GPU, and is thus parallel-processed. The more CUDA cores your graphics card has, the faster you'll be pulling out frames.
NVIDIA's SLI and ATI's CrossFire are just technologies to distribute rendering across multiple video cards, but with today's best cards you can have 3,000-plus CUDA cores in a single GPU. That's why its computational power is called parallel processing.
Parallel processing isn't a multi-GPU/multi-card setup per se, but it can span multiple GPUs. NVIDIA already has a cloud solution called GRID, with thousands of GPU servers, which can do a whole lot of stuff besides rendering graphics. Shazam, the popular music-finder app, runs its searches on those servers...

Each CUDA core is a processor, a tiny one, not at all like a bulky i7 core; parallel processing is the power of harvesting computation from all these tiny processors.



So, just to clarify: all the technological advances we are seeing in game engines (Unreal, Unity, Frostbite, Snowdrop, CryEngine, etc.) and in real-time rendering plugins and standalones for 3D software (OTOY, V-Ray RT, FurryBall, etc.) are possible thanks to NVIDIA's research and development of new, very efficient chips composed of hundreds or thousands of tiny cores that are becoming more and more programmable. Logical processors like the i7, Xeon, etc. don't do parallel processing; thus, if you disable your video card drivers, what you'll get is a machine without parallel processing... the old "software render" stuttering... can't even open a browser...
 
  03 March 2015
Sorry, I went a little off topic, but my point is that we should understand that behind game engine technology there's another layer. All these technical achievements are possible today because there's a silent revolution behind them: the revolution of GPU computing. Every time you upgrade or buy a new video card you are sponsoring this evolution, so keep it up!
 
  03 March 2015
We are getting close and the hardware is definitely becoming more powerful, but I believe for general-purpose 3D applications the problem is a good, robust and consistent algorithm for GI and ray tracing that works on a GPGPU fabric. Right now most 3D packages' rendering routines use CPU-based algorithms. The voxel-based GI solution that NVIDIA is promoting with the Kite demo and last year's Apollo 11 demo probably won't cut it in a 3D DCC suite, except maybe for previews. Add to that the fact that these apps will be rendering larger and larger frames for 4K and beyond, and you can see that the algorithms have to be built for different concerns, mainly consistency and accuracy before speed. So it is still a trade-off until someone comes up with the killer app of a Monte Carlo solution that runs on a hybrid CPU/GPU architecture, and that probably won't happen until the dust clears in the general-purpose CPU/GPU race between Intel, NVIDIA and AMD. Right now that GPGPU fabric isn't there just yet; it is coming, but it isn't necessarily clear what the hardware architecture will look like (note the whole Larrabee fiasco).

On the other side of the house, one of the bigger changes coming is in how the OS and low-level graphics APIs use the CPU and GPU hardware. Windows 10 with DirectX 12 is being built to take more advantage of hybrid computing within the graphics subsystem (which started with Vista) on the OS side, and 3D DCC apps are following suit as a result. Along with that you have the new Vulkan API, OpenGL's successor, which does similar things. Once we get software that is written to use heterogeneous hardware in a more collaborative manner, things will definitely begin to speed up. The point being that right now it is really hard to share large data sets between GPU and CPU for collaborative rendering in a general-purpose context using existing APIs, as opposed to custom-built proprietary solutions.

http://arstechnica.com/gadgets/2015...modern-systems/

http://www.winbeta.org/news/amd-dir...-multi-core-cpu

http://www.anandtech.com/show/8217/...cessor-detailed

http://dl.acm.org/citation.cfm?id=2523756
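
To make the consistency/accuracy point a bit more concrete, here is a tiny toy illustration (not tied to any particular renderer, just the textbook behaviour) of the Monte Carlo estimators that offline GI and ray tracing rely on: the estimate converges as samples accumulate, with noise falling off roughly as 1/sqrt(N), which is exactly the cost a real-time engine can't afford to pay every frame.

Code:
// Toy Monte Carlo estimator: integrate cos(theta) over the hemisphere
// (exact answer = pi) with uniform hemisphere sampling. The point is the
// convergence behaviour: error shrinks roughly as 1/sqrt(N), which is why
// offline renderers trade speed for sample counts.
#include <cmath>
#include <cstdio>
#include <random>

int main()
{
    const double kPi = 3.14159265358979323846;
    std::mt19937 rng(42);
    std::uniform_real_distribution<double> uniform(0.0, 1.0);

    for (int n : {16, 256, 4096, 65536})
    {
        double sum = 0.0;
        for (int i = 0; i < n; ++i)
        {
            // Uniform hemisphere sampling: cos(theta) is uniform in [0,1];
            // the azimuth doesn't matter for this integrand. pdf = 1/(2*pi).
            double cosTheta = uniform(rng);
            sum += cosTheta * 2.0 * kPi;
        }
        double estimate = sum / n;
        std::printf("%6d samples: estimate %.4f (error %.4f)\n",
                    n, estimate, std::fabs(estimate - kPi));
    }
    return 0;
}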
__________________
Well now, THATS a good way to put it!

Last edited by DotPainter : 03 March 2015 at 10:36 AM.
 
  03 March 2015
@OlavoEkman: Appreciate the extra information.

But my point is simple: I can only hear one of my two GT 650Ms running in UE4. :P There was also that official thread where Epic Games discouraged people from using SLI or Crossfire.

There's also these:

https://forums.geforce.com/default/...nreal-engine-4/

https://answers.unrealengine.com/qu...li-support.html

So yeah, it's probably in there somewhere/someday... It's just not working right now, nor is it currently encouraged by Epic Games for end users.
__________________
"Your most creative work is pre-production, once the film is in production, demands on time force you to produce rather than create."
My ArtStation
 
  03 March 2015
Originally Posted by SuperXCM: We used a modified CryEngine to render 128 shots for Maze Runner, as well as other movies coming this year that I unfortunately can't name yet. [...] Again: it is being used RIGHT NOW on multiple movies. And it has been for years.


Big eye opener! Thank you for sharing!
__________________
"Your most creative work is pre-production, once the film is in production, demands on time force you to produce rather than create."
My ArtStation
 
  03 March 2015
Originally Posted by SuperXCM: We used a modified CryEngine to render 128 shots for Maze Runner

Hi XCM, thank you for sharing this with us. Very exciting to know. May I ask which CryEngine you guys modified? Is it the CryEngine Cinematic, or is it the one for games?
 
  03 March 2015
Anything related to GPU rendering right now has to deal with memory limitations, which means in many cases you are dealing with some level of shortcuts. When you use game engines that is even more true, as opposed to direct GPU APIs like CUDA. That said, given the right amount of shortcuts you will be able to generate photoreal renders, just not necessarily using large textures, GI or ray tracing just yet.

BOXX Tech has some interesting articles on this:
Quote: Memory is the primary limitation with GPUs at the moment. However, with each generation of cards that becomes less of an issue. The K6000s and K40s with 12 GB of memory entered the realm of what high-quality production demands. Even still, I frequently encounter scene files using 36+ GB of CPU RAM, so I realized that adopting a 'rationing' mentality was necessary to make CONSTRUCT a viable GPU-rendered project. Reducing the unique geometry footprint by using instancing as much as possible helped. The house under construction is essentially thousands of instanced boxes in the form of repeating wood planks. Unique grass patches and trees were kept to a minimum, again instancing those as much as possible. Relying on shader-based color variations was extremely useful in creating visual complexity with minimal assets. The robots again were all the same instanced geometry with shader color variations. With those geometry-efficient approaches in place, next came texture maps. These by far proved the largest memory consumer. Very quickly a scene full of a few dozen 2K or 4K texture maps would demand 6-10 GB of memory. This was where the most focused optimizations needed to occur. Our character modelers implemented pixel-saving strategies like storing multiple data channels (reflect, gloss, bump) inside the three RGB color channels, to be accessed individually in the shader for their respective data usage. We avoided storing color info in diffuse maps; instead, grayscale maps were used to mix various colors together. Collapsing complex shader trees to single bitmaps per component was also needed at times. These are all tricks game engines have used for years.

Lastly, I had various resolutions for every bitmap in the scene, from 256² up to 4K. For each shot I started with a low-end resolution like 256 or 512, did some test renders with DOF and motion blur enabled, and evaluated which scene elements needed higher resolutions, increasing those accordingly. In the end, most scenes hovered between 4 and 6 GB of memory, with the most complex scenes touching 7 or 8 GB. Which actually was quite comfortable, since there's always a little more overhead on the main card handling Windows/app operations. My K6000 had this responsibility, so it was always about 1 GB higher than the two compute-only K40s.



http://blog.boxxtech.com/2014/04/07...motion-builder/
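
The channel-packing trick described in the quote above (reflect, gloss and bump stored in the R, G and B channels of one map) is easy to picture in code. A minimal sketch using a made-up 8-bit image struct, not any particular engine's texture type:

Code:
// Minimal sketch of channel packing: three grayscale maps (reflectivity, gloss,
// bump) packed into the R, G and B channels of a single RGB texture, to be read
// back individually in the shader. The Image type is a stand-in, not a real API.
#include <cstdint>
#include <vector>

struct Image
{
    int width = 0, height = 0;
    std::vector<uint8_t> pixels; // interleaved RGB, 3 bytes per pixel
};

// Pack three single-channel maps of identical dimensions into one RGB image.
Image PackChannels(const std::vector<uint8_t>& reflect,
                   const std::vector<uint8_t>& gloss,
                   const std::vector<uint8_t>& bump,
                   int width, int height)
{
    Image packed;
    packed.width = width;
    packed.height = height;
    const size_t pixelCount = static_cast<size_t>(width) * height;
    packed.pixels.resize(pixelCount * 3);

    for (size_t i = 0; i < pixelCount; ++i)
    {
        packed.pixels[i * 3 + 0] = reflect[i]; // R = reflectivity mask
        packed.pixels[i * 3 + 1] = gloss[i];   // G = gloss/roughness
        packed.pixels[i * 3 + 2] = bump[i];    // B = bump height
    }
    return packed;
}

One texture fetch in the shader then feeds three material inputs, which is where the memory saving comes from.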

And the next generation architecture for NVIDIA is going to address the memory issue:
Quote: While PCIe is fast, this bandwidth doesn't measure up when compared to the speed at which the CPU can access memory, and it can become even further limited if systems use a PCIe switch, as can happen in multi-GPU systems. NVLink is a new high-speed interconnect for CPU-to-GPU and GPU-to-GPU communication, intended to address this problem. Co-developed with IBM, NVLink can hit between 80 and 200 GB/sec of bandwidth, providing very fast access to system memory. It is also more energy-efficient at moving data than PCIe.

Stacked memory allows much more memory to be on board and accessible to the GPU. While the bandwidth of the memory on a GPU is greater than that on the CPU, there simply isn't enough memory on the GPU for many of the tasks needed in VFX and post. NVLink notwithstanding, moving data from the GPU to the memory chips on the card is inefficient, as the card itself has speed limitations due to its size, and the move actually takes (relatively) considerable power.

In Pascal, the solution is to move the memory and stack multiple memory chips on top of the GPU on a silicon substrate (a slice of silicon). They then cut through the memory, directly connecting it to the GPU. This solves having to get memory traffic off the actual GPU chip and onto the surrounding board. The new architecture has three times the memory bandwidth of Maxwell, which should be close to hitting about 1 TB/sec. That's welcome news for our industry.


http://www.fxguide.com/quicktakes/n...5-day-1-report/

But even with that memory limitation, the trend is definitely towards GPUs, and by extension game engines, generating production-quality renders, which in optimal cases are already approaching real time.
__________________
Well now, THATS a good way to put it!
 
  04 April 2015
Unreal Engine Livestream - Sr. VFX Artist Bill Kladis Joins to Talk Particles! - Live

If you've been using the Unreal Engine for any sort of VFX work, you're probably familiar with ImbueFX, Bill Kladis' site for learning resources dedicated to the dark art of VFX / particle systems in UE. Follow along as Bill does a quick live tutorial on properly using refraction with particles, demonstrates some of the new features in Cascade being used in the new UT release (blueprints, lit particles, etc) and demos Jon Lindquist's new animated particle system being released with 4.8.

https://t.co/JUd1PlCNW6
__________________
LW FREE MODELS:FOR REAL Home Anatomy Thread
FXWARS
:Daily Sketch Forum:HCR Modeling
This message does not reflect the opinions of the US Government

 
  04 April 2015
Originally Posted by DotPainter: Anything related to GPU rendering right now has to deal with memory limitations, which means in many cases you are dealing with some level of shortcuts. [...] But even with that memory limitation, the trend is definitely towards GPUs, and by extension game engines, generating production-quality renders, which in optimal cases are already approaching real time.


Redshift3D has out-of-core memory access, which removes the hard memory cap and allows system RAM usage. I believe Octane is also working on allowing textures to use system RAM as well.
__________________
www.matinai.com
 
  04 April 2015
For archviz and advertising, for instance, it could well be so, but I expect ray tracers to keep some competitive advantage over game engines for specialised work and accurate GI effects.
__________________
www.yafaray.org
Free rays for the masses
 
  04 April 2015
Originally Posted by OlavoEkman: NVIDIA's SLI and ATI's CrossFire are just technologies to distribute rendering across multiple video cards, but with today's best cards you can have 3,000-plus CUDA cores in a single GPU.

Not really. SLI and CrossFire are synchronization platforms that let two cards keep memory in sync and communicate across an added bus. That is different from using both cards for processing, which doesn't require SLI; in fact, GPU rendering engines (e.g. Redshift) can work across multiple cards and maintain their own data synchronicity regardless of SLI.

Quote: Each CUDA core is a processor, a tiny one, not at all like a bulky i7 core; parallel processing is the power of harvesting computation from all these tiny processors.

That is both incorrect and reductive, and generally buying too much into NVIDIA's marketing.
i7 CPUs are also capable of parallel processing, and not just across cores. You have multiple pipelines that can run in parallel with a one-stage stagger across them, you have SIMD-style instructions for the vectorizable parts (going all the way back to SSE), you have parallelism across cores, and you have on-board graphics units capable of quite a few added vector ops across dozens to hundreds of pipes.

Your exposition of CPU architecture is simplified to the point of being downright wrong, I'm afraid.

Quote: So, just to clarify: all the technological advances we are seeing in game engines (Unreal, Unity, Frostbite, Snowdrop, CryEngine, etc.) and in real-time rendering plugins and standalones for 3D software (OTOY, V-Ray RT, FurryBall, etc.) are possible thanks to NVIDIA's research and development of new, very efficient chips composed of hundreds or thousands of tiny cores that are becoming more and more programmable. Logical processors like the i7, Xeon, etc. don't do parallel processing; thus, if you disable your video card drivers, what you'll get is a machine without parallel processing... the old "software render" stuttering... can't even open a browser...

See above: this is pretty wrong. You treat many tiny specialized units as if they were the only form of parallelism, but that's not how it works.
There is also hardware convergence happening, with Intel offering more and more massively parallel hardware both on die and across the bus, and NVIDIA bum-rushing the notion of having an on-board CPU-like, long-pipe, non-specialized unit (Maxwell's biggest step forward is the fact that it has an ARM-compliant unit on board).

Again, convergence and heterogeneous resources are key here, and both Intel and NVIDIA understand that; they have taken different routes to the same ultimate goal for different sectors and use cases.

Saying GPUs are limited to many tiny cores is fundamentally incorrect; modern GPUs have already transcended that, and more is being done to transcend it further. On the other side, saying i7 CPUs aren't capable of parallel processing is asinine in the extreme when P3s already had SSE and P4s had multiple manipulable staged pipes, let alone modern CPUs with 48 virtual cores, hundreds of total pipes amounting to versatile computation across all those units, on-board Iris chips with their own specialized cores, and all that.

You might want to buy into marketing a bit less and write a bit more SIMD-friendly code on both Intel and CUDA before giving such a peremptory view of matters of parallelism.
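
For anyone wondering what "SIMD-friendly code" on the CPU side even looks like, here is a trivial sketch using SSE intrinsics (around since the Pentium III mentioned above). It processes four floats per instruction on a single core, before any multi-core or GPU parallelism enters the picture; purely an illustration, not production code.

Code:
// Trivial CPU-side SIMD illustration: out[i] = a[i] * b[i] + c[i], four floats
// per instruction using SSE intrinsics. One of several layers of CPU parallelism
// (pipelines, SIMD lanes, cores) discussed above.
#include <cstdio>
#include <xmmintrin.h> // SSE

// n is assumed to be a multiple of 4 for brevity.
void MulAddSSE(const float* a, const float* b, const float* c, float* out, int n)
{
    for (int i = 0; i < n; i += 4)
    {
        __m128 va = _mm_loadu_ps(a + i);
        __m128 vb = _mm_loadu_ps(b + i);
        __m128 vc = _mm_loadu_ps(c + i);
        _mm_storeu_ps(out + i, _mm_add_ps(_mm_mul_ps(va, vb), vc));
    }
}

int main()
{
    float a[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    float b[8] = {2, 2, 2, 2, 2, 2, 2, 2};
    float c[8] = {1, 1, 1, 1, 1, 1, 1, 1};
    float out[8];
    MulAddSSE(a, b, c, out, 8);
    for (float v : out)
        std::printf("%.0f ", v); // prints: 3 5 7 9 11 13 15 17
    std::printf("\n");
    return 0;
}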
__________________
Come, Join the Cult http://www.cultofrig.com - Rigging from First Principles
 