Announcing Redshift - Biased GPU Renderer


#1

Hello folks,

Today we’re very pleased to officially announce the release of Redshift v0.1 Alpha.

Redshift is, to our knowledge, the world’s first fully GPU accelerated biased production-quality renderer.

Redshift supports multiple biased global illumination techniques: Brute-Force GI, Irradiance Cache (aka Final Gather), Irradiance Point Cloud (aka Light Cache) and Photon Mapping (GI and Caustics) - all fully GPU accelerated and performing many times faster than similar CPU-based solutions. As a biased renderer, Redshift provides you with the flexibility to tune your settings where it counts to achieve noise-free results faster when compared to unbiased renderers. People familiar with Mental Ray or VRay will feel right at home with Redshift.

//youtu.be/fjCguRdSlV0

A problem that plagues many GPU renderers on the market is that they are limited by the available VRAM on the graphics card (and most systems have significantly less VRAM than main memory). Redshift addresses this by using an out-of-core architecture for geometry and textures allowing you to render scenes with tens of millions of polygons and gigabytes of textures with off-the-shelf, inexpensive hardware.
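The out-of-core idea described above can be illustrated with a small sketch. This is not Redshift's actual architecture (which is not public), just a minimal, hypothetical LRU tile cache showing the general principle: only the texture tiles a render actually touches are kept in (simulated) VRAM, and the least-recently-used ones are evicted when the cache fills up.

```python
from collections import OrderedDict

class TileCache:
    """Minimal LRU cache for texture tiles, illustrating the general
    out-of-core idea: keep only the tiles a render actually touches in
    a fixed-size budget, and evict the least-recently-used ones.
    Purely illustrative -- not Redshift's implementation."""

    def __init__(self, capacity_tiles, load_tile):
        self.capacity = capacity_tiles
        self.load_tile = load_tile      # callback that fetches a tile from disk
        self.cache = OrderedDict()      # (texture, mip, tile) -> pixel data
        self.misses = 0

    def fetch(self, texture, mip, tile):
        key = (texture, mip, tile)
        if key in self.cache:
            self.cache.move_to_end(key)     # mark as recently used
            return self.cache[key]
        self.misses += 1
        data = self.load_tile(*key)         # "disk" read on a cache miss
        self.cache[key] = data
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)  # evict least-recently-used tile
        return data
```

Because a renderer tends to revisit the same tiles for neighbouring pixels, most fetches are cache hits, which is why a small resident cache can serve scenes whose total texture data vastly exceeds VRAM.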

Redshift currently integrates directly with Softimage 2011 through 2013 and Maya 2011 through 2013 on Windows XP or higher. 3ds Max support is in development. To run Redshift, you’ll need an NVidia graphics card supporting compute capability 1.2 or higher with 1GB VRAM or more.

You can check out our website http://www.redshift3d.com for more information.

We’re starting small and looking for interested alpha testers. If you’d like to take Redshift for a spin, visit http://www.redshift3d.com/get-redshift for information on submitting a request for alpha access.
Our goals for alpha are to shake out bugs prior to releasing to a broader audience and to gather feedback from users to help focus our development efforts.

Feature Summary
[ul]
[li]Point-based sub-surface scattering
[/li][li]Camera and object motion blur (deformation blur coming soon)
[/li][li]Instances and proxies
[/li][li]Flexible node-based shader system
[/li][li]Physically correct shaders, IES lights, physical sun & sky and physical camera
[/li][li]High quality elliptical texture filtering
[/li][/ul]

You can find a complete feature list on our website.

Sample Renders (click images for higher resolution versions)


Scene courtesy of Jeff Patton.


#2

While your FAQ understandably states “Pricing details have not yet been finalized”,
do you perhaps have some kind of “ballpark figure” at this point?
:wink:


#3

Very interesting. Any chance to implement it to use AMD hardware as well?


#4

@Hirazi - Unfortunately, I can’t provide any solid pricing info yet, but we’d like to keep the price accessible to everyone so you can expect it to be priced competitively (and likely cheaper) compared to the other renderers. We’re also considering a couple of pricing tiers, but that’s still all TBD.

@davius - Redshift currently only supports NVidia hardware (since it uses CUDA) but we do plan to eventually support OpenCL and hence AMD hardware.


#5

Any chance we could see the render times on some of those test renders? I’m curious what sort of speed boost you can achieve doing these techniques on a GPU, since this is the first of its kind that I’ve seen.


#6

Yes, render times would be great to know. Given the speed of GPU path tracing, I would assume that irradiance caching would be blazingly fast.


#7

According to a post on the Softimage mailing list this render took 2 minutes on a single GTX 470.


#8

On the website it claims support for out-of-core textures and geometry. If that’s true and doesn’t come with a severe performance penalty, then that’s the most impressive accomplishment to me!


#9

Sorry for the delay in responding. We’ve had quite a few requests for alpha so we’ve been busy fielding those. On the plus side, we got a chance to get some times on the GTX Titan as well for comparison.

Here are the render times for the screenshots posted (the higher res ones not the embedded ones). The machine used for these tests was a Core i7 950 (3.07 Ghz) with 8GB RAM.

Gargoyle 1280x720 (jp_studio_icp_1280.png)
GTX 470: 35 seconds
GTX 670: 27 seconds
GTX Titan: 17 seconds

Car 1024x683 (mazda_1024.png)
GTX 470: 75 seconds
GTX 670: 65 seconds
GTX Titan: 39 seconds

Evermotion Living Room 1200x1000 (AI_V8_S10_1200.png)
GTX 470: 155 seconds
GTX 670: 123 seconds
GTX Titan: 77 seconds

Keep in mind that we’re just starting alpha and we still have many more opportunities for optimizations to improve on these numbers.


#10

In a sea of unbiased gpu renderers, this is a refreshing approach :thumbsup:

The render times look good given it’s still in alpha stage.
Of course lots of questions pop into one’s mind, like how performance scales with two graphics cards; support/performance with DOF, motion blur, displacement; GI quality and consistency in animations, etc… Will keep an eye on the progress of this renderer for sure.


#11

Hi Stew!

Regarding your question about out-of-core performance…

Not going to lie about it: there can be a performance penalty with geometry if it’s 100% visible and it’s many times larger than what we can fit in VRAM. There are potential solutions to this which we’ll be attempting in the next few months. We’ll keep you posted! :slight_smile:

Textures, on the other hand, work really great out-of-core because of tiling and mip-mapping. We have rendered scenes with 100s of megabytes or even gigabytes of textures while only using something like 30-60MB of texture cache memory!
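Mip-mapping is the reason the texture cache can stay so small: the renderer only ever loads the mip level whose resolution roughly matches a texture’s on-screen footprint. A hypothetical sketch of that level selection (not Redshift’s actual code; `texels_per_pixel` would come from ray differentials in a real renderer):

```python
import math

def mip_level(texture_res, texels_per_pixel):
    """Pick the mip level whose resolution roughly matches the on-screen
    footprint: each level halves the resolution, so a footprint of 2^n
    source texels per pixel maps to level n. Distant or small objects land
    on tiny mip levels, which is why an out-of-core cache can serve
    gigabytes of source textures from a few dozen MB of resident tiles.
    Illustrative only."""
    max_level = int(math.log2(texture_res))        # coarsest level is 1x1
    level = math.log2(max(texels_per_pixel, 1.0))  # footprint -> level
    return min(max(int(level), 0), max_level)
```

For example, a 4096-pixel-wide texture viewed close up (1 texel per pixel) reads from level 0, while the same texture far in the distance (64 texels per pixel) only needs level 6, whose tiles are a tiny fraction of the full-resolution data.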

Please let us know if you have any questions/comments!

-Panos


#12

Yeah, there are a ton of GPU path tracers out there now, so it is exciting to see a development like this :cool: Would be great to have something where you are not always battling with noise levels, as you do with the unbiased renderers.

I couldn’t see any animation samples on your site, so was wondering how suitable it is for animation. What sort of GI sampling modes are suitable for object animation in Redshift? Is an IR/LC combo the most suitable, as with Vray, or would it require a brute-force approach?

As I mainly use Cinema 4D, I am hoping there are plans for supporting it as well…is this on the cards?


#13

Are Displacement maps supported?

That one feature would truly make it stand out from the rest.


#14

I just took a look at your website and indeed you are planning on this and other features like Deformation Motion Blur, Displacement, Hair, Particles, Volumetrics, Render Passes, Multi-GPU Support, Network Rendering, Ptex Support.
http://www.redshift3d.com/products/coming-soon

Although I guess it will be years till we see all or most of these implemented.


#15

Definitely piques my interest. I use vrayRT a bit, but as an animator it is still missing a lot of things. If you can get through your “coming soon” list quickly, this should be a great product!


#16

Hello Zendorf

Regarding your question about animation samples, we are working on adding more examples and maybe videos showing our renderer in action. We also hope that some of you guys can give us an artistic hand and deliver us from “programmer art”! :wink:

We have rendered tons of animations here with the irradiance cache and the irradiance point cloud enabled. We have found that GI animation artifacts (flickering or ‘crawling’ splotches) can typically be eliminated by increasing the amount of IC rays / IPC samples. We have compared our results with other renderers (and with equivalent settings) and we believe that we are at the very least competitive in terms of quality for both static scenes and animations.

Of course, if you don’t want to have to experiment with point-based GI settings, brute force is always an option. Brute-force can be combined with the irradiance point cloud which makes multiple GI bounces render much faster and with much less noise!

Regarding Cinema4D support: we are definitely open to it - especially if we hear lots of people asking for it! After 3DSMax support is online, we’ll be evaluating support for all other DCCs. An API is also in the works so that 3rd parties can hopefully help us with plugin creation.

-Panos


#17

Hello GQ1!

We hope to have most of the listed features online in the next few months. Since you mentioned displacement maps on your original post: we already have tessellation/subDs (tris/quads) working so displacement mapping (which is a natural extension to tessellation) is coming real soon! Also deformation motion blur is currently being implemented.

Depending on user feedback we’ll prioritize the remaining features.

Thanks!

-Panos


#18

Displacement and deformation motion blur would be on the top of my list.


#19

Hi guys,

My new CGTalk account seems to have misplaced a couple of posts I made earlier today. They might show up later once they go through CGTalk’s approval process but, in case they don’t, I will re-iterate them here…

Regarding out-of-core performance:

Won’t lie about it: there can be a performance penalty with out-of-core geometry if it’s all visible and its size exceeds the available VRAM by a large factor. There are solutions for improving this which we’ll be trying in the next few months. BTW, if you are curious about how many triangles we can fit in VRAM, a very rough ballpark figure is 10m triangles in 600MB of VRAM. We are working on improving this number further.
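As a quick sanity check of that ballpark figure, 10 million triangles in 600MB works out to roughly 63 bytes per triangle. The actual memory layout isn’t public; the breakdown in the comment below is only a plausible guess.

```python
# Sanity check of the quoted ballpark: 10 million triangles in 600 MB
# works out to ~63 bytes per triangle. Shared vertex data plus
# acceleration-structure (BVH) overhead could plausibly land in that
# range, but the real layout is not public -- this is just arithmetic.
triangles = 10_000_000
vram_bytes = 600 * 1024 * 1024
bytes_per_triangle = vram_bytes / triangles
print(round(bytes_per_triangle, 1))
```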

While out-of-core geometry performance might suffer in some cases, out-of-core texturing performance works really great!! We have rendered scenes with hundreds of megabytes of texture data (and in a few cases gigabytes) and it consumed very little VRAM (30-60MB). So if you like super high resolution textures, you are in luck! :slight_smile:

Regarding animation in GI:

We have rendered tons of animations here with the irradiance cache and irradiance point cloud enabled. We have found that any GI animation artifacts (such as flickering or crawling splotches) go away once the quality settings are high enough, i.e. enough IC rays and irradiance point cloud samples. “Irradiance point cloud”, btw, is our equivalent to VRay’s “Light Cache”. It doesn’t work exactly the same but it serves a very similar purpose (to get multiple GI bounces fast and clean).

We’ve done comparison renders with other renderers (with equivalent quality settings) and we’ve found our results to be at least comparable and, often, superior.

Having said all of that, some people simply don’t enjoy tweaking GI settings. If that’s the case, there’s always the brute-force fallback which, once combined with the irradiance point cloud (which has very few settings to tweak) can give you fast and clean multiple GI bounces!

Regarding Cinema4D support:

Once 3DSMax support is online we’ll look to all other DCCs. If we get enough interest for Cinema4D, Sketchup, Lightwave, Blender or any other DCC we’ll obviously do it! We are also working on an API so that 3rd parties can help us out with creating plugins for other DCCs.

Please let us know if you have any questions or comments!

-Panos


#20

This looks mighty impressive and I’d be interested in a Cinema4D version :beer: