Announcing Redshift - Biased GPU Renderer

  03 March 2013

Hello folks,

Today we're very pleased to officially announce the release of Redshift v0.1 Alpha.

Redshift is, to our knowledge, the world's first fully GPU accelerated biased production-quality renderer.

Redshift supports multiple biased global illumination techniques: Brute-Force GI, Irradiance Cache (aka Final Gather), Irradiance Point Cloud (aka Light Cache) and Photon Mapping (GI and Caustics) - all fully GPU accelerated and performing many times faster than similar CPU-based solutions. As a biased renderer, Redshift provides you with the flexibility to tune your settings where it counts to achieve noise-free results faster when compared to unbiased renderers. People familiar with Mental Ray or VRay will feel right at home with Redshift.
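For those wondering what the irradiance caching idea boils down to, here's a toy CPU-side sketch of the classic Ward-style scheme (this is purely illustrative, not Redshift code, and the `alpha` tolerance and weight formula are simplified assumptions): expensive hemisphere samples are cached and interpolated for nearby shading points with similar normals, which is where the bias-for-speed trade-off comes from.

```python
# Toy sketch of irradiance caching (Ward-style interpolation).
# NOT Redshift code - a simplified illustration of the biased-GI idea.
import math

class IrradianceCache:
    def __init__(self, alpha=0.3):
        self.alpha = alpha          # error tolerance: larger -> more reuse, more bias
        self.samples = []           # list of (position, normal, radius, irradiance)

    def _weight(self, p, n, sample):
        sp, sn, radius, _ = sample
        d = math.dist(p, sp)                                  # positional distance
        dot = max(0.0, min(1.0, sum(a * b for a, b in zip(n, sn))))
        denom = d / radius + math.sqrt(max(0.0, 1.0 - dot))   # position + normal error
        return float("inf") if denom == 0.0 else 1.0 / denom

    def lookup(self, p, n, compute_irradiance, radius=1.0):
        total_w, total_e = 0.0, 0.0
        for s in self.samples:
            w = self._weight(p, n, s)
            if w > 1.0 / self.alpha:       # sample is close enough to reuse
                total_w += w
                total_e += w * s[3]
        if total_w > 0.0:
            return total_e / total_w       # cheap: interpolate cached samples
        e = compute_irradiance(p, n)       # expensive: trace a new hemisphere sample
        self.samples.append((p, n, radius, e))
        return e
```

In a real renderer the expensive path traces hundreds of rays per sample, so reusing cached entries for nearby points is what makes the technique so much faster than brute-force GI.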

A problem that plagues many GPU renderers on the market is that they are limited by the available VRAM on the graphics card (and most systems have significantly less VRAM than main memory). Redshift addresses this by using an out-of-core architecture for geometry and textures allowing you to render scenes with tens of millions of polygons and gigabytes of textures with off-the-shelf, inexpensive hardware.

Redshift currently integrates directly with Softimage 2011 through 2013 and Maya 2011 through 2013 on Windows XP or higher. 3ds Max support is in development. To run Redshift, you'll need an NVidia graphics card supporting compute capability 1.2 or higher with 1GB VRAM or more.

You can check out our website for more information.

We're starting small and looking for interested alpha testers. If you'd like to take Redshift for a spin, visit our website for information on submitting a request for alpha access.
Our goals for alpha are to shake out bugs prior to releasing to a broader audience and to gather feedback from users to help focus our development efforts.

Feature Summary
  • Point-based sub-surface scattering
  • Camera and object motion blur (deformation blur coming soon)
  • Instances and proxies
  • Flexible node-based shader system
  • Physically correct shaders, IES lights, physical sun & sky and physical camera
  • High quality elliptical texture filtering

You can find a complete feature list on our website.

Sample Renders

Scene courtesy of Jeff Patton.

  03 March 2013
While your FAQ understandably states "Pricing details have not yet been finalized",
do you perhaps have some kind of "ballpark figure" at this point?
  03 March 2013
Very interesting. Any chance to implement it to use AMD hardware as well?
  03 March 2013
@Hirazi - Unfortunately, I can't provide any solid pricing info yet, but we'd like to keep the price accessible to everyone so you can expect it to be priced competitively (and likely cheaper) compared to the other renderers. We're also considering a couple of pricing tiers, but that's still all TBD.

@davius - Redshift currently only supports NVidia hardware (since it uses CUDA) but we do plan to eventually support OpenCL and hence AMD hardware.
  03 March 2013
Any chance we could see the render times on some of those test renders? I'm curious what sort of speed boosts you can achieve doing these techniques on a GPU, since this is the first of its kind that I've seen.
  03 March 2013
Yes, render times would be great to know. Given the speed of GPU path tracing, I would assume that irradiance caching would be blazingly fast.
  03 March 2013
According to a post on the Softimage mailing list this render took 2 minutes on a single GTX 470.
  03 March 2013
The website claims support for out-of-core textures and geometry. If that's true and doesn't come with a severe performance penalty, then that's the most impressive accomplishment to me!
  03 March 2013
Sorry for the delay in responding. We've had quite a few requests for alpha so we've been busy fielding those. On the plus side, we got a chance to get some times on the GTX Titan as well for comparison.

Here are the render times for the screenshots posted (the higher res ones, not the embedded ones). The machine used for these tests was a Core i7 950 (3.07 GHz) with 8GB RAM.

Gargoyle 1280x720 (jp_studio_icp_1280.png)
GTX 470: 35 seconds
GTX 670: 27 seconds
GTX Titan: 17 seconds

Car 1024x683 (mazda_1024.png)
GTX 470: 75 seconds
GTX 670: 65 seconds
GTX Titan: 39 seconds

Evermotion Living Room 1200x1000 (AI_V8_S10_1200.png)
GTX 470: 155 seconds
GTX 670: 123 seconds
GTX Titan: 77 seconds

Keep in mind that we're just starting alpha and we still have many more opportunities for optimizations to improve on these numbers.
  03 March 2013
In a sea of unbiased GPU renderers, this is a refreshing approach.

The render times look good given it's still in alpha stage.
Of course lots of questions come to mind, like how performance scales with two graphics cards; support/performance with DOF, motion blur, displacement; GI quality and consistency in animations, etc. Will keep an eye on the progress of this renderer for sure.
  03 March 2013
Hi Stew!

Regarding your question about out-of-core performance...

Not going to lie about it: there *can* be a performance penalty with geometry if it's 100% visible and it's many times larger than what we can fit in VRAM. There are potential solutions to this which we'll be attempting in the next few months. We'll keep you posted!

Textures, on the other hand, work really great out-of-core because of tiling and mip-mapping. We have rendered scenes with 100s of megabytes or even gigabytes of textures while only using something like 30-60MB of texture cache memory!
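To illustrate the general idea (a toy sketch only, not our actual implementation, and the tile-key layout and LRU policy here are assumptions for the example): mip-mapped textures are split into fixed-size tiles, only the tiles that shading actually touches get loaded, and an eviction policy keeps the cache within a small fixed budget regardless of how big the textures on disk are.

```python
# Toy sketch of an out-of-core texture tile cache with LRU eviction.
# NOT Redshift code - just the general tiling/mip-mapping concept.
from collections import OrderedDict

class TileCache:
    def __init__(self, load_tile, max_tiles=64):
        self.load_tile = load_tile      # callback that reads one tile from disk
        self.max_tiles = max_tiles      # cache budget, independent of texture size
        self.cache = OrderedDict()      # (texture, mip, tx, ty) -> pixel data

    def fetch(self, texture, mip, tx, ty):
        key = (texture, mip, tx, ty)
        if key in self.cache:
            self.cache.move_to_end(key)     # hit: mark as most-recently-used
            return self.cache[key]
        tile = self.load_tile(*key)         # miss: pull the tile into the cache
        self.cache[key] = tile
        if len(self.cache) > self.max_tiles:
            self.cache.popitem(last=False)  # evict the least-recently-used tile
        return tile
```

Because shading at a given distance only needs one mip level's worth of tiles, the working set stays tiny even for gigabytes of source textures, which is why the cache can sit in the tens of megabytes.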

Please let us know if you have any questions/comments!

  03 March 2013
Yeah, there are a ton of GPU path tracers out there now, so it is exciting to see a development like this. It would be great to have something where you are not always battling with noise levels, as you do with the unbiased renderers.

I couldn't see any animation samples on your site, so I was wondering how suitable it is for animation. What sort of GI sampling modes are suitable for object animation in Redshift? Is an IR/LC combo the most suitable, as with Vray, or would it require a brute force approach?

As I mainly use Cinema 4D, I am hoping support for it is on the cards too?
  03 March 2013
Are Displacement maps supported?

That one feature would truly make it stand out from the rest.

  03 March 2013
I just took a look at your website and indeed you are planning on this and other features like Deformation Motion Blur, Displacement, Hair, Particles, Volumetrics, Render Passes, Multi-GPU Support, Network Rendering, and Ptex Support.

Although I guess it will be years till we see all or most of these implemented.

  03 March 2013
Definitely piques my interest. I use vrayRT a bit, but as an animator it is still missing a lot of things. If you can get through your "coming soon" list quickly, this should be a great product!