Announcing Redshift - Biased GPU Renderer


Thanks for the info Panos, you should put some of those animation renders up on your website with render times. This would definitely generate more interest than just stills.

It is a pity that Cinema4D is often given such a low priority among developers (i.e. supported some time after the Autodesk apps), considering how large its market share now is. Outside the film/vfx market there are almost certainly more seats of C4D out there than Softimage, for instance.
Many broadcast and motion graphics studios would kill for a renderer like this if it is as fast as it seems, but in the meantime Vray and Octane will be the go-to third-party renderers.

Will keep my eye on this product though as it is very promising :cool:


there are almost certainly more seats of C4D out there than Softimage, for instance

While that is definitely true, remember that C4D does not come bundled with mental ray as its default renderer. And while that may not mean anything to you, for people using XSI in 2013 it means a ton of headaches. And while it’s true that we now have Arnold and Vray available for it, you have to remember one is prohibitively expensive to say the least, and the other is more than 1k/license. So for a freelancer or small studio, a new GPU renderer fully integrated with the platform would be more than welcome.
Also, C4D’s internal renderer, afaik, is pretty damn good, and most of the stuff that gets integrated into the package is rock solid, like the Maxwell integration for example, which is miles ahead of the one in XSI (and that’s another choice of renderer).
To be honest, most C4D users I know have no complaints about rendering choices.
All in all, if these guys want to sell licenses, I’m sure there are plenty of Softimage freelancers sick and tired of mental ray who can’t access things like Arnold and can’t afford things like Vray.


This looks really nice, i’d love to give it a try.

But what about the ‘production stuff’, like passes (lighting & secondaries), mattes, IDs and so on?


It is my impression that developers who don’t deliver Mac OS X solutions as well as Windows ones won’t be easily approached by Maxon. C4D has more than 40% Mac OS X users (if my memory serves me right).
I’d love for other developers to consider Mac users a bit more, just like Maxon does.
If Redshift came out for Mac today, I’d buy it in a heartbeat, like I bought Vray, Maxwellrender and Thea Render. I don’t even use them much (except Vray).
But of course that’s just my opinion, and I am a bit jealous for not being able to buy it.

Other than that, Redshift looks like an amazing product. Great job people, I tip my hat.


back on topic…

I usually have over 8GB of textures in RAM when I render (extremely detailed packaging).
Would Redshift be able to handle this amount of textures?



This is looking incredibly interesting. Wow… if it can handle big textures and displacement, that’s a big thing. I also raise my voice for a Cinema 4D bridge. I’d buy it in one second.
I would also like to see some animation examples. I could not find more examples than the ones you posted. Is there no gallery or such?


Is Redshift the GPU renderer that was shown at SIGGRAPH rendering
a multi-million-polygon scene with an airplane in it?


I would also love to see Cinema 4D support, and I would buy it immediately when it is out (if it is ever released for C4D).

The Cinema 4D user base is growing very fast (and it is already quite big), and Cinema 4D has almost everything (sculpting, modelling, texturing, animation tools, etc.) except good rendering. C4D’s own render engine is very slow and doesn’t offer realistic results with good speed.

The only good option for C4D is Vray, but it lags behind the original Vray renderer by a lot (we have a 1.2.5… version while there is already 2.3 for Maya? Or do I remember right?). Motion blur support is also bad in the current version, but the current Vray is still the best C4D has.

So I would love to see another rendering option for C4D. Other Vray and C4D users have also spoken about searching for other rendering solutions for their needs, so I think C4D would be a good choice for expanding Redshift’s support.


Add my vote for C4D support as well.



add my vote for C4D as well :smiley:


looks very nice!

@jumamu: Vray’s version numbering on C4D is different than on Maya. The 1.2.6 version that is available uses the Vray 2.35 core and already implements many of the 2.3 features; some still-missing features are coming in the next update (see the C4D Vray forum). It also has full motion blur despite the lack of motion blur support in the C4D SDK.
Congratulations to the Redshift team anyway, looks very interesting.


Thanks for the words of encouragement guys, we really appreciate it!

Regarding Cinema4D support: point well taken! :slight_smile:

Regarding noseman’s question about 8GB texture scenes: Yeah, I believe your scenes should work fine. Redshift does a background conversion on all scene textures when it first encounters them (and caches the result to disk so you don’t have to wait over and over again). This includes tiling and making mip-maps for them. From that point onwards, it loads only the tiles of the textures it needs and only at the resolutions it needs. This ensures that closeup detail is as high as possible (full-rez) while distant detail is properly (via elliptical filtering) softened and does not alias/flicker.
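The lazy, tiled, mip-mapped lookup Panos describes can be sketched roughly like this (a toy model with invented names; Redshift’s actual cache is internal and far more sophisticated):

```python
import math

class TiledTextureCache:
    """Toy model of a tile-based, mip-mapped texture cache:
    tiles are loaded lazily, only at the mip level a lookup needs."""

    def __init__(self, width, height, tile_size=64):
        self.tile_size = tile_size
        # mip chain down to 1x1
        self.levels = int(math.log2(max(width, height))) + 1
        self.sizes = [(max(1, width >> l), max(1, height >> l))
                      for l in range(self.levels)]
        self.tiles = {}  # (level, tx, ty) -> tile data, filled on demand

    def _load_tile(self, level, tx, ty):
        # Stand-in for reading one tile of the pre-converted texture
        # from the on-disk cache; here we just record that it loaded.
        return f"tile(level={level}, x={tx}, y={ty})"

    def sample(self, u, v, screen_coverage):
        # Pick the mip level whose texel density matches how much of
        # the screen the texture covers (coarser level when distant).
        level = min(self.levels - 1,
                    max(0, int(-math.log2(max(screen_coverage, 1e-6)))))
        w, h = self.sizes[level]
        tx = int(u * w) // self.tile_size
        ty = int(v * h) // self.tile_size
        key = (level, tx, ty)
        if key not in self.tiles:      # lazy load: only needed tiles
            self.tiles[key] = self._load_tile(*key)
        return self.tiles[key]

cache = TiledTextureCache(8192, 8192)
cache.sample(0.5, 0.5, 1.0)      # close-up  -> finest level (0)
cache.sample(0.5, 0.5, 1 / 256)  # distant   -> coarse mip level
print(len(cache.tiles))          # -> 2: only touched tiles are resident
```

The point of the sketch is simply that memory use scales with the tiles a frame actually touches, not with the total size of the source textures.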

Regarding passes: support for these will be added in the next couple of months. We fully appreciate that this is a super-important feature for production people given the usual post work.

Finally, regarding animations and more pictures: yep, we know we are extremely content-light at the moment. We’ll probably create a movie or two showing the product in action later today. Also, some people had questions about the quality of the irradiance cache in animations, so we might make some short movies on that too.




This is looking very interesting indeed!

How does it deal with rendering at very high resolution? Does it store the frame buffer in RAM instead of VRAM? And what about distributed rendering?


Hi CaptainObvious,

The rendering is partitioned in tiles, similar to other biased renderers. So the entire frame buffer does not have to be resident in VRAM. We’ve tested Redshift with 8K renders and it worked fine.

Having said that, the space needed for the point-based techniques does have some relevance to the framebuffer size, so their sizes could grow (talking about the irradiance cache, SSS, etc). But unless the scene is insanely highly detailed (has extremely few flat areas), the growth is sub-linear. Meaning: if you need X megabytes for the irradiance cache at some resolution and then grow that resolution 4-fold, the irradiance cache size (and computation time) most likely won’t grow 4-fold.

Also, with very high resolutions, the texture cache will be utilized more since everything will require more detail.
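A minimal sketch of the bucketed approach described above, assuming a simple row-major tile traversal (the function names and the all-ones "render kernel" are illustrative, not Redshift’s actual code):

```python
def render_in_tiles(width, height, tile=128, render_tile=None):
    """Render a frame bucket by bucket: only one tile-sized buffer
    ever needs to be resident on the GPU; results land in host RAM."""
    framebuffer = [[0.0] * width for _ in range(height)]  # host-side
    for ty in range(0, height, tile):
        for tx in range(0, width, tile):
            tw, th = min(tile, width - tx), min(tile, height - ty)
            # render_tile stands in for the GPU kernel; it returns a
            # th x tw block of pixel values for this bucket only
            block = (render_tile(tx, ty, tw, th) if render_tile
                     else [[1.0] * tw for _ in range(th)])
            for y in range(th):
                framebuffer[ty + y][tx:tx + tw] = block[y]
    return framebuffer

# Even an 8K frame only ever needs tile-sized VRAM buffers;
# a small demo frame keeps this sketch quick to run:
fb = render_in_tiles(512, 384, tile=128)
```

The design point is the same one made above: VRAM holds one bucket at a time, so frame resolution is limited by host RAM, not video memory.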

Finally, network rendering is not currently supported but will be added in a few months.

Hope this helps



This looks very interesting, are there plans for a standalone renderer?


We do have an (in-house) scene exporter and a standalone executable that can load the exported scene file and render it. Unfortunately, though, it’s not fully featured yet and customized primarily for our own testing/profiling purposes.

We are currently prioritizing development effort on features. Once the list of missing features shrinks down a bit, we’ll look into these kinds of tools/APIs.




By network rendering you mean distributed, right? Is it possible to use several GPUs from different machines?

Another question: how much of a role does the CPU play? Is the CPU also used when calculating the irradiance cache, for example? Or is only the GPU used?


Hello HolgerBiebrach,

Yep, by “network rendering” I meant “distributed rendering”! :). Multi-GPU support is also something that will be coming up in the next few months.

The CPU plays a relatively small role in Redshift. We do use it for certain things, like building the ray tracing acceleration structure. Also, during irradiance caching and other point-based techniques, it helps out with rebalancing some data structures on the video card which, in turn, helps GPU performance.

We haven’t (so far) seen a situation where CPU performance dominated the final rendering time. That could happen if the scene contained many tens of millions of triangles but used no GI, reflections/refractions or area lights. In that (unlikely!) case, the GPU would be done almost instantly but would first have to wait for the CPU to ‘feed it’ the triangles.

On a similar note, extracting several million triangles from XSI and Maya can take a bit of CPU time. For this reason we use caches so that only geometry that was modified by you is re-extracted. This way you don’t have to wait seconds each time you move the camera on a high-poly scene. Our Redshift proxies also help a lot with that (they eliminate that extraction time).
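The extraction cache idea, re-pulling geometry from the host app only when an object was actually edited, could look something like this (hypothetical names and a fabricated edit stamp; the real plugin works against the XSI/Maya SDKs):

```python
class GeometryExtractionCache:
    """Re-extract mesh data from the host app only when the object
    actually changed; camera moves hit the cache and cost nothing."""

    def __init__(self):
        self._cache = {}      # object name -> (edit_stamp, triangles)
        self.extractions = 0  # how many real extractions happened

    def _extract_from_host(self, obj):
        # Stand-in for the expensive SDK call that pulls triangles
        # out of XSI/Maya; here it just fabricates a triangle list.
        self.extractions += 1
        return [f"{obj['name']}_tri_{i}" for i in range(obj['tri_count'])]

    def get_triangles(self, obj):
        stamp = obj['edit_stamp']  # bumped by the host on any edit
        cached = self._cache.get(obj['name'])
        if cached is not None and cached[0] == stamp:
            return cached[1]       # unchanged -> no re-extraction
        tris = self._extract_from_host(obj)
        self._cache[obj['name']] = (stamp, tris)
        return tris

cache = GeometryExtractionCache()
mesh = {'name': 'hero', 'edit_stamp': 1, 'tri_count': 3}
cache.get_triangles(mesh)   # first render: extract
cache.get_triangles(mesh)   # camera moved only: cache hit
mesh['edit_stamp'] = 2      # user edited the mesh
cache.get_triangles(mesh)   # re-extract just this object
print(cache.extractions)    # -> 2
```

This is why moving the camera on a high-poly scene stays interactive: only edited objects pay the extraction cost again.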



Thank you for all that good and detailed explanations. Really appreciate it.
This sounds all very promising…really hope for OSX and CINEMA 4D support. :argh:


So how about advanced texturing? Is it possible to blend together multiple different textures in a single material channel and such? A major limitation with most of the crop of GPU renderers right now is that each material channel only accepts one image map. My texturing workflow usually involves loads of different textures blended together with other textures as masks.