OpenGL - GLSL vs. CG


I’m starting to dabble in OpenGL a little bit, and the examples I’ve found that make use of GLSL have kind of popped out at me. What surprises me about this is that I had always been under the impression that OpenGL shaders were typically written in nVidia’s Cg language - or at least everything I’ve gone through in the past that has referenced hardware shaders, such as MR and its hardware rendering feature, has alluded to the use of Cg.
So, what’s the deal? I’ve noticed that Cg is supposed to work with both OpenGL and DirectX, so I’m guessing that’s the greatest reason for its continued existence. I’d tend to wonder, though, with Cg being an nVidia thing, how well it works on ATI cards (if that even matters…).
I have found posts suggesting that HLSL and Cg are more or less the same - syntax and all. But at the same time, I’ve found plenty of Google links suggesting otherwise. So, forgive me if this seems like one of those threads where the poster obviously hasn’t touched the search feature; I’m just looking for a good, solid answer.


Cg and HLSL were developed in parallel as a partnership between nVidia and MS. Their syntax and semantics were originally identical (not sure if that’s still the case). HLSL was developed for DirectX, and Cg was originally nVidia’s “proposal” for the OpenGL shading language. The OpenGL ARB (of which MS was once a member, but not for a few years now) rejected Cg in favor of the GLSL proposal from ATi/3DLabs.

Since nVidia put all this effort into making this language and didn’t want to see it wasted, they turned Cg into a shader meta-language. This means that shaders can be written in Cg and then “compiled” into another shader language. This, IMHO, was a smart move on nVidia’s part, since it made Cg compatible not only with GLSL-capable cards, but also with older cards that may not support GLSL but do support ARB vertex and fragment programs. (These are two OpenGL extensions released a couple of years before GLSL that allowed shaders to be written in a pseudo-assembly language. Not the best looking code, but the best we had for shaders at that time.)
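To make that concrete, here’s a deliberately trivial Cg vertex shader (the names are illustrative, not from any particular project) and a sketch of how nVidia’s offline compiler targets different back ends from the same source:

```cg
// Minimal Cg vertex shader: transforms the position and passes the color through.
// nVidia's cgc compiler can target different profiles from this one source, e.g.:
//   cgc -profile arbvp1 minimal.cg   (ARB_vertex_program pseudo-assembly)
//   cgc -profile glslv  minimal.cg   (a GLSL vertex shader)
void main(float4 position : POSITION,
          float4 color    : COLOR,

          out float4 oPosition : POSITION,
          out float4 oColor    : COLOR,

          uniform float4x4 modelViewProj)
{
    oPosition = mul(modelViewProj, position);
    oColor    = color;
}
```

The profile names above (arbvp1, glslv) are the ones the Cg toolkit uses for the ARB vertex program and GLSL vertex targets; other profiles exist for fragment shaders and for newer hardware generations.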

Cg code compiled into GLSL will theoretically run on any card that supports GLSL. In practice this is not the case. The nVidia Cg compiler optimizes its output for nVidia cards, which sometimes means more, but simpler, instructions in the code. The limit on the number of instructions allowed in a GLSL shader varies from card to card, but lately nVidia cards have been able to support many more instructions per shader than ATi cards.


As stated, Cg should work on any card that supports the appropriate vertex and fragment profiles (which should basically be all modern cards).

The reason I think you see Cg more in the literature is a) because it’s older than GLSL, and b) because it’s a much better language than GLSL, supporting language features like interfaces, which are very cool.


Hey, great explanation.
So, Cg does also compile into HLSL, right?

Their syntax and semantics originally (not sure if its still the case)

I guess it’s changed at least a little bit then. From what I’ve seen, they do look very similar, but the methods for declaring shaders and returning their output do at least look rather different. Well, as long as the main guts are the same, I guess that’s all that matters.
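For what it’s worth, the declaration difference is easy to see: Cg/HLSL tag function parameters with semantics like POSITION and COLOR, while old-style (pre-1.30) GLSL exposes inputs and outputs as built-in globals. A trivial GLSL vertex shader looks like this (the uniform name is made up):

```glsl
// Old-style GLSL vertex shader: inputs (gl_Vertex, gl_Color) and outputs
// (gl_Position, gl_FrontColor) are built-in globals rather than function
// parameters with semantics as in Cg/HLSL.
uniform mat4 modelViewProj;   // hypothetical uniform name

void main()
{
    gl_Position   = modelViewProj * gl_Vertex;
    gl_FrontColor = gl_Color;
}
```

The body of the shader - the actual math - ends up looking much the same in both languages; it’s mostly the plumbing around it that differs.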

Just one more thing: one of the tools that I’ve installed for working with OpenGL/DirectX, ATI’s Ashli viewer, compiles RenderMan SL shaders into both DirectX and OpenGL. Is this actually done normally, or is it just one of those nice little tools that is only ever really used for experimenting?


Then why did the OpenGL community reject Cg in favour of GLSL?



Be careful with programming in Cg. I’ve been working with it with good success in the past month, but there is one thing you should be aware of: Cg shaders are not directly compatible for compiling to both HLSL and ARB targets. Among other differences, OpenGL (and the ARB programs) conventionally treats vectors as column vectors and uses a right-handed coordinate system, while HLSL (and DirectX) treats them as row vectors and uses a left-handed one. When you transform vectors, the parameter order in “mul()” will be reversed.
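A rough sketch of what that looks like in practice (the matrix and vector names here are made up):

```cg
// Transforming a vertex position under each convention. Note the matrix
// data you upload must match the vector convention you pick.
float4 posGL  = mul(modelViewProj, position);  // OpenGL style: column vector on the right
float4 posD3D = mul(position, modelViewProj);  // Direct3D/HLSL style: row vector on the left
```

Mathematically, mul(v, M) with a row vector gives the same result as mul(transpose(M), v) with a column vector, which is why porting between the two usually means either swapping the mul() arguments or transposing the matrices you upload.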



May help.

May have been a situation where the ARB still felt burned by MS leaving.


Apart from that, there are certain intrinsics that Cg comes with that HLSL doesn’t (didn’t) have.

But either way you write it, it is easy to go back and forth.


On a similar note, what is the difference between Cg and CgFX? Is FX an nVidia-specific extension, or an updated version of the language? Or the nVidia implementation of the Cg language? Or none of the above??



Cg is nVidia-specific. It just happens that it can be ‘compiled’ (or profiled, if you prefer that term) into platform-independent shader languages.


I was just wondering if the “FX” part was relevant, or if it’s just there to say you can only run it on FX or greater cards?

Edit: Never mind - I RTFM. So FX is a wrapper around Cg files, setting up the environment, right? A bit like Slim (kind of…) does for RSL?


