
View Full Version : error C2065: 'GL_TEXTURE_3D'


jchat
02-13-2007, 05:18 PM
When I compile this code I get: error C2065: 'GL_TEXTURE_3D' : undeclared identifier.

What's wrong with this code?

#include <GL/glut.h>
#include <stdlib.h>
#include <stdio.h>

#define iWidth 16
#define iHeight 16
#define iDepth 16

static GLubyte image[iDepth][iHeight][iWidth][3];
static GLuint texName;

/* Create a 16x16x16x3 array with different color values in
 * each array element [r, g, b]. Values range from 0 to 255. */
void makeImage(void)
{
   int s, t, r;
   for (s = 0; s < 16; s++)
      for (t = 0; t < 16; t++)
         for (r = 0; r < 16; r++) {
            image[r][t][s][0] = (GLubyte) (s * 17);
            image[r][t][s][1] = (GLubyte) (t * 17);
            image[r][t][s][2] = (GLubyte) (r * 17);
         }
}

void init(void)
{
   glClearColor(0.0, 0.0, 0.0, 0.0);
   glShadeModel(GL_FLAT);
   glEnable(GL_DEPTH_TEST);
   makeImage();
   glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
   glGenTextures(1, &texName);
   glBindTexture(GL_TEXTURE_3D, texName);
   glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_S, GL_CLAMP);
   glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_T, GL_CLAMP);
   glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_WRAP_R, GL_CLAMP);
   glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
   glTexParameteri(GL_TEXTURE_3D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
   glTexImage3D(GL_TEXTURE_3D, 0, GL_RGB, iWidth, iHeight, iDepth, 0,
                GL_RGB, GL_UNSIGNED_BYTE, image);
   glEnable(GL_TEXTURE_3D);
}

void display(void)
{
   glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
   glBegin(GL_QUADS); {
      glTexCoord3f(0.0, 0.0, 0.0); glVertex3f(-2.25, -1.0, 0.0);
      glTexCoord3f(0.0, 1.0, 0.0); glVertex3f(-2.25, 1.0, 0.0);
      glTexCoord3f(1.0, 1.0, 1.0); glVertex3f(-0.25, 1.0, 0.0);
      glTexCoord3f(1.0, 0.0, 1.0); glVertex3f(-0.25, -1.0, 0.0);
      glTexCoord3f(0.0, 0.0, 1.0); glVertex3f(0.25, -1.0, 0.0);
      glTexCoord3f(0.0, 1.0, 1.0); glVertex3f(0.25, 1.0, 0.0);
      glTexCoord3f(1.0, 1.0, 0.0); glVertex3f(2.25, 1.0, 0.0);
      glTexCoord3f(1.0, 0.0, 0.0); glVertex3f(2.25, -1.0, 0.0);
   } glEnd();
   glFlush();
}

void reshape(int w, int h)
{
   glViewport(0, 0, (GLsizei) w, (GLsizei) h);
   glMatrixMode(GL_PROJECTION);
   glLoadIdentity();
   gluPerspective(60.0, (GLfloat) w / (GLfloat) h, 1.0, 30.0);
   glMatrixMode(GL_MODELVIEW);
   glLoadIdentity();
   glTranslatef(0.0, 0.0, -4.0);
}

void keyboard(unsigned char key, int x, int y)
{
   switch (key) {
   case 27:
      exit(0);
      break;
   }
}

int main(int argc, char** argv)
{
   glutInit(&argc, argv);
   glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB | GLUT_DEPTH);
   glutInitWindowSize(250, 250);
   glutInitWindowPosition(100, 100);
   glutCreateWindow(argv[0]);
   init();
   glutReshapeFunc(reshape);
   glutDisplayFunc(display);
   glutKeyboardFunc(keyboard);
   glutMainLoop();
   return 0;
}

UrbanFuturistic
02-13-2007, 05:46 PM
#include <GL/gl.h>

:thumbsup:

Ian Jones
02-14-2007, 12:33 AM
I'm surprised it got that far into the code before throwing the first error. Or are all the others supported by GLUT? I doubt it...

Ian Jones
02-14-2007, 12:58 AM
A couple of things I've noticed. I'm new to C++ and OpenGL, but I thought I'd mention them.

1) Are you aware that you are drawing in a clockwise winding direction? This means the textures will appear flipped, because the quads are facing away from the viewer.

2) You don't need { and } to surround the block of calls in between glBegin() and glEnd(). It seems to compile fine, but it looks like poor formatting IMO... but to each his own. A common practice is to indent the calls inside the block instead of using { and }. You may already have that, but you should place your code inside the [CODE] tags in these forums, otherwise formatting is lost.

3) Calls to glFlush() are not necessary. It does in fact require the GPU to complete the requests before continuing and flush the command buffer, but it also requires the CPU and GPU to synchronise, which is a performance problem. The GPU will manage fine without it, and in the long run the speed increase might become important for a large application. You always want to keep the CPU and GPU running as asynchronously as possible, with as few calls as possible.

jchat
02-14-2007, 03:24 AM
I added #include <GL/gl.h>, but I still get the same error.

UrbanFuturistic
02-14-2007, 11:21 AM
Yes, sorry about that. It appears this is yet another 'extension' that Microsoft's libs don't support, as it's post-OpenGL 1.1.

GLEW (http://glew.sourceforge.net/) is an extension library that should handle your extension issues (I haven't tested this yet).

Failing that, there's information on GPWiki (http://gpwiki.org/index.php/OpenGL:Tutorials:3D_Textures) that should be relevant to your problem and Moving Beyond OpenGL 1.1 for Windows (http://www.gamedev.net/reference/programming/features/oglext/default.asp) on gamedev.net (http://gamedev.net/) has a more thorough overview of how to implement OpenGL extensions.
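For what it's worth, the GLEW route boils down to something like this (a sketch only, untested here; it assumes GLEW is installed and linked, and GL/glew.h must be included before the GL/GLUT headers):

```c
#include <GL/glew.h>   /* must come before GL/gl.h and GL/glut.h */
#include <GL/glut.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_SINGLE | GLUT_RGB | GLUT_DEPTH);
    glutCreateWindow("3D texture test");

    /* glewInit needs a current GL context, so call it after the window exists */
    if (glewInit() != GLEW_OK) {
        fprintf(stderr, "glewInit failed\n");
        return 1;
    }
    if (!GLEW_VERSION_1_2) {
        fprintf(stderr, "OpenGL 1.2 (GL_TEXTURE_3D / glTexImage3D) not available\n");
        return 1;
    }
    /* from here on, GL_TEXTURE_3D and glTexImage3D can be used
       exactly as in the original code */
    return 0;
}
```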

Say what you like about Linux, at least I don't have to put up with this kind of crap on it.

Other notes: as I understand it, glFlush is required since it's what actually pushes the image to the screen, although you should probably look up GLUT_DOUBLE and glutSwapBuffers (double buffering).

Take out the {} between glBegin(); and glEnd(); as they're unnecessary and it's best not to have anything unnecessary in your code because you never know when some esoteric compiler bug's going to throw a wobbler just because something it should allow was merely unexpected.

Leave in #include <GL/gl.h> as it's still necessary and make sure it's above #include <GL/glut.h>.
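On the double-buffering point, the change to the original program is small. A sketch (untested; GLUT_DOUBLE and glutSwapBuffers are the actual GLUT names):

```c
#include <GL/glut.h>

static void display(void)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* ... draw the quads here ... */
    glutSwapBuffers();   /* replaces glFlush in the single-buffered version */
}

int main(int argc, char **argv)
{
    glutInit(&argc, argv);
    /* GLUT_DOUBLE instead of GLUT_SINGLE requests a back buffer */
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH);
    glutInitWindowSize(250, 250);
    glutCreateWindow("double buffered");
    glutDisplayFunc(display);
    glutMainLoop();
    return 0;
}
```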

Ian Jones
02-14-2007, 01:16 PM
You are correct, I confused glFlush() with glFinish().

nurcc
02-14-2007, 05:29 PM
This might not be it, but I've had issues in the past with glut on windows, where I need to

#include <windows.h>

before including glut.

Robert Bateman
02-15-2007, 03:14 PM
This might not be it, but I've had issues in the past with glut on windows....

It's not it. 3D textures are an OpenGL extension (post-1.1), so the definitions are found in GL/glext.h rather than GL/gl.h.

Bear in mind though that 3D textures are horrifically inefficient, so even though they may be available, you probably don't want to actually use them.
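For completeness, the manual (Windows-only) alternative to GLEW looks roughly like this. The PFNGLTEXIMAGE3DPROC typedef comes from GL/glext.h, and the pointer has to be fetched while a GL context is current (a sketch, untested):

```c
#include <windows.h>     /* must come before GL/gl.h on Windows */
#include <GL/gl.h>
#include <GL/glext.h>
#include <stdio.h>

static PFNGLTEXIMAGE3DPROC pglTexImage3D = NULL;

/* Call once, after a GL context has been made current
   (e.g. after glutCreateWindow). Returns 1 on success. */
static int loadTexImage3D(void)
{
    pglTexImage3D = (PFNGLTEXIMAGE3DPROC) wglGetProcAddress("glTexImage3D");
    if (pglTexImage3D == NULL) {
        fprintf(stderr, "glTexImage3D not supported by this driver\n");
        return 0;
    }
    return 1;
}

/* afterwards, call pglTexImage3D(...) wherever the original code
   called glTexImage3D(...) */
```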

Robert Bateman
02-15-2007, 04:50 PM
1) Are you aware that you are drawing in a clockwise winding direction? This means the textures will appear flipped, because the quads are facing away from the viewer.

That's up to you. It all depends on which way you decide to cull the faces, and which way you specify the normals. It's all relative. You can use CW or CCW, GL's nice enough to let you choose. By default it's CCW.

2) You don't need { and } to surround the block of calls in between glBegin() and glEnd(). It seems to compile fine, but it looks like poor formatting IMO... but to each his own. A common practice is to indent the calls inside the block instead of using { and }.

{} is not a terribly bad thing. It's extremely useful in strict C code when doing things like setting up lights etc. (all variables have to be declared at the beginning of a scope, so you might as well scope things with {} to get around that limitation and keep the variables local to the code that uses them).

3) Calls to glFlush() are not necessary. It does in fact require the GPU to complete the requests before continuing and flush the command buffer, but it also requires the CPU and GPU to synchronise, which is a performance problem. The GPU will manage fine without it. You always want to keep the CPU and GPU running as asynchronously as possible, with as few calls as possible.

Calls to glFlush *are* required. Just because you do not need them on Windows does not mean you can completely ignore them; other OSes require them. If you are using double buffering with GLUT, then glutSwapBuffers actually issues a call to glFlush internally (thus you do not need to call it yourself). There is no such thing as an optimisation in the way you describe: the CPU and GPU run asynchronously all the time. At some point, though, you do have to sync the two processors so that the CPU does not start submitting a frame before the GPU has finished drawing the last one. Hence glFlush / swap buffers or whatever else you need.

Ian Jones
02-16-2007, 03:37 AM
Thx for the info.


Calls to glFlush *are* required. Just because you do not need them on Windows does not mean you can completely ignore them; other OSes require them. If you are using double buffering with GLUT, then glutSwapBuffers actually issues a call to glFlush internally (thus you do not need to call it yourself). There is no such thing as an optimisation in the way you describe: the CPU and GPU run asynchronously all the time. At some point, though, you do have to sync the two processors so that the CPU does not start submitting a frame before the GPU has finished drawing the last one. Hence glFlush / swap buffers or whatever else you need.

I confused glFlush() with glFinish(). I use glFlush() at the end of my own rendering loops (using SDL). Everywhere I read about OpenGL it talks about working as asynchronously as possible:

" The CPU and GPU are both powerful processors which, with proper care, can be used completely asynchronously. In OpenGL, there are several obvious and other not-so-obvious ways to cause either the CPU or GPU to stall on the other."

http://developer.apple.com/graphicsimaging/opengl/optimizingdata.html

Now this may be a mac specific thing, but I doubt it. What's your opinion of this?

CGTalk Moderation
02-16-2007, 03:37 AM
This thread has been automatically closed as it remained inactive for 12 months. If you wish to continue the discussion, please create a new thread in the appropriate forum.