Haha, you know, when I saw the int variable = 0.0 it raised a mental red flag, which is a good thing I guess. I was thinking it should be a double, but float makes more sense. Hey mummey, nice code! Let's turn this thread into putting together a small program; that would be pretty awesome.
Alright, I was just trying to give him a perspective on how to start off. I'd rather use structs to handle small datatypes like camera movements, texturing coordinates, and the local/world coordinates that get attached to every point/poly and so on, but hey, who am I.
Btw, I'm using C# as the language.
Here's the rest of what I meant!
If you started out with some primitives…
public struct Vector3 {
    public float X;   // struct fields default to 0, so no initializers needed
    public float Y;
    public float Z;
}
public struct Attitude {
    public float Pitch;
    public float Yaw;
    public float Roll;
}
Finally, define a vertex…
public struct Vertex {
    public Vector3 Point;
    public Vector3 Normal;
    public Color Diffuse;
    public Color Specular;
    public float TU;
    public float TV;
}
So now every time you address a vertex, the coordinate structs are already in place…
Just for those who are interested, since I work with C#…
cheers
I have to say I'm much more fond of mummey's class than of that shrunk-down struct, no offense meant.
Also, look at the class and the way it structures data: it's much more flexible than a straight declaration, and could be extended straight away (with very simple additions) to quadrivectors, matrices, etc., retaining a consistency of style across the whole math library.
As for defining a vertex so extensively, I wouldn't know… it's a bit cumbersome, defining all that data at once for every vertex.
E.g. an explicit normal like that is very inconvenient: what if the model needs to deform? Are you going to rewrite the whole vertex for that? If you really want to incorporate the normal into the vertex definition, at least handle it by reference, so that another routine can take care of normal computation and the vertices will always fetch the right normals even when those are modified.
Also, are there any particular advantages to defining the texture coordinates directly in the vertex?
I can see that going wrong on a number of occasions.
To avoid data redundancy you won't define each single tri/poly by repeating overlapping vertices every time; you'd be better off with a lookup into a point cloud, where shared vertices take only one slot, and a second lookup that binds texture-space samples to vertices. That way you declare the UVs in the sample data and look them up, rather than straight in the 3D-space vertices.
This kind of handling also lets you tell whether you need to shade continuously just by looking at the point cloud (overlapping double entries mean the points aren't merged).
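To make that concrete, here's a rough sketch of the kind of lookup structure I mean (the names are mine, nothing standard):

#include <vector>

struct Vec3 { float x, y, z; };
struct UV   { float u, v; };

// Faces index into the point cloud and the UV sample table
// separately: shared points take one slot, and texture seams
// don't force duplicate 3D vertices.
struct Face
{
    int point[3];    // indices into Mesh::points
    int sample[3];   // indices into Mesh::uvs
};

struct Mesh
{
    std::vector<Vec3> points;    // the point cloud, one slot per unique point
    std::vector<Vec3> normals;   // recomputed on deform; faces just look them up
    std::vector<UV>   uvs;       // texture-space samples
    std::vector<Face> faces;
};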
Dunno, maybe I'm thinking too much in terms of plugins and development for 3D apps, while you think in different terms that come from a different school of thought.
Where I come from you don't always need or use all that info about vertices, so this style (including its baked normals) doesn't make much sense to me.
I'm in agreement. Extra time spent on what will define the program structure can prevent a lot of redundant programming.
class Vector3 {
public:
    // Member initializer lists can't target array elements, so assign in the body.
    Vector3(float a, float b, float c) { vec[0] = a; vec[1] = b; vec[2] = c; }
    Vector3() { vec[0] = 0; vec[1] = 0; vec[2] = 0; }
    float x() const { return vec[0]; }
    float y() const { return vec[1]; }
    float z() const { return vec[2]; }
    void set(float a, float b, float c) { vec[0] = a; vec[1] = b; vec[2] = c; }
    void Setx(float a) { vec[0] = a; }
    void Sety(float a) { vec[1] = a; }
    void Setz(float a) { vec[2] = a; }
private:
    float vec[3];
};
class RotationAngle : public Vector3 {
public:
    float yaw() const { return Vector3::x(); }
    float pitch() const { return Vector3::y(); }
    float roll() const { return Vector3::z(); }
    void SetYaw(float a) { Vector3::Setx(a); }
    void SetPitch(float a) { Vector3::Sety(a); }
    void SetRoll(float a) { Vector3::Setz(a); }
};
As for the vertex, I don't think there is an all-in-one solution. You could have multiple definitions depending on what you want it to hold.
The OOP class method might be nicer looking and cleaner/more organized, but it is slow. Sure, you can use it to make a spinning-cube demo and say, "what are you talking about? it's not slow." Well, throw a couple thousand (or million) vertices at it and tell me it's not slow.
The problem with classes (C++ or C#) is that the object's memory doesn't necessarily have to be laid out linearly. So in the worst case your program has to make a few hops and jumps to different memory addresses to get at the entire contents of the object. A struct is laid out linearly, which also makes it a little easier to vectorize. Why vectorize? For one thing you get more throughput when you can stream data smoothly to the CPU; moreover it pays off when you go to optimize with SSE/SSE2/3DNow!
Also, when you go to save the scene to a file, you can write a struct directly into the file and read it back later byte-for-byte. A class must first be serialized (bringing all the object's data together linearly) and then written to disk. I can almost guarantee that the majority of newbie programmers will sit there writing a routine that loops through all their vertex objects and individually writes out each component. Oh, excuse me, it's the 21st century: we convert the numbers into plain text first, then write it out as XML
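To illustrate, a bare-bones sketch of what I mean (FlatVertex is just a stand-in name, and as discussed further down this approach has real portability caveats):

#include <cstddef>
#include <cstdio>

struct FlatVertex          // plain data: no virtuals, no pointers
{
    float x, y, z;
    float nx, ny, nz;
    float tu, tv;
};

// Dump and reload an array of vertices byte-for-byte.
// Fast and dead simple, but fragile across endianness,
// padding, and format changes.
void SaveVerts(const char* path, const FlatVertex* v, size_t count)
{
    FILE* f = fopen(path, "wb");
    if (!f) return;
    fwrite(v, sizeof(FlatVertex), count, f);
    fclose(f);
}

size_t LoadVerts(const char* path, FlatVertex* v, size_t maxCount)
{
    FILE* f = fopen(path, "rb");
    if (!f) return 0;
    size_t n = fread(v, sizeof(FlatVertex), maxCount, f);
    fclose(f);
    return n;
}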
That's very sound reasoning, and it's why I said my background brings me to do things very differently from what one would do if, for example, writing for PS2; but that's what parsers are for, really:
reading a modular data structure and parsing it into optimized, contiguous allotments before the number crunching.
The same argument about traversing large data sets with simple back-and-forth movements through memory can also backfire big time, given the impossibility of allocating large areas of contiguous memory.
You will hardly ever need to write hundreds of thousands of vertices into a contiguous space; if it's for fast display, you are more likely to end up sorting lots of smaller assets in order (usually by Z-sorting the positions of their centers).
This can obviously go in a truckload of directions (continuous levels, data management and so on) that will find weak and strong points in both styles, and in most of those areas I would just have to shut up, because my experience is limited to a couple of fields.
But going back to the original topic of somebody learning C++ for 3D: I assume the emphasis is not on resource-tight game dev, and for learning purposes I can see an OOP approach being something to learn about before moving on to data management.
P.S.
what’s wrong with XML? 
OK, as no one has mentioned it before in this thread, I have to recommend the official OpenGL book. I've been learning OpenGL these past 6 months, and this book is very nice to use, as it not only tells you about programming in the language but also explains the different ideas beforehand, so you know what is trying to be achieved. Great if you are new to graphics programming, and it also came in handy while writing my dissertation, bonus! It does assume some basic knowledge of C/C++; before this I had never used either, as I was taught Java at university. The book does give you the C code in the listings, though, so the knowledge you claimed to have earlier is probably enough, to start with at least. So if you are serious about learning OpenGL and would like a book, I really recommend going for this one.
OpenGL Programming Guide, Third Edition: Release 1.2
Jackie Neider, Mason Woo, Tom Davis
I don't think it's actually in print anymore, but I got a lightly used copy off Amazon Marketplace for just 9 quid, about 17 USD.
Thanks for the info, Twib. And speaking of books, one that looked pretty interesting to me was:
GPU Gems
http://developer.nvidia.com/object/gpu_gems_home.html
It was written by some people at NVIDIA, with contributions from a lot of other companies. I actually looked at this book once in a Borders store; it's full color and has a LOT of math in it, which could help with making a small 3D app for sure. Anyone seen this book before?
Nothing, I'm being a wise guy. XML has saved my butt plenty of times. Once I found a decent parser, I never went back to writing my own binary file format. Actually, for that particular project a binary format was overkill, considering most of the data involved was text. I had my reasons, though: PalmOS programming is really tight, and your apps are limited to a 96KB heap.
Check out Imath, included in OpenEXR. It's an open-source math library released with ILM's OpenEXR: "a math library with support for matrices, 2D and 3D transformations, solvers for linear/quadratic/cubic equations, and more."
It's a great resource for anyone who wants to get into mathematical data structures and operations in modern C++.
-mk-
Yeah, I have the GPU Gems book as well. This book is about GPU programming; it tells you how people in the know created different effects on the GPU. The emphasis is on NVIDIA's Cg and Microsoft's HLSL (which are really the same) rather than OpenGL. If you're only really interested in OpenGL at the moment, the book will probably not be that much use to you, apart from giving some more general information about different algorithms. If you do want to adapt your knowledge and do some shader programming in Cg, then this is a very useful book, especially once you have read "The Cg Tutorial" by Randima Fernando and Mark Kilgard. Cg is a shader language used to program your GPU; in these programs you can specify functions such as lighting and texturing. The obvious advantage is that it takes a lot of the computation off the CPU, so you can have more complex graphics programs running better: the CPU can work on other things while the GPU, which is specialized for graphics operations, does the rest. However, if you do want to use this language, you still need some OpenGL or Direct3D. Cg requires a C/C++ program that will compile it and feed it the information it needs, mostly vertex coordinates and texture maps; here's where the OpenGL comes in.
Ahhh, thanks for the info Twib, I probably won't get it then. What I would really like to find is a book that has algorithms and source for common 3D modeling tasks, like chamfering, extruding, boolean subtraction, welding etc. That would RULE.
Ah right, erm, as I said I'm quite new to OpenGL, so anyone please correct me if I am wrong, but OpenGL doesn't do those things. It has a few primitive shapes: cube, sphere, teapot, etc. If you want a more detailed original mesh, you have to specify all your vertices and triangles yourself; this is where a 3D modelling package comes in handy. You can then recreate the model in your OpenGL code by feeding in this information from your modelling package. Or do you mean that you want to create your own modelling program? If so, that's probably a bit advanced, hence why they cost so much. I'm afraid I don't know of any books that teach you to do that.
The OOP approach doesn’t have to be slow if coded correctly, and with operator overloading it makes the 3D math MUCH easier to work with (read, write, etc.). You can inline most of the operations by putting the code in the header declaration, especially for a vector/point class.
I would probably suggest storing the x, y, and z floats as separate variables rather than as an array of 3 floats, so that you avoid the array lookup for each element (even if the array lookup is a constant). Saves you some register loading.
If you don't declare any virtual functions in your vector class, the memory used by the class will be linear, without gaps. You can make an array of your class without a problem, and send it directly to the GPU if needed. You can even optimize it. In C++ the only difference between a struct and a class is that a struct has all its members public by default, if memory serves.
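You can even sanity-check that at compile time. A quick sketch (assuming a C++11 compiler for static_assert; older code would use a typedef trick):

class Vec3f
{
public:
    Vec3f() : x(0.0f), y(0.0f), z(0.0f) {}
    float Length() const;   // non-virtual, so no vtable pointer is added
private:
    float x, y, z;
};

// No virtuals, single inheritance: the object is just three packed floats,
// so an array of Vec3f is bit-compatible with an array of raw floats.
static_assert(sizeof(Vec3f) == 3 * sizeof(float),
              "unexpected padding or hidden vtable pointer");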
You really don't want to directly write structs to and from disk. This is an approach everyone uses when they start coding, but as your programs get more complex you discover that it runs into major restrictions. You can't edit your structs' formats to improve them over time, you have to be careful of memory packing (which can lead to performance hits if the data isn't aligned), and you don't deal with endianness well. Personally I prefer loading a file into a memory buffer and then parsing the individual data out of that buffer. It's much faster than reading a little bit at a time directly from the file, and it allows you to write caching systems that hold your files in memory (or read files from a network connection into a memory buffer, or decompress a file into a temp memory buffer, etc.). Also, if you are going to support any of the standard file formats out there (OBJ, LWO), you won't be reading arrays of your vertex type anyway. And you may want to store your vertices internally in homogeneous coordinates in case you are doing your own clipping to the view frustum, but you would only want to load and save 3 components instead of 4.
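Something along these lines, as a rough sketch (the helper names are made up):

#include <cstdio>
#include <cstring>
#include <vector>

// Slurp the whole file into memory once, then parse out of the buffer.
// The same parse code then works on a decompressed buffer, a cached
// buffer, or a network read.
std::vector<unsigned char> ReadWholeFile(const char* path)
{
    std::vector<unsigned char> buf;
    FILE* f = fopen(path, "rb");
    if (!f) return buf;
    fseek(f, 0, SEEK_END);
    long size = ftell(f);
    fseek(f, 0, SEEK_SET);
    if (size > 0)
    {
        buf.resize((size_t)size);
        fread(&buf[0], 1, (size_t)size, f);
    }
    fclose(f);
    return buf;
}

// A trivial cursor for pulling typed values out of the buffer.
struct Parser
{
    const unsigned char* p;

    float ReadFloat()
    {
        float v;
        memcpy(&v, p, sizeof(v));   // memcpy sidesteps alignment issues
        p += sizeof(v);
        return v;
    }
};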
Cheers,
Michael Duffy
Just for kicks, here's my Vec3 class. I have a Vec4 class too, for homogeneous coordinates. I have float #defined as FLOAT, bool #defined as BOOL, etc.
#include <cmath> // for fabsf, sqrtf

#define R_EPSILON 0.0000000001f
//------------------------------------------------------------------------
class RVec3
{
public:
union
{
FLOAT fX;
FLOAT fR;
FLOAT fU;
};
union
{
FLOAT fY;
FLOAT fG;
FLOAT fV;
};
union
{
FLOAT fZ;
FLOAT fB;
FLOAT fW;
};
public:
RVec3 () {};
RVec3 (FLOAT fXIn,
FLOAT fYIn,
FLOAT fZIn) {fX = fXIn; fY = fYIn; fZ = fZIn;};
RVec3 (const RVec3& v3In) {fX = v3In.fX; fY = v3In.fY; fZ = v3In.fZ;};
~RVec3 () {};
VOID Set (FLOAT fXIn,
FLOAT fYIn,
FLOAT fZIn) {fX = fXIn; fY = fYIn; fZ = fZIn;};
VOID Set (const RVec3& v3In) {fX = v3In.fX; fY = v3In.fY; fZ = v3In.fZ;};
RVec3 operator+ (const RVec3& v3In) const {return (RVec3 (fX + v3In.fX, fY + v3In.fY, fZ + v3In.fZ));};
RVec3 operator- (const RVec3& v3In) const {return (RVec3 (fX - v3In.fX, fY - v3In.fY, fZ - v3In.fZ));};
RVec3& operator+= (const RVec3& v3In) {fX += v3In.fX; fY += v3In.fY; fZ += v3In.fZ; return *this;};
RVec3& operator-= (const RVec3& v3In) {fX -= v3In.fX; fY -= v3In.fY; fZ -= v3In.fZ; return *this;};
// scalar multiply and divide
RVec3 operator* (FLOAT fIn) const {return (RVec3 (fX * fIn, fY * fIn, fZ * fIn));};
RVec3 operator/ (FLOAT fIn) const {return (RVec3 (fX / fIn, fY / fIn, fZ / fIn));};
RVec3& operator*= (FLOAT fIn) {fX *= fIn; fY *= fIn; fZ *= fIn; return *this;};
RVec3& operator/= (FLOAT fIn) {fX /= fIn; fY /= fIn; fZ /= fIn; return *this;};
// cross product
RVec3 operator% (const RVec3& v3In) const {return (RVec3 (fY*v3In.fZ - fZ*v3In.fY, fZ*v3In.fX - fX*v3In.fZ, fX*v3In.fY - fY*v3In.fX));};
// dot product
FLOAT operator* (const RVec3& v3In) const {return (fX*v3In.fX + fY*v3In.fY + fZ*v3In.fZ);};
// assignment
RVec3& operator= (const RVec3& v3In) {fX = v3In.fX; fY = v3In.fY; fZ = v3In.fZ; return *this;};
// reverse sign (unary negation)
RVec3 operator- (VOID) const {return (RVec3 (-fX, -fY, -fZ));};
// equality (each component within R_EPSILON of the other)
BOOL operator== (const RVec3& v3In) const {return ((fabsf (fX - v3In.fX) <= R_EPSILON) && (fabsf (fY - v3In.fY) <= R_EPSILON) && (fabsf (fZ - v3In.fZ) <= R_EPSILON));};
BOOL operator!= (const RVec3& v3In) const {return ((fabsf (fX - v3In.fX) > R_EPSILON) || (fabsf (fY - v3In.fY) > R_EPSILON) || (fabsf (fZ - v3In.fZ) > R_EPSILON));};
FLOAT LengthSquared (VOID) const {return (fX*fX + fY*fY + fZ*fZ);};
FLOAT Length (VOID) const {return (sqrtf (fX*fX + fY*fY + fZ*fZ));};
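// NOTE: Normalize assumes a non-zero length; check Length() against R_EPSILON first if the vector could be zero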
RVec3& Normalize (VOID) {FLOAT fLength = Length (); fX /= fLength; fY /= fLength; fZ /= fLength; return *this;};
VOID Zero (VOID) {fX = fY = fZ = 0.0f;};
VOID Reverse (VOID) {fX = -fX; fY = -fY; fZ = -fZ;};
VOID Add (FLOAT fXIn,
FLOAT fYIn,
FLOAT fZIn) {fX += fXIn; fY += fYIn; fZ += fZIn;};
VOID Subtract (FLOAT fXIn,
FLOAT fYIn,
FLOAT fZIn) {fX -= fXIn; fY -= fYIn; fZ -= fZIn;};
// component-wise squaring
RVec3 Squared (VOID) const {return RVec3 (fX * fX, fY * fY, fZ * fZ);};
};
OK, I'll admit perhaps I'm being too picky and OCD. Still, C++ objects need to be allocated, then the constructor is called, followed by whatever init code the programmer designs. With a flat struct I can just allocate the buffer and zero it, then fill it with data.
Ditto. And again, with a flat memory buffer to store the vertices, I can stream them to the CPU and even use the PREFETCH instruction for a bonus.
What about subclasses? Would they have copies of inherited functions, or function pointers (inserted by the compiler, not by design)?
I don't see why not. Stick a version-number tag at the beginning of the file so your parser knows the layout of the data. Otherwise you might as well say "binary formats are sloppy, store everything as XML." Which is OK: it's endian-safe, and it's 64-bit ready.
You see, the idea is to store similar data together. For example, all my XYZ coords would be stored in a linear fashion, laid out flat inside the file, then all my normals the same way. Later on, if a new feature comes along, it's added as a "group", more specifically a memory region in the file. That way it doesn't interfere with the already-established data. My "new" parser would still understand the old data format, and I can still easily add new data types and features without breaking backwards compatibility.
You can buffer a whole binary file and parse it out too. It would be a lot faster and less memory-hungry. Consider this: storing the float value "3.14159265" as text takes 10 ASCII characters, thus 10 bytes (double that if it has to be Unicode). The same value can be stored in binary using just 4 bytes (the size of a float). But yes, it has its flaws: it's not endian-safe and it can cause problems on a 64-bit platform.
I kind of walked into this conversation with the wrong mindset, and I apologize for that. I was thinking in terms of real-time 3D applications, such as games, where speed means everything. But for a 3D modeling app, being able to sanely manage all that extra data and info is a better cause than speed.
Cheers.
No.
A constructor should always try to init its member data in the initializer list, outside the function body, like:
Vec3f::Vec3f(float ix, float iy, float iz) :
x(ix),
y(iy),
z(iz)
{}
This tells the compiler that x, y, z are inited right as the Vec3f object is created, and it is more efficient than doing it inside the constructor body. This is particularly true when you are initing large class members instead of simple datatypes.
Still, this initialization of members is totally optional. In my classes I usually create a special constructor that does not initialize its elements for this very reason.
If you were to write a constructor that is just:
Vec3f::Vec3f() : x(0), y(0), z(0) {};
Vec3f::Vec3f( void* ) {};
and use it like:
static void* dummy;
Vec3f v1; // initialized to 0
Vec3f v2(dummy); // xyz have crap in them
The dummy is a simple identifier that tells the compiler to use a constructor other than the default one. It would be a single static within your whole codebase. Today's C++ compilers should also be able to optimize it away, given that the void* is not used within the constructor at all.
For larger classes with lots of members, you can do the same as in C, and use a constructor to init everything to 0 with memset or similar:
Matrix4::Matrix4()
{
memset(this, 0, sizeof(Matrix4));
}
Not as long as you stick to single inheritance and non-virtual functions, and the code is compiled with optimizations. As long as you follow those rules, your class's memory footprint should be equal to that of a struct in C.
As you clearly don't yet have a good grasp of how C++ works internally and you are starting to code with a lot of bad assumptions, I'd recommend:
- For a thorough discussion of memory footprint (and some potential performance pitfalls) of the C++ object model, I recommend “Inside the C++ Object Model” by Stanley Lippman.
- All the books by Scott Meyers, as he also covers common issues that show up with both C++ and the STL.
Scott Meyers is very, very, very easy to read and follow, but he does assume you have a solid grasp of C++ first. It's probably better to read a good C++ book first (so you understand what virtuals are, inheritance, etc.), then Lippman, and then Meyers later.
P.S. What do you think most 3D packages, games and applications you use today are coded in? If C++ is good enough for Maya, 3ds max, etc., and pretty much every game on the market today, I'd say it should be good enough for what you want. The only reason to avoid C++ these days would perhaps be if you were writing a renderer, and even that is somewhat open to debate.
Okay… let's do that… are your XYZ coords floats? What if you later realize that floats were not enough and you needed doubles? Or what if tomorrow, when 64-bit machines and several terabytes of data become common on personal computers, your users start complaining that doubles are no longer enough for them either? What if you decide that you need to support vertices in homogeneous coordinates and now need XYZW?
End result: your format is obsolete, or you are now storing the data twice to keep backwards compatibility.
Most binary formats that stand the test of time predict these sorts of things and take the approach of using tags or blocks/chunks to declare the type of data and its alignment. This makes the formats harder to parse, but also easily extensible.
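In sketch form, a chunk header might look like this (a hypothetical layout, loosely in the spirit of IFF/RIFF, not any specific format):

#include <cstdint>

// Each block of data in the file is preceded by a small header.
// A reader that doesn't recognize a tag just seeks past `size` bytes,
// so old parsers survive new chunk types and new parsers survive old files.
struct ChunkHeader
{
    uint32_t tag;      // four-char code, e.g. 'VPOS' for vertex positions
    uint32_t version;  // lets a chunk later switch, say, float to double
    uint32_t size;     // payload size in bytes, used to skip unknown chunks
};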
I stand corrected. I was already starting to see holes in my arguments, and I believe I pointed some of them out myself as I realized them.
I want to make a disclaimer first: I'm not just arguing with you to be a blockhead. I genuinely want to continue this discussion because it's interesting.
All this talk of future-proofing and what to do when 64-bit computing becomes mainstream… well, it's already upon us. All those big companies you mentioned have their own proprietary file formats; how do you think they're going to deal with the 64-bit change? (Rhetorical question.)
What's wrong with that? The data is NOT stored twice; the code that parses it is redundant (if that's what you mean), and am I wrong in believing that this code redundancy is necessary, at least in the intermediate time frame of transitioning to 64 bits?
There are many applications with binary file formats that support both big and little endian; that's a non-issue. For example, I've seen an MD3 tutorial that spent time on the endian issue, because apparently the original MD3 code hadn't taken it into consideration, so the example code would not work on a Mac. The solution was basically to add code to convert the endianness where necessary. It has to happen somewhere; how else are we able to view JPEG images (for example) on both PCs and Macs? Store the encoded data twice in the file? I don't think that's the case.
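The conversion itself is only a few lines. A sketch:

#include <cstdint>
#include <cstring>

// Reverse the byte order of a 32-bit word; apply on load whenever
// the file's endianness differs from the host's.
uint32_t Swap32(uint32_t v)
{
    return (v >> 24)
         | ((v >> 8) & 0x0000FF00u)
         | ((v << 8) & 0x00FF0000u)
         | (v << 24);
}

float SwapFloat(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);   // type-pun via memcpy, not a cast
    bits = Swap32(bits);
    memcpy(&f, &bits, sizeof bits);
    return f;
}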
Look at this hypothetical situation: we have binary files that were created on a 32-bit platform. Along comes a 64-bit version of Maya. Common sense tells me it should still be able to open those older files. But when you re-save the file, guess what? It gets saved in the new format, and if Alias decides to use larger data types, that's when the conversion is done. So what's the big deal? Maya already has 12+ million lines of code; what's a little more baggage for the sake of backwards compatibility? And I won't believe that the talented programmers at Microsoft were thinking about the future when they were developing every version of Windows after 95, otherwise Windows wouldn't be the monstrosity that it is, and MFC would never have been conceived.
I can appreciate the fact that with the right knowledge and skills, good C++ classes can be designed to work efficiently. What I don't seem to "grasp" is why you'd encapsulate an individual entity such as a vertex in a class, and then how do you store thousands of them? In an array? We already established that working with arrays can be slow (thanks MDuffy), and yes, I'll admit I might be OCD in trying to work around that.
And again I'll admit that I'm going about this haphazardly, using an immutable memory buffer to store data, especially for a 3D modeling app where the data structure has to be able to grow. I haven't found it yet, but I believe there's a happy medium: a mutable memory buffer that stores similar data linearly.
Well, since you mentioned the Windows OS in the same paragraph, I guess you answered your own question. Having duplicated code leads to code bloat, potential new bugs, things that work in one format but not the other, etc. All things your customers will not like.
Following your example, say Maya did change their binary format. Guess how many angry emails they would receive because their new scenes don't open in previous Maya versions? Big changes like that need to be done carefully, so it makes sense to design new formats thinking in advance that they may need to be extended later on. Sometimes there is no other way but to trash the format, as even the best-thought-out design may need to be scrapped due to some unforeseen development. That's also why Maya supports both a binary and an ASCII format: even if the binary format drastically broke compatibility, you'd always have the ASCII format for any arising issue.
Because any application of any complexity will need to perform operations on those vertices. Using a class with associated member and non-member functions helps with this enormously. You certainly don’t need to do it, but it will make your code much more readable.
And nobody has established that arrays are slower at all; if you read back, you'll see the exact opposite. Array access and iteration can be done exactly the same way as just moving a pointer; in fact, most compilers these days will do that sort of optimization automatically.
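For instance, these two loops typically compile down to the same thing (a sketch; Vec3f here is just a stand-in struct with public members):

#include <cstddef>

struct Vec3f { float x, y, z; };

// Indexed access...
float SumX_Indexed(const Vec3f* v, size_t n)
{
    float sum = 0.0f;
    for (size_t i = 0; i < n; ++i)
        sum += v[i].x;
    return sum;
}

// ...and explicit pointer walking: an optimizing compiler
// usually emits identical code for both.
float SumX_Pointer(const Vec3f* v, size_t n)
{
    float sum = 0.0f;
    for (const Vec3f* p = v; p != v + n; ++p)
        sum += p->x;
    return sum;
}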
The only place where your statement is somewhat valid is array creation. If you do something like:
Vec3f* a = new Vec3f[256];
those 256 vectors will automatically run their default constructor, which, if it is not empty, will indeed incur a small (and really insignificant) overhead compared to a malloc allocation followed by a memset.
But even that can be worked around if you truly need to. And when it comes to performance and memory in an application, you will likely run into many other issues (CPU cache sizes, stack access, etc.) before a basic vector constructor becomes your problem.
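One common workaround, as a sketch: grab raw memory so no constructors run, then placement-new only the elements that actually need initializing.

#include <cstdlib>
#include <new>

struct Vec3f
{
    Vec3f() : x(0.0f), y(0.0f), z(0.0f) {}
    float x, y, z;
};

int main()
{
    // malloc runs no constructors, exactly like a C allocation...
    Vec3f* a = static_cast<Vec3f*>(std::malloc(256 * sizeof(Vec3f)));

    // ...then placement new constructs just the elements you need.
    new (&a[0]) Vec3f();

    std::free(a);   // trivial destructor, so freeing directly is fine here
    return 0;
}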