Calculate Eye Vector


#1

Hey,

If I know the rotation of a camera (rX, rY, rZ), could I derive an eye vector? I understand that it’s a vector pointing in the direction of the camera, but I have no idea how to derive it.

I’m basically trying to calculate an eye vector from a Nuke camera.


#2

hi nick,

i think you should be able to set up a rotation matrix with your angles and transform the vector <<0, 0, 1>> with it. i guess it’s <<0, 0, 1>> because when rotation is all 0 the camera is aligned along the z axis.
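something like this sketch, assuming Nuke’s default ZXY rotation order and angles in degrees. one caveat: Nuke follows the OpenGL convention where the camera at zero rotation actually looks down negative z, so the code below transforms <<0, 0, -1>>; swap in <<0, 0, 1>> if your setup turns out to be the other way around.

```python
import math

def rot_x(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def mat_vec(m, v):
    return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

def camera_forward(rx, ry, rz):
    """Forward vector for camera rotations in degrees.

    Assumes ZXY rotation order (Z applied first), so the
    composite matrix is Ry * Rx * Rz.
    """
    rx, ry, rz = (math.radians(a) for a in (rx, ry, rz))
    r = mat_mul(rot_y(ry), mat_mul(rot_x(rx), rot_z(rz)))
    # camera at zero rotation looks down -Z (OpenGL/Nuke convention)
    return mat_vec(r, [0.0, 0.0, -1.0])
```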

does that work?
grs
Patrik


#3

The eye vector isn’t a function of the camera rotation alone. It’s a function of the camera rotation and the screen space, or a function of the camera transform (or the film back) and the sampled point (most of the time the shading point), depending on which direction you approach it from. This is already a bit of a simplification.

You can get the direction of the camera by just taking the facing vector from the camera transform matrix (most likely positive Z in Nuke), but the further away you are from the centre (of the sampled screen space), the more divergent it will be from the actual eye vector.

The simplest eye vector is the position of the point you’re sampling minus the position of the centre of your filmback. It will still have some discrepancy at this level of simplification, but it’s a helluva lot closer than just taking the camera facing.
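As a minimal sketch, assuming the sampled point and the camera position are both available in world space (the camera translation standing in for the filmback centre):

```python
import math

def eye_vector(point, cam_pos):
    """Approximate eye vector: from the camera towards the sampled point.

    Uses the camera translation as a stand-in for the filmback centre,
    so it keeps the small discrepancy described above. Inputs are
    (x, y, z) sequences in the same (world) space.
    """
    v = [p - c for p, c in zip(point, cam_pos)]
    length = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / length for x in v]
```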

What are you trying to do exactly?


#4

Cheers Jaco, ok I owe you a beer.

I’ve been looking at writing an environment mapper for Nuke (there are ones out there, I just like this exercise from a learning perspective).

Using this as a reference:
http://www.ozone3d.net/tutorials/glsl_texturing_p04.php

Almost all methods seem to require an eye vector. Thought I might save rendering an extra AOV and just calculate it off other passes/camera.

Cheers,

Nick


#5

in that case this really depends on what kind of rendering passes you get. having position and normal information, in combination with the position of the camera, would be enough to calculate the corresponding reflection vector, which can then be used for environment map lookups.
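as a sketch, assuming a world space position pass, a normalised world space normal pass, and the camera position; R = I - 2(N·I)N is the standard reflection formula:

```python
import math

def normalize(v):
    length = math.sqrt(sum(x * x for x in v)) or 1.0
    return [x / length for x in v]

def reflection_vector(point, cam_pos, normal):
    """Reflect the eye vector about the surface normal: R = I - 2(N.I)N.

    point   -- world space position from a position pass
    cam_pos -- camera position in the same space
    normal  -- unit surface normal from a normal pass
    """
    i = normalize([p - c for p, c in zip(point, cam_pos)])
    d = sum(n * x for n, x in zip(normal, i))
    return [x - 2.0 * d * n for x, n in zip(i, normal)]
```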

less flexible but more direct would be to render the reflection vector directly into your pass

grs
Patrik


#6

> in that case this really depends on what kind of rendering passes you get. having position and normal information, in combination with the position of the camera, would be enough to calculate the corresponding reflection vector, which can then be used for environment map lookups.

This is my preference


#7

You can reasonably approximate it if that’s all you’re after.

Take the point the camera is looking at and the camera’s position, subtract one from the other, and you get the eye vector approximation (what I mentioned before). Then use that and the normal of the same point to calculate the reflection look-up (the eye vector reflected about the plane of the normal). From there you can look up the appropriate spherical coordinate (or whatever other space you intend to use) and map it to the raster coordinate of the image.

If you’re working off an image, you can work out all of that from the ratio between the frame’s size and the camera’s angle of view (each pixel has an unambiguous eye vector), plus a normal pass.
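As a sketch of that per-pixel version, assuming a pinhole camera described by Nuke-style focal length and horizontal aperture values, square pixels, and a result in camera space (you would still rotate it by the camera matrix to get to world space):

```python
import math

def pixel_eye_vector(x, y, width, height, focal, haperture):
    """Camera-space eye vector for pixel (x, y) of a pinhole camera.

    focal and haperture are the camera's focal length and horizontal
    aperture in the same unit (mm on a Nuke camera node); the vertical
    half-angle is derived from the frame's aspect, assuming square
    pixels. The camera looks down -Z.
    """
    tan_h = (haperture / 2.0) / focal        # tan of horizontal half-angle
    tan_v = tan_h * (height / float(width))  # vertical, from the aspect
    u = (x + 0.5) / width * 2.0 - 1.0        # -1..1 across the frame
    v = (y + 0.5) / height * 2.0 - 1.0
    d = [u * tan_h, v * tan_v, -1.0]
    length = math.sqrt(sum(c * c for c in d))
    return [c / length for c in d]
```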

All the various env mapping techniques basically take advantage of the fact that, given a space that can be described implicitly, such as a spherical environment, you can condense a lot of those steps into a single function that looks up a 2D space directly from just two vectors.
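For a spherical (lat-long) environment that single function might look something like this sketch; the atan2/asin convention is just one common choice, so you may need to flip or offset it to match how your map was made:

```python
import math

def latlong_uv(r):
    """Map a unit direction to lat-long (spherical) UVs in [0, 1].

    u wraps around the vertical (Y) axis via atan2, v comes from the
    elevation via asin; flip or offset to match your map's orientation.
    """
    rx, ry, rz = r
    u = 0.5 + math.atan2(rx, -rz) / (2.0 * math.pi)
    v = 0.5 + math.asin(max(-1.0, min(1.0, ry))) / math.pi
    return u, v
```

Multiply u and v by the map’s raster dimensions and you have the pixel to sample.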

