The eye vector isn’t a function of the camera rotation alone; it’s a function of the camera rotation and the screen-space position, or of the camera transform (or the film back) and the sampled point (usually the shading point), depending on which direction you trace the path. Even that is a bit of a simplification.
You can get the camera’s facing direction by just taking the facing vector from the camera’s transform matrix (most likely positive Z in Nuke), but the further you are from the centre of the screen space you’re sampling, the more it will diverge from the actual eye vector.
The simplest eye vector is the position of the point you’re sampling minus the position of the centre of your film back. There’s still some discrepancy at this level of simplification, but it’s a helluva lot closer than just taking the camera facing.
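If it helps, here’s a rough Python sketch of both approximations. It assumes you already have the camera’s 4x4 world matrix and a world-space point (e.g. from a position pass); the names and matrix convention are just illustrative, not Nuke’s API.

```python
import math

def normalize(v):
    # Scale a 3-vector to unit length.
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def camera_facing(world_matrix):
    # Approximation 1: take the facing vector straight off the camera's
    # transform, i.e. the Z axis (third column) of its world matrix.
    # Flip the sign if your convention has the camera looking down -Z.
    return normalize((world_matrix[0][2], world_matrix[1][2], world_matrix[2][2]))

def eye_vector(world_matrix, point):
    # Approximation 2: per-point eye vector, the sampled point minus the
    # camera position (translation column of the matrix).
    cam_pos = (world_matrix[0][3], world_matrix[1][3], world_matrix[2][3])
    return normalize(tuple(p - c for p, c in zip(point, cam_pos)))
```

The second one is the "point minus film-back centre" version, just using the camera position as that centre.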
What are you trying to do exactly?