I thought I would chime in here, since I help develop the workflow that the DAVE School uses. We use option 5 from the post above with the Samsung DLP TV. Works great: we have a 32-bit quad-core machine hooked up to it, using a stereoscopic player to stitch two separate video files together in real time.
When testing the eye separation in AE or Fusion, we use anaglyph glasses (don't wear them too long, as they temporarily screw up your color vision). In AE we just use the effect that comes with the software; in Fusion we use a simple node setup, which I can't remember off the top of my head, that does the same thing. I've found that an AE workflow works better for stereo, but really you can use either one.
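For anyone curious what that anaglyph effect/node setup is actually doing under the hood, a standard red/cyan anaglyph is just a channel swap: take the red channel from the left-eye image and the green and blue channels from the right-eye image. A minimal sketch in Python/NumPy (the function name and demo frames are mine, not from any particular package):

```python
import numpy as np

def make_anaglyph(left, right):
    """Combine left/right eye frames (H x W x 3, uint8) into a
    red/cyan anaglyph: red channel from the left eye,
    green and blue channels from the right eye."""
    anaglyph = right.copy()        # start with right eye (green + blue)
    anaglyph[..., 0] = left[..., 0]  # overwrite red with the left eye's red
    return anaglyph

# Tiny demo: a solid-red left frame and a solid-blue right frame
left = np.zeros((2, 2, 3), dtype=np.uint8)
left[..., 0] = 255
right = np.zeros((2, 2, 3), dtype=np.uint8)
right[..., 2] = 255

out = make_anaglyph(left, right)
# every pixel ends up red=255 (from left) and blue=255 (from right)
```

Viewed through the glasses, the red filter passes only the left eye's channel and the cyan filter only the right's, which is why such a simple composite is enough for quick eye-sep checks.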
Right now I think one of our students is using a camera setup he built on his own for the project they are working on, to handle eye separation and convergence. The only downside is that it uses two separate cameras, so the passes have to be broken out separately. However, you almost HAVE to do it this way if you're using Render Buffer Export, as RBE does not currently support the stereo toggle in the camera panel. I usually break out my passes manually and don't use RBE anyway, because its images are only 8-bit.
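The math behind a two-camera rig like that is pretty simple if you go the "toed-in" route (parallel cameras with a post shift is the other common approach): offset each camera half the interaxial distance from center, then rotate each one inward so their view axes cross at the convergence distance. A rough sketch, with names of my own choosing rather than anything from a specific app:

```python
import math

def stereo_rig(interaxial, convergence_dist):
    """Return (left_x, right_x, toe_in_degrees) for a toed-in
    two-camera stereo rig.

    interaxial       -- distance between the two cameras
    convergence_dist -- distance at which the view axes cross
                        (objects there sit at screen depth)
    """
    half = interaxial / 2.0
    # inward rotation for each camera so the axes meet at the
    # convergence point straight ahead of the rig's center
    toe_in = math.degrees(math.atan2(half, convergence_dist))
    return -half, half, toe_in

# Example: roughly human eye separation (65 mm), converging 2 m away
lx, rx, angle = stereo_rig(0.065, 2.0)
# each camera sits 32.5 mm off center, toed in just under a degree
```

Objects nearer than the convergence distance will appear in front of the screen, objects farther behind it, which is why dialing in both the separation and the convergence per shot matters.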
I've passed a stereo wish list along to NT that I hope gets looked at in the near future, because we're seeing a lot of this kind of work these days and it helps to have every advantage you can get.
I hope that makes sense to y'all.