There is no setting for convergence; it's either on or off depending on whether you have the camera targeted to a null. You will also need to disable Pitch and Bank on the camera, as convergence should only affect the heading. It's also a good idea to put your camera into a rig that allows the movement to be separately controlled on other nulls. I like to build a virtual dolly and crane with my setups. This gives a more realistic camera in the end.
HOW WOULD YOU IMPLEMENT THIS? Stereoscopic Rendering (3D)
Just wondering, does anyone know if glasses-free 3D televisions work the same way, so the LW stereoscopic method can be used?
A stereo cam setup is not that difficult to set up.
One null with, as children, a left-eye cam and a right-eye cam, 8 cm apart.
Having two cameras means you have to render the scene twice. If you use the stereo
function from LW it will render both eyes in one go.
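To make the rig above concrete, here is a minimal Python sketch (not LightWave-specific; the function and values are illustrative) of the parent-null setup: the null carries position and heading, and each eye cam sits half the interaxial distance along the null's local X axis.

```python
import math

INTERAXIAL = 0.08  # 8 cm eye separation, in meters

def eye_positions(null_pos, heading_deg, separation=INTERAXIAL):
    """Given the parent null's position and heading (rotation about Y),
    return world-space positions for the left- and right-eye cameras.
    Each cam is offset half the interaxial along the null's local X axis."""
    x, y, z = null_pos
    h = math.radians(heading_deg)
    # Local +X axis of a rig rotated by `heading` about the Y (up) axis
    right_axis = (math.cos(h), 0.0, -math.sin(h))
    half = separation / 2.0
    left_cam = (x - right_axis[0] * half, y, z - right_axis[2] * half)
    right_cam = (x + right_axis[0] * half, y, z + right_axis[2] * half)
    return left_cam, right_cam

left, right = eye_positions((0.0, 1.6, 0.0), heading_deg=0.0)
print(left)   # (-0.04, 1.6, 0.0)
print(right)  # (0.04, 1.6, 0.0)
```

Because both cams are children of the null, animating the null alone moves the whole stereo pair while the 8 cm separation stays fixed.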
- P -
The thing about 3D is that you have to separate MOST viewing methods from the generation method. Humans have two eyes; we see the world three-dimensionally from birth. Two renders that replicate that view (in scale) are all you need. Once you have those two images, you can set them up for viewing in almost any format, including glasses-free 3D. Many of those systems will work better with more than one stereo pair, but they can do amazing things with just one pair.
Remember, LW stereo rendering is workable for any system, be it print, anaglyph, polarized, RealD, Dolby 3D, IMAX 3D, Sensio, field-sequential, Pulfrich, etc. There is no new stereo technology. Stereo hasn't changed since the Civil War, when they photographed battlefields in 3D. We still have two eyes, and all you need to do is recreate that. The only thing that has improved is the method of delivery (projection) and the knowledge of how to set it up.
Thanks,
that basically answered my question 
It’d be cool to test this some day
Home-made 3D
http://www.scec.org/geowall/
I’m not sure if reed3d sells other than wholesale anymore. A few varieties of non-paper 3D glasses can be found at:
http://www.rainbowsymphony.com/3d-glasses.html
I use mostly colour anaglyph with red/cyan paper glasses for gallery installations, CDs, etc. (I buy them by the truckload :)) although I did my Master’s thesis project with LCD shutter glasses and images rendered with the Lightwave Stereo rendering.
OK, so now I have these two images from the stereo option in LightWave. What settings in the other app (normal, screen, overlay, etc.) do I use to comp them together?
E.
If you're in Orlando, swing by the DAVE School to see this stuff in action… they are currently working on an animated short using this technology.
Unfortunately, I won't be getting to Orlando any time soon. I mean, I understand the whole null convergence thing; it's just the settings problem I can't crack. C'mon, somebody has to have done this…
E.
Here’s another site I found that sells 3d Glasses - all types - and nicer plastic ones as well:
OK, once you have your left-eye and right-eye images, you're all set. Now you just have to decide what you can afford in order to view these in 3D. I will cover the options in order of complexity and expense. Most methods require you to use either Photoshop to process a few images or a compositing application to process a long sequence.
[ol]
[li]Freeview and cross-eye. You simply place the two images next to each other and allow your eyes to diverge or cross. I find the cross-eyed method the easiest to accomplish. Just place the right-eye image on the left side of the screen and the left on the right, cross your eyes, and allow yourself to focus on the blurry image in the middle. Once locked on, you will see beautiful 3D in full color.[/li][li]Anaglyphic. This is a very cheap and easy method, but it does limit the color, as you're using color channels to separate the two eyes. In Photoshop, simply copy the red channel from the left-eye image and replace the red channel of the right-eye image with it. In Fusion, Shake or After Effects you can swap the channels using boolean or channel-reorder operations. Red/cyan glasses are best for color retention; red/green work well for black-and-white images. You can get these glasses from many internet shops for around 40 cents a pair. http://www.rainbowsymphony.com/[/li][li]LCD shutter glasses. You can get a pair of 60 Hz shutter glasses for video for around $70: http://www.razor3donline.com/ With these you can create an image sequence that interlaces the left and right images into one, with field 1 being the left eye and field 2 the right. When played back from DVD, VHS or any NTSC signal on a CRT-based monitor (DOES NOT WORK ON PLASMA, LCD or HD sets), you get a pretty solid 3D image.[/li][li]Polarized projection. This setup will cost several thousand dollars. You will need specialized software, plus a media server to feed the projectors or dual DVD machines that are interlocked. You also need two projectors, both with polarizing filters set to 45 and 90 degrees, a silver-painted non-depolarizing screen, and polarized glasses. It takes a lot of calibrating, but this will give a fantastic experience. Be careful of light levels, as you lose almost 60% of your light to all the filtering.[/li][li]3D DLP television. I have just got one of these.
Samsung makes a DLP-based HD set that can display 120 images per second at full HD. Using special flicker-free shutter glasses and an HTPC feeding your set, you get a truly amazing 3D display in your home.[/li][/ol]
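The anaglyph channel swap in method 2 above is easy to sketch outside Photoshop. Here is a minimal Python/NumPy version (the function name and the tiny demo arrays are my own, for illustration): the output takes its red channel from the left eye and its green/blue channels from the right eye, matching red/cyan glasses.

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Red/cyan anaglyph: red channel from the left-eye image,
    green and blue channels from the right-eye image.
    Both inputs are (height, width, 3) arrays of the same shape."""
    out = right_rgb.copy()
    out[..., 0] = left_rgb[..., 0]  # replace red with the left eye's red
    return out

# Tiny 1x1 demo: left pixel is pure red, right pixel is pure cyan
left = np.array([[[255, 0, 0]]], dtype=np.uint8)
right = np.array([[[0, 255, 255]]], dtype=np.uint8)
print(make_anaglyph(left, right))  # [[[255 255 255]]]
```

The same channel routing is what the boolean / channel-reorder nodes in Fusion or After Effects are doing for a whole image sequence.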
Hopefully that will give you some ideas on how to view your 3D animations in stereo. I’m putting together some tutorials on my website soon that will go a lot deeper into the techniques I was trying to describe above.
I thought I would chime in here; I helped develop the workflow that the DAVE School uses. We use option 5 from the post above, with the Samsung DLP TV. Works great: we have a 32-bit quad-core hooked up to it, using a stereoscopic player to stitch the two separate video files together in real time.
When testing the eye separation in AE or Fusion we use anaglyph glasses (don't wear them too long, as they temporarily screw up your color vision). In AE we just use the effect that comes with the software; in Fusion we use a simple node setup, which I can't remember off the top of my head, that does the same thing. I've found that an AE workflow works better when doing stereo, but really you can use either one.
Right now I think one of our students uses a setup he built on his own for the project they are working on, to do eye separation and convergence. The only downside is that it uses two separate cameras and thus has to be broken out separately. However, you almost HAVE to do it this way if you're using Render Buffer Export, as RBE does not currently support the stereo toggle in the camera panel. I usually break out my passes manually and don't use RBE anyway, because its output is only 8-bit images.
I've passed along a stereo wish list to NewTek that I hope gets looked at in the near future, because we do see a lot of this stuff these days and it helps to have every advantage you can get.
I hope that makes sense to y'all 
OK, this may be really stupid. I had an idea: if you went ahead and used the two-camera setup (shot the sequence twice at 8 cm camera distance), then used spotlight-projected image sequences, with the spots at the same distance as the original cameras, then reshot the combined images projected onto a flat plane with a third camera, couldn't you use the polarized glasses we got for free at Beowulf to view the 3D image? (Yeah, probably a really stupid idea.)
E.
Does anyone know the exact technique used on Beowulf to get the non red/blue 3D effect?
Is this something we can do in LightWave, and if so, how? (I kept the sunglass-looking grey 3D glasses from the movie for testing, hoping to find out how to do this myself.) I have the red/blue glasses, but they still distort the color way too much for my liking; that's why I love these sunglass-type 3D glasses like the ones the Beowulf movie and some Universal Studios thrill rides use.
I tried Googling and found it uses RealD tech, but couldn't find anything explaining exactly how it works so I could recreate it in LightWave.
Any help is much appreciated.
Just a link to a stereoscopic camera plugin; I thought this was the right place to post it.
http://www.flay.com/GetDetail.CFM?ID=2403
Nottin’ more…
OK, all 3D is the same no matter what you're doing with it. RealD as in Beowulf, or IMAX, or Spy Kids (red/cyan): it all comes out of LightWave or Maya or RenderMan the same way.
You render a left-camera image and a right-camera image. (I know RenderMan and LightWave's ACT can render two views in one render, but that's not the point here.)
Once you have those two images you can display them many ways. In order to view them with your RealD glasses, you will have to PROJECT them using two projectors, each with a CIRCULAR polarization filter on its lens. Polarization takes the light waves and aligns them to a pattern; this pattern is decoded by your glasses, which allow only the correct pattern through. If the light is any other type, or has been depolarized, the glasses will appear black and opaque. The last part of the equation is that you must project these two images onto a silver screen. Silver screens maintain the reflected light's polarization. If you were to try this on a flat white wall or screen, it would fail, because the light would scatter and depolarize after the bounce.
Over the years we've done a number of 3D stereoscopic projects using LightWave, as well as our in-house renderer, for IMAX and HDTV. We found that having the convergence point at infinity works best.
There are many ways of combining the left and right eye images to produce stereo. Here is a tutorial I did for one of these ways.
http://www.jpl.nasa.gov/news/features.cfm?feature=528
[b]http://tinyurl.com/bugco[/b]
Best Regards,
Zareh
P.S. For the live-action stereo photos used in the tutorial above, the cameras were positioned in a parallel manner as well, putting the convergence point at infinity.
I agree. However, I'd like to take it further and say that you do not need any convergence set in the render. Using orthographic stereo or parallel cameras is best. The muscles in your eyes are set up to converge, but not to diverge. When using a convergence point and toeing in the cameras, there's a tendency to have too much parallax in the distance at infinity and barrel distortion in the corners. This is the leading cause of eyestrain.
Parallel cameras all the time. That's not to say you can't use converging cams for special instances; a foreground character that's rendered as a separate layer, for example, could have converging cams.
