So, I got LW 7.5 and messiah:studio Rev9b loaded and configured, and both seem to run quite well on it. OpenGL performance appears to be a bit slower than on my home rig (an Athlon 2100+ with 1GB of RAM and a GeForce4 4200), but nothing dramatic, and completely usable.
I hadn't tried a render test until today, however, and I was shocked. No, wait, make that *SHOCKED*. While it renders a bit faster in LW (a scene that took 10m40s on the Athlon took 8m30s on the Pentium M), the speed difference in messiah:render is dramatic, to say the least - so far, scenes are rendering anywhere from 2.5 to 3 times as fast.
For instance, a scene with one metanurbed character, an area light (direct illumination only, no radiosity), a floor, and a UV colormap rendered in 2m32s on the Athlon and 49s on the Pentium M. The same scene with Monte Carlo radiosity clocked in at over 13 minutes on the Athlon and 4m55s on the Pentium M. All benchmarks were run with the Dell on its AC adapter - I'm not sure whether performance will be similar on battery power, or whether that can be tweaked via the system power settings.
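For anyone who wants to check the ratios I'm quoting, here's a quick sanity-check script using the times from this post (the 13-minute Athlon radiosity figure is treated as 13m00s, since I only noted "over 13 minutes"):

```python
# Speedup ratios from the benchmark times quoted above.

def to_seconds(m, s):
    """Convert minutes + seconds to total seconds."""
    return m * 60 + s

# (Athlon time, Pentium M time) in seconds
direct = (to_seconds(2, 32), to_seconds(0, 49))     # area light, no radiosity
radiosity = (to_seconds(13, 0), to_seconds(4, 55))  # Monte Carlo radiosity

for label, (athlon, pentium_m) in [("direct", direct), ("radiosity", radiosity)]:
    print(f"{label}: {athlon / pentium_m:.2f}x faster on the Pentium M")
# direct comes out around 3.1x, radiosity around 2.6x
```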
So, a question for pmG (or anybody else who might know) - is this performance due primarily to the large L2 cache the Pentium M carries (1MB)? Or is there something else going on here? I know that LW takes advantage of Intel's SSE2 instructions, so I expected a performance boost in Lightwave (and got about 15% in my test), but I'm not aware of messiah operating in the same fashion. The two renderers are fundamentally different, of course, with Lightwave using a Z-buffer technique while messiah uses a scanline approach, and I'm not sure how much effect that has on this test.
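To illustrate the cache theory (this is just a toy sketch, not anything to do with messiah's actual renderer): a larger L2 cache mostly pays off when a program's memory access pattern would otherwise miss cache. The script below walks the same array once in order (cache-friendly) and once in shuffled order (cache-hostile); on most machines the shuffled walk is noticeably slower, and how much slower depends on cache size. The array size here is an arbitrary guess, and in an interpreted language the gap is muted by interpreter overhead, so treat it as directional only:

```python
# Toy illustration of cache-friendly vs cache-hostile access patterns.
# Same total work in both walks; only the visit order differs.
import random
import time

N = 1_000_000
data = list(range(N))

seq_order = list(range(N))
rand_order = seq_order[:]
random.shuffle(rand_order)

def walk(order):
    """Sum the array in the given visit order and time it."""
    start = time.perf_counter()
    total = 0
    for i in order:
        total += data[i]
    return total, time.perf_counter() - start

seq_total, seq_t = walk(seq_order)
rand_total, rand_t = walk(rand_order)
assert seq_total == rand_total  # identical work, different access pattern
print(f"sequential: {seq_t:.3f}s, random: {rand_t:.3f}s")
```

If something like this is what's happening inside messiah:render, a working set that spills out of the Athlon's smaller cache but fits in the Pentium M's 1MB L2 could explain a gap this size.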
So, anybody know the scoop on this? Needless to say, I'm happy that I now have a very serviceable animation and render box that I can carry around with me.