What is shown in that demo can be done with any camera that senses Z-depth; the Forge developers themselves have said they will soon support depth cameras other than PrimeSense.
This means that a variety of dual-lens stereo 3D cameras, for example, could be used for exactly the same thing.
What matters is the software app that actually extracts the 3D mesh and texture, tracks the camera position, and so on. The quality of the mesh and texture processing is what counts, not so much the depth camera itself.
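The starting point for that kind of software is turning the camera's raw depth image into 3D geometry. As a rough illustration (not Forge's actual code, and the intrinsic parameters fx, fy, cx, cy are made-up values), here is a minimal sketch of back-projecting a depth map into a point cloud with a pinhole camera model:

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth image (in meters) into camera-space 3D points
    using a pinhole camera model. Pixels with depth == 0 are treated as
    invalid and dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]

# Toy example: a 2x2 depth map with one invalid pixel
depth = np.array([[1.0, 2.0],
                  [0.0, 1.5]])
cloud = depth_to_point_cloud(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
print(cloud.shape)  # (3, 3): three valid pixels, each an (x, y, z) point
```

From there the app fuses many such point clouds, using tracked camera poses, into a single mesh with texture, which is where the real quality differences between products come from.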
ARKit is of no use to me whatsoever.
I have the money to buy Apple gear, but I won't. I'm a Windows/Android user, so Apple products are of no use to me.