Brekel Kinect v2 videos


A new set of apps for the Kinect for Windows v2 and Kinect for Xbox One sensor has been released.


Brekel Pro Body v2 does real-time markerless body motion capture of up to 6 people from your living room or office using a Kinect sensor.


Pro PointCloud v2 records 3D pointclouds and exports them to popular mesh-cache and particle-cache formats for use in most 3D packages.

More info, trials, and example data here:


Hello Jasper,

I have a question about PointCloud v2. Is there a way to take just a single snapshot as opposed to recording a sequence? I used that feature a lot in v1 and couldn’t locate it in v2 (1st beta).

Thanks and congrats on release!


Hi Dave,

Uhm nope, but the only reason is I haven’t thought of it :slight_smile:
Seems like a useful addition for the next update.

You can of course record a really short sequence and when exporting just write a single frame.
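Until a snapshot button exists, a recorded single frame could even be dumped by hand; as a rough illustration (this is not Brekel’s own exporter, just a generic sketch), one frame of pointcloud data can be written as an ASCII PLY file, which most 3D packages can import:

```python
# Hypothetical sketch (not Brekel's exporter): write one pointcloud frame
# as an ASCII PLY file, a format most 3D packages can import.

def write_ply(path, points):
    """Write a list of (x, y, z) tuples to an ASCII PLY file."""
    with open(path, "w") as f:
        f.write("ply\n")
        f.write("format ascii 1.0\n")
        f.write("element vertex %d\n" % len(points))
        f.write("property float x\n")
        f.write("property float y\n")
        f.write("property float z\n")
        f.write("end_header\n")
        for x, y, z in points:
            f.write("%f %f %f\n" % (x, y, z))

# Example: a tiny three-point "snapshot"
write_ply("snapshot.ply", [(0.0, 0.0, 1.5), (0.1, 0.0, 1.5), (0.0, 0.1, 1.5)])
```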



Hi Jasper,

This looks very cool. I was wondering, can your point cloud app capture from Kinect Studio 2.0 playback? I have some Kinect 2 material I shot and captured using the studio that I would love to run through this.


How fast can you move before the tracking breaks down? If you throw a fast punch, will it record it?


Hi Jasper.

I have some questions :slight_smile:

  • What is the capture area size? (approx)

  • Are you planning a multi-camera system (even if it’s based on multiple PCs) to increase capture area and fidelity? (I saw that the current Brekel Pro fidelity is pretty neat with Kinect 2)

It looks pretty amazing, and a big leap from Pro 1, congrats :slight_smile:



Is there a way to use this to scan a room? So pick up the camera, take it round your room, and get a 3D textured model of a location? That would be hugely useful! You could also zoom into key areas for more quality, etc.



Yes that should be no problem.

When you play back a clip in Kinect Studio 2.0 it is presented to the application as if it were coming from a live sensor, so the app shouldn’t even see the difference.



The sensor itself records at 30fps and the tracking algorithm can recover in a single frame.

Fast punches shouldn’t be too much of a problem as long as the arm/hand is visible, even if it’s generating motionblur it should pick it up at the start/end and fill the gap.

It may be important to play with the smoothing filtering for these types of motions though, so as not to over-smooth them.
Or even record without filtering and apply it afterwards in your favourite 3D app so you have more selective control.
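As an illustration of that after-the-fact filtering (a generic sketch, not Brekel’s built-in filter), a simple exponential moving average over recorded joint positions looks like this; an `alpha` near 1.0 trusts the raw data, lower values smooth more heavily:

```python
# Generic smoothing sketch (not Brekel's built-in filter): exponential
# moving average over a recorded sequence of (x, y, z) joint positions.
# alpha near 1.0 keeps the raw motion; lower values smooth more heavily.

def smooth(positions, alpha=0.3):
    out = [positions[0]]  # first frame passes through unchanged
    for p in positions[1:]:
        prev = out[-1]
        out.append(tuple(alpha * c + (1.0 - alpha) * q
                         for c, q in zip(p, prev)))
    return out

# A fast punch: the hand jumps 1 m in a single frame; smoothing spreads
# that jump over several frames, which is why too much of it hurts punches.
raw = [(0.0, 0.0, 2.0), (1.0, 0.0, 2.0), (1.0, 0.0, 2.0)]
print(smooth(raw, alpha=0.5))
```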



Yes both the sensor hardware and software have greatly improved over the previous generation in my opinion.

The capture area is cone shaped (like with any camera) and the v2 lenses are much wider than on the previous generation sensors.

To give you some ballpark figures:

Regarding distance, you can capture a full body between about 1.25 and 4.5 meters (and as close as 50 cm for upper body only).
The width is about 3 meters for a full body when close to the sensor and about 6 meters when further away.
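Those ballpark figures line up with simple camera geometry; assuming the commonly cited Kinect v2 horizontal depth field of view of roughly 70.6 degrees (an assumption on my part, not an official Brekel spec), the usable width at distance d is about 2·d·tan(fov/2):

```python
import math

# Rough geometry check. Assumes a ~70.6 degree horizontal depth FOV,
# a commonly cited Kinect v2 figure -- an assumption, not a Brekel spec.
def capture_width(distance_m, hfov_deg=70.6):
    return 2.0 * distance_m * math.tan(math.radians(hfov_deg / 2.0))

print(round(capture_width(2.0), 1))  # roughly 3 m near the sensor
print(round(capture_width(4.5), 1))  # roughly 6 m at the far end
```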

Behind the scenes I am experimenting with multi-sensor setups, at the moment this is in a highly experimental internal pre-alpha stage but it looks promising. After working on Pro Face 2 I will pick this up further.



Nope, Pro PointCloud is designed to record full motion 3D pointclouds, not to combine multiple viewpoints into a static mesh.

There are several good scanning applications on the market already so it didn’t make sense for me to compete with them.

The ones I know of are:

  • Kinect Fusion Explorer (comes with the Microsoft SDK)
  • Skanect
  • ReconstructMe
  • KScan 3D
  • Artec Studio

I’m not sure if all of those apps support the v2 sensor at this moment but I know some do.


What is the advantage over the iPi Soft program?
Is there a post-process available like iPi Soft’s, to smooth (for foot slides)?


iPi and Brekel internally use very different algorithms and have different workflow philosophies.
I suggest you simply try both and see what you like, but here are some differences:


  • operates in realtime and needs no offline processing, and comes with smoothing filtering btw
  • starts tracking instantaneously when a body is seen, no initialization is needed
  • has little to no learning curve (select format, stand in front of sensor, hit record)
  • can track 1-6 bodies simultaneously
  • produces higher fidelity data including shoulder, head, and hand movement without the need for additional hardware
  • is a bit more susceptible to occlusions compared to the much higher-priced multi-sensor iPi (which is not available for Kinect 2 yet btw)
  • can track hand states: open/closed/lasso (two fingered pointing) and drive finger joints using various poses
  • can do simple 2D face tracking to enhance head rotations (also exported to FBX)
  • can record pointcloud data and export to various mesh and particle cache formats using Pro PointCloud
  • can record audio in sync
  • exports to the much better FBX file format (incl hand states, face tracking and additional state info), as well as BVH (skeleton only)
  • can export to TXT and CSV formats
  • does not do biomechanical analysis
  • no subscription service, you simply buy and own a permanent license old-school-style

Btw if you’re experiencing foot slides try:

  • capture on a matte black floor surface instead of a shiny floor
  • make sure you don’t wear baggy pants, the better the ankle/foot can be distinguished the more accurate the foot data
  • use a retargeting process that can handle positions as well as rotations (Maya/MotionBuilder for example)


Thanks Jasper, these are great news!!! Count me in as soon as I can gather three Kinect v2 :smiley:

And please, don’t migrate to SaaS licensing policy :stuck_out_tongue:



Thanks for the answer! I was wondering, with the new multi-Kinect setup you are working on, would it be very difficult to offset half of the Kinects by half a frame, thus giving you 60fps? Or, if you have 4, by a quarter frame to theoretically get 120fps?

I know the FPS issue is why another cheap mo-cap solution uses PS Eye cameras because they run at 60fps (though I imagine the PS Eyes have less fidelity than the Kinect v2) but they aren’t doing what I am proposing so I have no idea whether it’s possible.

What do you think? Possible?


A consumer device like the Kinect v2 (or any other kind of webcam or depth camera for that matter) doesn’t provide that kind of precision control over the shutter, and certainly nothing as simple as control over USB/driver/SDK.

The PS Eye is a 2D camera so it certainly provides much less data, but it can run at 60fps (even higher I believe).


Just hoping for smart people like yourself to rig stuff together with what’s available to compete with the 10k+ solutions, hehe. Hoping to test the cheap solutions that are out there, including Brekel, to see if they can do what I want, as well as look around to see if any places near me would be willing to rent out mo-cap stages… The biggest problem I see is the framerate for capturing fast motions accurately. I don’t mind using your clever hand mo-cap system with Leap to capture hand animation separately – not ideal, but I believe it would work for budget-constrained people like myself :).

I realized it was a far fetched idea, the frame-rate offset. You’re doing great things though, keep it up! I’m always following your updates.


ipisoft is now rental-only.

I wouldn’t support ANY forced rental only system.

Brekel looks pretty good!


Being an Autodesk and Adobe user myself I’m also not a fan of subscription services.
So I’ll probably be the last to change my apps to that.

I believe more in the oldschool way of a single purchase of a permanent license: free updates within the same major version, a paid upgrade for a major update (when a new sensor comes out that requires a full rewrite).


Well… with this ONE POST you’ve probably endeared yourself to MANY more users - especially those who purchased or were going to purchase iPi Soft. In fact I’ll probably purchase from Brekel simply because you chose to keep your software as a license and not a rental!

Any chance of supporting Asus XTion Live Pro?