Microsoft as sponsor & a general update

First things first: the guys over at Microsoft have been very kind to the project and actually sponsored a couple of Kinect 2 sensors! I am quite grateful for their support (and also their support in other matters, of course)!

Otherwise the system has been up and running for a long while now, and a lot of work has gone into the software that connects the multiple Kinects, Unity and the Oculus Rift.

[Image: IMAG0205]

There is one Kinect 2 camera at each corner of the 3.5 m × 3.5 m space, pointing towards the middle. A person standing in the middle is therefore recorded from four different sides.

Every computer with a Kinect 2 captures a color and depth image (plus the corresponding mapping data, so we know where each point actually lies in 3D space). This data is sent to the central computer and reconstructed as a mesh. Basically, the depth data is used to build the mesh (a network of triangles/vertices), and the color image is then mapped onto that mesh (UV mapping of the color image pixels to the mesh vertices).
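To make that step concrete, here is a minimal sketch in Python/NumPy of how a depth image can be turned into a textured mesh. This is an illustration, not the actual production code (which runs in Unity); the function name `depth_to_mesh`, the pinhole intrinsics `fx, fy, cx, cy` and the `max_edge` threshold are my own assumptions — in practice the Kinect SDK's coordinate mapper handles the pixel-to-3D mapping.

```python
import numpy as np

def depth_to_mesh(depth, fx, fy, cx, cy, max_edge=0.05):
    """Turn a depth image (in metres) into vertices, triangles and UVs.

    fx/fy/cx/cy are assumed pinhole intrinsics -- the Kinect SDK's
    coordinate mapper would normally do this unprojection for us.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))

    # Unproject every depth pixel into camera space.
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    vertices = np.stack([x, y, z], axis=-1).reshape(-1, 3)

    # UVs: each vertex simply samples the color image at its own pixel.
    uvs = np.stack([u / (w - 1), v / (h - 1)], axis=-1).reshape(-1, 2)

    # Two triangles per 2x2 pixel block; skip quads with invalid depth
    # or spanning a depth discontinuity (edge longer than max_edge).
    idx = lambda r, c: r * w + c
    triangles = []
    for r in range(h - 1):
        for c in range(w - 1):
            quad = z[r:r + 2, c:c + 2]
            if (quad <= 0).any() or np.ptp(quad) > max_edge:
                continue
            triangles.append((idx(r, c), idx(r, c + 1), idx(r + 1, c)))
            triangles.append((idx(r + 1, c), idx(r, c + 1), idx(r + 1, c + 1)))
    return vertices, np.array(triangles), uvs
```

The discontinuity check is the important part: without it, neighbouring pixels that belong to different surfaces (e.g. an arm in front of the torso) get stitched together into long stretched triangles.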

What we get are four separate 3D models. If the person in the middle is facing one camera, we get one 3D model of his front (face/stomach), one of his left side (left arm/leg), one of his right side (right arm/leg) and one of his back. The tricky part is combining these four overlapping meshes into one combined 3D model (which can produce unexpected results 🙂 and yes, I used the Tuscany demo in the beginning as well 😉).

[Image: Screenshot 2015-04-09-02]

Before this can actually be done, the 3D space has to be calibrated and aligned. That means finding a point of origin (for instance some point in the middle) and the correct rotation (in at least two axes). Currently I use colored markers for that; each is approximately 1 cm wide and they are about 1 m apart from each other.
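For illustration, one standard way to compute such an alignment from matched marker positions is the Kabsch algorithm, which finds the best-fit rotation and translation between two point sets via an SVD. The post doesn't say which method the project actually uses, so treat this as a hedged sketch:

```python
import numpy as np

def rigid_align(src, dst):
    """Best-fit rotation R and translation t so that R @ p + t maps
    points p in src onto dst.

    src, dst: (N, 3) arrays of matched marker positions, e.g. the
    colored calibration points as seen by two different Kinects.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                 # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1, 1, d]) @ U.T  # the diag guards against reflections
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

With the ~1 m marker spacing described above, even a small angular error in R translates into a visible offset at the edges of the capture space, which is exactly why the merge step has to tolerate calibration being slightly off.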

After the calibration is done and the software knows how to bring the separate 3D models into one shared 3D space, there are a lot of overlapping areas, and the software has to decide which surfaces to keep and which to delete and/or combine. This is quite tricky, especially considering that the calibration can be slightly off.
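One simple heuristic for thinning out the overlaps — an assumption on my part, not necessarily what the project does — is to drop the triangles a camera only sees at a grazing angle, since those surfaces are low quality and are usually covered much better by the neighbouring camera:

```python
import numpy as np

def drop_grazing_triangles(vertices, triangles, cam_pos, min_cos=0.35):
    """Remove triangles that cam_pos only sees nearly edge-on.

    vertices: (V, 3) in the shared world space; triangles: (T, 3) indices;
    cam_pos: (3,) camera origin. min_cos is a hypothetical threshold
    (0.35 keeps triangles seen within ~70 degrees of head-on).
    """
    tri = vertices[triangles]                         # (T, 3, 3)
    normals = np.cross(tri[:, 1] - tri[:, 0], tri[:, 2] - tri[:, 0])
    normals /= np.linalg.norm(normals, axis=1, keepdims=True) + 1e-12
    centers = tri.mean(axis=1)
    to_cam = cam_pos - centers
    to_cam /= np.linalg.norm(to_cam, axis=1, keepdims=True) + 1e-12
    # cos of the angle between surface normal and view direction;
    # abs() because the triangle winding may be inconsistent.
    cos_view = np.abs((normals * to_cam).sum(axis=1))
    return triangles[cos_view >= min_cos]
```

Running this per camera before merging leaves each patch of the body mostly to the camera that sees it best, which is roughly the behaviour the screenshots below show converging over time.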

[Image: IMAG0448]

In the image above you can see an early version that didn't manage to delete all the right triangles and combine them when needed. But by now this works pretty well and is a really good first step towards filming 3D-animated models!

Combining the Oculus Rift with Unity 5 worked pretty well, especially since I used a Unity plugin to view the game fullscreen on the Rift the moment I hit play in the Unity editor (before that, the game window had to be dragged to the Rift's extended display and wasn't fullscreen, so you could still see the borders, meh).

[Image: Holodeck_I_see_you]

You can see the result on the left (well, at least a bit). The picture was taken at the re:publica 2015 conference (over 7000 visitors, wow!), where we had the best spot in the middle to showcase the very first demo of our Holodeck! I'll write a separate blog post about it.

2 Comments:

  1. Nice article! I was wondering whether you are experiencing interference between your Kinects? Thanks.

  2. No, for us it seems to work without any problems. We only got interference in connection with the Vive, but otherwise it’s fine.
