
Kinect and WPF: Skeleton tracking using the official Microsoft SDK

It's official! Microsoft has released the Kinect SDK we have all been waiting for! The Kinect SDK offers natural user interaction and audio APIs, and it can also be used with the existing Microsoft Speech API. Today we'll see how to create a WPF application performing skeleton tracking.

Not a surprise, the official SDK provides an API similar to OpenNI's. That's pretty cool for me (and anyone following my blog), because not much has to be learned from scratch.

• Download demo project.

Kinect skeleton tracking

Step 0

Uninstall any previous Kinect drivers (PrimeSensor, CL NUI, OpenKinect, etc.).

Step 1

Download the official Kinect SDK and install it. System requirements:

  • Kinect sensor
  • Computer with a dual-core, 2.66-GHz processor
  • Windows 7–compatible graphics card that supports DirectX® 9.0c capabilities
  • 2-GB RAM

IMPORTANT: Remember to restart your PC after the installation!

Step 2

Launch Visual Studio and create a new WPF application.

Step 3

Add a reference to the Microsoft.Research.Kinect assembly, found under the .NET tab. Do not forget to include its namespace in your .xaml.cs file. I have only included the Nui namespace, as we do not currently need the audio capabilities.

[code lang="c#"] using Microsoft.Research.Kinect.Nui; [/code]
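
The handlers later in this post also use a few standard WPF types (BitmapSource, PixelFormats, Colors, Ellipse), so the code-behind needs the usual WPF namespaces as well:

[code lang="c#"] using System.Windows.Media;         // PixelFormats, Colors, SolidColorBrush
using System.Windows.Media.Imaging; // BitmapSource
using System.Windows.Shapes;        // Ellipse [/code]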

Step 4

It's time to create the user interface: an image displaying the raw camera output, and a canvas overlaid on top of it displaying the users' joints:

[code lang="xml"] <Grid>
    <Image Name="img" Width="640" Height="480" />
    <Canvas Name="canvas" Width="640" Height="480" />
</Grid> [/code] 

Step 5

We are up to the most interesting part right now! Let's see how to obtain the raw camera image and how to perform skeleton tracking. Open your .xaml.cs file and start typing.

The Kinect API offers a Runtime object, which will accomplish the mission:

[code lang="c#"] Runtime _nui = new Runtime(); [/code]

After that, we have to initialize the Runtime object and then open the video stream:

[code lang="c#"] _nui.Initialize(RuntimeOptions.UseDepthAndPlayerIndex | RuntimeOptions.UseSkeletalTracking | RuntimeOptions.UseColor);
_nui.VideoStream.Open(ImageStreamType.Video, 2, ImageResolution.Resolution640x480, ImageType.Color); [/code]

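Note that the UseDepthAndPlayerIndex flag is what feeds the skeleton engine. If you also want to process the depth frames yourself, you can open the depth stream the same way (not required for this demo; 320×240 with player index is a typical choice when skeletal tracking is enabled):

[code lang="c#"] _nui.DepthStream.Open(ImageStreamType.Depth, 2, ImageResolution.Resolution320x240, ImageType.DepthAndPlayerIndex); [/code]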

Finally, we need to define the proper event handlers for the camera image display and the skeleton recognition. Pretty simple:

[code lang="c#"] _nui.VideoFrameReady += new EventHandler<ImageFrameReadyEventArgs>(Nui_VideoFrameReady);
_nui.SkeletonFrameReady += new EventHandler<SkeletonFrameReadyEventArgs>(Nui_SkeletonFrameReady); [/code]
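
For completeness, here is how all of the above might be wired together in the window's code-behind. This is just a sketch (the downloadable project is organized slightly differently); the Closed handler releases the sensor via Runtime.Uninitialize when the application exits:

[code lang="c#"] public partial class MainWindow : Window
{
    Runtime _nui = new Runtime();

    public MainWindow()
    {
        InitializeComponent();

        // Enable the subsystems we need, then open the color stream.
        _nui.Initialize(RuntimeOptions.UseDepthAndPlayerIndex | RuntimeOptions.UseSkeletalTracking | RuntimeOptions.UseColor);
        _nui.VideoStream.Open(ImageStreamType.Video, 2, ImageResolution.Resolution640x480, ImageType.Color);

        // Hook up the frame-ready events.
        _nui.VideoFrameReady += new EventHandler<ImageFrameReadyEventArgs>(Nui_VideoFrameReady);
        _nui.SkeletonFrameReady += new EventHandler<SkeletonFrameReadyEventArgs>(Nui_SkeletonFrameReady);

        // Release the sensor when the window closes.
        Closed += (s, e) => _nui.Uninitialize();
    }
} [/code]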

Here follows the implementation of each method. They are self-explanatory and quite similar to what my Nui.Vision library does.

[code lang="c#"] void Nui_VideoFrameReady(object sender, ImageFrameReadyEventArgs e)
{
    // Wrap the raw frame bytes in a WPF BitmapSource and display it;
    // the last argument is the stride (bytes per row).
    var image = e.ImageFrame.Image;
    img.Source = BitmapSource.Create(image.Width, image.Height, 96, 96, PixelFormats.Bgr32, null, image.Bits, image.Width * image.BytesPerPixel);
} [/code]


[code lang="c#"] void Nui_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    // Redraw from scratch on every frame.
    canvas.Children.Clear();

    foreach (SkeletonData user in e.SkeletonFrame.Skeletons)
    {
        // Only draw users the skeleton engine is actively tracking.
        if (user.TrackingState == SkeletonTrackingState.Tracked)
        {
            foreach (Joint joint in user.Joints)
            {
                DrawPoint(joint, Colors.Blue);
            }
        }
    }
} [/code]
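
The DrawPoint helper is omitted from this post (see the note below), but here is a minimal sketch of what it could look like. It uses the SDK's SkeletonEngine.SkeletonToDepthImage to map a joint to normalized 0–1 screen coordinates; the demo project instead scales the raw joint positions, following the Coding4Fun Kinect Toolkit idea mentioned at the end:

[code lang="c#"] // Hypothetical helper: maps a joint to canvas pixels and draws an ellipse there.
void DrawPoint(Joint joint, Color color)
{
    // SkeletonToDepthImage returns coordinates normalized to the 0..1 range.
    float x, y;
    _nui.SkeletonEngine.SkeletonToDepthImage(joint.Position, out x, out y);

    var ellipse = new Ellipse
    {
        Width = 10,
        Height = 10,
        Fill = new SolidColorBrush(color)
    };

    // Center the ellipse on the joint's on-screen position.
    Canvas.SetLeft(ellipse, x * canvas.Width - ellipse.Width / 2);
    Canvas.SetTop(ellipse, y * canvas.Height - ellipse.Height / 2);

    canvas.Children.Add(ellipse);
} [/code]

Passing a different Color per joint (for example, based on joint.ID) would give each joint its own color.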

Done! Build and run your project. Download the demo with source code.

Attention: I have omitted some lines of code from this blog post in order to keep it clear. I suggest you download the sample project and have a look at it. You'll find the conversion of the X, Y and Z-axis values from centimetres to pixels quite interesting. In my example, I used the basic idea from the Coding4Fun Kinect Toolkit.

  • Anonymous

    Hi Vangos,

    I need to connect the joints appeared on the frame to form a skeleton structure, please help me how to achieve this.

    Thanks,

    Bharat.

  • Anonymous

    Hi Vangos!  I was using your Nui.Vision library until the Kinect SDK landed, and I wanted to know about some of the differences.  Mainly, I don't think the Z positions of joints are the same values as what you had.  I'm trying to get a sprite to move along the screen's Y axis in relation to how close or far away you are to the kinect camera.  My calculations were fine with your library, but with MS SDK, I can't seem to get the math right.  I think their depth values range from like 800 to 4000 or something.  Any ideas?



  • Anonymous

    there is error in canvas.children.clear();

  • Anonymous

    Is there anyone way to make the joint colors different

    Harry