Kinect Camera Fundamentals

Overview

Development is now well underway. This post covers how the Kinect camera has been used to gather data, and how actions can be performed based on the depth data, depending on variables.

Background

The next step is to understand how the Kinect can show video data from its RGB camera.
The system uses an "eventing" system within the Kinect SDK that synchronizes all the data, such as the depth and colour image data. The colour camera can run at 1280x960 RGB at 12 FPS, raw YUV 640x480 at 15 FPS, or 640x480 RGB at 30 FPS. The depth camera can run at 80x60, 320x240, or 640x480, all at 30 FPS. When using multiple cameras (depth and colour), the combined event only fires when both cameras have data, so it runs at the rate of the slowest stream. This means the event fires 12 times per second when using 1280x960 RGB at 12 FPS.
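As a rough sketch of how this wiring looks (assuming Kinect SDK v1.x; the enum values and event names below come from that SDK):

            KinectSensor sensor = KinectSensor.KinectSensors[0];

            // Enable both streams; AllFramesReady fires at the rate of the slowest one.
            sensor.ColorStream.Enable(ColorImageFormat.RgbResolution1280x960Fps12);
            sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);

            sensor.AllFramesReady += (s, e) =>
            {
                // With the colour stream at 12 FPS, this handler fires 12 times per second.
            };

            sensor.Start();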

Activity

Kinect RGB Camera

To display the Kinect RGB camera feed, I have found there are two ways to do it.
The first is to use the Kinect API directly. The code below copies the frame's pixel data into a byte array, calculates the stride, and creates a new bitmap for each frame.
            //This creates a bitmap roughly 30 times per second
            using (ColorImageFrame colorframe = e.OpenColorImageFrame())
            {
                if (colorframe == null) //frame was lost (dropped)
                {
                    return;
                }

                //copy the raw pixel data out of the frame
                byte[] pixels = new byte[colorframe.PixelDataLength];
                colorframe.CopyPixelDataTo(pixels);

                //stride is how many bytes make up one row (4 bytes per Bgr32 pixel)
                int stride = colorframe.Width * 4;
                //Bgra32 can be used instead to add transparency

                image1.Source = BitmapSource.Create(colorframe.Width, colorframe.Height, 96, 96, PixelFormats.Bgr32, null, pixels, stride);
            }

The second way is to use a pre-made viewer control from Microsoft that deals with the API; you only have to add it to the XAML file and data-bind it to the Kinect sensor.

Kinect Depth Camera

One of the main attractions of the Kinect is the depth data that can be used and manipulated. Each depth pixel encodes both the distance and the player index (i.e. the pixel represents the distance and which player, if any, is there) (source: Microsoft Channel 9 video). The image shown is populated with colours that represent the distances.
Similar to the RGB camera there are two ways. The first is to use the API; the complete code is too big to post, but the snippet below shows the per-pixel processing that sets the colour depending on distance:

                //.9M or 2.95'
                if (depth <= 900)
                {
                    //we are very close
                    pixels[colorIndex + BlueIndex] = 255;
                    pixels[colorIndex + GreenIndex] = 0;
                    pixels[colorIndex + RedIndex] = 0;
                }
                // .9M - 2M or 2.95' - 6.56'
                else if (depth < 2000)
                {
                    //we are a bit further away
                    pixels[colorIndex + BlueIndex] = 0;
                    pixels[colorIndex + GreenIndex] = 255;
                    pixels[colorIndex + RedIndex] = 0;
                }
                // 2M+ or 6.56'+
                else
                {
                    //we are the farthest
                    pixels[colorIndex + BlueIndex] = 0;
                    pixels[colorIndex + GreenIndex] = 0;
                    pixels[colorIndex + RedIndex] = 255;
                }
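For context, the `depth` value used above comes from unpacking each raw 16-bit depth pixel. A minimal sketch, assuming Kinect SDK v1.x (where the low 3 bits are the player index and the upper 13 bits are the distance in millimetres); `rawDepthData` and `depthIndex` are hypothetical names for the copied depth array and the current pixel:

                short raw = rawDepthData[depthIndex];
                //distance in millimetres (shifts the player bits away)
                int depth = raw >> DepthImageFrame.PlayerIndexBitmaskWidth;
                //0 = no player at this pixel, 1-7 = tracked player index
                int player = raw & DepthImageFrame.PlayerIndexBitmask;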



As with the RGB camera, the second way is to use a pre-made viewer control from Microsoft that deals with the API; you only have to add it to the XAML file and data-bind it to the Kinect sensor.

After hours of fiddling               

Along with the depth data received from the Kinect, each pixel carries a player index indicating which player, if any, was detected there (if skeleton tracking is enabled). After playing with the colouring of different depths, I wanted to detect whether a player was present and colour them differently. This was possible simply by reading the player index from the depth data array and changing the colours in the pixel array accordingly.
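A sketch of that per-pixel check (not the original code): it assumes the `pixels` array and colour index constants from the earlier snippet, plus a hypothetical `rawDepthData`/`depthIndex` for the raw depth array, and tints any pixel whose player index is non-zero:

                int player = rawDepthData[depthIndex] & DepthImageFrame.PlayerIndexBitmask;
                if (player > 0)
                {
                    //yellow marks a detected player (BGR: blue 0, green 255, red 255)
                    pixels[colorIndex + BlueIndex] = 0;
                    pixels[colorIndex + GreenIndex] = 255;
                    pixels[colorIndex + RedIndex] = 255;
                }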
A problem arose when trying to change a value whenever a player was detected: because this check ran on every pixel, it executed thousands of times a second, which caused performance issues.
I found a new approach from Microsoft that counts the number of players detected through the skeleton tracking system on a timer. Using a timer, every 2 seconds the program would, on the next frame, count the tracked skeletons in the array (of size "SkeletonArrayLength") to determine how many players were tracked.
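A sketch of that timer-gated counting, assuming Kinect SDK v1.x with the skeleton stream enabled; `checkDue` is a hypothetical flag introduced here to throttle the work to once per timer tick:

            var timer = new System.Windows.Threading.DispatcherTimer
            {
                Interval = TimeSpan.FromSeconds(2)
            };
            timer.Tick += (s, e) => checkDue = true;
            timer.Start();

            void sensor_SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
            {
                if (!checkDue) return;
                checkDue = false;

                using (SkeletonFrame frame = e.OpenSkeletonFrame())
                {
                    if (frame == null) return;

                    var skeletons = new Skeleton[frame.SkeletonArrayLength];
                    frame.CopySkeletonDataTo(skeletons);

                    //only fully tracked skeletons count as players
                    int players = skeletons.Count(sk => sk.TrackingState == SkeletonTrackingState.Tracked);
                }
            }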
This led to experimenting with a scenario: if a user was at a desk, the Kinect would detect them; if the user then left the desk, the computer would lock automatically, as the Kinect could no longer track any users. The only issues were the distance the Kinect had to be positioned from the desk to detect the user, and its angle. Although altering the Kinect's angle is possible, Microsoft warn that this feature should not be used excessively, as the motor is not designed for frequent movement.
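The lock step itself can be sketched with the standard Win32 call (LockWorkStation in user32.dll); `players` here is assumed to be the count from the skeleton check described above:

            [System.Runtime.InteropServices.DllImport("user32.dll")]
            static extern bool LockWorkStation();

            if (players == 0)
            {
                //no one is tracked, so lock the machine
                LockWorkStation();
            }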
