Now that we’ve outgrown the controls in the Interaction Gallery and exhausted the ways KinectRegion can drive our own controls, we get into the thick of it and look at the InteractionStream itself. The nice thing is that the stream isn’t tied to WPF: you could consume the stream data from a console app or an XNA game. From it we get data for multiple users, both of each user’s hands, and the state each hand is in.
Ben from Microsoft has already written a great post on how to use the data here. Rather than repeat him, I’ll let you read that and just note a few things that struck me when I first used it.
1. It doesn’t follow the same pattern as the other streams – most of us are used to enabling a stream and handling its FrameReady event. This one looks a bit different:
```csharp
_interactionStream = new Microsoft.Kinect.Toolkit.Interaction.InteractionStream(
    e.NewSensor, new MyInteractionClient());
_interactionStream.InteractionFrameReady += InteractionFrameReady;
```
First, we keep a reference to the stream, and we have to give it something that implements the IInteractionClient interface.
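The interface has a single method the stream calls to ask what sits under a hand pointer. As a minimal sketch (MyInteractionClient and its "everything is a target" behaviour are mine, not the SDK’s):

```csharp
using Microsoft.Kinect.Toolkit.Interaction;

// Minimal IInteractionClient: the InteractionStream calls this to ask what
// kind of interactive element is at a given hand-pointer location. A real
// implementation would hit-test your UI; this stub just says every location
// is both a grip target and a press target.
public class MyInteractionClient : IInteractionClient
{
    public InteractionInfo GetInteractionInfoAtLocation(
        int skeletonTrackingId, InteractionHandType handType, double x, double y)
    {
        return new InteractionInfo
        {
            IsGripTarget = true,
            IsPressTarget = true
        };
    }
}
```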
2. The InteractionFrameReady event isn’t “it” – with the other streams, like depth, we handle DepthFrameReady and we’re done. With InteractionStream, if you only do that, nothing useful happens. To get anything useful you also need to hook up both the SkeletonFrameReady and DepthFrameReady events and feed their data back into the interaction stream. In the skeleton handler the key bits are:
```csharp
skeletonFrame.CopySkeletonDataTo(_skeletons);
var accelerometerReading = _sensor.AccelerometerGetCurrentReading();
_interactionStream.ProcessSkeleton(_skeletons, accelerometerReading, skeletonFrame.Timestamp);
```
We need to copy the skeleton data somewhere for later and tell the interaction stream to process the skeletons, along with the current accelerometer reading so it knows how the sensor is tilted.
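Put together, my handler looks roughly like this – note the null check, since OpenSkeletonFrame can return null if the frame has already expired. Here _sensor, _skeletons and _interactionStream are fields set up elsewhere (the array is sized from the SkeletonStream):

```csharp
// Field, allocated once after the sensor is available:
// _skeletons = new Skeleton[_sensor.SkeletonStream.FrameSkeletonArrayLength];

private void SkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    using (SkeletonFrame skeletonFrame = e.OpenSkeletonFrame())
    {
        if (skeletonFrame == null)
            return; // frame expired before we got to it

        skeletonFrame.CopySkeletonDataTo(_skeletons);
        var accelerometerReading = _sensor.AccelerometerGetCurrentReading();
        _interactionStream.ProcessSkeleton(
            _skeletons, accelerometerReading, skeletonFrame.Timestamp);
    }
}
```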
In the depth handler it’s similar: we copy the depth pixels out and tell the InteractionStream to process the depth data.
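As a sketch of both remaining pieces – the array-sizing constants in the comments are from memory, so check them against the toolkit – the depth handler feeds pixels in, and once both feeds are flowing, the InteractionFrameReady handler finally has users and hand pointers to report:

```csharp
// Fields, allocated once:
// _depthData = new DepthImagePixel[_sensor.DepthStream.FramePixelDataLength];
// _userInfos = new UserInfo[InteractionFrame.UserInfoArrayLength];

private void DepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
{
    using (DepthImageFrame depthFrame = e.OpenDepthImageFrame())
    {
        if (depthFrame == null)
            return;

        depthFrame.CopyDepthImagePixelDataTo(_depthData);
        _interactionStream.ProcessDepth(_depthData, depthFrame.Timestamp);
    }
}

private void InteractionFrameReady(object sender, InteractionFrameReadyEventArgs e)
{
    using (InteractionFrame frame = e.OpenInteractionFrame())
    {
        if (frame == null)
            return;

        frame.CopyInteractionDataTo(_userInfos);
    }

    foreach (UserInfo user in _userInfos)
    {
        if (user.SkeletonTrackingId == 0)
            continue; // empty slot in the array

        foreach (InteractionHandPointer hand in user.HandPointers)
        {
            if (!hand.IsTracked)
                continue;

            // HandType is Left/Right; HandEventType reports Grip and
            // GripRelease as the hand's state changes.
            System.Diagnostics.Debug.WriteLine("{0} {1}: {2}",
                user.SkeletonTrackingId, hand.HandType, hand.HandEventType);
        }
    }
}
```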