The human eye is highly sensitive to the human form. Even from a distance, we can immediately recognize whether someone is there. Even when a person stands perfectly still, we can instantly tell them apart from trees, lamps, chairs, and tables. Computers are not nearly so clever. Teaching them to find the human form, and to analyze what it is doing, is what is meant by “motion tracking”.
The underlying technology is based on analyzing differences in light intensity. But if you look around, you may notice that the lighting can be quite chaotic: there are reflections, shadows, and things moving in the background. Even trees swaying outside the window might trigger unwanted sounds!
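To give a feel for why intensity differences alone are fragile, here is a minimal sketch of the simplest form of difference-based motion detection: compare two frames pixel by pixel and flag anything that changed by more than a threshold. The function name, frames, and threshold are illustrative assumptions, not the MotionComposer's actual algorithm; notice that a shifting shadow or reflection would change intensities just as readily as a person would.

```python
import numpy as np

def motion_mask(prev_frame, curr_frame, threshold=25):
    # Flag pixels whose brightness changed by more than `threshold`.
    # Illustrative only: any lighting change (shadow, reflection,
    # swaying tree) also produces intensity differences.
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

# A static background with one small region that changes:
prev = np.zeros((4, 4), dtype=np.uint8)
curr = prev.copy()
curr[1:3, 1:3] = 200              # a bright patch appears
mask = motion_mask(prev, curr)
print(int(mask.sum()))            # → 4 changed pixels
```

A real system needs more than this, which is exactly the problem the stereo approach described next addresses.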
The MotionComposer solves this challenge using stereo-vision technology. Two cameras, working together like our eyes, see the world in three dimensions, and this in turn allows our software to locate where the player is and what they are doing. Even people in wheelchairs can be identified and analyzed according to their expressive gestures, shapes, and movements.
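The core idea behind stereo vision can be summarized in one relation: an object seen by two side-by-side cameras appears shifted between the two images, and the size of that shift (the disparity) reveals its distance. A rough sketch, using hypothetical camera numbers chosen purely for illustration:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    # Pinhole stereo model: depth = focal length * baseline / disparity.
    # Nearby objects shift a lot between the two images (large
    # disparity), distant objects shift little (small disparity).
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical example: 700 px focal length, 6 cm camera spacing,
# and a 30 px shift between the left and right images:
print(depth_from_disparity(700, 0.06, 30))  # → 1.4 (metres)
```

Knowing the distance to each point is what lets the software separate the player from reflections, shadows, and background motion, which sit at different depths.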