Frequently (and not so frequently) Asked Questions
You can order your MC 3.0 by contacting us at:
Please visit the section of the website called “Products” for further information.
The MC 3.0 comes with a 3-year warranty. If you have problems or questions at any time, we are here to help. We want everyone to have a great experience.
1) Connect it to loudspeakers and turn it on,
2) Select the music you want from the tablet-controller menu,
3) Move your body and you will hear your movements as music!
It's that simple.
Yes, anyone. If you can move some part of your body, if only your eyes, then you can create music with the MotionComposer. There are settings to allow different body parts and different levels of activity to be used.
Since there are many kinds of therapy, and many abilities, there is no quick answer to this question. We have held over 50 workshops in hospitals, hospice care centers, memory units, schools for the hearing impaired, special education schools, live-in care centers, and so on. We have also given inclusive performances with artists with and without disabilities. At each event, we learn. You can read about some of our experiences in our publications.
Soon, we will have an online tutorial and a blog where users can learn from each other's experiences.
Finally, a great way to learn about the possibilities is in the form of a workshop. We offer to come to your facility and teach you how to use the MotionComposer in different settings.
The MC has 1-Player and 2-Player modes. For example, one person can play one musical instrument, while the second person plays another instrument.
Having said this, there are ways to use it with groups. This is explained during the introductory workshop and in the guides that come with the MotionComposer.
There are two parts to turning movement into music:
Part one is motion tracking. Video cameras attached to a computer analyze expressive shapes, movements and gestures and turn them into computer data.
Part two is called mapping. This means assigning the different body parts, gestures and movements to musical features, such as the notes of a piano.
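To make the idea of mapping concrete, here is a minimal sketch of a mapping table in Python. The gesture and instrument names are purely illustrative assumptions, not the MotionComposer's actual configuration:

```python
# Hypothetical mapping table: each gesture name is assigned to an
# (instrument, musical action) pair. Illustrative names only.
MAPPING = {
    "right_hand_raise": ("piano", "play_note"),
    "left_hand_sweep": ("strings", "glissando"),
    "head_tilt": ("piano", "sustain_pedal"),
}

def trigger(gesture):
    """Look up which instrument and action a gesture is mapped to."""
    instrument, action = MAPPING.get(gesture, ("none", "ignore"))
    return f"{instrument}: {action}"

print(trigger("right_hand_raise"))  # -> "piano: play_note"
```

In a real system, the tracking software would call something like `trigger()` each time it detects a gesture; here the point is only that the mapping is a lookup from movements to musical features.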
But just playing notes is not the same as making music! This is where our composers come in. Using a technique called algorithmic composition, the software helps users play musically. For example, when you play a note, the software looks for notes that sound good together. The same is true of the player's rhythm: if you play a little too early, the software will correct your mistake to keep you on the beat.
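The two corrections described above can be sketched in a few lines of Python. The C-major scale and the 120 BPM beat grid below are arbitrary example values; the MotionComposer's actual algorithms are more sophisticated:

```python
# Pitch classes of the C-major scale (an assumed example scale).
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]

def snap_to_scale(midi_note, scale=C_MAJOR):
    """Move a MIDI note to the nearest pitch in the scale."""
    octave, pitch_class = divmod(midi_note, 12)
    nearest = min(scale, key=lambda p: abs(p - pitch_class))
    return octave * 12 + nearest

def quantize_to_beat(onset_seconds, bpm=120):
    """Shift a note onset onto the nearest beat of the grid."""
    beat_length = 60.0 / bpm          # seconds per beat
    nearest_beat = round(onset_seconds / beat_length)
    return nearest_beat * beat_length

# A slightly "wrong" performance: F# (MIDI 66) played a bit early.
print(snap_to_scale(66))        # -> 65 (F, which is in C major)
print(quantize_to_beat(0.47))   # -> 0.5 (right on the beat)
```

The first function corrects *which* note sounds, the second corrects *when* it sounds, which together give the "looks for notes that sound good" and "keep you on the beat" behavior described above.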
When this works well, the player has the feeling of hearing their body as music. This is our ultimate goal.
The human eye is sensitive to the human form. Even from a distance, we can immediately recognize if someone is there. Even if they are standing still, we can instantly tell people from trees, lamps, chairs and tables. Computers are not nearly so clever. Teaching them to find the human form, and to analyze what it is doing, is what is meant by “motion tracking”.
The underlying technology is based on analyzing differences in light intensity. But if you look around you, you may notice that the lighting can be quite chaotic. There are reflections, shadows and things moving in the background. Even the trees moving outside the window might cause unwanted sounds!
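As a minimal sketch of "analyzing differences in light intensity", here is frame differencing on two toy brightness arrays. Real systems work on full camera frames, and the threshold value is an arbitrary assumption:

```python
def moving_pixels(prev_frame, curr_frame, threshold=25):
    """Count pixels whose brightness changed by more than the threshold."""
    return sum(
        1
        for prev, curr in zip(prev_frame, curr_frame)
        if abs(curr - prev) > threshold
    )

# Two 4-pixel "frames": only the last pixel changes strongly.
frame_a = [10, 10, 200, 50]
frame_b = [12, 10, 205, 120]
print(moving_pixels(frame_a, frame_b))  # -> 1
```

The small changes (sensor noise, flicker) fall under the threshold and are ignored; only the large change counts as motion. This also shows the weakness mentioned above: a tree swaying outside the window changes intensity too, so a naive detector cannot tell it apart from a player.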
The MotionComposer solves this challenge using stereo-vision technology. This means that two cameras, like our eyes, essentially see the world in three dimensions, and this in turn allows our software to locate where the player is and what they are doing. Even persons in wheelchairs can be identified and analyzed according to expressive gestures, shapes and movements.
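The geometric idea behind stereo vision can be sketched with the standard pinhole-camera relation depth = focal length × baseline / disparity. The numbers below are made-up illustration values, not MotionComposer specifications:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth (meters) of a point seen by both cameras of a stereo pair."""
    if disparity_px <= 0:
        raise ValueError("point must appear shifted between the two views")
    return focal_px * baseline_m / disparity_px

# A point shifted 40 px between the two views, with cameras 0.10 m
# apart and a focal length of 800 px (all assumed example values):
print(depth_from_disparity(800, 0.10, 40))  # -> 2.0 (meters)
```

Because nearby points shift more between the two views than distant ones, the two-camera setup recovers the third dimension, which is how the software can separate the player from the background and from reflections and shadows.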