As motion sensors offer higher performance at lower prices, various applications using 3D motion data captured from such sensors have been introduced. One of these applications is the puppetry problem: when a person performs an action in front of a motion sensor, a non-human character on the screen performs an action with the same meaning. Many approaches have been proposed to solve this problem, and they successfully control non-human characters to carry out the performer’s intention. However, they do not preserve the characters’ original motion patterns. For instance, an elephant, which normally moves in a quadruped manner, walks in a biped, human-like manner under those solutions. In this research, we aim to puppet non-human characters while preserving their original motion patterns. We extract features from a motion unit and map them to the characters’ motions, in contrast to previous research that learns mappings at the pose-unit level. Using this algorithm, motions can be mapped between human and non-human characters while preserving both the semantics and the original patterns of the motions.
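To make the motion-unit idea concrete, the following is a minimal sketch, not the paper’s actual method: it assumes motions are given as pre-segmented units (arrays of shape frames × joints × 3), summarizes each unit with a small hand-picked feature vector, and retrieves the character’s own closest unit by feature distance, so the character plays back its native motion rather than a retargeted human pose. The function names, feature choices, and nearest-neighbour retrieval are all illustrative assumptions.

```python
# Illustrative sketch of motion-unit-level mapping (not the paper's algorithm).
# Assumes each motion unit is a NumPy array of shape (frames, joints, 3).
import numpy as np

def unit_features(unit: np.ndarray) -> np.ndarray:
    """Summarize one motion unit as a small feature vector (assumed features)."""
    velocity = np.diff(unit, axis=0)  # frame-to-frame joint displacement
    return np.array([
        unit.shape[0],                                        # duration in frames
        np.linalg.norm(velocity, axis=-1).mean(),             # mean joint speed
        np.linalg.norm(unit[-1] - unit[0], axis=-1).mean(),   # net joint displacement
    ])

def map_motion_unit(human_unit, character_units):
    """Return the character's own unit whose features best match the
    performer's unit, so the character keeps its native motion pattern."""
    query = unit_features(human_unit)
    scores = [np.linalg.norm(query - unit_features(u)) for u in character_units]
    return character_units[int(np.argmin(scores))]

# Usage with random stand-in data: three character units, one human unit.
rng = np.random.default_rng(0)
elephant_units = [rng.standard_normal((n, 20, 3)) for n in (30, 45, 60)]
human_walk = rng.standard_normal((40, 20, 3))
retrieved = map_motion_unit(human_walk, elephant_units)
print(retrieved.shape)  # plays back a character clip, not a per-pose retarget
```

The key design point this sketch illustrates is that matching happens between whole motion units rather than individual poses, which is why the character’s quadruped gait survives the mapping: the output is always one of the character’s own clips.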