The role of sonification modeling in learning the spatial characteristics of a motion pattern
Oral Presentation
Paper ID : 1690-11THCONF
Assistant Professor, School of Humanities, Department of Sport Science, Damghan University
Introduction: The use of movement sonification (converting kinetic and kinematic properties of human movement patterns into audio patterns), either alone or together with a visual pattern, to improve performance or motor learning does not have a long history. However, some research has shown an effect of sonification, used as concurrent feedback or as a model, on performance and learning, although how it works is still unclear. Because the ears are more accurate than the eyes at discriminating timing and at integrating consecutive sounds into a rhythm, using the auditory channel in motion-related perception could convey a wider range of information, especially temporal features of the motor pattern (such as relative timing). Several studies have demonstrated this. But are auditory patterns (in the form of sonification) also effective for the spatial features of a motor pattern?
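The parameter-mapping idea behind movement sonification can be illustrated with a minimal sketch. This is not the pipeline used in the study (which used the Sonification Sandbox software); it is a hypothetical example in which a kinematic value is mapped linearly onto audio frequency, so that faster movement sounds higher-pitched. All names and values are illustrative.

```python
def map_to_frequency(value, v_min, v_max, f_min=220.0, f_max=880.0):
    """Linearly map a kinematic value into an audible frequency range (Hz).

    value        -- the kinematic sample (e.g. elbow angular velocity, deg/s)
    v_min, v_max -- the expected range of the kinematic variable
    f_min, f_max -- the target audio frequency range (here two octaves, A3-A5)
    """
    t = (value - v_min) / (v_max - v_min)  # normalize to [0, 1]
    return f_min + t * (f_max - f_min)

# Illustrative angular-velocity samples (deg/s) from one movement:
angular_velocity = [0.0, 120.0, 480.0, 300.0, 60.0]
frequencies = [map_to_frequency(v, 0.0, 480.0) for v in angular_velocity]
# The resulting frequency contour rises and falls with the movement speed.
```

Synthesizing these frequencies as a tone sequence yields an "audio graph" of the movement, which is the general principle behind tools like the Sonification Sandbox.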
Methodology: Twenty subjects were selected and randomly divided into two groups: visual and visual-auditory. The visual group watched the pattern of a skilled basketball player; the visual-auditory group, in addition to watching the skilled player's pattern, heard the elbow angular velocity as sonification. To create the auditory pattern, raw data were first extracted from the motion analysis device, and the elbow angular velocity was then converted into an audio graph using the Sonification Sandbox software. After a pre-test, each group performed four sessions of 40 trials on four consecutive days during the acquisition phase and was evaluated with an acquisition test at the end of each session. A retention test was conducted 48 hours after the last acquisition session. At all stages, the angular velocity and angular distance of each individual's elbow joint were compared against the pattern of the skilled performer; the discrepancy, quantified as the root mean square error, was taken as the spatial error of angular distance and the spatial error of angular velocity.
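The spatial-error measure described above can be sketched as follows. This is a hypothetical illustration, not the study's analysis code: a participant's elbow trajectory is compared sample-by-sample against the skilled performer's trajectory, and the root mean square error (RMSE) serves as the spatial error. The trajectories and variable names are assumptions for the example.

```python
import math

def rmse(participant, expert):
    """Root mean square error between two time-aligned trajectories."""
    if len(participant) != len(expert):
        raise ValueError("trajectories must be time-aligned and equal length")
    return math.sqrt(
        sum((p - e) ** 2 for p, e in zip(participant, expert)) / len(participant)
    )

# Illustrative elbow angular-distance samples (degrees) for one trial:
expert_angle  = [10.0, 35.0, 70.0, 110.0, 150.0]
subject_angle = [12.0, 33.0, 75.0, 108.0, 149.0]

# Spatial error of angular distance; the same computation would be applied
# to angular-velocity samples to get the spatial error of angular velocity.
spatial_error_angle = rmse(subject_angle, expert_angle)
```

A lower RMSE indicates closer agreement with the skilled performer's movement pattern, which is how the group comparison in the Results is interpreted.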
Results: The results showed that in both the acquisition and retention tests, and for both variables (spatial error of angular distance and spatial error of angular velocity), there was a significant difference between the two groups in favor of the visual-auditory group.
Discussion: The results show that audiovisual integration led to better development of the spatial pattern of the motor task in the visual-auditory group than in the visual group. They clearly indicate that the use of visual-auditory patterns improves performance through learning of the spatial pattern of the elbow joint in the basketball jump shot. These results can be explained in terms of Common Coding Theory and the modality appropriateness hypothesis.