Research on Virtual Scene Creation and Interactive Learning Model for Music Performance Combining Virtual Reality and Computer Vision Technology

Jingjin Zhang 1
1Luoyang Weishusheng Middle School, Luoyang, Henan, 471000, China

Abstract

With the development of virtual reality and computer vision technology, demand for virtual music performance scenes is growing rapidly, creating new opportunities for music performance and music teaching. In this paper, we use the bundle adjustment method to determine the camera parameters of the virtual scene, carrying out camera calibration and parameter solving, and segment the virtual scene images with the GrabCut algorithm. We then formulate the model constraints and objective function, construct a virtual scene for music performance, and design the corresponding virtual scene system. On this basis, an interactive music learning model is proposed, and a virtual roaming mode is designed using human-computer interaction technology, enabling music learners to roam and learn interactively within the virtual music performance scene. The PSNR and SSIM of the virtual music performance scene constructed with this paper's technique reach 25.8291 dB and 0.9396, respectively, higher than those of comparison virtual scene construction algorithms such as VSRS and JTDI. In music teaching experiments, the experimental class that used the proposed interactive learning model for interactive learning roaming achieved higher mean values than the control class on all dimensions of both music learning ability and music listening ability, with significant differences (P<0.05).

Keywords: virtual reality technology, bundle adjustment method, GrabCut algorithm, virtual roaming, virtual scene