Adoption of Gesture Interactive Robot in Music Perception Education with Deep Learning Approach
Published in: | Journal of Information Science and Engineering 2023-01, Vol.39 (1), p.19-37 |
---|---|
Main authors: | Hu, Jia-Xin ; Song, Yu ; Zhang, Yi-Yao |
Format: | Article |
Language: | English |
Subjects: | |
Online access: | Full text |
creator | Hu, Jia-Xin ; Song, Yu ; Zhang, Yi-Yao |
description | This work aims to help students perceive, study, and create music, and to realize a "human-computer interaction" music teaching mode. A distributed design pattern is adopted to build a gesture-interactive robot suitable for music education. First, the client is designed: its gesture acquisition module employs a dual-channel convolutional neural network (DCCNN) for gesture recognition, whose convolutional layer contains convolution kernels of two sizes that operate on the image. Second, the server is designed: it recognizes the collected gesture instruction data through a two-stream convolutional neural network (CNN). This network cuts the gesture instruction data into K segments and sparsely samples each segment into a short sequence; an optical flow algorithm then extracts the optical flow features of each short sequence. Finally, the performance of the robot is tested. The results show that the combination of 5×5 and 7×7 convolution kernels achieves a recognition accuracy of 98%, suggesting that the DCCNN can effectively collect gesture command data. After training, the DCCNN's gesture recognition accuracy reaches 90%, which is higher than mainstream dynamic gesture recognition algorithms under the same conditions. In addition, the recognition accuracy of the gesture-interactive robot is above 90%, suggesting that it meets normal requirements with good reliability and stability. The robot is therefore recommended for music perception teaching, providing a basis for establishing a multi-sensory music teaching model. |
doi | 10.6688/JISE.202301_39(1).0002 |
format | Article |
publisher | Institute of Information Science, Academia Sinica, Taipei |
rights | Copyright Institute of Information Science, Academia Sinica, Jan 2023 |
identifier | ISSN: 1016-2364 |
source | Free E-Journal (publisher's publicly available portion only) |
subjects | Accuracy ; Algorithms ; Artificial neural networks ; Design ; Education ; Feature extraction ; Gesture recognition ; Kernels ; Machine learning ; Music ; Neural networks ; Object recognition ; Optical flow (image analysis) ; Perception ; Robots ; Segments |
url | https://sfx.bib-bvb.de/sfx_tum?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-01T07%3A27%3A54IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-proquest_airit&rft_val_fmt=info:ofi/fmt:kev:mtx:journal&rft.genre=article&rft.atitle=Adoption%20of%20Gesture%20Interactive%20Robot%20in%20Music%20Perception%20Education%20with%20Deep%20Learning%20Approach&rft.jtitle=Journal%20of%20Information%20Science%20and%20Engineering&rft.au=Hu,%20Jia-Xin&rft.date=2023-01-01&rft.volume=39&rft.issue=1&rft.spage=19&rft.epage=37&rft.pages=19-37&rft.issn=1016-2364&rft_id=info:doi/10.6688/JISE.202301_39(1).0002&rft_dat=%3Cproquest_airit%3E2761232970%3C/proquest_airit%3E%3Curl%3E%3C/url%3E&disable_directlink=true&sfx.directlink=off&sfx.report_link=0&rft_id=info:oai/&rft_pqid=2761232970&rft_id=info:pmid/&rft_airiti_id=10162364_202301_202209080002_202209080002_19_37&rfr_iscdi=true |
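The abstract's server-side pipeline cuts each gesture sequence into K segments and sparsely samples a short sequence from each segment before extracting optical-flow features. A minimal sketch of that sampling step, assuming frames are simply an indexable sequence; the function name and `snippet_len` parameter are illustrative, not taken from the paper:

```python
# Illustrative sketch (not the paper's code): sparse segment sampling as
# described in the abstract -- cut a gesture sequence into K segments and
# take a short snippet from the start of each segment.

def sparse_sample(frames, k, snippet_len=1):
    """Split `frames` into k roughly equal segments and take the first
    `snippet_len` frames of each, yielding a short, evenly spread sequence."""
    n = len(frames)
    if n < k * snippet_len:
        raise ValueError("sequence too short for requested sampling")
    # Segment boundaries spread evenly across the full sequence.
    bounds = [round(i * n / k) for i in range(k + 1)]
    snippets = []
    for i in range(k):
        start = bounds[i]
        snippets.extend(frames[start:start + snippet_len])
    return snippets

# Example: a 12-frame sequence sampled into K=4 segments, one frame each.
seq = list(range(12))
print(sparse_sample(seq, 4))  # -> [0, 3, 6, 9]
```

Because each segment contributes a fixed-length snippet, the downstream two-stream CNN sees inputs of a constant size regardless of how long the original gesture lasted, which is the usual motivation for this kind of sparse sampling.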