Multi-feature fusion music classification algorithm based on deep belief network


【Author】 ZHENG Dan

【Institution】 Xi'an Vocational and Technical College of Aeronautics and Astronautics

【Abstract】 To overcome the limitations of single-modality data in music emotion classification, this paper proposes a multi-feature fusion music classification algorithm based on a deep belief network (DBN). Feature vectors are first extracted from the music signal from multiple perspectives and fused into a multi-feature data set. For music emotion classification, the traditional DBN is improved by adding fine-tuning nodes that enhance the adjustability of the model. The fused training set is then used to train the improved DBN, and the model reaches optimal performance by adjusting the weights between the visible-layer and hidden-layer units of the RBM. Test results show a music emotion classification accuracy of up to 82.23%, which provides useful support for music retrieval.
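The abstract describes tuning the weights between the visible and hidden layers of an RBM, the building block of a deep belief network. As a minimal sketch of that step (one contrastive-divergence, CD-1, update), assuming illustrative dimensions and hyperparameters rather than the paper's actual configuration, and omitting bias terms for brevity:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(W, v0, lr=0.1, rng=None):
    """One CD-1 update of the visible-hidden weight matrix W,
    given a batch of (fused) feature vectors v0. Biases omitted."""
    rng = rng or np.random.default_rng(0)
    # Up-pass: hidden probabilities and a binary sample from the data
    h0_prob = sigmoid(v0 @ W)
    h0 = (rng.random(h0_prob.shape) < h0_prob).astype(float)
    # Down-pass: reconstruct visible units, then recompute hidden probabilities
    v1_prob = sigmoid(h0 @ W.T)
    h1_prob = sigmoid(v1_prob @ W)
    # Update: difference between data statistics and reconstruction statistics
    grad = (v0.T @ h0_prob - v1_prob.T @ h1_prob) / v0.shape[0]
    return W + lr * grad

# Toy batch: 8 fused feature vectors with 6 (binary) visible units
rng = np.random.default_rng(42)
v = (rng.random((8, 6)) < 0.5).astype(float)
W = 0.01 * rng.standard_normal((6, 4))   # 6 visible units, 4 hidden units
W_new = cd1_step(W, v, rng=rng)
```

In a full DBN, several such RBMs would be stacked and trained layer by layer, followed by supervised fine-tuning; the paper's "fine-tuning nodes" presumably act at that stage.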

  • 【Source】 Electronic Design Engineering, 2020, No. 04
  • 【CLC Classification】 TP181
  • 【Cited by】 5
  • 【Downloads】 134