Friday, November 1, 2019
9:56 PM
Clipped from: https://owenlab.uwo.ca/research/the_openmiir_dataset.html
Music imagery information retrieval (MIIR) is an emerging field of research at the intersection of cognitive neuroscience and music information retrieval.
The goal is to identify music pieces from brain signals recorded while those pieces were either perceived or imagined.
MIIR systems may one day be able to recognize a song just as we think of it.
Being able to reliably distinguish between just two different imagined music pieces would already allow us to build a music-based brain-computer interface (BCI) for patients who can communicate only by modulating their own neural activity.
As a step towards such technology, we are presenting a public domain dataset of electroencephalography (EEG) recordings taken during music perception and imagination.
We acquired these data during an ongoing study that has so far comprised 10 subjects listening to and imagining 12 short music fragments, each 7–16 s long, taken from well-known pieces.
These stimuli were selected from different genres and systematically span several musical dimensions such as meter, tempo and the presence of lyrics.
This way, various retrieval and classification scenarios can be addressed, such as stimulus recognition, tempo estimation, meter recognition, and lyrics detection.
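Because every trial is tied to one of the 12 stimuli, the musical dimensions above map directly onto per-task labels. The sketch below illustrates this with hypothetical metadata — the tempi, meters, and lyric flags shown are placeholders, not the actual OpenMIIR stimulus list, which is documented with the dataset:

```python
# Hypothetical stimulus metadata keyed by stimulus ID.
# (Placeholder values: the real OpenMIIR stimuli and their
# properties are listed in the dataset's documentation.)
STIMULI = {
    1: {"tempo_bpm": 100, "meter": "3/4", "has_lyrics": True},
    2: {"tempo_bpm": 160, "meter": "4/4", "has_lyrics": False},
    3: {"tempo_bpm": 120, "meter": "4/4", "has_lyrics": True},
}

def labels_for_trial(stimulus_id):
    """Derive one label per retrieval/classification scenario for a
    trial in which the given stimulus was perceived or imagined."""
    meta = STIMULI[stimulus_id]
    return {
        "stimulus_recognition": stimulus_id,    # which piece (multi-class)
        "tempo_estimation": meta["tempo_bpm"],  # regression target
        "meter_recognition": meta["meter"],     # e.g. 3/4 vs. 4/4
        "lyrics_detection": meta["has_lyrics"], # binary
    }

print(labels_for_trial(2))
```

The same EEG trial can thus serve as a training example for several tasks at once, which is what makes the systematic stimulus design useful.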
The dataset is primarily intended to enable music information retrieval researchers interested in these new MIIR challenges to easily test and adapt their existing music-analysis approaches, such as fingerprinting, beat tracking, or cross-modal synchronization, to this new kind of data. We also hope that the OpenMIIR dataset will facilitate stronger interdisciplinary collaboration between music information retrieval researchers and neuroscientists.
This dataset is a result of ongoing joint work between the Owen Lab and the Music and Neuroscience Lab at the Brain and Mind Institute of the University of Western Ontario. It has been supported by a fellowship within the Postdoc-Program of the German Academic Exchange Service (DAAD), the Canada Excellence Research Chairs (CERC) Program, a Natural Sciences and Engineering Research Council (NSERC) Discovery Grant, and an Ontario Early Researcher Award.
Everybody can freely use the OpenMIIR dataset without any restrictions, as it is released under the Open Data Commons Public Domain Dedication and Licence (PDDL). For data processing and analysis, we provide custom dataset implementations and deep learning pipelines for pylearn2 within the deepthought project under the 3-clause BSD license.
Further information on how to obtain the dataset or to contribute can be found on the GitHub project website.