Event type: Conference lecture
Date: 2019.09.18
Location: 15 Xueyuan Road, Haidian District, Beijing, China
Format: Recorded playback
Organizers: Office of International Cooperation and Exchange and Department of Linguistics, Beijing Language and Culture University
Number of participants: To be determined
Title: If Instruments could talk
- Vowels and their role as key features for musical instrument timbre recognition
Speaker: Christoph Reuter (Professor, University of Vienna)
Time: 2019-09-18, 16:00 to 18:00
Venue: Room 106, Teaching Building 1
Organizers: Office of International Cooperation and Exchange, Department of Linguistics
Lecture content:
Abstracts: The Sound of Joy, the Sound of Hardness
Using signal analysis techniques to make emotional and metaphorical sound impressions calculable.
What makes a piece of music, a timbre, or a noise sound smooth or hard, joyful or sad? Are there certain features in the temporal or spectral envelope of a sound that generate certain emotional impressions?
With the help of signal analysis techniques combined with statistical procedures and machine learning, it is becoming increasingly possible to calculate and predict the typical features that cause acoustically evoked emotional impressions. On the basis of numerous intuitively comprehensible (acoustical and visual) examples, participants of the lecture will experience typical timbre features that can be used to detect annoying sounds (such as motorcycle noise), to automatically describe hardness or darkness in rock/pop genres, to categorize industry sectors or stereotypes in advertisements and audio logos, to predict the audibility of sirens of emergency vehicles, to find differences and similarities in the expression of joy in Chinese and European music, and so on. In each of these cases there is not one descriptor but a bundle of timbre descriptors which, in interplay with each other, are responsible for a certain emotional effect or impression. The lecture will end with an outlook on recent innovative studies, such as the combination of timbre features with skin-conductance measurements to predict the emotional involvement in (and success of) songs, or the interpretation of newborn baby cries.
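As a rough illustration of the kind of timbre-descriptor bundles the abstract refers to, the short Python sketch below computes a few standard spectral and temporal descriptors with the open-source librosa library. The file name, the particular descriptors, and the use of librosa are illustrative assumptions, not the setup used in the lecture.

import librosa
import numpy as np

# Load a mono recording at its native sample rate (placeholder file name).
y, sr = librosa.load("example_sound.wav", sr=None, mono=True)

# Spectral centroid: commonly associated with perceived brightness/darkness.
centroid = float(librosa.feature.spectral_centroid(y=y, sr=sr).mean())

# Spectral flatness: noisiness versus tonality of the spectrum.
flatness = float(librosa.feature.spectral_flatness(y=y).mean())

# Zero-crossing rate: a rough temporal correlate of roughness or "hardness".
zcr = float(librosa.feature.zero_crossing_rate(y).mean())

# RMS energy: a coarse summary of the temporal (loudness) envelope.
rms = float(librosa.feature.rms(y=y).mean())

print({"centroid_hz": centroid, "flatness": flatness, "zcr": zcr, "rms": rms})

In studies of the kind described above, such descriptor bundles would then be related to listeners' emotional impressions through statistical procedures or machine-learning models.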
If Instruments could talk ...
Vowels and their role as key features for musical instrument timbre recognition
Why are we able to recognise musical instruments by their timbres, independent of their pitch? Which features enable us to decide whether something sounds similar or dissimilar? Many musical instrument timbres have turned out to have formant-shaped spectra similar to those of vowels, which enables us to use similar mechanisms for speech and timbre recognition. The aim of this paper is to show the manifold possibilities of the formant-driven approach when it comes to the automatic recognition and categorization of musical instruments, as well as to the prediction of timbre similarity and timbral blending. With the help of intuitive pictures and sound examples, participants will be taken on a scientific journey through different positions and experiments in the history of timbre research, as well as through current music information retrieval (MIR) approaches. During this journey, and at its end, it becomes increasingly clear that the main feature for the perception of language, the vowels as described by their formants, can also be used for the recognition and intuitively comprehensible similarity prediction of musical instrument timbres.
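To make the vowel analogy above concrete, here is a minimal Python sketch of formant estimation via linear predictive coding (LPC), the classic technique for vowel formant analysis, applied to a recorded instrument tone. This is an illustrative assumption, not the method of the paper; the file name, LPC order, and frequency cutoff are placeholders.

import librosa
import numpy as np

def estimate_formants(path, order=12, fmin=90.0):
    # Estimate formant-like spectral peaks (in Hz) of a short recording via LPC.
    y, sr = librosa.load(path, sr=None, mono=True)
    # Fit an all-pole (LPC) model; its resonances approximate formants.
    a = librosa.lpc(y, order=order)
    roots = np.roots(a)
    # Keep one root from each complex-conjugate pair.
    roots = roots[np.imag(roots) > 0]
    freqs = np.angle(roots) * sr / (2 * np.pi)
    # Discard very low peaks that usually reflect spectral tilt, and sort.
    return np.sort(freqs[freqs > fmin])

# Comparing the lowest two estimated peaks of an instrument tone with typical
# vowel formant regions is one simple way to operationalize the analogy.
print(estimate_formants("instrument_tone.wav")[:2])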