Authors: Lena Quinto and William Forde Thompson.
Publication: International Journal of Synthetic Emotions (IJSE)
Publication volume & issue: vol. 3, issue 2 (2012), pp. 48-67.
DOI: 10.4018/jse.2012070103 (http://dx.doi.org/ doesn’t recognize this DOI).
Abstract: Most people communicate emotion through their voice, facial expressions, and gestures. However, it is assumed that only “experts” can communicate emotions in music. The authors have developed a computer-based system that enables musically untrained users to select relevant acoustic attributes to compose emotional melodies. Nonmusicians (Experiment 1) and musicians (Experiment 3) were progressively presented with pairs of melodies that each differed in an acoustic attribute (e.g., intensity – loud vs. soft). For each pair, participants chose the melody that most strongly conveyed a target emotion (anger, fear, happiness, sadness or tenderness). Once all decisions were made, a final melody containing all choices was generated. The system allowed both untrained and trained participants to compose a range of emotional melodies. New listeners successfully decoded the emotional melodies of nonmusicians (Experiment 2) and musicians (Experiment 4). Results indicate that human-computer interaction can facilitate the composition of emotional music by musically untrained and trained individuals.
The first two pages of the article are available online. I was unable to access the full-text version, but this is an article I'd like to read, especially to examine the methodology.