Affect Formations in art and music


Final improvements to the Biocombat piece before the Sage concert

March 17th at 5:15pm

by Adinda van 't Klooster

The past two weeks have been spent making various improvements to the Biocombat piece in particular. Biocombat combines biofeedback art with the idea of gaming. The computer demands a new emotion to be felt by two performers for each new one-minute interval and decides who is best at feeling the requested emotion. The winner is rewarded by having their electroacoustic composition play louder than their opponent’s. The system has been trained on previous sessions of the performers listening to pieces of music that make them feel either happy, sad, tender, scared, calm, excited, annoyed or angry whilst being wired up with EEG, GSR and heart rate sensors. The performers are helped by a projection that visualizes the requested target emotion in an abstract and aesthetic way. However, the fact that this is a competitive game makes it all the harder to feel emotions on demand. Furthermore, biosignals vary at different times of the day and also depend on external conditions such as temperature and caffeine intake.

The classifier used for the Durham concert had been trained on five sessions of biosignal recordings taken whilst listening to self-selected music that brings the performer into one of the eight target states. Two minutes of biodata had been recorded, but the first minute was originally discarded to ensure the performer had enough time to reach the target state. However, in the live system the performer doesn’t get that much time, as the requested emotion changes every minute. Also, in the emotion induction sessions the order of the requested emotions was always the same and roughly followed the arousal-valence model of Russell (1980), whereas in a live setup the order of the emotions is random and unknown in advance. Therefore, it was decided to make a few changes and record new biosignal data in order to arrive at a more robust classifier:

  • The order of the requested emotions in the emotion induction sessions was made random, as this mirrors a live performance situation.
  • The biosignal data used for the classifier now runs from 0.5 to 1.5 minutes into each recording, as a compromise between getting reliable data and keeping the timing reproducible in a live system (see the sketch after this list).
  • The recordings were taken at different times of the day.
  • Instead of using music by other composers to induce the eight target states, we used our own compositions, as these are the ones used in a live performance situation.
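
For those curious what the first two changes look like in practice, here is a minimal sketch in Python: randomising the order of the requested emotions and cropping each two-minute recording down to its 0.5–1.5 minute segment. The sampling rate, array shapes and names are illustrative assumptions rather than the actual project code.

```python
# Minimal sketch of the two data-handling changes: random emotion order per
# induction session and keeping only the 0.5-1.5 minute segment of each
# two-minute recording. Sampling rate, shapes and names are assumptions.
import random
import numpy as np

EMOTIONS = ["happy", "sad", "tender", "scared",
            "calm", "excited", "annoyed", "angry"]
FS = 256  # assumed common sampling rate (Hz) for the resampled sensor data

def session_order(seed=None):
    """Return the eight target emotions in a random order, mirroring the
    unpredictable order of a live performance."""
    rng = random.Random(seed)
    order = EMOTIONS[:]
    rng.shuffle(order)
    return order

def crop_training_segment(recording):
    """Keep the samples from 0.5 min to 1.5 min of a two-minute recording.
    `recording` is an (n_samples, n_channels) array of EEG, GSR and
    heart rate values for one emotion."""
    start = int(0.5 * 60 * FS)
    stop = int(1.5 * 60 * FS)
    return recording[start:stop]

# Example with a fake two-minute, three-channel recording.
fake = np.random.randn(2 * 60 * FS, 3)
print(session_order(seed=1), crop_training_segment(fake).shape)
```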

Both performers had to do five new emotion induction sessions to get the new data. The classifiers were then retrained for both performers and tested in a live performance setup. The results were now of a higher quality, i.e. the performers felt the system was better able to gauge the emotions they felt. However, there was still a discrepancy between the offline success rates of the classifiers, which were above 90%, and the actual success rate as judged by the performers, who know what they actually felt.
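
To be clear about what is being compared here: the offline figure is ordinary accuracy on held-out windows from the induction recordings, whereas the live figure counts the one-minute rounds in which the detected emotion matched what the performer says they felt. A toy illustration of the live measure, with invented round logs:

```python
# Toy illustration (invented data) of the "live" success rate: the fraction
# of one-minute rounds where the classifier's detected emotion agreed with
# what the performer reports having actually felt.
def live_success_rate(detected, self_reported):
    hits = sum(d == r for d, r in zip(detected, self_reported))
    return hits / len(detected)

detected      = ["happy", "sad", "calm", "angry", "excited"]
self_reported = ["happy", "calm", "calm", "annoyed", "excited"]
print(live_success_rate(detected, self_reported))  # 0.6
```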

Other approaches were looked at to improve the qualitative success rate of the system. The window size was increased from five seconds to ten seconds, and the GSR data was discarded in some of the classifiers to see whether this made a big difference. This was done because the GSR sensors used here (the BioEmo v. 2.0 from ICubeX) were found to be quite unreliable. Interestingly, the offline success rate of the classifier for one of the performers was still about 80% even without the GSR values. Then, for each performer, a neural network was trained on all ten recording sessions, the old and the new combined. Whilst this brought the offline success rate down slightly, in theory it should create a more robust system for the live performance.
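
As a rough illustration of these changes, the sketch below trains one small network per performer on ten-second windows and can leave out the GSR channel. The feature extraction, channel layout and network settings are simplified placeholders, assuming a scikit-learn-style setup, not the exact configuration used in the piece.

```python
# Sketch of the classifier changes: 10 s windows instead of 5 s, an option to
# drop the unreliable GSR channel, and one network per performer trained on
# all ten sessions (old and new) combined. All details here are illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier

FS = 256                                  # assumed common sampling rate (Hz)
WINDOW_SECONDS = 10                       # increased from 5 s to 10 s
CHANNELS = ["eeg", "gsr", "heart_rate"]   # assumed channel order

def window_features(recording, use_gsr=True):
    """Split an (n_samples, n_channels) recording into 10 s windows and
    reduce each window to simple per-channel mean/std features."""
    if not use_gsr:
        keep = [i for i, name in enumerate(CHANNELS) if name != "gsr"]
        recording = recording[:, keep]
    win = WINDOW_SECONDS * FS
    feats = []
    for i in range(recording.shape[0] // win):
        chunk = recording[i * win:(i + 1) * win]
        feats.append(np.concatenate([chunk.mean(axis=0), chunk.std(axis=0)]))
    return np.array(feats)

def train_performer_classifier(sessions, use_gsr=True):
    """Train one network for a single performer on all of their sessions.
    `sessions` is a list of (recording, emotion_label) pairs covering both
    the old and the new induction sessions."""
    X, y = [], []
    for recording, label in sessions:
        feats = window_features(recording, use_gsr=use_gsr)
        X.append(feats)
        y.extend([label] * len(feats))
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
    clf.fit(np.vstack(X), y)
    return clf

# Example with synthetic one-minute recordings for two of the eight emotions.
rng = np.random.default_rng(0)
sessions = [(rng.standard_normal((60 * FS, 3)), emo)
            for emo in ["happy", "sad"] for _ in range(2)]
clf = train_performer_classifier(sessions, use_gsr=False)
```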

We’re still doing some final tests before launching the new system at the Sage this Thursday the 19th of March. We hope to see you all there! Also, the picture survey is still open, so if you haven’t done it yet but would like to, just click on the participate button above. So far, 45 people have taken the survey, but we need at least 5 more to start publishing the findings.

And last but not least: I have now uploaded my electroacoustic compositions to the project website. I created these soundscapes, one for each of the eight emotions listed above, whilst focusing mostly on sound timbre rather than melody or rhythm. These sounds are also used in the Biocombat piece to be performed at the Sage, but if I lose all the time you won’t hear them so much, so now you know where to find them anyway!

References

Russell, J.A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39(6), 1161–1178.
