Affect Formations

in art and music

Blog


Affect Formations concert video now released

April 27th at 5:21pm

by Adinda van 't Klooster

All good things come to an end, and so it is with the residency at Durham University. My lovely studio overlooking the river has been reclaimed, but the very productive residency has now been fully documented on this website for posterity to enjoy. A full HD video of the Sage concert is now also available online at https://youtu.be/qZmdpDISdUQ; thanks to Simone Tarsitani for all his hard work on the filming and the edit. For those who didn't make the concert, I hope you enjoy watching it!

To summarise, the residency led to the creation and completion of two interactive audiovisual interfaces exploring emotion in music and art, titled 'In a State' (https://www.affectformations.net/projects/in-a-state) and 'BioCombat' (https://www.affectformations.net/projects/biocombat). A second strand of work consisted of a series of graphical scores (https://www.affectformations.net/projects/graphical-scores), some of which were performed live in the two public performances at the Sage and Durham University (see the YouTube link above). The concerts would not have been possible without the excellent piano performances of John Snijders and Nick Collins. Thanks also to Janet Stewart for leading the panel discussion after the concert and to Tuomas Eerola for participating in it. I think the concert was enjoyed by those who came; I deduce this from the audience feedback we received, some examples of which are pasted below:

" The music was very powerful and effected the emotions I felt throughout the performance. I was intrigued by Biocombat and how the emotions can be triggered."

" Well structured and varied. Enjoyed very much. Original. Enjoyed the debate too."

"A very good interesting evening. Well organised and produced - avant-garde is still alive!"

" I very much liked the completely different interpretations of the graphical scores from the two pianists."

"I thought it was interesting to see how the animations gave me a different experience of a performance. The animations were engaging but did not detract from my enjoyment of the improvising."

Another output of the residency has been the online graphical research survey, which has so far been taken by 48 people. We need at least two more people to take the survey before we can release the data, so it remains open to people who did not attend the concert. If you feel generous, please spend ten minutes of your time looking at drawings and assessing them for their emotional expression here: https://www.affectformations.net/research/visualaffects.

Aside from allowing me to make a new body of work, the residency has been an excellent opportunity to work with music professionals of a very high calibre and to make new links for further collaborations. I am most grateful to Arts Council England, the Leverhulme Trust and Durham University for making this possible.


Method of improvising to Graphical Score 1

April 3rd at 9:14am

Guest blog by Shelly Knotts

Durham Improvisors - made up of postgraduate composition students at Durham University and local improviser Holger Ballweg - performed Graphical Score 1 by Adinda van ‘t Klooster at the Affect Formations concert in Durham on 27th February.


Graphical Score 1, paper and ink, © Adinda van 't Klooster 2014, photograph by Aaron Guy 2015

The group is a mixed ensemble of acoustic instruments and electronics; in this performance it included Holger Ballweg (Electronics), Sondre Bryntesen (Electric Guitar), Shelly Knotts (Electronics), Egidija Medeksaite (Electronics) and Matthew Warren (Accordion). We have been experimenting with improvisation practices together since 2013 and are interested in exploring methods of structuring group improvisations, including the use of graphical scores and algorithms. The Affect Formations project interested the group particularly for the challenge it presented of structuring multiple performers' responses to the graphics, especially given diverse understandings of emotion in graphics and their translation into sound.

In approaching the performance of the piece we focussed mostly on how to structure the individual responses in a way that made musical sense and varied the texture and density of the performance. We decided not to spend too much time on group discussion of valid musical responses to the graphics themselves, as we felt that emotional interpretations of graphics and of sound are both cultural and subjective, and that in a group performance multiple interpretations of the same score would be a valid approach. The performance directions we received from Adinda were that we could either read the score from left to right or move from element to element, interpreting each element individually. We decided that the latter approach would work best for an ensemble performance.

As we wanted to draw a relationship between a painting as a stimulus and the structure of a musical response, we took the time structure of looking at a painting as the basis for the musical structure. We discussed this as a group and came up with three stages of looking at a painting: first, getting a general impression of the painting, or considering it as a whole; second, looking at the individual parts of the painting and considering them in gradually greater detail and depth; and finally, considering the relations between different parts of the painting and how the objects interact with each other. We also thought the spatial aspects of the painting should inform the interpretation, so we decided that each element of the score should be given a duration in the piece related to how dominant it is in the painting. So, for example, black and blue elements were given more time than yellow and purple ones.

With these limitations in mind, we started the performance by playing for one minute with the whole group, each player having been allocated a different coloured element in advance. After the first minute we used a computer algorithm to decide the rest of the structure of the performance. The algorithm told each player which coloured element of the score to play, changing every 40-60 seconds at random, and when to drop out and play nothing (in order to vary the density of the performance). The order of the elements and the number of players playing at a time were randomised, so they would be different for every rehearsal and performance. The probability of each colour being allocated to a performer was scaled approximately by the space it takes up in the painting. The current allocation was displayed on a screen on stage, visible to all the performers.
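A minimal sketch of such an allocation scheme is below. The colour weights, density value and section timings here are illustrative only, not the ensemble's actual settings (and the real system drove an on-stage display rather than returning a dictionary):

```python
import random

# Illustrative values only: colours weighted roughly by the area they
# occupy in the painting, as described above.
COLOURS = ["black", "blue", "yellow", "purple"]
WEIGHTS = [0.4, 0.3, 0.15, 0.15]
PLAYERS = ["player1", "player2", "player3", "player4", "player5"]

def next_section(density=0.7):
    """Allocate a colour (or silence) to each player for one section.

    Each section lasts 40-60 seconds; with probability (1 - density)
    a player drops out, varying the texture of the performance.
    """
    duration = random.uniform(40, 60)  # section length in seconds
    allocation = {}
    for player in PLAYERS:
        if random.random() < density:
            allocation[player] = random.choices(COLOURS, weights=WEIGHTS)[0]
        else:
            allocation[player] = None  # tacet for this section
    return duration, allocation
```

Because both the colour order and the drop-outs are randomised, no two runs produce the same structure, which is exactly what kept the piece from being 'learned'.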

As a group we felt this interpretation left room for improvisation and for different readings of the score in each performance, and prevented us from 'learning' the piece, as we never knew which colour combinations would come up or in which order. It also meant we didn't have to do too much 'pre-arranging' of who would play what, and when. We each had an idea in advance of how we would interpret each element of the score, but the randomisation of colour allocation in the ensemble meant that our pre-formed ideas had to be re-contextualised in each performance, depending on what the other players were playing.

On the whole it was an interesting experiment in reconciling pre-prepared material with a randomised structure, and in shaping diverse responses to the same visual material in a way that remains relatable to the original score.


Final improvements to the Biocombat piece before the Sage concert

March 17th at 5:15pm

by Adinda van 't Klooster

The past two weeks have been spent making various improvements to the Biocombat piece in particular. Biocombat combines biofeedback art with the idea of gaming. The computer demands a new emotion from the two performers for each new one-minute interval and decides who is best at feeling the requested emotion. The winner is rewarded by having their electroacoustic composition play louder than their opponent's. The system has been trained on previous sessions in which the performers listened to pieces of music that make them feel happy, sad, tender, scared, calm, excited, annoyed or angry whilst wired up with EEG, GSR and heart-rate sensors. The performers are helped by a projection that visualises the requested target emotion in an abstract and aesthetic way. However, the fact that this is a competitive game makes it all the harder to feel emotions on demand. Furthermore, biosignals vary at different times of the day and also depend on external conditions such as temperature and caffeine intake.

The classifier used for the Durham concert had been trained on five sessions of biosignal recordings taken whilst listening to self-selected music that brings the performer into one of the eight target states. Two minutes of biodata had been recorded for each state, but the first minute was originally discarded to ensure the performer had enough time to reach the target state. However, in the live system the performer doesn't get that much time, as the requested emotion changes every minute. Also, in the emotion induction sessions the order of the requested emotions was always the same and roughly followed the arousal-valence model of Russell (1980), whereas in a live setup the order of the emotions is random and unknown in advance. It was therefore decided to make a few changes and record new biosignal data in order to arrive at a more robust classifier:

  • The requested emotion in the emotion induction sessions was made random, mirroring a live performance situation.
  • The biosignal data used for the classifier now runs from 0.5 minutes in to 1.5 minutes, as a compromise between getting reliable data and remaining reproducible in a live system.
  • The recordings were taken at different times of day.
  • Instead of using other composers' music to induce the 8 target states, we used our own compositions, as these are the ones used in a live performance situation.
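The second change above is, in essence, a slicing operation on each recording. A minimal sketch, assuming evenly sampled data at a known rate (the actual storage format and sample rates of the sensors are not given here):

```python
import numpy as np

def training_segment(recording, sample_rate_hz):
    """Keep only the 0.5 to 1.5 minute span of a two-minute biosignal
    recording: late enough for the target emotion to have set in, early
    enough to match the time available in the live system."""
    start = int(0.5 * 60 * sample_rate_hz)
    stop = int(1.5 * 60 * sample_rate_hz)
    return np.asarray(recording)[start:stop]
```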

Both performers did five new emotion induction sessions to get the new data. The classifiers were then retrained for both performers and tested in a live performance setup. The results were now of a higher quality, i.e. the performers felt the system was better able to gauge the emotions they felt. However, there was still a discrepancy between the reported offline success rates of the classifiers, which were above 90%, and the actual success rate according to what the performer knew he or she actually felt.

Other approaches were explored to improve the qualitative success rate of the system. The window size was increased from five to ten seconds, and the GSR data was discarded in some of the classifiers to see if this made a big difference; the GSR sensors used here (the BioEmo v. 2.0 from ICubeX) had been found to be quite unreliable. Interestingly, the offline success rate of the classifier for one of the performers was still about 80% even without the GSR values. Then, for each performer, a neural network was trained on all ten recording sessions, the old and the new combined. Whilst this brought the offline success rate down slightly, in theory it should create a more robust system for live performance.
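To give a flavour of the with-and-without-GSR comparison, here is a sketch using a simple nearest-centroid stand-in for the classifier. The real system's classifier and feature layout are not shown in this post; the assumption that GSR values occupy the first feature columns is purely illustrative:

```python
import numpy as np

def offline_accuracy(train_x, train_y, test_x, test_y,
                     drop_gsr=False, gsr_cols=2):
    """Classify held-out windows with a nearest-centroid rule and
    return the fraction correct, optionally discarding the (assumed)
    leading GSR feature columns."""
    if drop_gsr:
        train_x = train_x[:, gsr_cols:]
        test_x = test_x[:, gsr_cols:]
    classes = np.unique(train_y)
    # One centroid per emotion class, from the training windows.
    centroids = np.stack([train_x[train_y == c].mean(axis=0)
                          for c in classes])
    # Distance of every test window to every centroid.
    dists = np.linalg.norm(test_x[:, None, :] - centroids[None, :, :], axis=2)
    predictions = classes[np.argmin(dists, axis=1)]
    return (predictions == test_y).mean()
```

Running this twice, once with `drop_gsr=True`, gives the kind of side-by-side offline accuracy figures discussed above.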

We’re still doing some final tests before launching the new system at the Sage this Thursday, the 19th of March. We hope to see you all there! The picture survey is also still open, so if you haven’t taken it yet but would like to, just click on the participate button above. So far, 45 people have taken the survey, but we need at least 5 more before we can start publishing the findings.

And last but not least: I have now uploaded my electroacoustic compositions to the project website. I created these soundscapes, one for each of the eight emotions listed above, whilst focusing mostly on sound timbre rather than melody or rhythm. The same sounds are used in the Biocombat piece to be performed at the Sage, but if I lose all the time you won’t hear them much, so now you know where to find them anyway!

References

Russell, J.A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39(6), 1161–1178.


Immersive experience by combining sonic and visual images in concert

March 7th at 10:41am

Guest blog by Tuomas Eerola

Observations from the Affect Formations concert, 27 February 2015 at the Concert Hall of the Department of Music at Durham University.

This event was the first of two concerts created by the artist in residence, Adinda van ‘t Klooster. The purpose of the concert was to explore how music influences emotions, how a computer might detect emotions in music, and how sonic and visual creations overlap and interact in live performance. All this was achieved by combining artwork by Adinda van 't Klooster with compositions and improvisations by Nick Collins (piano), John Snijders (piano), Matthew Warren (composition), and a postgraduate ensemble combining electronic and acoustic sounds.

As an audience member, I found that the mesmerising visualisations projected on a large screen next to the performers played a major role in my experience of the music. Their visual elements comprised Adinda's artworks, transformed by a series of intricate processes (rotation, spinning, floating, zooming) that blended into seamless sequences. Moreover, the animations clearly reacted to the performance in a responsive and immediate fashion, which led the onlooker to find meaningful narratives between the music and the animations. Of course, I know that this was a completely algorithm-driven process (with clever use of SuperCollider and Processing), but it was cleverly designed to have enough interesting visual elements to keep the viewer in its spell without overloading them. On top of this, a computer also interjected musical commentary into the performance, which added another layer of interpretation.

The second theme consisted of creating music out of an artwork. Analogies between colour, shape and sound have always fascinated composers, performers, artists, and scientists. Sir Isaac Newton was obsessed with the parallels between keys and colours, and Wolfgang Köhler [footnote1] associated shapes and sounds in a series of experiments which have since been replicated and confirmed by neuroscientists [footnote2].

This was musically the most interesting part, in my opinion. It was fascinating to immerse oneself in the unfolding creative process, where the image actually helped you to navigate the terrain of affects. The graphical scores were images created by Adinda, performed in turn by Nick Collins, John Snijders and the ensemble. For each image and improvisation, the logic of how the image was transformed into sound soon became apparent. One could easily devise a matching experiment in which listeners had to associate performances with the images [footnote3], and I'm fairly certain that these performances and images would produce a high consensus, despite the performances adopting different strategies and devices to bring out the essence of the artworks. This mapping between musical improvisation and visual image also invites the listener to think about the analogies between sonic and visual arts. The interpretation by John Snijders of Graphical Score 3 was a particularly illustrative example of how visual art turns into sonic art.


Left image: John Snijders with Graphical Score 3 projected, Photograph by Aaron Guy

Right image: Graphical Score 3, © Adinda van 't Klooster, 2014

Here, the analogies consisted of:

  • colour and timbre (the black visual shape and the low dark sounds)
  • visual surface and musical texture (the small sharp "hairs" and fast jagged, edgy rhythmic motives)
  • visual shape and musical structure (the looping shape and slow build-up of tonal and textural progressions that twisted and turned before slowly winding down)

In addition to these relatively straightforward analogies, other devices could also be outlined. Knowledge gained in a series of recent experiments suggests that such cross-domain mappings are deciphered rather uniformly [footnote4] and that a large number of stable mappings exists, but of course employing a large quantity of similarities does not necessarily lead to a more interesting artwork.

The finale of the concert was called "Biocombat", a performance in which Adinda and Nick were rigged up with sensors measuring their arousal (skin conductance and heart rate, as well as EEG). The computer captured the signals and determined who was best at feeling the emotion projected by the music at each given time. This is a tough challenge in a live performance situation, since the peripheral measures are fairly volatile in general and immensely affected by room temperature, level of anxiousness and arousal. Also, classifying the signals into discrete emotional states such as happy, sad, angry, tender, calm, and excited is not an easy task, since many of these states are fairly similar in terms of arousal (which is easier to deduce from these measures than the underlying positive-negative axis of core affect). The classification is also very much dependent on the amount and variability of the training data. It was a wonderful idea to demonstrate how affective states can be recognised by the computer and also how music may induce certain emotional states.

Image of Bio Combat by Adinda van 't Klooster and Nick Collins, 2015, photograph by Aaron Guy

All in all, an impressive concert from whatever perspective you look at it. The next opportunity to experience this is at the Sage Gateshead in two weeks, which will probably be an even more terrific and affective experience.

Footnotes

[footnote1]: Köhler, W (1929). Gestalt Psychology. New York: Liveright.

[footnote2]: Ramachandran, VS & Hubbard, EM (2001). Synaesthesia: A window into perception, thought and language. Journal of Consciousness Studies, 8(12), 3–34.

[footnote3]: A number of such experiments have been carried out, such as Lipscomb and Kim (2004) and Eitan and Rothschild (2010).

[footnote4]: See for instance how musicians associate shapes to music in a series of production experiments Küssner, M. B. (2013). Music and shape. Literary and Linguistic Computing, 28(3), 472-479.


Two weeks to go before the Durham performance!

February 13th at 3:36pm

by Adinda van 't Klooster

With two laptops running at all hours, progress is inevitable, at least in our case, thankfully… The classifier Nick wrote about last week is now running at an 83% success rate on data it has never seen before, which is a good result, and most of the sounds are now also complete. I hope to upload some to the sound affects page next week.

At the moment I am working on the animations; see the screen grab below. Can you guess which emotion it is meant to express?

Image © Adinda van 't Klooster, 2015

With 38 people having completed the picture survey, it now serves as a guideline as to which graphics are most successful in expressing the 8 emotions we work with for the BioCombat system (happy, sad, tender, excited, calm, angry, scared and annoyed). The most obvious finding so far is that tender is a particularly difficult emotion to express in an abstract graphic. The highest average rating received by any of my four graphics designed as tender was 26.05%, which is not very high given that people could give up to 100%. The highest-scoring category was calm, where one graphic scored 59%, followed by a close second in the sadness category, where another graphic gained 57%, and a third place in the happy category at 56%. I won't show you the particular graphics yet, as the survey is still open and will remain so for another ten days. Yes, this is your last chance to affect the performance without even leaving your chair! Go on, just click on the participate button to the right and enjoy from there.
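For the curious, the summary above amounts to a simple aggregation over the survey responses. A minimal sketch with made-up numbers (the real per-respondent ratings are not published here), where each graphic's ratings for its intended emotion are averaged and the best graphic per category is picked:

```python
# Hypothetical data: (intended_emotion, graphic_id) -> per-respondent
# ratings as percentages (0-100). Values are illustrative only.
ratings = {
    ("tender", 1): [20, 30, 28],
    ("tender", 2): [10, 25, 15],
    ("calm", 1): [55, 60, 62],
}

def best_per_emotion(ratings):
    """Return {emotion: (graphic_id, average_rating)} for the
    highest-scoring graphic in each intended-emotion category."""
    averages = {key: sum(vals) / len(vals) for key, vals in ratings.items()}
    best = {}
    for (emotion, graphic), avg in averages.items():
        if emotion not in best or avg > best[emotion][1]:
            best[emotion] = (graphic, avg)
    return best
```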

This week, Nick Collins, John Snijders and I also looked at the student submissions. A call had gone out for music students at Durham University to compose a piece of piano music expressing one or more of the 8 emotions described above. The best piece was Three Expressions by Matthew Warren, an interesting and subtle composition that will be included in the Affect Formations concerts. Congratulations Matthew!

And last but not least, Affect Formations got a little mention in this month's NARC magazine, a free magazine distributed widely throughout the North East (http://narcmagazine.com/outlets/). I keep getting the sci-fi tag entirely involuntarily, but as this does keep happening to me, perhaps there is some truth in it? Perhaps this is how people like to describe the slightly eerier side of my work. For the full article, check the Affect Formations Facebook page: https://www.facebook.com/affectformations

More work to do now, onwards!

