Monday, June 27, 2022

Fwd: Articles published in Music & Science


Music & Science provides an open-access platform for engaged debate and insight into music research that embraces a wide range of scientific perspectives. A selection of six articles illustrates the journal's highly interdisciplinary scope and focus. The articles explore topics ranging from the central role of music in mother–infant interaction, to a reassessment of the mechanisms underlying music's efficacy in therapy for autism spectrum disorder, to the responses of cell cultures to audible sound, to a model of the factors underpinning changes in the structure of humpback whale songs, to a re-evaluation and updating of the results of Alan Lomax's monumental Cantometrics project, and to a computational study of styles in Swiss yodelling. Despite the diversity of their topics, and of the fields in which their results might be expected to have significant implications, the papers clearly illustrate the broad, interdisciplinary reach of the general field of music and science. Together, these six articles provide a snapshot of the diverse and original research presented in the journal, whilst also demonstrating the value of creating a forum for dialogue between music and the sciences.

 

Fancourt, D., & Perkins, R. (2018). The effects of mother–infant singing on emotional closeness, affect, anxiety, and stress hormones. Music & Science. https://doi.org/10.1177/2059204317745746

Mother–infant closeness is vital to a human infant's survival and to the wellbeing of both mothers and infants across the life span. However, despite mother–infant singing being practised across cultures, there remains little quantitative demonstration of its effects on mothers or on their perceived closeness to their infants. In this study, Daisy Fancourt and Rosie Perkins investigated interactions among 43 mother–infant pairs, finding that mother–infant singing was associated with greater increases in maternal perceptions of emotional closeness than other interactions were. Singing was also associated with greater increases in positive affect, greater decreases in negative affect, and greater decreases in both psychological and biological markers of anxiety, supporting previous findings on the effects of singing on closeness and social bonding in other populations, as well as suggesting associations between closeness, bonding, and wider mental health.

 

Janzen, T. B., & Thaut, M. H. (2018). Rethinking the role of music in the neurodevelopment of autism spectrum disorder. Music & Science. https://doi.org/10.1177/2059204318769639

While music as therapy for autism spectrum disorder (ASD) has focused on social interaction, communication skills, and social-emotional behaviours, recently there has been an increased research focus on the role of motor and attention functions as part of the hallmark features of ASD. This article by Thenille Braun Janzen and Michael Thaut provides a critical appraisal of these developments, reassessing the role of music as an intervention to support healthy neurodevelopment in individuals with ASD. Compelling research evidence indicates that motor and attention functions are deeply implicated in the healthy neurodevelopment of socio-communication skills, and that deficits in these functions may be key indicators of structural and functional brain dysfunction in ASD. The authors suggest that the significant effect of auditory-motor entrainment on motor and attention functions and brain connectivity may lead to a critical new functional role for music in the treatment of autism.

 

Kwak, D., Combriat, T., Wang, C., Scholz, H., Danielsen, A., & Jensenius, A. R. (2022). Music for Cells? A Systematic Review of Studies Investigating the Effects of Audible Sound Played Through Speaker-Based Systems on Cell Cultures. Music & Science. https://doi.org/10.1177/20592043221080965

In a large-scale review led by Dongho Kwak, a team at RITMO in Oslo extended our knowledge of the potential effects of music in the non-human world by analysing studies of whether audible sound can be used as a cell stimulus. Their overview of studies that used audible sound played through speaker-based systems to induce mechanical perturbation in cell cultures found effects such as enhanced cell migration, proliferation, colony formation, and differentiation ability. However, they also found significant differences in methodologies and cell-type-specific outcomes, which limited the generalisability of the inferences that could be drawn from the review. They suggested that future experiments must better control their acoustic environments, use standardised sound and noise measurement methods, and explore a more comprehensive range of controlled sound parameters as cellular stimuli.

 

Mcloughlin, Michael, Lamoni, L., Garland, E. C., Ingram, S., Kirke, A., Noad, M. J., Rendell, L., & Miranda, E. (2018). Using agent-based models to understand the role of individuals in the song evolution of humpback whales (Megaptera novaeangliae). Music & Science. https://doi.org/10.1177/2059204318757021

An interdisciplinary team from the fields of cetacean biology and computer music, led by Michael McLoughlin, explored the ways in which the complex, hierarchically structured songs of male humpback whales may develop over the course of their annual migratory cycles. While these songs appear to change gradually over the breeding season, instances of more rapid song change associated with patterns of migration have also been recorded. Because individual whales cannot be tracked over long migratory routes, the authors applied methods used in computer music research to understand the mechanisms that drive these song changes. Their agent-based model combines the migratory patterns of humpback whales, a simple song learning and production method coupled with sound transmission loss, and the frequency of singing during the migratory cycle. The model shows that shared feeding grounds, where conspecifics are able to mix, provide key opportunities for cultural transmission, and that production errors facilitate gradually changing songs.
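The core idea of such an agent-based model can be sketched in a few lines. Everything here is invented for illustration (the unit alphabet, mixing parameter, and error rate are stand-ins), not the authors' actual model:

```python
import random

random.seed(1)

UNITS = list("abcdefgh")  # hypothetical song "units"

class Whale:
    """Toy agent: carries a song and copies songs it hears, with errors."""
    def __init__(self, song):
        self.song = list(song)

    def learn_from(self, other, error_rate=0.05):
        # Copy the neighbour's song, occasionally substituting a unit:
        # the "production errors" that drive gradual song change.
        self.song = [u if random.random() > error_rate else random.choice(UNITS)
                     for u in other.song]

def simulate(n_whales=20, n_seasons=30, mixing=0.5):
    """Each season, a fraction of whales meet at a shared feeding ground
    and copy a randomly chosen singer there (cultural transmission)."""
    whales = [Whale("aabbccdd") for _ in range(n_whales)]
    for _ in range(n_seasons):
        ground = random.sample(whales, int(mixing * n_whales))
        for w in ground:
            model = random.choice(ground)
            if model is not w:
                w.learn_from(model)
    return whales

whales = simulate()
n_variants = len({"".join(w.song) for w in whales})
print(n_variants)  # number of distinct song variants after mixing
```

Raising the mixing fraction spreads variants through the population faster, while the error rate controls how quickly new variants appear, which is the basic trade-off the published model explores at much greater fidelity.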

 

Savage, P. E. (2018). Alan Lomax's Cantometrics Project: A comprehensive review. Music & Science. https://doi.org/10.1177/2059204318786084

Alan Lomax's ambitious and controversial Cantometrics Project, based on analysis of approximately 1,800 songs from 148 worldwide populations using 36 classificatory features, sparked extensive and fierce debate, and the project never gained mainstream acceptance. In a comprehensive critical review of the Cantometrics Project, focusing on issues regarding the song sample, classification scheme, statistical analyses, interpretation, and ethnocentrism/reductionism, Pat Savage distils Lomax's sometimes-conflicting claims into diagrams summarising his three primary results: (1) ten regional song-style types, (2) nine musical factors representing intra-musical correlations, and (3) correlations between these musical factors and five factors of social structure. While the links Lomax claimed to have uncovered between song style and social structure are only weakly supported, Savage shows that Lomax's historical interpretations regarding connections ranging from colonial diaspora to ancient migrations provide a more promising starting point for both research and teaching about the global arts.

 

Wey, Y., & Metzig, C. (2021). Machine Learning Classification of Regional Swiss Yodel Styles Based on Their Melodic Attributes. Music & Science. https://doi.org/10.1177/20592043211004497

Yannick Wey and Cornelia Metzig provide a first computational analysis of alpine yodelling, a style of singing that is both idiosyncratic and of national significance. Through a classification of yodel styles based on their melodic features, the authors demonstrate significant regional differences between yodel tunes in Switzerland, and reveal the most salient musical features that contribute to different yodel styles. The study provides empirical evidence to support anecdotal claims from folklore studies that yodelling, often assumed to have a 'national' style, is in fact rooted in distinct geographic regions. Moreover, the work offers an innovative methodology with potential applications to the analysis of other developing or ambiguous genres of music.
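A toy version of this kind of pipeline: summarise each tune by its distribution of melodic intervals, then assign new tunes to the nearest regional centroid. The region names, pitch sequences, and classifier below are invented stand-ins for illustration, not the authors' actual data or method:

```python
from collections import Counter

def interval_histogram(pitches, max_interval=12):
    """Fraction of melodic intervals of each absolute size (in semitones)."""
    ivs = [abs(b - a) for a, b in zip(pitches, pitches[1:])]
    counts = Counter(min(i, max_interval) for i in ivs)
    total = len(ivs) or 1
    return [counts.get(i, 0) / total for i in range(max_interval + 1)]

def train_centroids(labelled_tunes):
    """Mean feature vector per region: a nearest-centroid classifier."""
    by_region = {}
    for region, pitches in labelled_tunes:
        by_region.setdefault(region, []).append(interval_histogram(pitches))
    return {r: [sum(col) / len(vs) for col in zip(*vs)]
            for r, vs in by_region.items()}

def classify(pitches, centroids):
    """Assign a tune to the region with the closest interval profile."""
    feats = interval_histogram(pitches)
    def dist(region):
        return sum((f - x) ** 2 for f, x in zip(feats, centroids[region]))
    return min(centroids, key=dist)

# Invented toy data: MIDI pitch sequences standing in for yodel melodies.
tunes = [
    ("RegionA", [60, 64, 67, 72, 67, 64, 60]),   # wide, triadic leaps
    ("RegionA", [60, 67, 64, 72, 67, 60]),
    ("RegionB", [60, 62, 63, 62, 60, 62, 63]),   # narrow, stepwise motion
    ("RegionB", [62, 63, 62, 60, 62, 60]),
]
centroids = train_centroids(tunes)
print(classify([60, 64, 67, 64, 72, 67], centroids))  # → RegionA
```

Inspecting which histogram bins separate the centroids most strongly plays the role of the study's "most salient musical features", albeit on a drastically simplified feature set.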

 

Find out more about the journal here: https://us.sagepub.com/en-us/nam/music-science/journal202491.

 

Ian Cross (Editor-in-Chief), Adam Ockelford, Graham Welch, Emily Payne (Assistant Editor)

 

 

Dr Emily Payne (she/her)

Lecturer in Music

Assistant Editor, Music & Science

 


Thursday, June 23, 2022

Fwd: [DMRN-LIST] 6 PhD fellowships at University of Oslo

Dear all,

We are happy to announce 6 PhD fellowships affiliated with RITMO, University of Oslo.

Feel free to pass on to relevant candidates and don't hesitate to get in touch if you have questions.

The application deadline for all positions is 1 September 2022.

Best,

--
Alexander Refsum Jensenius [he/him]
Professor, Department of Musicology, University of Oslo
https://people.uio.no/alexanje

Deputy Director, RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion
https://www.uio.no/ritmo/english/

Director, fourMs Lab
https://fourms.uio.no

Chair, NIME Steering Committee
https://www.nime.org

Master's programme: "Music, Communication & Technology"
http://www.uio.no/mct-master

New online course: Motion Capture: The Art of Studying Human Activity
https://www.futurelearn.com/courses/motion-capture-course

Sunday, June 19, 2022

Fwd: [DMRN-LIST] Fully funded PhD position in Sheffield



We are excited to advertise an EPSRC-funded PhD position that is co-supervised between Music and Computer Science, in collaboration with the University of Sheffield's Healthy Lifespan Institute. Funding is for 3.5 years (Home fees and stipend).

Project - Personalising interaction-technology for dementia: AI-enabled musical instrument training.
Supervisors - Jennifer MacRitchie, Guy Brown, Renee Timmers

Deadline for applications: 30 June 2022

Please see: https://protect-au.mimecast.com/s/YuyhCD1vmxcBPBA9iW5JUw?domain=findaphd.com

Project in short:
This project proposes harnessing artificial intelligence (AI) in order to create a flexible digital music instrument, enabling people with dementia and different degrees of fine motor impairments to be able to make music together. Machine learning methods offer a means by which interaction with a digital musical instrument can be driven by data, through the learning of personalised models that capture information about the musical ability, style of interaction and motor abilities of specific users.

Broader context:
The postgraduate researcher will be part of a team of researchers working on music technology for people with dementia, and will benefit from the broader contexts offered by the Department of Music (Psychology of Music), Computer Science (AI and machine learning), and HELSI (dementia care, aging and multimorbidity).

Further info - Renee Timmers (r.timmers@sheffield.ac.uk)

Thursday, June 16, 2022

Imperial College Choir: Jubilate (17/06/2022)

JUBILATE
Friday 17th June 2022 - 7:30pm
Holy Trinity, Prince Consort Road - SW7 2BA
A CONCERT OF ENGLISH MUSIC FROM THE REIGN OF ELIZABETH I TO ELIZABETH II
Standard: £5
Students: FREE
Donations welcome
Tickets: tinyurl.com/imperialchoirpr

Fwd: [DMRN-LIST] PhD Scholarships at University of West London

Apologies for cross-posting; please forward to potentially interested parties in your networks
=====================================================================

London College of Music | University of West London

The Vice-Chancellor's PhD Scholarships


We have a number of positions for three-year, fully funded PhD Scholarships (fees plus an annual stipend of £17,000). These will be available to all eligible UK students.

There are opportunities across the LCM portfolio in Music: Performance, Composition, Technology, Business, and Performing Arts. We have a number of specialist research programmes co-supervised by leaders from industry. Further details can be found at: https://www.uwl.ac.uk/research/research-degrees/phd-opportunities/research-degrees-london-college-music

The University welcomes applicants who wish to study for a PhD research degree. Successful applicants will join a vibrant and challenging academic environment where innovation, insight, and knowledge creation feed into high-quality research. Please contact the relevant supervisor with any enquiries and for support with your application.

For further information about the Vice-Chancellor's Scholarships please visit our website:  https://www.uwl.ac.uk/research/research-degrees/phd-opportunities

Application deadline: Wednesday 6 July 2022.

Interviews will take place between 11 and 29 July 2022.


Best regards,

Justin Paterson

Professor of Music Production

Tuesday, June 14, 2022

Fwd: June music-data seminar

The last of this year's music-data seminars will take place (virtually) on Monday 27 June at 4pm (UK time).

Psyche Loui - Generation of New Musical Preferences from Hierarchical Mapping of Predictions to Reward

Abstract:
Prediction learning is considered a ubiquitous feature of biological systems that underlies perception, action, and reward. For cultural artifacts such as music, isolating the genesis of reward from prediction is challenging, since predictions are acquired implicitly throughout life. Here, we examined the trajectory of listeners' preferences for melodies in a novel musical system, where predictions were systematically manipulated. Across seven studies (n = 842 total) in two cultures, preferences scaled with predictions: participants preferred melodies that were presented more during exposure (global predictions) and that followed schematic expectations (local predictions). Learning trajectories depended on music reward sensitivity. Furthermore, fMRI showed that while auditory cortical activity reflects predictions, functional connectivity between auditory and reward areas encodes preference. The results are the first to highlight the hierarchical, relatively culturally-independent process by which predictions map onto reward. Collectively, our findings propose a novel mechanism by which the human brain links predictions with reward value.

Bio:
Psyche Loui is Associate Professor of Creativity and Creative Practice in the Department of Music and director of the MIND (Music, Imaging, and Neural Dynamics) lab at Northeastern University. She received her PhD in Psychology from the University of California, Berkeley, having attended Duke University as an undergraduate with degrees in Psychology and Music. Dr. Loui studies the neuroscience of music perception and cognition, tackling questions such as: What gives people the chills when they are moved by a piece of music? How does connectivity in the brain enable or disrupt music perception? Can music be used to help those with neurological and psychiatric disorders? Dr. Loui's work has been supported by the National Institutes of Health and the Grammy Foundation, and has received a young investigator award from the Positive Neuroscience Institute and a CAREER award from the National Science Foundation; it has been featured by the Associated Press, the New York Times, the Boston Globe, the BBC, CNN, The Scientist magazine, and other news outlets.

___________________________________________________
Dr. Oded Ben-Tal
Senior Lecturer, Music Technology
Kingston University


Monday, June 13, 2022

Fwd: Research Fellow in Room Acoustic Modelling

RESEARCH FELLOW IN ROOM ACOUSTIC MODELLING
University of Surrey (UK)
Salary: £33,309 to £38,587 per annum
Fixed Term for 24 months
Post Type: Full Time
Closing Date: 23.59 hours BST on Sunday 26 June 2022

Applications are invited for a Research Fellow to be based in the Institute of Sound Recording (IoSR, http://iosr.uk) and to work full-time on the EPSRC project SCReAM ("SCalable Room Acoustic Modelling"). The post is available for 24 months, from 1 August 2022 until 31 July 2024. For an exceptional candidate, a later start date may be accommodated, subject to approval from the funder. Applications can be submitted until 26 June 2022, but you are encouraged to apply as soon as possible, since interviews may start before the deadline.

The post-holder will work on exploring connections between room acoustic models; defining new unifying and scalable room acoustic models; adapting those models for application in e.g. consumer electronics, computer games, immersive media, and architectural acoustics. 

The successful applicant will have a range of skills, including some of the following: strong, independent research skills; an excellent signal processing background; knowledge of room acoustic models; expertise/interest in numerical acoustics; enthusiasm for working with project partners at other universities and organisations (including, among others, KU Leuven, Electronic Arts and Sonos). 

The IoSR is home to the Tonmeister degree in Music and Sound Recording, which has produced a stream of highly successful graduates (including three Oscar winners, seven Grammy winners, and twelve BAFTA winners), and is a leading centre for research in acoustic engineering. It has several projects funded by research councils and industry, involving human listening tests, acoustic measurement, statistical modelling, and digital signal processing. Current work includes, for example, developing systems for spatial enhancement of object-based audio reproduction, for timbral perception modelling, and for next-generation environment-aware headphones.

For more information about the SCReAM project, see https://www.scream-project.org. To apply, go to: https://jobs.surrey.ac.uk/Vacancy.aspx?ref=033022. Informal enquiries may be made to the project lead, Dr Enzo De Sena (e.desena@surrey.ac.uk).

In return we offer a generous pension, relocation assistance where appropriate, flexible working options including job share and blended home/campus working locations (dependent on work duties), access to world-class leisure facilities on campus, a range of travel schemes, and supportive family-friendly benefits including an excellent on-site nursery.


The University of Surrey is committed to providing an inclusive environment that offers equal opportunities for all.  We place great value on diversity and are seeking to increase the diversity within our community.  Therefore we particularly encourage applications from under-represented groups, such as people from Black, Asian and minority ethnic groups and people with disabilities.

Best regards,
  Enzo

--
Enzo De Sena
Senior Lecturer (Associate Professor)
Institute of Sound Recording
Department of Music & Media
University of Surrey
Guildford, Surrey, GU2 7XH, UK

Thursday, June 9, 2022

Fwd: [DMRN-LIST] London NIME watching event at C4DM

The Augmented Instruments Lab in QMUL's Centre for Digital Music is hosting a local "watch party" for the upcoming NIME conference. The event will be held during the conference, from 28 June to 1 July, 12pm to 12am each day, running at the same time as the online conference sessions hosted by the University of Auckland. The virtual conference programme can be found here: https://nime2022.org/program.html

This event is aimed mainly at those in reasonable commuting distance from London, though all are welcome to join for informal discussion, networking, and fun during the NIME conference. We will watch the sessions together live and chat together in the breaks during the conference. Additionally, we will facilitate casual demo and work-in-progress showcases for those who wish to trial or get feedback about their work in an in-person setting.

We'll be joined at the event by special guest Fabio Morreale, co-chair of NIME 2022, who will be in London for the week of the conference!

The event is free to attend for anyone registered for the NIME conference. Feel free to attend for only a part of the conference. If you plan to attend, please fill out the form below to indicate which days you will join us (the form is editable, so please feel free to update your sign-up if needed).

https://forms.gle/bKHj2eyRHsH7ezGBA


Sign-ups will close on June 22, so please sign up as soon as you can! We will send more detailed joining instructions to people who register via the survey.

Meanwhile please feel free to email me or Courtney Reed (c.n.reed@qmul.ac.uk) with questions or suggestions.

Best wishes,
Andrew


--
Andrew McPherson
Professor of Musical Interaction
Centre for Digital Music
School of Electronic Engineering and Computer Science
Queen Mary, University of London
Mile End Road
London E1 4NS

Tuesday, June 7, 2022

Fwd: PhD studentship on Neuro-Symbolic Modelling of Music (Durham, UK)

Dear list,

I am pleased to share a call for a funded PhD studentship at the Department of Computer Science, Durham University on neuro-symbolic modelling of music to begin October 2022.

For more information, see below and/or the position ad at Neuro-symbolic modelling of music (PhD studentship) | EURAXESS (europa.eu)

If you are interested in applying for this position, please do not hesitate to contact me (by reply) or Dr Robert Lieck (robert.lieck@durham.ac.uk) with a short motivating statement in the first instance. Applications are open and will be considered on a rolling basis.

Kind regards,

Dr Eamonn Bell
Department of Computer Science
Durham University

https://www.durham.ac.uk/staff/eamonn-bell/

---

More information can be found here.

This funded PhD position is about developing novel algorithmic tools for music analysis using deep learning and structured/symbolic methods. It will combine approaches from computational musicology, image analysis, and natural language processing to advance the state of the art in the field.

Music analysis is a highly challenging task for which artificial intelligence (AI) and machine learning (ML) is lagging far behind the capabilities of human experts. Solving it requires a combination of two different model types: (1) neural networks and deep learning techniques to extract features from the input data and (2) structured graphical models and artificial grammars to represent the complex dependencies in a musical piece. The central goal of the project is to leverage the synergies from combining these techniques to build models that achieve human-expert level performance in analysing the structure of a musical piece.

You will get:

  • the chance to do your PhD at a world-class university and conduct groundbreaking research in machine learning and artificial intelligence
  • the opportunity to work on an interdisciplinary project with real-world applications in the field of music
  • committed supervision and comprehensive training (regular one-on-one meetings, ample time for discussion, detailed feedback, support in your scientific development, e.g., presentation skills, research methodology, scientific writing etc.)
  • a stimulating, diverse, and supportive research environment (as a member of the interdisciplinary AIHS group)
  • the opportunity to publish in top journals, attend international conferences, and build a network of collaborations

You should bring:

  • enthusiasm for interdisciplinary research in artificial intelligence and music
  • an open mind-set and creative problem-solving skills
  • a solution-oriented, can-do mentality
  • a desire to understand the structure of music and its inner workings
  • a good command of a modern programming language (preferably Python) and familiarity with a modern deep learning framework (e.g. PyTorch)
  • a strong master's degree (or equivalent) with a significant mathematical or computational component

If you are interested, please send an email with your CV and a short informal motivation to Robert Lieck (robert.lieck@durham.ac.uk) for initial discussions.

Important Note: We are looking to fill this position as soon as possible (the position is still open as long as it is advertised) and are accepting applications on a rolling basis. The preferred start date is October 2022 (new academic year). We would particularly like to encourage applications from women, disabled, Black, Asian and other minority ethnic candidates, since these groups are currently underrepresented in our area.


Fwd: PhD position in Sheffield (Computer Science & Music)



Dear all

We are excited to advertise an EPSRC-funded PhD position that is co-supervised between Music and Computer Science, in collaboration with the University of Sheffield's Healthy Lifespan Institute. Funding is for 3.5 years (Home fees and stipend).

Project - Personalising interaction-technology for dementia: AI-enabled musical instrument training. 
Supervisors - Jennifer MacRitchie, Guy Brown, Renee Timmers 

Deadline for applications: 30 June 2022

Project in short:
This project proposes harnessing artificial intelligence (AI) in order to create a flexible digital music instrument, enabling people with dementia and different degrees of fine motor impairments to be able to make music together. Machine learning methods offer a means by which interaction with a digital musical instrument can be driven by data, through the learning of personalised models that capture information about the musical ability, style of interaction and motor abilities of specific users.

Broader context: 
The postgraduate researcher will be part of a team of researchers working on music technology for people with dementia, and will benefit from the broader contexts offered by the Department of Music (Psychology of Music), Computer Science (AI and machine learning), and HELSI (dementia care, aging and multimorbidity). 

Further info - Renee Timmers (r.timmers@sheffield.ac.uk)

Best
Renee 

--

Professor Renee Timmers (she/her)
Department of Music, University of Sheffield

Fwd: Lecturer vacancy AudioLab University of York

Apologies for cross-posting. 

Please find details of a Lectureship opportunity at the AudioLab, University of York; the deadline for applications is 22 June.




--


Dr Helena Daffern (she/her)
Senior Lecturer
Director of York Centre for Singing Science