Thursday, June 23, 2022

Fwd: [DMRN-LIST] 6 PhD fellowships at University of Oslo

Dear all,

We are happy to announce 6 PhD fellowships affiliated with RITMO, University of Oslo.

Feel free to pass on to relevant candidates and don't hesitate to get in touch if you have questions.

The application deadline for all positions is 1 September 2022.

Best,

--
Alexander Refsum Jensenius [he/him]
Professor, Department of Musicology, University of Oslo
https://people.uio.no/alexanje

Deputy Director, RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion
https://www.uio.no/ritmo/english/

Director, fourMs Lab
https://fourms.uio.no

Chair, NIME Steering Committee
https://www.nime.org

Master's programme: "Music, Communication & Technology"
http://www.uio.no/mct-master

New online course: Motion Capture: The Art of Studying Human Activity
https://www.futurelearn.com/courses/motion-capture-course

Sunday, June 19, 2022

Fwd: [DMRN-LIST] Fully funded PhD position in Sheffield



We are excited to advertise an EPSRC funded PhD position that is co-supervised between Music and Computer Science, in collaboration with the University of Sheffield's Healthy Lifespan Institute. The funding covers 3.5 years (Home fees and stipend).

Project - Personalising interaction-technology for dementia: AI-enabled musical instrument training.
Supervisors - Jennifer MacRitchie, Guy Brown, Renee Timmers

Deadline for applications: 30 June 2022

Please see: https://protect-au.mimecast.com/s/YuyhCD1vmxcBPBA9iW5JUw?domain=findaphd.com

Project in short:
This project proposes harnessing artificial intelligence (AI) to create a flexible digital musical instrument, enabling people with dementia and varying degrees of fine motor impairment to make music together. Machine learning methods offer a means by which interaction with a digital musical instrument can be driven by data, through the learning of personalised models that capture information about the musical ability, style of interaction and motor abilities of specific users.
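As a purely hypothetical illustration of what such a personalised model might look like in practice (this is not part of the advertised project), the short Python sketch below fits a nearest-centroid gesture model to one player's calibration data, so that their own style of movement maps to the notes they intend. The sensor dimensions, note names, and data are all invented for the example.

import numpy as np

# Small note palette and 3-dimensional "sensor" readings are assumptions
# made purely for illustration.
NOTES = ["C4", "E4", "G4", "C5"]


def fit_user_model(calibration_gestures):
    """calibration_gestures: {note: array of shape (n_samples, n_features)}.
    Returns one centroid per note, i.e. a per-player gesture template."""
    return {note: samples.mean(axis=0)
            for note, samples in calibration_gestures.items()}


def predict_note(model, gesture):
    """Map a new gesture to the closest learned template for this player."""
    distances = {note: np.linalg.norm(gesture - centroid)
                 for note, centroid in model.items()}
    return min(distances, key=distances.get)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Fake calibration session: each note gets its own region of gesture
    # space, with per-player variability in range and steadiness.
    calibration = {note: rng.normal(loc=i, scale=0.3, size=(20, 3))
                   for i, note in enumerate(NOTES)}
    model = fit_user_model(calibration)
    new_gesture = rng.normal(loc=2, scale=0.3, size=3)  # resembles "G4"
    print(predict_note(model, new_gesture))

A richer model would also track changes in motor ability over time, but the same fit-per-player idea applies.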

Broader context:
The postgraduate researcher will be part of a team of researchers working on music technology for people with dementia, and will benefit from the broader contexts offered by the Department of Music (Psychology of Music), Computer Science (AI and machine learning), and HELSI (dementia care, aging and multimorbidity).

Further info - Renee Timmers (r.timmers@sheffield.ac.uk)

Thursday, June 16, 2022

Imperial College Choir: Jubilate (17/06/2022)

JUBILATE
Friday 17th June 2022 - 7:30pm
Holy Trinity, Prince Consort Road - SW7 2BA
A CONCERT OF ENGLISH MUSIC FROM THE REIGN OF ELIZABETH I TO ELIZABETH II
Standard: £5
Students: FREE
Donations welcome
Tickets: tinyurl.com/imperialchoirpr

Fwd: [DMRN-LIST] PhD Scholarships at University of West London

Apologies for cross-posting; please forward to potentially interested parties in your networks
=====================================================================

London College of Music | University of West London

The Vice-Chancellor's PhD Scholarships


We have a number of three-year, fully funded PhD Scholarships (fees plus an annual stipend of £17,000), available to all eligible UK students.

There are opportunities across the LCM portfolio in Music: Performance, Composition, Technology, Business, and Performing Arts. We have a number of specialist research programmes co-supervised by leaders from industry. Further details can be found at: https://www.uwl.ac.uk/research/research-degrees/phd-opportunities/research-degrees-london-college-music

The University welcomes applicants who wish to study for a PhD research degree. Successful applicants will join a vibrant and challenging academic environment where innovation, insight, and knowledge creation feed into high-quality research. Please contact the relevant supervisor with any enquiries and for support with your application.

For further information about the Vice-Chancellor's Scholarships please visit our website:  https://www.uwl.ac.uk/research/research-degrees/phd-opportunities

Application deadline: Wednesday 6 July 2022.

Interviews will take place between 11 and 29 July 2022.


Best regards,

Justin Paterson

Professor of Music Production

Tuesday, June 14, 2022

Fwd: June music-data seminar

The last of this year's music-data seminars will take place (virtually) on Monday 27/6 at 4pm (UK time).

Psyche Loui - Generation of New Musical Preferences from Hierarchical Mapping of Predictions to Reward

Abstract:
Prediction learning is considered a ubiquitous feature of biological systems that underlies perception, action, and reward. For cultural artifacts such as music, isolating the genesis of reward from prediction is challenging, since predictions are acquired implicitly throughout life. Here, we examined the trajectory of listeners' preferences for melodies in a novel musical system, where predictions were systematically manipulated. Across seven studies (n = 842 total) in two cultures, preferences scaled with predictions: participants preferred melodies that were presented more during exposure (global predictions) and that followed schematic expectations (local predictions). Learning trajectories depended on music reward sensitivity. Furthermore, fMRI showed that while auditory cortical activity reflects predictions, functional connectivity between auditory and reward areas encodes preference. The results are the first to highlight the hierarchical, relatively culturally-independent process by which predictions map onto reward. Collectively, our findings propose a novel mechanism by which the human brain links predictions with reward value.

Bio:
Psyche Loui is Associate Professor of Creativity and Creative Practice in the Department of Music and director of the MIND (Music, Imaging, and Neural Dynamics) lab at Northeastern University. She graduated from the University of California, Berkeley with her PhD in Psychology, and attended Duke University as an undergraduate with degrees in Psychology and Music. Dr. Loui studies the neuroscience of music perception and cognition, tackling questions such as: What gives people the chills when they are moved by a piece of music? How does connectivity in the brain enable or disrupt music perception? Can music be used to help those with neurological and psychiatric disorders? Dr. Loui's work has been supported by the National Institutes of Health and has received multiple Grammy awards, a young investigator award from the Positive Neuroscience Institute, and a Career award from the National Science Foundation, and has been featured by the Associated Press, New York Times, Boston Globe, BBC, CNN, The Scientist magazine, and other news outlets.

___________________________________________________
Dr. Oded Ben-Tal
Senior Lecturer, Music Technology
Kingston University


Monday, June 13, 2022

Fwd: Research Fellow in Room Acoustic Modelling

RESEARCH FELLOW IN ROOM ACOUSTIC MODELLING
University of Surrey (UK)
Salary: £33,309 to £38,587 per annum
Fixed Term for 24 months
Post Type: Full Time
Closing Date: 23.59 hours BST on Sunday 26 June 2022

Applications are invited for a Research Fellow to be based in the Institute of Sound Recording (IoSR, http://iosr.uk) and to work full-time on the EPSRC project SCReAM ("SCalable Room Acoustic Modelling"). The post is available for 24 months, from 1 August 2022 until 31 July 2024. For an exceptional candidate, a later start date may be accommodated, subject to approval from the funder. Applications can be submitted up until 26 June 2022, but you are encouraged to apply as soon as possible, since interviews may start before the deadline.

The post-holder will work on exploring connections between room acoustic models; defining new unifying and scalable room acoustic models; and adapting those models for application in, e.g., consumer electronics, computer games, immersive media, and architectural acoustics.

The successful applicant will have a range of skills, including some of the following: strong, independent research skills; an excellent signal processing background; knowledge of room acoustic models; expertise/interest in numerical acoustics; enthusiasm for working with project partners at other universities and organisations (including, among others, KU Leuven, Electronic Arts and Sonos). 

The IoSR is home to the Tonmeister degree in Music and Sound Recording, which has produced a stream of highly successful graduates (including three Oscar winners, seven Grammy winners, and twelve BAFTA winners), and is a leading centre for research in acoustic engineering. It has several projects funded by research councils and industry, involving human listening tests, acoustic measurement, statistical modelling and digital signal processing. Current work includes, for example, developing systems for spatial enhancement of object-based audio reproduction, for timbral perception modelling, and for next-generation environment-aware headphones.

For more information about the SCReAM project, see https://www.scream-project.org. To apply, go to: https://jobs.surrey.ac.uk/Vacancy.aspx?ref=033022. Informal enquiries may be made to the project lead, Dr Enzo De Sena (e.desena@surrey.ac.uk).

In return we offer a generous pension, relocation assistance where appropriate, flexible working options including job share and blended home/campus working locations (dependent on work duties), access to world-class leisure facilities on campus, a range of travel schemes and supportive family-friendly benefits including an excellent on-site nursery.


The University of Surrey is committed to providing an inclusive environment that offers equal opportunities for all.  We place great value on diversity and are seeking to increase the diversity within our community.  Therefore we particularly encourage applications from under-represented groups, such as people from Black, Asian and minority ethnic groups and people with disabilities.

Best regards,
  Enzo

--
Enzo De Sena
Senior Lecturer (Associate Professor)
Institute of Sound Recording
Department of Music & Media
University of Surrey
Guildford, Surrey, GU2 7XH, UK

Thursday, June 9, 2022

Fwd: [DMRN-LIST] London NIME watching event at C4DM

The Augmented Instruments Lab in QMUL's Centre for Digital Music is hosting a local "watch party" for the upcoming NIME conference. The event will run during the conference, from 28 June to 1 July, 12 noon to 12 midnight each day, at the same time as the online conference sessions hosted by the University of Auckland. The virtual conference programme can be found here: https://nime2022.org/program.html

This event is aimed mainly at those within reasonable commuting distance of London, though all are welcome to join for informal discussion, networking, and fun during the NIME conference. We will watch the sessions together live and chat in the breaks. Additionally, we will facilitate casual demo and work-in-progress showcases for those who wish to trial their work or get feedback on it in an in-person setting.

We'll be joined at the event by special guest Fabio Morreale, co-chair of NIME 2022, who will be in London for the week of the conference!

The event is free to attend for anyone registered for the NIME conference. Feel free to attend for only a part of the conference. If you plan to attend, please fill out the form below to indicate which days you will join us (the form is editable, so please feel free to update your sign-up if needed).

https://forms.gle/bKHj2eyRHsH7ezGBA


Sign-ups will close on June 22, so please sign up as soon as you can! We will send more detailed joining instructions to people who register via the survey.

Meanwhile please feel free to email me or Courtney Reed (c.n.reed@qmul.ac.uk) with questions or suggestions.

Best wishes,
Andrew


--
Andrew McPherson
Professor of Musical Interaction
Centre for Digital Music
School of Electronic Engineering and Computer Science
Queen Mary, University of London
Mile End Road
London E1 4NS

Tuesday, June 7, 2022

Fwd: PhD studentship on Neuro-Symbolic Modelling of Music (Durham, UK)

Dear list,

I am pleased to share a call for a funded PhD studentship at the Department of Computer Science, Durham University, on neuro-symbolic modelling of music, to begin in October 2022.

For more information, see below and/or the position ad at Neuro-symbolic modelling of music (PhD studentship) | EURAXESS (europa.eu)

If you are interested in applying for this position, please do not hesitate to contact me (by reply) or Dr Robert Lieck (robert.lieck@durham.ac.uk) with a short motivating statement in the first instance. Applications are open and will be considered on a rolling basis.

Kind regards,

Dr Eamonn Bell
Department of Computer Science
Durham University

https://www.durham.ac.uk/staff/eamonn-bell/

---

More information can be found here.

This funded PhD position is about developing novel algorithmic tools for music analysis using deep learning and structured/symbolic methods. It will combine approaches from computational musicology, image analysis, and natural language processing to advance the state of the art in the field.

Music analysis is a highly challenging task for which artificial intelligence (AI) and machine learning (ML) are lagging far behind the capabilities of human experts. Solving it requires a combination of two different model types: (1) neural networks and deep learning techniques to extract features from the input data and (2) structured graphical models and artificial grammars to represent the complex dependencies in a musical piece. The central goal of the project is to leverage the synergies from combining these techniques to build models that achieve human-expert-level performance in analysing the structure of a musical piece.
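To make that two-part combination concrete, here is a minimal, hypothetical PyTorch sketch (not the project's actual model): a small convolutional encoder stands in for the neural feature extractor, scoring each frame of a piano-roll, and a linear-chain Viterbi decoder stands in for the structured model, enforcing sequential dependencies between the predicted labels. Every name, shape, the chord-label set, and the transition scores are illustrative assumptions.

import torch
import torch.nn as nn

NUM_PITCHES = 88   # piano-roll height (assumption)
NUM_LABELS = 24    # e.g. 12 major + 12 minor chord classes (assumption)


class FrameEncoder(nn.Module):
    """Neural component: maps piano-roll frames to per-label scores."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(NUM_PITCHES, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(64, NUM_LABELS, kernel_size=1),
        )

    def forward(self, piano_roll):       # (batch, NUM_PITCHES, time)
        return self.net(piano_roll)      # (batch, NUM_LABELS, time)


def viterbi(emissions, transitions):
    """Structured component: globally optimal label path under a
    linear-chain model.

    emissions:   (time, NUM_LABELS) log scores from the neural encoder
    transitions: (NUM_LABELS, NUM_LABELS) log transition scores
    """
    score = emissions[0].clone()
    backpointers = []
    for t in range(1, emissions.shape[0]):
        # candidate[i, j] = score of being in label i and moving to label j
        candidate = score.unsqueeze(1) + transitions
        backpointers.append(candidate.argmax(dim=0))
        score = candidate.max(dim=0).values + emissions[t]
    # trace back the best path from the final frame
    path = [int(score.argmax())]
    for best_prev in reversed(backpointers):
        path.append(int(best_prev[path[-1]]))
    return list(reversed(path))


if __name__ == "__main__":
    encoder = FrameEncoder()
    roll = torch.rand(1, NUM_PITCHES, 100)           # fake 100-frame excerpt
    emissions = encoder(roll)[0].transpose(0, 1)     # (time, NUM_LABELS)
    # Simple structural prior: staying on the same label is cheaper than
    # changing labels; a real model would learn this (or a richer grammar).
    transitions = torch.full((NUM_LABELS, NUM_LABELS), -2.0)
    transitions.fill_diagonal_(0.0)
    print(viterbi(emissions.log_softmax(dim=-1), transitions)[:10])

In a full system the transition structure (or grammar) would be learned jointly with the encoder; the sketch only shows how the two kinds of model slot together.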

You will get:

  • the chance to do your PhD at a world-class university and conduct groundbreaking research in machine learning and artificial intelligence
  • the opportunity to work on an interdisciplinary project with real-world applications in the field of music
  • committed supervision and comprehensive training (regular one-on-one meetings, ample time for discussion, detailed feedback, support in your scientific development, e.g., presentation skills, research methodology, scientific writing etc.)
  • a stimulating, diverse, and supportive research environment (as a member of the interdisciplinary AIHS group)
  • the opportunity to publish in top journals, attend international conferences, and build a network of collaborations

You should bring:

  • enthusiasm for interdisciplinary research in artificial intelligence and music
  • an open mind-set and creative problem-solving skills
  • a solution-oriented, can-do mentality
  • a desire to understand the structure of music and its inner workings
  • a good command of a modern programming language (preferably Python) and familiarity with a modern deep learning framework (e.g. PyTorch)
  • a strong master's degree (or equivalent) with a significant mathematical or computational component

If you are interested, please send an email with your CV and a short informal motivation to Robert Lieck (robert.lieck@durham.ac.uk) for initial discussions.

Important Note: We are looking to fill this position as soon as possible (the position is still open as long as it is advertised) and are accepting applications on a rolling basis. The preferred start date is October 2022 (new academic year). We would particularly like to encourage applications from women, disabled, Black, Asian and other minority ethnic candidates, since these groups are currently underrepresented in our area.


Fwd: PhD position in Sheffield (Computer Science & Music)



Dear all

We are excited to advertise an EPSRC funded PhD position that is co-supervised between Music and Computer Science, in collaboration with the University of Sheffield's Healthy Lifespan Institute. The funding covers 3.5 years (Home fees and stipend).

Project - Personalising interaction-technology for dementia: AI-enabled musical instrument training. 
Supervisors - Jennifer MacRitchie, Guy Brown, Renee Timmers 

Deadline for applications: 30 June 2022

Project in short:
This project proposes harnessing artificial intelligence (AI) to create a flexible digital musical instrument, enabling people with dementia and varying degrees of fine motor impairment to make music together. Machine learning methods offer a means by which interaction with a digital musical instrument can be driven by data, through the learning of personalised models that capture information about the musical ability, style of interaction and motor abilities of specific users.

Broader context: 
The postgraduate researcher will be part of a team of researchers working on music technology for people with dementia, and will benefit from the broader contexts offered by the Department of Music (Psychology of Music), Computer Science (AI and machine learning), and HELSI (dementia care, aging and multimorbidity). 

Further info - Renee Timmers (r.timmers@sheffield.ac.uk)

Best
Renee 

--

Professor Renee Timmers (she/her)
Department of Music, University of Sheffield

Fwd: Lecturer vacancy AudioLab University of York

Apologies for cross-posting. 

Please find details of a Lectureship opportunity at the AudioLab, University of York; the deadline for applications is 22 June.




--


Dr Helena Daffern (she/her)
Senior Lecturer
Director of York Centre for Singing Science