Wednesday, December 16, 2020

Fwd: [DMRN-LIST] Ten PhD/postdoc positions at University of Oslo



Dear all,

We are happy to announce several doctoral and postdoctoral fellowships at RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion at the University of Oslo.

There is also one more doctoral fellowship at the Department of Musicology that may be of relevance to this list.

The positions have slightly different application deadlines (from 31 January to 15 March), so please check the details for each position.

Please forward to relevant candidates and do not hesitate to get in touch if you have any questions.

Apologies for cross-posting.

Best,
--
Alexander Refsum Jensenius
Professor, Department of Musicology, University of Oslo
https://people.uio.no/alexanje
Deputy Director, RITMO Centre for Interdisciplinary Studies in Rhythm, Time, and Motion
https://www.uio.no/ritmo/english/
Director, fourMs Lab
https://fourms.uio.no
Chair, NIME Steering Committee
https://www.nime.org
New master's programme: "Music, Communication & Technology"
http://www.uio.no/mct-master

Tuesday, December 8, 2020

Fwd: [DMRN-LIST] CfP: Special session on representation learning for audio, music, and speech processing



Dear colleagues, 

We are happy to share the call for papers for the special session

"Representation Learning for Audio, Speech, and Music Processing" 

at the International Joint Conference on Neural Networks (IJCNN) 2021. 

Papers for the special session are submitted through the conference submission portal (as regular papers) and undergo the same full-paper peer review process as any other paper at IJCNN 2021.

Special session website: https://dr-costas.github.io/rlasmp2021-website/  
Conference website: https://www.ijcnn.org  

Accepted special sessions at IJCNN: https://www.ijcnn.org/accepted-special-sessions  


=====================================

Important dates: 

Paper submission: 15th of January, 2021
Notification of acceptance: 15th of March, 2021
Camera ready submission: 30th of March, 2021

=====================================

Scope and topics: 

In the last decade, deep learning has revolutionized the research fields of audio and speech signal processing, acoustic scene analysis, and music information retrieval. In these fields, methods relying on deep learning have achieved remarkable performance in various applications and tasks, surpassing legacy methods that apply signal processing operations and machine learning algorithms independently of one another. The success of deep learning methods relies on their ability to learn representations from sound signals that are useful for various downstream tasks. These representations encapsulate the underlying structure or features of the sound signals, or the latent variables that describe their underlying statistics.

Despite this success, learning representations of audio with deep models remains challenging. For example, the diversity of acoustic noise, the multiplicity of recording devices (e.g., high-end microphones vs. smartphones), and source variability all challenge machine learning methods when they are deployed in realistic environments. In audio event detection, which has recently become a vigorous research field, systems for the automatic detection of multiple overlapping events are still far from reaching human performance. Another major challenge is the design of robust speech processing systems. Speech enhancement technologies have improved significantly in recent years, notably thanks to deep learning methods, yet there is still a large performance gap between controlled environments and real-world situations. As a final example, in the music information retrieval field, modeling high-level semantics based on local and long-term relations in music signals is still a core challenge. More generally, self-supervised approaches that can leverage large amounts of unlabeled data are very promising for learning models that can serve as a powerful base for many applications and tasks. It is therefore of great interest for the scientific community to find new methods for representing audio signals using hierarchical models such as deep neural networks, enabling novel learning methods to leverage the large amount of information that audio, speech, and music signals convey.
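To make the self-supervised idea above concrete, here is a minimal sketch (in PyTorch; an illustration, not part of the call) of contrastive representation learning on unlabeled raw audio in the SimCLR/NT-Xent style: two augmented views of each clip are encoded, matching views are pulled together, and all other pairs in the batch act as negatives. The encoder architecture, the noise augmentation, and all names are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class AudioEncoder(nn.Module):
    """Illustrative 1-D conv encoder: raw waveform -> unit-norm embedding."""
    def __init__(self, embed_dim=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=10, stride=5), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv1d(64, 128, kernel_size=4, stride=2), nn.ReLU(),
        )
        self.proj = nn.Linear(128, embed_dim)

    def forward(self, x):                      # x: (batch, 1, samples)
        h = self.conv(x).mean(dim=-1)          # global average pool over time
        return F.normalize(self.proj(h), dim=-1)

def nt_xent(z1, z2, tau=0.1):
    """SimCLR-style loss: two views of the same clip are positives,
    everything else in the batch is a negative."""
    z = torch.cat([z1, z2], dim=0)             # (2B, D), unit-norm rows
    sim = z @ z.t() / tau                      # cosine similarities
    sim.fill_diagonal_(float("-inf"))          # exclude self-similarity
    B = z1.size(0)
    targets = torch.cat([torch.arange(B, 2 * B), torch.arange(B)])
    return F.cross_entropy(sim, targets)

encoder = AudioEncoder()
wave = torch.randn(8, 1, 16000)                # 8 unlabeled 1 s clips @ 16 kHz
view1 = encoder(wave + 0.01 * torch.randn_like(wave))  # toy augmentation;
view2 = encoder(wave + 0.01 * torch.randn_like(wave))  # real ones would vary
loss = nt_xent(view1, view2)
loss.backward()                                # no labels used anywhere

The embeddings learned this way could then serve as input features for downstream tasks such as event detection or music tagging.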

The aim of this session is to establish a venue where engineers, scientists, and practitioners from both academia and industry can present and discuss cutting-edge results in representation learning for audio, speech, and music signal processing. Driven by the constantly increasing popularity of audio, speech, and music representation learning, the organizing committee of this session is motivated to build, in the long term, a solid reference within the computational intelligence community for the digital audio field.

The scope of this special session is representation learning, focused on audio, speech, and music. Representation learning is a core aspect of neural networks, so the session is well aligned with the scope of IJCNN.

The topics of the proposed special session include, but are not limited to:

• Audio, speech, and music signal generative models and methods
• Single and multi-channel methods for separation, enhancement, and denoising
• Spatial analysis, modification, and synthesis for augmented and virtual reality
• Detection, localization, and tracking of audio sources/events
• Style transfer, voice conversion, digital effects, and personalization
• Adversarial attacks and real/synthetic discrimination methods
• Information retrieval and classification methods
• Multi- and inter-modal models and methods
• Self-supervised/metric learning methods
• Domain adaptation, transfer learning, knowledge distillation, and K-shot approaches
• Methods based on differentiable signal processing
• Privacy preserving methods
• Interpretability and explainability in deep models for audio
• Context and structure-aware approaches

On behalf of the organizing committee, 

Konstantinos Drossos, PhD
Senior researcher
Audio Research Group
Tampere University, Finland

Office: TF309
Address: Korkeakoulunkatu 10, FI-33720 Tampere
Email: konstantinos.drossos@tuni.fi

Monday, December 7, 2020

Fwd: PhD studentships in Artificial Intelligence and Music (AIM) at Queen Mary University of London



UKRI Centre for Doctoral Training in Artificial Intelligence and Music (AIM), Queen Mary University of London

12+ fully-funded PhD studentships to start September 2021
Covers fees and a stipend for four years
Application deadline: 27 January 2021

Why apply to the AIM Programme?
  • 4-year fully-funded PhD studentships available
  • Extensive choice of projects, drawing on a supervisory team of over 30 academics
  • Access to cutting-edge facilities and expertise in artificial intelligence (AI) and music/audio technology
  • Comprehensive technical training at the intersection of AI and music through a personalised programme
  • Partnerships with over 25 companies and cultural institutions in the music, audio and creative sectors
More information on the AIM Programme can be found at: https://www.aim.qmul.ac.uk/

Programme structure
Our Centre for Doctoral Training (CDT) offers a four-year training programme in which students carry out a research project at the intersection of AI and music, supported by taught specialist modules, industrial placements, and skills training. Find out more about the programme structure at: http://www.aim.qmul.ac.uk/about/

Who can apply?
We are on the lookout for outstanding students interested in the intersection of music/audio technology and AI. Successful applicants will have the following profile:
  • Hold or be completing a Master's degree at distinction or first-class level, or equivalent, in Computer Science, Electronic Engineering, Music/Audio Technology, Physics, Mathematics, Music, or Psychology. In exceptional circumstances we accept applicants with a first-class Bachelor's degree who do not hold a Master's degree, provided they can show evidence of equivalent research experience, industry experience, or specialist training.
  • Programming skills are strongly desirable; however, we do not consider them an essential criterion if candidates have complementary strengths.
  • Musical training (any of performance, production, composition or theory) is desirable but not a prerequisite.
For this call we are accepting applications from UK Home students and International students, as well as students supported by national and international funding bodies, such as the China Scholarship Council (CSC), CONACYT, and the Commonwealth PhD Scholarship scheme. Queen Mary's commitment to our diverse and inclusive community is embedded in our student admissions processes. We particularly welcome applications from women and under-represented groups, and from applicants at all stages of life.

Funding
For this call we offer 12+ fully-funded four-year PhD studentships for students starting in September 2021, covering tuition fees and providing an annual tax-free stipend (£17,285 in 2020/21). The CDT will also provide funding for conference travel, equipment, and attendance at other CDT-related events.

The AIM programme also welcomes applications from students sponsored for PhD study by national and international funding agencies, and accepts self-funded students. For more information on external PhD studentships and self-funded study, please visit: http://www.aim.qmul.ac.uk/apply

Apply Now
Information on applications and PhD topics can be found at: http://www.aim.qmul.ac.uk/apply
Application deadline: 27 January 2021
For further information on eligibility, funding, and the application process, please visit our website. Please email any questions to aim-enquiries@qmul.ac.uk.


— 
Dr. George Fazekas, 
Senior Lecturer (Assoc. Prof.) in Digital Media 
Programme Coordinator, Sound and Music Computing (SMC)
Centre for Digital Music (C4DM)
School of Electronic Engineering and Computer Science
Queen Mary University of London, UK
FHEA, M. IEEE, ACM, AES

Wednesday, December 2, 2020

Postdoc position at Sheffield

We are pleased to invite applications for a four-year postdoc position at The University of Sheffield, as part of the UKRI Future Leaders Fellowship of Dr Jennifer MacRitchie. The postdoctoral researcher will work with Jenni on developing music technology to support music making and listening in older adults, including people with dementia and complex needs. Candidates should have a PhD in cognitive sciences, engineering, or psychology, and experience in music interface design and/or working with populations with varying physical/cognitive abilities.

Further information and a link to the application portal can be found on the MMM website: 

https://mmm.sites.sheffield.ac.uk/postdoc-music-technology-for-wellbeing

Deadline for applications: 15 December 2020

Interview date: 12 January 2021

Start date: 1 February 2021

Further questions can be directed to Jenni MacRitchie or to Renee Timmers.

Tuesday, December 1, 2020

Fwd: Call for participants: Music, Mind-Wandering, & Wellbeing during COVID-19

Dear all,

Help us to understand whether and how the unusual circumstances
brought about by the COVID-19 pandemic have had an impact on the way
our minds wander, our uses of music in daily life, and our mental
wellbeing.

The study takes approximately 20 minutes to complete, is completely
anonymous, and offers the opportunity to win one £20 Amazon voucher.
The questionnaire can be completed in one of three languages (English,
Greek or Italian). The only requirement to participate is to be an
adult living either in the UK, Greece, or Italy.

To complete the questionnaire, please click on this link:
https://durhammusic.eu.qualtrics.com/jfe/form/SV_3K3qytcfDDHZZjL

The study is a collaboration between Durham University (Dr Liila
Taruffi) and the University of Sheffield (Dr Georgina Floridou) and
has received ethical approval from the Music Department (Durham
University). Your participation is voluntary, you are free to
withdraw at any time, and your information will remain anonymous and
confidential. For questions and/or comments, please get in touch:
liila.taruffi@durham.ac.uk or g.floridou@sheffield.ac.uk.

Feel free to pass this message on to anyone who may be interested.

Many thanks in advance for your contribution.

Best wishes,
Dr Georgina Floridou
Honorary Research Fellow
Music & Wellbeing Research Unit
Department of Music, University of Sheffield
34 Leavygreave Road, S3 7RD, Sheffield, UK