Wednesday, May 22, 2024

Fwd: Qualitative Interviewing in Music Research: A Study Day (in-person and online)

Qualitative Interviewing in Music Research: An RMA Study Day

Date: Thu Jun 27 2024

Time: 10:15 – 17:50

Location: Royal College of Music & Online.

You are invited to attend the following study day on Qualitative Interviewing in Music Research. This event will bring together doctoral students and early career researchers from a range of disciplines within music research (e.g. music psychology, ethnomusicology, performance) to share ideas about methods and challenges in qualitative interviewing, and to learn from each other through supportive discussion. Additionally, there will be keynote presentations from Dr Katherine Williams, Research Fellow in Popular Music Songwriting (University of Huddersfield), and Dr Carsten Wernicke (University of Koblenz/Leuphana University). Please see the link below for more information:

https://www.eventbrite.co.uk/e/qualitative-interviewing-in-music-research-an-rma-study-day-tickets-906679521857?aff=oddtdtcreator

Tuesday, May 14, 2024

Fwd: Postdoc in digital music interaction at UWE, Bristol

Hi MUSIC-AND-SCIENCE community,

We are hiring a postdoc in digital music interaction, starting September 2024. The deadline is 27th May, with interviews expected the week of 10th June.

For details on the post, and to apply see:
https://ce0164li.webitrent.com/ce0164li_webrecruitment/wrd/run/ETREC179GF.open?WVID=8433573cTb&VACANCY_ID=349107QtXl

The project page is: micalab.org

Feel free to contact me at tom.mitchell@uwe.ac.uk with any questions or to arrange an informal chat.

Best wishes,
Tom
--
Tom Mitchell
Professor of Audio and Music Interaction
UKRI Future Leaders Fellow
Creative Technologies Lab
The University of the West of England
Room 2Q16, Frenchay Campus
Coldharbour Ln
Bristol, BS16 1QY

Web: micalab.org
Email: tom.mitchell@uwe.ac.uk
Phone: +44 (0)117 3283349

Fwd: PhD Opportunities in AI for Digital Media Inclusion (Deadline 30 May 2024)

** PhD Opportunities in Centre for Doctoral Training in AI for Digital Media Inclusion
** Surrey Institute for People-Centred AI at the University of Surrey, UK, and
** StoryFutures at Royal Holloway University of London, UK

** Apply by 30 May 2024, for PhD cohort starting October 2024

URL: https://www.surrey.ac.uk/artificial-intelligence/cdt

The Centre for Doctoral Training (CDT) in AI for Digital Media Inclusion combines the world-leading expertise of the Surrey Institute for People-Centred AI at the University of Surrey, a pioneer in AI technologies for the creative industries (vision, audio, language, machine learning), and StoryFutures at Royal Holloway University of London, a leader in creative production and audience experience (arts, psychology, user research, creative production).

Our vision is to deliver unique cross-disciplinary training embedded in real-world challenges and creative practice, and to address the industry need for people with responsible AI, inclusive design and creative skills. The CDT challenge-led training programme will foster a responsible AI-enabled inclusive media ecosystem with industry. By partnering with 50+ organisations, our challenge-led model will be co-designed and co-delivered with the creative industry to remove significant real-world barriers to media inclusion.

The overall learning objective of the CDT training programme is that all PhD researchers gain a cross-disciplinary understanding of fundamental AI science, inclusive design and creative industry practice, together with responsible AI research and innovation leadership, to lead the creation of future AI-enabled inclusive media.

The CDT training programme will select PhD students to work on challenge areas including intelligent personalisation of media experiences for digital inclusion, and generative AI for digital inclusion. Example projects related to audio include:

 - Audio Generative AI from visuals as an alternative to Audio Description
 - Audio orchestration for neurodivergent audiences using object-based media
 - AUDItory Blending for Inclusive Listening Experiences (AUDIBLE)
 - Foundational models for audio (including speech, music, sound effect) to texts in the wild
 - Generative AI for natural language description of audio for the deaf and hearing impaired
 - Generative AI with Creative Control, Explainability, and Accessibility
 - Personalised audio editing with generative models
 - Personalised subtitling for readers of different abilities
 - Translation of auditory distance across alternate advanced audio formats

If you have any questions about the CDT, please contact Adrian Hilton (a.hilton@surrey.ac.uk) or Polly Dalton (Polly.Dalton@rhul.ac.uk).

For more information and to apply, visit:
https://www.surrey.ac.uk/artificial-intelligence/cdt

Application deadline: 30 May 2024

--
Prof Mark D Plumbley
EPSRC Fellow in AI for Sound
Professor of Signal Processing
Centre for Vision, Speech and Signal Processing
University of Surrey, Guildford, Surrey, GU2 7XH, UK
Email: m.plumbley@surrey.ac.uk