** PhD Opportunities in the Centre for Doctoral Training in AI for Digital Media Inclusion
** Surrey Institute for People-Centred AI at the University of Surrey, UK, and
** StoryFutures at Royal Holloway University of London, UK
** Apply by 30 May 2024 for the PhD cohort starting in October 2024
URL: https://www.surrey.ac.uk/artificial-intelligence/cdt

The Centre for Doctoral Training (CDT) in AI for Digital Media Inclusion combines the world-leading expertise of the Surrey Institute for People-Centred AI at the University of Surrey, a pioneer in AI technologies for the creative industries (vision, audio, language, machine learning), and StoryFutures at Royal Holloway University of London, a leader in creative production and audience experience (arts, psychology, user research, creative production).
Our vision is to deliver unique cross-disciplinary training embedded in real-world challenges and creative practice, and to address the industry need for people with responsible AI, inclusive design and creative skills. The CDT's challenge-led training programme will foster a responsible, AI-enabled, inclusive media ecosystem with industry. Partnering with 50+ organisations, the challenge-led model will be co-designed and co-delivered with the creative industries to remove significant real-world barriers to media inclusion.
The overall learning objective of the CDT training programme is that all PhD researchers gain a cross-disciplinary understanding of fundamental AI science, inclusive design and creative industry practice, together with responsible AI research and innovation leadership, to lead the creation of future AI-enabled inclusive media.
The CDT training programme will select PhD students to work on challenge areas including Intelligent personalisation of media experiences for digital inclusion and Generative AI for digital inclusion. Example projects related to audio include:
- Audio Generative AI from visuals as an alternative to Audio Description
- Audio orchestration for neurodivergent audiences using object-based media
- AUDItory Blending for Inclusive Listening Experiences (AUDIBLE)
- Foundation models for audio (including speech, music and sound effects) to text in the wild
- Generative AI for natural language description of audio for the deaf and hearing impaired
- Generative AI with Creative Control, Explainability, and Accessibility
- Personalised audio editing with generative models
- Personalised subtitling for readers of different abilities
- Translation of auditory distance across alternate advanced audio formats
If you have any questions about the CDT, please contact Adrian Hilton (a.hilton@surrey.ac.uk) or Polly Dalton (Polly.Dalton@rhul.ac.uk).
For more information and to apply, visit: https://www.surrey.ac.uk/artificial-intelligence/cdt

Application deadline: 30 May 2024
--
Prof Mark D Plumbley
EPSRC Fellow in AI for Sound
Professor of Signal Processing
Centre for Vision, Speech and Signal Processing
University of Surrey, Guildford, Surrey, GU2 7XH, UK
Email: m.plumbley@surrey.ac.uk