Dear all,
A funded PhD place is available to work within the Centre for Digital Music on the subject of machine learning applied to sound synthesis and content creation. Description below, and full details (including how to apply) at
http://www.eecs.qmul.ac.uk/phd/research-topics/funded
Please feel free to distribute this to anyone who might be interested.
Thanks.
Dr. Josh Reiss
Reader in Audio Engineering
Centre for Digital Music
Queen Mary University of London
Fully-funded PhD studentship: Machine learning applied to sound synthesis and media content creation
Applications are invited from candidates of all nationalities for a funded PhD studentship starting January 2015 within the Centre for Digital Music (C4DM) at Queen Mary University of London, to conduct cutting-edge research in machine learning applied to sound synthesis and content creation.
In this PhD project, the concept of an Intelligent Assistant is investigated as a means of creating short-form media content. A small high-tech company is in the process of building a collaborative cloud platform for the creation of short-form media such as advertisements, promotional videos and local information. The Intelligent Assistant would identify and organise the content, add effects and synthesised sounds where necessary, and present the produced content as a coherent story. It will be used as a tool by content creators to support quick and intuitive content creation. The goal of this project is to create and assess such tools, focusing on the challenges of varied, user-generated content with limited metadata and the need for an enhanced user experience.
Research questions to be investigated include:
- How best can sounds be synthesised to provide additional audio content that enhances the production?
- Can multimedia (especially audio) content be intelligently combined to effectively tell a story?
- How can this be assessed and evaluated? What are the key factors, features and metrics for intelligent storyboard systems?
This project is expected to generate high-impact results, especially in the growing research fields of signal processing, sound synthesis, music informatics, and semantic tools for content creation and production.
There is scope to tailor the project to the interests and skills of the successful candidate.
Informal enquiries can be made by email to Dr. Josh Reiss: joshua.reiss@qmul.ac.uk
More details, including how to apply, can be found at: http://www.eecs.qmul.ac.uk/phd/research-topics/funded
The closing date is 16 December 2014, and interviews are expected to take place during the week of 5 January 2015.