Sent on behalf of the British Section of the Audio Engineering Society. Apologies for cross-posting.
To stay up to date with other FREE events from the British AES Section, please refer to our website (www.aes-uk.org), Facebook page (www.facebook.com/aesuk/) and Twitter feed (https://twitter.com/AudioEngSocUK).
If you are an AES member, make sure to join the appropriate section(s) (and student section if you are a student or recent graduate) at http://www.aes.org/sections/.
Talk by Queen Mary University of London's Centre for Digital Music PhD students Christian Heinrichs and Roderick Selfridge with introduction by Andy Farnell
Part of the Audio Engineering Society UK section monthly evening lectures in London - open to the public
Tuesday 12 April 2016
6:30 pm for 7:00 pm start
David Sizer lecture theatre, Bancroft building, QMUL (Campus map)
Queen Mary University of London, Mile End Rd
London
Current Directions in Procedural Audio Research
Advances in real-time computational audio for virtual worlds, animation, and real-world applications continue apace. Queen Mary University of London has become an emerging centre for new research, with projects guided by Andrew McPherson, Josh Reiss and Andy Farnell. These two presentations by QMUL doctoral researchers demonstrate the breadth and rigour of this emerging field, covering a range of enquiry from sound design psychology to fluid mechanics.
Roderick Selfridge will present "Real-time Aeroacoustic Sound Synthesis Models"
Aeroacoustic sounds are emitted by objects moving relative to a flow. They include the Aeolian tone, cavity tones and edge tones: sounds such as a fence whistling in a storm, or turbines and jets. Selfridge develops efficient models that incorporate fundamental fluid dynamic equations and listener position, and evaluates the results against offline computational software solving finite difference equations and against physical readings from wind tunnel experiments.
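As a flavour of the kind of fluid-dynamic relation such models build on (this is an illustrative sketch, not Selfridge's actual model): the fundamental frequency of the Aeolian tone produced by flow past a cylinder is commonly approximated with the Strouhal relation f = St · U / d, where St ≈ 0.2 over a wide range of Reynolds numbers.

```python
# Illustrative sketch of the Strouhal relation for the Aeolian tone.
# Not the presenter's synthesis model - just the textbook approximation
# f = St * U / d for vortex shedding from a cylinder.

def aeolian_tone_hz(airspeed_ms: float, diameter_m: float,
                    strouhal: float = 0.2) -> float:
    """Approximate Aeolian tone fundamental for a cylinder in a flow.

    airspeed_ms: flow speed relative to the cylinder (m/s)
    diameter_m:  cylinder diameter (m)
    strouhal:    Strouhal number, ~0.2 for cylinders over a wide
                 Reynolds-number range
    """
    return strouhal * airspeed_ms / diameter_m

# A 3 mm wire in a 20 m/s wind whistles at roughly:
print(round(aeolian_tone_hz(20.0, 0.003)))  # -> 1333 (Hz)
```

A real-time synthesis model would drive an oscillator or filtered noise source with such a frequency estimate, updated as flow speed and listener position change.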
Christian Heinrichs examines the use of human gesture in the design of next-generation procedural game audio. With the dawn of virtual reality there is a move toward gameplay interactions that require sophisticated audio feedback and parameterisation. Heinrichs furthers Procedural Audio research by examining contextual aesthetics and behaviour in the design process to complement realism and efficiency. Physically-based sound engines that match the properties of an object are important but often fail to equal the expressivity of sound performed by a Foley artist. This research explores how gestural interaction can be employed in all stages of the sound design process, starting with the casual exploration of a sound model's parameter space and leading to its integration in a game.
________________________________________________
Brecht De Man
PhD Student in Audio Engineering
Centre for Digital Music
Queen Mary University of London
School of Electronic Engineering and Computer Science
Mile End Road
London E1 4NS
United Kingdom
Skype: brechtdeman