Wednesday, September 25, 2024

Journal of Popular Music Education CALL FOR PAPERS – EXTENDED DEADLINE Special Issue: ‘Popular Music Education in Europe’ (to be published summer 2025)

Journal of Popular Music Education
CALL FOR PAPERS – EXTENDED DEADLINE
Special Issue: 'Popular Music Education in Europe' (to be published summer 2025)
Guest Editors
Lucy Green (Emerita Professor of Music Education, UCL, UK)
Avra Pieridou Skoutella (CCRSM, Cyprus Centre for the Research and Study of Music)
Europe comprises over 50 sovereign states and dependent territories which,
within and between themselves, have multifarious cultures, sub-cultures, ethnic and
religious groups, along with rich and diverse cultural heritage, values and customs,
turbulent histories, and struggles of nationalist movements. Some of its contemporary
states and peoples have been trying for decades to unite the European people under
the European Union's umbrella against the continuous influences of fragmentation,
economic interests, histories, nationalism, and ideological and political dilemmas.
The current times pose challenges, with wars, financial crises and intense waves of
immigration. On the one hand, such circumstances largely leave European people
limited or blocked by various forms of disadvantage from which they must constantly
strive to liberate themselves. On the other hand, they strengthen people's motivation
for connection and connectivity, for expression and resistance, for empathy and
solidarity, for surviving and thriving. In developing this Call for Papers, we are already
faced with critical questions that we hope will be explored in the ensuing issue. What
is popular music in contemporary Europe? Where did it come from? Who is Europe
today, musically? How do the different musical ecosystems of European countries,
cultures and sub-cultures influence and/or reflect popular music education? To what
extent does music education in Europe acknowledge such influences? What is the
relationship between music education and popular music in different educational
systems of each country? To what extent can we talk about 'European popular music',
or shall we talk about 'popular music in Europe'? Many more such questions are
imaginable.
Music education in Europe is a diverse musical beehive (or beehives) embracing
a vast, colourful, fluid and vibrant spectrum of musical styles, cultures, and practices,
ranging from folk, religious, cross-over, and composed music to popular music in
all its manifestations. It produces, reproduces, negotiates, articulates, and transforms
values, ideas, customs, functions and uses, and major critical issues at each moment.
In Taranto, Italy, one of the largest open-air music festivals takes place every year,
blending tradition, history, and folk melodies and rhythms with syncretic and hybrid
music performances. Street Parade, the world's largest techno party, takes place every
August in Zurich, Switzerland, and big electronic music festivals happen in the
Netherlands, Belgium, and Romania, to mention a few. The UK hosts one of the
largest popular music festivals in the world at Glastonbury. Over the last several
years, the music and practices of the immigrants and refugees who have settled on the
continent in large numbers have added to the picture. How is all of this reflected in
different music education contexts?
The topic is complex and vast, and this issue aims to provide a forum that can bring a
range of perspectives from different European contexts together into one publication.
We invite contributions on, but not limited to, the following themes:
• Current situation of popular music education in different European countries
• Comparative perspectives across European countries and/or regions
• Cultural heritage, identities, belongingness, and popular music education in Europe
• Historical dimensions of popular music education in Europe
• Popular music education and European citizenship
• Critical issues of music education and popular music education (social justice, human rights, democracy, solidarity)
• Popular music education's critical purposes for the creative future of European music
• Music ecosystems and community in Europe
• Immigrants and refugees to Europe and the role of popular music education in their lives
• Influence of European or global music industry and media on music education
• Interdisciplinary and transdisciplinary practices in European popular music education
• Creativities and technologies in European music education
• Popular music in European Higher Education, schools and other learning-and-teaching contexts
• The future of popular music education across European countries
Authors should submit manuscripts of between 4,000 and 6,000 words, although
longer articles of up to 8,000 words will be considered (double-spaced, Times New Roman,
font size 12, including references). Please refer to the Intellect style guide when
preparing a submission.
Full papers should be uploaded via the journal's website at
www.intellectbooks.com/journal-of-popular-music-education or via the submissions portal on the
JPME website by 1 January 2025. Articles which are accepted will be published in the 'Online
First' section of the journal, with a view to collating all articles relevant to the special issue for
publication in hard copy in 2025.
Enquiries are welcome and should be emailed to the issue's guest editor, Dr Avra
Pieridou Skoutella, at avraps@crsm.org.cy.

Sunday, September 1, 2024

Fwd: Greek-English speakers wanted for a short survey


Are you fluent in both Greek and English? 
If so, you could help us with a study investigating people's feelings in everyday situations. The study is part of a larger music psychology project.

We are looking for adults and children/teenagers (8-18 years old) speaking Greek and English to complete a questionnaire investigating people's feelings in everyday situations. The questionnaire has an English and a Greek version, and we want you to complete both to examine the similarities and differences between the two versions.

If you are happy to participate, you will need to:

  • Complete one version of the questionnaire (link below - it will take 7-10 mins)
  • Wait for one week
  • Complete the second version (we will send you a link via email).

Everyone participating can win one of four £10 shopping vouchers!

If you are the parent/carer of a child or teenager (8-18 years old), they can also participate; a copy of their questionnaire follows yours at the end of the form.

Happy to participate? Please follow the appropriate link below:

> If you were born between January and June, please start with this questionnaire: https://forms.gle/rZYDojgc9eiFrVNx6

> If you were born between July and December, please start with this questionnaire: https://forms.gle/aVyjNpQ1FyLj28pV7


For more information about the study, please contact Persefoni Tzanaki (ptzanaki1@sheffield.ac.uk).

--
Dr Persefoni Tzanaki (she/her)
Department of Music 
Faculty of Arts & Humanities | University of Sheffield

University email address: ptzanaki1@sheffield.ac.uk

Friday, August 30, 2024

Fwd: Fully funded 3-year PhD opening in music cognition, Univ. of Vienna


___________

Fully funded PhD opening in music cognition, University of Vienna

This PhD position is funded for 3 years by an Austrian Science Fund grant for a highly interdisciplinary research project concerned with the neurocognition of music and language, with a focus on syntax, prediction, and cultural evolution. Supervisors are Tudor Popescu (Department of Cognition, Emotion, and Methods in Psychology) and Tecumseh Fitch (Department of Behavioural and Cognitive Biology). There are no teaching obligations.

Successful candidates will have a quantitative background (e.g. experimental psychology, neuroscience, biology, or engineering), solid knowledge of programming, and an interest in the neural bases of music perception. Also required are analysis skills in at least one of the project's planned methodologies (MRI, brain stimulation, pupillometry). Knowledge of any of music cognition's theoretical and computational frameworks is strongly desirable. Knowledge of German is not essential.

Informal enquiries prior to application can be directed to the project's PI at tudor.popescu@univie.ac.at. The application deadline is 25 September 2024, and the position is expected to start in November 2024 or soon after. All details can be found at https://euraxess.ec.europa.eu/jobs/258249

Wednesday, August 7, 2024

Fwd: Job opening: Research Assistant / Postdoctoral Research Assistant in machine learning and music



---------- Forwarded message ---------
From: Emmanouil Benetos <emmanouil.benetos@qmul.ac.uk>
Date: Wed, 7 Aug 2024, 15:00
Subject: Job opening: Research Assistant / Postdoctoral Research Assistant in machine learning and music
To: <MUSIC-AND-SCIENCE@jiscmail.ac.uk>




At the Centre for Digital Music of Queen Mary University of London we are recruiting for a Research Assistant / Postdoctoral Research Assistant in machine learning and music, as part of an Innovate UK funded project in collaboration with Algorivm Ltd and Edinburgh Napier University. This is a 14-month position and the closing date for applications is 31 August. Please forward the information below to any colleagues who may be interested.

---

Research Assistant or Postdoctoral Research Assistant

School of EECS, Queen Mary University of London, UK
Annual Salary: 36,572 - 37,182 GBP (Research Assistant), 40,223 - 44,722 GBP (Postdoctoral Research Assistant)
Closing Date: 31 August 2024

https://qmul-jobs.tal.net/vx/mobile-0/appcentre-ext/brand-4/candidate/so/pm/1/pl/3/opp/2984-Research-Assistant-or-Postdoctoral-Research-Assistant

We are looking to appoint a Research Assistant or Postdoctoral Research Assistant in music information research to work on the project "Maestro - AI Musical Analysis Platform". The role involves investigating, developing and evaluating deep learning technologies for music performance analysis and music education, and will cover MIR tasks including automatic music transcription and audio-to-score alignment.

Applicants must hold a degree (PhD for postdoctoral level) in Computer Science, Electrical/Electronic Engineering, Physics, Maths, or a related field. Able to work both independently and as part of a team, applicants will have experience with deep learning methods, music information retrieval or audio processing, together with strong programming skills in Python. Applicants should have experience with conducting research, understanding the research process and summarising findings.

We offer competitive salaries, pension scheme, 30 days' leave per annum (pro-rata for part-time/fixed-term), a season ticket loan scheme, staff networks and access to a comprehensive range of personal and professional development opportunities. In addition, we offer a range of work life balance and family friendly, inclusive employment policies, flexible working arrangements, and campus facilities.

The post is based at the Mile End Campus in London. It is a full-time, fixed term appointment for 14 months, with an expected start date in September 2024 or soon after. For a Research Assistant the annual starting salary will be in the range of £36,572-£37,182; for a Postdoctoral Research Assistant it will be £40,223-£44,722. Salaries are inclusive of London allowance.

Queen Mary's commitment to our diverse and inclusive community is embedded in our appointments processes. Reasonable adjustments will be made at each stage of the recruitment process for any candidate with a disability. We have policies to support our staff throughout their careers, including arrangements for those who wish to work flexibly or on a job share basis, and we provide support for those returning from long-term absence. We particularly welcome applications from under-represented (BAME) groups, and from women in all stages of life, including pregnancy and maternity leave.

Informal enquiries should be addressed to Emmanouil Benetos at emmanouil.benetos@qmul.ac.uk. Details about the School can be found at www.eecs.qmul.ac.uk, and details about the Centre for Digital Music can be found at www.c4dm.eecs.qmul.ac.uk.


--
Dr Emmanouil Benetos

Reader in Machine Listening, RAEng / Leverhulme Trust Research Fellow, Turing Fellow
Director of Research, School of Electronic Engineering and Computer Science
Deputy Director, UKRI Centre for Doctoral Training in AI and Music

Queen Mary University of London
T + 44 (0) 20 7882 6206
emmanouil.benetos@qmul.ac.uk
http://www.eecs.qmul.ac.uk/~emmanouilb/

Thursday, August 1, 2024

Fwd: One-year position for an undergraduate software engineer in the ECHOES project


Dear all,

I am sharing a research job opportunity for an undergraduate student here, in case you are interested. Work can be done remotely. Please feel free to distribute this information if you know of others who might be interested in applying.

Thank you! 
Kind regards,

Martha
Martha E. Thomae (PhD Music Technology, McGill University)
Postdoctoral Research Fellow at NOVA University Lisbon

JOB POSTING: ECHOES PROJECT

Call for the attribution of one (1) Research Initiation Scholarship (BII) within the scope of the Project Echoes from the Past: Unveiling a Lost Soundscape with Digital Analysis (ECHOES) (2022.01957.PTDC).

Duration of the fellowship: 12 months, starting on November 1, 2024. The Research Initiation Scholarship cannot be renewed.

Application Deadline: 13 Aug 2024 - 23:59 (Europe/Lisbon)

Offer description: This studentship is for an undergraduate software engineering student (or an equivalent degree). The ECHOES project requires technical assistance from an undergraduate software engineer to continue the JavaScript implementation of an algorithm for automatic analysis and to deploy the graphical user interface for automatic analysis. The student might also help with the Python implementation of an algorithm used in one of the steps of optical music recognition (OMR).

Salary: The studentship for scientific initiation corresponds to €601.12 per month, according to the table of scholarships awarded directly by the FCT, I.P. in Portugal (http://www.fct.pt/apoios/bolsas/valores). Added to this amount is voluntary social insurance corresponding to the first category, if the candidate so chooses, as well as personal accident insurance.

Work can be done remotely.

Info at https://euraxess.ec.europa.eu/jobs/260616 or email Elsa De Luca

Please feel free to forward this information to people you think will benefit. Thank you!


Elsa De Luca

CESEM - Centre for the Study of the Sociology and Aesthetics of Music
IN2PAST – Associate Laboratory for Research and Innovation in Heritage, Arts, Sustainability and Territory
School of Social Sciences and Humanities
NOVA University of Lisbon


Wednesday, July 31, 2024

Fwd: Deadline Extension to 01.09.2024: TISMIR Special Collection on Multi-Modal Music Information Retrieval


Dear list,

We have been delighted with the response to this collection. Due to numerous requests for
additional time, we are extending the deadline for all submissions to 1 September, to allow
teams a bit more time to polish their manuscripts and ensure high-quality submissions
for this collection.

Extended Deadline for Submissions

01.09.2024

Scope of the Special Collection
Data related to and associated with music can be retrieved from a variety of sources or modalities:
audio tracks; digital scores; lyrics; video clips and concert recordings; artist photos and album covers;
expert annotations and reviews; listener social tags from the Internet; and so on. Essentially, the ways
humans deal with music are very diverse: we listen to it, read reviews, ask friends for
recommendations, enjoy visual performances during concerts, dance and perform rituals, play
musical instruments, or rearrange scores.

As such, it is hardly surprising that we have discovered multi-modal data to be so effective in a range
of technical tasks that model human experience and expertise. Previous studies have already
confirmed that music classification scenarios may significantly benefit when several modalities are
taken into account. Other works have focused on cross-modal analysis, e.g., generating a missing modality
from existing ones or aligning the information between different modalities.

The current upswing of disruptive artificial intelligence technologies, deep learning, and big data
analytics is quickly changing the world we are living in, and inevitably impacts MIR research as well.
Facilitating the ability to learn from very diverse data sources by means of these powerful approaches
may not only bring the solutions to related applications to new levels of quality, robustness, and
efficiency, but will also help to demonstrate and enhance the breadth and interconnected nature of
music science research and the understanding of relationships between different kinds of musical
data.

In this special collection, we invite papers on multi-modal systems in all their diversity. We particularly
encourage under-explored repertoire, new connections between fields, and novel research areas.
Contributions consisting of pure algorithmic improvements, empirical studies, theoretical discussions,
surveys, guidelines for future research, and introductions of new data sets are all welcome, as the
special collection will not only address multi-modal MIR, but also cover multi-perspective ideas,
developments, and opinions from diverse scientific communities.

Sample Possible Topics
● State-of-the-art music classification or regression systems which are based on several
modalities
● Deeper analysis of correlation between distinct modalities and features derived from them
● Presentation of new multi-modal data sets, including the possibility of formal analysis and
theoretical discussion of practices for constructing better data sets in future
● Cross-modal analysis, e.g., with the goal of predicting a modality from another one
● Creative and generative AI systems which produce multiple modalities
● Explicit analysis of individual drawbacks and advantages of modalities for specific MIR tasks
● Approaches for training set selection and augmentation techniques for multi-modal classifier
systems
● Applying transfer learning, large language models, and neural architecture search to
multi-modal contexts
● Multi-modal perception, cognition, or neuroscience research
● Multi-objective evaluation of multi-modal MIR systems, e.g., not only focusing on the quality,
but also on robustness, interpretability, or reduction of the environmental impact during the
training of deep neural networks

Guest Editors
● Igor Vatolkin (lead) - Akademischer Rat (Assistant Professor) at the Department of Computer
Science, RWTH Aachen University, Germany
● Mark Gotham - Assistant professor at the Department of Computer Science, Durham
University, UK
● Xiao Hu - Associate professor at the University of Hong Kong
● Cory McKay - Professor of music and humanities at Marianopolis College, Canada
● Rui Pedro Paiva - Professor at the Department of Informatics Engineering of the University of
Coimbra, Portugal

Submission Guidelines
Please submit through https://transactions.ismir.net, and note in your cover letter that your paper is
intended to be part of this Special Collection on Multi-Modal MIR.
Submissions should adhere to the formatting guidelines of the TISMIR journal:
https://transactions.ismir.net/about/submissions/. Specifically, articles must not be longer than
8,000 words, including references, citations and notes.

Please also note that if the paper extends or combines the authors' previously published research, it
is expected that there is a significant novel contribution in the submission (as a rule of thumb, we
would expect at least 50% of the underlying work - the ideas, concepts, methods, results, analysis and
discussion - to be new).

If you are considering submitting to this Special Collection, it would greatly help our planning if you
let us know by emailing igor.vatolkin@rwth-aachen.de.

Kind regards,
Igor Vatolkin
on behalf of the TISMIR editorial board and the guest editors

--
Dr. Igor Vatolkin
Akademischer Rat
Department of Computer Science
Chair for AI Methodology (AIM)
RWTH Aachen University
Theaterstrasse 35-39, 52062 Aachen
Mail: igor.vatolkin@rwth-aachen.de
Skype: igor.vatolkin
https://www.aim.rwth-aachen.de
https://sig-ma.de
https://de.linkedin.com/in/igor-vatolkin-881aa78
https://scholar.google.de/citations?user=p3LkVhcAAAAJ
https://ls11-www.cs.tu-dortmund.de/staff/vatolkin

Monday, July 29, 2024

Fwd: AI music creativity conference


With apologies for cross-posting
A reminder about the 2024 AIMC conference, which will take place in Oxford, September 9-11. Early-bird registration closes this week.

The Conference on AI and Music Creativity (AIMC) is an annual conference bringing together a community working on the application of AI in music practice. This highly interdisciplinary community draws on diverse fields of research and practice, which makes AIMC exciting, with topics ranging from performance systems and computational creativity to machine listening, robotics, sonification, and more.
For the 2024 conference we will explore the links between music AI and adjacent domains, with two exciting keynote speakers: Dr Zubin Kanga and Dr Maya Ackerman.
The list of presentations, workshops and music featured in the conference is available on our website.

Tuesday, July 23, 2024

Fwd: [DMRN-LIST] CfP: First AES International Conference on AI and Machine Learning for Audio (AIMLA 2025)



[apologies for cross-posting, please circulate this call widely]

First AES International Conference on Artificial Intelligence and Machine Learning for Audio (AIMLA 2025), London, Sept. 8-10, 2025, Call for contributions


The Audio Engineering Society invites audio researchers and practitioners from academia and industry to participate in the first AES conference dedicated to artificial intelligence and machine learning as it applies to audio. This three-day event aims to bring the community together, educate, demonstrate and advance the state of the art. It will feature keynote speakers, workshops, tutorials, challenges and cutting-edge peer-reviewed research.


The scope is wide - we expect attendance from all types of institutions, including academia, industry, and pure research, with diverse disciplinary perspectives - all tied together by a focus on artificial intelligence and machine learning for audio.


Original contributions are encouraged in, but not limited to, the following topics:


  • Intelligent Music Production

    • Knowledge Engineering Systems

    • Automatic Mixing / Remixing / Demixing / Mastering

    • Differentiable Audio Effects

  • Audio and Music Generation

    • Generative models for audio

    • Deep Neural Audio Codecs

    • Neural Audio Synthesis

    • Text-to-audio generation

    • Instrument models

    • Speech and Singing voice synthesis

    • AI for sound design

    • Differentiable Synthesisers using DDSP and other neural models

  • Representation Learning

    • Fingerprinting using deep learning

    • Transfer Learning 

    • Domain Adaptation

    • Transfer of musical composition and performance characteristics including, timbre, style, production, mixing and playing technique

  • Real-time AI For Audio

    • Model Compression (Quantization, Knowledge Distillation)

    • Efficient Model Design and Benchmarking

    • Real-time inference frameworks in software and hardware

  • Applications of AI in Acoustics and Environmental Audio

    • Machine learning and AI models for acoustic sensing

    • Deep learning for acoustic scene analysis

    • Deep learning for localisation in noisy and reverberant environments

    • Binaural processing with AI or ML

    • Source and scene classification

    • Source separation, source identification and acoustic signal enhancement

    • AI-driven distributed acoustic sensor networks

    • Control and estimation problems in physical modelling

    • AI-based perception models inspired by human hearing

    • Application of AI to wave propagation in air, fluids and solids

  • AI Ethics for Audio and Music

    • AI-Generated Music and Creativity

    • Intellectual Property in AI-Composed Music

    • AI in Environmental Sound Monitoring

    • Cultural Appropriation in AI Music

    • Environmental Impact of Audio Data Processing


Call for Papers

The conference accepts full papers of 4-10 pages. Papers must be submitted as PDF files using EasyChair. All papers will be peer-reviewed by at least two experts in the field; accepted papers will be presented in the conference programme, with the proceedings published in the AES Library in open-access (OA) format. Final manuscripts of accepted papers must implement all revisions requested by the review panel and will be presented in an oral or poster session.


Important dates (pre-announcement):


Paper submission deadline: 28th February 2025

Notification of acceptance: 6th June 2025

Camera-ready submission deadline: 7th July 2025

Conference: 8-10 Sept. 2025


Paper submission will open in early October 2024; deadlines are subject to minor changes until September 2024.


Enquiries should be sent to: papers-aimla@qmul.ac.uk


Call for Special Sessions

As a part of this conference, we invite submissions for panel discussions, tutorials, and challenges. 


Call for Panel Discussions

We are seeking experts and professionals in the field to propose a 60-minute Panel discussion with at least 4 panellists on the proposed topics. The proposal should include a title, an abstract (60-120 words), a list of topics for discussion, and a description (up to 500 words). Additionally, the submission should include the number, names, and qualifications of presenters, and technical requirements (sound requirements during the presentation, such as stereo, multichannel, etc.).


Important dates (pre-announcement):

Deadline for panel discussion proposals: 28th February 2025

Accepted panels notified by: 6th June 2025


Call for Tutorials 

We are seeking proposals for 120-minute hands-on tutorials on the conference topics. The proposal should include a title, an abstract (60-120 words), a list of topics, and a description (up to 500 words). Additionally, the submission should include presenters' names, qualifications, and technical requirements (sound requirements during the presentation, such as stereo, multichannel, etc.). We encourage tutorials to be supported by an elaborate collation of discussed content and code to support learning and building resources for a given topic. 


Important dates (pre-announcement):

Deadline for tutorial proposals: 25th Oct 2024

Accepted sessions notified by: 24th Jan 2025


Call for Challenges

The AES AI and ML for Audio conference promotes knowledge sharing among researchers, professionals, and engineers in AI and audio. Special Sessions include pre-conference challenges hosted by industry or academic teams to drive technology improvements and explore new research directions. Each team manages the organization, data provision, participation instructions, mentoring, scoring, summaries, and results presentation. Challenges are selected based on their scientific and technological significance, data quality and relevance, and proposal feasibility. Collaborative proposals from different labs are encouraged and prioritized. We expect an initial expression of interest via email to special-sessions-aimla@qmul.ac.uk by 15th Oct 2024, followed by a full submission on EasyChair by the final submission deadline.

Proposal

Challenge bidders should submit a challenge proposal for review. The proposal should be a maximum of two pages (PDF format) including the following information:

  1. Challenge name

  2. Coordinators

  3. Keywords (e.g. classification, generation, transcription)

  4. Definition (one sentence, e.g. automatic mixing for 8 tracks with prediction of audio effect parameters for gain, pan, compressor, and EQ)

  5. Short description (including the research question the challenge is tackling. Please mention if it is a follow-up of a past challenge organised elsewhere)

  6. Dataset description: development, evaluation (short description, how much data is already available and prepared, how long would it take to prepare the rest, mention if you allow external data/transfer learning or not)

  7. Evaluation method/metric

  8. Baseline system (2 sentences, planned method if you do not have one from the previous challenge)

  9. Contact person (for main communication, website)

Important dates (pre-announcement):


Expression of Interest: 15th Oct 2024 

Final Submission Deadline: 31st Oct 2024 

Conditional Acceptance Notification: 29th Nov 2024


If required, we may ask for additional information regarding the organisation and scope of the challenge, and ask for a resubmission of the proposal. The discussion period will span from 2nd Dec 2024 to 17th Jan 2025.


Final Acceptance Notification: 24th Jan 2025


Tentative Timeline

Challenge Descriptions Announced: 31st Jan 2025

Challenges Start: 1st April 2025

Challenges End: 15th June 2025 

Challenges Results Announcement: 15th July 2025


We invite challenge organisers to compile a report and present it in the form of a paper which can be part of the conference proceedings.


Paper Submission deadline: 15th August 2025


Enquiries should be sent to: special-sessions-aimla@qmul.ac.uk 



Organising Committee

General Chair: Prof. Joshua Reiss (QMUL) (chair-aimla@qmul.ac.uk)

Papers Co-Chairs: Brecht De Man (PXL-Music) and George Fazekas (QMUL) (papers-aimla@qmul.ac.uk)

Special Sessions Co-Chairs: Soumya Vanka (QMUL) and Franco Caspe (QMUL) (special-sessions-aimla@qmul.ac.uk)


Conference Website: https://aes2.org/events-calendar/2025-aes-international-conference-on-artificial-intelligence-and-machine-learning-for-audio/ 




--
Open-access journal Transactions of ISMIR, open for submissions: https://tismir.ismir.net
---
ISMIR 2024 will take place from Nov 10-14, 2024 in San Francisco, USA (Hybrid format)
ISMIR 2025 will take place in Daejeon, South Korea
ISMIR Home -- http://www.ismir.net/

Thursday, July 18, 2024

Fwd: The Second Cadenza Signal Processing Challenge to Improve Music for Those with Hearing Loss


 

The Second Cadenza Signal Processing Challenge to Improve Music for Those with Hearing Loss

 

Open now

Submission deadline: January 2025

 

The 2nd Cadenza Challenge (CAD2) is part of the IEEE SPS Challenge Program. The SPS are funding cash prizes for the best entrants.

Why are these challenges important?

According to the World Health Organization, 430 million people worldwide have disabling hearing loss. Hearing loss causes various problems, such as quieter music passages being inaudible, poor and anomalous pitch perception, difficulties identifying and picking out instruments, and problems hearing lyrics. While hearing aids have music programmes, their effectiveness is mixed.

2nd Cadenza Challenge (CAD2)

There are two tasks:

  1. Improving the intelligibility of lyrics for pop/rock music while not harming audio quality.
  2. Rebalancing the level of instruments within a classical music ensemble (e.g. string quartet) to allow personalised mixes.

For both tasks, a demix / remix approach could be used. Gains could be applied to the demixed signals before remixing back to stereo to achieve the aims of the challenge. For lyric intelligibility, a simple amplification of the vocals could increase intelligibility, but there are other ways to achieve this that might cause less harm to audio quality. It would also be possible to use other machine learning approaches such as end-to-end transformation.
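To make the demix/remix idea concrete, here is a minimal sketch of applying per-stem gains (in dB) and summing the stems back into a stereo mix. It assumes the stems have already been separated into equally shaped NumPy arrays; the remix() helper, the stem names, and the gain values are illustrative assumptions, not the official challenge baseline.

    import numpy as np

    def remix(stems, gains_db):
        """Apply per-stem gains (in dB) and sum the stems back to a stereo mix.

        stems: dict mapping a stem name (e.g. 'vocals', 'drums') to a
               (num_samples, 2) float array; all stems must share one shape.
        gains_db: dict mapping the same names to gains in decibels.
        Illustrative sketch only, not the Cadenza baseline.
        """
        mix = np.zeros_like(next(iter(stems.values())), dtype=np.float64)
        for name, audio in stems.items():
            gain = 10.0 ** (gains_db.get(name, 0.0) / 20.0)  # dB -> linear
            mix += gain * audio.astype(np.float64)
        peak = np.max(np.abs(mix))  # guard against clipping after the gain changes
        if peak > 1.0:
            mix /= peak
        return mix

    # Example: boost vocals by 6 dB to favour lyric intelligibility and pull the
    # accompaniment down slightly (stems would come from a source-separation model):
    # stereo = remix({"vocals": vox, "drums": drm, "bass": bss, "other": oth},
    #                {"vocals": 6.0, "drums": -2.0, "bass": -2.0, "other": -2.0})

A real entry would choose these gains (or a more elaborate transformation, such as an end-to-end model) to optimise the challenge's objective metrics rather than fixing them by hand.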

We provide music signals, software tools, objective metrics and baselines. The two tasks are evaluated using objective metrics. For lyric intelligibility, there will also be perceptual tests with listeners who have hearing loss.

More details: http://cadenzachallenge.org/

To stay up to date please sign up for the Cadenza Challenge's Google group https://groups.google.com/g/cadenza-challenge

Please feel free to circulate this invitation to colleagues who you think may be interested.

 

Trevor Cox
Professor of Acoustic Engineering
Newton Building, University of Salford, Salford M5 4WT, UK.
+44 161 518 1884
Mobile: 07986 557419

Alinka Greasley

Professor of Music Psychology

Director of Research and Innovation

School of Music | University of Leeds | Leeds, LS2 9JT, UK

Email: a.e.greasley@leeds.ac.uk | Phone: + 44 113 343 4560

 

Wednesday, July 17, 2024

Fwd: [DMRN-LIST] Industry-funded PhD position in AI and Music at the Centre for Digital Music of Queen Mary University of London

 

(apologies for cross-posting)

 

We have one industry-funded PhD position to join my lab (link in signature) in the Centre for Digital Music at QMUL and the UKRI CDT in AI and Music in September 2024.

The topic is 'Smart EQ: Personalizing Audio with Context-aware AI using Listener Preferences and Psychological Factors', and the project is part of a collaboration with Dr György Fazekas and Yamaha.

 

More information on the topic and how to apply is available here.

 

Application deadline: 26th August 2024

 

Best wishes

Charis

 

-- 

Communication Acoustics Lab 

Centre for Digital Music
Queen Mary University of London

http://comma.eecs.qmul.ac.uk/

c.saitis@qmul.ac.uk

--

We are always hearing, but are we listening? — Joel Chadabe