Wednesday, July 31, 2024

Fwd: Deadline Extension to 01.09.2024: TISMIR Special Collection on Multi-Modal Music Information Retrieval


Dear list,

We have been delighted with the response to this collection. Due to numerous requests for
additional time, we are extending the deadline for all submissions to 1 September, allowing
teams a little more time to polish their manuscripts and ensure high-quality submissions
for this collection.

Extended Deadline for Submissions

01.09.2024

Scope of the Special Collection
Data related to and associated with music can be retrieved from a variety of sources or modalities:
audio tracks; digital scores; lyrics; video clips and concert recordings; artist photos and album covers;
expert annotations and reviews; listener social tags from the Internet; and so on. Essentially, the ways
humans deal with music are very diverse: we listen to it, read reviews, ask friends for
recommendations, enjoy visual performances during concerts, dance and perform rituals, play
musical instruments, or rearrange scores.

As such, it is hardly surprising that multi-modal data has proved so effective in a range
of technical tasks that model human experience and expertise. Previous studies have
confirmed that music classification scenarios can benefit significantly when several modalities are
taken into account. Other works have focused on cross-modal analysis, e.g., generating a missing
modality from existing ones or aligning information between different modalities.

The current upswing of disruptive artificial intelligence technologies, deep learning, and big data
analytics is quickly changing the world we live in, and inevitably impacts MIR research as well.
Learning from very diverse data sources by means of these powerful approaches may not only
bring related applications to new levels of quality, robustness, and efficiency, but will also help
to demonstrate the breadth and interconnected nature of music science research and deepen our
understanding of the relationships between different kinds of musical data.

In this special collection, we invite papers on multi-modal systems in all their diversity. We particularly
encourage under-explored repertoire, new connections between fields, and novel research areas.
Contributions consisting of pure algorithmic improvements, empirical studies, theoretical discussions,
surveys, guidelines for future research, and introductions of new data sets are all welcome, as the
special collection will not only address multi-modal MIR, but also cover multi-perspective ideas,
developments, and opinions from diverse scientific communities.

Sample Possible Topics
● State-of-the-art music classification or regression systems which are based on several
modalities
● Deeper analysis of correlation between distinct modalities and features derived from them
● Presentation of new multi-modal data sets, including the possibility of formal analysis and
theoretical discussion of practices for constructing better data sets in future
● Cross-modal analysis, e.g., with the goal of predicting a modality from another one
● Creative and generative AI systems which produce multiple modalities
● Explicit analysis of individual drawbacks and advantages of modalities for specific MIR tasks
● Approaches for training set selection and augmentation techniques for multi-modal classifier
systems
● Applying transfer learning, large language models, and neural architecture search to
multi-modal contexts
● Multi-modal perception, cognition, or neuroscience research
● Multi-objective evaluation of multi-modal MIR systems, e.g., not only focusing on the quality,
but also on robustness, interpretability, or reduction of the environmental impact during the
training of deep neural networks

Guest Editors
● Igor Vatolkin (lead) - Akademischer Rat (Assistant Professor) at the Department of Computer
Science, RWTH Aachen University, Germany
● Mark Gotham - Assistant Professor at the Department of Computer Science, Durham
University, UK
● Xiao Hu - Associate Professor at the University of Hong Kong
● Cory McKay - Professor of music and humanities at Marianopolis College, Canada
● Rui Pedro Paiva - Professor at the Department of Informatics Engineering of the University of
Coimbra, Portugal

Submission Guidelines
Please submit through https://transactions.ismir.net, and note in your cover letter that your paper is
intended to be part of this Special Collection on Multi-Modal MIR.
Submissions should adhere to formatting guidelines of the TISMIR journal:
https://transactions.ismir.net/about/submissions/. Specifically, articles must not exceed
8,000 words, including references, citations, and notes.

Please also note that if the paper extends or combines the authors' previously published research, it
is expected that there is a significant novel contribution in the submission (as a rule of thumb, we
would expect at least 50% of the underlying work - the ideas, concepts, methods, results, analysis and
discussion - to be new).

In case you are considering submitting to this special issue, it would greatly help our planning if you
let us know by replying to igor.vatolkin@rwth-aachen.de.

Kind regards,
Igor Vatolkin
on behalf of the TISMIR editorial board and the guest editors

--
Dr. Igor Vatolkin
Akademischer Rat
Department of Computer Science
Chair for AI Methodology (AIM)
RWTH Aachen University
Theaterstrasse 35-39, 52062 Aachen
Mail: igor.vatolkin@rwth-aachen.de
Skype: igor.vatolkin
https://www.aim.rwth-aachen.de
https://sig-ma.de
https://de.linkedin.com/in/igor-vatolkin-881aa78
https://scholar.google.de/citations?user=p3LkVhcAAAAJ
https://ls11-www.cs.tu-dortmund.de/staff/vatolkin

Monday, July 29, 2024

Fwd: AI music creativity conference


With apologies for cross-posting
A reminder about the 2024 AIMC conference, which will take place in Oxford, September 9-11. Early-bird registration is closing this week.

The Conference on AI and Music Creativity (AIMC) is an annual conference bringing together a community working on the application of AI in music practice. This highly interdisciplinary community has backgrounds in diverse fields of research and practice, which makes AIMC exciting, with topics spanning performance systems, computational creativity, machine listening, robotics, sonification, and more.
For the 2024 conference we will explore the links between music AI and adjacent domains, with two exciting keynote speakers: Dr Zubin Kanga and Dr Maya Ackerman.
The list of presentations, workshops, and music featured in the conference is available on our website.

Tuesday, July 23, 2024

Fwd: [DMRN-LIST] CfP: First AES International Conference on AI and Machine Learning for Audio (AIMLA 2025)



[apologies for cross-posting, please circulate this call widely]

First AES International Conference on Artificial Intelligence and Machine Learning for Audio (AIMLA 2025), London, Sept. 8-10, 2025, Call for contributions


The Audio Engineering Society invites audio researchers and practitioners from academia and industry to participate in the first AES conference dedicated to artificial intelligence and machine learning as they apply to audio. This three-day event aims to bring the community together, educate, demonstrate, and advance the state of the art. It will feature keynote speakers, workshops, tutorials, challenges, and cutting-edge peer-reviewed research.


The scope is wide: we expect attendance from all types of institutions, including academia, industry, and pure research, with diverse disciplinary perspectives, tied together by a focus on artificial intelligence and machine learning for audio.


Original contributions are encouraged in, but not limited to, the following topics:


  • Intelligent Music Production

    • Knowledge Engineering Systems

    • Automatic Mixing / Remixing / Demixing / Mastering

    • Differentiable Audio Effects

  • Audio and Music Generation

    • Generative models for audio

    • Deep Neural Audio Codecs

    • Neural Audio Synthesis

    • Text-to-audio generation

    • Instrument models

    • Speech and Singing voice synthesis

    • AI for sound design

    • Differentiable Synthesisers using DDSP and other neural models

  • Representation Learning

    • Fingerprinting using deep learning

    • Transfer Learning 

    • Domain Adaptation

    • Transfer of musical composition and performance characteristics including, timbre, style, production, mixing and playing technique

  • Real-time AI For Audio

    • Model Compression (Quantization, Knowledge Distillation)

    • Efficient Model Design and Benchmarking

    • Real-time inference frameworks in software and hardware

  • Applications of AI in Acoustics and Environmental Audio

    • Machine learning and AI models for acoustic sensing

    • Deep learning for acoustic scene analysis

    • Deep learning for localisation in noisy and reverberant environments

    • Binaural processing with AI or ML

    • Source and scene classification

    • Source separation, source identification and acoustic signal enhancement

    • AI-driven distributed acoustic sensor networks

    • Control and estimation problems in physical modelling

    • AI-based perception models inspired by human hearing

    • Application of AI to wave propagation in air, fluids and solids

  • AI Ethics for Audio and Music

    • AI-Generated Music and Creativity

    • Intellectual Property in AI-Composed Music

    • AI in Environmental Sound Monitoring

    • Cultural Appropriation in AI Music

    • Environmental Impact of Audio Data Processing


Call for Papers

The conference accepts full papers of 4-10 pages. Papers must be submitted as PDF files using EasyChair. All papers will be peer-reviewed by at least two experts in the field; accepted papers will be presented in an oral or poster session, and the proceedings will be published in the AES Library in open access (OA) format. Final manuscripts of accepted papers must implement all revisions requested by the review panel.


Important dates (pre-announcement):


Paper submission deadline: 28th February 2025

Notification of acceptance: 6th June 2025

Camera-ready submission deadline: 7th July 2025

Conference: 8-10 Sept. 2025


Paper submission will open in early October 2024; deadlines are subject to minor changes until September 2024.


Enquiries should be sent to: papers-aimla@qmul.ac.uk


Call for Special Sessions

As a part of this conference, we invite submissions for panel discussions, tutorials, and challenges. 


Call for Panel Discussions

We are seeking experts and professionals in the field to propose a 60-minute Panel discussion with at least 4 panellists on the proposed topics. The proposal should include a title, an abstract (60-120 words), a list of topics for discussion, and a description (up to 500 words). Additionally, the submission should include the number, names, and qualifications of presenters, and technical requirements (sound requirements during the presentation, such as stereo, multichannel, etc.).


Important dates (pre-announcement):

Deadline for panel discussion proposals: 28th February 2025

Accepted panels notified by: 6th June 2025


Call for Tutorials 

We are seeking proposals for 120-minute hands-on tutorials on the conference topics. The proposal should include a title, an abstract (60-120 words), a list of topics, and a description (up to 500 words). Additionally, the submission should include presenters' names, qualifications, and technical requirements (sound requirements during the presentation, such as stereo, multichannel, etc.). We encourage tutorials to be supported by an elaborate collation of discussed content and code to support learning and building resources for a given topic. 


Important dates (pre-announcement):

Deadline for tutorial proposals: 25th Oct 2024

Accepted sessions notified by: 24th Jan 2025


Call for Challenges

The AES AI and ML for Audio conference promotes knowledge sharing among researchers, professionals, and engineers in AI and audio. Special Sessions include pre-conference challenges hosted by industry or academic teams to drive technology improvements and explore new research directions. Each team manages the organization, data provision, participation instructions, mentoring, scoring, summaries, and results presentation. Challenges are selected based on their scientific and technological significance, data quality and relevance, and proposal feasibility. Collaborative proposals from different labs are encouraged and prioritized. We expect an initial expression of interest via mail to special-sessions-aimla@qmul.ac.uk by 15th Oct 2024, followed by a full submission on EasyChair by the final submission deadline. 

Proposal

Challenge bidders should submit a challenge proposal for review. The proposal should be a maximum of two pages (PDF format) including the following information:

  1. Challenge name

  2. Coordinators

  3. Keywords (e.g. classification, generation, transcription)

  4. Definition (one sentence, e.g. automatic mixing for 8 tracks with prediction of audio effect parameters for gain, pan, compressor, and EQ)

  5. Short description (including the research question the challenge is tackling. Please mention if it is a follow-up of a past challenge organised elsewhere)

  6. Dataset description: development, evaluation (short description, how much data is already available and prepared, how long would it take to prepare the rest, mention if you allow external data/transfer learning or not)

  7. Evaluation method/metric

  8. Baseline system (2 sentences, planned method if you do not have one from the previous challenge)

  9. Contact person (for main communication, website)

Important dates (pre-announcement):


Expression of Interest: 15th Oct 2024 

Final Submission Deadline: 31st Oct 2024 

Conditional Acceptance Notification: 29th Nov 2024


If required, we may ask for additional information regarding the organisation and scope of the challenge, and ask for a resubmission of the proposal. The discussion period will span from 2nd Dec 2024 to 17th Jan 2025.


Final Acceptance Notification: 24th Jan 2025


Tentative Timeline

Challenge Descriptions Announced: 31st Jan 2025

Challenges Start: 1st April 2025

Challenges End: 15th June 2025 

Challenges Results Announcement: 15th July 2025


We invite challenge organisers to compile a report and present it in the form of a paper which can be part of the conference proceedings.


Paper Submission deadline: 15th August 2025


Enquiries should be sent to: special-sessions-aimla@qmul.ac.uk 



Organising Committee

General Chair: Prof. Joshua Reiss (QMUL) (chair-aimla@qmul.ac.uk)

Papers Co-Chairs: Brecht De Man (PXL-Music) and George Fazekas (QMUL) (papers-aimla@qmul.ac.uk)

Special Sessions Co-Chairs: Soumya Vanka (QMUL) and Franco Caspe (QMUL) (special-sessions-aimla@qmul.ac.uk)


Conference Website: https://aes2.org/events-calendar/2025-aes-international-conference-on-artificial-intelligence-and-machine-learning-for-audio/ 




--
Open-access journal Transactions of ISMIR, open for submissions: https://tismir.ismir.net
---
ISMIR 2024 will take place from Nov 10-14, 2024 in San Francisco, USA (Hybrid format)
ISMIR 2025 will take place in Daejeon, South Korea
ISMIR Home -- http://www.ismir.net/
---
Please note! This list is lightly moderated; any email sent from a non-member address will be queued until it can be reviewed by a human. Be sure to join before posting!
---

Thursday, July 18, 2024

Fwd: The Second Cadenza Signal Processing Challenge to Improve Music for Those with Hearing Loss


 

The Second Cadenza Signal Processing Challenge to Improve Music for Those with Hearing Loss

 

Open now

Submission deadline: January 2025

 

The 2nd Cadenza Challenge (CAD2) is part of the IEEE SPS Challenge Program. The SPS are funding cash prizes for the best entrants.

Why are these challenges important?

According to the World Health Organization, 430 million people worldwide have a disabling hearing loss. Hearing loss causes various problems: quieter music passages become inaudible, pitch perception can be poor or anomalous, and listeners may have difficulty identifying and picking out instruments or hearing out lyrics. While hearing aids have music programmes, their effectiveness is mixed.

2nd Cadenza Challenge (CAD2)

There are two tasks:

  1. Improving the intelligibility of lyrics for pop/rock music while not harming audio quality.
  2. Rebalancing the level of instruments within a classical music ensemble (e.g. string quartet) to allow personalised mixes.

For both tasks, a demix / remix approach could be used: gains could be applied to the demixed signals before remixing back to stereo to achieve the aims of the challenge. For lyric intelligibility, simple amplification of the vocals could increase intelligibility, but other approaches might cause less harm to audio quality. It would also be possible to use other machine learning methods, such as end-to-end transformation.
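For illustration, the gain-based remix idea can be sketched as follows (a minimal numpy sketch; the stem names and gain values are illustrative assumptions, and a real challenge entry would first obtain the stems from a source-separation model rather than having them given):

```python
import numpy as np

def remix_with_gains(stems, gains_db):
    """Apply per-stem gains (in dB) and sum the stems back to a stereo mix.

    stems: dict mapping stem name -> float array of shape (n_samples, 2)
    gains_db: dict mapping stem name -> gain in dB (0.0 = unchanged)
    """
    mix = None
    for name, audio in stems.items():
        gain = 10.0 ** (gains_db.get(name, 0.0) / 20.0)  # dB -> linear
        scaled = audio * gain
        mix = scaled if mix is None else mix + scaled
    # Guard against clipping after summation
    peak = np.max(np.abs(mix))
    if peak > 1.0:
        mix = mix / peak
    return mix

# Example: boost a hypothetical "vocals" stem by 6 dB to aid lyric intelligibility
sr = 44100
rng = np.random.default_rng(0)
stems = {
    "vocals": 0.1 * rng.standard_normal((sr, 2)),
    "accompaniment": 0.1 * rng.standard_normal((sr, 2)),
}
mix = remix_with_gains(stems, {"vocals": 6.0})
```

This only changes relative levels; whether such a rebalance actually improves intelligibility or harms quality is exactly what the challenge's objective metrics and listening tests are meant to assess.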

We provide music signals, software tools, objective metrics and baselines. The two tasks are evaluated using objective metrics. For lyric intelligibility, there will also be perceptual tests with listeners who have hearing loss.

More details: http://cadenzachallenge.org/

To stay up to date please sign up for the Cadenza Challenge's Google group https://groups.google.com/g/cadenza-challenge

Please feel free to circulate this invitation to colleagues who you think may be interested.

 

Trevor Cox
Professor of Acoustic Engineering
Newton Building, University of Salford, Salford M5 4WT, UK.
+44 161 518 1884
Mobile: 07986 557419

Alinka Greasley

Professor of Music Psychology

Director of Research and Innovation

School of Music | University of Leeds | Leeds, LS2 9JT, UK

Email: a.e.greasley@leeds.ac.uk | Phone: + 44 113 343 4560

 

Wednesday, July 17, 2024

Fwd: [DMRN-LIST] Industry-funded PhD position in AI and Music at the Centre for Digital Music of Queen Mary University of London

 

(apologies for cross-posting)

 

We have one industry-funded PhD position to join my lab (link in signature) in the Centre for Digital Music at QMUL and the UKRI CDT in AI and Music in September 2024.

The topic is "Smart EQ: Personalizing Audio with Context-aware AI using Listener Preferences and Psychological Factors", part of a collaboration with Dr György Fazekas and Yamaha.

 

More information on the topic and how to apply is available here.

 

Application deadline: 26th August 2024

 

Best wishes

Charis

 

-- 

Communication Acoustics Lab 

Centre for Digital Music
Queen Mary University of London

http://comma.eecs.qmul.ac.uk/

c.saitis@qmul.ac.uk

--

We are always hearing, but are we listening? — Joel Chadabe


Tuesday, July 9, 2024

Fwd: HEartS Summit 2024: The Future of Creative Health & The Creative Workforce


Dear All,

 

We are pleased to share that registration is now open for HEartS Summit 2024: The Future of Creative Health and The Creative Workforce, an exciting two-day event taking place at the Royal College of Music on Friday 6th and Saturday 7th September 2024. Further details are provided below, along with information on how to register for a place.

 

Background and Development

Funded by a £1 million grant from the Arts and Humanities Research Council, the HEartS (Health, Economic, and Social impact of the ARTs) project explored the impact of the arts and culture on health and wellbeing from individual, social, and economic perspectives. Building on this work, HEartS Professional tracked the impact of the COVID-19 pandemic on professionals in the arts and culture sectors, providing knowledge and policy recommendations at a critical time for the workforce. Together, the research from these projects has the potential to shape the future of creative health. 

 

HEartS Summit Event

The aim of this event is to create a lasting legacy for this research, providing a space for all stakeholders to engage creatively and collaboratively on pathways to policy implementation and cultural change across the sector. Organisations and public interest groups previously engaged with the project as part of the policy consultation phase will participate in roundtable discussions, enabling a comprehensive and inclusive approach to shaping the future of creative health.

 

Registration and Attendance

This event is open to professionals working across the sector, including artists, performers, educators, researchers, healthcare providers, policymakers, and anyone passionate about the intersection of arts and health and professional practice. Whether you're a seasoned expert or a newcomer to the field, your unique perspective is valuable in shaping our collective future. You can learn more and register for your FREE place here: HEartS Summit Event Page.

 

We very much hope that you will be able to join us and look forward to seeing you in September! 

 

Best wishes, 

Michael

 

Michael Durrant

HEartS Project Coordinator


CENTRE FOR PERFORMANCE SCIENCE
 
The CPS is a partnership of 

Royal College of Music | Imperial College London 

www.PerformanceScience.ac.uk