Publications
You can also find my articles on my Google Scholar profile.
Conference & Workshop Papers
Published in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2024
This paper proposes using synthetic data augmentation via emotion conversion to improve speech emotion recognition models.
Published in 22nd International Society for Music Information Retrieval Conference (ISMIR), 2021
This paper proposes real-time prediction of the user's listening context.
Published in 21st International Society for Music Information Retrieval Conference (ISMIR), 2020
This paper proposes a user-aware auto-tagging system for the contextual tags of music tracks.
Published in The 2020 International Conference on Multimedia Retrieval, 2020
This paper proposes a weighted loss function that accounts for missing labels in the training set and can easily be used when fine-tuning pre-trained models.
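A minimal sketch of the general idea of a missing-label-aware weighted loss (not the paper's exact formulation): a binary cross-entropy where a mask zeroes out the contribution of unobserved labels and a `pos_weight` factor counteracts label sparsity. Function and parameter names here are illustrative, not from the paper.

```python
import numpy as np

def masked_weighted_bce(y_true, y_pred, mask, pos_weight=1.0, eps=1e-7):
    """Binary cross-entropy that ignores missing labels.

    y_true: 0/1 targets; y_pred: predicted probabilities;
    mask: 1 where a label is observed, 0 where it is missing.
    pos_weight upweights positive labels (useful when tags are sparse).
    """
    y_pred = np.clip(y_pred, eps, 1 - eps)
    per_label = -(pos_weight * y_true * np.log(y_pred)
                  + (1.0 - y_true) * np.log(1.0 - y_pred))
    # Average only over the observed labels, not the full label matrix.
    return float((per_label * mask).sum() / max(mask.sum(), 1.0))
```

Because masked entries contribute nothing to either the sum or the normaliser, a tag that was simply never annotated is neither penalised nor rewarded.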
Published in The IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2020
This paper is about auto-tagging music tracks with context-related tags. The paper also presents a dataset of ∼50k tracks labelled with 15 different contexts.
Published in The 19th International Society for Music Information Retrieval Conference (ISMIR), 2018
This paper is about estimating the singability of a given song and the factors that make one song more singable than another. We propose a number of acoustic features to automatically estimate the singability of a song.
Published in The IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2018
This paper is about separating the primary and ambient sources from a sound mixture for use in surround sound upmixing. We propose a neural-network-based approach to perform the separation.
Published in The 18th International Society for Music Information Retrieval Conference (ISMIR), 2017
This paper is about estimating the intelligibility of the singing voice in a given song. We propose a set of acoustic features relevant for estimating intelligibility, as well as an approach for labelling songs with an intelligibility score according to human perception.
Published in The 13th Sound and Music Computing Conference (SMC), 2016
This paper is about separating the primary and ambient sources from a sound mixture for use in surround sound upmixing. We propose a PCA-based approach to perform the separation.
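To illustrate the standard PCA-based primary-ambient idea (a textbook sketch, not necessarily the paper's exact method): project both stereo channels onto the principal eigenvector of the 2x2 channel covariance; the correlated projection is taken as the primary component and the residual as the ambience.

```python
import numpy as np

def pca_primary_ambient(stereo):
    """Split a stereo mixture of shape (2, N) into primary and ambient parts.

    The principal eigenvector of the channel covariance captures the
    inter-channel-correlated (primary) content; the residual after the
    rank-1 projection is the uncorrelated ambience.
    """
    cov = stereo @ stereo.T / stereo.shape[1]   # 2x2 channel covariance
    _, eigvecs = np.linalg.eigh(cov)            # eigenvalues in ascending order
    v = eigvecs[:, -1]                          # principal direction
    primary = np.outer(v, v @ stereo)           # rank-1 projection onto v
    ambient = stereo - primary                  # residual = ambience
    return primary, ambient
```

By construction the two parts sum back to the input mixture, and a perfectly correlated stereo signal yields (near-)zero ambience.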
Theses
Personalised Contextual Music Recommendation (PhD) Karim M. Ibrahim — supervised by Dr. Gaël Richard, Dr. Geoffroy Peeters (Télécom Paris) and Dr. Elena Epure (Deezer). Télécom Paris, Institut Polytechnique de Paris, 2021. [View on HAL]
Singing Voice Intelligibility (M.Sc.) Karim M. Ibrahim — Department of Computer Science, National University of Singapore, 2017. [View on ScholarBank]
Primary-Ambient Audio Source Separation and Surround Sound Upmixing (M.Sc.) Karim M. Ibrahim — Faculty of Information Technology and Computer Science, Nile University, Cairo, 2015.