About me
I am an audio research scientist working on AI for audio and music. Currently, I am a Senior Audio Research Scientist at Serato in Auckland, New Zealand, where I research next-generation beat detection and low-latency stem separation for DJ and music production tools.
Before Serato, I was a Research Scientist at Emobot working on speech emotion recognition, and before that I managed R&D projects at Arkamys in Paris on road noise cancellation for vehicles. Earlier, I completed my PhD jointly at Télécom Paris and Deezer, focusing on personalised music auto-tagging and contextual music recommendation.
Prior to my PhD, I earned an M.Sc. in Computer Science at NUS, where I worked on singing voice intelligibility, and an M.Sc. in Software Engineering at Nile University in Cairo, where I worked on audio source separation and surround sound upmixing.
Research Interests
- Audio source separation and spatial audio
- Music information retrieval (auto-tagging, recommendation, singing voice)
- Speech and music perception, emotion recognition
- Deep learning for audio and music production tools
Beyond Research
- Playing guitar and drums
- Football and squash
- Hiking and camping
News
2025-05-01 Joined Serato as a Senior Audio Research Scientist in Auckland, New Zealand.
2024-04-14 Paper published at ICASSP 2024: Towards Improving Speech Emotion Recognition Using Synthetic Data Augmentation from Emotion Conversion. [PDF]
2022-12-04 Paper published at ISMIR 2022: Exploiting Device and Audio Data to Tag Music with User-Aware Listening Contexts. [PDF]
2021-12-15 Successfully defended my PhD at Télécom Paris on personalised contextual music recommendation! [Thesis]
2020-10-12 Paper published at ISMIR 2020: Should we consider the users in contextual music auto-tagging models? [PDF]
2020-05-01 Paper published at ICASSP 2020: Audio-Based Auto-Tagging With Contextual Tags for Music. [PDF]
