Projects

Speech Emotion Recognition

At Emobot, I led research on automatic speech emotion recognition — pushing accuracy significantly through synthetic data augmentation via emotion conversion, with direct impact on a real-time healthcare application.
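The augmentation idea can be sketched as a simple loop: every labeled clip is converted toward each other emotion and added back to the training set. This is only an illustration of the data-flow, not the project's actual system; `toy_convert` is a hypothetical stand-in for a learned emotion-conversion model.

```python
import numpy as np

def augment_with_conversion(dataset, convert, target_emotions):
    """Expand a labeled speech dataset: keep every original clip and
    add a converted copy for each non-matching target emotion."""
    augmented = list(dataset)
    for waveform, emotion in dataset:
        for target in target_emotions:
            if target != emotion:
                augmented.append((convert(waveform, target), target))
    return augmented

# Hypothetical stand-in for a learned emotion-conversion model:
# here it only perturbs the waveform slightly.
def toy_convert(waveform, target):
    return waveform + 0.01 * np.random.randn(*waveform.shape)
```

A real converter would reshape prosody (pitch contour, energy, timing) toward the target emotion; the point of the sketch is that the synthetic copies inherit the *target* label, which is what grows the training set for rare emotions.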

Contextual Music Recommendation

My PhD project at Télécom Paris and Deezer, studying how listening context — activity, mood, device, time of day — shapes what music people want to hear, and building systems that learn to predict it automatically.
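"Learning to predict it automatically" can be illustrated with a toy context-aware scorer: a logistic regression over context indicators, a track feature, and their interactions, so the *same* track gets different scores in different contexts. The contexts ("workout"/"sleep") and the "energy" feature are invented for this sketch and are not the project's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(is_workout, energy):
    # context one-hot, track energy, and context-energy interactions
    is_sleep = 1 - is_workout
    return np.array([is_workout, is_sleep, energy,
                     is_workout * energy, is_sleep * energy])

# Synthetic data: energetic tracks fit workouts, calm tracks fit sleep.
X, y = [], []
for _ in range(500):
    ctx = rng.integers(0, 2)          # 1 = workout, 0 = sleep
    energy = rng.random()             # track "energy" in [0, 1]
    y.append(int((ctx == 1) == (energy > 0.5)))
    X.append(features(ctx, energy))
X, y = np.array(X), np.array(y)

# Logistic regression fitted by plain gradient descent.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(3000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    w -= 1.0 * (X.T @ (p - y) / len(y))
    b -= 1.0 * np.mean(p - y)

def score(is_workout, energy):
    """Probability that a track with this energy fits the context."""
    z = features(is_workout, energy) @ w + b
    return 1 / (1 + np.exp(-z))
```

The interaction terms are what let the model flip its preference with context: the energy coefficient it learns for "workout" is positive while the one for "sleep" is negative.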

Singing Voice Intelligibility

For my M.Sc. at NUS, I studied what makes song lyrics easy or hard to understand, and built systems to measure it automatically — motivated by a real application: recommending music for language learning.

Primary-Ambient Source Separation

My first research project, spanning my M.Sc. at Nile University and an internship at Sony Stuttgart. The goal: automatically separate the direct sound from the diffuse ambience in a stereo recording, to enable surround sound upmixing.
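One classic way to frame this decomposition is per-frame PCA on the two channels: the correlated (primary) content lies along the principal direction of the 2x2 channel covariance, and the residual is treated as ambience. This is a minimal sketch of that general idea, not the specific method from the project.

```python
import numpy as np

def pca_primary_ambient(stereo, frame_len=1024):
    """Split a (samples, 2) stereo signal into primary (correlated)
    and ambient (residual) parts, frame by frame, via 2x2 PCA."""
    primary = np.zeros_like(stereo)
    ambient = np.zeros_like(stereo)
    for start in range(0, stereo.shape[0], frame_len):
        frame = stereo[start:start + frame_len]          # (T, 2)
        cov = frame.T @ frame / max(len(frame), 1)       # channel covariance
        _, vecs = np.linalg.eigh(cov)                    # ascending order
        v = vecs[:, -1]                                  # principal direction
        proj = frame @ v                                 # per-sample projection
        primary[start:start + frame_len] = np.outer(proj, v)
        ambient[start:start + frame_len] = frame - np.outer(proj, v)
    return primary, ambient
```

By construction the two parts sum back to the input, so the split is lossless; for upmixing, the primary part can then be panned to the front channels and the ambience spread to the surrounds.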