Research Projects

I do research in human-centered music information retrieval, combining music, machine learning, and cognition. For my PhD, I am advised by Dr. Brian McFee and Dr. Pablo Ripollés. I have collaborated with researchers at NYU, Stanford, and UCLA, and with audio technology companies including Universal Audio, Spotify, Smule, and Shazam.

I have a GitHub page, several research publications, and a Google Scholar profile.


Total Variation in Vocals Over Time

We conduct an exploratory study of 43,153 vocal tracks from popular songs spanning nearly a century and report trends in total variation over time and across genres.
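As a rough illustration of the measure, the sketch below computes a per-track score, assuming "total variation" refers to the summed absolute frame-to-frame change of the estimated vocal pitch contour; the pyin settings, cents conversion, and per-frame normalization are illustrative choices, not the study's exact pipeline.

import numpy as np
import librosa

def vocal_total_variation(audio_path, fmin=65.0, fmax=1047.0):
    # Load the vocal track and estimate an f0 contour with pyin.
    y, sr = librosa.load(audio_path, sr=22050, mono=True)
    f0, voiced, _ = librosa.pyin(y, fmin=fmin, fmax=fmax, sr=sr)
    f0 = f0[voiced]                      # keep voiced frames only
    if f0.size < 2:
        return 0.0
    cents = 1200.0 * np.log2(f0 / fmin)  # pitch in cents relative to fmin
    # Total variation: summed absolute frame-to-frame change, per voiced frame.
    return float(np.sum(np.abs(np.diff(cents))) / cents.size)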


HitPredict: Using Spotify Data to Predict Billboard Hits

Using several machine learning algorithms trained on Spotify data, we predict the Billboard success of a song with roughly 75% accuracy.
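As a hedged sketch of the modeling setup, the example below trains one simple classifier (logistic regression) on Spotify audio features to predict a binary hit label; the CSV filename, column names, and train/test split are hypothetical stand-ins, not the project's actual data layout.

import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical table of Spotify audio features plus a binary Billboard label.
FEATURES = ["danceability", "energy", "loudness", "speechiness", "acousticness",
            "instrumentalness", "liveness", "valence", "tempo"]
df = pd.read_csv("songs_with_spotify_features.csv")   # illustrative filename

X_train, X_test, y_train, y_test = train_test_split(
    df[FEATURES], df["billboard_hit"], test_size=0.2,
    stratify=df["billboard_hit"], random_state=0)

# Standardize features, then fit a logistic regression baseline.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")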

Music Visualizer

A music visualizer I built for my Music, Computing, and Design course with Ge Wang at Stanford.


Instrument Design for a Laptop Opera

I worked with composer Anne Hege and a subset of the Stanford Laptop Orchestra on The Furies: A new LaptOpera.


Vocal Expression

My ISMIR 2020 late-breaking demo paper: An Evaluation Tool for Subjective Evaluation of Amateur Vocal Performances of “Amazing Grace.”


Cochlear Implant Listening

Familiarity, quality, and preference in cochlear implant listening.
