GenRec4Music (To be Continued)

Published: Apr 20, 2025 02:06 AM
Author: Jerry Huang
Description: AI-Generative Music Recommender based on Suno
Tags: Data Science

Comparing generative music recommendation with traditional recommender systems

🎵 Background

As a music lover and a data scientist, I’ve always wondered: What if your next favorite song didn’t exist yet? What if it could be generated just for you?
That question sparked GenRec4Music, a project exploring the potential of AI-generated music as a form of recommendation. Rather than just recommending existing tracks based on user history, could we train a system to create a new song tailored to your taste?
Inspired by tools like Suno, I designed a system to compare:
  • 🎹 Generative Recommendation – generating a new song from user history
  • 📊 Traditional Recommendation – recommending existing songs using collaborative filtering and embedding techniques (like Word2Vec)

🎯 Objectives

  • Compare the effectiveness of generative AI vs. classical recommendation systems
  • Explore how user playlist data + metadata can be used to prompt music generation
  • Build an evaluation framework to assess:
    • Lyric relevance (via LLM-based semantic scoring)
    • Audio similarity (via MIDI features + vector space similarity)
 

🛠 Methods & Pipeline

Input Data

  • Playlist datasets (listening histories of 10+ songs per user)
  • Song metadata: artist, title, lyrics, mood tags (an assumed schema is sketched below)
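
Since the post doesn't spell out the exact schema, here is a minimal sketch in Python/pandas of how the two inputs could fit together: one table of (user, song) listening events joined to a metadata table, filtered to users with 10+ songs. All column names (`user_id`, `song_id`, `mood_tags`, etc.) are assumptions for illustration, not the project's actual fields.

```python
import pandas as pd

# Assumed layout: one row per (user, song) listening event.
playlists = pd.DataFrame({
    "user_id": ["u1", "u1", "u2"],
    "song_id": ["s1", "s2", "s1"],
})

# Assumed metadata table, keyed by song_id.
metadata = pd.DataFrame({
    "song_id": ["s1", "s2"],
    "artist": ["Artist A", "Artist B"],
    "title": ["Song One", "Song Two"],
    "lyrics": ["...", "..."],
    "mood_tags": [["mellow", "night"], ["upbeat"]],
})

# Join listening history with metadata, then keep users with 10+ songs
# (the toy rows above won't pass this filter; real histories would).
history = playlists.merge(metadata, on="song_id")
active = history.groupby("user_id").filter(lambda g: len(g) >= 10)
```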

Traditional Models

  • Collaborative Filtering
  • Word2Vec for song embeddings
  • Clustering (K-means, DBSCAN) to group listening patterns (see the sketch below)
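
As a sketch of the embedding and clustering steps (using gensim and scikit-learn; the playlists here are toy stand-ins for the real dataset): each user's playlist is treated as a "sentence" of song IDs, so songs that co-occur across listening histories land close together in the vector space. K-means is shown; DBSCAN drops in the same way.

```python
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

# Each playlist is a "sentence" of song IDs.
playlists = [
    ["s1", "s2", "s3", "s4"],
    ["s2", "s3", "s5"],
    ["s1", "s4", "s5", "s6"],
]

model = Word2Vec(
    sentences=playlists,
    vector_size=16,  # embedding dimension (small for this toy example)
    window=5,        # co-occurrence window within a playlist
    min_count=1,     # keep rare songs so the toy vocabulary survives
    sg=1,            # skip-gram, the usual choice for item2vec-style data
)

# Songs most often co-listened with s1.
print(model.wv.most_similar("s1", topn=3))

# Group the learned song vectors into listening-pattern clusters.
song_ids = model.wv.index_to_key
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(model.wv[song_ids])
print(dict(zip(song_ids, labels)))
```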

Generative System

  • 🎧 Prompt Suno to generate new songs based on songs selected from the user's history
  • Input = style keywords + mood + reference songs (prompt assembly is sketched below)
  • Output = brand-new audio and lyrics
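
Suno itself is driven through its own interface, so the sketch below only covers the part this pipeline controls: assembling the text prompt from style keywords, mood, and reference songs. The template, field names, and `build_prompt` helper are assumptions for illustration.

```python
# Only the text-prompt side is shown; the prompt is then handed to Suno.
def build_prompt(style_keywords, mood, reference_songs):
    refs = "; ".join(f"{s['title']} by {s['artist']}" for s in reference_songs)
    return (
        f"Write a {mood} song in the style of {', '.join(style_keywords)}. "
        f"Take inspiration from tracks this listener loves: {refs}. "
        "Use original lyrics and melody only; do not copy any existing song."
    )

prompt = build_prompt(
    style_keywords=["dream pop", "lo-fi"],
    mood="wistful late-night",
    reference_songs=[
        {"title": "Song One", "artist": "Artist A"},
        {"title": "Song Two", "artist": "Artist B"},
    ],
)
print(prompt)
```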

Evaluation

  • LLM-based lyric scoring (e.g., style, theme consistency)
  • MIDI-based audio similarity (e.g., cosine similarity of vectorized audio features); both metrics are sketched below
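
A minimal sketch of both metrics, assuming the MIDI features have already been extracted into fixed-length vectors and that the lyric judge is any chat-capable LLM. The feature values, rubric wording, and `lyric_judge_prompt` helper are illustrative, not the project's exact setup.

```python
import numpy as np

# Audio side: cosine similarity between fixed-length feature vectors
# (e.g., pitch-histogram / tempo / note-density statistics from MIDI).
# Feature extraction itself is out of scope; these vectors are placeholders.
def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

generated = np.array([0.8, 0.1, 0.3, 0.5])  # features of the generated song
reference = np.array([0.7, 0.2, 0.3, 0.6])  # mean features of the user's history
print(f"audio similarity: {cosine_similarity(generated, reference):.3f}")

# Lyric side: an LLM acts as a judge. Only the judging prompt is built here;
# it can be sent to any chat-capable model.
def lyric_judge_prompt(lyrics: str, user_themes: list) -> str:
    return (
        "Rate from 1 to 10 how well the following lyrics match the "
        f"listener's preferred themes ({', '.join(user_themes)}), judging "
        "style and theme consistency. Reply with only the number.\n\n"
        f"Lyrics:\n{lyrics}"
    )

print(lyric_judge_prompt("city lights fade, I drift away...", ["night drives", "nostalgia"]))
```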
 

🌌 Why This Matters

This project wasn't just about algorithms; it was about creativity: imagining a future where music recommendation is no longer about “finding” a song, but about co-creating one.
As someone who writes music and builds models, this project let me merge both worlds: treating music not only as data, but as a deeply personal expression.
 

🔮 What's Next?

  • Fine-tune prompts with user emotion or context (e.g., weather, time of day)
  • Add feedback loop for real-time tuning of generative preferences
  • Explore real-world deployment (e.g., “Make Me a Song” feature for mood apps)
 

📁 Links (To be Continued)