
A cinematic AI web application that detects real-time emotions via webcam and recommends personalized music playlists using a custom Deep Learning model.



Moodmate Logo

AI-Powered Auditory Empathy.

MoodMate is an intelligent emotional companion that detects your mood from a selfie, curates personalized music playlists, and connects you with a supportive community. Built with Next.js 16, React 19, and Supabase, and powered by a Flask AI Backend, it features a bold pop brutalist aesthetic and practical tools for your daily emotional well-being.

➥ Live Demo


MoodMate Interface Showcase

Overview

Emotional well-being shouldn't be complicated or isolating. MoodMate elevates your daily mood with three core pillars:

  • Detection: Instantly analyze your emotional state from a single selfie using advanced AI.
  • Curation: Receive personalized music playlists tailored to resonate with or uplift your current vibe.
  • Connection: Share your mood card with a supportive community to find others on the same wavelength.

All wrapped in a high-contrast, mobile-responsive Pop Brutalist UI designed with accessibility in mind.

Prerequisites:

Before setting up MoodMate, ensure you have:

  • Git (Version control)
  • Node.js (v18.x or later)
  • npm or pnpm (Package manager)
  • Python (v3.9 or later)
  • Supabase (Account & Project)
  • Docker (Optional, for containerized backend)

Technologies Utilized:

  • Framework: Next.js 16 (App Router)
  • Language: TypeScript & Python
  • Database & Auth: Supabase
  • AI Backend: Flask
  • Machine Learning: TensorFlow & OpenCV
  • Data Processing: Pandas & NumPy
  • Styling: Tailwind CSS
  • Deployment: Docker & Hugging Face Spaces
  • Icons: Lucide React

Datasets & Model Training

MoodMate’s core intelligence is powered by a custom VGG-style Convolutional Neural Network (CNN), trained on industry-standard datasets and optimized via a robust preprocessing pipeline to ensure real-time accuracy.

πŸ“ Datasets Used

  • FER-2013 (Facial Expression Recognition)

    • Source: FER-2013 (Kaggle)
    • Scale: ~35,000 grayscale facial images (48×48 pixel resolution).
    • Classes: 7 distinct emotions (Happy, Sad, Angry, Fear, Surprise, Disgust, Neutral).
    • Usage: Serves as the primary training ground for the deep learning model, pre-processed into .npy binary files for efficient memory loading.
  • Spotify Tracks Dataset

    • Source: Spotify Tracks Dataset (Kaggle)
    • Features: Rich audio attributes including valence, energy, danceability, and track_genre.
    • Usage: Powers the recommendation engine by mapping detected emotion labels to sonically aligned music genres and attributes (e.g., High Valence + High Energy = "Happy").
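
The emotion-to-audio mapping described above can be sketched as a simple range filter. The thresholds, emotion keys, and function names below are illustrative assumptions, not MoodMate's actual recommendation code (which runs over the Kaggle dataset with Pandas):

```python
# Hypothetical valence/energy windows per detected emotion.
# Example: high valence + high energy -> "Happy", as described above.
EMOTION_PROFILES = {
    "Happy":   {"valence": (0.6, 1.0), "energy": (0.6, 1.0)},
    "Sad":     {"valence": (0.0, 0.4), "energy": (0.0, 0.5)},
    "Angry":   {"valence": (0.0, 0.5), "energy": (0.7, 1.0)},
    "Neutral": {"valence": (0.4, 0.7), "energy": (0.3, 0.7)},
}

def matches(track: dict, emotion: str) -> bool:
    """True if a track's audio features fall inside the emotion's ranges."""
    profile = EMOTION_PROFILES[emotion]
    return all(lo <= track[feat] <= hi for feat, (lo, hi) in profile.items())

def recommend(tracks: list[dict], emotion: str, limit: int = 10) -> list[dict]:
    """Filter Spotify-style track rows down to mood-matched picks."""
    return [t for t in tracks if matches(t, emotion)][:limit]
```

In the real pipeline this filtering runs over the full Spotify Tracks Dataset, so genre (track_genre) and danceability would join valence and energy as filter dimensions.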

🔬 Preprocessing & Training Pipeline

To overcome overfitting and ensure the model works in varied lighting conditions, the following engineering strategies were implemented:

  • Image Standardization:

    • Conversion to single-channel Grayscale.
    • Pixel normalization (scaling 0-255 values to 0-1 range).
    • Resizing to strict 48×48 input dimensions.
  • Real-Time Data Augmentation:

    • Implemented ImageDataGenerator to artificially expand the training set.
    • Techniques: Rotation (±15°), Zoom (10%), Width/Height Shifts (10%), and Horizontal Flips to force the model to learn structural features rather than memorizing pixels.
  • Regularization Strategy:

    • L2 Kernel Regularization (0.01) applied to dense layers.
    • Dropout layers (increased to 0.6) to prevent neuron co-dependency.
    • Callbacks: Utilized EarlyStopping and ReduceLROnPlateau to dynamically optimize the learning rate during training.
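
The image-standardization steps above amount to a few array operations. Here is a minimal NumPy-only sketch; the real pipeline uses OpenCV (cv2.cvtColor, cv2.resize) for the same transformations, and the block-mean downsample below assumes input dimensions are multiples of 48:

```python
import numpy as np

def standardize(image: np.ndarray) -> np.ndarray:
    """Standardize one RGB frame for the CNN: grayscale, 0-1 range, 48x48."""
    # 1. Single-channel grayscale via the ITU-R BT.601 luminance weights.
    gray = image @ np.array([0.299, 0.587, 0.114])
    # 2. Normalize 0-255 pixel values to the 0-1 range.
    gray = gray.astype(np.float32) / 255.0
    # 3. Resize to 48x48 by averaging equal-sized blocks
    #    (stand-in for cv2.resize; assumes h and w are multiples of 48).
    h, w = gray.shape
    gray = gray.reshape(48, h // 48, 48, w // 48).mean(axis=(1, 3))
    # Add the channel axis Keras expects: (48, 48, 1).
    return gray[..., np.newaxis]
```

Batches of frames standardized this way are what get saved as .npy files for training and fed to ImageDataGenerator for augmentation.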

Features:

  • AI Mood Scanner: Analyze your emotions from a selfie using computer vision.
  • Vibe Curation: Get instant, mood-matched music recommendations.
  • Community Pulse: Share your "vibe cards" and connect with others feeling similarly.
  • Secure Identity: Seamless authentication via Supabase.
  • Pop Brutalist Design: A bold, high-contrast interface for a unique user experience.
  • Emotional Safety: Crisis resource integration for detected distress signals.
  • Responsive & Fluid: Optimized for all devices with smooth animations.

Run Locally:

  1. Clone the Repository:

    git clone https://github.com/RanitDERIA/moodmate.git
    cd moodmate
  2. Backend Setup:

    Open a terminal and navigate to the backend directory:

    cd backend
    pip install -r requirements.txt
    python app.py

    The Flask server will start on http://localhost:5000.

  3. Frontend Setup:

    Open a new terminal in the project root:

    npm install
    # or
    pnpm install
  4. Environment Configuration:

    Create a .env.local file in the root directory:

    NEXT_PUBLIC_SUPABASE_URL=your_supabase_url
    NEXT_PUBLIC_SUPABASE_ANON_KEY=your_supabase_anon_key
    NEXT_PUBLIC_API_URL=http://localhost:5000
  5. Start Application:

    npm run dev

    Visit http://localhost:3000 to begin your journey.

Deployment:

MoodMate follows a distributed deployment strategy:

  1. Frontend: Deployed on Vercel for optimal performance and edge capabilities.
  2. Backend: AI Service hosted on Hugging Face Spaces (Docker/Flask).

To deploy your own instance:

  1. Fork the repo.
  2. Deploy the backend folder to Hugging Face Spaces (choose Docker SDK).
  3. Import the repo to Vercel and configure the environment variables.

Configuration:

  • Environment Variables:

    • NEXT_PUBLIC_SUPABASE_URL: Your Supabase Project URL.
    • NEXT_PUBLIC_SUPABASE_ANON_KEY: Your Supabase Anonymous Key.
    • NEXT_PUBLIC_API_URL: URL of your deployed Flask Backend.
  • Theme & Branding:

    • The "Pop Brutalist" aesthetic is centrally managed in tailwind.config.js.
    • Primary colors and shadows can be adjusted to match your preferred vibe.

Project Structure:

moodmate/
├── app/                         # Next.js App Router (frontend pages & routes)
│   ├── api/                     # Server-side API routes (Next.js)
│   │   ├── analyze-text/        # Mood analysis API (connects to ML backend)
│   │   └── metadata/            # SEO & OpenGraph metadata
│   ├── auth/callback/           # OAuth authentication callback (Supabase)
│   ├── community/               # Community playlists & social features
│   ├── home/                    # User dashboard landing
│   ├── login | signup           # Authentication pages
│   ├── profile | my-vibe        # User profile & mood history
│   ├── layout.tsx               # Global layout (Navbar, Footer, Providers)
│   ├── globals.css              # Global Tailwind styles
│   └── not-found.tsx            # Custom 404 page
│
├── backend/                     # Machine Learning backend (Python)
│   ├── app.py                   # Flask app serving ML predictions
│   ├── models/                  # Trained ML model (.h5)
│   ├── data/                    # Processed dataset used for training
│   ├── requirements.txt         # Python dependencies (TensorFlow, NumPy, etc.)
│   └── Dockerfile               # Containerized ML backend
│
├── components/                  # Reusable React components
│   ├── community/               # Vibe cards, comments, social sharing
│   ├── home/                    # Dashboard UI (stats, mood selector)
│   ├── layout/                  # Navbar, footer, user navigation
│   └── custom/                  # Advanced UI (webcam, grids, OAuth buttons)
│
├── lib/                         # Shared frontend utilities
│   ├── api.ts                   # API helpers (frontend ↔ backend)
│   ├── supabase.ts              # Supabase client configuration
│   ├── moods.ts                 # Mood constants & mappings
│   └── validators.ts            # Input validation schemas
│
├── supabase/                    # Database schema & migrations
│   └── migrations/              # SQL migrations (comments, likes, profiles)
│
├── public/                      # Static assets
│   ├── images/                  # Logos, mood icons, partner platforms
│   └── thumbnails/              # UI & feature preview images
│
├── middleware.ts                # Route protection & auth middleware
├── next.config.ts               # Next.js configuration
├── package.json                 # Frontend dependencies & scripts
├── tsconfig.json                # TypeScript configuration
├── README.md                    # Project documentation
└── LICENSE                      # Apache License 2.0

Model Training & Evaluation

All deep learning experiments, from data preprocessing to final model selection, were conducted in a cloud-based GPU environment using Google Colab to ensure computational efficiency and reproducibility.

📓 Source Notebooks (Google Colab)

The complete training pipeline is documented in the following notebooks:

  • Data Preprocessing & Augmentation: Handles loading the FER-2013 dataset, converting raw pixels to standard arrays, and generating .npy binary files for efficient loading.

  • Model Training & Fine-Tuning: Contains the custom CNN architecture, the data augmentation setup (ImageDataGenerator), and the full training loop with callbacks.

Note: These notebooks demonstrate the progression from raw CSV data to a finalized .h5 model file.

📈 Performance Metrics

The final model achieved a stable Validation Accuracy of ~63% on the FER-2013 dataset, a strong baseline for a custom lightweight CNN.

🔹 Key Observations

  • Overfitting Eliminated: By implementing Data Augmentation (Rotation ±15°, Zoom) and L2 Regularization, the "Generalization Gap" between training and validation accuracy was effectively closed.
  • Robust Learning: The validation loss curve tracks closely with training loss, confirming that the model learns structural features rather than memorizing pixel noise.
  • Dynamic Optimization: Utilized ReduceLROnPlateau to fine-tune weights whenever learning stalled, ensuring convergence.
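
The core behaviour of the ReduceLROnPlateau callback mentioned above can be sketched in a few lines of plain Python. Training uses Keras' built-in callback; the factor and patience values here are illustrative, not the project's actual hyperparameters:

```python
class ReduceLROnPlateauSketch:
    """Minimal re-implementation of the callback's core idea:
    shrink the learning rate when validation loss stops improving."""

    def __init__(self, lr: float, factor: float = 0.5, patience: int = 2):
        self.lr = lr
        self.factor = factor        # multiplier applied on a plateau
        self.patience = patience    # epochs without improvement to tolerate
        self.best = float("inf")
        self.wait = 0

    def on_epoch_end(self, val_loss: float) -> float:
        if val_loss < self.best:
            self.best = val_loss    # improvement: reset the counter
            self.wait = 0
        else:
            self.wait += 1
            if self.wait >= self.patience:
                self.lr *= self.factor   # plateau: reduce the learning rate
                self.wait = 0
        return self.lr
```

EarlyStopping follows the same bookkeeping but halts training (and restores the best weights) instead of lowering the rate.
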

  • Training Accuracy: Convergent training and validation accuracy demonstrates effective feature learning.
  • Training Loss: Monotonic loss reduction across epochs confirms stable optimization.
  • Confusion Matrix: Strong diagonal performance indicates high true positive rates for 'Happy' and 'Neutral'.
  • Classification Report: Detailed precision, recall, and F1-scores for all 7 emotion classes.
  • Performance Metrics per Emotion: Visual breakdown of model efficacy across specific emotional states.

🧪 Final Inference Model

  • Architecture: Custom VGG-style CNN (Lightweight, optimized for web deployment).
  • Export Format: moodmate_final_model.h5 (Keras/TensorFlow).
  • Inference Strategy: The model is loaded globally in the Flask backend to ensure <200ms latency per prediction.
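
The "loaded globally" inference strategy boils down to a load-once, predict-many pattern. A runnable sketch, with a stub standing in for the real tf.keras.models.load_model call so the pattern itself works without TensorFlow installed:

```python
import time

def _load_model(path: str):
    """Stand-in for tf.keras.models.load_model(path); assumed to be slow."""
    time.sleep(0.01)              # simulate expensive deserialization
    return lambda pixels: "Happy" # dummy predictor in place of the real CNN

# Loaded ONCE at module import, then shared by every Flask request handler,
# so per-request latency excludes model deserialization entirely.
MODEL = _load_model("models/moodmate_final_model.h5")

def predict_emotion(pixels) -> str:
    """What a Flask /predict route body reduces to: reuse the global model."""
    return MODEL(pixels)
```

Loading inside the request handler instead would pay the deserialization cost on every call, which is what the global load avoids to stay under the ~200ms budget.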

Privacy & Safety:

  • Ephemeral Processing: User photos are processed in-memory for mood detection and immediately discarded. No images are ever stored on our servers.
  • Data Security: Personal data and curated vibes are secured via Supabase's Row Level Security (RLS) policies.
  • Emotional Well-being: If signs of distress are detected, MoodMate automatically provides links to verified crisis hotlines and mental health resources.

License:

This project is licensed under the Apache License 2.0.

  • Free to use, modify, and distribute (including commercial use)
  • Redistributions must include proper attribution and the license copy
  • Modified files must clearly indicate changes
  • Provided AS IS, without warranties or liability

Acknowledgements:

I would like to express my sincere gratitude to my mentor for their invaluable guidance, continuous support, and constructive feedback throughout the development of MoodMate. Their insights played a pivotal role in refining the machine learning pipeline and shaping the final architecture of this project.

I also extend my thanks to Infosys Springboard for providing the platform, resources, and internship opportunity that allowed me to explore advanced AI/ML concepts and apply them in a real-world scenario.


Virtual Internship 6.0

Let's Connect

Email LinkedIn Twitter GitHub


⭐ Star this repository if you enjoyed your vibe check!

Made with ❤️ and 🎵 by Ranit Deria