RanitDERIA/moodmate
A cinematic AI web application that detects real-time emotions via webcam and recommends personalized music playlists using a custom Deep Learning model.
AI-Powered Auditory Empathy.
MoodMate is an intelligent emotional companion that detects your mood from a selfie, curates personalized music playlists, and connects you with a supportive community. Built with Next.js 16, React 19, and Supabase, and powered by a Flask AI Backend, it features a bold pop brutalist aesthetic and practical tools for your daily emotional well-being.
Table of Contents
- Overview
- Prerequisites
- Technologies Utilized
- Datasets & Model Training
- Features
- Run Locally
- Deployment
- Configuration
- Project Structure
- Model Training & Evaluation
- Privacy & Safety
- License
- Acknowledgements
- Connect
Overview
Emotional wellbeing shouldn't be complicated or isolating. MoodMate elevates your daily mood with three core pillars:
- Detection: Instantly analyze your emotional state from a single selfie using advanced AI.
- Curation: Receive personalized music playlists tailored to resonate with or uplift your current vibe.
- Connection: Share your mood card with a supportive community to find others on the same wavelength.
All wrapped in a high-contrast, partially accessible, and mobile-responsive Pop Brutalist UI.
Prerequisites:
Before setting up MoodMate, ensure you have:
- Git (version control)
- Node.js (v18.x or later)
- npm or pnpm (package manager)
- Python (v3.9 or later)
- Supabase (account & project)
- Docker (optional, for a containerized backend)
Technologies Utilized:
- Framework: Next.js 16 (App Router)
- Language: TypeScript & Python
- Database & Auth: Supabase
- AI Backend: Flask
- Machine Learning: TensorFlow & OpenCV
- Data Processing: Pandas & NumPy
- Styling: Tailwind CSS
- Deployment: Docker & Hugging Face Spaces
- Icons: Lucide React
Datasets & Model Training
MoodMate's core intelligence is powered by a custom VGG-style Convolutional Neural Network (CNN), trained on industry-standard datasets and optimized via a robust preprocessing pipeline to ensure real-time accuracy.
Datasets Used
- FER-2013 (Facial Expression Recognition)
  - Source: FER-2013 (Kaggle)
  - Scale: ~35,000 grayscale facial images (48×48 pixel resolution).
  - Classes: 7 distinct emotions (Happy, Sad, Angry, Fear, Surprise, Disgust, Neutral).
  - Usage: Serves as the primary training ground for the deep learning model, pre-processed into `.npy` binary files for efficient memory loading.
- Spotify Tracks Dataset
  - Source: Spotify Tracks Dataset (Kaggle)
  - Features: Rich audio attributes including `valence`, `energy`, `danceability`, and `track_genre`.
  - Usage: Powers the recommendation engine by mapping detected emotion labels to sonically aligned music genres and attributes (e.g., High Valence + High Energy = "Happy").
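As a rough illustration of how such an emotion-to-audio-feature mapping can drive recommendations, here is a hedged Python sketch. The thresholds, genre lists, and names (`EMOTION_PROFILES`, `recommend`) are illustrative assumptions for demonstration, not the repository's actual code:

```python
# Illustrative sketch only: pair each detected emotion with target Spotify
# audio-feature ranges and candidate genres. All thresholds and genre
# choices below are assumptions, not MoodMate's real mapping.

EMOTION_PROFILES = {
    "Happy":   {"valence": (0.7, 1.0), "energy": (0.6, 1.0), "genres": ["pop", "dance"]},
    "Sad":     {"valence": (0.0, 0.3), "energy": (0.0, 0.4), "genres": ["acoustic", "piano"]},
    "Angry":   {"valence": (0.0, 0.4), "energy": (0.7, 1.0), "genres": ["metal", "rock"]},
    "Neutral": {"valence": (0.4, 0.6), "energy": (0.3, 0.6), "genres": ["chill", "ambient"]},
}

def matches_emotion(track: dict, emotion: str) -> bool:
    """Return True if a track's valence/energy fall inside the emotion's ranges."""
    profile = EMOTION_PROFILES[emotion]
    v_lo, v_hi = profile["valence"]
    e_lo, e_hi = profile["energy"]
    return v_lo <= track["valence"] <= v_hi and e_lo <= track["energy"] <= e_hi

def recommend(tracks: list, emotion: str, limit: int = 10) -> list:
    """Filter a track list down to mood-matched candidates."""
    return [t for t in tracks if matches_emotion(t, emotion)][:limit]
```

The real engine presumably also uses `danceability` and `track_genre`, but range-filtering on valence and energy captures the core idea (e.g., high valence plus high energy matches "Happy").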
Preprocessing & Training Pipeline
To overcome overfitting and ensure the model works in varied lighting conditions, the following engineering strategies were implemented:
- Image Standardization:
  - Conversion to single-channel grayscale.
  - Pixel normalization (scaling `0-255` values to the `0-1` range).
  - Resizing to strict `48x48` input dimensions.
- Real-Time Data Augmentation:
  - Implemented `ImageDataGenerator` to artificially expand the training set.
  - Techniques: Rotation (±15°), Zoom (10%), Width/Height Shifts (10%), and Horizontal Flips to force the model to learn structural features rather than memorizing pixels.
- Regularization Strategy:
  - L2 Kernel Regularization (`0.01`) applied to dense layers.
  - Dropout layers (increased to `0.6`) to prevent neuron co-dependency.
  - Callbacks: Utilized EarlyStopping and ReduceLROnPlateau to dynamically optimize the learning rate during training.
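The image-standardization steps above can be sketched framework-free with NumPy. This is a simplified illustration under stated assumptions (the actual pipeline presumably uses OpenCV for grayscale conversion and interpolated resizing):

```python
import numpy as np

def preprocess_face(rgb_image: np.ndarray) -> np.ndarray:
    """Sketch of the standardization steps: grayscale conversion, 0-1
    normalization, and a crude nearest-neighbour resize to 48x48.
    Details are assumptions; the real pipeline likely uses OpenCV."""
    # 1. Single-channel grayscale via the standard luminance weights.
    gray = rgb_image @ np.array([0.299, 0.587, 0.114])
    # 2. Normalize 0-255 pixel values into the 0-1 range.
    gray = gray.astype(np.float32) / 255.0
    # 3. Nearest-neighbour resize to the strict 48x48 model input.
    h, w = gray.shape
    rows = np.arange(48) * h // 48
    cols = np.arange(48) * w // 48
    resized = gray[rows][:, cols]
    # 4. Add batch and channel axes -> (1, 48, 48, 1), as Keras expects.
    return resized[np.newaxis, :, :, np.newaxis]
```

Augmentations such as horizontal flips are then a one-liner at training time (e.g. `image[:, ::-1]`), though in practice `ImageDataGenerator` handles them on the fly.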
Features:
- AI Mood Scanner: Analyze your emotions from a selfie using computer vision.
- Vibe Curation: Get instant, mood-matched music recommendations.
- Community Pulse: Share your "vibe cards" and connect with others feeling similarly.
- Secure Identity: Seamless authentication via Supabase.
- Pop Brutalist Design: A bold, high-contrast interface for a unique user experience.
- Emotional Safety: Crisis resource integration for detected distress signals.
- Responsive & Fluid: Optimized for all devices with smooth animations.
Run Locally:
- Clone the Repository:

```bash
git clone https://github.com/RanitDERIA/moodmate.git
cd moodmate
```

- Backend Setup:

Open a terminal and navigate to the backend directory:

```bash
cd backend
pip install -r requirements.txt
python app.py
```

The Flask server will start on `http://localhost:5000`.

- Frontend Setup:

Open a new terminal in the project root:

```bash
npm install   # or pnpm install
```

- Environment Configuration:

Create a `.env.local` file in the root directory:

```
NEXT_PUBLIC_SUPABASE_URL=your_supabase_url
NEXT_PUBLIC_SUPABASE_ANON_KEY=your_supabase_anon_key
```

- Start Application:

```bash
npm run dev
```

Visit `http://localhost:3000` to begin your journey.
Deployment:
MoodMate follows a distributed deployment strategy:
- Frontend: Deployed on Vercel for optimal performance and edge capabilities.
- Backend: AI Service hosted on Hugging Face Spaces (Docker/Flask).
To deploy your own instance:
- Fork the repo.
- Deploy the `backend` folder to Hugging Face Spaces (choose the Docker SDK).
- Import the repo to Vercel and configure the environment variables.
Configuration:
- Environment Variables:
  - `NEXT_PUBLIC_SUPABASE_URL`: Your Supabase Project URL.
  - `NEXT_PUBLIC_SUPABASE_ANON_KEY`: Your Supabase Anonymous Key.
  - `NEXT_PUBLIC_API_URL`: URL of your deployed Flask backend.
- Theme & Branding:
  - The "Pop Brutalist" aesthetic is centrally managed in `tailwind.config.js`.
  - Primary colors and shadows can be adjusted to match your preferred vibe.
Project Structure:
```
moodmate/
├── app/                     # Next.js App Router (frontend pages & routes)
│   ├── api/                 # Server-side API routes (Next.js)
│   │   ├── analyze-text/    # Mood analysis API (connects to ML backend)
│   │   └── metadata/        # SEO & OpenGraph metadata
│   ├── auth/callback/       # OAuth authentication callback (Supabase)
│   ├── community/           # Community playlists & social features
│   ├── home/                # User dashboard landing
│   ├── login | signup       # Authentication pages
│   ├── profile | my-vibe    # User profile & mood history
│   ├── layout.tsx           # Global layout (Navbar, Footer, Providers)
│   ├── globals.css          # Global Tailwind styles
│   └── not-found.tsx        # Custom 404 page
│
├── backend/                 # Machine Learning backend (Python)
│   ├── app.py               # Flask app serving ML predictions
│   ├── models/              # Trained ML model (.h5)
│   ├── data/                # Processed dataset used for training
│   ├── requirements.txt     # Python dependencies (TensorFlow, NumPy, etc.)
│   └── Dockerfile           # Containerized ML backend
│
├── components/              # Reusable React components
│   ├── community/           # Vibe cards, comments, social sharing
│   ├── home/                # Dashboard UI (stats, mood selector)
│   ├── layout/              # Navbar, footer, user navigation
│   └── custom/              # Advanced UI (webcam, grids, OAuth buttons)
│
├── lib/                     # Shared frontend utilities
│   ├── api.ts               # API helpers (frontend → backend)
│   ├── supabase.ts          # Supabase client configuration
│   ├── moods.ts             # Mood constants & mappings
│   └── validators.ts        # Input validation schemas
│
├── supabase/                # Database schema & migrations
│   └── migrations/          # SQL migrations (comments, likes, profiles)
│
├── public/                  # Static assets
│   ├── images/              # Logos, mood icons, partner platforms
│   └── thumbnails/          # UI & feature preview images
│
├── middleware.ts            # Route protection & auth middleware
├── next.config.ts           # Next.js configuration
├── package.json             # Frontend dependencies & scripts
├── tsconfig.json            # TypeScript configuration
├── README.md                # Project documentation
└── LICENSE                  # Apache License 2.0
```
Model Training & Evaluation
All deep learning experiments, from data preprocessing to final model selection, were conducted in a cloud-based GPU environment using Google Colab to ensure computational efficiency and reproducibility.
Source Notebooks (Google Colab)
The complete training pipeline is documented in the following notebooks:
- Data Preprocessing & Augmentation: Handles loading the FER-2013 dataset, converting raw pixels to standard arrays, and generating `.npy` binary files for efficient loading.
- Model Training & Fine-Tuning: Contains the custom CNN architecture, the data augmentation setup (`ImageDataGenerator`), and the full training loop with callbacks.

Note: These notebooks demonstrate the progression from raw CSV data to a finalized `.h5` model file.
Performance Metrics
The final model achieved a stable Validation Accuracy of ~63% on the FER-2013 dataset, a strong baseline for a custom lightweight CNN.
Key Observations
- Overfitting Eliminated: By implementing Data Augmentation (Rotation Β±15Β°, Zoom) and L2 Regularization, the "Generalization Gap" between training and validation accuracy was effectively closed.
- Robust Learning: The validation loss curve tracks closely with training loss, confirming that the model learns structural features rather than memorizing pixel noise.
- Dynamic Optimization: Utilized `ReduceLROnPlateau` to fine-tune weights whenever learning stalled, ensuring convergence.
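The core idea behind `ReduceLROnPlateau` can be illustrated without TensorFlow. The simplified class below is a sketch of the callback's logic under stated assumptions (Keras's real callback adds `min_lr`, `cooldown`, and other options):

```python
class SimplePlateauScheduler:
    """Framework-free sketch of the ReduceLROnPlateau idea: if validation
    loss fails to improve for `patience` epochs, multiply the learning
    rate by `factor`. Illustrative only, not the Keras implementation."""

    def __init__(self, lr: float, factor: float = 0.5, patience: int = 3):
        self.lr = lr
        self.factor = factor
        self.patience = patience
        self.best = float("inf")   # best validation loss seen so far
        self.wait = 0              # epochs without improvement

    def on_epoch_end(self, val_loss: float) -> float:
        if val_loss < self.best:
            self.best = val_loss   # improvement: reset the stall counter
            self.wait = 0
        else:
            self.wait += 1         # another stalled epoch
            if self.wait >= self.patience:
                self.lr *= self.factor  # decay the learning rate
                self.wait = 0
        return self.lr
```

EarlyStopping follows the same counting pattern, except that exhausting the patience budget halts training instead of decaying the learning rate.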
Final Inference Model
- Architecture: Custom VGG-style CNN (Lightweight, optimized for web deployment).
- Export Format: `moodmate_final_model.h5` (Keras/TensorFlow).
- Inference Strategy: The model is loaded globally in the Flask backend to keep latency under 200 ms per prediction.
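Loading the model "globally" amounts to a load-once, serve-many pattern: the expensive weights load happens once per process, and every request reuses the cached object. Below is a minimal framework-free sketch; `Model`, `get_model`, and `handle_request` are hypothetical stand-ins, not the actual code in `backend/app.py`:

```python
import time

class Model:
    """Stand-in for a Keras model; the sleep simulates a slow weights load."""
    def __init__(self):
        time.sleep(0.05)           # pretend this is load_model("...h5")
    def predict(self, x):
        return "Happy"             # placeholder inference result

_MODEL = None  # process-wide cache, populated on first use

def get_model() -> Model:
    """Return the shared model instance, loading it only once."""
    global _MODEL
    if _MODEL is None:
        _MODEL = Model()
    return _MODEL

def handle_request(image) -> str:
    # Per-request cost is now just inference, not model loading,
    # which is what keeps prediction latency low.
    return get_model().predict(image)
```

In a real Flask app the same effect is achieved by loading the model at module import time, before the first request arrives, rather than inside the route handler.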
Privacy & Safety:
- Ephemeral Processing: User photos are processed in-memory for mood detection and immediately discarded. No images are ever stored on our servers.
- Data Security: Personal data and curated vibes are secured via Supabase's Row Level Security (RLS) policies.
- Emotional Well-being: If signs of distress are detected, MoodMate automatically provides links to verified crisis hotlines and mental health resources.
License:
This project is licensed under the Apache License 2.0.
- Free to use, modify, and distribute (including commercial use)
- Redistributions must include proper attribution and the license copy
- Modified files must clearly indicate changes
- Provided AS IS, without warranties or liability
Acknowledgements:
I would like to express my sincere gratitude to my mentor, for their invaluable guidance, continuous support, and constructive feedback throughout the development of MoodMate. Their insights played a pivotal role in refining the machine learning pipeline and shaping the final architecture of this project.
I also extend my thanks to Infosys Springboard for providing the platform, resources, and internship opportunity that allowed me to explore advanced AI/ML concepts and apply them in a real-world scenario.
Let's Connect
⭐ Star this repository if you enjoyed your vibe check!
Made with ❤️ and 🎵 by Ranit Deria