Aiden-Jeon/databricks-mlflow-demo

MLflow 3 GenAI Demo

A comprehensive demonstration of MLflow 3's GenAI capabilities for observability: evaluating, monitoring, and improving GenAI application quality. This interactive demo showcases a sales email generation use case with end-to-end quality assessment workflows.

This interactive demo is deployed as a Databricks app in your Databricks workspace. A guided UI experience is accompanied by notebooks that walk through the end-to-end workflow of evaluating quality, iterating to improve it, and monitoring it in production.

Learn more about MLflow 3:

Installing the demo

Choose your installation method:

Option A: Automated Setup

Estimated time: 2 minutes of user input + 15 minutes of waiting for scripts to run

The automated setup handles resource creation, configuration, and deployment for you using the Databricks Workspace SDK.

Prerequisites

  • Databricks workspace access - Create one here if needed
  • Install Python >=3.10.16
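The Python floor above can be checked programmatically before running any setup script; a minimal sketch (the floor tuple `(3, 10, 16)` comes from this README, the helper name is illustrative):

```python
# Sketch: verify the local interpreter meets the README's Python floor
# (>= 3.10.16) before running the setup scripts.
import sys

def meets_floor(version_info, floor=(3, 10, 16)):
    """Return True if the interpreter version is at least the floor."""
    return tuple(version_info[:3]) >= floor

if __name__ == "__main__":
    status = "ok" if meets_floor(sys.version_info) else "too old"
    print(f"Python {sys.version.split()[0]}: {status}")
```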

Run Automated Setup

The ./auto-setup.sh script will run all the steps outlined in the Manual Setup workflow.

  • 1. Install the Databricks CLI >= 0.262.0

    • Follow the installation guide
    • Verify installation: Run databricks --version to confirm it's installed
  • 2. Install Python >= 3.10.16

  • 3. Authenticate with your workspace

    • Run databricks auth login and follow the prompts
    • Configure a profile named DEFAULT
  • 4. Clone repo and run setup script

    git clone https://github.com/databricks-solutions/mlflow-demo.git
    cd mlflow-demo
    ./auto-setup.sh

🔧 Option B: Manual Setup

Estimated time: 10 minutes of work + 15 minutes of waiting for scripts to run

For step-by-step manual installation instructions, see MANUAL_SETUP.md.

The manual setup includes:

  • Phase 1: Prerequisites setup (workspace, app creation, MLflow experiment, etc.)
  • Phase 2: Local installation and testing
  • Phase 3: Deployment and permission configuration

MLflow 3 overview

MLflow 3.0 has been redesigned for the GenAI era. If your team is building GenAI-powered apps, this update makes it dramatically easier to evaluate, monitor, and improve them in production.

Key capabilities

  • ๐Ÿ” GenAI Observability at Scale: Monitor & debug GenAI apps anywhere - deployed on Databricks or ANY cloud - with production-scale real-time tracing and enhanced UIs. Link
  • ๐Ÿ“Š Revamped GenAI Evaluation: Evaluate app quality using a brand-new SDK, simpler evaluation interface and a refreshed UI. Link
  • โš™๏ธ Customizable Evaluation: Tailor AI judges or custom metrics to your use case. Link
  • ๐Ÿ‘€ Monitoring: Schedule automatic quality evaluations (beta). Link
  • ๐Ÿงช Leverage Production Logs to Improve Quality: Turn real user traces into curated, versioned evaluation datasets to continuously improve app performance . Link
  • ๐Ÿ“ Close the Loop with Feedback: Capture end-user feedback from your appโ€™s UI. Link
  • ๐Ÿ‘ฅ Domain Expert Labeling: Send traces to human experts for ground truth or target output labeling. Link
  • ๐Ÿ“ Prompt Management: Prompt Registry for versioning. Link
  • ๐Ÿงฉ App Version Tracking: Link app versions to quality evaluations. Link

Created October 3, 2025
Updated November 24, 2025