
ataschz/tanstack-start-mastra-example

AI-powered travel assistant built with Mastra, TanStack Start, and AI SDK. Features agent networks, real-time streaming, and dynamic UI for tool calls and reasoning.


Assistant - Mastra AI Chat

Example Project: This is a demo application showcasing how to integrate Mastra with TanStack Start. It demonstrates best practices for building AI-powered applications with agent networks, real-time streaming, and dynamic UI components.

A real-time AI travel assistant built with Mastra, TanStack Start, and AI SDK. Features agent networks, streaming responses, and dynamic UI rendering for tool calls, reasoning, and network execution.

About This Example

This project serves as a reference implementation for:

  • Integrating Mastra with TanStack Start - Full-stack TypeScript setup
  • Agent Networks - How to implement routing agents that delegate to specialized sub-agents
  • Real-time Streaming UI - Rendering different stream event types (text, tools, reasoning, network execution)
  • Thread Persistence - Managing conversation history with Mastra's memory system
  • AI SDK Integration - Using @ai-sdk/react with Mastra's backend

Use this as a starting point for building your own AI-powered applications with Mastra and TanStack Start.

Features

  • πŸ€– AI Agent Network - Routing agent delegates to specialized agents (weather, destinations)
  • πŸ” Web Search - Real-time web search powered by Perplexity Sonar with source citations
  • πŸ”„ Real-time Streaming - See AI responses, tool calls, and reasoning as they happen
  • πŸ’¬ Thread Persistence - Chat history saved to SQLite via Mastra
  • πŸ“ Auto-generated Titles - Thread titles automatically generated using Gemini Flash Lite
  • 🎨 Dynamic UI - Renders different types of stream events:
    • Text responses
    • Tool invocations (parameters & results)
    • Web search sources with citations
    • Network execution (agent routing decisions)
    • Model reasoning (chain of thought)

Prerequisites

  • Bun (all project scripts use bun)
  • A Google Gemini API key (required for the agents)
  • A Perplexity API key (optional, enables web search)

Getting Started

1. Install Dependencies

bun install

2. Configure Environment Variables

Create a .env file in the root directory:

GOOGLE_GENERATIVE_AI_API_KEY=your_gemini_api_key_here
PERPLEXITY_API_KEY=your_perplexity_api_key_here  # Optional: for web search

Important: A valid Gemini API key is required for the AI agents to work. The Perplexity API key is only needed for the web search feature.
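As a minimal sketch, the required/optional split above could be validated at startup like this (the loadEnv helper is hypothetical, not the repo's code; only the variable names come from the .env example):

```typescript
// Hypothetical startup check for the env vars listed above.
function loadEnv(env: Record<string, string | undefined>) {
  const googleKey = env.GOOGLE_GENERATIVE_AI_API_KEY;
  if (!googleKey) {
    // Without a Gemini key the agents cannot run at all.
    throw new Error('GOOGLE_GENERATIVE_AI_API_KEY is required');
  }
  return {
    googleKey,
    // Perplexity is optional: only the web-search tool needs it.
    perplexityKey: env.PERPLEXITY_API_KEY ?? null,
  };
}
```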

3. Run Development Server

bun run dev

This will start:

  • Mastra Backend on http://localhost:4111
  • Frontend on http://localhost:3000

4. Open the App

Navigate to http://localhost:3000 and start chatting!

Architecture: Stream System Flow

flowchart TD
    subgraph Frontend["Frontend (TanStack Start + React)"]
        User[πŸ‘€ User types message]
        
        subgraph ReactHooks["React Hooks Layer"]
            UseChat["useChat() hook
            πŸ“¦ @ai-sdk/react"]
            UseMastraClient["useMastraClient()
            πŸ“¦ @mastra/react"]
            UseQuery["useQuery()
            πŸ“¦ @tanstack/react-query"]
        end
        
        subgraph Transport["Transport Layer"]
            DefaultTransport["DefaultChatTransport
            πŸ“¦ ai
            POST β†’ http://localhost:4111/chat
            body: {threadId, resourceId}"]
        end
        
        subgraph State["State Management"]
            Messages["messages[] state
            Updated real-time"]
            Status["status: streaming/ready"]
        end
    end
    
    subgraph Backend["Mastra Backend (localhost:4111)"]
        subgraph Server["Mastra Server"]
            NetworkRoute["networkRoute()
            πŸ“¦ @mastra/ai-sdk
            path: '/chat'
            agent: 'routingAgent'"]
        end
        
        subgraph AgentNetwork["Agent Network"]
            RoutingAgent["Routing Agent
            Analyzes request"]
            WeatherAgent["Weather Agent
            Gets weather data"]
            DestAgent["Destinations Agent
            Gets travel info"]
        end
        
        subgraph Storage["Persistence"]
            LibSQL["LibSQLStore
            πŸ“¦ @mastra/libsql
            file: ./mastra.db"]
        end
    end
    
    subgraph StreamEvents["Stream Events (SSE/Streaming)"]
        TextChunk["text chunks
            type: 'text'"]
        ToolCall["tool calls
            type: 'tool-{toolName}'
            states: input-available,
                    output-available"]
        NetworkData["network execution
            type: 'data-network'
            steps[], status, task"]
        ReasoningData["reasoning
            type: 'reasoning'
            AI thinking process"]
    end
    
    subgraph Processing["Frontend Processing"]
        ToAISdk["toAISdkV5Messages()
        πŸ“¦ @mastra/ai-sdk/ui
        Converts Mastra β†’ AI SDK format"]
        
        ResolveMsg["resolveInitialMessages()
        Resolves network messages
        from memory storage"]
        
        FilterMsg["filterDisplayableMessages()
        Custom filter function
        Removes: completion checks,
                 network JSON,
                 empty messages,
                 reasoning (history only)"]
        
        RenderPart["MessagePartRenderer
        Switch by part.type:
        - text β†’ MessageResponse
        - reasoning β†’ Reasoning
        - data-network β†’ NetworkExecution
        - tool-* β†’ Tool"]
    end
    
    subgraph UIComponents["UI Components (ai-elements)"]
        MessageComp["Message
        MessageContent
        MessageResponse"]
        NetworkExec["NetworkExecution
        Shows agent routing,
        steps, decisions"]
        ToolComp["Tool
        ToolHeader, ToolInput,
        ToolOutput"]
        ReasoningComp["Reasoning
        ReasoningTrigger,
        ReasoningContent"]
    end
    
    User -->|types message| UseChat
    UseChat -->|sends via| DefaultTransport
    DefaultTransport -->|HTTP POST| NetworkRoute
    
    NetworkRoute -->|executes| RoutingAgent
    RoutingAgent -->|delegates to| WeatherAgent
    RoutingAgent -->|or delegates to| DestAgent
    
    WeatherAgent -->|streams back| TextChunk
    WeatherAgent -->|streams back| ToolCall
    RoutingAgent -->|streams back| NetworkData
    RoutingAgent -->|streams back| ReasoningData
    
    TextChunk -->|received by| UseChat
    ToolCall -->|received by| UseChat
    NetworkData -->|received by| UseChat
    ReasoningData -->|received by| UseChat
    
    UseChat -->|updates| Messages
    UseChat -->|updates| Status
    
    Messages -->|each part| RenderPart
    
    RenderPart -->|text| MessageComp
    RenderPart -->|data-network| NetworkExec
    RenderPart -->|tool-*| ToolComp
    RenderPart -->|reasoning| ReasoningComp
    
    NetworkRoute -.->|persists to| LibSQL
    
    subgraph HistoryLoad["Load History (Initial Load)"]
        UseQuery -->|calls| UseMastraClient
        UseMastraClient -->|listThreadMessages| LibSQL
        LibSQL -->|returns| ToAISdk
        ToAISdk -->|converts| ResolveMsg
        ResolveMsg -->|resolves| FilterMsg
        FilterMsg -->|setMessages| Messages
    end
    
    style Frontend fill:#e1f5ff
    style Backend fill:#fff4e1
    style StreamEvents fill:#f0e1ff
    style Processing fill:#e1ffe1
    style UIComponents fill:#ffe1e1
    style HistoryLoad fill:#f5f5f5

How It Works

πŸ“€ Sending Messages (Streaming)

  1. User types message β†’ useChat() hook (@ai-sdk/react)
  2. DefaultChatTransport β†’ POST to http://localhost:4111/chat
  3. Mastra backend receives via networkRoute() (@mastra/ai-sdk)
  4. routingAgent analyzes and delegates to sub-agents or tools
  5. Real-time stream events:
    • text chunks
    • tool-* invocations (including web-search with sources)
    • data-network agent execution
    • reasoning model thoughts
  6. Frontend dynamically renders each part
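The accumulation in steps 5–6 can be sketched as a simplified reducer: consecutive text chunks merge into one part, while other event types append as new parts. This is a hand-rolled illustration of what useChat manages internally, not the AI SDK's actual implementation:

```typescript
// Simplified shape of a streamed message part (illustrative only).
type StreamPart = { type: string; text?: string };

// Fold one incoming stream event into the assistant message's parts.
function appendPart(parts: StreamPart[], incoming: StreamPart): StreamPart[] {
  const last = parts[parts.length - 1];
  // Consecutive text chunks are merged into a single text part.
  if (incoming.type === 'text' && last?.type === 'text') {
    return [
      ...parts.slice(0, -1),
      { type: 'text', text: (last.text ?? '') + (incoming.text ?? '') },
    ];
  }
  // Tool calls, network data, reasoning, etc. append as new parts.
  return [...parts, incoming];
}
```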

πŸ“₯ Loading History (Initial Load)

  1. useQuery() + useMastraClient() β†’ listThreadMessages()
  2. toAISdkV5Messages() converts Mastra format β†’ AI SDK format
  3. resolveInitialMessages() resolves network execution data from memory (handles both agent and tool-based networks)
  4. filterDisplayableMessages() removes internal system messages and reasoning from history (smart deduplication for agent vs tool networks)
  5. setMessages() sets chat history
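Step 4 can be illustrated with a simplified filter; the types and logic below are assumptions for illustration, while the real implementation lives in src/lib/filter-displayable-messages.ts:

```typescript
// Simplified message shapes (the real AI SDK types are richer).
type UIMessagePart = { type: string; text?: string };
type UIMessage = { id: string; role: 'user' | 'assistant'; parts: UIMessagePart[] };

// Illustrative version of the history filter described above.
function filterDisplayableMessages(messages: UIMessage[]): UIMessage[] {
  return messages
    // Reasoning is only shown while streaming, never from history.
    .map((m) => ({ ...m, parts: m.parts.filter((p) => p.type !== 'reasoning') }))
    // Drop messages left with nothing displayable (e.g. empty text).
    .filter((m) =>
      m.parts.some((p) => p.type !== 'text' || (p.text ?? '').trim().length > 0),
    );
}
```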

🎨 Rendering

MessagePartRenderer component switches on part.type:

  • text β†’ <MessageResponse>
  • data-network β†’ <NetworkExecution> (shows routing decisions)
  • tool-web-search β†’ <Sources> (web search results with citations)
  • tool-* β†’ <Tool> (parameters and results for other tools)
  • dynamic-tool β†’ <Sources> or <Tool> (history: web-search shows sources, others show tool UI)
  • reasoning β†’ <Reasoning> (model thoughts, only during streaming)

πŸ”§ Adding Custom Tool UIs

You can register custom UI components for your tools:

import { toolUIRegistry } from '@/components/chat/renderers';

toolUIRegistry.register({
    toolIds: ['my-tool-id'],        // Tool ID(s) from Mastra
    Component: MyToolCard,           // Your React component
    isValidOutput: isMyToolData,     // Type guard function
});

The component will automatically render in streaming and history contexts.

See .agent/skills/tool-ui/SKILL.md for full documentation.
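For intuition, a registry like this could be implemented roughly as follows (types and method names here are assumptions; see the repo for the real toolUIRegistry):

```typescript
// Illustrative shape of a registry entry, mirroring register() above.
type ToolUIEntry = {
  toolIds: string[];                       // Tool ID(s) from Mastra
  Component: unknown;                      // React component in the real app
  isValidOutput: (data: unknown) => boolean; // Type guard for tool output
};

// Minimal sketch of a tool-UI registry: register entries, look them up by ID.
class ToolUIRegistry {
  private entries: ToolUIEntry[] = [];

  register(entry: ToolUIEntry): void {
    this.entries.push(entry);
  }

  // Find the entry whose toolIds include the given tool ID, if any.
  lookup(toolId: string): ToolUIEntry | undefined {
    return this.entries.find((e) => e.toolIds.includes(toolId));
  }
}
```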

Project Structure

src/
β”œβ”€β”€ components/
β”‚   β”œβ”€β”€ ai-elements/        # Reusable AI UI components
β”‚   β”‚   β”œβ”€β”€ network-execution.tsx  # Agent network visualization
β”‚   β”‚   β”œβ”€β”€ tool.tsx              # Tool call display
β”‚   β”‚   β”œβ”€β”€ reasoning.tsx         # Model reasoning display
β”‚   β”‚   └── ...
β”‚   β”œβ”€β”€ chat/               # Chat-specific components
β”‚   β”‚   β”œβ”€β”€ chat-empty-state.tsx  # Empty state UI
β”‚   β”‚   β”œβ”€β”€ chat-input.tsx        # Message input with actions
β”‚   β”‚   β”œβ”€β”€ chat-layout.tsx       # Chat page layout wrapper
β”‚   β”‚   β”œβ”€β”€ message-part-renderer.tsx  # Renders message parts by type
β”‚   β”‚   └── index.ts              # Barrel exports
β”‚   └── ui/                 # shadcn/ui components
β”œβ”€β”€ hooks/
β”‚   β”œβ”€β”€ use-chat-navigation.ts    # Navigate to chat with initial message
β”‚   β”œβ”€β”€ use-delete-thread.ts      # Delete thread mutation
β”‚   β”œβ”€β”€ use-invalidate-threads.ts # Invalidate threads query
β”‚   β”œβ”€β”€ use-thread-messages.ts    # Fetch thread messages
β”‚   └── use-threads.ts            # Fetch all threads
β”œβ”€β”€ lib/
β”‚   β”œβ”€β”€ chat-utils.ts             # Chat utility functions
β”‚   β”œβ”€β”€ constants.ts              # Environment variables
β”‚   β”œβ”€β”€ filter-displayable-messages.ts  # Filter system messages
β”‚   β”œβ”€β”€ mastra-queries.ts         # Centralized query options & keys
β”‚   β”œβ”€β”€ resolve-initial-messages.ts     # Resolve network messages from memory
β”‚   └── utils.ts                  # General utilities
β”œβ”€β”€ mastra/
β”‚   β”œβ”€β”€ agents/             # AI agents
β”‚   β”‚   β”œβ”€β”€ routing-agent.ts      # Main routing logic
β”‚   β”‚   β”œβ”€β”€ weather-agent.ts      # Weather queries
β”‚   β”‚   └── destinations-agent.ts # Travel recommendations
β”‚   β”œβ”€β”€ tools/              # Mastra tools
β”‚   β”‚   └── web-search-tool.ts    # Web search via Perplexity Sonar
β”‚   β”œβ”€β”€ workflows/          # Mastra workflows
β”‚   β”œβ”€β”€ memory.ts           # Memory configuration with title generation
β”‚   └── index.ts            # Mastra configuration
└── routes/
    β”œβ”€β”€ index.tsx           # Home page
    └── chat.$threadId.tsx  # Chat page with thread support

Building for Production

bun run build

Linting & Formatting

This project uses Biome:

bun run lint      # Check for issues
bun run format    # Format code
bun run check     # Lint + format

Tech Stack

  • TanStack Start - Full-stack React framework
  • Mastra - AI agent framework (agents, tools, memory)
  • AI SDK - Streaming chat via @ai-sdk/react
  • LibSQL (SQLite) - Thread persistence
  • Biome - Linting & formatting
  • Bun - Runtime & package manager

πŸ”§ Development Tools

AI SDK DevTools

For debugging LLM interactions during development:

  1. Install: bun add -d @ai-sdk/devtools
  2. Run: bun dev starts the app and DevTools automatically
  3. Inspect: Open http://localhost:4983 to inspect AI SDK calls

DevTools automatically captures:

  • ✅ All generateText and streamText calls
  • ✅ Prompts sent to models
  • ✅ Responses received
  • ✅ Tool invocations
  • ✅ Multi-step interactions (agent routing, network execution)
  • ✅ Token usage and timing

Note: DevTools stores its data locally in the .devtools/ directory (gitignored). It is only active in development mode.

Disabling DevTools: To temporarily disable DevTools, use:

AI_SDK_DEVTOOLS_ENABLED=false bun dev

Learn More

Languages

TypeScript 96.1% · Shell 2.7% · CSS 1.1%
Created January 15, 2026
Updated February 14, 2026