Assistant - Mastra AI Chat
Example Project: This is a demo application showcasing how to integrate Mastra with TanStack Start. It demonstrates best practices for building AI-powered applications with agent networks, real-time streaming, and dynamic UI components.
A real-time AI travel assistant built with Mastra, TanStack Start, and AI SDK. Features agent networks, streaming responses, and dynamic UI rendering for tool calls, reasoning, and network execution.
About This Example
This project serves as a reference implementation for:
- Integrating Mastra with TanStack Start - Full-stack TypeScript setup
- Agent Networks - How to implement routing agents that delegate to specialized sub-agents
- Real-time Streaming UI - Rendering different stream event types (text, tools, reasoning, network execution)
- Thread Persistence - Managing conversation history with Mastra's memory system
- AI SDK Integration - Using `@ai-sdk/react` with Mastra's backend
Use this as a starting point for building your own AI-powered applications with Mastra and TanStack Start.
Features
- 🤖 AI Agent Network - Routing agent delegates to specialized agents (weather, destinations)
- 🔍 Web Search - Real-time web search powered by Perplexity Sonar with source citations
- ⚡ Real-time Streaming - See AI responses, tool calls, and reasoning as they happen
- 💬 Thread Persistence - Chat history saved to SQLite via Mastra
- 📝 Auto-generated Titles - Thread titles automatically generated using Gemini Flash Lite
- 🎨 Dynamic UI - Renders different types of stream events:
- Text responses
- Tool invocations (parameters & results)
- Web search sources with citations
- Network execution (agent routing decisions)
- Model reasoning (chain of thought)
Prerequisites
- Bun installed
- Google Gemini API key (Get one here)
Getting Started
1. Install Dependencies
```
bun install
```
2. Configure Environment Variables
Create a .env file in the root directory:
```
GOOGLE_GENERATIVE_AI_API_KEY=your_gemini_api_key_here
PERPLEXITY_API_KEY=your_perplexity_api_key_here # Optional: for web search
```
Important: You must have a valid Gemini API key for the AI agents to work. The Perplexity API key is optional but required for web search functionality.
3. Run Development Server
```
bun run dev
```
This will start:
- Mastra Backend on http://localhost:4111
- Frontend on http://localhost:3000
4. Open the App
Navigate to http://localhost:3000 and start chatting!
Architecture: Stream System Flow
```mermaid
flowchart TD
subgraph Frontend["Frontend (TanStack Start + React)"]
User[👤 User types message]
subgraph ReactHooks["React Hooks Layer"]
UseChat["useChat() hook
📦 @ai-sdk/react"]
UseMastraClient["useMastraClient()
📦 @mastra/react"]
UseQuery["useQuery()
📦 @tanstack/react-query"]
end
subgraph Transport["Transport Layer"]
DefaultTransport["DefaultChatTransport
📦 ai
POST → http://localhost:4111/chat
body: {threadId, resourceId}"]
end
subgraph State["State Management"]
Messages["messages[] state
Updated real-time"]
Status["status: streaming/ready"]
end
end
subgraph Backend["Mastra Backend (localhost:4111)"]
subgraph Server["Mastra Server"]
NetworkRoute["networkRoute()
📦 @mastra/ai-sdk
path: '/chat'
agent: 'routingAgent'"]
end
subgraph AgentNetwork["Agent Network"]
RoutingAgent["Routing Agent
Analyzes request"]
WeatherAgent["Weather Agent
Gets weather data"]
DestAgent["Destinations Agent
Gets travel info"]
end
subgraph Storage["Persistence"]
LibSQL["LibSQLStore
📦 @mastra/libsql
file: ./mastra.db"]
end
end
subgraph StreamEvents["Stream Events (SSE/Streaming)"]
TextChunk["text chunks
type: 'text'"]
ToolCall["tool calls
type: 'tool-{toolName}'
states: input-available,
output-available"]
NetworkData["network execution
type: 'data-network'
steps[], status, task"]
ReasoningData["reasoning
type: 'reasoning'
AI thinking process"]
end
subgraph Processing["Frontend Processing"]
ToAISdk["toAISdkV5Messages()
📦 @mastra/ai-sdk/ui
Converts Mastra → AI SDK format"]
ResolveMsg["resolveInitialMessages()
Resolves network messages
from memory storage"]
FilterMsg["filterDisplayableMessages()
Custom filter function
Removes: completion checks,
network JSON,
empty messages,
reasoning (history only)"]
RenderPart["MessagePartRenderer
Switch by part.type:
- text → MessageResponse
- reasoning → Reasoning
- data-network → NetworkExecution
- tool-* → Tool"]
end
subgraph UIComponents["UI Components (ai-elements)"]
MessageComp["Message
MessageContent
MessageResponse"]
NetworkExec["NetworkExecution
Shows agent routing,
steps, decisions"]
ToolComp["Tool
ToolHeader, ToolInput,
ToolOutput"]
ReasoningComp["Reasoning
ReasoningTrigger,
ReasoningContent"]
end
User -->|types message| UseChat
UseChat -->|sends via| DefaultTransport
DefaultTransport -->|HTTP POST| NetworkRoute
NetworkRoute -->|executes| RoutingAgent
RoutingAgent -->|delegates to| WeatherAgent
RoutingAgent -->|or delegates to| DestAgent
WeatherAgent -->|streams back| TextChunk
WeatherAgent -->|streams back| ToolCall
RoutingAgent -->|streams back| NetworkData
RoutingAgent -->|streams back| ReasoningData
TextChunk -->|received by| UseChat
ToolCall -->|received by| UseChat
NetworkData -->|received by| UseChat
ReasoningData -->|received by| UseChat
UseChat -->|updates| Messages
UseChat -->|updates| Status
Messages -->|each part| RenderPart
RenderPart -->|text| MessageComp
RenderPart -->|data-network| NetworkExec
RenderPart -->|tool-*| ToolComp
RenderPart -->|reasoning| ReasoningComp
NetworkRoute -.->|persists to| LibSQL
subgraph HistoryLoad["Load History (Initial Load)"]
UseQuery -->|calls| UseMastraClient
UseMastraClient -->|listThreadMessages| LibSQL
LibSQL -->|returns| ToAISdk
ToAISdk -->|converts| ResolveMsg
ResolveMsg -->|resolves| FilterMsg
FilterMsg -->|setMessages| Messages
end
style Frontend fill:#e1f5ff
style Backend fill:#fff4e1
style StreamEvents fill:#f0e1ff
style Processing fill:#e1ffe1
style UIComponents fill:#ffe1e1
style HistoryLoad fill:#f5f5f5
```
How It Works
📤 Sending Messages (Streaming)
- User types message → `useChat()` hook (`@ai-sdk/react`)
- `DefaultChatTransport` → POST to `http://localhost:4111/chat`
- Mastra backend receives via `networkRoute()` (`@mastra/ai-sdk`)
- `routingAgent` analyzes and delegates to sub-agents or tools
- Real-time stream events: `text` chunks, `tool-*` invocations (including web search with sources), `data-network` agent execution, `reasoning` model thoughts
- Frontend dynamically renders each part
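The transport step above boils down to a plain HTTP POST. Here is a minimal sketch of the request shape, based on the architecture diagram (`buildChatRequest` is a hypothetical helper for illustration, not part of this project, and the exact body fields beyond `threadId`/`resourceId` are an assumption):

```typescript
// Hypothetical helper illustrating the POST that DefaultChatTransport
// issues to the Mastra backend, per the diagram above.
interface ChatRequest {
  url: string;
  body: {
    threadId: string;
    resourceId: string;
    messages: { role: "user" | "assistant"; content: string }[];
  };
}

function buildChatRequest(
  threadId: string,
  resourceId: string,
  messages: ChatRequest["body"]["messages"],
): ChatRequest {
  return {
    url: "http://localhost:4111/chat",
    body: { threadId, resourceId, messages },
  };
}

// Example: the payload for a new user message
const req = buildChatRequest("thread-1", "user-1", [
  { role: "user", content: "What's the weather in Lisbon?" },
]);
```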
📥 Loading History (Initial Load)
- `useQuery()` + `useMastraClient()` → `listThreadMessages()`
- `toAISdkV5Messages()` converts Mastra format → AI SDK format
- `resolveInitialMessages()` resolves network execution data from memory (handles both agent and tool-based networks)
- `filterDisplayableMessages()` removes internal system messages and reasoning from history (smart deduplication for agent vs tool networks)
- `setMessages()` sets chat history
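A minimal sketch of what the history-filtering step does, assuming a simplified message shape (the types and `filterDisplayable` function below are illustrative only; the project's real `filterDisplayableMessages` also handles completion checks and agent- vs tool-network deduplication):

```typescript
// Simplified message shapes, assumed for illustration.
interface Part {
  type: string; // e.g. "text" | "reasoning" | "data-network"
  text?: string;
}
interface Msg {
  role: "user" | "assistant";
  parts: Part[];
}

// Drop reasoning parts from history and discard messages left empty,
// mirroring the filtering step described above.
function filterDisplayable(messages: Msg[]): Msg[] {
  return messages
    .map((m) => ({
      ...m,
      parts: m.parts.filter((p) => p.type !== "reasoning"),
    }))
    .filter((m) =>
      m.parts.some((p) => p.type !== "text" || (p.text ?? "").trim() !== ""),
    );
}

const history: Msg[] = [
  { role: "assistant", parts: [{ type: "reasoning", text: "thinking..." }] },
  { role: "assistant", parts: [{ type: "text", text: "It is sunny." }] },
];
const visible = filterDisplayable(history);
// visible keeps only the text message; the reasoning-only one is dropped
```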
🎨 Rendering
`MessagePartRenderer` component switches on `part.type`:
- text → `<MessageResponse>`
- data-network → `<NetworkExecution>` (shows routing decisions)
- tool-web-search → `<Sources>` (web search results with citations)
- tool-* → `<Tool>` (parameters and results for other tools)
- dynamic-tool → `<Sources>` or `<Tool>` (history: web-search shows sources, others show tool UI)
- reasoning → `<Reasoning>` (model thoughts, only during streaming)
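The dispatch above can be sketched as a pure classifier over `part.type` (the `componentFor` helper is hypothetical; component names are the ones listed above, returned as strings for illustration):

```typescript
// Maps a stream part type to the name of the component that renders it,
// following the switch described above. Illustrative helper only.
function componentFor(partType: string): string {
  if (partType === "text") return "MessageResponse";
  if (partType === "reasoning") return "Reasoning";
  if (partType === "data-network") return "NetworkExecution";
  if (partType === "tool-web-search") return "Sources"; // web search gets its own UI
  if (partType.startsWith("tool-")) return "Tool"; // all other tool calls
  return "Unknown";
}
```

Putting the specific `tool-web-search` check before the generic `tool-` prefix match is what lets one tool opt into a custom UI while the rest fall through to the generic `<Tool>` display.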
🔧 Adding Custom Tool UIs
You can register custom UI components for your tools:
```typescript
import { toolUIRegistry } from '@/components/chat/renderers';

toolUIRegistry.register({
  toolIds: ['my-tool-id'],     // Tool ID(s) from Mastra
  Component: MyToolCard,       // Your React component
  isValidOutput: isMyToolData, // Type guard function
});
```
The component will automatically render in streaming and history contexts.
See .agent/skills/tool-ui/SKILL.md for full documentation.
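Under stated assumptions, the registry pattern behind this API can be sketched as follows. This is a guess at the shape, not the project's actual implementation in `@/components/chat/renderers` (a string stands in for the React component, and `ToolUIRegistry`/`lookup` are hypothetical names):

```typescript
// A minimal tool-UI registry sketch: components register per tool ID,
// and the renderer looks them up when a tool part streams in.
type ToolUIEntry = {
  toolIds: string[];
  component: string; // stand-in for a React component
  isValidOutput?: (output: unknown) => boolean;
};

class ToolUIRegistry {
  private entries: ToolUIEntry[] = [];

  register(entry: ToolUIEntry): void {
    this.entries.push(entry);
  }

  // Returns the registered component for a tool ID, if any;
  // the renderer falls back to the generic Tool UI on undefined.
  lookup(toolId: string): string | undefined {
    return this.entries.find((e) => e.toolIds.includes(toolId))?.component;
  }
}

const registry = new ToolUIRegistry();
registry.register({ toolIds: ["my-tool-id"], component: "MyToolCard" });
```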
Project Structure
```
src/
├── components/
│   ├── ai-elements/                    # Reusable AI UI components
│   │   ├── network-execution.tsx       # Agent network visualization
│   │   ├── tool.tsx                    # Tool call display
│   │   ├── reasoning.tsx               # Model reasoning display
│   │   └── ...
│   ├── chat/                           # Chat-specific components
│   │   ├── chat-empty-state.tsx        # Empty state UI
│   │   ├── chat-input.tsx              # Message input with actions
│   │   ├── chat-layout.tsx             # Chat page layout wrapper
│   │   ├── message-part-renderer.tsx   # Renders message parts by type
│   │   └── index.ts                    # Barrel exports
│   └── ui/                             # shadcn/ui components
├── hooks/
│   ├── use-chat-navigation.ts          # Navigate to chat with initial message
│   ├── use-delete-thread.ts            # Delete thread mutation
│   ├── use-invalidate-threads.ts       # Invalidate threads query
│   ├── use-thread-messages.ts          # Fetch thread messages
│   └── use-threads.ts                  # Fetch all threads
├── lib/
│   ├── chat-utils.ts                   # Chat utility functions
│   ├── constants.ts                    # Environment variables
│   ├── filter-displayable-messages.ts  # Filter system messages
│   ├── mastra-queries.ts               # Centralized query options & keys
│   ├── resolve-initial-messages.ts     # Resolve network messages from memory
│   └── utils.ts                        # General utilities
├── mastra/
│   ├── agents/                         # AI agents
│   │   ├── routing-agent.ts            # Main routing logic
│   │   ├── weather-agent.ts            # Weather queries
│   │   └── destinations-agent.ts       # Travel recommendations
│   ├── tools/                          # Mastra tools
│   │   └── web-search-tool.ts          # Web search via Perplexity Sonar
│   ├── workflows/                      # Mastra workflows
│   ├── memory.ts                       # Memory configuration with title generation
│   └── index.ts                        # Mastra configuration
└── routes/
    ├── index.tsx                       # Home page
    └── chat.$threadId.tsx              # Chat page with thread support
```
Building for Production
```
bun run build
```
Linting & Formatting
This project uses Biome:
```
bun run lint   # Check for issues
bun run format # Format code
bun run check  # Lint + format
```
Tech Stack
- Frontend Framework: TanStack Start
- AI Framework: Mastra
- AI SDK: @ai-sdk/react
- State Management: TanStack Query
- Styling: Tailwind CSS + shadcn/ui
- AI Models:
- Google Gemini 3 Flash Preview (main agent)
- Google Gemini 2.5 Flash Lite (title generation)
- Perplexity Sonar (web search)
- Database: SQLite (via @mastra/libsql)
🔧 Development Tools
AI SDK DevTools
For debugging LLM interactions during development:
- Installation: `bun add -d @ai-sdk/devtools`
- Usage: run `bun dev` to start the app and DevTools automatically
- Inspect: open http://localhost:4983 to view AI SDK calls
DevTools automatically captures:
- ✅ All `generateText` and `streamText` calls
- ✅ Prompts sent to models
- ✅ Responses received
- ✅ Tool invocations
- ✅ Multi-step interactions (agent routing, network execution)
- ✅ Token usage and timing
Note: DevTools stores its data locally in the .devtools/ directory (gitignored). It is only active in development mode.
Disabling DevTools: to turn off DevTools temporarily, run:
```
AI_SDK_DEVTOOLS_ENABLED=false bun dev
```