# tinker-agent

An intelligent agent for Anthropic's Tinker fine-tuning platform. It automates the creation of fine-tuning datasets and configuration, with both interactive and programmatic interfaces.
## Installation

```bash
pip install tinker-agent
```

Or with uv:

```bash
uv pip install tinker-agent
```

## Quick Start
### Interactive Mode

Simply run the command without arguments for an interactive setup:

```bash
tinker-agent
```

This will guide you through:
- Environment setup - Configure API keys on first run
- Task selection - Choose between SFT, RL, or CPT
- Model selection - Pick from available models
- Dataset configuration - Specify your training data (see below)
### Non-Interactive Mode

Pass configuration directly via command-line arguments:

```bash
tinker-agent \
  config.dataset=HuggingFaceFW/fineweb \
  config.task_type=sft \
  config.model=Qwen/Qwen3-8B
```

## Dataset Options
You can train on either a HuggingFace dataset or a local directory:
```bash
# HuggingFace dataset (org/dataset-name format)
tinker-agent config.dataset=ServiceNow-AI/R1-Distill-SFT config.task_type=sft

# Local directory (e.g., your Obsidian vault, markdown notes, or custom data)
tinker-agent config.dataset=~/Documents/my-obsidian-vault config.task_type=sft
tinker-agent config.dataset=/path/to/training-data config.task_type=sft
```

Local directories are mounted as read-only: the agent can read your data but won't modify it. Supported file formats include `.json`, `.jsonl`, `.parquet`, `.csv`, `.txt`, and `.md`.
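For local directories, a minimal dataset can be a single `.jsonl` file. The sketch below writes one; note that the record schema (`prompt`/`completion` fields) is an assumption for illustration, not a documented tinker-agent format — use whatever schema your task expects:

```python
import json
from pathlib import Path

# Hypothetical local training directory; the record fields below are
# illustrative assumptions, not a documented tinker-agent schema.
data_dir = Path("training-data")
data_dir.mkdir(exist_ok=True)

records = [
    {"prompt": "What is fine-tuning?",
     "completion": "Adapting a pretrained model to a specific task."},
    {"prompt": "Name one task type tinker-agent supports.",
     "completion": "Supervised fine-tuning (sft)."},
]

# Write one JSON object per line (.jsonl), a supported file format.
with open(data_dir / "train.jsonl", "w") as f:
    for rec in records:
        f.write(json.dumps(rec) + "\n")
```

You could then point the agent at it with `tinker-agent config.dataset=./training-data config.task_type=sft`.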
## Configuration

### Environment Setup

Run the setup command to configure your environment:

```bash
tinker-agent setup
```

This creates a `.env` file with:
- `TINKER_API_KEY` - Your Tinker API key (Get it here)
- `WANDB_API_KEY` - Weights & Biases API key for tracking (Get it here)
- `WANDB_PROJECT` - W&B project name for organizing experiments
Alternatively, set these as environment variables before running.
### Task Types

- `sft` - Supervised Fine-Tuning (instruction-response pairs)
- `rl` - Reinforcement Learning (reward-based training)
- `cpt` - Continued Pre-Training (raw text data)
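As a rough illustration of what each task type consumes (the field names here are assumptions for the sake of example, not a documented schema):

```python
# Illustrative (assumed) record shapes per task type. The exact schema
# Tinker expects may differ; this only contrasts the three data styles.
examples = {
    # sft: instruction-response pairs
    "sft": {"prompt": "Translate 'hello' to French.", "completion": "bonjour"},
    # rl: responses paired with a reward signal
    "rl": {"prompt": "Write a haiku.", "completion": "...", "reward": 0.8},
    # cpt: raw unlabeled text
    "cpt": {"text": "Raw text for continued pre-training."},
}

for task, record in examples.items():
    print(task, "->", sorted(record))
```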
## Available Models

| Model | Type | Size |
|---|---|---|
| Qwen/Qwen3-VL-235B-A22B-Instruct | Vision | Large |
| Qwen/Qwen3-VL-30B-A3B-Instruct | Vision | Medium |
| Qwen/Qwen3-235B-A22B-Instruct-2507 | Instruction | Large |
| Qwen/Qwen3-30B-A3B-Instruct-2507 | Instruction | Medium |
| Qwen/Qwen3-30B-A3B | Hybrid | Medium |
| Qwen/Qwen3-30B-A3B-Base | Base | Medium |
| Qwen/Qwen3-32B | Hybrid | Medium |
| Qwen/Qwen3-8B | Hybrid | Small |
| Qwen/Qwen3-8B-Base | Base | Small |
| Qwen/Qwen3-4B-Instruct-2507 | Instruction | Compact |
| openai/gpt-oss-120b | Reasoning | Medium |
| openai/gpt-oss-20b | Reasoning | Small |
| deepseek-ai/DeepSeek-V3.1 | Hybrid | Large |
| deepseek-ai/DeepSeek-V3.1-Base | Base | Large |
| meta-llama/Llama-3.1-70B | Base | Large |
| meta-llama/Llama-3.3-70B-Instruct | Instruction | Large |
| meta-llama/Llama-3.1-8B | Base | Small |
| meta-llama/Llama-3.1-8B-Instruct | Instruction | Small |
| meta-llama/Llama-3.2-3B | Base | Compact |
| meta-llama/Llama-3.2-1B | Base | Compact |
| moonshotai/Kimi-K2-Thinking | Reasoning | Large |
## Usage Examples

### Interactive Mode Example

```text
$ tinker-agent
╭──────────────────────────────────────────╮
│ tinker-agent                             │
│ Fine-tuning configuration                │
╰──────────────────────────────────────────╯
Select task type:
  1  sft  Supervised Fine-Tuning
  2  rl   Reinforcement Learning
  3  cpt  Continued Pre-Training
Choice [1/2/3/sft/rl/cpt]: 1

Select model:
  Key  Model                               Type         Size
  1    Qwen/Qwen3-VL-235B-A22B-Instruct    Vision       Large
  2    Qwen/Qwen3-VL-30B-A3B-Instruct      Vision       Medium
  3    Qwen/Qwen3-235B-A22B-Instruct-2507  Instruction  Large
  ...
Choice (number or model name) [1]: 8

HuggingFace dataset: HuggingFaceFW/fineweb
```

### Non-Interactive Example
```bash
# Basic usage
tinker-agent config.dataset=my-org/my-dataset config.task_type=sft

# With all options
tinker-agent \
  config.dataset=HuggingFaceFW/fineweb \
  config.task_type=sft \
  config.model=meta-llama/Llama-3.3-70B-Instruct
```

### Environment Variables
```bash
# Set via environment
export TINKER_API_KEY="your-api-key"
export WANDB_API_KEY="your-wandb-key"
export WANDB_PROJECT="my-finetuning-project"

# Run with config
tinker-agent config.dataset=my-dataset config.task_type=rl
```

## Features
- ✅ Interactive CLI - Beautiful rich terminal UI for configuration
- ✅ Non-interactive mode - Scriptable with command-line arguments
- ✅ Flexible datasets - Use HuggingFace datasets or local directories (Obsidian vaults, markdown notes, etc.)
- ✅ Model selection - Choose from available models
- ✅ Environment management - Simple .env-based configuration
- ✅ Sandboxed execution - Agent runs in isolated directory with path validation
- ✅ Trace viewer - Streamlit-based viewer for execution traces
## Sandboxing

The agent runs in a sandboxed environment with strict path validation:

- Root directory isolation - The agent can only access files within its working directory
- Path validation - Blocks access to `~`, `$HOME`, absolute paths, and `..` escapes
- No system access - Cannot read sensitive files like `/etc/passwd` or user home directories
This ensures the agent operates safely without requiring Docker, making it more scalable for production use.
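The path checks described above can be sketched roughly as follows. This is an illustrative re-implementation of the idea, not the agent's actual code (the function name `is_allowed` is hypothetical):

```python
from pathlib import Path

def is_allowed(root: Path, candidate: str) -> bool:
    """Illustrative sandbox check: reject home/absolute references and
    any path that resolves outside the sandbox root."""
    # Reject obvious escapes (~, $HOME, absolute paths) before resolving.
    if candidate.startswith(("~", "$HOME", "/")):
        return False
    resolved = (root / candidate).resolve()
    # Python 3.9+: ensure the resolved path stays under the root directory,
    # which also catches ".." traversal after resolution.
    return resolved.is_relative_to(root.resolve())

root = Path("workspace")
print(is_allowed(root, "data/train.jsonl"))  # True
print(is_allowed(root, "../etc/passwd"))     # False
print(is_allowed(root, "/etc/passwd"))       # False
```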
## Additional Commands

### View Execution Traces

```bash
tinker-viewer
```

Opens a Streamlit interface to view and analyze agent execution traces.
## Development

### Setup Development Environment

```bash
git clone https://github.com/anthropics/tinker-agent.git
cd tinker-agent
uv sync --extra dev
```

### Run Tests

```bash
uv run pytest
```

### Build Package

```bash
uv build
```

### Deploy to PyPI

```bash
python deploy.py
```

This will:
- Ask for confirmation
- Request your PyPI API token (create one here)
- Clean old builds
- Build the package
- Upload to PyPI
## License

MIT

## Contributing

Contributions welcome! Please open an issue or PR.