# Pollux

Multimodal orchestration for LLM APIs.

You describe what to analyze. Pollux handles source patterns, context caching, deferred delivery, and multimodal content.
Documentation · Getting Started · Building With Deferred Delivery
## Quick Start

```python
import asyncio
from pollux import Config, Source, run

result = asyncio.run(
    run(
        "What are the key findings and their implications?",
        source=Source.from_file("earnings-report.pdf"),
        config=Config(provider="gemini", model="gemini-2.5-flash-lite"),
    )
)
print(result["answers"][0])
# Revenue grew 18% YoY to $4.2B, driven by cloud services. Operating
# margins improved from 29% to 34%. Management's $2B buyback and raised
# guidance signal confidence in sustained growth.
```

`run()` returns a `ResultEnvelope`: `answers` holds one entry per prompt.
To use OpenAI instead: `Config(provider="openai", model="gpt-5-nano")`.
For Anthropic: `Config(provider="anthropic", model="claude-haiku-4-5")`.
For OpenRouter: `Config(provider="openrouter", model="google/gemma-3-27b-it:free")`.

For a full walkthrough (install, key setup, first result), see Getting Started.
## Which Entry Point Should I Use?
| If you want to... | Use |
|---|---|
| Ask one prompt and get an answer now | `run()` |
| Ask many prompts against shared source(s) | `run_many()` |
| Submit non-urgent work and collect it later | `defer()` / `defer_many()` |
Pollux keeps realtime and deferred work on separate entry points. If the result
can wait, submit it once, persist the handle, and collect the same
ResultEnvelope later.
## What Pollux Handles
Say you have a document and ten questions about it. Each API call re-uploads the file, and you're left managing caching, retries, and concurrency yourself. Pollux uploads once, caches the content, fans out your prompts concurrently, and hands back results.
The same Source interface handles PDFs, images, video, YouTube URLs, and arXiv papers. No per-format upload code.
Gemini-specific video clipping and FPS controls are available via `Source.with_gemini_video_settings(...)`; see the sending-content docs for the intended scope.
Need structured output? Pass a Pydantic model as `response_schema` and get a validated instance alongside the raw text. Switching providers is a one-line change: `provider="gemini"` to `provider="openai"`.
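The idea behind `response_schema` — parse the model's JSON reply and reject it unless every required field is present — can be sketched with only the standard library. This is an illustrative stand-in, not the Pollux implementation (Pollux uses Pydantic; `EarningsSummary` and `parse_structured` are hypothetical names):

```python
import json
from dataclasses import dataclass, fields

@dataclass
class EarningsSummary:
    revenue_growth: str
    margin_change: str

def parse_structured(raw: str, cls):
    # Check the JSON reply carries every required field before
    # constructing a typed instance; extra keys are dropped.
    data = json.loads(raw)
    required = {f.name for f in fields(cls)}
    missing = required - data.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    return cls(**{k: v for k, v in data.items() if k in required})

reply = '{"revenue_growth": "18% YoY", "margin_change": "+5pp"}'
summary = parse_structured(reply, EarningsSummary)
```

With a real schema library you also get type coercion and nested models; the sketch only shows the presence check.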
### One Upload, Many Prompts
Got three questions about the same paper? `run_many()` fans them out concurrently:

```python
import asyncio
from pollux import Config, Source, run_many

envelope = asyncio.run(
    run_many(
        ["Summarize the methodology.", "List key findings.", "Identify limitations."],
        sources=[Source.from_file("paper.pdf")],
        config=Config(provider="gemini", model="gemini-2.5-flash-lite"),
    )
)
for answer in envelope["answers"]:
    print(answer)
```

Add more sources and Pollux broadcasts every prompt across every source, uploading each once regardless of how many prompts reference it.
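Conceptually, that one-upload fan-out behaves like `asyncio.gather` over a single cached document. A minimal stand-in sketch, assuming a hypothetical `ask()` helper in place of the real provider call:

```python
import asyncio

async def ask(prompt: str, cached_doc: str) -> str:
    # Stand-in for a provider call; real code would hit the LLM API.
    await asyncio.sleep(0)
    return f"[{cached_doc}] answer to: {prompt}"

async def fan_out(prompts: list[str], doc: str) -> list[str]:
    # "Upload" the document once, then send every prompt concurrently
    # against the cached copy.
    cached = f"cache:{doc}"
    return await asyncio.gather(*(ask(p, cached) for p in prompts))

answers = asyncio.run(fan_out(["Summarize.", "List findings."], "paper.pdf"))
```

`gather` preserves input order, which is why `answers` lines up one-to-one with the prompts — the same contract `run_many()` documents for its `answers` list.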
### When the Work Can Wait
Deferred delivery is for long fan-out work, backfills, and scheduled analysis
where no one is waiting on the answer in the current process.
```python
import asyncio
from pollux import (
    Config,
    Source,
    collect_deferred,
    defer,
    inspect_deferred,
)

config = Config(provider="openai", model="gpt-5-nano")

handle = asyncio.run(
    defer(
        "Summarize the report in five bullets.",
        source=Source.from_file("market-report.pdf"),
        config=config,
    )
)

snapshot = asyncio.run(inspect_deferred(handle))
if snapshot.is_terminal:
    result = asyncio.run(collect_deferred(handle))
    print(result["answers"][0])
```

In production code, persist `handle.to_dict()` and restore it later with `DeferredHandle.from_dict(...)`. For the full lifecycle, read Submitting Work for Later Collection and Building With Deferred Delivery.
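Persisting the handle can be as simple as writing the dict to disk and reloading it in the collecting process. A sketch assuming a hypothetical payload shape for `handle.to_dict()` (the real `DeferredHandle` schema may differ):

```python
import json
from pathlib import Path

# Hypothetical payload standing in for handle.to_dict(); the field
# names here are illustrative, not the real DeferredHandle schema.
handle_dict = {"provider": "openai", "job_id": "job-123"}

# Submitting process: persist the handle alongside your job records.
path = Path("handle.json")
path.write_text(json.dumps(handle_dict))

# Collecting process (possibly hours later): reload and rebuild.
restored = json.loads(path.read_text())
# handle = DeferredHandle.from_dict(restored)  # requires pollux
```

A database row or queue message works just as well as a file; the point is that the dict is plain JSON, so any durable store can carry it across process restarts.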
## Where Pollux Ends
Pollux owns content delivery, context caching, and provider translation. Prompt design, workflow orchestration, and what you do with results are yours. See Core Concepts for the full boundary model.
## Installation

```shell
pip install pollux-ai
```

Set your provider's API key:

```shell
export GEMINI_API_KEY="your-key-here"     # or
export OPENAI_API_KEY="your-key-here"     # or
export ANTHROPIC_API_KEY="your-key-here"  # or
export OPENROUTER_API_KEY="your-key-here"
```

Keys from: Google AI Studio · OpenAI · Anthropic · OpenRouter
## Documentation
- Getting Started: first result in 2 minutes
- Core Concepts: mental model and vocabulary
- Submitting Work for Later Collection: deferred lifecycle API
- Building With Deferred Delivery: when deferred is worth it
- API Reference: entry points and types
- Cookbook: runnable end-to-end recipes
Full docs at polluxlib.dev.
## Contributing
See CONTRIBUTING and TESTING.md for guidelines.
Built during Google Summer of Code 2025 with Google DeepMind. Learn more