rocksdb-cli
An interactive RocksDB command-line tool written in Go, with support for multiple column families (CF) and AI-powered natural language queries via GraphChain Agent.
Table of Contents
- Quick Start
- Features
- Web UI
- Transform Command
- Advanced Search (search tool)
- GraphChain Agent (AI-Powered)
- Installation and Build Process
- Available Commands
- Usage
- Interactive Commands
- JSON Features
- Generate Test Database
- MCP Server Support
Quick Start
# Web UI (easiest way to get started)
rocksdb-cli web --db /path/to/database
# Then open http://localhost:8080 in your browser
# Interactive mode (recommended for exploration)
rocksdb-cli repl --db /path/to/database
# Direct commands (good for scripting)
rocksdb-cli get --db mydb --cf users user:1001
rocksdb-cli scan --db mydb --cf logs --limit=100
# AI-powered queries
rocksdb-cli ai --db mydb "show me all active users"
# Data transformation (preview mode)
rocksdb-cli transform --db mydb --cf users --expr="value.upper()" --dry-run
# Find keys by prefix
rocksdb-cli prefix --db mydb --cf users --prefix "user:" --limit=50
# Real-time monitoring
rocksdb-cli watch --db mydb --cf events
Features
- 🌐 Web UI - Modern React-based web interface with single binary distribution
- 📟 Interactive REPL - Real-time database exploration with command history
- 🔄 Transform Data - Batch data transformation with Python expressions or scripts
- 🤖 AI Assistant - Natural language queries using LLMs (OpenAI, Ollama, Google AI)
- 📊 Data Export - Export to CSV and JSON formats
- 🔍 Advanced Search - Fuzzy search, JSON queries, prefix/range scan
- 👁️ Real-time Monitor - Watch mode for live data changes
- 🗄️ Column Family Support - Full support for multiple column families
- 💾 Read-only Mode - Safe concurrent access for production environments
- 🐳 Docker Support - Easy deployment with pre-built Docker images
- 🔌 MCP Server - Model Context Protocol server for AI integration
Web UI
The Web UI provides a modern, intuitive interface for managing your RocksDB database through your web browser. It's embedded directly into the CLI binary - no separate installation required!
Quick Start
# Start the web server (default port 8080)
rocksdb-cli web --db /path/to/database
# Custom port
rocksdb-cli web --db /path/to/database --port 3000
# Read-only mode (recommended for production)
rocksdb-cli web --db /path/to/database --read-only
Then open http://localhost:8080 in your web browser.
Features
- Browse Data - Navigate through column families and view key-value pairs
- Advanced Search - Search with regex patterns, case-sensitive options
- Export Data - Export search results or entire column families to CSV/JSON
- Pagination - Efficient cursor-based pagination for large datasets
- Binary Data Support - Automatic hex encoding for binary keys/values
- JSON Viewer - Expandable tree view for JSON data with nested parsing
- Column Family Management - List and switch between column families
- Database Statistics - View database and column family statistics
- Responsive Design - Clean, professional interface that works on all devices
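The automatic hex encoding for binary data can be pictured with a small sketch (illustrative Python, not the server's actual code): a key or value is shown as text when it decodes as UTF-8, and falls back to a hex string otherwise.

```python
def encode_for_json(raw: bytes) -> dict:
    """Return a display form of a raw key or value: UTF-8 text when
    possible, otherwise a hex string (models the hex-encoding behavior)."""
    try:
        return {"text": raw.decode("utf-8"), "hex": False}
    except UnicodeDecodeError:
        return {"text": raw.hex(), "hex": True}

print(encode_for_json(b"user:1001"))     # plain UTF-8 passes through
print(encode_for_json(b"\x00\x01\xff"))  # binary data becomes hex
```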
Architecture
- Single Binary - Everything embedded using Go's embed package (~55MB)
- No Dependencies - No need to install Node.js or any other runtime
- REST API - Full-featured API available at /api/v1/*
- React + TypeScript - Modern frontend with TailwindCSS styling
- Production Ready - Supports read-only mode for safe database access
API Endpoints
When running the web server, the following API endpoints are available:
GET /api/v1/health - Health check
GET /api/v1/cf - List column families
GET /api/v1/stats - Database statistics
GET /api/v1/cf/:cf/get/:key - Get value by key
POST /api/v1/cf/:cf/put - Put key-value pair
POST /api/v1/cf/:cf/scan - Scan entries with pagination
POST /api/v1/cf/:cf/search - Advanced search
POST /api/v1/cf/:cf/jsonquery - JSON field query
GET /api/v1/cf/:cf/stats - Column family statistics
Development
For frontend development, see the web-ui/ directory:
cd web-ui
npm install
npm run dev # Start development server with hot reload
npm run build  # Build for production (output to dist/)
The built files are automatically embedded into the Go binary during compilation.
Transform Command
The transform command enables batch data transformation using Python expressions or script files. Perfect for data migration, cleanup, and batch updates.
Quick Start
# Preview transformation (safe, no changes)
rocksdb-cli transform --db mydb --cf users --expr="value.upper()" --dry-run
# Apply transformation
rocksdb-cli transform --db mydb --cf users --expr="value.upper()"
# Use a Python script file
rocksdb-cli transform --db mydb --cf users --script=scripts/transform/transform_uppercase_name.py --dry-run
Features
- ✅ Python Expressions - Inline transformations with Python code
- ✅ Python Scripts - Reusable transformation logic with filtering
- ✅ Dry-run Mode - Preview changes before applying (RECOMMENDED)
- ✅ Filtering - Process only entries matching conditions
- ✅ Batch Processing - Efficiently handle large datasets
- ✅ Statistics - Detailed processing reports
Expression Examples
# Simple text transformation
rocksdb-cli transform --db mydb --cf users --expr="value.upper()" --dry-run
# JSON field modification
rocksdb-cli transform --db mydb --cf users \
--expr="import json; d=json.loads(value); d['status']='active'; json.dumps(d)" \
--dry-run
# With filter condition
rocksdb-cli transform --db mydb --cf users \
--filter="'admin' in value" \
--expr="value.upper()" \
--dry-run
# Key-based filter
rocksdb-cli transform --db mydb --cf users \
--filter="key.startswith('user:')" \
--expr="value.upper()" \
--dry-run --limit=10
Script File Usage
Transform scripts provide more flexibility with custom functions:
# scripts/transform/my_transform.py
import json

def should_process(key, value):
    """Return True to process, False to skip"""
    try:
        data = json.loads(value)
        return 'name' in data
    except (ValueError, TypeError):
        return False

def transform_value(key, value):
    """Transform the value"""
    data = json.loads(value)
    data['name'] = data['name'].upper()
    return json.dumps(data)
Usage:
rocksdb-cli transform --db mydb --cf users \
--script=scripts/transform/my_transform.py \
--dry-run --limit=10
Available Scripts
See scripts/transform/README.md for pre-built transformation scripts:
- transform_uppercase_name.py - Uppercase the 'name' field
- filter_by_age.py - Filter and tag by age groups
- flatten_nested_json.py - Flatten nested JSON strings
- add_timestamp.py - Add processing timestamp
Safety Tips
✅ Always run with --dry-run first to preview changes
💡 Start with --limit=10 to test on a small dataset
📊 Check statistics output before proceeding
💾 Consider backing up your database first
Requirements
- Python 3.6+ (must be installed and in PATH)
- Standard library only for basic operations
Command Options
Flags:
-c, --cf string Column family to transform (default "default")
--expr string Python expression (e.g., "value.upper()")
--filter string Filter entries with Python boolean
--script string Python script file path
--dry-run Preview mode - show changes without applying (RECOMMENDED)
--limit int Process only N entries (0 = all)
--batch-size int Internal batch size (default 1000)
--verbose Show detailed progress information
For more details, run: rocksdb-cli transform --help
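To make the expression model concrete, here is a minimal Python sketch of how --filter and --expr behave conceptually: both expressions see key and value, and the filter gates which entries get rewritten. This models the documented behavior with eval; the CLI's real implementation may differ.

```python
import json

def apply_transform(entries, expr, filter_expr=None):
    """Apply a transform expression to each entry, optionally gated by a
    filter expression. 'key' and 'value' are exposed to both expressions,
    mirroring the --expr/--filter flags (illustrative model only)."""
    out = {}
    for key, value in entries.items():
        env = {"key": key, "value": value, "json": json}
        if filter_expr and not eval(filter_expr, {}, env):
            out[key] = value  # non-matching entries are left unchanged
            continue
        out[key] = eval(expr, {}, env)
    return out

data = {"user:1": "alice", "log:1": "boot"}
print(apply_transform(data, "value.upper()", "key.startswith('user:')"))
# {'user:1': 'ALICE', 'log:1': 'boot'}
```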
Advanced Search (search tool)
The search tool supports multi-condition queries, regex matching, and cursor-based pagination, making it well suited to searching large datasets efficiently with flexible criteria.
Supported Parameters
{
"args": {
"key_pattern": "string", // Optional, substring or regex for key matching
"value_pattern": "string", // Optional, substring or regex for value matching
"column_family": "string", // Optional, defaults to "default"
"limit": 100, // Optional, max results per page (default: 10)
"after": "string", // Optional, last key from previous page (for pagination)
"regex": true, // Optional, use regex for key/value (default: false)
"keys_only": false, // Optional, return only keys (default: false)
"export_file": "string", // Optional, export results to CSV file
"export_sep": "string" // Optional, CSV separator (default: ",")
}
}
Response Fields
- results: Matched key-value pairs (or keys only if keys_only is true)
- count: Number of results in this page
- next_cursor: Last key in this page; use as after for the next page
- has_more: Whether more results are available for pagination
Example Usage
Request:
{
"args": {
"key_pattern": "user:",
"value_pattern": "alice",
"column_family": "users",
"limit": 50,
"after": "",
"regex": false
}
}
Response:
{
"results": {
"user:1001": "{\"name\":\"alice\",\"age\":30}",
"user:1010": "{\"name\":\"alice\",\"age\":25}"
},
"count": 2,
"next_cursor": "user:1010",
"has_more": false
}
To fetch the next page: Set "after": "user:1010" in your next request.
Tip: Use regex: true for advanced pattern matching, and keys_only: true if you only need the list of keys.
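The cursor-based pagination contract above can be modeled in a few lines of Python. This is an illustrative sketch (a plain dict stands in for a column family; search_page is a hypothetical helper, not the tool's implementation):

```python
def search_page(db, key_pattern="", value_pattern="", limit=10, after=""):
    """Return one page of substring-matched results over a sorted key space,
    in the same shape as the search tool's response."""
    results = {}
    for key in sorted(db):
        if after and key <= after:  # resume strictly after the cursor
            continue
        if key_pattern in key and value_pattern in db[key]:
            results[key] = db[key]
            if len(results) == limit:
                break
    next_cursor = max(results) if results else ""
    # has_more: does any later key still match?
    has_more = any(
        k > next_cursor and key_pattern in k and value_pattern in db[k]
        for k in db
    )
    return {"results": results, "count": len(results),
            "next_cursor": next_cursor, "has_more": has_more}

db = {f"user:{i}": "alice" for i in range(1000, 1005)}
page1 = search_page(db, key_pattern="user:", limit=2)
page2 = search_page(db, key_pattern="user:", limit=2, after=page1["next_cursor"])
print(page1["next_cursor"], page2["next_cursor"])  # user:1001 user:1003
```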
GraphChain Agent (AI-Powered)
GraphChain Agent transforms your RocksDB interactions using natural language processing. Instead of remembering specific commands, simply ask questions in plain English!
Quick Start with GraphChain
# Start GraphChain Agent
rocksdb-cli --db /path/to/database --graphchain
# With custom configuration
rocksdb-cli --db /path/to/database --graphchain --config custom-graphchain.yaml
# Docker mode
docker run -it --rm -v "/path/to/db:/data" -v "$PWD/config:/config" \
rocksdb-cli --db /data --graphchain --config /config/graphchain.yaml
Configuration
Create a configuration file (default: config/graphchain.yaml):
graphchain:
  llm:
    provider: "ollama"                 # openai, googleai, ollama, azureopenai
    model: "llama2"                    # Model name
    api_key: "${OPENAI_API_KEY}"       # API key (not needed for Ollama)
    base_url: "http://localhost:11434" # Ollama URL
    timeout: "30s"                     # Request timeout
    # Azure OpenAI specific (only when provider: azureopenai)
    # azure_endpoint: "https://your-resource.openai.azure.com"
    # azure_deployment: "gpt-4"
    # azure_api_version: "2024-02-01"
  agent:
    max_iterations: 10                 # Max tool iterations
    tool_timeout: "10s"                # Tool execution timeout
    enable_memory: true                # Enable conversation memory
    memory_size: 100                   # Max conversation history
  security:
    enable_audit: true                 # Enable audit logging
    read_only_mode: false              # Restrict to read operations
    max_query_complexity: 10           # Max query complexity
    allowed_operations: ["get", "scan", "prefix", "jsonquery", "search", "stats"]
  context:
    enable_auto_discovery: true        # Auto-discover database structure
    update_interval: "5m"              # Context refresh interval
    max_context_size: 4096             # Max context tokens
Natural Language Examples
Once in GraphChain mode, you can ask natural questions:
Database Exploration
🤖 GraphChain Agent > Show me all column families in the database
🤖 GraphChain Agent > What's in the users column family?
🤖 GraphChain Agent > How many keys are in the logs table?
🤖 GraphChain Agent > Give me some statistics about the database
Data Queries
🤖 GraphChain Agent > Find all users named Alice
🤖 GraphChain Agent > Show me the last 5 entries in the logs column family
🤖 GraphChain Agent > Get all keys that start with "user:" in the users table
🤖 GraphChain Agent > Find JSON records where age is greater than 30
🤖 GraphChain Agent > Search for entries containing "error" in the value
Complex Operations
🤖 GraphChain Agent > Export the users column family to a CSV file
🤖 GraphChain Agent > Show me all product records where category is "electronics"
🤖 GraphChain Agent > Find the most recent log entry with level "ERROR"
🤖 GraphChain Agent > Compare the size of different column families
Data Analysis
🤖 GraphChain Agent > What types of data are stored in the database?
🤖 GraphChain Agent > Show me the key patterns used in the users table
🤖 GraphChain Agent > How is the data distributed across column families?
🤖 GraphChain Agent > Find unusual or interesting patterns in the data
Supported LLM Providers
1. Ollama (Local, Recommended)
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
# Start Ollama service
ollama serve
# Pull a model
ollama pull llama2  # Or llama3, codellama, mistral, etc.
Configuration:
graphchain:
  llm:
    provider: "ollama"
    model: "llama2"
    base_url: "http://localhost:11434"
2. OpenAI
export OPENAI_API_KEY="your-api-key-here"
Configuration:
graphchain:
  llm:
    provider: "openai"
    model: "gpt-4"
    api_key: "${OPENAI_API_KEY}"
3. Google AI (Gemini)
export GOOGLE_AI_API_KEY="your-api-key-here"
Configuration:
graphchain:
  llm:
    provider: "googleai"
    model: "gemini-pro"
    api_key: "${GOOGLE_AI_API_KEY}"
4. Azure OpenAI
export AZURE_OPENAI_API_KEY="your-api-key-here"
Configuration:
graphchain:
  llm:
    provider: "azureopenai"
    model: "gpt-4"                          # Not used; the deployment name is used instead
    api_key: "${AZURE_OPENAI_API_KEY}"
    azure_endpoint: "https://your-resource.openai.azure.com"
    azure_deployment: "gpt-4"               # Your deployment name
    azure_api_version: "2024-02-01"         # Optional, defaults to 2024-02-01
Notes:
- Azure OpenAI requires three additional fields: azure_endpoint, azure_deployment, and optionally azure_api_version
- The azure_deployment is the name of your deployed model in Azure
- You can find your endpoint and deployment names in the Azure Portal
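A note on the "${...}" placeholders used in the api_key fields: they follow shell-style environment-variable expansion, which Python's os.path.expandvars demonstrates (illustrative; the CLI may implement expansion differently):

```python
import os

# Assumption for illustration: "${OPENAI_API_KEY}"-style placeholders are
# replaced with the matching environment variable at load time.
os.environ["OPENAI_API_KEY"] = "sk-demo"
print(os.path.expandvars('api_key: "${OPENAI_API_KEY}"'))
# api_key: "sk-demo"
```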
GraphChain Agent Features
- 🧠 Intelligent Query Planning: Automatically selects the right tools for your questions
- 🔍 Context Awareness: Understands database structure and content patterns
- 💬 Natural Conversation: Ask follow-up questions and maintain context
- 🛡️ Security & Auditing: Configurable permissions and audit logging
- ⚡ Performance Optimized: Efficient tool selection and execution
- 🔧 Extensible: Easy to add new tools and capabilities
Troubleshooting GraphChain
Common Issues:
- Ollama Connection Failed
  # Check if Ollama is running
  curl http://localhost:11434/api/tags
  # Start Ollama if not running
  ollama serve
- Model Not Found
  # List available models
  ollama list
  # Pull required model
  ollama pull llama2
- API Key Issues (OpenAI/Google)
  # Verify environment variable
  echo $OPENAI_API_KEY
  # Or set in config file
  api_key: "your-actual-key-here"
- Permission Errors
  - Check read_only_mode setting in config
  - Verify allowed_operations includes needed operations
Project Structure
rocksdb-cli/
├── cmd/ # Main program entry
│ ├── main.go
│ └── mcp-server/ # MCP server implementation
│ └── main.go
├── internal/
│ ├── db/ # RocksDB wrapper
│ │ └── db.go
│ ├── repl/ # Interactive command-line
│ │ └── repl.go
│ ├── command/ # Command handling
│ │ └── command.go
│ ├── graphchain/ # GraphChain Agent implementation
│ │ ├── agent.go # Core agent logic
│ │ ├── config.go # Configuration management
│ │ ├── tools.go # Database tools for LLM
│ │ ├── context.go # Database context management
│ │ └── audit.go # Audit and security
│ └── mcp/ # MCP server components
│ ├── tools.go
│ ├── resources.go
│ └── transport.go
├── config/ # Configuration files
│ ├── graphchain.yaml # GraphChain config
│ └── mcp-server.yaml # MCP server config
├── scripts/ # Helper scripts
│ └── gen_testdb.go # Generate test database
├── Dockerfile # Docker configuration
└── README.md
Installation and Build Process
Option 1: Docker (Recommended)
Docker provides the easiest way to use rocksdb-cli without dealing with native dependencies.
Prerequisites
- Docker installed on your system
Building Docker Image
# Build Docker image (automatically detects proxy if needed)
./build_docker.sh
# Or build manually
docker build -t rocksdb-cli .
Using Docker Image
# Get help
docker run --rm rocksdb-cli --help
# Interactive mode with your database
docker run -it --rm -v "/path/to/your/db:/data" rocksdb-cli --db /data
# Command-line usage
docker run --rm -v "/path/to/your/db:/data" rocksdb-cli --db /data --last users --pretty
# Prefix scan
docker run --rm -v "/path/to/your/db:/data" rocksdb-cli --db /data --prefix users --prefix-key "user:" --pretty
# Range scan
docker run --rm -v "/path/to/your/db:/data" rocksdb-cli --db /data --scan users --start "user:1000" --limit 10
# CSV export
docker run --rm -v "/path/to/your/db:/data" -v "$PWD:/output" rocksdb-cli --db /data --export-cf users --export-file /output/users.csv
# Watch mode
docker run -it --rm -v "/path/to/your/db:/data" rocksdb-cli --db /data --watch logs --interval 500ms
Docker with Proxy Support
If you're behind a corporate firewall or using a proxy, the build script automatically detects and uses your proxy settings:
# Set proxy environment variables (if needed)
export HTTP_PROXY="http://your-proxy-server:port"
export HTTPS_PROXY="http://your-proxy-server:port"
# Build with proxy support
./build_docker.sh
Alternatively, you can manually specify proxy settings:
# Manual proxy build
docker build \
--build-arg HTTP_PROXY="http://your-proxy-server:port" \
--build-arg HTTPS_PROXY="http://your-proxy-server:port" \
-t rocksdb-cli .
Option 2: Native Build
For better performance or development purposes, you can build natively.
Prerequisites
Required:
- Go 1.20+ - For building the Go backend
- Node.js 16+ and npm - For building the Web UI frontend
- RocksDB C++ libraries - For database access
- Python 3.6+ (optional) - Required only for the transform command
Installation:
macOS
# Install RocksDB and dependencies
brew install rocksdb snappy lz4 zstd bzip2
# Install Node.js (if not already installed)
brew install node
# Configure environment variables (add to ~/.zshrc or ~/.bash_profile)
export CGO_CFLAGS="-I/opt/homebrew/Cellar/rocksdb/*/include"
export CGO_LDFLAGS="-L/opt/homebrew/Cellar/rocksdb/*/lib -L/opt/homebrew/lib -lrocksdb -lstdc++ -lm -lz -lbz2 -lsnappy -llz4 -lzstd"
# Apply environment variables
source ~/.zshrc
Linux (Ubuntu/Debian)
sudo apt-get update
sudo apt-get install librocksdb-dev libsnappy-dev liblz4-dev libzstd-dev libbz2-dev build-essential
# Install Node.js (if not already installed)
curl -fsSL https://deb.nodesource.com/setup_18.x | sudo -E bash -
sudo apt-get install -y nodejs
Linux (CentOS/RHEL)
sudo yum install rocksdb-devel snappy-devel lz4-devel libzstd-devel bzip2-devel gcc-c++
# Install Node.js (if not already installed)
curl -fsSL https://rpm.nodesource.com/setup_18.x | sudo bash -
sudo yum install -y nodejs
Windows
For Windows, we recommend using Docker or WSL:
Option 1: Use WSL (Recommended)
# Install WSL and Ubuntu, then follow Linux instructions above
wsl --install
Option 2: Native Windows Build
Requires complex setup with vcpkg and Visual Studio. Docker is much easier.
Building Native Executable
Important: The project includes an embedded Web UI that must be built before compiling the Go binary.
Step 1: Build Frontend (Web UI)
The Web UI is a React + TypeScript application located in web-ui/. You need to build it first:
# Navigate to web-ui directory
cd web-ui
# Install Node.js dependencies
npm install
# Build the frontend (output goes to web-ui/dist/)
npm run build
# Return to project root
cd ..
Step 2: Copy Built Files to Embed Directory
The Go binary embeds files from internal/webui/dist/, so you need to copy the built files:
# Copy built frontend to internal embed directory
cp -r web-ui/dist/* internal/webui/dist/
Step 3: Build Go Binary
# Install Go dependencies
go mod tidy
# Build the executable (embeds the Web UI)
go build -o rocksdb-cli ./cmd
Complete Build Script
For convenience, here's a complete build script:
#!/bin/bash
# Complete build process
# 1. Build frontend
cd web-ui && npm install && npm run build && cd ..
# 2. Copy to embed directory
cp -r web-ui/dist/* internal/webui/dist/
# 3. Build Go binary
go mod tidy
go build -o rocksdb-cli ./cmd
echo "Build complete! Binary: ./rocksdb-cli"
Development Mode
For frontend development with hot reload:
# Terminal 1: Start frontend dev server
cd web-ui
npm run dev
# Frontend runs on http://localhost:5173
# Terminal 2: Run backend (pointing to dev frontend)
# Note: You'll need to configure CORS for dev mode
./rocksdb-cli web --db /path/to/database --port 8080
Running Tests
# Run all tests
make test
# Run tests with coverage
make test-coverage
Available Commands
web Start web UI server (all-in-one binary)
repl Start interactive REPL mode
get Get value by key from column family
put Put key-value pair in column family
scan Scan key-value pairs in range
prefix Search by key prefix
last Get the last key-value pair from column family
search Fuzzy search for keys and values
jsonquery Query by JSON field value
export Export column family to CSV file
transform Transform key-value data using Python expressions
watch Watch for new entries in column family (real-time)
stats Show database or column family statistics
listcf List all column families
createcf Create new column family
dropcf Drop column family
keyformat Show detected key format and conversion examples
ai AI-powered database assistant (GraphChain)
help Help about any command
Use rocksdb-cli [command] --help for more information about each command.
Usage
Command Line Help
# Show comprehensive help
rocksdb-cli --help
# Docker version
docker run --rm rocksdb-cli --help
Interactive Mode
Start the interactive REPL:
# Native
rocksdb-cli --db /path/to/rocksdb
# Docker
docker run -it --rm -v "/path/to/db:/data" rocksdb-cli --db /data
Direct Command Usage
Get Last Entry
# Get last entry from column family
rocksdb-cli --db /path/to/db --last users
# With pretty JSON formatting
rocksdb-cli --db /path/to/db --last users --pretty
Prefix Scan
# Scan keys starting with a specific prefix
rocksdb-cli --db /path/to/db --prefix users --prefix-key "user:"
# Prefix scan with pretty JSON formatting
rocksdb-cli --db /path/to/db --prefix users --prefix-key "user:" --pretty
# Example output:
# user:1001: {"id":1001,"name":"Alice","email":"alice@example.com"}
# user:1002: {"id":1002,"name":"Bob","email":"bob@example.com"}
Range Scan
# Scan all entries in a column family
rocksdb-cli --db /path/to/db --scan users
# Scan with range
rocksdb-cli --db /path/to/db --scan users --start "user:1000" --end "user:2000"
# Scan with options
rocksdb-cli --db /path/to/db --scan users --start "user:1000" --limit 10 --reverse
# Keys only (no values)
rocksdb-cli --db /path/to/db --scan users --keys-only
CSV Export
# Export column family to CSV (default comma separator)
rocksdb-cli --db /path/to/db --export-cf users --export-file users.csv
# Use semicolon as separator
rocksdb-cli --db /path/to/db --export-cf users --export-file users.csv --export-sep ";"
# Use tab as separator (for TSV/Excel)
rocksdb-cli --db /path/to/db --export-cf users --export-file users.tsv --export-sep "\t"
- --export-sep <sep> (optional): Specify CSV separator. Supports , (default), ;, \t (tab), etc.
- In interactive mode, you can also use: export users users.csv --sep=";" or export logs logs.tsv --sep="\t"
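The separator option maps directly onto standard CSV writing. A quick Python illustration (the "key"/"value" column names are illustrative; note that values containing quote characters would additionally be escaped per normal CSV rules):

```python
import csv
import io

# Hypothetical exported rows; a tab delimiter models --export-sep "\t".
rows = [("user:1001", "alice@example.com"), ("user:1002", "bob@example.com")]

buf = io.StringIO()
writer = csv.writer(buf, delimiter="\t")
writer.writerow(["key", "value"])  # header row
writer.writerows(rows)
print(buf.getvalue())
```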
Search and Export
# Search and export results to CSV
rocksdb-cli --db /path/to/db --search users --search-key "admin" --search-export users_admin.csv
rocksdb-cli --db /path/to/db --search logs --search-value "error" --search-export errors.csv --search-export-sep ";"
# Export only keys (no values)
rocksdb-cli --db /path/to/db --search products --search-key "prod:" --search-keys-only --search-export products_keys.csv
# Search with multiple patterns and export
rocksdb-cli --db /path/to/db --search users --search-key "user:" --search-value "admin" --search-export admin_users.csv
Watch Mode (Real-time Monitoring)
# Monitor column family for new entries
rocksdb-cli --db /path/to/db --watch users
rocksdb-cli --db /path/to/db --watch logs --interval 500ms
Interactive Commands
Once in interactive mode, you can use these commands:
Basic Operations
# Column family management
usecf <cf> # Switch current column family
listcf # List all column families
createcf <cf> # Create new column family
dropcf <cf> # Drop column family
# Data operations
get [<cf>] <key> [--pretty] # Query by key (use --pretty for JSON formatting)
put [<cf>] <key> <value> # Insert/Update key-value pair
prefix [<cf>] <prefix> [--pretty] # Query by key prefix (supports --pretty for JSON)
last [<cf>] [--pretty] # Get last key-value pair from CF
# Advanced operations
scan [<cf>] [start] [end] [options] # Scan range with options
jsonquery [<cf>] <field> <value> [--pretty] # Query by JSON field value
search [<cf>] [options] # Fuzzy search with export support
export [<cf>] <file_path> # Export CF to CSV file
# Help and exit
help # Show interactive help
exit/quit # Exit the CLI
Command Usage Patterns
There are two ways to use commands:
- Set current CF and use simplified commands:
usecf users # Set current CF
get user:1001 # Use current CF
put user:1006 {"name":"Alice","age":25}
prefix user: # Use current CF for prefix scan
prefix user: --pretty # Use current CF with pretty JSON formatting
- Explicitly specify CF in commands:
get users user:1001 # Specify CF in command
put users user:1006 {"name":"Alice","age":25}
prefix users user: # Specify CF for prefix scan
prefix users user: --pretty # Specify CF with pretty formatting
JSON Pretty Print
When retrieving JSON values, use the --pretty flag for formatted output:
# Store JSON value
put users user:1001 {"name":"John","age":30,"hobbies":["reading","coding"]}
# Regular get (single line)
get users user:1001
{"name":"John","age":30,"hobbies":["reading","coding"]}
# Pretty printed get
get users user:1001 --pretty
{
"name": "John",
"age": 30,
"hobbies": [
"reading",
"coding"
]
}
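The --pretty output shown above corresponds to standard two-space JSON indentation, as produced by Python's stdlib here (illustrative only, not the CLI's implementation):

```python
import json

# Re-indent a compact JSON value the way --pretty displays it.
value = '{"name":"John","age":30,"hobbies":["reading","coding"]}'
print(json.dumps(json.loads(value), indent=2))
```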
JSON Field Querying
The jsonquery command allows you to search for entries based on JSON field values:
# Query by string field
jsonquery users name Alice
Found 1 entries in 'users' where field 'name' = 'Alice':
user:1001: {"id":1001,"name":"Alice","email":"alice@example.com","age":25}
# Query by number field
jsonquery users age 30
Found 1 entries in 'users' where field 'age' = '30':
user:1002: {"id":1002,"name":"Bob","age":30}
# Query with explicit column family
jsonquery products category fruit
Found 2 entries in 'products' where field 'category' = 'fruit':
prod:apple: {"name":"Apple","price":1.50,"category":"fruit"}
prod:banana: {"name":"Banana","price":0.80,"category":"fruit"}
# Query with pretty JSON output
jsonquery users name Alice --pretty
Found 1 entries in 'users' where field 'name' = 'Alice':
user:1001: {
"age": 25,
"email": "alice@example.com",
"id": 1001,
"name": "Alice"
}
# Using current column family
usecf logs
jsonquery level ERROR
Found entries where field 'level' = 'ERROR' in current CF
Supported field types:
- String: Exact match ("Alice")
- Number: Numeric comparison (30, 1.5)
- Boolean: Boolean comparison (true, false)
- Null: Null comparison (null)
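A sketch of how type-aware field matching along these lines can work (illustrative Python; json_field_matches is a hypothetical helper, not the CLI's code):

```python
import json

def json_field_matches(value: str, field: str, query: str) -> bool:
    """Compare a JSON field against a query string, coercing the query
    to the field's type: string, number, boolean, or null."""
    try:
        data = json.loads(value)
    except ValueError:
        return False
    if field not in data:
        return False
    actual = data[field]
    if isinstance(actual, bool):            # check bool before number
        return query.lower() == str(actual).lower()
    if isinstance(actual, (int, float)):
        try:
            return float(query) == float(actual)
        except ValueError:
            return False
    if actual is None:
        return query == "null"
    return query == actual                  # string: exact match

print(json_field_matches('{"name":"Alice","age":25}', "age", "25"))  # True
```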
Range Scanning
The scan command provides powerful range scanning with various options:
# Basic range scan
scan users user:1001 user:1005
# Scan with options
scan users user:1001 user:1005 --reverse --limit=10 --values=no
# Available options:
# --reverse : Scan in reverse order
# --limit=N : Limit results to N entries
# --values=no : Return only keys without values
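The scan semantics can be modeled in a short Python sketch. Assumptions for illustration: keys are iterated in sorted order, and end is treated as exclusive, matching typical RocksDB iterator semantics; a plain dict stands in for a column family.

```python
def scan(cf, start=None, end=None, reverse=False, limit=0, values=True):
    """Range scan over a sorted key space: [start, end), with optional
    reverse order, result limit, and keys-only mode (--values=no)."""
    selected = [k for k in sorted(cf)
                if (start is None or k >= start) and (end is None or k < end)]
    if reverse:
        selected.reverse()
    if limit:
        selected = selected[:limit]
    return {k: cf[k] for k in selected} if values else selected

cf = {f"user:{i}": f"u{i}" for i in range(1001, 1006)}
print(scan(cf, "user:1001", "user:1005", values=False))
# ['user:1001', 'user:1002', 'user:1003', 'user:1004']
```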
Generate Test Database
Create a comprehensive test database with sample data:
# Generate test database
go run scripts/gen_testdb.go ./testdb
# Use with CLI
rocksdb-cli --db ./testdb
# Use with Docker
docker run -it --rm -v "$PWD/testdb:/data" rocksdb-cli --db /data
The test database includes:
- default: Basic key-value pairs and configuration data
- users: User profiles in JSON format
- products: Product information with categories
- logs: Application logs with different severity levels
Example Usage with Test Data
# Interactive mode
rocksdb-cli --db ./testdb
# In REPL:
> listcf # List all column families
> usecf users # Switch to users
> prefix user: # Get all users starting with "user:"
> prefix user: --pretty # Get all users with pretty JSON formatting
> get user:1001 --pretty # Get specific user with JSON formatting
> jsonquery name Alice # Find users named Alice
> jsonquery users age 25 # Find users aged 25
> scan user:1001 user:1005 # Scan range of users
> export users users.csv # Export to CSV
> search --key=admin --export=admins.csv # Search and export admin users
> usecf logs # Switch to logs
> prefix error: # Get all error logs starting with "error:"
> jsonquery level ERROR # Find error logs by JSON field
> watch logs --interval 1s # Watch for new log entries
MCP Server Support
RocksDB CLI includes a Model Context Protocol (MCP) server that enables integration with AI assistants like Claude Desktop, allowing AI tools to interact with your RocksDB databases through natural language.
Quick Start with MCP Server
# Start MCP server (read-only mode, recommended)
./cmd/mcp-server/rocksdb-mcp-server --db /path/to/database --readonly
# With configuration file
./cmd/mcp-server/rocksdb-mcp-server --config config/mcp-server.yaml
Claude Desktop Integration
Add to your claude_desktop_config.json:
{
"mcpServers": {
"rocksdb": {
"command": "/path/to/rocksdb-mcp-server",
"args": ["--db", "/path/to/database", "--readonly"]
}
}
}
Then interact with your database using natural language:
- "Show me all users in the database"
- "Export the products table to CSV"
- "Find all logs with error level"
Available MCP Tools
The MCP server provides these tools for AI assistants:
- Database Operations: Get, scan, prefix search
- Column Family Management: List, create, drop CFs
- Data Export: CSV export functionality
- JSON Queries: Search by JSON field values
MCP vs GraphChain Agent
| Feature | MCP Server | GraphChain Agent |
|---|---|---|
| Integration | External AI (Claude Desktop) | Built-in AI agent |
| Protocol | Standard MCP protocol | Direct LLM integration |
| Setup | Requires Claude Desktop | Self-contained |
| Security | Tool-level permissions | Full security controls |
📖 For comprehensive MCP documentation, including detailed configuration, API reference, security considerations, and troubleshooting, see docs/MCP_SERVER_README.md.
Docker Technical Details
The Docker image includes:
- RocksDB v10.2.1 - manually compiled for compatibility with grocksdb v1.10.1
- Multi-stage build - optimized for size and security
- Non-root user - runs as rocksdb user for security
- Debian bullseye base - for compatibility and stability
Build Process
- Build stage: Compiles RocksDB and Go application
- Runtime stage: Minimal image with only runtime dependencies
- Total build time: ~6-7 minutes (RocksDB compilation takes most time)
- Final image size: ~200MB
Proxy Support
The Docker build automatically handles proxy configurations:
- Detects HTTP_PROXY and HTTPS_PROXY environment variables
- Passes them to the build process if present
- No manual configuration needed in most cases
Performance Notes
- Docker: Slight overhead but consistent across platforms
- Native: Best performance, platform-specific
- Memory: RocksDB compilation requires ~2GB RAM in Docker
- Storage: Built image is ~200MB
Contributing
- Fork the repository
- Create your feature branch
- Make changes and add tests
- Ensure Docker build works: ./build_docker.sh
- Submit a pull request
License
MIT License - see LICENSE file for details
Prefix Scan Caveats for Binary/Uint64 Keys
For column families with binary or uint64 keys, the prefix command matches byte prefixes, not numeric string prefixes. For example, prefix 0x00 matches all keys starting with byte 0x00, but prefix 123 matches only the key equal to 123 encoded as an 8-byte big-endian integer.
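A quick Python illustration of why byte prefixes and numeric prefixes diverge for uint64 keys, using struct to build the 8-byte big-endian encoding:

```python
import struct

# uint64 keys are stored as 8-byte big-endian integers, so "starts with
# the digits 123" has no meaning at the byte level.
key_123 = struct.pack(">Q", 123)
key_1234 = struct.pack(">Q", 1234)
print(key_123.hex())   # 000000000000007b
print(key_1234.hex())  # 00000000000004d2
# Both share the byte prefix 0x00, yet 1234 does not "start with" 123:
print(key_1234.startswith(key_123))  # False
```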