# Poser: Advanced Skier Pose Analysis Pipeline

A tool to give feedback on skiing technique. Built by @chrhansen in Innsbruck, Austria.
Poser is a full-stack video analysis system for skier pose detection, turn segmentation, and technique metrics. It ships as a web app (React + Rails) with a GPU analysis pipeline on RunPod.
## Visual Example

| Landing Page | Results Dashboard |
| --- | --- |
| ![]() | ![]() |
## Key Features
- End-to-end web workflow: Upload, analyze, and review results from a browser with live status updates.
- Embeddable partner widget: Hosted JS widget with domain allowlisting and email confirmation flow.
- Pose analysis pipeline: SAM2 skier tracking, SAM3D Body pose estimation, and temporal smoothing.
- Technique metrics: Edge similarity scoring and turn segmentation surfaced in the UI.
- Artifact exports: Pose overlay video and CSV metrics for parity checks.
- GPU acceleration: RunPod GPU workers for production analysis jobs.
## Documentation

- Pipeline details and metrics: `analysis/README.md`
- Web app spec: `docs/spec-webapp.md`
- Embed widget flow: `docs/embed-widget.md`
- Rails backend details: `backend/README.md`
- Analysis DB table contract (read when making schema/internal callback changes): `backend/db/data-model-contract.md`
## Processing Pipeline (High Level)

1. Load video metadata and normalize rotation.
2. Track skier from prompt and estimate 3D pose landmarks.
3. Smooth landmarks in time.
4. Compute metrics (edge similarity + turn segmentation).
5. Render outputs and publish artifacts.
See `analysis/README.md` for the full breakdown.
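As an illustration of step 3, temporal smoothing of a per-frame landmark coordinate can be as simple as a centered moving average. This is a sketch only; the pipeline's actual smoother is documented in `analysis/README.md` and may differ:

```python
def smooth_series(values: list[float], window: int = 5) -> list[float]:
    """Centered moving average over a per-frame landmark coordinate.

    Edge frames use a shrinking window so the output length matches the input.
    """
    half = window // 2
    smoothed = []
    for i in range(len(values)):
        lo = max(0, i - half)
        hi = min(len(values), i + half + 1)
        smoothed.append(sum(values[lo:hi]) / (hi - lo))
    return smoothed

# A single-frame jitter spike gets spread over its neighbors:
print(smooth_series([0.0, 0.0, 10.0, 0.0, 0.0], window=3))
```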
## Repository Structure

```
poser/
├── analysis/             # Analysis pipeline + RunPod service image
├── backend/              # Rails API server (auth, storage, jobs)
│   ├── app/              # Controllers, services, serializers
│   ├── db/               # Migrations + structure.sql
│   └── spec/             # Rails specs
├── frontend/             # React + Vite frontend + embed widget
│   ├── src/              # UI, pages, embed widget
│   └── dist/             # Built assets (generated)
├── docs/                 # Product + engineering docs
├── docker-compose.yml    # Local dev services
├── fly.web.toml          # Fly.io config for poser-web
├── Caddyfile.local       # Local reverse proxy rules
├── tests/                # Repo-level tests (e.g., Caddyfile)
├── input/                # Local sample inputs
├── output/               # Local analysis outputs
└── scratch/              # Local experiments
```
## Development

### Analysis Python environment

Install analysis test/lint dependencies:

```shell
python -m pip install -r analysis/requirements-ci.txt
python -m pip install -r analysis/requirements-dev.txt
```

### Docker Compose (local stack)

Run the web app, database, and proxy locally:

```shell
docker compose up
```

Note: local uploads require S3 credentials in `.env` (Tigris or an S3-compatible bucket).
Local URLs:
- App: http://localhost
- Backend: http://localhost:8000
### Analysis local checks

```shell
ruff --config analysis/pyproject.toml check analysis analysis/tests
ruff --config analysis/pyproject.toml format --check analysis analysis/tests
mypy --config-file analysis/pyproject.toml analysis
pytest analysis/tests -q
```

## Container Architecture
Web + DB run on Fly. Analysis runs on RunPod.
```
Browser
  │
  ▼
Edge Proxy
  - Local: Caddy (Caddyfile.local)
  - Production: Fly edge router
  │
  ▼
poser-web (Rails + static frontend)
  ├─ reads/writes: poser-db (Postgres)
  ├─ presigned uploads/downloads: Tigris S3
  └─ triggers: analysis pod (/analyze)
  │
  ▼
analysis pod (RunPod GPU service)
  ├─ downloads input from Tigris S3
  ├─ runs analysis pipeline (Steps 1–5)
  └─ posts progress + artifacts to /api/internal/* on poser-web
```
### Public endpoints (poser-web)

```
POST   /api/auth/request-code
POST   /api/auth/verify-code
POST   /api/analyses/create-upload
POST   /api/analyses/{id}/confirm-upload
GET    /api/analyses
GET    /api/analyses/{id}
GET    /api/analyses/{id}/edge-similarity
GET    /api/analyses/{id}/turns
GET    /api/analyses/{id}/download/{artifact_kind}
DELETE /api/analyses/{id}
POST   /api/contact
GET    /api/embed/{partner_slug}/config
POST   /api/embed/{partner_slug}/submit
POST   /api/embed/{partner_slug}/upload-complete
GET    /api/embed/{partner_slug}/status/{analysis_id}
GET    /api/embed/{partner_slug}/feedback/{analysis_id}
GET    /api/embed/confirm?token=...
GET    /api/embed/results/{token}
```
### Internal endpoints (analysis → backend)

```
PUT  /api/internal/analysis-runs/{analysis_run_id}/progress
PUT  /api/internal/analysis-runs/{analysis_run_id}/status
POST /api/internal/analysis-runs/{analysis_run_id}/frames/batches
POST /api/internal/analysis-runs/{analysis_run_id}/turn-structure
POST /api/internal/analysis-runs/{analysis_run_id}/metrics/run
POST /api/internal/analysis-runs/{analysis_run_id}/metrics/segments
POST /api/internal/analysis-runs/{analysis_run_id}/artifacts
POST /api/internal/analyses/{id}/reprocessings   (admin one-by-one analysis re-run; requires X-Internal-Token)
```
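A sketch of the progress callback an analysis worker might issue. The `X-Internal-Token` header name comes from the list above; the payload shape and field names are assumptions:

```python
import json
from urllib.request import Request


def progress_request(base: str, run_id: int, step: int, percent: int, token: str) -> Request:
    """Build the run-scoped progress PUT (built here, not sent)."""
    req = Request(f"{base}/api/internal/analysis-runs/{run_id}/progress", method="PUT")
    req.add_header("X-Internal-Token", token)
    req.add_header("Content-Type", "application/json")
    # Hypothetical payload: which pipeline step is running and how far along.
    req.data = json.dumps({"step": step, "percent": percent}).encode()
    return req
```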
## Database Schema

Source of truth: `backend/db/structure.sql`.
Behavior contract for analysis tables: `backend/db/data-model-contract.md`.

Core runtime tables:

- `users`, `verification_codes`, `email_change_tokens`
- `analyses`
- `analysis_runs`
- `uploads` (input upload lifecycle)
- `artifacts` (artifact metadata + S3 key by kind)
- `frames` (one row per frame, run-scoped)
- `turn_segments` (run-scoped left/right turn cycles)
- `turn_transitions` (run-scoped shared transition windows)
- `metric_definitions`, `metric_values` (run/segment metric storage)
- `partners`, `embed_confirmation_tokens`
### analyses (analysis-level parent + upload source)

- lifecycle: `status` (enum int), `progress` (jsonb), `error_log`
- input metadata: `filename`, `s3_input_key`, `trim_start_seconds`, `trim_end_seconds`, `confirmed_at`
- tracking selection: `bbox_x1`, `bbox_y1`, `bbox_x2`, `bbox_y2`, `click_normalized_time`, `click_object_id`
### analysis_runs (versioned pipeline executions)

- one row per run for an analysis (`queued`/`running`/`succeeded`/`failed`)
- run metadata: `pipeline_version`, `num_frames`, `fps`, `started_at`, `finished_at`, `error`
- reruns create new rows; the latest run drives public read APIs
### frames (run-scoped per-frame scalars)

- scalar frame metrics (`frame_index`, `timestamp_ms`, COM/angulation/shin metrics, raw turn signal)
- canonical turn phase fields live here (`turn_segment_id`, `turn_transition_id`, `turn_phase_01`, `transition_phase_signed`, `in_transition_window`)
- unique key: `(analysis_run_id, frame_index)`
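One plausible reading of `turn_phase_01` (an assumption; the authoritative definition lives in `backend/db/data-model-contract.md`) is the frame's normalized position within its turn segment, clamped to [0, 1]:

```python
def turn_phase_01(frame_index: int, start_frame: int, end_frame: int) -> float:
    """Normalized progress of a frame through its turn segment, in [0, 1]."""
    span = max(end_frame - start_frame, 1)  # guard against zero-length segments
    phase = (frame_index - start_frame) / span
    return min(max(phase, 0.0), 1.0)
```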
### turn_segments (run-scoped first-class segments)

- canonical left/right turn cycles only
- stores `segment_index`, `start_frame`, `end_frame`, optional `apex_frame`
- optional `start_transition_id`/`end_transition_id`
### turn_transitions (run-scoped transition windows)

- shared left↔right transition windows between adjacent turn cycles
- stores `transition_index`, `window_start_frame`, `center_frame`, `window_end_frame`
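To picture how shared transition windows sit between adjacent turn cycles, here is a toy derivation that centers each window on the boundary between consecutive segments. The window size and centering rule are invented for illustration; the real algorithm lives in the analysis pipeline:

```python
def transitions_between(segments: list[dict], half_window: int = 3) -> list[dict]:
    """One shared transition window per adjacent pair of turn segments."""
    transitions = []
    for i in range(len(segments) - 1):
        # Center the window on the midpoint between one turn's end and the next's start.
        center = (segments[i]["end_frame"] + segments[i + 1]["start_frame"]) // 2
        transitions.append({
            "transition_index": i,
            "window_start_frame": center - half_window,
            "center_frame": center,
            "window_end_frame": center + half_window,
        })
    return transitions
```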
### metric_definitions + metric_values

- metric catalog + value rows for `run` and `segment` scopes
- `metric_values` keyed by `analysis_run_id` + metric definition, optionally linked to `turn_segments`
- enables adding new metrics without schema churn
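The catalog/value split can be pictured with a tiny in-memory model; the metric keys and row fields beyond those named in the docs are illustrative, not the actual schema:

```python
# Catalog: one row per metric; adding a metric means adding a row, not a column.
metric_definitions = {
    1: {"key": "edge_similarity", "scope": "run"},
    2: {"key": "turn_duration_s", "scope": "segment"},  # hypothetical metric
}

# Values: keyed by run + definition, optionally linked to a turn segment.
metric_values = [
    {"analysis_run_id": 42, "metric_definition_id": 1, "turn_segment_id": None, "value": 0.87},
    {"analysis_run_id": 42, "metric_definition_id": 2, "turn_segment_id": 7, "value": 1.4},
]


def values_for_run(run_id: int) -> list[dict]:
    """Join metric values to their definitions for one analysis run."""
    return [
        {**v, "key": metric_definitions[v["metric_definition_id"]]["key"]}
        for v in metric_values
        if v["analysis_run_id"] == run_id
    ]
```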
## Production & CI/CD

### Fly.io deployment

- poser-web: combined backend + frontend (static assets built into the image).
- poser-db: Fly Postgres cluster attached to `poser-web`.
- Object storage: Tigris S3-compatible bucket for uploads and artifacts.
Communication in production:
- Browser ↔ poser-web for auth, uploads, results, and the embed widget (static assets are served by poser-web).
- poser-web ↔ RunPod Serverless endpoint via `RUNPOD_ANALYSIS_ENDPOINT_ID`.
- analysis pod ↔ poser-web run-scoped internal endpoints for progress, frames, segments, metrics, and artifacts.
- poser-web ↔ poser-db for persistence.
- Both services ↔ Tigris for file storage.
### CI workflow

- Feature branches: open a PR to `main` to trigger lint + test jobs.
- Main branch: on merge, CI deploys `poser-web`; the `analysis` image/pod deploy is handled by `analysis-runpod-image`.
- Checks: Rails specs and frontend tests on `ci-cd`; analysis checks run in a dedicated workflow.
- Deploy: GitHub Actions uses `flyctl deploy` with `fly.web.toml`.
## License
This project is licensed under the MIT License - see the LICENSE file for details.
## Acknowledgments
- YOLO26 by Ultralytics
- MediaPipe by Google
- BoT-SORT tracking algorithm
- Open source computer vision community

