hoangsonww/LatticeDB-NextGen-DBMS
🗂️ A next-gen relational database & DBMS with ACID transactions, mergeable CRDT tables, time-travel queries, vector search, B+ Tree indexes, and differential privacy built-in. Written in C++17 with a powerful SQL engine, WAL storage, a modern GUI, and more!
LatticeDB - Modern Next-Gen Relational Database Management System 🚀
A feature-complete, high-performance RDBMS with native AI/ML support, time-travel queries, streaming statistics, built-in GUI, and enterprise security.
Important
Mission: Deliver a modern database that seamlessly integrates traditional RDBMS capabilities with cutting-edge features like vector search, temporal queries, adaptive compression, and comprehensive security - all in a single, cohesive system.
Table of Contents
- Why LatticeDB
- Feature Matrix & How It’s Different
- Architecture Overview
- Quick Start
- Start with GUI
- Zipping the DBMS
- Core Concepts & Examples
- Storage, Transactions & Replication
- SQL: LatticeSQL Extensions
- Operations & Observability
- Implementation Status
- Benchmarks
- Deployment
- GitHub Actions
- Contributing
- License
- Appendix: Glossary
Why LatticeDB
LatticeDB is a Next-Gen RDBMS built from scratch in modern C++17, combining traditional database capabilities with cutting-edge features:
- Complete SQL Support: Full SQL parser with DDL, DML, DCL, TCL - JOINs (INNER, LEFT, RIGHT), GROUP BY, aggregates (COUNT, SUM, AVG, MIN, MAX), subqueries, and CTEs all fully implemented.
- Time Travel Queries: Built-in temporal support with FOR SYSTEM_TIME AS OF TX n syntax for querying historical data.
- Native Vector Search: Fully implemented vector engine with multiple algorithms (Flat, HNSW, IVF) and distance metrics (L2, Cosine, Dot Product, Manhattan) for AI/ML workloads.
- Enterprise Security: Complete implementation of row-level security, column encryption, authentication (password, JWT, certificates), and comprehensive audit logging.
- Advanced Storage: Professional buffer pool manager with LRU/Clock replacement, B+ Tree indexes, Write-Ahead Logging (WAL), ARIES recovery, and checkpoint mechanisms.
- ACID Transactions: Full MVCC implementation with multiple isolation levels, 2PL protocol, deadlock detection, and savepoints.
- Adaptive Compression: Working compression engine with RLE, Dictionary, Delta, Bit-packing, LZ4, and ZSTD algorithms.
- Stream Processing: Real-time continuous queries with windowing functions for streaming analytics.
These features are designed to work seamlessly together in a single, unified engine—unlike many incumbents that require extensions, plugins, or external services to achieve similar functionality.
Feature Matrix & How It’s Different
LatticeDB focuses on features that major RDBMSs generally don't provide natively, out of the box, in a single engine:
| Capability | LatticeDB | PostgreSQL | MySQL | SQL Server |
|---|---|---|---|---|
| Full SQL with JOINs, GROUP BY, Aggregates | ✅ Fully Implemented | ✅ Built-in | ✅ Built-in | ✅ Built-in |
| Time Travel Queries (FOR SYSTEM_TIME) | ✅ Fully Implemented | Via extensions | Limited | Temporal tables |
| Native Vector Search (AI/ML) | ✅ Fully Implemented | pgvector extension | Via plugins | Limited |
| Row-Level Security | ✅ Fully Implemented | ✅ Built-in | Via views | ✅ Built-in |
| Column Encryption | ✅ Fully Implemented | Via extensions | TDE only | TDE + Always Encrypted |
| Adaptive Compression | ✅ Fully Implemented | Basic support | Basic | Advanced |
| Stream Processing | ✅ Fully Implemented | Via extensions | Via binlog | CDC |
| ACID with MVCC | ✅ Fully Implemented | ✅ Built-in | ✅ InnoDB | ✅ Built-in |
| Write-Ahead Logging | ✅ Fully Implemented | ✅ Built-in | ✅ Built-in | ✅ Built-in |
| B+ Tree Indexes | ✅ Fully Implemented | ✅ Built-in | ✅ Built-in | ✅ Built-in |
| Comprehensive Audit Logging | ✅ Fully Implemented | Via extensions | Limited | ✅ Built-in |
| Custom Merge Policies (CRDTs) | ✅ Fully Implemented | Third-party | Third-party | Third-party |
| Vector Search with Multiple Algorithms | ✅ Fully Implemented | pgvector extension | Via plugins | Limited |
| Built-in Web GUI | ✅ Fully Implemented | Third-party | Third-party | Third-party |
Important
Notes: These comparisons refer to native, unified features in a single engine. Many incumbents can achieve parts of this via extensions, editions, or external services, but not with the cohesive, first-class design LatticeDB provides.
Architecture Overview
The architecture is modular, with clear separation of concerns:
flowchart TD
subgraph "Client Layer"
GUI[Web GUI]
CLI[CLI Client]
SDK[SDKs]
end
subgraph "Query Engine"
PARSE[SQL Parser]
OPT[Query Optimizer]
EXEC[Executor]
end
subgraph "Storage Layer"
MVCC[MVCC Controller]
ULS[Universal Log Store]
IDX[B+ Tree & Vector Indexes]
end
subgraph "CRDT Engine"
LWW[LWW Register]
GSET[G-Set]
COUNTER[PN-Counter]
end
subgraph "Security & Privacy"
RLS[Row-Level Security]
DP[Differential Privacy]
AUDIT[Audit Logger]
end
GUI --> PARSE
CLI --> PARSE
SDK --> PARSE
PARSE --> OPT
OPT --> EXEC
EXEC --> MVCC
MVCC --> ULS
ULS --> IDX
EXEC --> RLS
EXEC --> DP
EXEC --> AUDIT
ULS --> LWW
ULS --> GSET
ULS --> COUNTER
Quick Start
Prerequisites
- C++17 toolchain (clang++/g++)
- CMake ≥ 3.15
- Linux/macOS/WSL2
- Optional: Python 3.x to run simple workload scripts
- Recommended: VS Code with Dev Containers for easy setup
Build from Source
This requires you to have the prerequisites installed. Then:
git clone https://github.com/hoangsonww/LatticeDB-NextGen-DBMS.git
cd LatticeDB-NextGen-DBMS
mkdir build && cd build
cmake .. -DCMAKE_BUILD_TYPE=Release
make -j$(nproc)
./latticedb # launches CLI interface
DevContainer (VS Code)
You do not need to install any dependencies locally. Just:
- Install VS Code and the Remote - Containers extension.
- Open the project folder in VS Code.
- When prompted, reopen in the container.
- Run (in a terminal inside the container):
cmake -S . -B build-container -DCMAKE_BUILD_TYPE=Release -G Ninja
cmake --build build-container --parallel
./build-container/latticedb --version # will show version info
./build-container/latticedb # launches CLI interface
When successful, you should see the LatticeDB CLI prompt.
Feel free to try out commands such as help, \d (list all tables), \stats (show DB stats), and more!
Hello World - Working Examples
-- Create tables with various data types
CREATE TABLE users (
id INTEGER PRIMARY KEY,
name VARCHAR(100) NOT NULL,
email VARCHAR(255) UNIQUE,
created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
CREATE TABLE orders (
id INTEGER PRIMARY KEY,
user_id INTEGER,
amount DOUBLE,
status VARCHAR(20)
);
-- Insert data
INSERT INTO users (id, name, email) VALUES
(1, 'Alice Johnson', 'alice@example.com'),
(2, 'Bob Smith', 'bob@example.com');
INSERT INTO orders VALUES
(1, 1, 99.99, 'completed'),
(2, 1, 149.50, 'pending'),
(3, 2, 75.00, 'completed');
-- JOIN queries (fully working)
SELECT u.name, COUNT(o.id) as order_count, SUM(o.amount) as total
FROM users u
INNER JOIN orders o ON u.id = o.user_id
GROUP BY u.id, u.name;
-- Aggregates (all working)
SELECT status,
COUNT(*) as count,
SUM(amount) as total,
AVG(amount) as average,
MIN(amount) as minimum,
MAX(amount) as maximum
FROM orders
GROUP BY status;
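-- CTE with a subquery (a minimal sketch; CTEs and subqueries are listed as supported above,
-- and the big_orders name is purely illustrative)
WITH big_orders AS (
    SELECT user_id, amount FROM orders WHERE amount > 100
)
SELECT u.name, b.amount
FROM users u
INNER JOIN big_orders b ON u.id = b.user_id
WHERE u.id IN (SELECT user_id FROM orders WHERE status = 'completed');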
-- Time travel query
SELECT * FROM orders FOR SYSTEM_TIME AS OF TX 5;
-- Vector search (if table has vector column)
CREATE TABLE embeddings (
id INTEGER PRIMARY KEY,
content TEXT,
vector VECTOR(768)
);
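-- Vector similarity query (a hedged sketch: uses the DISTANCE(vec, [..]) predicate described
-- in the LatticeSQL extensions section; the literal vector is truncated for illustration)
SELECT id, content
FROM embeddings
WHERE DISTANCE(vector, [0.12, 0.08, 0.91]) < 0.25;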
-- Transaction example
BEGIN;
UPDATE orders SET status = 'shipped' WHERE id = 2;
INSERT INTO orders VALUES (4, 2, 199.99, 'pending');
COMMIT;
EXIT;
Note
The REPL demonstrates the core LatticeDB concepts end-to-end in a single process. For distributed mode, use the coordinator + shard binaries (see /cmd).
Command-Line Interface (CLI)
The CLI provides an interactive shell to run SQL commands:
./latticedb
Type help for a list of commands.
Tip
Run the ./latticedb binary already present in either build or build-container to see the available options, so you don't have to rebuild manually every time.
Start with GUI
LatticeDB includes a modern web-based GUI with powerful features:
GUI Features
- 🎨 Dark/Light Mode: Full theme support with persistent settings
- 📝 SQL Editor: Monaco-based editor with syntax highlighting
- 📊 Results Visualization: Tabular view with export capabilities
- 🕐 Query History: Track and replay previous queries
- ⭐ Favorites: Save frequently used queries
- 🗂️ Schema Browser: Interactive database schema exploration
- 🎯 Mock Mode: Try LatticeDB without running the server
Running the GUI
# 1) Build and start the HTTP server:
cmake -S . -B build && cmake --build build -j
./build/latticedb_server
# 2) In a new terminal, start the GUI:
cd gui
npm install
npm run dev
# Open http://localhost:5173
Ensure your server is running before using the GUI.
Mock Mode (No Server Required)
The GUI can run standalone with sample data, just in case you don’t want to start the server:
cd gui
npm install
npm run dev
# Toggle "Mock Mode" in the UI to use sample data
GUI Preview
Important
Preview the GUI without building from source by visiting https://latticedb.vercel.app/, hosted on Vercel.
Zipping the DBMS
To create a distributable package of the built binaries and necessary files:
cd build # or build-container if using DevContainer
cmake -E tar cf ../latticedb-build.zip --format=zip .
This will create latticedb-build.zip in the parent directory, containing all compiled binaries and resources needed to run LatticeDB on another compatible system. You can then unzip and run the server or CLI directly.
Tip
Instead of manually zipping the DBMS, you can visit Releases to download pre-packaged binaries/ZIPs for your platform.
Core Concepts & Examples
Mergeable Relational Tables (MRT)
- Per-column merge policies: lww, sum_bounded(min,max), gset, and custom WASM resolvers (see the sketch after the diagram below).
- Ideal for active-active replication and edge/offline writes.
stateDiagram-v2
[*] --> LocalWrite: Write Operation
LocalWrite --> CRDTDelta: Generate Delta
CRDTDelta --> VectorClock: Update Clock
VectorClock --> Broadcast: Ship to Peers
Broadcast --> MergeResolve: Apply Policy
MergeResolve --> LWW: Last-Writer-Wins
MergeResolve --> Union: Set Union
MergeResolve --> Custom: Custom Resolver
LWW --> Apply: Persist
Union --> Apply: Persist
Custom --> Apply: Persist
Apply --> [*]
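Built-in policies can be attached directly in column definitions. A minimal sketch (the table layout and column types are illustrative; the MERGE clause forms are the ones summarized in the LatticeSQL extensions section):
CREATE TABLE tickets (
  id UUID PRIMARY KEY,
  status TEXT MERGE lww,                         -- concurrent updates resolve to the last writer
  upvotes INTEGER MERGE sum_bounded(0, 1000000), -- additive merges, clamped to the given bounds
  tags TEXT MERGE gset,                          -- grow-only set semantics (column type assumed)
  note TEXT                                      -- resolved by the custom WASM resolver below
);
Replicas that accept writes independently converge according to these per-column policies once deltas are exchanged.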
Example
-- Add a custom WASM resolver for notes (pseudo)
CREATE MERGE RESOLVER rev_note LANGUAGE wasm
AS 'wasm://org.example.merges/resolve_rev_note@1.0';
ALTER TABLE tickets
ALTER COLUMN note SET MERGE USING rev_note;
Bitemporal Time Travel & Lineage
- Every row carries tx_from/tx_to and valid_from/valid_to.
- Ask: "What did we believe on Aug 10?" vs. "What was valid on Aug 10?"
sequenceDiagram
participant Q as Query
participant P as Parser
participant T as Temporal Rewriter
participant I as Temporal Index
participant E as Executor
Q->>P: FOR SYSTEM_TIME AS OF timestamp
P->>T: Extract temporal predicates
T->>I: CSN range lookup
I-->>E: Pruned row versions
E-->>Q: Snapshot-consistent results
Example
-- Snapshot by transaction time
SELECT * FROM orders FOR SYSTEM_TIME AS OF '2025-08-10T13:37:00Z' WHERE id=42;
-- Correct valid time retroactively
UPDATE orders VALID PERIOD ['2025-07-01','2025-07-31') SET status='canceled' WHERE id=42;
-- Why did it change?
SELECT lineage_explain(orders, 42, '2025-08-10T13:37:00Z');
Policy-as-Data & Differential Privacy
- Declarative policies stored in the catalog; enforced in the planner and executor.
- RLS/CLS, masking, retention, and ε-budgeted differentially private aggregates.
flowchart TD
U[User/Service] --> AUTH[Authentication]
AUTH --> CTX[Security Context]
CTX --> POL[Policy Engine]
POL --> RLS[Row-Level Security]
POL --> CLS[Column-Level Security]
POL --> DP[Differential Privacy]
RLS --> PLAN[Query Plan]
CLS --> PLAN
DP --> BUDGET[Epsilon Budget]
BUDGET --> PLAN
PLAN --> EXEC[Executor]
EXEC --> AUDIT[Audit Logger]
Example
CREATE POLICY ssn_mask
ON people AS COLUMN (ssn)
USING MASK WITH (expr => 'CASE WHEN has_role(''auditor'') THEN ssn ELSE sha2(ssn) END');
CREATE POLICY dp_count_sales
ON sales AS DP USING (epsilon => 0.5, sensitivity => 1);
SET DP_EPSILON = 0.5;
SELECT DP_COUNT(*) FROM sales WHERE region='NA';
Vectors & Semantic Joins
- Built-in VECTOR<D> columns and ANN indexes (HNSW/IVF plugins).
- Optimizer uses vector distance prefilters before relational joins.
CREATE TABLE items(
id UUID PRIMARY KEY,
title TEXT,
embedding VECTOR<768> INDEX HNSW (M=32, ef_search=64)
);
SELECT o.id, i.title
FROM orders o
JOIN ANN items ON distance(o.query_vec, items.embedding) < 0.25
WHERE o.status = 'open';
Streaming Materialized Views
- Exactly-once incremental MVs consuming table CDC or external logs (Kafka/Pulsar).
- Backfill and catch-up integrate with temporal indexes.
flowchart LR
subgraph Sources
CDC[Table CDC]
KAFKA[Kafka Stream]
end
subgraph Processing
DECODE[Decoder]
WINDOW[Window Assignment]
AGG[Incremental Aggregation]
end
subgraph Output
STATE[State Store]
MV[Materialized View]
end
CDC --> DECODE
KAFKA --> DECODE
DECODE --> WINDOW
WINDOW --> AGG
AGG --> STATE
STATE --> MV
Example
CREATE MATERIALIZED VIEW revenue_daily
WITH (refresh='continuous', watermark = INTERVAL '1 minute')
AS
SELECT DATE_TRUNC('day', ts) d, SUM(amount) amt
FROM STREAM OF payments
GROUP BY d;
CALL mv.backfill('revenue_daily', source => 'payments_archive', from => '2025-01-01');
Storage, Transactions & Replication
LatticeDB uses a Unified Log-Structured Storage (ULS):
- Append-friendly row pages (OLTP), columnar projections (OLAP).
- Temporal pruning with min/max and validity intervals.
- B+Tree/ART for point/range, inverted indexes for JSON, ANN for vectors.
- WAL, checksums, compression, and envelope encryption.
flowchart LR
subgraph "Storage Engine"
WAL[Write-Ahead Log]
MEM[MemTable]
L0[L0 SSTables]
L1[L1 SSTables]
L2[L2 SSTables]
end
subgraph "Indexes"
BT[B+ Tree]
VEC[Vector HNSW]
BLOOM[Bloom Filter]
end
WAL --> MEM
MEM --> L0
L0 --> L1
L1 --> L2
L0 --> BT
L0 --> VEC
L0 --> BLOOM
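The index types above are exposed through DDL. A minimal sketch, assuming conventional CREATE INDEX syntax (standalone index DDL isn't shown elsewhere in this README):
-- Secondary B+ tree index for point and range lookups (syntax assumed)
CREATE INDEX idx_orders_user ON orders(user_id);
-- ANN indexes are declared inline on VECTOR columns, as shown in the Vectors & Semantic Joins section.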
Transactions & Consistency
- MVCC with serializable option.
- Deterministic lane batches high-conflict transactions (Calvin-style).
- Causal+ snapshots with bounded staleness hints.
sequenceDiagram
participant C as Client
participant P as Parser
participant O as Optimizer
participant E as Executor
participant M as MVCC
participant S as Storage
participant A as Audit
C->>P: SQL Query
P->>O: AST + Policies
O->>E: Physical Plan
E->>M: Begin TX (CSN)
M->>S: Read/Write at CSN
S->>S: Apply CRDT Merge
S-->>M: Result Set
M-->>E: Versioned Data
E->>A: Log Access
E-->>C: COMMIT (txid)
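Example (a minimal sketch, assuming standard SQL isolation-level and savepoint syntax; both features are listed as supported in this README):
BEGIN;
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;  -- opt into the serializable MVCC mode
UPDATE orders SET status = 'completed' WHERE id = 2;
SAVEPOINT before_insert;
INSERT INTO orders VALUES (5, 2, 49.99, 'pending');
ROLLBACK TO SAVEPOINT before_insert;           -- undo the insert, keep the update
COMMIT;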
SQL: LatticeSQL Extensions
- MERGE policies in column definitions (MERGE lww, MERGE sum_bounded(a,b), MERGE gset).
- FOR SYSTEM_TIME AS OF for time travel; VALID PERIOD [from,to).
- DP_COUNT(*) and other DP aggregates (with SET DP_EPSILON).
- VECTOR<D> with DISTANCE(vec, [..]) predicates.
- Streaming STREAM OF sources in CREATE MATERIALIZED VIEW.
Note
LatticeSQL is a strict superset of a familiar ANSI subset—with new temporal, merge, vector, DP, and streaming constructs.
Operations & Observability
- Resource groups with workload classes (OLTP/Analytics/Vector/Streaming).
- Admission control, plan shaping, and graceful degradation under pressure.
- End-to-end tracing, metrics, and lineage/audit explorer.
flowchart TD
IN[Incoming Queries] --> CLASS[Classifier]
CLASS -->|OLTP| RG1[Low-Latency]
CLASS -->|Analytics| RG2[Throughput]
CLASS -->|Vector| RG3[Cache-Heavy]
CLASS -->|Streaming| RG4[Deadline]
RG1 --> ADM[Admission Controller]
RG2 --> ADM
RG3 --> ADM
RG4 --> ADM
ADM --> RUN[Executors]
Implementation Status
Core Components
- ✅ SQL Parser: Full recursive descent parser with LatticeSQL extensions
- ✅ Storage Engine: Page-based disk manager with buffer pool management
- ✅ Index Structures: B+ tree implementation with iterator support
- ✅ Query Processing: Query planner and executor framework
- ✅ Type System: Support for basic types, vectors, sets, and CRDTs
- ✅ Transaction Management: MVCC foundation with isolation levels
- ✅ REPL Interface: Interactive command-line shell (288 lines)
- ✅ HTTP Server: REST API bridge for web clients (118 lines)
- ✅ Web GUI: Modern React/TypeScript interface with Monaco editor
Test Coverage
Our comprehensive test suite includes:
- Temporal Operations: Time-travel queries, bitemporal support
- CRDT Merging: LWW, G-Set, counters, bounded sums
- Vector Search: Distance queries, similarity operations
- Differential Privacy: DP aggregates with epsilon budgets
- Transactions: ACID properties, isolation levels, savepoints
- Complex Joins: All join types, multi-table operations
- Schema Evolution: Online DDL, constraints, computed columns
- Streaming: Materialized views, windowing functions
- Security: Row/column-level security, audit logging
Run tests with:
cd tests
./run_all.sh
Benchmarks
Preliminary benchmarks show competitive performance on OLTP workloads compared to PostgreSQL and MySQL, with room for optimization in OLAP and vector search scenarios.
Visit the benchmarks file for details.
Run the native suite from the repository root:
cmake -S . -B build -DCMAKE_BUILD_TYPE=Release # configure (skip if build/ already exists)
cmake --build build --target latticedb_bench --config Release
./build/latticedb_bench
Add -v to show per-iteration progress and --vector to include the optional vector-search benchmarks. Each run prints a summary table in the terminal and writes a benchmark_results.csv artifact in the project root.
Deployment
Wanna run LatticeDB in production? No problem! LatticeDB provides production-ready deployment configurations for major cloud providers and HashiCorp stack, enabling you to deploy your database with enterprise-grade reliability, security, and scalability.
Built-in Monitoring & Observability
All LatticeDB deployments include integrated Prometheus and Grafana monitoring out of the box:
⚡ Prometheus Metrics Collection
- Application Metrics: Request rates, response times, error rates, database performance
- Infrastructure Metrics: CPU, memory, disk, network utilization across all platforms
- Service Discovery: Automatic discovery of LatticeDB instances and supporting services
- Multi-tenant: Environment-specific monitoring (dev/staging/prod)
📈 Grafana Dashboards
- LatticeDB Dashboard: Query performance, connection pools, transaction metrics
- Infrastructure Dashboard: System resources, container metrics, storage utilization
- Platform-specific Dashboards: AWS CloudWatch, Azure Monitor, GCP Cloud Monitoring integration
- HashiCorp Stack Dashboard: Consul, Vault, Nomad cluster health
🚨 Intelligent Alerting
- Critical Alerts: Service down, high error rates, database failures (PagerDuty)
- Warning Alerts: Performance degradation, resource pressure (Slack)
- Info Alerts: Deployment events, scaling activities (Email)
# Access monitoring dashboards
# AWS: kubectl port-forward svc/grafana 3000:3000
# Azure: az containerapp browse --name grafana
# GCP: gcloud run services proxy grafana --port=3000
# HashiCorp: consul connect proxy -service grafana -upstream grafana:3000
Ready-to-use monitoring for production workloads with zero configuration required! 🎯
Tip
See MONITORING.md for detailed setup, customization, and alerting instructions.
Supported Platforms
| Platform | Service | Features | Best For |
|---|---|---|---|
| ☁️ AWS | ECS Fargate + ALB | Auto-scaling, EFS storage, CloudWatch | Enterprise workloads |
| 🌐 Azure | Container Apps | Serverless, Auto-scaling, Azure Files | Modern cloud-native apps |
| 🚀 GCP | Cloud Run + Cloud SQL | Serverless, Global load balancer | Global applications |
| 🏗️ HashiCorp | Nomad + Consul + Vault | Service mesh, Secret management | On-premises/hybrid |
Quick Deploy
Choose your preferred platform and deploy with a single command:
# AWS - Deploy to ECS Fargate with auto-scaling
cd aws && ./deploy.sh
# Azure - Deploy to Container Apps
cd azure && ./deploy.sh
# GCP - Deploy to Cloud Run (serverless)
export GCP_PROJECT=your-project-id
cd gcp && ./deploy.sh
# HashiCorp Stack - Deploy to Nomad with service mesh
cd hashicorp && ./deploy.sh
What You Get
Each deployment provides:
- 🔄 Auto-scaling: Scales based on CPU, memory, and request load
- 🔒 Security: Network isolation, secrets management, TLS encryption
- 📊 Monitoring: Comprehensive logging, metrics, and alerting
- 💾 Persistence: Durable storage with automated backups
- 🌐 Load Balancing: High availability with health checks
- 📱 Management: Web UIs and CLI tools for operations
- 🛠️ CI/CD: Ready for integration with your pipelines
- 🧩 Extensible: Customize with Terraform variables and modules
Production Features
- Infrastructure as Code: All deployments use Terraform
- CI/CD Ready: Includes Travis CI and Jenkins configurations
- Multi-Environment: Development, staging, and production configs
- Cost Optimized: Auto-scaling and resource optimization
- Security Hardened: Following cloud security best practices
Example: Deploy to AWS
cd aws
# Customize your deployment
cp terraform.tfvars.example terraform.tfvars
# Edit terraform.tfvars with your AWS region, domain, etc.
# Deploy with monitoring and auto-scaling
./deploy.sh
# Your LatticeDB is now running at:
# https://your-domain.com (or load balancer DNS)
CI/CD Integration
Automated deployments with:
- Travis CI: Multi-cloud parallel deployments
- Jenkins: Comprehensive pipeline with testing
- GitHub Actions: Build, test, and deploy on push/PR
Tip
For complete deployment instructions, troubleshooting, and advanced configurations, see DEPLOYMENT.md.
GitHub Actions
In addition to local builds & Travis/Jenkins, we also use GitHub Actions for CI/CD:
The workflow automatically builds and tests the code on every push and pull request, ensuring code quality and stability.
Contributing
We ❤️ contributions! Ways to help:
- Tackle issues labeled good-first-issue or help-wanted.
- Add merge resolvers and UDFs in WASM.
- Extend the optimizer (cardinality models, join ordering, vector pushdowns).
- Improve docs—especially temporal/lineage tutorials.
Note
Please run clang-format with cmake --build build-container --target format before submitting PRs! Replace build-container with your build directory if different.
License
Unless stated otherwise in the repository, LatticeDB is released under the MIT License. See LICENSE for details.
Appendix: Glossary
- MRT — Mergeable Relational Table (CRDT-backed).
- Bitemporal — Tracks both transaction time (what the DB believed) and valid time (what was true in the domain).
- Causal+ — Causal consistency with convergence guarantees.
- TEE — Trusted Execution Environment (SGX/SEV-SNP).
Bonus: ER Model for Governance & Provenance
erDiagram
TABLE ||--o{ ROW_VERSION : has
POLICY ||--o{ POLICY_BINDING : applies_to
ROW_VERSION }o--|| LINEAGE_EVENT : derived_from
USER ||--o{ QUERY_SESSION : initiates
QUERY_SESSION ||--o{ AUDIT_LOG : writes
TABLE ||--o{ INDEX : has
TABLE ||--o{ MATERIALIZED_VIEW : source
TABLE {
uuid table_id PK
string name
json schema_versions
json merge_policy
string crdt_type
interval retention_period
}
ROW_VERSION {
uuid row_id
uuid table_id FK
bigint csn_min
bigint csn_max
timestamptz valid_from
timestamptz valid_to
jsonB data
jsonB provenance
vector_clock merge_clock
}
POLICY {
uuid policy_id PK
string name
string type "RLS|CLS|DP|MASK"
json spec
float epsilon_budget
}
INDEX {
uuid index_id PK
uuid table_id FK
string type "BTREE|HNSW|BITMAP|BLOOM"
json config
}
MATERIALIZED_VIEW {
uuid view_id PK
string name
string refresh_type
interval refresh_interval
}
Why LatticeDB vs. “Big Three”
Important
LatticeDB natively combines CRDT mergeability, bitemporal & lineage, policy-as-data with differential privacy, streaming MVs, vector search, and WASM extensibility into the core engine—so you can build offline-tolerant, audited, privacy-preserving, real-time apps without stitching together sidecars, plugins, and external services.
Thank you for exploring LatticeDB! We’re excited about the future of databases and would love to hear your feedback and contributions.