197 results for “topic:dbt-core”
This extension makes VS Code work seamlessly with dbt™: auto-complete, preview, column lineage, AI docs generation, health checks, cost estimation, and more
Generate the ERD as code from dbt artifacts
Make dbt great again! Extend dbt with plugins, local docs and custom adapters — fast, safe, and developer-friendly
Linter for dbt metadata
A dbt-core plugin to weave together multi-project dbt-core deployments
A dbt-core Python package that automates the management and creation of dbt groups, contracts, access, and versions.
A portable data mart and business intelligence suite built with Docker, SQLMesh + dbt-core, DuckDB, and Superset
Datailot-cli is the command-line interface to an AI teammate that helps engineers follow best practices in their SQL and dbt projects.
The dbt-toolkit is an early-stage plugin designed to enhance your experience working with dbt-core projects in JetBrains IDEs.
Scalable OLAP system for credit card transaction analysis, leveraging AWS S3, Databricks, and dbt. Features end-to-end batch processing pipeline, medallion architecture, and interactive fraud detection dashboards. Demonstrates expertise in cloud-based data engineering and advanced analytical modeling for financial data.
The warehouse-native LLM evaluation package for dbt™ - monitor AI quality without data egress
A learning-by-doing data model built with dbt-core
dbt Core MCP Server: Interact with dbt projects via Model Context Protocol
Distributed run of dbt models using Airflow
A command-line tool that helps manage dbt projects.
A production-grade data pipeline that ingests, transforms, and analyzes 5.3+ million rows of daily U.S. equity market data, focusing on Russell 3000 constituents.
An open-source tool that partially automates the migration of dbt packages to Dataform
Improving DX for Analytics Engineers
Learn how to load data, create data models, add data quality tests and documentation using dbt Core with Snowflake
A Neovim plugin with shortcuts for everyday dbt tasks.
BigData Pipeline is a local testing environment for experimenting with various storage solutions (RDB, HDFS), query engines (Trino), schedulers (Airflow), and ETL/ELT tools (dbt). It supports MySQL, Hadoop, Hive, Kudu, and more.
DataTalksClub Data Engineering Bootcamp project: Building an Efficient Batch Data Pipeline for Analytical Insights
Develop data models and dashboards
Quickstart from https://quickstarts.snowflake.com/guide/data_engineering_with_apache_airflow/
🛍️ GO Sales Data Warehouse Project: a dbt-powered data warehouse project built on the IBM GO Sales sample dataset using 🦆 DuckDB and 🐍 Python. It models the data through a layered architecture: raw, staging, detail, and mart layers.
Full-Funnel AI Marketing Analytics. A modern data stack powered by dbt MetricFlow and MCP. Natural language insights across Google/Meta Ads, CRM, and 5 data warehouses. Includes XGBoost lead scoring and a $0/mo architecture.
No description provided.
Demo data project with IaC, CI/CD, testing, and data manipulation with Terraform, Python, AWS, Airflow, dbt, and Databricks
Docker deployment of Dagster, dbt, and OpenMetadata
Learn about Data Engineering ⛏️, Data Pipeline Building 🪈, Batch Processing 🥅, and Data Streaming 🎏 with PostgreSQL, Docker, dbt, Airflow, Airbyte, Spark, and Kafka