Build

Wire it right.
Document it well.

You already know what's broken. The question is who actually ships the warehouse, the models, the taxonomy, the dashboards, the monitoring, and the documentation — without it dragging across two quarters and three handoffs. Build is the engagement most clients want when they call: a working, documented, defensible data layer your next hire can read on day one.

Typical engagement
6–12 weeks, scoped after Diagnose
Includes
Data Engineering · BI & Reporting

What we do

What gets delivered.

Seven defensible artifacts you own outright at handoff — the warehouse, the models, the pipelines, the taxonomy, the dashboards, the monitoring, the docs. Specific scope is set in the SOW against your audit findings.

  1. 01

    One governed warehouse you actually own.

    Snowflake, BigQuery, or Redshift — picked against your team's stack and skills, not our preference. Provisioned in your cloud account with dev/prod separation and role-based access. Every BI tool, marketing platform, and alert reads from this layer. Disagreement about a number becomes a fight at the warehouse, not in the Monday meeting.

  2. 02

    A dbt project your next analyst can read.

    Staging, intermediate, and marts — version-controlled, tested on every build, structured to the dbt Labs reference so anyone you hire next already knows the layout. Primary-key tests on every model. Referential integrity tests across the warehouse. The lineage graph lives in the repo.
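A minimal sketch of what those tests look like in a dbt schema file — the model and column names here are illustrative, not a real client's models:

```yaml
# models/marts/schema.yml — names are placeholders for illustration
version: 2
models:
  - name: fct_orders
    description: "One row per order; the grain is enforced by tests."
    columns:
      - name: order_id
        tests:
          - unique      # primary-key test: no duplicate grain
          - not_null
      - name: customer_id
        tests:
          - relationships:   # referential integrity across the warehouse
              to: ref('dim_customers')
              field: customer_id
```

These run on every build, so a duplicated key or an orphaned foreign key fails CI before it reaches a dashboard.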

  3. 03

    Ingestion pipelines with documented contracts.

    Fivetran, Airbyte, or custom Python — landed against every source named in the architecture. Each source ships with a documented schema, a freshness expectation, and a source-freshness check. When something breaks upstream, a named human gets paged. Your first full-time analyst inherits a system, not a folder of broken cron jobs.
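A freshness expectation is declared next to the source itself. A sketch, with the source and field names assumed for illustration:

```yaml
# models/staging/stripe/sources.yml — source and field names are illustrative
version: 2
sources:
  - name: stripe
    loaded_at_field: _fivetran_synced   # timestamp the connector writes on load
    freshness:
      warn_after: {count: 6, period: hour}
      error_after: {count: 24, period: hour}
    tables:
      - name: charges
```

`dbt source freshness` compares that timestamp against the thresholds, which is what turns a silently stale pipeline into a routed alert.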

  4. 04

    An event taxonomy written before tags ship.

A tracking plan on Segment Spec or Snowplow Iglu conventions — object plus action, snake_case properties, a written purpose and a named owner for every event. GTM, GA4, Segment, or Snowplow configured against the plan, not against whatever the last agency clicked through. The plan is the source of truth for what counts as a conversion, a session, or a customer.
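A convention like that can be enforced mechanically. A sketch in Python — the regex rules below assume a Segment-style plan (Title Case "Object Action" event names, snake_case property keys), and the function name is ours, not a library's:

```python
import re

# Assumed conventions for a Segment-style tracking plan:
# event names are "Object Action" in Title Case, property keys are snake_case.
EVENT_NAME = re.compile(r"^[A-Z][a-z]+( [A-Z][a-z]+)+$")  # e.g. "Order Completed"
PROP_KEY = re.compile(r"^[a-z][a-z0-9_]*$")               # e.g. "cart_value"

def validate_event(name: str, properties: dict) -> list[str]:
    """Return every convention violation for one tracked event."""
    errors = []
    if not EVENT_NAME.match(name):
        errors.append(f"event name is not 'Object Action': {name!r}")
    errors += [f"property is not snake_case: {k!r}"
               for k in properties if not PROP_KEY.match(k)]
    return errors
```

Run in CI against the plan, a check like this means a renamed event fails the build before the tag ships.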

  5. 05

    Executive dashboards your CFO will defend.

    Power BI, Tableau, Looker, or Metabase — picked against what your team will actually maintain. Three to six dashboards, eight to twelve KPIs each. Every panel traces back to a tested dbt model. Every metric is signed off against a written contract. No spreadsheet appendices, no reconciliation calls, no ‘the dashboard says one thing, the CRM says another.’
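In dbt, that traceability is recorded as an exposure — the dashboard declared in the repo with its upstream models named. A sketch, with the dashboard and owner as placeholders:

```yaml
# models/exposures.yml — dashboard name and owner are placeholders
version: 2
exposures:
  - name: executive_revenue
    label: "Executive Revenue Dashboard"
    type: dashboard
    maturity: high
    owner:
      name: "Analytics Lead"
      email: analytics@example.com
    depends_on:
      - ref('fct_orders')
      - ref('dim_customers')
```

The dashboard then appears in the dbt lineage graph, so every panel's upstream models sit next to their tests.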

  6. 06

    Monitoring routed to a named human.

    dbt source freshness with warn and error thresholds tied to the metric contract's SLAs. Model tests run on every build. Failures route to Slack and to a named owner — not a channel everyone has muted. A runbook for every alert, exercised at least once before handover.

  7. 07

    Documentation a new hire can read on day one.

    Published dbt docs, an architecture diagram, dashboard ownership table, metric glossary, and a written onboarding plan for the analyst inheriting the system. The acceptance test: a clean reader can stand up a dev environment from the docs alone in under two hours.

Method

How the work goes.

Six to twelve weeks, scoped against the Diagnose findings or a written internal audit. Weekly demos against real data — you see what we're building as we build it. Milestones are set in the SOW against the agreed scope.

  1. Weeks 1–2

    Architecture & scope

    We re-read the Diagnose report or your written audit, re-interview your sponsor and one or two power users, and turn the findings into numbered architecture decisions — warehouse, ingestion, BI tool, modeling convention, monitoring. Your sponsor signs the architecture and a v0 metric contract before any code ships.

  2. Weeks 3–4

    Foundation

    Warehouse provisioned in your cloud account with role-based access and dev/prod separation. Source connectors landed against the architecture. Staging layer modeled one-to-one against sources. CI green on every PR. By the end of week four your team can query staging in SQL and source freshness is live.

  3. Weeks 5–8

    Modeling & instrumentation

    Business-grain marts built backward from the metric contract. Event taxonomy wired through GTM and GA4 (or Segment, or Snowplow) and verified end-to-end into the warehouse. Weekly demos against real data — disagreements about a definition trigger a written amendment, not a quiet model change. We aim to land the first trusted executive dashboard inside the first half of the engagement; the specific milestone is set in the SOW against the agreed scope.

  4. Weeks 9–11

    Dashboards & monitoring

    The full dashboard suite — three to six dashboards, every panel traceable to a named dbt model and a written metric definition. Monitoring wired with warn and error thresholds against the SLAs in the metric contract. At least one freshness alert is fired and resolved per the runbook before handover. Documentation written alongside the code, not after.

  5. Week 12

    Walk-through & ownership

    Half-day walk-through with your sponsor and the inheriting team — every dashboard, every alert, every runbook, recorded. Repo, warehouse, and BI ownership transferred. A written onboarding plan handed over for the analyst inheriting the system. A scoped post-handover availability window is set in the SOW — typically a few weeks of clarifying questions on what we shipped.

Stack

What we build with.

The picks are scoped to what your team will own after handoff. Boring is a feature. New is a risk we charge for.

  • SQL
  • Python
  • dbt
  • Snowflake
  • BigQuery
  • Redshift
  • Power BI
  • Tableau
  • Looker
  • Metabase
  • Fivetran
  • Airbyte
  • Airflow
  • GTM
  • GA4
  • GitHub

Outcomes

What this looks like in practice.

Three anonymized patterns from prior work — different industries, same shape: a warehouse and a metric contract that survived a leadership change.

Mid-market retail — multi-banner

Monday reporting compressed from three days to four hours.

A multi-banner retailer running $20M of promotional spend across nine storefronts. Forty-seven competing definitions of revenue, no shared SKU spine. We landed every source in bronze, modeled a SKU spine and promo-calendar dimension, and shipped one governed Looker dashboard against a tested mart. Stack: BigQuery, Fivetran, dbt, Looker. Eleven weeks.

Digital banking — regulated

Fourteen source systems unified into one governed warehouse.

A Canadian digital bank under OSFI supervision. Sixty analysts running ad-hoc SQL against operational read-replicas, fourteen competing definitions of an active customer. We landed bronze, silver, and gold with audit lineage tagged to a signed metric contract — SCD-2 customer dimension, an event-grain fact capturing every customer-affecting interaction with stage attribution. The weekly executive funnel replaced the quarterly board deck. Stack: Snowflake, Fivetran plus custom CDC, dbt Core, Looker, Datadog.

CPG — multi-brand media

Three brands moved from spreadsheet attribution to a signed cross-channel contract.

Three brands under one holding, two CMO changes in eighteen months. We landed Meta, Google, TikTok, and DSP spend into a unified warehouse joined to first-party sales through a shared campaign dimension. Last-non-direct attribution, a modeled MMM view, and halo exclusions written into a signed attribution contract across all three brands. Survived two further CMO changes. Stack: Snowflake, Fivetran, dbt, Power BI. Eight weeks.

Next

Ready to start with Build?

Send us a note describing what your stack looks like today and where it's breaking. If you already have a Diagnose-grade audit, share it. We'll scope the build against your existing tools, your team's capacity, and the next hire you're planning to make.

© 2026 Coded Level — Toronto analytics studio. Toronto, ON · Canada