Practice
Coded Level is a Toronto analytics studio. Small, opinionated, design-forward. We work past the dashboard, into the decision — with mid-market teams whose data has started costing them more than it returns.
Thesis
Three things break in mid-market analytics teams, in roughly the same order. Definitions drift. Dashboards lose trust. The team that built it leaves. By the time a CFO is quoting one revenue number and a CMO is quoting another, the fix isn't a new tool — it's a sharper diagnosis, a tighter contract on what each number means, and code your next analyst can read without a tour.
We take on a small number of engagements at a time, on purpose. The work shows up in the room — every metric traces back to a model, every dashboard has a named owner, every assumption is in plain English. The point is to leave you ready to hire a full-timer. Not to keep us on retainer forever.
Capabilities
The scope of the practice, in plain language. Each engagement scopes which of these matter — and which ones stay out of the SOW.
GA4, GTM, Segment, or Snowplow — wired against a written tracking plan, not whatever the last agency clicked through.
Snowflake, BigQuery, or Redshift with a dbt project structured for someone else to inherit. Tests on every model, lineage you can defend.
Power BI, Tableau, Looker, or Metabase — every panel traces to a tested mart, every metric carries a signed contract.
Multi-touch, Bayesian MMM, or geo-lift — calibrated to ground truth, not whichever model flatters the channel mix.
Cohort-economics marts, BG/NBD or survival LTV, predicted-value scoring routed to the ad platforms that act on it.
Churn, lead scoring, demand forecasting — boring baselines first, fancy models only when they demonstrably beat them.
Metric contracts, freshness SLAs, named owners. The argument moves to the warehouse, not the meeting.
Statistical baselines on canary metrics, alerts routed to a named human — not a channel everyone has muted.
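To make "statistical baselines on canary metrics" concrete, here is a minimal sketch of the kind of check we mean: a trailing z-score on one daily metric, flagging only deviations large enough to wake a named human. The function name, threshold, and sample numbers are illustrative, not a production implementation.

```python
from statistics import mean, stdev

def canary_alert(history, today, z_threshold=3.0):
    """Flag today's value if it sits more than z_threshold
    standard deviations from the trailing baseline.
    history is a list of recent daily values for one canary metric."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        # flat baseline: any movement at all is an anomaly
        return today != mu
    z = abs(today - mu) / sigma
    return z > z_threshold

# e.g. daily order counts for the last two weeks (illustrative numbers)
baseline = [102, 98, 97, 105, 101, 99, 103, 100, 96, 104, 98, 102, 101, 99]
print(canary_alert(baseline, today=100))  # in-range day: False
print(canary_alert(baseline, today=40))   # sudden drop trips the alert: True
```

The point of the threshold is exactly the one above: an alert that only fires on genuine deviations is one a named owner can afford not to mute.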
Principles
These apply regardless of engagement type. They're the rules we won't bend, even when the client asks.
Every engagement starts with a written diagnosis. If we can't tell you precisely what's broken and why in under two weeks, building the fix is guessing.
What this looks like in practice: a written audit lands before any code ships. If a client wants to skip the diagnosis, we explain why we won't — broken definitions don't get found by building on top of them.
Engagement
From the first note to the handover. No surprise process, no procurement gauntlet, no rolling scope.
A short form on /contact. Where your data function is today, what brought you here, anything else worth knowing. No pitch deck required.
If it looks like a fit, we book a call to confirm there's a real problem we can help with. If we aren't the right fit, we say so on the call and point you at someone who is; a frank no beats a bad engagement.
Deliverables named, timeline named, milestones named, price named. No verbal handshakes, no "we'll figure it out as we go." The SOW is the contract; the work is what the SOW says.
Not slides. You see the work every week: a disagreement about a definition triggers a written amendment to the metric contract, not a quiet model change. The work lives in the repo and the warehouse, not in a deck.
Repo, warehouse, dashboards, runbooks: all transferred, with a scoped clarifying-questions window written into the SOW. You end the engagement owning the work, not renting access to it.
Work with us
Tell us what's working, what isn't, and what you've already tried. If we're a fit, we'll book a call. If we aren't, we'll point you at someone who is.
See our programs
Toronto, ON · Canada · Est. 2026