Audit. Build. Run. — What Actually Happens.
The three-word version of the method is on the homepage. The thirty-page version lives inside our client folder. This is the middle version — long enough to be honest about what happens, short enough that you can read it in a sitting and decide if you want the deeper conversation.
To make it concrete, we will walk through it with a composite client. Call him D. He is 41, owns a fitness business doing somewhere in the high six figures, lifts five days a week, has trained seriously for nine years, ran a meet two years ago, and has been chasing the same squat number for fourteen months. He sleeps 6.5 hours on a good night, drinks two beers on Fridays, travels for one event a quarter, and has a four-year-old. He emailed us in March. Below is how each phase ran for him. Names and details are altered — the shape is real.
Why we don't start with a plan
The first instinct most coaches have when a lifter shows up is to write a program. That is the part of the job that looks like the job. It is also where most engagements quietly fail, because the program is being written against a guess about the bottleneck rather than a measured one.
When D got on the discovery call, his self-diagnosis was that his programming had gone stale and he needed more intensity. He had been running a high-volume hypertrophy block for the previous twelve weeks. He wanted a strength block. A coach without an Audit phase would have written it, charged him, and watched him plateau again twelve weeks later for the same underlying reason.
We refuse to write a plan in the first week. The cost of refusing is friction with people who want action. The benefit is that we stop solving the wrong problem.
What the Audit actually surfaces
Day 1 through Day 10 of D's engagement was the Audit. He sent us nine months of training logs from his app of choice, exported as CSV. He filled out an 80-question intake covering training history, current programming, recovery patterns, nutrition, lifestyle stressors, prior coaches, prior injuries, and the gap between his stated goals and his last 12 weeks of behavior. He gave us read access to his wearable for the previous 90 days. We pulled HRV trend, sleep duration, sleep consistency, resting heart rate, and respiratory rate variance.
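The HRV trend we pull is, at its simplest, a rolling average over the daily export. A minimal sketch of that step, assuming a CSV with hypothetical `date` and `hrv_ms` columns (real exports vary by device):

```python
import csv
from statistics import mean

def hrv_trend(csv_path, window=7):
    """Return (date, rolling-mean HRV) pairs from a daily wearable export.

    Assumes one row per day with ISO-format 'date' and numeric 'hrv_ms'
    columns; both names are illustrative, not a specific vendor's schema.
    """
    with open(csv_path, newline="") as f:
        # ISO dates sort correctly as strings
        rows = sorted(csv.DictReader(f), key=lambda r: r["date"])
    values = [float(r["hrv_ms"]) for r in rows]
    return [
        # mean of the trailing `window` days, ending at day i
        (rows[i]["date"], round(mean(values[i - window + 1 : i + 1]), 1))
        for i in range(window - 1, len(values))
    ]
```

A 7-day window is the common default for smoothing out day-to-day HRV noise; the point is the trend line, not any single morning's number.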
We held a 60-minute Audit call on Day 4. Most of the call was him talking and us writing. Twenty minutes in, the actual bottleneck surfaced in passing. He mentioned, without thinking it was important, that he had not taken a deload in nine months because he kept feeling like he was about to break through. He also mentioned that his Friday training session was almost always after a Thursday business dinner, and that he had been skipping the morning readiness check for about a year because he could not remember why he started doing it.
Day 8, we delivered the Audit document. Eleven pages. Three prioritized gaps. The first was recovery debt — nine months of accumulated under-recovery driving the plateau and, more importantly, driving the perceived need for more intensity. The second was a measurement gap — he was capturing inputs but not aggregating them weekly, so the plateau had been invisible to him for months. The third was Friday session quality — running fatigued top sets after a sleep-disrupted Thursday and unintentionally calibrating the rest of his week off bad data.
His own diagnosis of "need more intensity" was nowhere in the top three. That was the whole point.
What "Build" means — and what gets cut
Day 11 through Day 18 was the Build. The product was a 22-page operating document. It contained four things.
A 12-week programming block, structured as a moderate-intensity strength block with a forced deload at week four. Top set load is gated by a readiness check on the morning of the session. Friday session is moved to Saturday morning for the duration of the block. Total weekly volume is down about fifteen percent from his previous programming, which felt wrong to him until we walked through the recovery numbers.
A recovery protocol with a defined sleep window, a wind-down sequence, a morning readiness check (HRV trend, subjective score, sleep quality), and a rule for what readiness scores trigger which session modifications. No new wearable. No new app. The data was already in his existing tools; he just was not reading it.
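The gating rule itself is small enough to write down. A sketch of the shape it takes, with illustrative thresholds and modifications (not D's actual numbers):

```python
def gate_top_set(hrv_vs_baseline_pct: float, subjective: int, sleep_hours: float) -> str:
    """Map a morning readiness check to a session modification.

    hrv_vs_baseline_pct: today's HRV as % deviation from the rolling baseline
    subjective: self-rated readiness, 1 (wrecked) to 5 (great)
    sleep_hours: last night's sleep duration

    Thresholds below are hypothetical examples, not a prescription.
    """
    flags = 0
    if hrv_vs_baseline_pct < -10:  # HRV well below baseline
        flags += 1
    if subjective <= 2:            # feels rough
        flags += 1
    if sleep_hours < 6:            # short night
        flags += 1

    if flags == 0:
        return "run session as written"
    if flags == 1:
        return "hold top-set load; no increase today"
    return "drop top set; volume work at reduced load"
```

The design choice that matters is that the rule is written down before the block starts, so the decision on a rough morning is a lookup, not a negotiation.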
A measurement loop — daily inputs, weekly aggregates, monthly recalibration, quarterly recompete. A Google Sheet, not a proprietary platform. (See the Measurement Stack article for the full structure.)
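The daily-to-weekly step of that loop is the part most lifters skip, and it is mechanically trivial. A minimal sketch, assuming illustrative field names (the actual sheet's columns may differ):

```python
from datetime import date
from statistics import mean

def weekly_aggregate(daily_rows):
    """Roll daily readiness rows up into per-ISO-week averages.

    daily_rows: list of dicts with keys 'date' (datetime.date),
    'hrv', 'sleep_hours', and 'readiness'. Field names are
    illustrative, not a fixed schema.
    """
    weeks = {}
    for row in daily_rows:
        key = row["date"].isocalendar()[:2]  # (year, ISO week number)
        weeks.setdefault(key, []).append(row)
    return {
        key: {
            "avg_hrv": round(mean(r["hrv"] for r in rows), 1),
            "avg_sleep": round(mean(r["sleep_hours"] for r in rows), 2),
            "avg_readiness": round(mean(r["readiness"] for r in rows), 2),
            "days_logged": len(rows),  # adherence signal in its own right
        }
        for key, rows in weeks.items()
    }
```

In practice this lives as a few formulas in the Google Sheet rather than code; the point is that the weekly aggregate exists at all, because that is the view where a plateau stops being invisible.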
An accountability cadence: weekly written check-in by Sunday 9 PM, response from us by Tuesday 9 AM, monthly call on the first Tuesday, quarterly recompete document at week 12.
Things that did not make it into the Build: a meal plan, a supplement stack, a peptide protocol, a body fat target, a max-out attempt schedule. He asked about all five at various points. The Audit had not flagged any of them as the leverage. We declined to write them. This is the part of the practice that frustrates new clients and saves the engagement.
The Build kicked off with a 75-minute onboarding call. We walked through every page of the document, set up the measurement sheet, ran his first readiness check live, and answered questions. Day 18, he started Week 1.
How "Run" prevents the slow drift that kills most engagements
Run is the unglamorous ninety percent. It is also where most independent lifters and most coach-client relationships quietly fail, because the failure is invisible week to week and obvious only at the three-month mark.
D's first six weeks looked like this. Weekly check-ins on Sunday — a structured form covering sessions completed, top-set RPE distribution, average sleep, readiness scores, subjective recovery, nutrition adherence, and one open field for anything off-pattern. Tuesday morning he had a written response from us: what we observed in his numbers, what one adjustment was being made for the upcoming week (raise top-set load by 5 lbs / hold volume / move Friday accessory work to Saturday), and a short note acknowledging whatever was off-pattern.
In week three he asked to skip the forced deload. We declined. He took the deload. In week six he hit a new top set, the first in fourteen months. He almost did not notice because he was so focused on the long-term arc.
Week eight, we had the first monthly recalibration call. Thirty-eight minutes. We looked at the four-week arc: planned versus actual volume, top-set progression, readiness trend, behavioral adherence. We made two small changes — increased the squat frequency from two to three sessions per week, and added a Wednesday mobility block. We did not make five changes. The discipline of changing as little as the data requires is the rule.
Week twelve, we ran his quarterly recompete document. Eight pages. It compared his current data against the Day 1 Audit. The three priority gaps had each closed measurably. Recovery debt was paid down; HRV trend was up eleven percent from baseline. The measurement loop was running on its own; he had not missed a Sunday review in twelve weeks. Friday session quality, now a Saturday session, was the highest-quality session of his week. His squat top set was up forty pounds.
What he wrote in the recompete intake field was the line we hear most often from clients who stay: "I cannot believe I tried to fix this with more intensity for fourteen months."
Where this falls apart (and when to walk away)
The method does not work in every case. It is worth being honest about where.
It does not work if the client cannot commit a real thirty minutes on Sunday for the review. The cadence is the system. A client who is too busy for the Sunday review is too busy for the engagement. We end engagements over this, and the people we end them for are usually relieved.
It does not work if the client is fundamentally not ready to be coached — wants validation rather than feedback, treats the weekly check-in as a confessional, or argues with the data when the data is inconvenient. We catch most of this on the discovery call and decline the engagement before any money changes hands.
It does not work for beginners. If you have fewer than two years of consistent training, you do not need a custom system. You need a free program, time, and a willingness to be patient. We will tell you this on the discovery call and recommend three free programs you can run instead. This costs us business and it is the right answer.
And it does not work if the Audit reveals that the actual bottleneck is something we cannot help with — an unmanaged injury, a clinical sleep issue, a nutrition disorder, a relationship or work situation that needs different professional support first. When that happens, the engagement does not start. If the Audit reveals you don't need us, we refund and recommend an alternative. That is the 14-day window in the refund policy, and we honor it.
That is the practice. It is mostly boring. The boring part is the point.