Sales Velocity Engine
Module 6 is the revenue acceleration layer of the operating system. It converts clean pipeline data into velocity metrics that determine funding tier progression in Module 8 and receives quality signals from Module 5's client health data to prioritize the highest-probability leads.
Module 6 receives upstream signals from three modules. Module 1's Vision Canvas feeds a direct routing signal: red on pipeline accuracy or forecast reliability activates hardened CRM validation rules and compresses follow-up cadences. Module 5 (Client Success Loop) provides expansion lead data, which enters Module 6 as the highest-conversion pipeline source. Module 4 (Integrated Tech Stack) supplies the clean CRM data that pipeline hygiene depends on. Dirty data at the integration layer means dirty data in the pipeline.
Downstream, Module 6's velocity metrics feed into Module 8 (Agile Capital Allocation), where Pipeline Value Velocity (PV2) determines whether revenue-side gate KPIs are met for funding tier advancement. Module 6's AI lead scoring and proposal generation components are governed by Module 7 (AI Deployment Canvas).
A standalone sales optimization program cleans the pipeline and measures velocity. Module 6 cleans the pipeline, measures velocity, and routes the results into capital allocation decisions while receiving quality signals from client success and integrity checks from the integration hub.
Why Pipeline Leaks Revenue
Up to 25% of pipeline value leaks through dirty data, inconsistent follow-up, and manual inefficiencies. The money is there. It escapes through operational gaps.
Pattern 1: Dirty data destroys forecast accuracy. Opportunities without close dates, deals stalled in the same stage for 30 or more days, and missing next-step fields make it impossible to predict revenue. The leadership team makes capital allocation decisions based on a pipeline number that is 20 to 40% inflated. Module 8's funding gates need accurate revenue projections. Dirty pipeline data means wrong funding decisions.
Pattern 2: Follow-up speed determines win rate. Studies consistently show that the first vendor to respond to an inbound lead wins 35 to 50% of the time. Most mid-market sales teams respond in hours. The winning teams respond in minutes. The gap between "good enough" and "fastest" is the gap between average and exceptional win rates.
Pattern 3: Manual proposals waste seller time. The average B2B proposal takes 45 minutes to an hour of manual assembly. Sales teams that produce 20 proposals per week lose 15 to 20 hours to document assembly. That time comes directly from selling hours.
Module 6 addresses all three patterns through pipeline hygiene rules (clean data), a proactive follow-up cadence (speed), and AI-assisted proposal generation (efficiency).
Pipeline Hygiene Rules
Four core rules
Rule 1: Mandatory Close Date. Every opportunity must have a defined close date. Simple on the surface, essential for any reliable forecast. Opportunities without close dates are invisible to Module 8's revenue projections.
Rule 2: Stage Aging Threshold. A hard limit on the maximum time a deal can spend in a specific stage. Discovery to Proposal: 14 days maximum. Proposal to Decision: 21 days maximum. Deals that exceed thresholds get flagged automatically. One PE-backed SaaS company improved forecast accuracy from 62% to 89% after enforcing this single rule.
Rule 3: Next-Step Field. Must be populated immediately upon any stage change. An opportunity without a next step is an opportunity without momentum. The field ensures continuous action and prevents deals from stalling silently.
Rule 4: Lost Reason Code. Required within 24 hours of close-lost. This data feeds back into Module 1's quarterly diagnostic. Patterns in lost reason codes reveal systemic issues: pricing objections may signal a Module 8 problem (wrong funding tier for the market), product gaps may signal a Module 5 problem (client success not validating product-market fit).
CRM enforcement
These are not guidelines. CRM validation rules block the save when any rule is violated. A sales rep cannot advance an opportunity without a close date, a next step, or compliance with stage aging thresholds. A nightly script flags stale opportunities and triggers a manager notification.
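A minimal sketch of how the four rules could be expressed as save-time validation. The opportunity dict shape, field names, and stage labels are illustrative assumptions, not tied to any particular CRM:

```python
from datetime import date

# Illustrative stage-aging limits taken from Rule 2 above
STAGE_AGING_LIMITS = {"Discovery": 14, "Proposal": 21}

def validate_opportunity(opp, today):
    """Return a list of rule violations; an empty list means the save may proceed."""
    violations = []
    # Rule 1: every opportunity needs a defined close date
    if not opp.get("close_date"):
        violations.append("Rule 1: close date is mandatory")
    # Rule 2: hard limit on time spent in the current stage
    limit = STAGE_AGING_LIMITS.get(opp.get("stage"))
    if limit is not None:
        age = (today - opp["stage_entered"]).days
        if age > limit:
            violations.append(f"Rule 2: {age} days in {opp['stage']} exceeds {limit}-day limit")
    # Rule 3: next-step field must be populated
    if not opp.get("next_step"):
        violations.append("Rule 3: next-step field must be populated")
    # Rule 4: lost reason code required on close-lost
    if opp.get("status") == "closed_lost" and not opp.get("lost_reason"):
        violations.append("Rule 4: lost reason code required")
    return violations
```

In a real CRM these checks would run as server-side validation rules; the nightly stale-deal script would reuse the same Rule 2 logic over all active opportunities.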
The Hygiene Score
A dedicated Sales Ops Data Steward runs a weekly Hygiene Score audit. The score is the ratio of clean fields to total fields across all active opportunities, and it is itself a Module 3 KPI. If the Hygiene Score drops below the amber threshold, Module 3's Variance Alert Engine routes the signal to the sales leadership team for intervention.
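The ratio itself is simple to compute. A sketch, assuming an illustrative set of required fields (the actual field list would come from the hygiene rules configured in the CRM):

```python
# Illustrative required-field set; the real list mirrors the CRM validation rules
REQUIRED_FIELDS = ["close_date", "next_step", "stage", "amount"]

def hygiene_score(opportunities):
    """Ratio of populated required fields to total required fields
    across all active opportunities. 1.0 means a fully clean pipeline."""
    total = len(opportunities) * len(REQUIRED_FIELDS)
    if total == 0:
        return 1.0  # an empty pipeline has nothing dirty in it
    clean = sum(1 for opp in opportunities for f in REQUIRED_FIELDS if opp.get(f))
    return clean / total
```

The weekly audit would compare this number against the amber threshold before routing any alert to Module 3.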
How Module 1 adjusts the rules
When Module 1's Vision Canvas shows red on pipeline accuracy, Module 6 activates hardened validation. Stage aging thresholds compress (14 days becomes 10). The Hygiene Score threshold tightens (amber triggers at 5% deviation instead of 10%). Follow-up cadences accelerate. The system becomes more demanding because the diagnostic identified the pipeline as unreliable.
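The two operating modes can be captured as threshold sets that the red-flag path switches between. A sketch; the text specifies Discovery compressing from 14 to 10 days and amber tightening from 10% to 5%, and everything else here is an assumed structure:

```python
# Standard thresholds from the hygiene rules; hardened values per the
# Module 1 red-flag path (Discovery 14 -> 10 days, amber 10% -> 5%).
STANDARD = {"discovery_max_days": 14, "amber_deviation": 0.10}
HARDENED = {"discovery_max_days": 10, "amber_deviation": 0.05}

def active_thresholds(pipeline_accuracy_flag):
    """Select the hardened rule set when Module 1's Vision Canvas
    flags pipeline accuracy red; otherwise run standard rules."""
    return HARDENED if pipeline_accuracy_flag == "red" else STANDARD
```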
Proactive Follow-Up Cadence
Architecture
The follow-up cadence is a sequenced series of touches designed to engage buyers before the competition.
Day 0: Qualification call recap email summarizing what was discussed and next steps.
Day 2: Pain-point resource share tailored to the buyer's specific challenge.
Day 5: Low-friction voicemail and email.
Day 8: Social proof case study matching the buyer's industry or company size.
Day 14: Break-up email that creates a decision point.
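The cadence above can be expanded into concrete dates per lead. A minimal sketch, assuming the sequence anchors on the qualification call date:

```python
from datetime import date, timedelta

# Touch sequence from the cadence above; offsets are days after the qualification call
CADENCE = [
    (0, "Qualification call recap email"),
    (2, "Pain-point resource share"),
    (5, "Low-friction voicemail and email"),
    (8, "Social proof case study"),
    (14, "Break-up email"),
]

def schedule_cadence(start):
    """Expand the cadence into dated touches for one lead."""
    return [(start + timedelta(days=offset), touch) for offset, touch in CADENCE]
```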
Enhancements
Time-zone smart send. An AI model detects the recipient's local time zone and schedules delivery for 8:17 AM recipient time. Odd minutes outperform round numbers in open rate testing.
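Once the recipient's time zone is known (the AI detection step is out of scope here), scheduling the 8:17 AM local send is straightforward with the standard library. A sketch assuming an IANA zone key as input:

```python
from datetime import datetime, time
from zoneinfo import ZoneInfo

def smart_send_time(send_date, recipient_tz):
    """Schedule delivery for 8:17 AM in the recipient's local time zone.
    recipient_tz is an IANA key, e.g. "America/New_York"."""
    return datetime.combine(send_date, time(8, 17), tzinfo=ZoneInfo(recipient_tz))
```

The resulting timezone-aware datetime can be converted to UTC for whatever send queue the email platform uses.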
Service Level Objective (SLO). All inbound leads touched within 10 minutes. This SLO is a Module 3 KPI. If response time exceeds the threshold, the Variance Alert Engine flags it as amber, and Module 10 (Change Enablement Sprint) may need to run an adoption sprint on the inbound response process.
Custom line requirement. Every follow-up sequence requires at least one line of custom personalization. This prevents the robotic feel that kills response rates in over-automated sequences.
AI Lead Scoring and Proposal Generation
Lead scoring
The AI lead scoring model combines firmographic fit, engagement signals, title seniority, and web intent into a composite score.
Score 80 or above: routes directly to an Account Executive. Score 60 to 79: enters automated nurturing sequences. Score below 60: returns to marketing for re-engagement.
Account Executives can override scores by up to 10 points in either direction with a documented reason. Override frequency is tracked. A high override rate signals that the model needs retraining, not that the reps are wrong.
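The routing bands and the override clamp can be sketched directly from the rules above. Function names and the 0-to-100 score bounds are assumptions:

```python
def route_lead(score):
    """Route a composite lead score per the thresholds above."""
    if score >= 80:
        return "account_executive"   # direct to an AE
    if score >= 60:
        return "nurture_sequence"    # automated nurturing
    return "marketing_reengagement"  # back to marketing

def apply_override(model_score, adjustment, reason):
    """AE overrides are clamped to +/-10 points and require a documented reason.
    Override frequency should be logged separately for model-retraining signals."""
    if not reason:
        raise ValueError("Override requires a documented reason")
    clamped = max(-10, min(10, adjustment))
    return min(100, max(0, model_score + clamped))
```

Note that a clamped override can still flip the routing decision: a 75 pushed up 10 points crosses the AE threshold.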
Proposal generation
The AI proposal workflow runs in four steps. The rep triggers "Generate Proposal" in the CRM. The AI drafts the proposal from opportunity fields and the configured price matrix. The draft routes to an editable document for rep review (tone, compliance, pricing validation). The final PDF is stored and the link is auto-sent to the buyer.
Average time saved: 22 minutes per proposal. One VP of Sales reclaimed 8 hours per month across the team.
Connection to Module 7
Both the lead scoring model and the proposal generator are AI deployments governed by Module 7. They appear in the AI Register with dual ownership (Model Owner from the data team, Business Owner from sales leadership). Quarterly bias audits verify that the lead scoring model does not systematically disadvantage specific industries, company sizes, or geographies. The proposal generator includes a compliance checklist and two-person review to catch hallucinated features or incorrect pricing.
If Module 1's AI Readiness score is below 40%, the lead scoring model runs as a recommendation engine with mandatory human approval. The proposal generator defaults to template-only mode without AI drafting. These restrictions lift as the organization's readiness score improves.
Velocity Metrics
Three core metrics
Cycle Length. Close date minus create date. Measures the time from opportunity creation to close. Shorter is better, but only if win rate holds.
Win Rate. Won divided by (Won plus Lost). The percentage of resolved opportunities that close successfully.
Pipeline Value Velocity (PV2). (Pipeline Value times Win Rate) divided by Cycle Length. A single number that captures how fast value flows through the pipeline. PV2 is the metric that matters most for Module 8 because it predicts revenue throughput.
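The three metrics compose directly. A sketch of the calculations as defined above (PV2's units are revenue per day of pipeline throughput):

```python
def win_rate(won, lost):
    """Won / (Won + Lost): share of resolved opportunities that closed successfully."""
    return won / (won + lost)

def pv2(pipeline_value, win_rate, cycle_length_days):
    """Pipeline Value Velocity: (pipeline value x win rate) / cycle length.
    Cycle length is close date minus create date, in days."""
    if cycle_length_days <= 0:
        raise ValueError("cycle length must be positive")
    return pipeline_value * win_rate / cycle_length_days
```

For example, a $2M pipeline with a 25% win rate and a 90-day cycle yields a PV2 of roughly $5,556 per day; halving the cycle length doubles it without touching pipeline volume.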
Dashboard and routing
PV2 is displayed on a BI card with a trend line and a daily Slack digest. Traffic-light banding follows Module 3's system: green (at or above target), amber (within 10% below target), red (more than 10% below target).
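The banding itself is a simple threshold function. A sketch, assuming amber means within 10% below target per Module 3's system:

```python
def pv2_band(current, target):
    """Traffic-light banding: green at/above target,
    amber within 10% below target, red beyond that."""
    if current >= target:
        return "green"
    if current >= 0.9 * target:
        return "amber"
    return "red"
```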
When PV2 goes red: Module 6's weekly war-room activates. Sales and customer success leadership review the three component metrics to identify which one is dragging. Is it a pipeline volume problem (not enough deals)? A win rate problem (losing too many)? Or a cycle length problem (deals taking too long)? The diagnosis determines the intervention.
When PV2 feeds Module 8: Revenue-side gate KPIs in the Agile Capital Allocation system use PV2 as a primary input. A project in Tier 2 (MVP) that depends on revenue growth cannot advance to Tier 3 (Scaling) if PV2 is red. This creates a direct connection between sales execution and capital deployment. Funding follows evidence, and PV2 is the evidence.
What makes this different from standard B2B sales metrics
Every B2B sales organization tracks some version of cycle length, win rate, and pipeline value. These are standard metrics available in any CRM reporting package.
Module 6 adds three things that standard sales metrics do not include.
First, the metrics are seeded by Module 1's diagnostic. A red score on pipeline accuracy does not just appear on a dashboard. It tightens the hygiene rules, compresses the stage aging thresholds, and accelerates the follow-up cadence. The system responds to the diagnosis.
Second, the metrics route into Module 8's capital allocation gates. PV2 is not just a sales metric. It is a funding criterion. This means the sales team's execution directly affects how capital is deployed across the company. The connection creates accountability that standard metrics lack.
Third, the AI components (lead scoring, proposal generation) are governed by Module 7 and gated by Module 1's readiness score. Standard sales AI tools deploy when the vendor says they are ready. Module 6's AI tools deploy when the organization is ready, as measured by a 14-dimension diagnostic.
Who This Module Is For
Module 6 was designed for mid-market companies that have a sales team, a CRM, and pipeline data, but cannot reliably predict revenue or connect sales performance to company-wide capital decisions.
These companies know their win rate. They know their cycle length. What they lack is the connection between those metrics and the rest of the business. A drop in win rate prompts a sales meeting. It does not trigger a capital allocation review, a client success investigation, or a diagnostic recalibration. Module 6 creates those connections.