Module 07

AI Deployment Canvas

Position in the System

Module 7 is the AI governance layer of the operating system. It governs every AI component across every other module and determines whether AI deployments proceed, pause, or roll back based on structured scoring and continuous audit.

No other mid-market operating system has a dedicated AI governance module. EOS does not address AI. Scaling Up does not address AI. OKR frameworks do not address AI. This is not a gap they chose to leave. It is a capability category that did not exist when those frameworks were designed. The VWCG OS was built in an era where AI is not optional, and Module 7 reflects that reality.

Module 7 receives its primary upstream signal from Module 1's AI Readiness Index. Organizations that score below 40% on the five-dimension readiness assessment cannot deploy through Module 7 until they complete foundation work in Modules 2, 3, and 4. This is a hard gate, not a recommendation. Module 1 determines whether Module 7 opens. Module 7 determines how AI operates once it does.

Downstream, Module 7 governs AI components embedded in six other modules: Module 2's Draft-With-AI workflow, Module 5's churn predictor and QBR deck generator, Module 6's lead scoring model and proposal generator, Module 8's scenario modeling, Module 11's turnover risk model, and Module 12's automated threat detection. Every one of these AI tools appears in Module 7's AI Register with dual ownership, incident logging, and quarterly audit requirements.

Why AI Deployments Fail in Mid-Market Companies

Mid-market companies typically invest heavily in their first AI initiative, and organizations without a structured readiness assessment fail far more often than they succeed. The failures are almost never technical.

Pattern 1: Deploying AI on undocumented processes. A company tries to automate a workflow that exists only in someone's head. The AI model is trained on inconsistent data because the process varies by person. Output quality fluctuates. Trust erodes. The initiative is abandoned. This is why Module 1's process clarity score gates Module 7: if Module 2's SOP Codex has not documented the process, Module 7 will not automate it.

Pattern 2: No governance framework. A team deploys an AI tool that works well for six months, then begins producing biased or degraded outputs. Nobody notices because there is no audit cadence. Nobody is accountable because there is no ownership model. The tool quietly damages decisions until someone runs a manual analysis and finds the drift. By then, the damage is measured in months of bad data.

Pattern 3: Shiny object deployment. The CEO reads about AI and mandates adoption. The team selects use cases based on excitement rather than ROI analysis. High-risk, low-readiness projects launch first because they sound impressive. They fail. The organization develops AI skepticism that prevents the practical, high-ROI projects from getting funded.

Module 7 addresses all three patterns through a scoring grid (structured prioritization), a staged deployment pipeline (risk-managed rollout), an AI Register (governance and accountability), and quarterly audits (continuous monitoring).

The Use-Case Scoring Grid

What it does

The Use-Case Scoring Grid evaluates every potential AI project across five dimensions before any resources are committed. This prevents shiny-object syndrome and ensures capital flows to the highest-ROI, lowest-risk opportunities first.

Five dimensions

ROI Potential (1 to 5). Estimated financial or efficiency return. A score of 5 means clear, measurable cost reduction or revenue increase. A score of 1 means speculative or hard-to-quantify benefit.

Data Readiness (1 to 5). Quality and availability of required data. This dimension draws directly from Module 4's Data Quality Monitor. If the integration hub shows data quality issues for the relevant data sources, the readiness score drops.

Data Sensitivity (1 to 5, inverted). Level of personal or regulated data involved. Higher sensitivity means higher risk. A project using anonymized operational data scores 1 (low risk). A project using customer PII scores 4 or 5 (high risk, requiring Module 12's Privacy Impact Assessment).

Error Tolerance (1 to 5, inverted). Consequences if the AI makes mistakes. An email draft assistant that produces a typo: high tolerance, low risk. A financial forecasting model that misallocates capital: low tolerance, high risk. This dimension determines the level of human oversight required.

Implementation Effort (1 to 5, inverted). Practical difficulty including API maturity, data cleaning requirements, and system integration readiness. Module 4's System Inventory provides the raw data for this assessment.

Projects scoring high on ROI and Data Readiness and low on Sensitivity, Error Tolerance, and Implementation Effort rise to the top. The grid creates a heat map that makes prioritization visual and defensible.
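The ranking logic above can be sketched in a few lines. This is a minimal illustration, assuming equal weights across the five dimensions and that the three inverted dimensions are flipped (6 minus the raw score) so a higher composite always means "fund sooner"; the field names and sample projects are hypothetical.

```python
# Sketch of Use-Case Scoring Grid ranking: equal weights assumed,
# inverted dimensions (sensitivity, error tolerance, effort) flipped
# so that a higher composite is always better.

def composite_score(use_case: dict) -> float:
    """Return a 1-5 composite where higher means 'fund sooner'."""
    direct = [use_case["roi"], use_case["data_readiness"]]
    # A raw 5 on an inverted dimension (high risk/effort) becomes a 1.
    inverted = [6 - use_case[k] for k in ("sensitivity", "error_tolerance", "effort")]
    return sum(direct + inverted) / 5

candidates = [
    {"name": "Churn predictor", "roi": 5, "data_readiness": 4,
     "sensitivity": 3, "error_tolerance": 2, "effort": 2},
    {"name": "Proposal generator", "roi": 3, "data_readiness": 3,
     "sensitivity": 2, "error_tolerance": 1, "effort": 3},
]

# Highest-composite projects rise to the top of the heat map.
ranked = sorted(candidates, key=composite_score, reverse=True)
for c in ranked:
    print(f"{c['name']}: {composite_score(c):.1f}")
```

In practice the weights and the heat-map rendering would be tuned per organization; the point is that the ranking is computed, visual, and defensible rather than argued from excitement.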

Connection to Module 8

The scoring grid outputs feed directly into Module 8 (Agile Capital Allocation). AI projects that score well enter Module 8's funding pipeline at the appropriate tier. An Exploration-tier AI project can proceed quickly with minimal approval. A Scaling-tier AI project requires gate KPIs from the scoring grid and pilot data before capital is released.

The Pilot-to-Scale Pipeline

Four stages

Stage 1: Sandbox. Isolated testing on historical or synthetic data with zero customer exposure. The team runs the model, measures baseline accuracy, and conducts the initial risk review based on the scoring grid. No production data enters the sandbox.

Stage 2: MVP Pilot. Limited real-world introduction with a small internal team or a tiny fraction of interactions. Human oversight is mandatory at this stage. Every AI output is reviewed before acting on it. Four KPIs are tracked: accuracy percentage, human override rate, cycle time delta, and satisfaction delta.

Stage 3: Controlled Rollout. Gradual expansion (typically 25% of traffic or interactions). KPIs are monitored continuously. A documented kill switch and rollback plan must exist before this stage begins. The rollout proceeds only if KPIs meet or exceed the thresholds defined in Stage 2.

Stage 4: Full Production. Complete deployment with ongoing monitoring. The model enters the AI Register. Regular audit cycles begin. The quarterly bias and drift audit calendar activates.

Each stage has a go/no-go gate. The gate criteria are defined before Stage 1 begins and locked (consistent with Module 8's gate KPI locking principle). Changing gate criteria after deployment starts requires board-level approval.
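A locked go/no-go gate can be sketched as follows. The KPI names and thresholds here are illustrative assumptions (real criteria are defined per project before Stage 1); the read-only mapping stands in for the locking rule, under which changing criteria mid-deployment requires board-level approval.

```python
# Minimal sketch of a locked go/no-go gate check. KPI names and
# thresholds are hypothetical; real gate criteria are set before
# Stage 1 and then locked.

from types import MappingProxyType

# MappingProxyType gives a read-only view: the criteria cannot be
# mutated after deployment starts without rebuilding the gate.
GATE_CRITERIA = MappingProxyType({
    "accuracy_pct": 90.0,        # minimum acceptable accuracy
    "override_rate_pct": 10.0,   # maximum human override rate
})

def gate_passes(kpis: dict) -> bool:
    return (kpis["accuracy_pct"] >= GATE_CRITERIA["accuracy_pct"]
            and kpis["override_rate_pct"] <= GATE_CRITERIA["override_rate_pct"])

pilot_kpis = {"accuracy_pct": 93.5, "override_rate_pct": 6.0}
print("Proceed to Controlled Rollout" if gate_passes(pilot_kpis)
      else "Hold at MVP Pilot")
```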

How Module 1 controls the pipeline

Module 1's AI Readiness Index determines how fast an organization can move through the pipeline.

Below 40%: Module 7 is locked. The organization completes foundation work in Modules 2 (process documentation), 3 (measurement baselines), and 4 (data integration) before any AI project enters the sandbox.

40% to 70%: The pipeline operates in supervised mode. Stage 2 (MVP Pilot) requires human review of every output. Stage 3 (Controlled Rollout) operates with monthly accuracy reviews instead of quarterly. Bias audits run monthly.

Above 70%: The pipeline operates at full capacity with standard governance. Quarterly audit cycles apply. Automation is approved at Stage 4 with standard oversight.

This tiered approach means the governance intensity matches the organization's readiness. Companies with strong foundations move fast. Companies with weak foundations build the foundation first. No other operating system ties AI deployment speed to a diagnostic readiness score.
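The three readiness tiers above reduce to a simple gating function. This is a sketch using the 40% and 70% thresholds stated above; the mode labels are assumptions for illustration.

```python
# Sketch of how the Module 1 AI Readiness Index gates pipeline mode,
# using the 40% and 70% thresholds described above.

def pipeline_mode(readiness_pct: float) -> str:
    if readiness_pct < 40:
        return "locked"        # foundation work in Modules 2, 3, 4 first
    if readiness_pct <= 70:
        return "supervised"    # human review of every output, monthly audits
    return "standard"          # full capacity, quarterly audit cycles

for score in (35, 55, 85):
    print(score, "->", pipeline_mode(score))
```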

The AI Register

What it does

The AI Register is the central logbook for every AI system in production across the entire VWCG OS. It is the single source of truth for what AI is running, who owns it, what data it uses, and what happened when something went wrong.

Key fields

Each registered AI system includes:

  • Unique identifier and description of function
  • Specific model or provider
  • Data sources used
  • Initial risk score from the scoring grid
  • Model Owner (technical accountability, usually the data or ML team)
  • Business Owner (impact accountability, usually the module lead)
  • Status (active, paused, or retired)
  • Incident log
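The shape of a register entry can be sketched as a record type. The field and example values below are illustrative assumptions mirroring the key fields listed above, not a prescribed schema.

```python
# Illustrative shape of an AI Register entry. Field names and the
# sample values are hypothetical, mirroring the key fields above.

from dataclasses import dataclass, field

@dataclass
class RegisterEntry:
    system_id: str
    function: str
    model_or_provider: str
    data_sources: list
    risk_score: int              # initial score from the scoring grid
    model_owner: str             # technical accountability
    business_owner: str          # impact accountability
    status: str = "active"       # active | paused | retired
    incidents: list = field(default_factory=list)

entry = RegisterEntry(
    system_id="M05-CHURN-01",
    function="Client churn predictor",
    model_or_provider="gradient-boosted classifier",
    data_sources=["CRM", "support tickets"],
    risk_score=3,
    model_owner="ML lead",
    business_owner="Module 5 lead",
)
```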

Current AI deployments across the VWCG OS

The AI Register for a fully deployed VWCG OS includes:

  • Module 2: Draft-With-AI SOP generation workflow
  • Module 5: Client churn predictor and QBR deck auto-generator
  • Module 6: Lead scoring model and proposal generation engine
  • Module 8: Scenario modeling engine for capital allocation
  • Module 11: Employee turnover risk prediction model
  • Module 12: Automated threat detection and anomaly monitoring

Each of these has a dual owner, a risk score, a deployment stage, and an incident history. The Register provides board-level visibility into the organization's AI footprint.

Incident management

The incident log captures date, summary, root cause, action taken, and lessons learned for every AI failure. Quarterly rollups of incident histograms and override trends inform future scoring cycles. If a model's incident rate exceeds a defined threshold, it automatically downgrades to Stage 2 (MVP Pilot) with mandatory human oversight until root cause is resolved.
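The automatic downgrade rule can be expressed directly. The incident-rate threshold below is a hypothetical value for illustration; the source only says a defined threshold exists per model.

```python
# Sketch of the automatic stage downgrade rule. The 2% threshold is
# an assumption; the real threshold is defined per model.

INCIDENT_RATE_THRESHOLD = 0.02   # hypothetical: 2% of predictions

def check_downgrade(incidents: int, predictions: int, stage: int) -> int:
    """Return the (possibly downgraded) deployment stage."""
    if predictions and incidents / predictions > INCIDENT_RATE_THRESHOLD:
        return 2   # back to MVP Pilot with mandatory human oversight
    return stage

# A model at Stage 4 with 45 incidents over 1,000 predictions is
# downgraded; one with 5 incidents stays in full production.
downgraded = check_downgrade(incidents=45, predictions=1000, stage=4)
```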

Quarterly Bias and Drift Audits

Why quarterly

AI models degrade over time. The data distribution changes. User behavior shifts. The competitive landscape evolves. A model that was 92% accurate at deployment may be 78% accurate six months later. This degradation happens silently. Without structured audits, nobody knows until the damage is done.

Bias audit methodology

Export a sample of 1,000 predictions. Split error rates by demographic segment, customer size, industry, or geography. Identify whether the model performs significantly worse for any subgroup. If bias is detected, the model returns to Stage 2 for retraining and revalidation.
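The per-segment error split can be sketched as follows, assuming each exported prediction record carries a segment label and a correctness flag (both assumptions about the export format).

```python
# Sketch of the bias audit's per-segment error split, assuming each
# prediction record has a "segment" label and a "correct" flag.

from collections import defaultdict

def error_rates_by_segment(sample: list) -> dict:
    totals, errors = defaultdict(int), defaultdict(int)
    for rec in sample:
        totals[rec["segment"]] += 1
        if not rec["correct"]:
            errors[rec["segment"]] += 1
    return {seg: errors[seg] / totals[seg] for seg in totals}

# Hypothetical sample: SMB error rate is four times the enterprise rate,
# which would flag the model for a return to Stage 2.
sample = (
    [{"segment": "enterprise", "correct": True}] * 95
    + [{"segment": "enterprise", "correct": False}] * 5
    + [{"segment": "smb", "correct": True}] * 80
    + [{"segment": "smb", "correct": False}] * 20
)
rates = error_rates_by_segment(sample)
```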

Drift audit methodology

Compare current quarter accuracy against the baseline established at deployment. Compute a confusion matrix. Compare it to the previous quarter's results. If accuracy has dropped more than 5 percentage points from baseline, the model enters a retraining cycle.
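The 5-point drift trigger reduces to a one-line check; the function name below is a hypothetical convenience, and the example figures are the 92%-to-78% degradation scenario mentioned above.

```python
# Sketch of the drift trigger: a drop of more than 5 percentage
# points from the deployment baseline starts a retraining cycle.

def needs_retraining(baseline_acc: float, current_acc: float,
                     max_drop_pts: float = 5.0) -> bool:
    return (baseline_acc - current_acc) > max_drop_pts

# A model deployed at 92% accuracy that now measures 78% has
# drifted 14 points and enters a retraining cycle.
flag = needs_retraining(92.0, 78.0)
```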

Audit output

Each audit produces a concise two-page PDF report, stored in the AI Register, with a summary presented to the board as part of the quarterly governance review. This connects to Module 13 (Core Execution), where the quarterly recalibration cycle includes AI audit review as a standing agenda item.

What makes this different from standalone AI governance

Standalone AI governance frameworks (NIST AI RMF, EU AI Act compliance toolkits, vendor-specific governance modules) provide excellent methodology. They are designed as horizontal frameworks that apply to any organization.

Module 7 is a vertical governance layer embedded in a specific operating system. The difference is integration. A standalone framework tells an organization to audit its AI models. Module 7 tells the organization which models to audit (the AI Register), how readiness affects deployment speed (Module 1 gating), how AI failures route into capital decisions (Module 8 gate KPI impact), how data quality affects model reliability (Module 4 Data Quality Monitor), and how AI-specific adoption challenges are managed (Module 10 Change Enablement Sprint with AI-specific micro-training).

The governance is not a separate initiative. It is a layer of the operating system that touches every module where AI is deployed.

Who This Module Is For

Module 7 was designed for mid-market companies that are deploying or planning to deploy AI, but lack a governance framework that connects AI decisions to the rest of their operations.

These companies have AI tools. Often multiple AI tools, deployed by different teams, with different levels of oversight. Module 7 brings them under a single governance framework that is gated by organizational readiness, funded through structured capital allocation, and audited on a quarterly cadence. The module does not slow AI adoption. It ensures that AI adoption succeeds by connecting it to every other operational discipline in the system.
