Implementation Guidelines
Companion reference for Applied Enterprise Agility (Book 1).
This appendix provides starting points and guiding principles for your transformation journey. It is not a prescription. Every organization’s context differs. Use these as a compass, not a map.
Getting Started
The chapters in Part 2 describe the mechanics. This section offers a brief menu of initial moves to help you begin.
Establish Organizational Outcomes
Before selecting improvement initiatives, clarify what success looks like at the enterprise level. Run a focused session using the Value Acceleration Process (see Chapter 8) with your leadership team. The goal is not a comprehensive strategic plan. It is a small set of measurable outcomes that will guide prioritization and filter competing initiatives. Without this, improvement efforts scatter.
Assess Your Current State
Use the diagnostic framework from Part 1 to evaluate where your organization sits today. Walk through each barrier and its associated gaps. For each gap, ask: Does this exist here? How severely? What evidence supports that assessment?
Consider these questions as you assess:
- Which barrier creates the most friction for value delivery today?
- Which gaps appear across multiple teams or value streams?
- Where do leaders and practitioners disagree about severity?
- Which gaps have we tried to fix before, and why did those efforts stall?
Document what you find. The gaps you identify become candidates for your improvement backlog.
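One lightweight way to document findings is as structured records that can be ranked into backlog candidates. The sketch below is illustrative only; the `GapAssessment` fields and the `backlog_candidates` ranking rule are assumptions, not a structure the book prescribes:

```python
from dataclasses import dataclass, field

@dataclass
class GapAssessment:
    """One assessed gap, recorded as an improvement-backlog candidate."""
    barrier: str                 # e.g. "Flow"
    gap: str                     # e.g. "Handoff Friction"
    severity: int                # 1 (minor) to 5 (severe)
    evidence: str                # the observation supporting this rating
    affected_teams: list[str] = field(default_factory=list)
    prior_attempts: str = ""     # why earlier fixes stalled, if tried

def backlog_candidates(assessments, min_severity=3):
    """Surface the most severe, most widespread gaps first."""
    hits = [a for a in assessments if a.severity >= min_severity]
    return sorted(hits,
                  key=lambda a: (a.severity, len(a.affected_teams)),
                  reverse=True)
```

Ranking by severity and spread keeps the questions above (friction, breadth, disagreement, prior stalls) attached to evidence rather than opinion.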
Select a Quick Start
Chapter 9 offers Quick Starts designed to address common gap clusters. Review your assessment findings and select a Quick Start that matches where you are on the adoption continuum. Resist the temptation to tackle everything simultaneously. A single focused improvement, sustained over time, beats five parallel initiatives that starve each other for attention.
Guiding Principles
These principles reflect patterns that consistently separate organizations that improve from those that stall. They are not rules to enforce. They are directions to move toward. Your starting point matters less than your trajectory.
Flow
- Limit Work in Progress. Less active work means faster completion. When everything is a priority, nothing moves quickly. Constrain the number of initiatives, projects, or features in flight at any level. The discomfort of saying “not yet” pays dividends in throughput and focus. (Relates to: Commitment Overload, Dependency Gridlock)
- Prefer Smaller Batches. Smaller increments reduce risk, accelerate feedback, and expose problems earlier. Large batches hide delays, mask dependencies, and defer learning until it is too late to adapt. When in doubt, break it down further. (Relates to: Release Bottleneck, Delayed Validation)
- Leave Slack in the System. One hundred percent utilization kills responsiveness. Systems running at full capacity have no room to absorb variability, handle urgent requests, or pursue improvement. Aim for 70-80% planned utilization. The remaining capacity is not waste. It is what allows the system to flow. (Relates to: Commitment Overload, Resource Turbulence)
- Manage Queues, Not Just Work. Work waiting is invisible waste. Items sitting in backlogs, approval queues, or handoff buffers consume time without consuming attention. Make queues visible. Limit their size. Age kills value. (Relates to: Approval Maze, Handoff Friction)
- Reduce Handoffs. Every handoff loses context and adds delay. Information degrades each time it transfers between people, teams, or systems. Prefer whole-team ownership over specialized silos. When handoffs are unavoidable, make them explicit and minimize their frequency. (Relates to: Handoff Friction, Siloed Delivery)
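The slack principle above has a mathematical basis. The standard M/M/1 queueing result (imported here as an assumption; it is not a formula from the book) shows why waits explode as utilization approaches 100%:

```python
def avg_wait_multiplier(utilization):
    """M/M/1 queue: average time spent waiting, expressed as a multiple
    of service time, grows as rho / (1 - rho), where rho is utilization."""
    return utilization / (1.0 - utilization)

for rho in (0.70, 0.80, 0.90, 0.95, 0.99):
    print(f"{rho:.0%} utilized -> queue wait ~ {avg_wait_multiplier(rho):.1f}x service time")
```

Moving from 80% to 95% planned utilization multiplies the average queue wait nearly five-fold, which is why targeting 70-80% is not slack for its own sake but what keeps the system responsive to variability.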
Decisions
- Push Decisions to the Work. Decisions made closest to the information are fastest and best. Centralized decision-making creates bottlenecks and delays action. Provide guardrails and clear boundaries, then let the people doing the work decide within them. (Relates to: Approval Maze, Decision Drift)
- Make Decisions Reversible. Prefer two-way doors. Big, irreversible decisions demand extensive analysis and consensus. Small, reversible decisions can move fast and course-correct based on evidence. Structure your work to maximize reversible choices. (Relates to: Commitment Overload, Initiative Sprawl)
- Explicit Over Implicit. Unwritten rules create inconsistency and conflict. When decision criteria, priorities, and trade-offs live only in people’s heads, every handoff becomes a negotiation. Make the implicit explicit. Write it down. Make it visible. (Relates to: Portfolio Fog, Unclear Ownership)
Feedback
- Shorten Feedback Loops. The gap between action and learning determines your adaptation speed. Long feedback loops let problems compound and assumptions calcify. Measure your feedback cycles in days, not quarters. If you cannot get signal faster, you cannot adapt faster. (Relates to: Delayed Validation, Metric Theater)
- Validate Before Scaling. Proof of concept before portfolio commitment. Small experiments before big bets. The cost of a failed pilot is a fraction of the cost of a failed program. Treat validation as a gate, not a formality. (Relates to: Initiative Sprawl, Delayed Validation)
- Instrument for Learning, Not Reporting. Metrics exist to inform decisions, not decorate dashboards. If no one acts on a measurement, stop collecting it. If a metric drives the wrong behavior, change it. Measurement without action is overhead. (Relates to: Metric Theater, Vanity Metrics)
Alignment
- Outcomes Over Outputs. Measure value delivered, not activity completed. Features shipped, story points burned, and projects closed are outputs. Customer problems solved, revenue generated, and costs reduced are outcomes. Outputs are easy to count. Outcomes are what matter. (Relates to: Output Obsession, Feature Factory)
- Strategy Must Be Operational. If teams cannot translate strategy into daily decisions, it is not strategy. It is aspiration. Strategic intent must connect to operational reality through clear priorities, funded capacity, and explicit trade-offs. The test is simple: can a team lead explain how their current work advances the strategy? (Relates to: Language Drift, Portfolio Fog)
- Fund Capacity, Not Projects. Stable teams with flexible backlogs outperform project-staffed initiatives. Persistent teams build knowledge, relationships, and velocity over time. Project-based staffing treats people as interchangeable resources and pays the ramp-up tax repeatedly. (Relates to: Resource Turbulence, Initiative Sprawl)
Improvement
- Improvement Is Work. If it is not on the backlog with an owner and cadence, it will not happen. Improvement competes with delivery for attention. Without explicit commitment, structure, and accountability, improvement loses. Treat it as work, or watch it evaporate. (Relates to: all barriers)
- Start Where You Are. Do not wait for perfect conditions. Executive alignment, organizational restructuring, and tool migrations are not prerequisites for improvement. They are often excuses for delay. Improve what you control today. Momentum builds from action, not from permission. (Relates to: Vicious Cycle dynamics)
AI Governance and Readiness
AI pressurizes every structural problem this book describes. Misaligned organizations deploy AI faster into the wrong work. Choked governance queues strangle AI experiments before they produce learning. Broken feedback loops ignore AI-generated signals the same way they ignore every other signal. Fix the plumbing first, or AI just builds more pressure behind the clogs.
- Tier Your AI Governance by Risk. One approval process for all AI use cases guarantees one of two failures: either high-risk deployments get insufficient scrutiny, or low-risk experiments die in the queue. Define three tiers. Tier 1 (exploration): internal use, synthetic data, no customer exposure. Team-level decision. Inform, do not ask. Tier 2 (controlled deployment): limited scope, known data sources, reversible. Value-stream-level approval. Lightweight risk checklist. Tier 3 (production integration): customer-facing, real data, regulatory implications. Enterprise-level review. Full risk assessment. Match governance rigor to actual risk. (Relates to: Approval Maze, Experiment Prohibition)
- Govern the System, Not Just the Model. Most AI governance focuses on the model: accuracy, bias, compliance. The integration is where enterprise risk lives. A compliant model that feeds data into a system with no access controls creates risk the model governance never addressed. Govern the flow the AI operates within, not just the AI itself. (Relates to: Siloed Delivery, Handoff Friction)
- Do Not Let the Ethics Committee Become a Bottleneck. Responsible AI governance is necessary. A centralized committee that reviews every use case regardless of tier becomes Governance Drag with a new name. Design ethics review into the tier structure, not on top of it. Tier 1 experiments do not need committee review. Tier 3 deployments do. Route the committee’s time to the decisions where it matters. (Relates to: Approval Maze, Decision Drift)
- Test Alignment Before Investing. Can you articulate the enterprise-level outcomes your AI investment serves? If AI initiatives are a portfolio of departmental experiments with no strategic coherence, every department will report AI success and the enterprise will report AI cost. The investment is not in AI capabilities. It is in the alignment architecture that gives AI a purpose. (Relates to: Portfolio Fog, Initiative Sprawl)
- Test Flow Before Scaling. Can your governance, compliance, and decision-making structures operate at the speed AI enables? If compliance review takes six weeks regardless of risk level, AI experiments die in the same calendar-driven queues the book diagnosed in Chapter 4. The investment is not in AI capabilities. It is in governance redesign. (Relates to: Approval Maze, Governance Drag)
- Test Feedback Before Trusting. When AI-generated evidence contradicts current strategy, does your organization have the structural capacity to change course? AI-generated signals arrive faster than most organizations can evaluate them, challenge assumptions with data leadership does not fully understand, and contradict investment decisions that carry sunk-cost weight. If you cannot act on evidence now, faster evidence will not help. (Relates to: Delayed Validation, Zombie Retrospectives)
- Fix and Deploy Simultaneously. You do not have to complete an enterprise agility transformation before starting AI. You do have to work on both at once. Deploy AI into value streams where alignment is clear, flow is functioning, and feedback loops exist. Use those deployments to demonstrate the compound effect. Then expand AI as you expand the structural foundation. (Relates to: all barriers)
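The three-tier structure described under “Tier Your AI Governance by Risk” can be sketched as a simple routing rule. This is an illustrative simplification; the function name, parameters, and boolean risk flags are assumptions, not the book’s model:

```python
def ai_governance_tier(customer_facing: bool,
                       regulated: bool,
                       reversible: bool,
                       synthetic_data_only: bool) -> int:
    """Route an AI use case to a governance tier by its risk profile.
    Tier 1: exploration (team decision). Tier 2: controlled deployment
    (value-stream approval). Tier 3: production integration (enterprise review)."""
    if customer_facing or regulated:
        return 3   # full risk assessment, ethics-committee review
    if synthetic_data_only and reversible:
        return 1   # inform, do not ask
    return 2       # lightweight risk checklist

# An internal prototype on synthetic data stays at Tier 1:
assert ai_governance_tier(False, False, True, True) == 1
```

Encoding the routing rule, even informally, forces the tier boundaries to be explicit rather than renegotiated per use case, which is what keeps the ethics committee focused on Tier 3.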