Why does your IT organization exist?
In most conversations with IT leaders about their organization's purpose, the answers revolve around delivery: keeping systems available, completing projects on time, supporting business operations, and enabling digital transformation. Improvement itself — the disciplined, ongoing practice of getting better at how IT works — rarely leads the list. And Continual Improvement (CI), even among practitioners who know the ITIL framework well, is seldom described as central to what the organization actually does.
Continual Improvement occupies a different position from the other 33 practices. AXELOS is explicit in the ITIL 4 Foundation manual: the CI practice applies to every element of the Service Value System, to the guiding principles, the Service Value Chain, and the practices themselves. It is the only practice designed to improve all the others, making it less a standalone discipline and more the operating system beneath everything your organization runs. And in most IT organizations, it is the most consistently underfunded, under-championed, and under-executed practice of the 34.
This isn't a framework summary but a field guide built from real engagements, real teams, and real results. Whether you're a practitioner trying to build CI from the ground up, a manager trying to make it stick, a consultant walking into a resistant organization, or an MSP running improvement across dozens of client environments at once: this article is for you. By the end of it, you'll know exactly how to stand up CI in your organization, how to run it week by week, and how to make it the engine behind everything that comes next.
How to use this guide: New to CI? Start at Why CI Stalls. Ready to act? Jump to Your First 30 Days. Need exec buy-in? See Selling CI Internally. Bringing in a partner? Let's talk.
Continual Improvement is one of the 34 ITIL 4 practices. Its purpose: "to align the organization's practices and services with changing business needs through the ongoing identification and improvement of services, service components, practices, or any element involved in the efficient and effective management of products and services." (AXELOS, ITIL 4 Foundation, 2019)
ITIL 4 uses the word continual deliberately. Continuous improvement implies one initiative after another with no pause to assess whether any of it worked: churn, not progress. Continual improvement is methodical: plan, execute, let the change sit long enough to generate real data, analyze what changed, then act on what you learned. The time between cycles is where measurement happens and where the organization decides what to improve next.
AXELOS chose this word carefully. So should you.
Why Continual Improvement Stalls
The improvement methodology is rarely the problem. PDCA, the 7-Step CI Model, and the CI register are well-understood frameworks that have been available to ITSM practitioners for years. When CI stalls, the cause is almost never methodological. It is organizational.
The following five failure patterns appear consistently across my consulting engagements. They're not drawn from published research but from years of observing what happens to CI programs after the launch energy fades.
No one owns it. Continual improvement is framed as everyone's responsibility, which means it defaults to being no one's. Without a named owner, an Improvement Champion, a CI Lead, or a Process Manager with explicit accountability, the practice happens inconsistently at best and disappears entirely at worst. The meetings get skipped when things get busy. The register goes unreviewed. The improvement identified three months ago still sits in the backlog with no owner and no date. Shared responsibility for CI is the organizational equivalent of a shared inbox that nobody checks.
No dedicated time. Improvement requires slack in the system. Organizations running at full utilization, entirely reactive, can't improve because there's no capacity to do anything except respond to the queue. This isn't a discipline problem; it is a structural one. If every hour of every technician's day is committed to incident response, change execution, and ticket resolution, CI will never compete for attention. The improvement work will always lose to the urgent work, and the urgent work will never stop being urgent because nothing is improving.
The register becomes a graveyard. A CI register where items go in and never come out is worse than no register at all. It demoralizes the team, creates cynicism about the process, and signals loudly that improvement ideas are not taken seriously. The graveyard register is almost always the symptom of two upstream failures: no named owner and no review cadence. Items accumulate because no one is making prioritization decisions, and no one is making those decisions because no one owns the practice.
Metrics without meaning. Many ITSM platforms make it effortless to track dozens of metrics. Tracking forty metrics because the tool supports it, while having no consensus on which two or three actually indicate whether things are getting better, produces noise instead of signal. Improvement work requires a clear baseline and a defined target. Without both, you can't tell whether the initiative succeeded, failed, or simply happened.
The Measurement and Reporting practice is not functioning. This is the stall reason that rarely gets named directly. Step 2 of the ITIL CI Model asks: where are we now? That question requires reliable data from a functioning Measurement and Reporting practice. When the organization can't answer it with confidence, the entire improvement cycle is built on assumptions. Teams skip the baseline because pulling the data is hard or the data doesn't exist, and then they have no way to demonstrate that anything changed. The CI practice and the Measurement and Reporting practice are interdependent (AXELOS, ITIL 4 Foundation, 2019). Neglect one and you undermine both.
I've watched CI collapse in an organization that did everything right at the start. A rising star stepped up as CI owner. The teams participated. The backlog grew with real, actionable items. For several weeks, it worked.
Then the firefighting became overwhelming. High-priority incidents swamped the queue. Unplanned changes triggered a string of major incidents. The CI owner was pulled back into incident response. The stand-ups stopped. The muscle we had built disappeared within weeks.
That organization did not fail at CI because the framework was wrong or the people were uncommitted. It failed because the reactive workload consumed every resource available for improvement. That is a structural problem — and it is exactly what Service Evolution, with AI absorbing the repetitive reactive layer, is designed to solve.
From PDCA to the ITIL CI Model
PDCA, short for Plan-Do-Check-Act, is the improvement cycle most IT professionals encounter first. It traces to Walter Shewhart's quality-control work at Bell Labs, was later popularized by W. Edwards Deming, and became formalized across manufacturing and Lean practice. ITIL 4 acknowledges PDCA as a foundational influence on its own improvement model. If you have used PDCA before, you already understand the underlying logic of what follows.
ITIL 4 takes that foundation and builds a more structured approach: the 7-Step Continual Improvement Model. Where PDCA gives you the rhythm, the 7-Step Model gives you the precision. It is the authoritative CI framework in ITIL 4 and the one that will guide your work (AXELOS, ITIL 4 Foundation, 2019).
- Plan: Steps 1–3 (vision, baseline, target)
- Do: Steps 4–5 (improvement plan and execution)
- Check: Step 6 (did we get there?)
- Act: Step 7 (close the loop, start again)
The 7-Step Improvement Model
The 7-Step Continual Improvement Model is the authoritative improvement framework in ITIL 4. Whether you're a solo IT director managing everything yourself or a team of thirty with dedicated process owners, the model scales to where you are.
To make the model concrete, this section runs a single scenario in parallel with each step. A regional managed services provider is starting Monday morning with 35 open P2 incidents across their client base and a customer satisfaction score of 2.4 out of 5.0. Their Critical Success Factor is getting to 4.0. At 2.4, contract renewals are at risk. At 4.0, clients renew and refer. That gap is the improvement opportunity.
Service Evolution doesn't replace ITIL 4. It amplifies it — and at each step below, you'll see exactly how.
The 7-Step Model at a Glance (source: AXELOS, ITIL 4 Foundation, 2019)
Step 1: What's the vision?
Every improvement initiative begins with a question most teams skip: why does this improvement matter at the strategic level? Not what you're going to change, and not how — why. When your initiative begins with a strategic vision rather than a list of action items, the team understands the stakes, executives stay engaged, and the work has direction when the details get complicated. Step 1 is also the anchor you return to every time a prioritization decision needs to be made: when two improvements compete for the same resource, the one more directly connected to the vision wins.
35 open P2s. CSAT: 2.4. Before touching the queue, the service delivery manager writes one sentence: "We exist to make our clients' businesses run without interruption." Every improvement initiative from this point is measured against it.
With Service Evolution, the planning meeting starts with the evidence already on the table. Before a single ticket is touched, AI has analyzed 18 months of satisfaction data, ticket trends, and SLA performance — surfacing the top three client experience gaps with business impact attached. The vision is grounded before the conversation starts, so the meeting is about committing to a direction rather than justifying one. Without it, vision gets set on whatever data the manager could pull in a few hours: sounds strategic on Monday, forgotten by Week 3.
Step 2: Where are we now?
Step 2 is a full baseline assessment, and it is the one most organizations rush or skip. ITIL 4's Four Dimensions — Organizations and People, Information and Technology, Partners and Suppliers, and Value Streams and Processes — provide the framework. A baseline that only measures ticket volume is a partial picture. The Measurement and Reporting practice is the operational dependency here: if your data has gaps, document what you can measure reliably and what you can't. Those gaps become improvement items in their own right.
All four dimensions surface problems: two engineers hired without ITIL training or client documentation access; post-resolution surveys reaching only 40% of clients due to a broken automation rule; a third-party vendor responsible for 11 of 35 open P2s with no SLA tracking; and P2 incidents triaged manually with no escalation path. Baseline CSAT: 2.4 — and incomplete, because most clients were never surveyed.
Before: The baseline takes days to compile manually and is still incomplete when it is done. The broken survey automation goes undetected for months because no one has time to audit the pipeline. The vendor's contribution to P2 volume is visible in hindsight, not in time to act.
Service Evolution: AI initiates the baseline before being asked. It flags the broken survey automation, surfaces the vendor's 31% share of P2s, and identifies the two newer engineers as resolution time outliers, all cross-referenced against the four dimensions. It closes with a recommendation before you have pulled a single report.
What this frees the human to do: Have the hard vendor conversation instead of spending three days discovering there was one to have.
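To make the Step 2 artifact concrete, here is a minimal sketch of a baseline record organized by the Four Dimensions, seeded with the scenario's findings. The structure, field names, and "register candidate" output are illustrative assumptions, not ITIL 4 prescriptions or any platform's schema.

```python
# A minimal sketch of a Step 2 baseline record, organized by ITIL 4's
# Four Dimensions. Field names and seeded values are illustrative.
from dataclasses import dataclass, field

@dataclass
class DimensionBaseline:
    dimension: str
    findings: list[str]                                 # what the data shows today
    data_gaps: list[str] = field(default_factory=list)  # what we cannot yet measure

baseline = [
    DimensionBaseline("Organizations and People",
                      ["Two engineers hired without ITIL training or client doc access"]),
    DimensionBaseline("Information and Technology",
                      ["Post-resolution surveys reach only 40% of clients"],
                      data_gaps=["No audit trail on the survey automation rule"]),
    DimensionBaseline("Partners and Suppliers",
                      ["One vendor owns 11 of 35 open P2 incidents"],
                      data_gaps=["Vendor SLA response times are not tracked"]),
    DimensionBaseline("Value Streams and Processes",
                      ["P2 triage is manual, with no escalation path"]),
]

# Every measurement gap is itself a candidate item for the CI register.
for dim in baseline:
    for gap in dim.data_gaps:
        print(f"Register candidate ({dim.dimension}): {gap}")
```

The point of the data_gaps field is the point of Step 2 itself: what you cannot measure gets recorded with the same discipline as what you can.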
Step 3: Where do we want to be?
This step defines the target — the realistic, measurable next position. ITIL 4 is explicit that CI operates in iterative cycles, not one-time transformations. Resist the instinct to reach for the highest possible number. A CSAT of 5.0 is a marketing slogan; a target of 4.0, up from a verified 2.4 baseline, is a CI target. A well-defined target also tells you when to stop. When you reach 4.0, you assess whether further improvement on this metric is still the highest-value use of improvement capacity, or whether another priority has emerged.
Target: client CSAT 4.0 by end of Q3 2026. Supporting targets: 100% CSAT survey delivery on resolved P2 tickets, vendor SLA response times tracked in the monthly service review, and both new engineers fully onboarded by Q2 close. Three targets. One cycle.
Before: The target is picked by feel or by choosing a round number that sounds ambitious but has no grounding in what's actually achievable given the specific gaps the team is starting from.
Service Evolution: AI benchmarks the 2.4 baseline against peer profiles and models three scenarios: 3.4 with the survey fix alone, 3.8 adding the escalation path and vendor work, 4.0 requiring two full cycles. Each comes with rationale. The team chooses a scenario that fits their capacity — not a number picked from optimism.
What this frees the human to do: Make the strategic call and explain it credibly to clients — backed by evidence, not optimism.
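A target that exists only in a slide is easy to renegotiate after the fact. One way to prevent that is to write each Step 3 target down as data before the cycle starts: the metric, the verified baseline, the goal, and the deadline. A minimal sketch with hypothetical field names; the numbers mirror the scenario.

```python
# A minimal sketch of a Step 3 target: metric, verified baseline, goal, deadline.
from dataclasses import dataclass
from datetime import date

@dataclass
class Target:
    metric: str
    baseline: float        # verified at Step 2, not assumed
    goal: float
    deadline: date

    def met(self, measured: float) -> bool:
        return measured >= self.goal

csat = Target("Client CSAT on P2 tickets", baseline=2.4, goal=4.0,
              deadline=date(2026, 9, 30))
survey_delivery = Target("CSAT survey delivery rate", baseline=0.40, goal=1.00,
                         deadline=date(2026, 9, 30))

print(csat.met(3.6))              # False: 3.6 closes most of the gap, but it is not 4.0
print(survey_delivery.met(0.96))  # False: 96% delivery is progress, not the target
```

At Step 6, "did we get there?" becomes a comparison against this record rather than against anyone's memory of what was promised.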
Step 4: How do we get there?
Step 4 produces the improvement plan, and the improvement plan feeds the CI register. The most common mistake is trying to solve everything at once. A CI cycle should have three to five focused initiatives, sequenced by dependency and priority, with named owners and realistic timelines. Budget, headcount, and vendor renegotiation can't be decided by the CI team alone. Identify them explicitly in Step 4 so they can be escalated before the cycle begins.
Three initiatives, sequenced: (1) Fix the broken CSAT automation rule, Week 2 — ITSM platform admin, no budget. (2) P2 escalation path with client communication template, Week 4 — Service Delivery Manager. (3) Vendor SLA tracking in monthly service review, Week 6 — Operations Director. Engineer onboarding: Q2 register item.
At Step 4, the value isn't in building the plan — it's in applying judgment to one.
With Service Evolution, AI sequences the improvement plan correctly the first time — the survey fix flagged as a prerequisite for every downstream initiative, Q4 candidates pre-loaded, dependencies mapped before the service delivery manager opens a spreadsheet. Review, adjust, approve. Planning time: minutes, not days.
Without it, the plan gets drafted manually, revised twice before the cycle starts, and is still overloaded with more initiatives than the team can execute.
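The sequencing logic itself is mechanical, which is exactly why it can be delegated, whether to a script or to AI. Assuming each initiative lists its prerequisites, a standard topological sort produces a dependency-correct order. A sketch with illustrative data mirroring the scenario:

```python
# A minimal sketch of dependency-ordered sequencing for a CI cycle, using
# the standard library's topological sort. Initiative names are illustrative.
from graphlib import TopologicalSorter

# initiative -> the initiatives it depends on
plan = {
    "Fix CSAT survey automation": set(),
    "P2 escalation path + client comms template": {"Fix CSAT survey automation"},
    "Vendor SLA tracking in monthly review": {"Fix CSAT survey automation"},
}

for initiative in TopologicalSorter(plan).static_order():
    print(initiative)
# The survey fix sequences first: every downstream initiative needs
# working measurement to prove its own effect.
```

What cannot be delegated is deciding which three initiatives belong in the cycle at all. That is the judgment Step 4 asks of the humans.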
Step 5: Take action
This is execution. The improvement initiatives designed in Step 4 go through the Change Enablement practice. Improvement changes are still changes and carry risk that must be assessed. The discipline at Step 5 is to resist adding to the plan while the cycle is running. New improvement ideas will surface; they go into the register as candidates for the next iteration. Scope discipline is the difference between a completed cycle and a perpetually in-progress one.
The automation fix goes in as a Standard Change and closes in three days — survey delivery climbs from 40% to 96% by Week 2. The escalation path and client communication template are drafted and reviewed with the engineering team. The vendor SLA conversation is scheduled for Week 3.
Before: Execution is tracked in status meetings and a shared spreadsheet that is out of date by Wednesday. The client communication template gets skipped under pressure. The vendor conversation scheduled for Week 3 slips to Week 6 with no one flagging it.
Service Evolution: AI confirms the survey fix is working in real time — delivery climbing to 96% by Day 3. Client communications go out from the approved template automatically at every P2 close, no engineer remembering required. When the vendor conversation passes its scheduled date with no logged outcome, AI flags it: "This item is 8 days past target. Recommend escalating before end of week."
What this frees the human to do: Focus on the high-judgment work — the vendor negotiation, the working session, the conversation that requires a human voice.
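The overdue flag described above is plain date arithmetic, and anything that can read the register can produce it. A minimal sketch, assuming hypothetical target-date and status fields:

```python
# A minimal sketch of the overdue check: scan the register, flag open items
# whose target date has passed. The fields and dates are illustrative.
from datetime import date

register = [
    {"item": "Vendor SLA conversation",    "target": date(2026, 1, 20), "done": False},
    {"item": "Fix CSAT survey automation", "target": date(2026, 1, 9),  "done": True},
]

today = date(2026, 1, 28)
for entry in register:
    days_over = (today - entry["target"]).days
    if not entry["done"] and days_over > 0:
        print(f"{entry['item']} is {days_over} days past target. "
              "Recommend escalating before end of week.")
```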
Step 6: Did we get there?
Measurement closes the loop. At a defined point after implementation, you run the same reports that established your baseline and compare. How you communicate results matters as much as what you report: executives respond to cost and risk impact, service desk engineers respond to workload quality, and clients respond to outcomes that affect their business. Step 6 produces communication tailored to the audience, not the same slide deck sent to every room.
CSAT: 3.6 at Q3 close. The escalation path cut P2 resolution time 22%; vendor SLA tracking triggered a contract renegotiation. At the QBR: "Satisfaction moved from 2.4 to 3.6. Resolution time improved 22%. Next quarter we target 4.0."
Before: The Step 6 report takes a week to compile manually. Half the QBR is spent explaining what the data means rather than discussing what to do next. The attribution question — whether the initiative moved the number or something else did — never gets a clean answer because no one was tracking the right variables throughout the cycle.
Service Evolution: AI has been measuring since Week 1. By cycle close it produces an insight, not a report: "CSAT moved from 2.4 to 3.6. The remaining gap to 4.0 is concentrated in two client segments. Here's what Q4 should focus on, and why." The QBR deck is ready before the meeting is scheduled.
What this frees the human to do: Walk into the client conversation with insights instead of data. That is a different kind of client relationship.
Step 7: How do we keep the momentum going?
The improvement is complete. The cycle closes. Step 7 is not a debrief. It is a handoff. Update the CI register with the result of what you completed and with the new visibility the cycle gave you. Every improvement cycle reveals things about your environment that the previous cycle couldn't see. Document them. They're the seeds of the next cycle. Then start again at Step 1. That is what makes it continual.
The register gets two new items from the vendor SLA gap, and the 3.6-to-4.0 gap becomes the Q4 target. At the next QBR, the team opens with closed cycle data and a target for what comes next. That is a different kind of conversation with a paying client.
When the cycle closes, Service Evolution initiates the handoff before you ask for it: closed cycle summary, lessons learned, a ranked list of Q4 register candidates with projected impact. The vendor scorecard and client segment items are already loaded when the Step 7 meeting starts. The meeting becomes a decision meeting — what do we commit to next quarter, and in what order?
That's the compounding effect. Without it, Step 7 is a 30-minute meeting that produces a to-do list in someone's notebook. Lessons aren't captured. The next cycle starts with the team trying to remember what they learned. ITIL 4 at its full potential, amplified at every step — that's what changes.
The two steps I see skipped most consistently are Step 2 and Step 7. Step 2 gets skipped because pulling a complete baseline is hard work and the team is eager to get to solutions. Step 7 gets skipped because the cycle is over and everyone moves on. Both omissions cost you the same thing: the evidence that improvement happened at all. You can't prove progress you did not measure, and you can't build on lessons you did not document. Those two steps, done well, are what separate a CI program that builds organizational confidence over time from a series of disconnected initiatives that leave no trace.
Building a CI Register That Actually Gets Used
The CI register is the central artifact of continual improvement. It is where improvement opportunities live between cycles — captured, categorized, prioritized, and owned. A functional register is the difference between a CI program that compounds over time and a collection of good intentions that never gets revisited.
Most organizations have some version of a register. A spreadsheet, a backlog in their ITSM platform, a shared doc someone built after a training course. The format is rarely the problem. The operating model around it is.
The 6 Non-Negotiables
1. What will improve? Specific, not vague.
   ✗ "Improve our feedback process"
   ✓ "Fix broken CSAT automation — surveys reaching only 40% of clients"

2. Business outcome. Ties back to value, not IT convenience.
   ✗ "Better data"
   ✓ "Without accurate surveys we can't measure progress toward a 4.0 CSAT target"

3. One named owner. Not "the team." One person accountable.
   ✗ "The IT team"
   ✓ "ITSM platform admin — survey automation fix"

4. Priority score. Weigh impact against effort, and decide; don't just collect.
   ✗ "Add everything, sort it out later"
   ✓ "High impact, zero budget — goes first"

5. Target date. A date makes it a commitment.
   ✗ "Q3" or "soon"
   ✓ "End of Week 2"

6. Success metric. Define it before you start, not after.
   ✗ "Better survey delivery"
   ✓ "100% of P2 tickets triggering a CSAT survey within 24 hours, 30 consecutive days"
The healthiest CI registers I've seen share one characteristic: they're public inside the organization. The backlog is visible to everyone, not just the CI owner and IT leadership. Any engineer, analyst, or team lead can see what's in the queue, what's being worked, and what has been completed.
That visibility does two things. It generates ideas — people submit improvements when they believe the register is real and not performative. And it creates accountability — items that sit without movement for too long become visible to everyone, which creates gentle but persistent pressure to either work them or formally defer them. A public register is a culture signal as much as it is a tool.
"Continual improvement is an ITIL practice. It's also an organizational muscle. Like any muscle, it atrophies without consistent use — and it takes intentional training to build."
The Operating Rhythm That Works
A CI register without a cadence is a parking lot. Items go in, nothing comes out, and after a few months the register is the graveyard described in the previous section. The operating rhythm is what prevents that. It is the skeleton that keeps CI alive between improvement cycles — the recurring meetings, the standing agenda, and the named deliverables that turn improvement from an intention into a practice.
Two meetings. That is all it takes to sustain a functional CI program. A 15-minute weekly stand-up and a 30-minute monthly review. Neither requires a project manager, a consultant on-site, or a dedicated CI budget. What they require is someone who owns them.
The Rising Star Model
Before you schedule either meeting, you need to find the right person to run them.
In most organizations, the CI Champion doesn't need to be the most senior person in the room. What they need is genuine investment in making things better. They're usually already visible — the engineer who keeps bringing up the same process gap in team meetings, the analyst who built a personal tracker because the official one wasn't working, the team lead who asks "why do we do it this way?" more often than everyone else. That person is your rising star, and they're the CI Champion you're looking for.
Don't hand them a title and a task list. Teach them how to run the improvement practice, then step back. Show them the 7-Step Model. Walk them through the register. Run the first three weekly stand-ups with them, then hand the facilitation to them while you participate as a contributor. The goal is a CI owner who doesn't need you in the room: someone who has internalized the rhythm and can sustain it when the external consultant is gone, when the manager changes, and when the firefighting gets heavy.
The rising star model also creates organizational resilience. A CI program owned by a single senior leader collapses when that leader leaves. A rising star who has been running the practice for six months carries the muscle with them through reorganizations, departures, and role changes.
The 15-Minute Weekly
Format: stand-up. Same time every week. The CI Champion runs it; the team attends — service desk engineers, analysts, whoever touches the service being improved. The agenda has one item: go around the room and ask each person for one thing they noticed this week that could be better. One observation, not a solution. The CI Champion captures it. The deliverable is a register update. Every stand-up ends with ideas logged before end of day, raw and unscored. Scoring and prioritization happen in the monthly. The weekly stand-up is about keeping the input pipeline open.
The 30-Minute Monthly
Format: working session, register open on screen. The CI Champion, the manager or IT director, and a consultant or coach if one is engaged. First ten minutes: review what moved last month — which items were worked, what was the outcome, did the metric move. Second ten minutes: work the backlog — score unscored items, prioritize the top three to five for the next cycle, name owners. Final ten minutes: barriers and budget. What's blocking current work, what decisions need to go up the chain. The deliverable is a prioritized, owner-assigned register ready for the next cycle.
The 30-minute monthly is the meeting I watch organizations try to cancel most. The reasoning is always the same: "We have too much going on right now, we will pick it back up next month." That sentence has killed more CI programs than any methodology failure I've ever seen. The months when there's too much going on are exactly the months the monthly review needs to happen — because those are the months when the register is fullest, the pressure to defer improvements is highest, and the CI Champion most needs the manager in the room making prioritization decisions. Protect the monthly. Especially when it is inconvenient.
The Stand-Up
- One idea from each person
- Log directly to the register
- Rising star runs it — not the manager
Deliverable: register entries
The Review
- Manager + CI owner review backlog
- Prioritize, categorize, assign
- Align on budget and barriers
Deliverable: prioritized register + decisions
The QBR
- IT director presents to business leadership
- Before and after — what improved, by how much
- This is how IT earns trust and budget
Deliverable: improvement results + next quarter targets
The Strategic Reset
- Leadership and IT review the full portfolio
- Reset priorities to match next year's strategy
- Loops directly back to Step 1 — What's the vision?
Deliverable: updated vision + reprioritized register
Tooling: Start Where You Are
One of the seven ITIL 4 guiding principles is "Start Where You Are." The principle is direct: don't discard what exists, and don't assume you need to build something new before you can begin. Observe what's already in place, assess whether it can serve the purpose, and build from there. The principle was written for service management broadly, but it applies nowhere more precisely than to CI tooling.
The most common reason organizations delay starting their CI program is tooling. They're waiting for the right ITSM platform. They're evaluating a new module. They're planning a migration. And while they wait, the queue grows, the CSAT number sits at 2.4, and the improvement that would have moved it stays in someone's head instead of a register.
You don't need a new tool to start CI. You need a register.
"Honor the past but don't be bound to it."
— Jeff Jensen, I Train IT Leaders, ITSM mentor and colleague
Start With What You Have
A spreadsheet is a register. Google Sheets, Excel, a shared Confluence page — any of these can hold the six non-negotiables: description, business impact, owner, priority score, target date, success metric. The format doesn't generate improvement. The discipline of filling it in and reviewing it does.
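Starting really can be that small. A minimal sketch that seeds a register as a CSV any spreadsheet can open, one row per item, one column per non-negotiable; the file name and the seeded row are placeholders.

```python
# A minimal sketch: seed a CI register as a CSV that any spreadsheet opens.
# Column names follow the six non-negotiables; the seeded row is illustrative.
import csv

COLUMNS = ["what_will_improve", "business_outcome", "owner",
           "priority_score", "target_date", "success_metric"]

with open("ci_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerow({
        "what_will_improve": "Fix broken CSAT survey automation",
        "business_outcome": "Cannot measure progress toward 4.0 CSAT without it",
        "owner": "ITSM platform admin",
        "priority_score": "high impact / low effort",
        "target_date": "2026-01-16",
        "success_metric": "100% of P2 tickets trigger a survey within 24h",
    })
```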
If your organization already runs an ITSM platform — ServiceNow, Jira Service Management, Freshservice, Zendesk, or any of the others — it almost certainly has a feature set that can support a CI register today. A backlog, a custom queue, a project board. You don't need to implement a new module. You need to decide what field maps to what non-negotiable and start entering items.
The right time to evaluate dedicated CI tooling is after your first two improvement cycles, not before. By then you know what the team actually needs: what fields get used, what views matter, where the friction is in the current setup. You're evaluating against real operational experience, not a vendor demo. That is when a tooling decision produces a good outcome.
When to Formalize
The signal that you've outgrown your starting tool is volume: the number of register items and the complexity of tracking dependencies across concurrent improvement cycles. A single-team CI program with 20 to 30 register items per quarter runs cleanly in a well-maintained spreadsheet. An MSP managing improvement cycles across 15 client environments simultaneously needs something with better filtering, role-based access, and reporting. The tool should match the operational reality, not the aspiration.
The Optimize and Automate Principle
A second ITIL 4 guiding principle is relevant here: Optimize and Automate. The sequence matters. Optimize first — make the manual process work well, eliminate waste, confirm the workflow is sound. Automate second — once the process is stable and proven, apply automation to reduce the human effort required to run it.
This is the order Service Evolution follows. Stand up CI manually. Run the 7-Step Model with discipline. Build the operating rhythm. Make the register functional and the cadence consistent. Then bring in AI to amplify what's working. Automating a broken process produces broken results faster. Optimizing first means the AI is amplifying signal, not noise.
Selling CI Internally
The service desk gets CI intuitively. The executive suite is the hard room.
Front-line practitioners and team leads understand CI intuitively. They live with the friction every day. They know exactly which process wastes their time, which metric doesn't reflect reality, and which recurring incident should have been solved six months ago. Getting them engaged in the improvement practice is mostly a matter of structure and momentum.
Executives are a different conversation entirely. And the way most CI champions approach that conversation is exactly wrong.
The Exec Gap
Most CI programs get introduced to leadership as a framework initiative. Someone presents the 7-Step Model, explains the CI register, outlines the operating cadence, and walks through the ITIL 4 definition. The executives nod politely, ask how long this will take to implement, and mentally file it under "IT project" — low urgency, low accountability, something to check in on at the quarterly review.
The framework isn't what's killing the conversation. The entry point is. Leading with methodology asks executives to care about how IT works internally, and most of them don't. What they care about is whether IT is helping the business achieve something that matters, and whether they can be held to account for it.
CI presented as a framework is a cost center project. CI presented as the mechanism by which IT delivers on the executive's definition of success is a different conversation.
The CSF Question
The unlock is a single question: "What does success look like for your team?"
Not "what metrics are you tracking" — that produces a list of KPIs they may or may not actually care about. Not "what are your IT priorities" — that produces a project list. The question is about success: the specific outcome that, if achieved, would make this executive feel that IT is genuinely delivering value to the business.
The answers to that question are always framed in business terms. Faster onboarding for new hires. Less time lost to system downtime during peak revenue periods. Customer-facing applications that don't generate support calls. The ability to close a quarter without a major IT incident derailing the finance team's reporting cycle.
Every one of those answers is a CSF — a Critical Success Factor — and every one of them is improvable through the CI practice. The CI Champion's job after that question is to connect the dots: "Here's the current state on that metric. Here's the gap. Here's the improvement initiative that closes it. Here's how long it will take and what we need from you to get there." That is a conversation executives engage with. They're not being asked to care about CI. They're being asked to define success — and then being shown the mechanism that delivers it.
Focus on Value
The ITIL 4 guiding principle behind this approach is Focus on Value. Everything the organization does should link, directly or indirectly, to value for its stakeholders. Focus on Value isn't a positioning exercise — it's a discipline. The improvement work has to be genuinely connected to outcomes that matter, and that connection has to be visible to everyone, from the CI Champion to the executive who approved the budget. (AXELOS, ITIL 4 Foundation, 2019)
When CI is untethered from value — when the register is full of internally interesting improvements that have no visible connection to business outcomes — it loses executive support quickly and deserves to. The discipline of starting every improvement initiative with the CSF question is what keeps the practice anchored to the work that actually matters.
The CSF question has never failed me in an executive conversation. Not once. I've walked into rooms where the IT director told me in advance that the executive team was checked out on IT improvement, that they viewed it as overhead, that they wouldn't engage. I asked the question anyway: what does success look like for your team?
Every time, the answer was specific, it was tied to something the business was actively trying to achieve, and it had a measurable dimension. And every time, the conversation shifted. Not because CI suddenly became interesting — because the executive finally saw the connection between what IT was doing and what they were accountable for delivering. That question is the honest starting point for any improvement initiative that needs organizational support to survive.
CI as Culture
Continual Improvement is a practice. But practices only survive inside organizations that have the culture to sustain them. You can install the 7-Step Model, stand up a register, run the weekly stand-up, and execute a flawless first cycle and still watch the program collapse six months later because the organization's default response to improvement work is resistance.
Culture runs on incentives: what gets rewarded and what gets punished. In an organization where firefighting is celebrated, where heroics earn recognition, and where anyone who raises a process problem is handed ownership of fixing it without support, the culture actively works against CI. Not out of malice. Out of momentum. That momentum has to be named, understood, and interrupted.
What Dev Teams Already Know
Software development teams figured out the improvement loop before ITSM did. The Scrum Sprint Retrospective and SAFe's Inspect and Adapt event are both CI meetings by another name — scheduled, structured moments to assess performance and adjust. The DORA 2024 State of DevOps report found that elite-performing engineering teams deploy more frequently and recover faster from failures (DORA, 2024). In my experience, the differentiator isn't talent — it's the discipline of continuous learning built into how those teams operate. ITSM practitioners have the same opportunity. When both operations and development are running improvement loops, the compounding effect accelerates: incidents identified in operations feed into development's next sprint, and process failures in change management inform architectural decisions upstream.
The Anti-CI Diagnostic
Before you invest in standing up the practice, it is worth assessing whether your organization has the cultural antibodies that will work against it. These are not fatal. Culture changes, but the antibodies need to be named before they can be addressed.
We put the full diagnostic in a separate guide: five patterns that predict CI failure, and what each one tells you about where to start. If more than two sound familiar, that's your starting point.
The organizations that build the strongest CI cultures are almost never the ones that started with the best conditions. The most durable programs I've seen came out of teams that were stretched thin, running reactive, and deeply skeptical that improvement work was worth the time. What changed wasn't the workload. It was one person, usually a rising star, who ran the first cycle well enough that the team saw something move. One closed loop. One metric that actually improved. One register item that went from "idea" to "done." That first visible win is worth more than any training program or methodology rollout. Culture follows evidence. Give the team evidence.
Your First 30 Days
The best time to start CI is before you feel ready. Not after the new platform is live. Not after the reorg settles. Not after the next quarter when things calm down. The first 30 days of a CI program don't require a perfect environment. They require a decision that improvement work starts now, alongside whatever else is in motion.
For a solo IT director, that means opening a spreadsheet and entering the first three register items before this article ends. For a consultant walking into a new engagement, it means standing up the register and scheduling the first weekly stand-up within the first two weeks, not as a separate workstream, but as part of how the engagement runs from day one.
One loop, run with discipline, is more valuable than a comprehensive CI program planned but never executed.
The Solo Path
If you're a one-person IT department, the operating rhythm looks different but the model doesn't change. There's no team to go around in the weekly stand-up — that stand-up becomes a 10-minute personal review. One thing you noticed this week that could be better. One item logged. You're both the CI Champion and the team.
The monthly review becomes a self-audit: what moved, what's blocked, what needs a decision? If you have a manager or business owner above you, bring them into the monthly. The CI program still needs a sponsor, even if the team is a team of one. The register is still public — shared with whoever has a stake in IT performance. Transparency doesn't require a large team to be meaningful.
The Consultant Path
If you're a consultant or an MSP account manager, the first 30 days are where you either embed CI into the engagement or lose the window. Clients in the early phase of an engagement are open to new structures in a way they won't be six months later, when habits have formed and the team has decided how things work.
Stand up the register in Week 1. Schedule the weekly stand-up before the end of Week 2. Run the first monthly review with the manager before Day 30. By the time the larger engagement work is delivering results, the CI practice is already running in parallel — not something to introduce later, but something that was there from the beginning. The organizations that sustain CI after a consultant exits are the ones where CI started before the consultant became load-bearing. Build it early enough that it doesn't need you to keep running.
If you want to bring in help to do this at scale, let's talk. Before that conversation, the ITSM ROI Calculator can help quantify what your current process gaps are costing each year.
AI and Service Evolution
Every technology adoption conversation eventually arrives at the same question: where does AI fit? For CI, the answer is not complicated, but the sequence matters, and the sequence is what most organizations get wrong.
The temptation is to lead with AI. To buy the platform, activate the feature, and expect the improvement cycle to run itself. The organizations that do this skip the most important part: building the understanding of how improvement actually works before handing any of it to a machine. An AI that surfaces patterns in ticket data is only useful to a team that knows what to do with those patterns. An AI that drafts an improvement plan is only useful to a CI Champion who can evaluate whether the plan is sequenced correctly. Without the manual foundation, the AI output lands in a vacuum.
Manual First, Always
Running the 7-Step Model by hand — pulling the baseline yourself, scoring the register yourself, running the stand-up yourself — is not inefficiency. It is the learning process. It is how you internalize what Step 2 actually requires, why Step 4 dependencies matter, what a good Step 6 attribution looks like versus a misleading one. That understanding is what makes you a competent consumer of AI output. Without it, you can't tell the difference between an insight and a plausible-sounding error.
That gap doesn't close as AI improves. The judgment required to lead an improvement program — deciding what matters, reading organizational resistance, making the call when the data points in two directions — will always be human work. AI accelerates the inputs to that judgment. It doesn't replace the judgment itself.
Where AI Earns Its Place
Of the seven steps, two produce the most disproportionate return when AI is applied well.
Step 2 — Where are we now? — is the most data-intensive step in the model and the one most likely to be incomplete or rushed when done manually. A complete four-dimension baseline requires pulling data from multiple systems, auditing process flows, reviewing vendor relationships, and comparing performance by team member — work that can take days and still leave gaps. AI runs that analysis continuously and surfaces the gaps before a human has to go looking for them. The broken survey automation in the MSP scenario wasn't discovered by a manual audit. In a Service Evolution environment, it would have been flagged weeks before the improvement cycle began. A better baseline produces better targets, better plans, and better results at every downstream step.
Step 6 — Did we get there? — is where organizations most often produce misleading results. A manual end-of-cycle review compares a before snapshot to an after snapshot and calls the difference improvement. That attribution is almost always incomplete, because the environment changed during the cycle and the snapshot can't account for it. AI measuring continuously from the baseline forward produces attribution-quality data: it knows what moved, when it moved, and what was happening in the environment when it moved. The difference between "CSAT improved" and "CSAT improved because of this specific initiative, not because of seasonal volume reduction" is the difference between a story and a fact. Steps 2 and 6 are where the story gets replaced by the fact.
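The gap between those two kinds of answers can be sketched in a few lines. Two snapshots give you a delta and nothing else; a dated series plus the initiative's go-live date at least shows whether the movement coincides with the change. A deliberately simplified illustration with made-up numbers, not a statistical attribution method:

```python
# A minimal sketch contrasting a two-snapshot delta with a dated series.
# Weekly CSAT values and the go-live date are made up for illustration.
from datetime import date

series = [  # (week ending, CSAT)
    (date(2026, 1, 5), 2.4), (date(2026, 1, 12), 2.4),
    (date(2026, 1, 19), 2.9), (date(2026, 1, 26), 3.2),
    (date(2026, 2, 2), 3.5), (date(2026, 2, 9), 3.6),
]
go_live = date(2026, 1, 14)  # when the initiative actually shipped

# Snapshot view: all you can say is that the number moved.
print(f"Snapshot delta: {series[-1][1] - series[0][1]:+.1f}")

# Series view: did the movement start when the initiative landed?
before = [v for d, v in series if d < go_live]
after = [v for d, v in series if d >= go_live]
print(f"Mean before go-live: {sum(before) / len(before):.2f}")
print(f"Mean after go-live:  {sum(after) / len(after):.2f}")
# A step change aligned with go_live supports attribution; a drift that began
# earlier does not. Real attribution also needs the environment (seasonality,
# ticket volume) tracked through the cycle, which is the point made above.
```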
"Teams need to stand up CI manually first, learn the inner workings, and then figure out where to bring in AI to amplify and initiate better, faster, and more trusted improvements — while they focus on the issues that matter: giving customers an incredible experience and making the hard decisions."
— Ryan Holzer, ITIL Expert & Principal ITSM Consultant
That sequence — manual discipline first, AI amplification second — is Service Evolution. Not AI instead of ITIL 4. Not AI bolted onto a broken process. AI amplifying a practice that has already been built, understood, and run with discipline. The engine behind the improvement is human judgment. The fuel AI provides is better data, faster analysis, and more consistent execution of the steps that humans find hardest to sustain under pressure.
The organizations that will compound their improvement capability fastest are not the ones that adopt AI first. They're the ones that build the manual practice well enough that they know exactly what to hand to AI, and what to keep.
• AXELOS — ITIL 4 Foundation, 2019 (Continual Improvement practice definition, 7-Step CI Model, Four Dimensions of Service Management, guiding principles: Focus on Value, Start Where You Are, Optimize and Automate, Collaborate and Promote Visibility)
• AXELOS — ITIL 4 Create, Deliver and Support (CDS), 2020
• Sinek, Simon — Start With Why, Portfolio/Penguin, 2009 (Golden Circle, purpose-led leadership)
• Schwaber, K. & Sutherland, J. — The Scrum Guide, 2020 (Sprint Retrospective) — scrumguides.org
• Scaled Agile Framework — Inspect and Adapt — framework.scaledagile.com
• DORA — State of DevOps Report 2024 (elite team improvement cadence and deployment frequency)
• Jensen, Jeff — I Train IT Leaders — itrainitleaders.com (practitioner principle: "Honor the past but don't be bound to it")
Ryan Holzer is an ITIL Expert and the Founder & Principal ITSM Consultant at Tideline Insights, serving IT leaders across the U.S. He is also the founder of the Florida ITSM Meetup Series.