
New Episode - From Licenses to Lift: Building an AI Operating Model That Ships

  • Writer: Angelo Materlik
  • Nov 19
  • 4 min read

If you’ve ever watched an AI initiative stall after the first demo, you know the truth: tools don’t change outcomes. Ownership, process, and patience do. The difference between “we have licenses” and “we ship improvements every sprint” is an operating model—clear roles, simple rules, and a cadence that respects how real work happens. Listen now to our latest Born & Kepler Orbit episode, which uncovers how AI moves from ambition to daily operational reality.


Insight

The fastest way to make AI dependable is to treat it like any other capability that spans people, data, and software. That means designing for two things at once: central consistency and local context. You need shared standards so efforts are safe and reusable. You need embedded ownership so real-world edge cases are accounted for. And you need expectations that make room for learning without turning the organization upside down.


A practical model has five parts:


1) Safe, shared access
Before a single automation goes live, give people guardrails. Approve the tools, set sensible defaults, and explain the basics of privacy and security in plain language. What can people paste into prompts? What must be reviewed by a human? Which data is off-limits? Where are interactions stored, and for how long? When you don’t provide this scaffolding, the work goes underground; shadow usage creeps in, and reliability suffers. When you do, people experiment with confidence and a shared baseline.
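
To make this concrete, here is a minimal sketch of what that scaffolding can look like once it is written down rather than assumed. The tool names, data classes, and retention period are illustrative placeholders, not recommendations.

```python
from dataclasses import dataclass, field

@dataclass
class AIUsagePolicy:
    # Tools people are allowed to use; everything else is out of scope.
    approved_tools: set = field(default_factory=lambda: {"internal-chat", "code-assistant"})
    # Data classes that must never be pasted into prompts.
    prohibited_data: set = field(default_factory=lambda: {"customer_pii", "payment_details", "credentials"})
    # Workflows whose outputs always require a human review step.
    human_review_required: set = field(default_factory=lambda: {"customer_email", "contract_clause"})
    # Where interactions are stored, and for how long.
    log_store: str = "internal-audit-bucket"
    retention_days: int = 90

    def check_prompt(self, tool: str, data_tags: set) -> list:
        """Return a list of policy violations for a proposed prompt."""
        violations = []
        if tool not in self.approved_tools:
            violations.append(f"tool '{tool}' is not on the approved list")
        for tag in sorted(data_tags & self.prohibited_data):
            violations.append(f"data class '{tag}' must not be sent to the model")
        return violations


policy = AIUsagePolicy()
print(policy.check_prompt("personal-notes-app", {"customer_pii", "meeting_notes"}))
# -> the unapproved tool and the prohibited data class are both flagged
```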


2) A central enabling role
Most organizations benefit from a role that is equal parts process cartographer and platform steward. Call it an AI operations architect or automation lead. They map workflows, choose connectors, manage permissions, and build templates and playbooks that make each new automation cheaper than the last. They don’t own every use case; they make it easier for everyone else to own theirs. Think of them as the maintainers of the internal toolkit, not gatekeepers.


3) Embedded champions in key functions
Support knows where triage really breaks. Finance ops knows which reconciliation steps are predictable and which are messy. Field services know what information actually reaches crews. Champions in these functions become the translators between central enablement and day-to-day reality. They pilot, give feedback quickly, and own adoption. Their success metric is not a slide; it’s a smoother Tuesday.


4) Product-focused AI inside product and engineering
If you ship software, AI features belong inside product and engineering, not in a separate lab. These teams build user-facing capabilities—draft proposals, smart search, routing, retrieval—on top of reusable components. They pin model versions, write evaluation tests, and treat AI like any other shipping dependency. The lab mind-set is useful for exploration; the product mind-set is essential for reliability.
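
As an illustration of what “treat AI like any other shipping dependency” can mean in practice, here is a minimal sketch: the model version is pinned in code and a small evaluation test guards behavior in CI. The classify_intent wrapper, model identifier, and test cases are assumptions made for the sketch, not tied to any particular provider.

```python
# The model version is pinned in code and changed only through review,
# like any other dependency bump.
MODEL_VERSION = "intent-classifier-2024-06-01"

def classify_intent(ticket_text: str, model_version: str = MODEL_VERSION) -> str:
    """Stand-in for the real model call behind whichever provider the team uses.
    A keyword matcher is used here so the sketch runs end to end."""
    text = ticket_text.lower()
    if "charged" in text or "invoice" in text:
        return "billing"
    if "crash" in text or "error" in text:
        return "bug_report"
    return "how_to"

# Evaluation cases curated by the support champion; refreshed as new edge cases appear.
EVAL_CASES = [
    ("I was charged twice this month", "billing"),
    ("The app crashes when I upload a file", "bug_report"),
    ("How do I add a new user to my team?", "how_to"),
]

def test_intent_routing_accuracy():
    """Runs in CI: a model or prompt change that degrades routing fails the build."""
    correct = sum(classify_intent(text) == expected for text, expected in EVAL_CASES)
    accuracy = correct / len(EVAL_CASES)
    assert accuracy >= 0.9, f"routing accuracy fell to {accuracy:.0%} on {MODEL_VERSION}"

test_intent_routing_accuracy()
```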


5) Expectations and change management
AI shifts workflows. That requires slack. Set a simple rule: every sprint, each team attempts at least one tangible improvement with AI assistance. If it fails, fine—log why, and try again. Pair that rhythm with disciplined change management: for any cross-functional change, document the old way, the new way, who’s affected, what success looks like, and how to roll back. Treat updates to prompts, templates, and routing logic like software changes—versioned, tested, and reversible.
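
One lightweight way to make that reversibility real is to record every prompt or routing change the way you would record a code change. A minimal sketch follows, with hypothetical field names and values.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PromptChange:
    """One versioned, reversible change to a prompt, template, or routing rule.
    Field names are illustrative."""
    change_id: str
    workflow: str            # e.g. "support_triage"
    old_version: str         # the old way, kept so rollback stays cheap
    new_version: str         # the new way being rolled out
    affected_teams: tuple
    success_metric: str      # what "better" means, agreed before shipping
    rollback: str            # how to get back to the old way

change = PromptChange(
    change_id="CH-2024-061",
    workflow="support_triage",
    old_version="triage-prompt-v3",
    new_version="triage-prompt-v4",
    affected_teams=("support", "product"),
    success_metric="misrouted tickets per week drop below 15",
    rollback="repin triage-prompt-v3 and rerun the evaluation suite",
)

def roll_back(change: PromptChange) -> str:
    """Reversibility is just another field on the record, not an afterthought."""
    return f"Reverting {change.workflow} to {change.old_version}: {change.rollback}"

print(roll_back(change))
```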


Examples

- Support triage: The automation lead provides a catalog of common intents and a routing template. The support champion curates examples and edge cases. Product integrates a draft-mode triage assistant into the ticketing UI. Every week, the champion reviews misrouted tickets and updates examples, and the model’s evaluation tests are refreshed. Within a month, first-response times drop, and the handoff to humans carries more context.


- Proposal drafting: Product ships a proposal generator that pulls from a canonical catalog and clause library. Sales operations owns the templates. The automation lead locks down identity and access so pricing is scoped by role. Each sprint, the sales team logs how many proposals were produced in draft, how many required heavy edits, and where the generator broke constraints. Over time, the edit distance shrinks as catalog structure improves; a simple way to track that signal is sketched after the examples.


- Payment reconciliation: Finance ops defines deterministic rules for the majority of matches and a short list of exception types where an assistant can suggest likely resolutions. The automation lead builds the workflow with logging and thresholds. The finance champion monitors the “needs review” queue and feeds back new patterns. Product exposes a compact UI where suggestions are visible, traceable, and easy to accept or override. The result is less back-and-forth and a smaller pile of end-of-month surprises.
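
Here is a minimal sketch of how those pieces can fit together: deterministic rules first, suggestions shown only above a confidence threshold, and everything else routed to the review queue. The matching rule, threshold, and suggestion stub are assumptions for illustration, not a real assistant.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("reconciliation")

# Below this confidence, no suggestion is shown; a human decides unaided.
SUGGESTION_THRESHOLD = 0.85

def deterministic_match(payment, invoices):
    """Rules first: exact amount and reference match, no model involved."""
    for inv in invoices:
        if inv["amount"] == payment["amount"] and inv["reference"] == payment["reference"]:
            return inv
    return None

def suggest_match(payment, invoices):
    """Stand-in for the assistant: a naive amount-only heuristic with a made-up score."""
    for inv in invoices:
        if abs(inv["amount"] - payment["amount"]) < 0.01:
            return inv, 0.90
    return None, 0.0

def reconcile(payment, invoices):
    matched = deterministic_match(payment, invoices)
    if matched:
        log.info("auto-matched payment %s by rule", payment["id"])
        return {"payment": payment["id"], "invoice": matched["id"], "status": "matched"}
    suggestion, confidence = suggest_match(payment, invoices)
    if suggestion and confidence >= SUGGESTION_THRESHOLD:
        # Visible, traceable, and easy to accept or override in the review UI.
        log.info("suggesting %s -> %s (confidence %.2f)", payment["id"], suggestion["id"], confidence)
        return {"payment": payment["id"], "invoice": suggestion["id"],
                "status": "needs_review", "confidence": confidence}
    log.info("payment %s sent to the manual review queue", payment["id"])
    return {"payment": payment["id"], "invoice": None, "status": "needs_review"}

invoices = [{"id": "INV-7", "amount": 120.00, "reference": "PO-881"}]
print(reconcile({"id": "PAY-1", "amount": 120.00, "reference": "PO-881"}, invoices))
print(reconcile({"id": "PAY-2", "amount": 120.00, "reference": "unknown"}, invoices))
```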

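And for the proposal-drafting example above, the edit-distance signal needs nothing more exotic than the standard library; a minimal sketch, with made-up draft and final text:

```python
import difflib

def edit_ratio(draft: str, final: str) -> float:
    """0.0 means the draft shipped untouched; 1.0 means it was fully rewritten."""
    return 1.0 - difflib.SequenceMatcher(None, draft, final).ratio()

draft = "License tier: Pro. 50 seats at list price, 12-month term."
final = "License tier: Pro. 50 seats with a 10% volume discount, 12-month term."
print(f"share of the draft that was rewritten: {edit_ratio(draft, final):.0%}")
```
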

Implications

- Fewer single points of failure: With central enablement and local champions, knowledge spreads. When one person leaves, playbooks remain.

- Better safety with less ceremony: Identity, permissions, and model usage policies are handled once and reused. Teams focus on the work, not vendor legalese.

- Measurable progress: The “one improvement per sprint” rule reveals what’s actually shipping. Logs and evaluation harnesses turn anecdotes into clear decisions.

- Sustainable costs: Standardized connectors, reusable prompts and templates, and model choice discipline keep spend predictable—and allow teams to pick smaller, cheaper models for structured tasks.


A Week-to-Quarter Cadence That Works

- Weekly: Champions review logs from their automations. They collect 5–10 examples of where the system succeeded and where it stumbled. Quick fixes happen immediately.

- Biweekly or once per sprint: Teams demo one AI-assisted improvement, successful or not. Lessons learned are documented in the playbook.

- Monthly: The automation lead and product review evaluation metrics, costs by workflow, adoption stats, and incident reports. They agree on 1–2 changes that will make the next month’s improvements cheaper.

- Quarterly: Leadership reviews ROI across a small portfolio of use cases, removes blockers (e.g., missing data integrations), and reaffirms the narrow scope for anything that acts autonomously.


Common Pitfalls—and How This Model Mitigates Them

- Shadow AI. Mitigated by safe access and norms.

- Lab-to-nowhere. Mitigated by product ownership of shipping features.

- Fragmented prompts and duplicative efforts. Mitigated by central templates and playbooks.

- Overreach into autonomy. Mitigated by clear scoping, thresholds, and audit trails.

- “Set and forget.” Mitigated by weekly log reviews and evaluation harnesses.


Takeaway

Operationalizing AI is not about anointing a guru or buying a bundle of licenses. It’s about the quiet work of building a system where improvements are cheap to attempt, safe to ship, and easy to measure. Give people guardrails. Appoint an enabler. Empower champions. Put AI inside product. Make room for learning, and treat process changes like software changes. Do this, and the organization starts to move—not in leaps that fade, but in steps that compound.

You won’t need to announce a transformation. The new reality will announce itself: fewer handoffs, faster drafts, cleaner data, calmer weeks. That’s what an operating model buys you—lift, not noise.



