Stewarding AI: How to Build Responsible Principles, Workflows, and Practices
‘AI’ is not a tool to deploy. It's a set of relationships to steward.
Most leaders I talk to know AI is reshaping their organizations — but they’re approaching it backwards.
- They're asking "What's our AI strategy?" when they should be asking "What future are we trying to create, and does AI help or hinder that?"
- They're deploying tools without understanding the sociotechnical systems those tools shape and are shaped by.
- They're automating workflows without recognizing what gets lost — human judgment, meaning-making, accountability, learning-through-practice, adaptive capacity.
- They’re asking “Is the tool performing as expected?” not “What organizational practices is the tool patterning?”
Six months later, they discover that roles have blurred, responsibility has drifted, and the "efficiency gains" have hollowed out exactly the capabilities they need most.
The result? Organizations that move faster but think less. That optimize everything but understand nothing. That become — without ever intending to — complicit in the environmental costs of data centers, the exploitation of data laborers, and systemic harms they never signed up for.
Cohort 1 is open for enrollment. It will take place July 3, 10, 17, and 24, from 9:00 to 11:00 am PT. Enrollment is capped at 30 participants. Secure your spot today.
Full Price: $1200
Annual Untangled members pay $950.
I reserve a small number of spots at a cost of $750 for participants who can't afford the full price. If that is you, please get in touch.
What you get
I want to be concrete about what's actually in this course — because "you'll leave with actionable insights" is the kind of thing every course promises and almost none deliver.
- You'll leave with a values-aligned approach to AI adoption — not principles on a page, but a diagnostic process for assessing whether a specific AI use advances your strategy, aligns with your values, and avoids complicity with harms you're not willing to accept.
- You'll have the full STEWARD framework: a seven-step process for redesigning workflows that take advantage of what machines do well — pattern detection, retrieval, bounded execution — while protecting what makes us irreplaceably human: judgment, meaning-making, accountability, learning-through-practice.
- You'll have stewardship practices — feedback loops that protect human judgment, sense-making that nurtures adaptive change, and the ability to translate across difference (because everyone is using the same words but talking past one another!) — designed to compound over time, so the work doesn't stop when the course does.
- You won't leave wondering where to start. You'll have specific actions mapped to this week, this month, and this quarter — built around your actual organization and your actual workflows.
You’ll also receive:
- 8 hours of interactive, lab-style sessions across four weeks. Not lectures. We work through real problems, practice the frameworks, and leave time for the questions that don't fit neatly into the curriculum.
- A Playbook with exercises and tools designed for ongoing use, not filed away after week one.
- A coaching session with me, one-on-one, where we work through whatever is specific to your situation — the workflow you can't figure out, the stakeholder who's blocking you, the decision you keep postponing.
- Private membership in a peer cohort of tech & society leaders — people from organizations like New_Public, Discord, Center for Tech & Civic Life, Siegel Family Endowment, Annie E. Casey Foundation, Stanford's Digital Civil Society Lab, and others — who are grappling with exactly the same questions you are. This is the part people consistently tell me they didn't expect to value as much as they do.
- Bonus: you also get a free subscription to Untangled for a year!
Who is this for?
This course is for leaders who are making decisions about AI — or who are advising people who are.
That includes:
- CEOs and Executive Directors who are setting organizational direction, and C-suite leaders — CTOs, COOs, Chief Risk Officers — who are managing the operational reality.
- People who've been handed the mandate "we need an AI strategy" or “we need AI principles that reflect our values” and are now staring at a blank page.
- Function leaders in ops, legal, HR, product, and trust & safety whose teams are already using AI — but who don't yet know how to guide those workflows.
What they all have in common: AI adoption is already underway or imminent in their organization, and they feel the tension between "move fast" and "do this right." They're asking practical questions like:
- Should we actually do this?
- How do we redesign this workflow without breaking what makes us good?
- Who's accountable when it goes wrong?
And they're exhausted by AI hype that never quite connects to the specific, concrete decisions they have to make on Monday morning.
This course is not for people who want to learn how the models work under the hood. You don't need to be a data scientist or an ML engineer. You don't need to have already decided whether AI is good or bad — honestly, if you've already decided, this course might unsettle you a little, and that's fine too.
How is the course organized?
I've organized the course around my framework for stewarding responsible AI adoption.
A STEWARD is …
- someone responsible for taking care of something
- a guardian or keeper
Most AI training teaches you what the technology can do. STEWARD teaches you what you need to do to use it strategically and responsibly.
Most AI consultants help you automate faster. STEWARD helps you preserve the capacities that make your organization adaptive and human.
Most AI policies are documents that sit on shelves. STEWARD gives you processes, tools, and practices that change daily behavior.
I call it STEWARD — not because I love acronyms, but because the word captures something most AI frameworks miss: that this is ongoing work, not a one-time decision, that requires responsibility and care.
- See the System
- Trace What Must Stay Human & What Machines Do Well
- Evaluate New Risks, Accountability Shifts, and Loss
- Workflow Redesign
- Adjust Interfaces & Tempo
- Review What the System Teaches
- Detect Drift & Design Corrective Moves
S — See the System (not just the tool)
Before you ask "What's our AI strategy?" you need to ask "What future are we trying to create?" AI doesn't enter a neutral environment. It enters a web of relationships, power dynamics, information flows, and existing incentives — and it reshapes all of them. Most leaders skip this step entirely. That's a big ol' problem — you'll learn how to see it.
T — Trace What Must Stay Human
Not everything can or should be delegated to a machine. Meaning-making. Judgment under uncertainty. Ethical discernment. The learning that only happens by doing the work. If a task requires deciding why, whether, or for whom — it should stay human-led. You'll learn a process for making these critical, nuanced distinctions.
E — Evaluate New Risks, Accountability Shifts, and Loss
Three questions matter here: Does the use of AI introduce new risks and harms? Does it shift accountability in ways we haven't named? Does it cause long-term loss of capability we'll miss later? I'll help you evaluate these questions and give you my framework for stewarding change amidst real tradeoffs.
W — Workflow Redesign
This is where accountability gets concrete. Who is responsible at each decision point — not "the system," but a specific role? What happens when the machine is wrong? Where does responsibility sit ethically, socially? Which decisions are irreversible and must stay human? Don't bolt AI onto an existing process. I'll teach you how to redesign the whole thing.
A — Adjust Interfaces & Tempo
How a tool is designed shapes how people think. Interfaces that make AI outputs look authoritative teach passive acceptance. As a result, stewardship often means making uncertainty visible, making alternatives explicit, and using time as a design variable. I'll teach you how to do this.
R — Review What the System Teaches
This is the step most organizations skip — and where the most damage accumulates. AI teaches people how to work through the act of using it. I'll help you interrogate not just whether the tool is working as intended, but what it's training your people to do. I'll also offer you tools for protecting what makes humans irreplaceably unique.
D — Detect Drift & Design Corrective Moves
Drift doesn't announce itself. Roles blur gradually. Judgment atrophies quietly. The question to ask isn't "What changed?" but "What's changing?" I'll teach you how to bring diverse actors together to sense change from multiple vantage points, translate across difference, and build the relational infrastructure to adapt together.
Most organizational approaches ask: what can AI do or what can we automate?
STEWARD asks: what kind of organization do we become when we use it?
That's a much harder question. It's also the right one.
If you want to work through this framework with your own organization — with real workflows, real accountability maps, and a community of leaders grappling with the same questions — that's exactly what the course is designed to do.
What else?
Price: $1200
- Annual Untangled members pay $950.
- I reserve a small number of spots at a cost of $750 for participants who can't afford the full price. If that is you, please get in touch.
Next Cohort: July 3, 10, 17, and 24, from 9:00 to 11:00 am PT
Other Options
- Book the training for your company or organization. Get in touch.
Refund Policy
Plans change. I understand that — but I also want to tailor this course to the participants as much as I can. To minimize disruption, you can receive a 50% refund up to three weeks before the course starts. After that, no refunds will be offered.
FAQs