Beyond 'Deployed but Unused'
Copilot rolled out company-wide, but usage sits at 10%. Onboarding ended with one briefing, and the winning prompts are buried in someone's personal Notion. This isn't a skills problem — it's a missing organizational design: there's no natural adoption loop.
How to reproduce a natural adoption loop with "AI Champion × monthly LT × hackathon × KPI linkage," covered from both the deploying side (IT / DX / line leads) and the provider side (AI product vendors).
"Deployed but unused" isn't a staff skill problem — it's a missing natural-adoption-loop design on the organization side. AI Champion × monthly LT × hackathon × KPI linkage reproduces that loop inside a closed environment.
- "Deployed but unused" is an organizational design problem, not a staff skill problem.
- What made Claude Code / Codex spread on social was three things: (1) individuals can try them, (2) there's a place to post, (3) the posters get credit.
- The toolkit to reproduce that inside a closed environment: AI Champion program / monthly LT / hackathons / KPI linkage.
- The balance between mandate and self-direction decides outcomes. Light KPI linkage + recognition, evaluated at the team level.
The "we've all seen this" symptoms and what's really going on
>1-1 Failure patterns we keep seeing
- Copilot rolled out company-wide, usage stuck at 10%: licenses handed out, no one touches it
- It ends with the kickoff training: the briefing gets buzz, but three months later only 1 in 10 has opened the tool — it never lands in the team's workflow
- AI lead and field teams are disconnected: the lead's roadmap doesn't line up with the day-to-day problems the field actually has
- Winning examples never get reused: great prompts scatter across personal Notion / Slack, and six months later nobody can reproduce them
>1-2 What's really going on
This is not a skill gap on the part of individual employees — it's an organizational design problem:
- "A reason to use it" and "a system where using it gets rewarded" are both missing
- No psychological room to pay the learning cost (the evaluation axes are still old-school)
- Low psychological safety to fail
→ The answer is not pushing a product, but designing a natural adoption loop.
Why Claude Code / Codex spread "on their own"
Before fixing internal AI adoption, observe the natural adoption happening outside. The reason Codex / Claude Code / Cursor exploded is that the five-step loop in Fig.0 kept turning — and the preconditions that keep it turning boil down to three.
On top of that, Codex / Claude Code open-sourced the CLI / Agent layer. This isn't just transparency — it's free distribution of operational know-how: how to run an AI agent, how to execute tools safely, how to design prompts. Dify and n8n have the same structure.
(1) Individuals can use it (low barrier to trying)
(2) There's a place to post about it (social / OSS / communities)
(3) Posters get credit (followers / job offers / side income)
→ Inside a company, these three are not in place. That's why nothing spreads.
The approach for the side deploying AI inside the company
For IT / DX leads / middle managers / AI Champion candidates
Three mechanisms to reproduce "the Claude phenomenon" inside a company
>3-1 AI Champion program (foundation)
- In each department, appoint AI-savvy staff as official Champions (self-nomination is required)
- Role: run a 30-min study session once a month / share prompts in a dedicated Slack channel (#ai-use-cases) / mentor peers
- Incentives: recognition, internal points, budget priority, chances to present to executives, the felt experience of becoming an "internal star"
Success case: Sumitomo Corporation's "Copilot Champion" program → ¥1.2B/year cost reduction, 10,000 person-hours/month saved
Minimum viable config: start with 2-3 people → roll out in earnest after 3 months
>3-2 Monthly use-case lightning talks (LT)
The mechanism with the fastest payoff.
- Format: lightning talks, 5-10 min per person, held monthly
- Mandatory presentation template: Problem → Prompt → Result → Time saved
- Non-engineers welcome (cases from sales, finance, and HR resonate most)
- Chain effect: when a presenter is featured as a "win" in the company newsletter, applicants for the next round double
- Examples: SmartNews Knowledge Share (196 attendees); at Sen Co., Ltd. the marketing team runs it, which shows IT doesn't have to lead
>3-3 Hackathons (use-case generator)
Run them as a two-layer setup: study sessions = knowledge sharing, hackathons = use-case generation.
| Topic | Recommendation |
|---|---|
| Format | Half-day to one day: build a prototype that solves your own work problem with AI → present results |
| Frequency | Every 6 months (monthly is too heavy, yearly is too thin) |
| Day | Hold it on company time. Saturday events skew the audience (single, young employees only) |
| Incentives | Winning team gets a development budget and resources; top entries are considered for production |
| Evaluation tie-in | Link the presentation to the half-year review sheet (as a bonus factor, not a requirement) |
Three effects: (1) early adopters emerge naturally, (2) cross-department team formation builds an internal network, (3) even for proprietary products you can't open-source, you get a "loop where the market discovers your use cases."
★ Balancing mandate and self-direction
This is the central question of internal AI adoption: leadership wants to lock things down with KPIs, the field wants to move on its own initiative. How do you resolve that tension?
❌ Pure mandate: "Everyone must post one use case per month" → a flood of low-quality posts, morale drops
❌ Pure opt-in: "If you're interested, feel free" → only existing fans move, never spreads across the org
>4-1 The Sumitomo Corporation model (recommended)
- Add "at least one case shared per team" to manager reviews (by team, not by individual)
- Run the KSFs (Champion, LT, hackathon) as a "nudge," not a mandate
- Not an obligation but "creating opportunities" — give those who want to present a stage
- Evaluation is a bonus (no penalty for those who don't)
>4-2 Design points
- Distribute KPI pressure across teams so no individual is squeezed
- Make positive incentives (recognition, budget priority) primary; KPI linkage is a supplement
- At an All Hands, leadership states out loud that "experiments are OK, failure is OK" to secure psychological safety
Measurement: layering KGI - KSF - KPI
>5-1 Concrete KPIs and rough ranges
| Metric | Rough range | Example calc |
|---|---|---|
| Time saved on meeting notes | 2-5 h saved/person/month | 100 people × 3 h × 12 months = 3,600 h/year |
| Meeting time reduction | 10-20% shorter | Tens of millions of yen/year in overtime equivalent |
| First-response time for inquiries | 30-40% faster | Customer support |
| Manufacturing / testing time | x% saved, varies by industry | Design the measurement per industry |
| Shared use cases | x per month | Slack posts + LT presentations combined |
| Use cases in production | x per quarter | Things in production use (excludes PoCs) |
Trying to put a number on everything leads to spreadsheet disease. Make a point of picking up "things that matter but can't be measured" too (cross-department friction drops, juniors get more eager to post, and so on).
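That said, for the metrics you do track, keep the formula explicit and inspectable rather than buried in a spreadsheet. A minimal sketch of the rollup behind the "example calc" column; every input (headcount, hours saved, loaded hourly cost) is an assumption to replace with your own measurements, not a prescribed value:

```python
# Illustrative KPI rollup: turn "rough range" assumptions into an annual figure.
# All inputs below are assumptions; swap in your own pilot measurements.

HEADCOUNT = 100            # people covered by the rollout (assumption)
HOURS_SAVED_PER_MONTH = 3  # midpoint of the 2-5 h/person/month range (assumption)
LOADED_HOURLY_COST = 5000  # yen per hour, fully loaded (assumption)

annual_hours_saved = HEADCOUNT * HOURS_SAVED_PER_MONTH * 12
annual_cost_saved = annual_hours_saved * LOADED_HOURLY_COST

print(f"{annual_hours_saved:,} h/year saved")         # 3,600 h/year
print(f"~¥{annual_cost_saved:,.0f}/year equivalent")  # ~¥18,000,000/year
```

The point is not precision but a shared formula: keep the assumptions next to the number so leadership can challenge the inputs instead of the conclusion.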
Aggregate use cases into a "Second Brain"
The raw notes coming out of LTs, hackathons, and Slack should be collected raw at first. Don't aim for clean documentation up front.
The deliverables: a cookbook (recipes by workflow), a prompt library, workflow templates, a failure log, and an adoption checklist, all extracted and structured from the raw material in Slack / Notion / Obsidian via Codex / Claude Code.
You could call this the organizational version of Karpathy's LLM Wiki.
→ See Obsidian → LLM Wiki → HTML → AI Deploy for a separate write-up.
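As a minimal sketch of the "collect raw first" step, assuming a standard Slack workspace export (one folder per channel, one JSON file per day); the paths and channel name are illustrative, not part of any prescribed setup:

```python
# Illustrative: dump raw #ai-use-cases posts into one markdown file per day,
# as material for later curation. Assumes the standard Slack export layout
# (export/<channel>/<YYYY-MM-DD>.json); adjust paths to your environment.
import json
from pathlib import Path

EXPORT_DIR = Path("slack-export/ai-use-cases")   # hypothetical export location
OUT_DIR = Path("second-brain/raw/slack")
OUT_DIR.mkdir(parents=True, exist_ok=True)

for day_file in sorted(EXPORT_DIR.glob("*.json")):
    messages = json.loads(day_file.read_text(encoding="utf-8"))
    lines = [
        f"- ({m.get('user', 'unknown')}) {m.get('text', '').strip()}"
        for m in messages
        if m.get("type") == "message" and m.get("text")
    ]
    if lines:
        (OUT_DIR / f"{day_file.stem}.md").write_text("\n".join(lines) + "\n", encoding="utf-8")
```

Curation into the cookbook, prompt library, and failure log happens later, on top of this raw pile.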
>6-1 Handling duplicates and disagreements
Keep aggregating long enough and it will happen: multiple teams post different prompts for the same workflow, one insists "this is the right one" while the other has a better alternative, the same prompt yields different results in different teams. The rule is "keep both with context," not "pick a winner and delete the other."
| Situation | Common failure | Recommended approach |
|---|---|---|
| Duplicates (same case from multiple teams) | Delete the older one and overwrite | Keep both and cross-link. Champion decides on consolidation each quarter |
| Different approaches | Snap-judge a winner and delete the other | Show as "Case A / Case B" with context attached (department, data scale, constraints) |
| Same prompt, different result | Report "doesn't work" and stop there | Record under "Known variations." Often the most valuable information |
Organizational use cases are context-dependent. Sales and engineering can legitimately have different "right answers" with the same tool. Calling one "best practice" excludes the other and kills the urge to post. Show all the evidence; let the user decide — this is the same principle whether the Wiki is personal or organizational.
One step deeper: even the same person uses a tool differently depending on context. So the wiki's job is less "build the right classification" and more "pile up the raw material so each reader can make sense of it in their own situation." Meaning-making happens at read-time, not write-time — a Champion's curation is less about organizing and more about keeping things findable.
>6-2 Operational mechanisms
Lightweight on classification, heavy on findability.
- Champion's job is findability, not order: keep entries reachable. Don't try to perfect a taxonomy that nobody asked for
- Quarterly Wiki Day: archive only the truly dead. Don't pick a "winner" and delete the rest
- Tags are hints, not classification: context:sales / context:eng just record "this entry was written in this situation." The reader makes the call (a minimal entry-stub sketch follows this list)
- Require date and model name: assume the model will move in six months and the same prompt will yield different results
- Threads for debate, frozen body: edit wars on the main entry are hostile. Keep discussion in the linked Slack thread or comment area
- Assume LLM-based search: build for "ask in natural language, get relevant cases" rather than relying on classification — that's what field-level judgment actually wants
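A minimal sketch of what "required fields, otherwise free-form" can look like in practice. The field names, directory layout, and helper function are assumptions for illustration, not a prescribed schema:

```python
# Illustrative wiki-entry stub: enforce the few required fields (date, model,
# context tag, source thread) and leave everything else free-form.
from datetime import date
from pathlib import Path

def new_entry(title: str, model: str, context: str, slack_thread_url: str) -> Path:
    """Create a markdown stub under second-brain/entries/. All names are illustrative."""
    slug = title.lower().replace(" ", "-")
    body = "\n".join([
        "---",
        f"title: {title}",
        f"date: {date.today().isoformat()}",  # required: results drift as models change
        f"model: {model}",                    # required: the model/version actually used
        f"tags: [context:{context}]",         # a hint about the situation, not a taxonomy
        f"discussion: {slack_thread_url}",    # debate lives in the thread, body stays frozen
        "---",
        "",
        "## Problem",
        "",
        "## Prompt",
        "",
        "## Result / known variations",
        "",
    ])
    path = Path("second-brain/entries") / f"{slug}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(body, encoding="utf-8")
    return path

# Example: new_entry("Meeting notes summarizer", "claude-sonnet-4.5", "sales", "<slack thread URL>")
```

Because every entry carries its context tag, date, and model, an LLM-based search layer on top of this folder can answer "show me cases like mine" without anyone maintaining a taxonomy.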
The approach for the side providing the AI product
For SaaS vendors / AI product providers / CSMs / solution engineers
Selling the product = selling KPI consulting
When a client's AI adoption fails, the cause is usually not the implementation but the design of the adoption requirements. The provider has to sell not just a product, but KPI consulting:
For a monthly spend of ¥X, how much labor cost is reduced, and how much does productivity go up?
Answering that question is the provider's job. Without it articulated, the client falls into a triple bind:
- Requirements don't get nailed down
- Implementation doesn't start
- Procurement won't sign off (cost vs. impact can't be shown)
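A minimal sketch of the "¥X per month vs. impact" statement a CSM can put in front of procurement. Every number (seats, price, active rate, hours saved, loaded hourly cost) is an assumption to replace with the client's own pilot data:

```python
# Illustrative cost-vs-impact statement for procurement. All inputs are assumptions.
SEATS = 200
PRICE_PER_SEAT = 3000        # yen/month (assumed list price)
ACTIVE_RATE = 0.6            # share of seats actually using the tool (assumption)
HOURS_SAVED_PER_ACTIVE = 4   # h/person/month (assumption, from a pilot)
LOADED_HOURLY_COST = 5000    # yen/h, fully loaded (assumption)

monthly_spend = SEATS * PRICE_PER_SEAT
monthly_value = SEATS * ACTIVE_RATE * HOURS_SAVED_PER_ACTIVE * LOADED_HOURLY_COST

print(f"Spend  : ¥{monthly_spend:,}/month")              # ¥600,000/month
print(f"Impact : ¥{monthly_value:,.0f}/month")           # ¥2,400,000/month
print(f"ROI    : {monthly_value / monthly_spend:.1f}x")  # 4.0x
```

Note that the active rate and hours saved are exactly the variables the Champion / LT / hackathon loop from Part 1 exists to move, which is why selling that loop is part of selling the product.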
>7-1 Co-design the client's KGI / OKR
Break the provider-side KGI (GMV growth) into a product of three factors: for example, active accounts × use cases in production per account × GMV per use case (the exact decomposition depends on your pricing model).
So the provider's plays all converge on "maximize use-case density." Not pushing license counts — the growth headroom in GMV is depth of use per account. That's why the provider should bake Champion / LT / hackathon (from Part 1) into the service itself:
- Propose a Champion program design at onboarding
- Distribute templates for monthly LTs (schedule, presentation format, evaluation rubric)
- Have the CSM lead a hackathon six months in
These can be built in as part of the contracted service.
Alternative approaches when you can't open-source
The open-source Claude Code / Codex spread by "free distribution of operational know-how." If you can't open-source your product, you need alternatives:
| Tactic | What it does |
|---|---|
| Public templates | cookbook / prompt library / workflow templates (distributing know-how) |
| Demos / videos | "You can see how to use it / you can see the outcome" |
| Case studies | "Reproducible" adoption stories |
| API / SDK access | Lets users learn integration patterns |
| Run a community | User groups, Discord (build a posting culture through peer networks) |
| ★ Host hackathons | A loop where the market discovers your use cases |
Hackathons in particular are powerful as a place where "users discover use cases on their own." You can run the internal community and external hackathons in parallel.
How to move starting tomorrow
An execution guide common to both the deploying side and the provider side
First action by role
What matters is not what you read, but what you do. Bring it down to something you can move on today or this week:
A roadmap: the first 3 months and beyond
Five failure patterns
First, here are the three tension axes we've been discussing, on one page. Most failure patterns come from going all the way to one side of one of these axes.
| Axis | Left: fails if pushed all the way | Right: fails if pushed all the way | The right answer |
|---|---|---|---|
| Mandate vs self-driven | Squeeze with KPIs → self-direction dies | All-voluntary → nothing happens | Light KPI linkage + evaluation incentives |
| KPI vs aggregation | Chase numbers only → spreadsheet disease | Just collect → impact never reaches leadership | Show via KPI, accumulate as knowledge |
| SaaS vs OSS | SaaS only → lock-in / cost bloat | OSS only → ops overhead / knowledge silos | Compete on use-case density (tools are means) |
The typical pitfalls to avoid:
- Running KSFs as a mandate: hard KPI enforcement → self-direction dies
- No posting culture: presenting doesn't get rewarded → no one presents — a vicious cycle
- Leadership doesn't say "experiments are OK": zero psychological safety, no one tries anything
- Use cases never get aggregated: events get hyped, but good prompts scatter across Slack and personal Notion and never become org knowledge
- No KPI story on the provider side: adoption cost and impact aren't linked, procurement stalls
Design the natural adoption loop from both sides: the organization and the provider
The core of internal AI adoption is not pushing a product, but designing a natural adoption loop. The four pieces to line up:
- A place to post (Slack channel / LT / Wiki)
- A culture where posters get credit (Champion program / executive presentations)
- A way to try things small (PoCs / hackathons / start with non-confidential data)
- KPIs that make the impact visible (deliver time and cost savings to leadership)
On top of that:
- Balance mandate and self-direction (light KPI linkage + positive incentives)
- Run KPI and aggregation in parallel (show via numbers, accumulate as knowledge)
- Compete on use-case density, not SaaS-vs-OSS (tools are just means)
Don't order people to "use AI." Instead, both the organization and the provider should set up a design where using AI gets you credit, is fun, and makes tomorrow's work easier. That's the core of internal AI adoption from 2026 onward.