CONTENTS · 12 sections
  1. 01 Symptoms & problem
  2. 02 The Claude phenomenon
  3. 03 Three mechanisms
  4. 04 Mandate vs voluntary
  5. 05 KGI-KSF-KPI
  6. 06 Second Brain
  7. 07 Selling KPI consulting
  8. 08 OSS alternatives
  9. 09 Action by role
  10. 10 Roadmap
  11. 11 Five failure patterns
  12. 12 Summary
INTERNAL AI ADOPTION / 2026

Beyond 'Deployed but Unused'

Copilot rolled out company-wide, but usage sits at 10%. Onboarding ended with one briefing, and the winning prompts are buried in someone's personal Notion. This isn't a skills problem — it's a missing organizational design: there's no natural adoption loop.

How to reproduce a natural adoption loop with "AI Champion × monthly LT × hackathon × KPI linkage," covered from both the deploying side (IT / DX / line leads) and the provider side (AI product vendors).

AI adoption · AI Champion · KPI · KSF · Hackathon · DX · 2026.05.17 · 10 min read
FIG.0 — NATURAL SPREAD LOOP
[Figure 0: Why Claude Code / Codex spread. The accelerator loop: OSS release → individuals use it → post on social → reputation rises → fans accumulate → new users flow in.]
The natural adoption loop of the social-media era. OSS becomes an accelerator by acting as free distribution of operational know-how. Inside a company, the three pieces of this loop (can try it / a place to post / posters get credit) are missing.
▍ THE PROMISE

"Deployed but unused" isn't a staff skill problem — it's a missing natural-adoption-loop design on the organization side. AI Champion × monthly LT × hackathon × KPI linkage reproduces that loop inside a closed environment.

▍ TL;DR
§ 01 CONTEXT

The "we've all seen this" symptoms and what's really going on

>1-1 Failure patterns we keep seeing

>1-2 What's really going on

This is not a skill gap on the part of individual employees; it's an organizational design problem.

→ The answer is not pushing a product, but designing a natural adoption loop.

§ 02 OBSERVATION

Why Claude Code / Codex spread "on their own"

Before fixing internal AI adoption, observe the natural adoption happening outside. The reason Codex / Claude Code / Cursor exploded is that the five-step loop in Fig.0 kept turning — and the preconditions that keep it turning boil down to three.

On top of that, Codex / Claude Code open-sourced the CLI / Agent layer. This isn't just transparency — it's free distribution of operational know-how: how to run an AI agent, how to execute tools safely, how to design prompts. Dify and n8n have the same structure.

▍ The three pieces of the natural adoption loop

(1) Individuals can use it (low barrier to trying)
(2) There's a place to post about it (social / OSS / communities)
(3) Posters get credit (followers / job offers / side income)

→ Inside a company, these three are not in place. That's why nothing spreads.

▍ PART 1 — THE DEPLOYING SIDE

The approach for the side that distributes AI inside the company

For IT / DX leads / middle managers / AI Champion candidates

§ 03 PART 1 / MECHANISMS

Three mechanisms to reproduce "the Claude phenomenon" inside a company

FIG.1 — THREE MECHANISMS
[Figure 1: The three pillars that rebuild the loop in a closed environment. Foundation: AI Champion (self-nominated + recognized, start with 2-3 people; Sumitomo: ¥1.2B/yr saved). Fast impact: monthly LT (5-10 min per person, fastest payoff; template: Problem → Prompt → Impact; SmartNews: 196 attendees). Use-case generator: hackathon (every 6 months, on company time, prize money + dev budget; surfaces early adopters).]
Champion = foundation, LT = fast payoff, hackathon = use-case generator. Only the combination of all three is sustainable.

>3-1 AI Champion program (foundation)

Success case: Sumitomo Corporation's "Copilot Champion" program → ¥1.2B/year cost reduction, 10,000 person-hours/month saved
Minimum viable config: start with 2-3 people → roll out in earnest after 3 months

>3-2 Monthly use-case lightning talks (LT)

The mechanism with the fastest payoff.

>3-3 Hackathons (use-case generator)

Run them as a two-layer setup: study sessions = knowledge sharing, hackathons = use-case generation.

Topic | Recommendation
Format | Half-day to one day: build a prototype that solves your own work problem with AI → present results
Frequency | Every 6 months (monthly is too heavy, yearly is too thin)
Day | Hold it on company time. Saturday events skew the audience (single, young employees only)
Incentives | Winning team gets a development budget and resources; top entries are considered for production
Evaluation tie-in | Link the presentation to the half-year review sheet (as a bonus factor, not a requirement)

Three effects: (1) early adopters emerge naturally, (2) cross-department team formation builds an internal network, (3) even for proprietary products you can't open-source, you get a "loop where the market discovers your use cases."

§ 04 PART 1 / BALANCE

★ Balancing mandate and self-direction

This is the central question of internal AI adoption: leadership wants to lock things down with KPIs, the field wants to move on its own initiative. How do you resolve that tension?

▍ The two extremes that fail

Pure mandate: "Everyone must post one use case per month" → a flood of low-quality posts, morale drops
Pure opt-in: "If you're interested, feel free" → only existing fans move, never spreads across the org

>4-1 The Sumitomo Corporation model (recommended)

>4-2 Design points

§ 05 PART 1 / KPI

Measurement: layering KGI - KSF - KPI

FIG.2 — KGI / KSF / KPI HIERARCHY
[Figure 2: KGI (top-level goal: efficiency & productivity gains) → KSF (key success factors: Champion, monthly LT, hackathon, Wiki) → KPI (metrics: AI usage rate · shared use cases · hours saved · use cases in production).]
KSFs are success factors, not means. KPIs are a tool for quantifying adoption. Run it as a "nudge," not a "mandate."

>5-1 Concrete KPIs and rough ranges

Metric | Rough range (△ = reduction) | Example calc
Time saved on meeting notes | △2-5 h/month/person | 100 people × △3 h × 12 months = △3,600 h/year
Meeting time reduction | △10-20% | Tens of millions of yen/year in overtime equivalent
First-response time for inquiries | △30-40% | Customer support
Manufacturing / testing time | △x% per industry | Design measurement by industry
Shared use cases | x per month | Slack posts + LT presentations combined
Use cases in production | x per quarter | Things in production use (excludes PoCs)
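As a back-of-the-envelope check on the example calc above, a minimal sketch; the headcount, hours, and months follow the table's example, while the hourly labor cost is a hypothetical placeholder rather than a figure from this article:

```python
# Rough KPI calc for "time saved on meeting notes".
# headcount / hours / months match the table's example; hourly_cost_yen is hypothetical.

headcount = 100              # people using the tool
hours_saved_per_month = 3    # per person, within the 2-5 h rough range
months = 12
hourly_cost_yen = 5_000      # fully loaded labor cost (placeholder)

annual_hours_saved = headcount * hours_saved_per_month * months  # 3,600 h/year
annual_value_yen = annual_hours_saved * hourly_cost_yen          # ¥18,000,000/year

print(f"Hours saved per year: {annual_hours_saved:,} h")
print(f"Rough value:          ¥{annual_value_yen:,}")
```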
▍ The quantification trap

Trying to put a number on everything leads to spreadsheet disease. Make a point of picking up "things that matter but can't be measured" too (cross-department friction drops, juniors get more eager to post, and so on).

§ 06 PART 1 / AGGREGATION

Aggregate use cases into a "Second Brain"

The raw notes coming out of LTs, hackathons, and Slack should be collected raw at first. Don't aim for clean documentation up front.

FIG.5 — WRITE → STORE → READ
[Figure 5: Write (pile up raw: Slack posts, LT slides, failure logs, prompts, hackathon output, Notion memos) → Store (parallel preservation in the Wiki with tags: Case A · context: sales, Case B · context: eng, Case C · same theme, Case D · alternative, plus date and model) → Read (make sense in context: sales picks Case A, eng combines B + D, new hires learn from the diffs). Meaning-making happens at read-time, not write-time.]
Write = pile up raw / Store = parallel preservation + tags / Read = each reader interprets in context. The wiki doesn't classify — it stays findable.
▍ Artifact types that grow in the Wiki

cookbook (recipes by workflow) / prompt library / workflow templates / failure log / adoption checklist — raw material extracted and structured from Slack / Notion / Obsidian via Codex / Claude Code.

You could call this the organizational version of Karpathy's LLM Wiki.
→ See Obsidian → LLM Wiki → HTML → AI Deploy for a separate write-up.

>6-1 Handling duplicates and disagreements

Keep aggregating long enough and it will happen: multiple teams post different prompts for the same workflow, one insists "this is the right one" while the other has a better alternative, the same prompt yields different results in different teams. The rule is "keep both with context," not "pick a winner and delete the other."

Situation | Common failure | Recommended approach
Duplicates (same case from multiple teams) | Delete the older one and overwrite | Keep both and cross-link. Champion decides on consolidation each quarter
Different approaches | Snap-judge a winner and delete the other | Show as "Case A / Case B" with context attached (department, data scale, constraints)
Same prompt, different result | Report "doesn't work" and stop there | Record under "Known variations". Often the most valuable information
▍ Why "parallel preservation" is the right default

Organizational use cases are context-dependent. Sales and engineering can legitimately have different "right answers" with the same tool. Calling one "best practice" excludes the other and kills the urge to post. Show all the evidence; let the user decide — this is the same principle whether the Wiki is personal or organizational.

One step deeper: even the same person uses a tool differently depending on context. So the wiki's job is less "build the right classification" and more "pile up the raw material so each reader can make sense of it in their own situation." Meaning-making happens at read-time, not write-time — a Champion's curation is less about organizing and more about keeping things findable.
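A minimal sketch of what "parallel preservation + read-time meaning-making" could look like as data; the schema, field names, and entries are hypothetical, not a format the article prescribes:

```python
# Hypothetical wiki entry schema: duplicates and disagreements are kept
# side by side with their context, and filtered only when someone reads.

from dataclasses import dataclass, field

@dataclass
class UseCase:
    title: str
    prompt: str
    context: dict                                  # department, model, date, constraints
    related: list = field(default_factory=list)    # cross-links instead of overwrites

entries = [
    UseCase("Meeting-notes summary", "Summarize decisions and owners...",
            {"dept": "sales", "model": "claude", "date": "2026-02"}),
    UseCase("Meeting-notes summary", "Extract action items as a table...",
            {"dept": "eng", "model": "codex", "date": "2026-03"},
            related=["Meeting-notes summary (sales)"]),
]

def read(entries, **ctx):
    """Meaning-making at read-time: filter by the reader's own context."""
    return [e for e in entries
            if all(e.context.get(k) == v for k, v in ctx.items())]

print([e.context for e in read(entries, dept="eng")])
```

The point of the sketch: nothing is overwritten or declared the winner; duplicates stay cross-linked, and filtering happens only when a reader supplies their own context.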

>6-2 Operational mechanisms

Lightweight on classification, heavy on findability.

▍ PART 2 — THE PROVIDER SIDE

The approach for the side providing the AI product

For SaaS vendors / AI product providers / CSMs / solution engineers

§ 07 PART 2 / PROVIDER

Selling product = selling KPI consulting

When a client's AI adoption fails, the cause is usually not the implementation but the design of the adoption requirements. The provider has to sell not just a product, but KPI consulting:

For a monthly spend of ¥X, how much labor cost is reduced, and how much does productivity go up?

Answering that question is the provider's job. Without it articulated, the client falls into a triple bind.
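A minimal sketch of the KPI story behind that question, with every number a hypothetical placeholder rather than a figure from the article:

```python
# Hypothetical framing of "monthly spend of ¥X vs. labor cost reduced".
# Every figure below is a placeholder -- replace with the client's own numbers.

monthly_spend_yen = 500_000        # license / usage fees (the "¥X")
active_users = 80
hours_saved_per_user_month = 3
hourly_cost_yen = 5_000

monthly_value_yen = active_users * hours_saved_per_user_month * hourly_cost_yen  # ¥1,200,000
roi = monthly_value_yen / monthly_spend_yen                                       # 2.4x

print(f"Value created: ¥{monthly_value_yen:,}/month")
print(f"ROI vs spend:  {roi:.1f}x")
```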

>7-1 Co-design the client's KGI / OKR

Break the provider-side KGI (GMV growth) into a product of three factors.

FIG.6 — KGI / GMV DECOMPOSITION
[Figure 6: Provider-side KGI decomposition. GMV growth = customer accounts (sales / marketing: market reach) × users per account (adoption rate: penetration) × tokens per user (usage depth). The product of the last two = use-case density, the provider's leverage point: bake Part 1's Champion / LT / hackathon into CS to move it.]
Provider-side KGI factors into three multiplicands. The product of the last two = use-case density — the leverage point the provider can actually move. Don't push license counts; lift depth of use per account.
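As a quick sketch of the decomposition in Fig.6 (all numbers hypothetical), showing how lifting use-case density can move GMV more than adding accounts at flat usage:

```python
# GMV ≈ accounts × users_per_account × tokens_per_user (per Fig.6).
# Use-case density = users_per_account × tokens_per_user.
# All inputs below are illustrative placeholders.

def gmv(accounts, users_per_account, tokens_per_user, price_per_token=1.0):
    return accounts * users_per_account * tokens_per_user * price_per_token

baseline      = gmv(accounts=50, users_per_account=20, tokens_per_user=1_000)  # 1,000,000
more_accounts = gmv(accounts=60, users_per_account=20, tokens_per_user=1_000)  # +20% reach
denser_usage  = gmv(accounts=50, users_per_account=26, tokens_per_user=1_200)  # density lifted

print(f"{baseline:,.0f} / {more_accounts:,.0f} / {denser_usage:,.0f}")  # 1,000,000 / 1,200,000 / 1,560,000
```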

So the provider's plays all converge on "maximize use-case density." Not pushing license counts — the growth headroom in GMV is depth of use per account. That's why the provider should bake Champion / LT / hackathon (from Part 1) into the service itself.

These can be built in as part of the contracted service.

§ 08 PART 2 / ALTERNATIVES

Alternative approaches when you can't open-source

The open-source Claude Code / Codex spread by "free distribution of operational know-how." If you can't open-source your product, you need alternatives:

Tactic | What it does
Public templates | cookbook / prompt library / workflow templates (distributing know-how)
Demos / videos | "You can see how to use it / you can see the outcome"
Case studies | "Reproducible" adoption stories
API / SDK access | Lets users learn integration patterns
Run a community | User groups, Discord (build a posting culture through peer networks)
★ Host hackathons | A loop where the market discovers your use cases

Hackathons in particular are powerful as a place where users discover use cases on their own. You can run both an internal community and external hackathons.

▍ COMMON

How to move starting tomorrow

An execution guide common to both the deploying side and the provider side

§ 09 COMMON / ACTIONS

First action by role

What matters is not what you read, but what you do. Bring it down to something you can move on today or this week:

FIG.3 — FIRST ACTION MATRIX
[Figure 3: First actions you can take this week, by role. Executives: at the next All Hands, say "AI experiments OK, failure OK"; reserve budget for a Champion program. Middle managers: ask 1-2 AI-savvy people on your team, "Want to be a Champion candidate?" Individual contributors: post one recent use of AI to the #ai-use-cases channel on Slack/Teams (a paragraph is enough). Provider CSMs: book a call with your client to interview their current KPI design.]
Don't stop at reading. Take a concrete action that fits your role this week. Four axes: executives, middle managers, ICs, and providers.
§ 10 COMMON / ROADMAP

A roadmap: 3 months to launch, 12 months to embed

FIG.4 — 12-MONTH ROADMAP
M1 FOUNDATION
Pick 2-3 Champions
Open Slack channel
All Hands declaration
Initial KPI definition
M2 BROADCAST
Start monthly LT
Build the template
Spin up the Wiki
First KPI snapshot
M3 GENERATE
Run hackathon
Winners in newsletter
1 case in mgr review
Build cross-team ties
M4-6 EXPAND
More Champions
Build the cookbook
Cross-team LT
Tune the KSFs
M7-12 EMBED
Annual hackathon
External talks
Cross-org use cases
Market formation
// 3 MONTHS TO LAUNCH → 12 MONTHS TO EMBED
Five phases: foundation → broadcast → generate → expand → embed. The key to sustainability: never let the monthly LT slip.
§ 11 COMMON / FAILURES

Five failure patterns

First, here are the three tension axes we've been discussing, on one page. Most failure patterns come from going all the way to one side of one of these axes.

Axis | Left: fails if pushed all the way | Right: fails if pushed all the way | The right answer
Mandate vs self-driven | Squeeze with KPIs → self-direction dies | All-voluntary → nothing happens | Light KPI linkage + evaluation incentives
KPI vs aggregation | Chase numbers only → spreadsheet disease | Just collect → impact never reaches leadership | Show via KPI, accumulate as knowledge
SaaS vs OSS | SaaS only → lock-in / cost bloat | OSS only → ops overhead / knowledge silos | Compete on use-case density (tools are means)

The typical pitfalls to avoid:

  1. Running KSFs as a mandate: hard KPI enforcement → self-direction dies
  2. No posting culture: presenting doesn't get rewarded → no one presents — a vicious cycle
  3. Leadership doesn't say "experiments are OK": zero psychological safety, no one tries anything
  4. Use cases never get aggregated: events get hyped, but good prompts scatter across Slack and personal Notion and never become org knowledge
  5. No KPI story on the provider side: adoption cost and impact aren't linked, procurement stalls
▍ THE WORLDVIEW — FROM "FORCED" AI TO AI PEOPLE ACTUALLY USE

Design the natural adoption loop from both sides: the organization and the provider

The core of internal AI adoption is not pushing a product, but designing a natural adoption loop. The four pieces to line up:

  1. A place to post (Slack channel / LT / Wiki)
  2. A culture where posters get credit (Champion program / executive presentations)
  3. A way to try things small (PoCs / hackathons / start with non-confidential data)
  4. KPIs that make the impact visible (deliver time and cost savings to leadership)

On top of that:

  • Balance mandate and self-direction (light KPI linkage + positive incentives)
  • Run KPI and aggregation in parallel (show via numbers, accumulate as knowledge)
  • Compete on use-case density, not SaaS-vs-OSS (tools are just means)

Don't order people to "use AI." Instead, both the organization and the provider should set up a design where using AI gets you credit, is fun, and makes tomorrow's work easier. That's the core of internal AI adoption from 2026 onward.