SECOND BRAIN PIPELINE / 2026

Obsidian → LLM Wiki → HTML → AI Deploy

Your Obsidian notes, becoming articles on their own.

Just keep dropping rough notes into daily/, and the AI grows a wiki behind the scenes, writes the HTML in JA and EN, and ships it to a static host for you. Minimum effort in, maximum artifact out — a note on speeding up second-brain publishing with AI.

Obsidian · Claude Code · Karpathy · LLM Wiki · Multi-language · Static Hosting / 2026.05.16 · 8 min read
FIG.0 — PIPELINE OVERVIEW
[Figure: 1. Obsidian (🌱 daily/, raw input by hand) → 2. LLM Wiki (INDEX, AI grows it) → 3. HTML × ja/en (JA EN, AI writes it) → 4. Deploy (CDN, AI deploys it) → reader (see · touch) // SECOND BRAIN PIPELINE]
Human throws notes in; AI cultivates, writes, deploys. The reader reads. The human only touches the very start and the very end.
CONTENTS · 10 sections
  1. 01 Context
  2. 02 Principle
  3. 03 Overview
  4. 04 Phase 1 / Obsidian
  5. 05 Phase 2 / LLM Wiki
  6. 06 Phase 3 / HTML × Lang
  7. 07 Phase 4 / Deploy
  8. Worldview
  9. 08 Positioning
  10. 09 Summary
▍ THE PROMISE

Just keep dropping notes into Obsidian — they turn into polished articles on their own. AI grows the wiki behind the scenes, writes the HTML in JA and EN, and ships it to wherever you host it.

§ 01 CONTEXT

Why I don't make Zenn / Qiita the final output anymore

Zenn and Qiita are great. They have SEO, they're a way into technical communities. But as the final artifact people actually read, they have some limits.

|  | Markdown article (Zenn / Qiita) | HTML artifact (self-hosted) |
| --- | --- | --- |
| Diagrams | Limited (image inserts only) | SVG / animations / interactive |
| Structure | Linear (top-down only) | Graphs / hover / zoom / progressive reveal |
| Design | Locked to the theme | Fully open |
| AI generation | Text-centric | The whole artifact can be generated |
| Multi-language | Rewrite each article by hand | JA / EN generated in parallel from one source |
| Update cost | Edit by hand | Regenerate from the wiki |

The point is that the strong medium in the AI era is HTML / SVG / interactive UI, and Markdown has been demoted to input. I'm not killing Zenn / Qiita — I'm using them as the front door for problem framing, a single key diagram, a soft intro — with the real artifact living on my own HTML.

§ 02 PRINCIPLE

What AI reads vs. what humans read

This is a point Anthropic's Claude Code team (Thariq and others) have also been making, summarized roughly:

- As agent outputs get huge (1,000+ lines), the "wall of Markdown" problem gets serious. Once you cross 100 lines, basically nobody reads it.
- Markdown is for AI to read; HTML is for humans to read. The two are splitting into different roles.

Abstract that a little and you get this dichotomy:

FIG.1 — AI FORM vs HUMAN FORM
[Figure: second-brain.md, FOR AI (parses well · diffs well · token-cheap) → AI RENDER → second-brain.html, FOR HUMANS (SVG · interactive · info-dense · instant · touchable)]
Don't keep the same information in both forms. Keep it in a single convertible source and generate each form on demand.
|  | For AI | For humans |
| --- | --- | --- |
| Examples | Markdown / YAML / JSON / graphs | HTML / SVG / React / UI |
| Strengths | Token-efficient, parseable, diffable | Visual density, layout, instant comprehension |
| Weaknesses | Long to scroll, weak diagrams, effort to read | Token-heavy, hard to diff in git |
| Role | AI input / AI-to-AI intermediate form | Final artifact for humans |
▍ DESIGN PRINCIPLE

Keep the layer the AI grows as Markdown / YAML / graph. Convert only the human-facing layer to HTML / SVG. Don't carry the same content in two forms; generate one from the other on demand.
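As a minimal sketch of "generate on demand": the check is nothing more than a timestamp comparison. The knowledge/ source and a site/ output folder here are assumptions, and the actual rendering step is deliberately left abstract.

```python
from pathlib import Path

def needs_render(src: Path, dst: Path) -> bool:
    """Regenerate the human-facing form only when the AI-facing source changed."""
    return (not dst.exists()) or src.stat().st_mtime > dst.stat().st_mtime

# Hypothetical layout: knowledge/*.md is the single convertible source,
# site/*.html is the derived, human-facing form.
for src in Path("knowledge").glob("*.md"):
    dst = Path("site") / (src.stem + ".html")
    if needs_render(src, dst):
        print(f"stale: regenerate {dst} from {src}")
```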

§ 03 OVERVIEW

The big picture: a 4-phase pipeline

From here, I'll walk through how rough Obsidian input becomes an HTML artifact on the web, split into four phases.

FIG.0.5 — 4-PHASE BREAKDOWN
[Figure: human scribbles in → PHASE 1 Obsidian (daily/) → curate → PHASE 2 LLM Wiki (knowledge/) → write → PHASE 3 HTML × Lang (ja.html + en.html) → ship → PHASE 4 Deploy (CLI / MCP) → human reads]
The human only touches Phase 1 (input) and the final read. The three middle stages run on AI.
§ 04 PHASE 1 / OBSIDIAN

Phase 1 ── Drop everything into Obsidian

>1-1 Use the graph as a blueprint for the future

Obsidian's Graph View is usually treated as a way to display the notes you've already written. There's a more interesting use:

- The moment you write a [[link to a note that doesn't exist yet]], a "🌱 not-yet-real node" sprouts in the graph.
- Later, just promote the ones whose sprouts look strong into real notes.

This matches what Karpathy suggested: that an LLM Wiki grows from dangling links. The graph stops being a mirror of the past and starts being a blueprint for the future.

FIG.2 — GRAPH AS FUTURE-DESIGN TOOL
[Figure: graph with real notes (SecondBrain, Karpathy, LLMWiki, Obsidian, Claude, CLAUDE.md, Wiki) and 🌱 not-yet-real sprout nodes hanging off them]
Real notes (purple) and not-yet-real 🌱 sprouts (green dashed) live in the same graph. The graph is a blueprint of the future, not a mirror of the past.
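Finding those sprouts is mechanical. A minimal sketch, assuming the script runs from the vault root and that links resolve by note name the way Obsidian does; ranking by reference count is my own heuristic for "whose sprouts look strong":

```python
import re
from pathlib import Path

VAULT = Path(".")  # assumed: run from the vault root
WIKILINK = re.compile(r"\[\[([^\]|#]+)")  # bare [[Target]]; aliases and anchors stripped

existing = {p.stem for p in VAULT.rglob("*.md")}  # notes resolve by name
sprouts: dict[str, int] = {}

for note in VAULT.rglob("*.md"):
    for target in WIKILINK.findall(note.read_text(encoding="utf-8")):
        name = target.strip()
        if name and name not in existing:
            sprouts[name] = sprouts.get(name, 0) + 1

# The most-referenced dangling links are the sprouts worth promoting first.
for name, refs in sorted(sprouts.items(), key=lambda kv: -kv[1]):
    print(f"🌱 {name} ({refs} refs)")
```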

>1-2 Scribble into daily/

FIG.2.5 — RAW INPUT: SHALLOW FOLDER + UNGROOMED .md
[Figure: file tree: daily/2026/20260516/ holding idea.md, abc-test.md, misc.md, rough-note.md (only 3 levels deep, no tags, no index); rough-note.md reads "# Random thought / try the AB idea / - foo (continued from yesterday) / - bar??? maybe wrong / revisit tomorrow", with no tags, no links, no frontmatter]
Shallow folder, raw content: no formatting, no tags, no links. The rule is "keep the friction of input at zero".
"Rough actually makes me want to write more" is the truth on the ground. The moment you let AI tidy this layer, the desire to write dies.

>1-3 Let AI suggest only, never apply

Cleanup, tagging, linking — all of it happens as AI suggests, human accepts. That way daily/ stays as the raw first-draft of your brain, and the LLM Wiki grows as a secondary, derived layer.
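One way to enforce "suggest, never apply" is to make the AI physically unable to touch daily/: proposals go to a sidecar folder and nothing else. A sketch under that assumption; the suggestions/ folder name and the trivial tag extractor are placeholders for the real LLM step.

```python
from pathlib import Path

def suggest_tags(text: str) -> list[str]:
    """Placeholder for the AI step; in practice an LLM proposes tags and links here."""
    words = {w.strip("#,.").lower() for w in text.split() if w.startswith("#")}
    return sorted(words) or ["untagged"]

# Proposals land in a sidecar folder for the human to accept or ignore;
# nothing under daily/ is ever rewritten.
for note in Path("daily").rglob("*.md"):
    proposal = Path("suggestions") / note.relative_to("daily")
    proposal.parent.mkdir(parents=True, exist_ok=True)
    tags = ", ".join(suggest_tags(note.read_text(encoding="utf-8")))
    proposal.write_text(f"proposed tags for {note.name}: {tags}\n", encoding="utf-8")
```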

§ 05 PHASE 2 / LLM WIKI

Phase 2 ── Grow it into an LLM Wiki (Karpathy style)

From here on, the AI is doing real work. I adopt Andrej Karpathy's proposed LLM Wiki structure directly.

>2-1 3-layer architecture

FIG.3 — 3-LAYER ARCHITECTURE
[Figure: three layers from abstract to concrete. SCHEMA: CLAUDE.md / AGENTS.md, the rulebook of AI behavior rules · WIKI: knowledge/*.md, AI grows it (INDEX + MOC) · RAW: daily/<YYYY>/<YYYYMMDD>/*.md, human scribbles, AI never touches]
Concrete to abstract, bottom to top. Raw is the source of truth, Wiki is the curated layer, Schema constrains how the AI behaves.

Roles: the human curates, analyzes, asks good questions; the LLM summarizes, links, maintains consistency, records contradictions.

>2-2 Three core operations

FIG.4 — INGEST / QUERY / LINT
[Figure: 01 Ingest (on add): read new notes, append to MOC · 02 Query (on ask): INDEX → MOC → leaf, with citations → answer · 03 Lint (scheduled audit): detect conflicts, orphans, missing links]
Ingest = intake. Query = search & answer. Lint = consistency. Write these three operations into CLAUDE.md, and Claude Code reads them on every run.
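For a feel of how small the Query walk is, here is a sketch under the assumptions that the wiki lives flat in knowledge/, the index is named INDEX.md, and links resolve by note name; the real version hands the collected leaves to the LLM as citation material.

```python
import re
from pathlib import Path

WIKILINK = re.compile(r"\[\[([^\]|#]+)")

def links_in(path: Path) -> list[str]:
    return [t.strip() for t in WIKILINK.findall(path.read_text(encoding="utf-8"))]

def query(topic: str, wiki: Path = Path("knowledge")) -> list[Path]:
    """Walk INDEX → MOC → leaf and return the leaf paths to cite."""
    mocs = [n for n in links_in(wiki / "INDEX.md") if topic.lower() in n.lower()]
    leaves: list[Path] = []
    for moc in mocs:
        moc_path = wiki / f"{moc}.md"
        if moc_path.exists():
            leaves += [wiki / f"{leaf}.md" for leaf in links_in(moc_path)]
    return [leaf for leaf in leaves if leaf.exists()]

print(*query("llm wiki"), sep="\n")
```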

>2-3 Don't resolve contradictions — keep them in parallel

If two notes on the same topic, written on different days, disagree, don't merge them and don't delete either. Keep both. Append both to the "by-date" section of the relevant MOC. Which one is "right" is a human decision.

▍ WHY KEEP THEM IN PARALLEL

If you let the AI clean up freely, you lose the history of your own thinking. Karpathy's framing treats compounding accumulation as the prize. The contradictions themselves are footprints of how the thinking evolved.
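Mechanically, "keep them in parallel" is just an append. A sketch, assuming a "## by-date" section heading and illustrative paths; both are conventions you would pin down in CLAUDE.md, not anything this post prescribes.

```python
import datetime
from pathlib import Path

def append_by_date(moc: Path, source_note: str, claim: str) -> None:
    """Append a dated entry to the MOC's by-date section; never merge, never delete."""
    text = moc.read_text(encoding="utf-8") if moc.exists() else "# MOC\n"
    if "## by-date" not in text:
        text += "\n## by-date\n"
    text += f"- {datetime.date.today().isoformat()} [[{source_note}]]: {claim}\n"
    moc.parent.mkdir(parents=True, exist_ok=True)
    moc.write_text(text, encoding="utf-8")

# Two disagreeing notes simply become two dated lines in the same section.
append_by_date(Path("knowledge/SecondBrain.md"), "daily/20260515/idea", "the AB idea: foo wins")
append_by_date(Path("knowledge/SecondBrain.md"), "daily/20260516/rough-note", "bar??? maybe wrong")
```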

§ 06 PHASE 3 / HTML × MULTI-LANG

Phase 3 ── HTML + multi-language output

>3-1 Why HTML

Content that demands "effort to read" is weak in the AI era. The shift is from read → understand to see → touch → drill into what you need, and HTML can co-host all of these on a single page.

>3-2 Let AI write the whole artifact

I hand Claude Code the LLM Wiki and tell it: "convert this MOC into a single HTML page." It reads the structured Markdown, draws what it can in SVG, and outputs a fully-navigable HTML document. The human is left with structure design and final review only.

If you go a step further with something like a Claude Code Fancy HTML Hook — a PostToolUse hook that auto-generates HTML whenever an .md file is saved — the visualized HTML stays continuously in sync with the source.
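A sketch of such a hook's command script, assuming it is registered for Write/Edit tools and that the hook input JSON carries the edited path under tool_input.file_path; rather than inventing the HTML generator inline, it just queues the path for a separate rebuild step.

```python
#!/usr/bin/env python3
"""PostToolUse hook sketch: queue HTML regeneration whenever a .md file is written.

Assumption: Claude Code pipes the tool call to the hook as JSON on stdin,
with the edited path under tool_input.file_path. The rebuild itself is left
to a separate watcher or to the next agent turn.
"""
import json
import sys
from pathlib import Path

payload = json.load(sys.stdin)
file_path = payload.get("tool_input", {}).get("file_path", "")

if file_path.endswith(".md"):
    with Path(".html-rebuild-queue").open("a", encoding="utf-8") as queue:
        queue.write(file_path + "\n")
```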

>3-3 Multi-language is a free side-effect

I originally wanted to reach overseas with what I write; once the AI was already writing the HTML for me, multi-language fell out of the system for free.
If the AI is generating HTML from a Markdown source, doing the language switch in the same step costs nothing extra.

FIG.5 — ONE SOURCE → JA + EN
[Figure: one source (knowledge/*.md) → AI parallel generation (prompt × 2 langs) → ja.html (日本語) + en.html (ENGLISH) // 2 langs from 1 source]
From the same .md, the AI generates ja.html and en.html in parallel. Not a translation — each language version is rebuilt as a natural article.
▍ WHY "PARALLEL GENERATION" NOT "TRANSLATION"

Machine translation drags the original sentence structure with it, so the English version doesn't read naturally as English. Instead, I hand the LLM the structured source from the wiki and tell it: "Write en.html for an English-speaking audience." The AI recomposes the article per language. Section order, examples, and analogies all shift to match the target language's culture.
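A sketch of what "parallel generation" can look like from the outside, assuming a hypothetical MOC path and using claude -p (Claude Code's headless one-shot mode); writing stdout to a file is a simplification, since the agent can just as well write the file itself.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor
from pathlib import Path

SOURCE = "knowledge/SecondBrain.md"  # hypothetical MOC chosen for publishing
OUT = Path("external-cloudflare")

def generate(lang: str, audience: str) -> None:
    # `claude -p` runs Claude Code headless and prints its response to stdout.
    prompt = (f"Read {SOURCE} and write a standalone {lang}.html for {audience}. "
              "Recompose the article for that audience; do not translate sentence by sentence.")
    result = subprocess.run(["claude", "-p", prompt], capture_output=True, text=True, check=True)
    (OUT / f"{lang}.html").write_text(result.stdout, encoding="utf-8")

OUT.mkdir(exist_ok=True)
with ThreadPoolExecutor() as pool:
    list(pool.map(lambda args: generate(*args),
                  [("ja", "a Japanese-speaking audience"),
                   ("en", "an English-speaking audience")]))
```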

>3-4 Where to put the output

Carve out a dedicated folder inside the vault (e.g. external-cloudflare/) and put only the HTML you intend to publish there. knowledge/ and daily/ are sensitive; never let anything from them slip into the publish folder by mistake.
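A pre-deploy guard makes that "never" enforceable rather than aspirational. A sketch, assuming the folder names from this post; it refuses to ship if anything non-HTML, or anything whose path mentions the private layers, is sitting in the publish tree.

```python
import sys
from pathlib import Path

PUBLISH = Path("external-cloudflare")  # the only folder that ever leaves the machine
PRIVATE = {"daily", "knowledge"}       # must never appear anywhere in the publish tree

leaks = [p for p in PUBLISH.rglob("*") if p.is_file()
         and (p.suffix != ".html" or PRIVATE & set(p.parts))]

if leaks:
    print("refusing to deploy; private or non-HTML files found:")
    print(*leaks, sep="\n  ")
    sys.exit(1)
```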

§ 07 PHASE 4 / DEPLOY

Phase 4 ── Let AI deploy it

The generated HTML just needs to land on a host, and the AI does that for you via CLI or MCP. Netlify, GitHub Pages, Vercel, Cloudflare Pages, S3 + CloudFront — any static host you can drive from a CLI or MCP works. Pick whichever fits your workflow.
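For concreteness, one hedged example of the kind of command the AI ends up running: Cloudflare Pages via the Wrangler CLI. The project name is a placeholder, and the other hosts swap in the same way.

```python
import subprocess

# One possibility among many: Cloudflare Pages via Wrangler.
# "second-brain" is a placeholder project name.
subprocess.run(
    ["npx", "wrangler", "pages", "deploy", "external-cloudflare",
     "--project-name", "second-brain"],
    check=True,
)
```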

The interesting part isn't the steps. It's that the human never opens a dashboard. From the writer's side, the flow is:

▍ HUMAN INTERACTION SURFACE

human → drop rough note into daily/
    ↓
Claude → updates knowledge/
    ↓
human: "publish this as a post"
    ↓
Claude → generates ja.html + en.html → deploys to any host via CLI/MCP → returns the live URL

The only things the human touches are natural-language instructions and rough daily input. The setup details for the deploy itself are out of scope for this post.

▍ THE WORLDVIEW — "Obsidian notes become articles on their own"

Stack rough notes long enough, and finished articles just appear

The real strength of this pipeline is that you stop having to "write articles" consciously.

You scribble into daily/ every day. That's it. The text doesn't have to be coherent, no links, nothing fancy. Keep going, and Claude quietly bundles things into topical MOCs, holds contradictions in parallel, grows the graph.

One day you ask "what's the state of that thing?" — and the AI pulls from the wiki and hands you back a polished HTML article, in both JA and EN. Without realizing it, a piece of your brain is now sitting on the web as a permanent artifact.

This isn't "writing a blog" as a workflow. It's your thinking, continuously being polished by AI in the background. The cognitive load of "writing" drops near zero, and the output is HTML, multi-language, interactive.
Taking notes becomes publishing.

§ 08 POSITIONING

How this slots in with Zenn / Qiita

I'm not killing "writing long pieces on Zenn / Qiita." I'm shifting the role they play.

Using Zenn or Qiita as the final destination is weak, as discussed. But as a front door, they're still strong. Search engines and tech communities deliver first-touch readers to those platforms far more reliably than to a self-hosted URL. SEO and community density both favor them.

So the operational pattern is: put a short intro article on Zenn / Qiita — problem framing, one diagram, soft conclusion, and a CTA toward the HTML artifact. Zenn / Qiita captures reach; the self-hosted HTML provides the deeper experience. Two stages, two strengths.

"Writing the same thing twice feels wasteful" is a fair concern, but the front door and the main artifact differ in resolution of expression. The intro is a compressed version made to be read; the main artifact is a full version made to be touched. Split the role and each becomes natural to write.

FIG.6 — ZENN/QIITA → SELF-HOSTED HTML
[Figure: ENTRANCE: Zenn / Qiita (front door: problem framing · one-pager · soft intro · SEO · community) → REDIRECT → MAIN ARTIFACT: self-hosted HTML (the real thing: interactive HTML / SVG / graph / multi-lang / AI playground)]
Zenn / Qiita is the front door. Self-hosted HTML is the real artifact. Each works better in its own role.
▍ THE OPERATIONAL PATTERN

- Intro post on Zenn / Qiita: problem statement + one-line conclusion + one key diagram + CTA to the HTML artifact. ~500–1000 words.
- Main artifact on your own host: the full version with diagrams, SVG, interactive components, multi-language, code samples. AI assembles it from the LLM Wiki.

§ 09 SUMMARY

Summary — what really changed

|  | Old | New |
| --- | --- | --- |
| Write | Humans build PowerPoint / HTML / articles | AI generates the artifact |
| Collect | Tidy as you write | Dump raw; AI tidies later |
| Publish | Hand-edit Markdown into Zenn / Qiita | AI writes HTML and ships via CLI / MCP |
| Translate | Redo the whole thing per language | JA / EN generated in parallel from one source |
| Read | Top-down read-through | See · touch · drill into what you need |

When you wire the second brain (Obsidian + LLM Wiki) all the way through to an HTML artifact on the web as a single pipeline, you get three things at once: near-zero writing cost, a final artifact that is interactive HTML rather than a wall of Markdown, and JA / EN output generated in parallel from one source.

This isn't "blogging" anymore. It's continuously projecting a slice of your brain onto the web. A new shape of publishing.