Obsidian → LLM Wiki → HTML → AI Deploy
Your Obsidian notes, becoming articles on their own.
Just keep dropping rough notes into daily/, and the AI grows a wiki behind the scenes, writes the HTML in JA and EN, and ships it to a static host for you. Minimum effort in, maximum artifact out — a note on speeding up second-brain publishing with AI.
Just keep dropping notes into Obsidian — they turn into polished articles on their own. AI grows the wiki behind the scenes, writes the HTML in JA and EN, and ships it to wherever you host it.
- Obsidian is your dumping ground. Scribble daily notes without thinking.
- Claude Code grows them into an LLM Wiki (Karpathy-style second brain) behind the scenes.
- For publishing, skip Zenn / Qiita. Let AI write an HTML artifact and ship it to a static host.
- The same pipeline gives you JA / EN multi-language output from a single source — basically for free.
- Markdown / YAML / graph is for AI; HTML / SVG is for humans. Anthropic's Claude Code team is saying roughly the same thing.
Why I don't make Zenn / Qiita the final output anymore
Zenn and Qiita are great. They have SEO, they're a way into technical communities. But as the final artifact people actually read, they have some limits.
| | Markdown article (Zenn/Qiita) | HTML artifact (self-hosted) |
|---|---|---|
| Diagrams | Limited (just image inserts) | SVG / animations / interactive |
| Structure | Linear (top-down only) | Graphs / hover / zoom / progressive reveal |
| Design | Locked to the theme | Fully open |
| AI generation | Text-centric | The whole artifact can be generated |
| Multi-language | Rewrite each article by hand | JA / EN generated in parallel from one source |
| Update cost | Edit by hand | Regenerate from the wiki |
The point is that the strong medium in the AI era is HTML / SVG / interactive UI, and Markdown has been demoted to input. I'm not killing Zenn / Qiita — I'm using them as the front door for problem framing, a single key diagram, a soft intro — with the real artifact living on my own HTML.
What AI reads vs. what humans read
This is a point Anthropic's Claude Code team (Thariq and others) have also been making, summarized roughly:
As agent outputs get huge (over 1000 lines), the "wall of Markdown" problem gets serious. Once you cross 100 lines, basically nobody reads it.
Markdown is for AI to read; HTML is for humans to read. The two are splitting into different roles.
Abstract that a little and you get this dichotomy:
| | For AI | For humans |
|---|---|---|
| Examples | Markdown / YAML / JSON / graphs | HTML / SVG / React / UI |
| Strengths | Token-efficient, parseable, diffable | Visual density, layout, instant comprehension |
| Weaknesses | Long to scroll, weak diagrams, effort to read | Token-heavy, hard to diff in git |
| Role | AI input / AI-to-AI intermediate form | Final artifact for humans |
Keep the layer the AI grows as Markdown / YAML / graph. Convert only the human-facing layer to HTML / SVG. Don't carry the same content in two forms; generate one from the other on demand.
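The one-way derivation can be sketched in a few lines. This is a toy stdlib-only converter, purely to illustrate the direction of the flow: in practice the conversion is done by an LLM or a full Markdown renderer, never by hand.

```python
def md_to_html(md: str) -> str:
    """Toy one-way derivation: Markdown (AI layer) -> HTML (human layer).
    Illustrative only: the human-facing form is always generated from
    the AI-facing source, never maintained as a second copy."""
    html_lines = []
    for line in md.splitlines():
        if line.startswith("## "):
            html_lines.append(f"<h2>{line[3:]}</h2>")
        elif line.startswith("# "):
            html_lines.append(f"<h1>{line[2:]}</h1>")
        elif line.startswith("- "):
            html_lines.append(f"<li>{line[2:]}</li>")
        elif line.strip():
            html_lines.append(f"<p>{line}</p>")
    return "\n".join(html_lines)
```

Because the HTML is derived, "update cost" collapses to regeneration: edit the Markdown source, rerun the derivation, and the human layer is current again.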
The big picture: a 4-phase pipeline
From here, I'll walk through how rough Obsidian input becomes an HTML artifact on the web, split into four phases.
- Phase 1 — Drop everything into Obsidian: scribble in daily/. The AI doesn't touch this.
- Phase 2 — Grow it into an LLM Wiki (Karpathy style): Claude maintains a 3-layer Schema / Wiki / Raw structure.
- Phase 3 — Convert to HTML, plus multi-language: the AI writes the artifact. JA and EN come from the same source.
- Phase 4 — Let AI deploy: CLI or MCP ships it. The human never opens a dashboard.
Phase 1 ── Drop everything into Obsidian
1-1 Use the graph as a blueprint for the future
Obsidian's Graph View is usually treated as a way to display the notes you've already written. There's a more interesting use:
The moment you write a [[link to a note that doesn't exist yet]], a "🌱 not-yet-real node" sprouts in the graph.
Later, just promote the ones whose sprouts look strong into real notes.
This matches what Karpathy suggested: that an LLM Wiki grows from dangling links. The graph stops being a mirror of the past and starts being a blueprint for the future.
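Sprout-spotting is easy to automate. A minimal sketch, assuming a plain folder of .md files where every [[link]] target is a note's filename stem (Obsidian itself resolves links more flexibly than this):

```python
import re
from pathlib import Path

def find_sprouts(vault: Path) -> set[str]:
    """List [[wiki links]] whose target note doesn't exist yet:
    the '🌱 not-yet-real nodes' worth promoting into real notes."""
    existing = {p.stem for p in vault.rglob("*.md")}
    sprouts = set()
    for note in vault.rglob("*.md"):
        # Capture link text up to ']', '|' (alias), or '#' (heading anchor)
        for target in re.findall(r"\[\[([^\]|#]+)", note.read_text(encoding="utf-8")):
            if target.strip() not in existing:
                sprouts.add(target.strip())
    return sprouts
```

Run it over the vault and the returned set is exactly the "blueprint for the future": dangling links waiting to be promoted.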
1-2 Scribble into daily/
- Shallow path only: daily/<YYYY>/<YYYYMMDD>/*.md
- No tags, no links, no frontmatter — none of it needed
- No formatting. Typos OK. Contradictions OK.
"Rough actually makes me want to write more" is the truth on the ground. The moment you let AI tidy this layer, the desire to write dies.
1-3 Let AI suggest only, never apply
Cleanup, tagging, linking — all of it happens as AI suggests, human accepts. That way daily/ stays as the raw first-draft of your brain, and the LLM Wiki grows as a secondary, derived layer.
Phase 2 ── Grow it into an LLM Wiki (Karpathy style)
From here on, the AI is doing real work. I adopt Andrej Karpathy's proposed LLM Wiki structure directly.
2-1 3-layer architecture
Roles: the human curates, analyzes, asks good questions; the LLM summarizes, links, maintains consistency, records contradictions.
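One plausible vault layout for the three layers. The knowledge/ and daily/ names appear elsewhere in this post; the schema/ folder name is illustrative:

```
vault/
├── schema/      # conventions the LLM must follow (naming, MOC format, tag rules)
├── knowledge/   # the wiki layer: MOCs and topic notes the LLM maintains
└── daily/       # raw human scribbles; the LLM reads this, never edits it
```

The separation is what makes the suggest-only rule from Phase 1 enforceable: the AI has write access to knowledge/ and read-only access to daily/.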
2-2 Three core operations
2-3 Don't resolve contradictions — keep them in parallel
If notes written on different days disagree about the same topic, don't merge them and don't delete one. Keep both, appending each to the "by-date" section of the relevant MOC. Which one is "right" is a human decision.
If you let the AI clean up freely, you lose the history of your own thinking. Karpathy's framing treats compounding accumulation as the prize. The contradictions themselves are footprints of how the thinking evolved.
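The append-only "by-date" update can be sketched concretely. A minimal version, assuming each MOC is a Markdown file with a `## by-date` section (the section name follows the post; the exact MOC format is up to your schema):

```python
from datetime import date
from pathlib import Path

def append_by_date(moc: Path, note: str, day: date) -> None:
    """Append-only MOC update: contradictory takes are both kept, each
    stamped with its date. Nothing is merged or deleted, so the history
    of the thinking survives; picking a 'winner' stays a human call."""
    text = moc.read_text(encoding="utf-8") if moc.exists() else "## by-date\n"
    moc.write_text(text + f"- {day:%Y-%m-%d}: {note}\n", encoding="utf-8")
```

Because the operation is append-only, a later entry can flatly contradict an earlier one and both remain visible in date order, which is exactly the compounding footprint Karpathy's framing values.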
Phase 3 ── HTML + multi-language output
3-1 Why HTML
Content that demands "effort to read" is weak in the AI era. The shift is from read → understand to see → touch → drill into what you need. HTML can co-host all of these on a single page:
- Interactive diagrams (SVG, hover, zoom)
- Progressive reveal (click to expand)
- Animation (show concepts in motion)
- Graph structures (visualize relationships)
- Embedded playgrounds (try it inline)
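To make the gap concrete, here is the kind of thing a Markdown article simply cannot express: a tiny inline SVG node that reveals its label on hover. A sketch, not a real component from the pipeline:

```html
<!-- A node that shows its label on hover: trivial in HTML/SVG,
     impossible in a plain Markdown article -->
<svg width="160" height="60">
  <style>.tip { opacity: 0; } circle:hover + .tip { opacity: 1; }</style>
  <circle cx="30" cy="30" r="12" fill="#4a9" />
  <text class="tip" x="50" y="35">LLM Wiki node</text>
</svg>
```

Scale this up to whole diagrams and you get the "see → touch → drill in" reading mode the table above describes.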
3-2 Let AI write the whole artifact
I tell Claude Code, together with the LLM Wiki: "convert this MOC into a single HTML page." It reads the structured Markdown, draws what it can in SVG, and outputs a fully-navigable HTML document. The human is left with structure design and final review only.
If you go a step further with something like a Claude Code Fancy HTML Hook — a PostToolUse hook that auto-generates HTML when an .md file is saved — the visualized HTML keeps running alongside the source.
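One possible shape of such a hook, as a .claude/settings.json fragment. The schema here reflects Claude Code's hooks feature as documented at the time of writing (check the current docs before copying), and render_html.py is a hypothetical script; hook commands receive a JSON payload on stdin containing the edited file's path, so filtering to .md files happens inside the script:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          { "type": "command", "command": "python3 render_html.py" }
        ]
      }
    ]
  }
}
```

With this in place, every save of a wiki note triggers a fresh HTML render, so the human-facing layer tracks the AI-facing layer automatically.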
3-3 Multi-language is a free side-effect
I originally just wanted my writing to reach readers overseas; once the AI was already writing the HTML for me, multi-language output fell out of the system for free.
If the AI is generating HTML from a Markdown source, doing the language switch in the same step costs nothing extra.
Machine translation drags the original sentence structure with it, so the English version doesn't read naturally as English. Instead, I hand the LLM the structured source from the wiki and tell it: "Write en.html for an English-speaking audience." The AI recomposes the article per language. Section order, examples, and analogies all shift to match the target language's culture.
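The per-language instruction can be made mechanical. A sketch of the prompt-building step only (the actual LLM call is whatever client you use); the wording of the brief is illustrative, the key move is that each language gets a fresh "write for this audience" instruction against the same structured source:

```python
def artifact_prompt(wiki_source: str, lang: str) -> str:
    """Build the per-language brief handed to the LLM: recomposition
    for a target audience, not sentence-by-sentence translation."""
    audience = {"ja": "a Japanese-speaking audience",
                "en": "an English-speaking audience"}[lang]
    return (
        f"Write {lang}.html for {audience}.\n"
        "Recompose the article for this audience: reorder sections and "
        "swap examples and analogies to match the culture. Do not "
        "translate sentence by sentence.\n\n"
        f"Source:\n{wiki_source}"
    )
```

Running this once per language against the same wiki source is the whole "JA / EN in parallel" step; adding a third language is one more dictionary entry.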
3-4 Where to put the output
Create a dedicated folder inside the vault (e.g. external-cloudflare/) and put only the HTML you intend to publish there. knowledge/ and daily/ contain sensitive material — never copy anything from them into the publish folder by mistake.
Phase 4 ── Let AI deploy it
The generated HTML just needs to land on a host. The AI does that for you via CLI or MCP. Netlify, GitHub Pages, Vercel, Cloudflare Pages, S3 + CloudFront — any static host you can drive from a CLI or MCP works. Pick whichever fits your workflow.
The interesting part isn't the steps. It's that the human never opens a dashboard. From the writer's side, the flow is:
human → drop rough note into daily/
↓
Claude → updates knowledge/
↓
human: "publish this as a post"
↓
Claude → generates ja.html + en.html → deploys to any host via CLI/MCP → returns the live URL
The only things the human touches are natural-language instructions and rough daily input. The setup details for the deploy itself are out of scope for this post.
Stack rough notes long enough, and finished articles just appear
The real strength of this pipeline is that you stop having to "write articles" consciously.
You scribble into daily/ every day. That's it. The text doesn't have to be coherent, no links, nothing fancy. Keep going, and Claude quietly bundles things into topical MOCs, holds contradictions in parallel, grows the graph.
One day you ask "what's the state of that thing?" — and the AI pulls from the wiki and hands you back a polished HTML article, in both JA and EN. Without realizing it, a piece of your brain is now sitting on the web as a permanent artifact.
This isn't "writing a blog" as a workflow. It's your thinking, continuously being polished by AI in the background. The cognitive load of "writing" drops near zero, and the output is HTML, multi-language, interactive.
Taking notes becomes publishing.
How this slots in with Zenn / Qiita
I'm not killing "writing long pieces on Zenn / Qiita." I'm shifting the role they play.
Using Zenn or Qiita as the final destination is weak, as discussed. But as a front door, they're still strong. Search engines and tech communities deliver first-touch readers to those platforms far more reliably than to a self-hosted URL. SEO and community density both favor them.
So the operational pattern is: put a short intro article on Zenn / Qiita — problem framing, one diagram, soft conclusion, and a CTA toward the HTML artifact. Zenn / Qiita captures reach; the self-hosted HTML provides the deeper experience. Two stages, two strengths.
"Writing the same thing twice feels wasteful" is a fair concern, but the front door and the main artifact differ in resolution of expression. The intro is a compressed version made to be read; the main artifact is a full version made to be touched. Split the role and each becomes natural to write.
- Intro post on Zenn / Qiita: problem statement + one-line conclusion + one key diagram + CTA to the HTML artifact. ~500–1000 words.
- Main artifact on your own host: the full version with diagrams, SVG, interactive components, multi-language, code samples. AI assembles it from the LLM Wiki.
Summary — what really changed
| | Old | New |
|---|---|---|
| Write | Humans build PowerPoint / HTML / articles | AI generates the artifact |
| Collect | Tidy as you write | Dump raw; AI tidies later |
| Publish | Hand-edit Markdown into Zenn / Qiita | AI writes HTML and ships via CLI / MCP |
| Translate | Redo the whole thing per language | JA / EN generated in parallel from one source |
| Read | Top-down read-through | See · touch · drill into what you need |
When you wire the second brain (Obsidian + LLM Wiki) all the way through to an HTML artifact on the web as a single pipeline, you get three things at once:
- Your appetite to write doesn't drop (rough is fine)
- Knowledge accrues with compounding (no deletion; contradictions stay)
- Publishing becomes fast (AI handles artifact + translation + deploy)
This isn't "blogging" anymore. It's continuously projecting a slice of your brain onto the web. A new shape of publishing.