diff --git a/docs/core_abstraction/node.md b/docs/core_abstraction/node.md
index de6586b..bf3360d 100644
--- a/docs/core_abstraction/node.md
+++ b/docs/core_abstraction/node.md
@@ -10,7 +10,7 @@ nav_order: 1
A **Node** is the smallest building block. Each Node has 3 steps `prep->exec->post`:
-

+
1. `prep(shared)`
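
The `prep->exec->post` lifecycle described in the hunk above can be sketched in plain Python. This is an illustration only, not PocketFlow's actual 100-line implementation; the class and method names mirror the docs' description.

```python
# Minimal sketch of the prep -> exec -> post node lifecycle
# (illustration only, not PocketFlow's actual implementation).

class Node:
    def prep(self, shared):
        """Read what exec() needs from the shared store."""
        return None

    def exec(self, prep_res):
        """Do the main work (e.g., an LLM call) on prep's result."""
        return None

    def post(self, shared, prep_res, exec_res):
        """Write results back to the shared store; may return an action string."""
        return None

    def run(self, shared):
        prep_res = self.prep(shared)
        exec_res = self.exec(prep_res)
        return self.post(shared, prep_res, exec_res)


class Upper(Node):
    """Toy node: uppercase a string from the shared store."""
    def prep(self, shared):
        return shared["text"]

    def exec(self, text):
        return text.upper()

    def post(self, shared, prep_res, exec_res):
        shared["result"] = exec_res
        return "default"


shared = {"text": "hello"}
action = Upper().run(shared)
# shared["result"] is now "HELLO"; action is "default"
```

Keeping I/O in `prep`/`post` and the main work in `exec` is what lets a framework add retries around `exec` without re-reading or re-writing the store.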
diff --git a/docs/design_pattern/agent.md b/docs/design_pattern/agent.md
index 3141bd5..a086426 100644
--- a/docs/design_pattern/agent.md
+++ b/docs/design_pattern/agent.md
@@ -10,7 +10,7 @@ nav_order: 1
Agent is a powerful design pattern in which nodes can take dynamic actions based on the context.
-

+
## Implement Agent with Graph
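
The agent pattern named above (nodes taking dynamic actions based on context) can be sketched as a decide/act loop. Here `decide()` is a deterministic stub standing in for an LLM call, and the `actions` dict stands in for graph edges; all names are illustrative, not PocketFlow API.

```python
# Agent-loop sketch: a "decide" step picks the next action from context.

def decide(ctx):
    # An LLM would choose here; this stub searches until notes exist.
    return "answer" if ctx["notes"] else "search"

def search(ctx):
    ctx["notes"].append("fact about " + ctx["question"])
    return decide(ctx)

def answer(ctx):
    ctx["answer"] = "Based on: " + "; ".join(ctx["notes"])
    return "done"

actions = {"search": search, "answer": answer}

ctx = {"question": "topic X", "notes": []}
action = decide(ctx)
while action != "done":
    action = actions[action](ctx)
```

The key property is that control flow is chosen at runtime from context, not hard-coded as a fixed sequence.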
diff --git a/docs/design_pattern/mapreduce.md b/docs/design_pattern/mapreduce.md
index de680e3..237b3cb 100644
--- a/docs/design_pattern/mapreduce.md
+++ b/docs/design_pattern/mapreduce.md
@@ -14,7 +14,7 @@ MapReduce is a design pattern suitable when you have either:
and there is a logical way to break the task into smaller, ideally independent parts.
-

+
You first break down the task using [BatchNode](../core_abstraction/batch.md) in the map phase, followed by aggregation in the reduce phase.
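
The map-then-reduce split described above can be sketched with per-chunk word counts standing in for per-chunk LLM outputs (e.g., summaries); the function names are illustrative.

```python
# MapReduce sketch: the map phase processes independent chunks,
# the reduce phase merges the partial results.
from collections import Counter

def map_phase(chunks):
    # Each chunk is handled independently (parallelizable).
    return [Counter(chunk.split()) for chunk in chunks]

def reduce_phase(partials):
    # Aggregate the per-chunk results into one answer.
    total = Counter()
    for p in partials:
        total += p
    return total

chunks = ["a b a", "b c"]
result = reduce_phase(map_phase(chunks))
# result == Counter({'a': 2, 'b': 2, 'c': 1})
```

Because each map call is independent, this is where a batch abstraction can fan the work out.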
diff --git a/docs/design_pattern/rag.md b/docs/design_pattern/rag.md
index a2629e4..a534782 100644
--- a/docs/design_pattern/rag.md
+++ b/docs/design_pattern/rag.md
@@ -10,7 +10,7 @@ nav_order: 3
For certain LLM tasks like answering questions, providing relevant context is essential. One common architecture is a **two-stage** RAG pipeline:
-

+
1. **Offline stage**: Preprocess and index documents ("building the index").
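
The two-stage pipeline above can be sketched with word-set overlap standing in for real embeddings; `build_index` and `retrieve` are hypothetical names for the offline and online stages.

```python
# Two-stage RAG sketch: offline, index documents; online, retrieve
# the best match for a query. Word overlap stands in for embeddings.

def build_index(docs):
    # Offline stage: "index" each document as its set of lowercase words.
    return [(doc, set(doc.lower().split())) for doc in docs]

def retrieve(index, query, k=1):
    # Online stage: rank documents by word overlap with the query.
    q = set(query.lower().split())
    ranked = sorted(index, key=lambda item: len(item[1] & q), reverse=True)
    return [doc for doc, _ in ranked[:k]]

index = build_index(["Cats sleep a lot", "Python is a language"])
top = retrieve(index, "what language is Python")
```

The retrieved documents would then be pasted into the LLM prompt as context for answering.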
diff --git a/docs/design_pattern/workflow.md b/docs/design_pattern/workflow.md
index 476dfb4..92dc536 100644
--- a/docs/design_pattern/workflow.md
+++ b/docs/design_pattern/workflow.md
@@ -10,7 +10,7 @@ nav_order: 2
Many real-world tasks are too complex for one LLM call. The solution is **Task Decomposition**: break them into a [chain](../core_abstraction/flow.md) of multiple Nodes.
-

+
> - You don't want to make each task **too coarse**, because it may be *too complex for one LLM call*.
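
The chained decomposition described above can be sketched as a pipeline where each step's output feeds the next; each function stands in for one node / LLM call, and all names are illustrative.

```python
# Task-decomposition sketch: a chain of small steps, each step's
# output feeding the next (each function stands in for one Node).

def outline(topic):
    return [f"{topic}: intro", f"{topic}: details"]

def draft(sections):
    return " | ".join(s + " (drafted)" for s in sections)

def review(text):
    return text + " [approved]"

steps = [outline, draft, review]
result = "topic X"
for step in steps:
    result = step(result)
```

Each step stays small enough for one reliable LLM call, while the chain as a whole handles the complex task.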
diff --git a/docs/guide.md b/docs/guide.md
index 4d29e79..eba0049 100644
--- a/docs/guide.md
+++ b/docs/guide.md
@@ -56,7 +56,7 @@ Agentic Coding should be a collaboration between Human System Design and Agent I
3. **Utilities**: Based on the Flow Design, identify and implement necessary utility functions.
- Think of your AI system as the brain. It needs a body—these *external utility functions*—to interact with the real world:
-
+
- Reading inputs (e.g., retrieving Slack messages, reading emails)
- Writing outputs (e.g., generating reports, sending emails)
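
The "body" metaphor above can be sketched as thin utility functions that the flow calls to touch the outside world. `read_messages` and `send_report` are hypothetical stand-ins for real integrations (Slack, email), stubbed so the shape is clear.

```python
# Utility-function sketch: external I/O lives in small, swappable
# functions outside the flow logic. These are stubs, not real APIs.

def read_messages(channel):
    # Real version would call an external API; stub returns canned input.
    return [f"message from {channel}"]

def send_report(text, outbox):
    # Real version would email the report; stub appends to a list.
    outbox.append(text)
    return True

outbox = []
msgs = read_messages("general")
send_report("Report: " + "; ".join(msgs), outbox)
```

Keeping these behind simple function boundaries makes them easy to mock while iterating on the flow itself.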
@@ -127,7 +127,7 @@ Agentic Coding should be a collaboration between Human System Design and Agent I
- > **You'll likely iterate a lot!** Expect to repeat Steps 3–6 hundreds of times.
>
- >
+ >
{: .best-practice }
8. **Reliability**
diff --git a/docs/index.md b/docs/index.md
index d16d65a..fa2e0e3 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -13,7 +13,7 @@ A [100-line](https://github.com/the-pocket/PocketFlow/blob/main/pocketflow/__ini
- **Agentic-Coding**: Intuitive enough for AI agents to help humans build complex LLM applications.
-

+
@@ -29,7 +29,7 @@ We model the LLM workflow as a **Graph + Shared Store**:
- [(Advanced) Parallel](./core_abstraction/parallel.md) nodes/flows handle I/O-bound tasks.
-

+
## Design Pattern
@@ -44,7 +44,7 @@ From there, it’s easy to implement popular design patterns:
- [(Advanced) Multi-Agents](./design_pattern/multi_agent.md) coordinate multiple agents.
-

+
## Utility Function