diff --git a/docs/async.md b/docs/async.md
index 637413f..29a4367 100644
--- a/docs/async.md
+++ b/docs/async.md
@@ -1,11 +1,11 @@
 ---
 layout: default
-title: "Async"
+title: "(Advanced) Async"
 parent: "Core Abstraction"
 nav_order: 5
 ---
 
-# Async
+# (Advanced) Async
 
 **Mini LLM Flow** allows fully asynchronous nodes by implementing `prep_async()`, `exec_async()`, `exec_fallback_async()`, and/or `post_async()`. This is useful for:
diff --git a/docs/index.md b/docs/index.md
index a6cfda5..3258b51 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -6,16 +6,15 @@ nav_order: 1
 ---
 
 # Mini LLM Flow
 
-A [100-line](https://github.com/zachary62/miniLLMFlow/blob/main/minillmflow/__init__.py) minimalist LLM framework for agents, task decomposition, RAG, etc.
+A [100-line](https://github.com/zachary62/miniLLMFlow/blob/main/minillmflow/__init__.py) minimalist LLM framework for *Agents, Task Decomposition, RAG, etc*.
 
-We model the LLM workflow as a **Nested Flow**:
-- Each **Node** handles a simple LLM task.
-- Nodes are chained together to form a **Flow** for compute-intensive tasks.
-- One Node can be chained to multiple Nodes through **Actions** as an agent.
-- A Flow can be treated as a Node for **Nested Flows**.
-- Both Nodes and Flows can be **Batched** for data-intensive tasks.
-- Nodes and Flows can be **Async** for user inputs.
-- **Async** Nodes and Flows can be executed in **Parallel**.
+We model the LLM workflow as a **Nested Directed Graph**:
+- **Nodes** handle simple (LLM) tasks.
+- Nodes connect through **Actions** (labeled edges) for *Agents*.
+- **Flows** orchestrate a directed graph of Nodes for *Task Decomposition*.
+- A Flow can be used as a Node (for **Nesting**).
+- **Batch** Nodes/Flows for data-intensive tasks.
+- **Async** Nodes/Flows allow waits or **Parallel** execution.
@@ -27,8 +26,8 @@ We model the LLM workflow as a **Nested Flow**:
- [Flow](./flow.md)
- [Communication](./communication.md)
- [Batch](./batch.md)
-- [Async](./async.md)
-- [Parallel](./parallel.md)
+- [(Advanced) Async](./async.md)
+- [(Advanced) Parallel](./parallel.md)
## Preparation
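
Reviewer note: to make the "Nested Directed Graph" model in the new index.md text concrete, here is a toy, self-contained sketch of Nodes connected through named Actions and walked by a Flow. This is illustrative only, assuming a minimal subclass-style API; the class and method names are hypothetical stand-ins, not the real `minillmflow` interface.

```python
class Node:
    """Toy Node: a graph vertex whose post() picks the outgoing Action."""

    def __init__(self):
        self.successors = {}  # action name -> next Node (labeled edges)

    def next(self, node, action="default"):
        self.successors[action] = node
        return node

    def prep(self, shared):
        pass

    def exec(self, prep_res):
        pass

    def post(self, shared, prep_res, exec_res):
        return "default"

    def run(self, shared):
        prep_res = self.prep(shared)
        exec_res = self.exec(prep_res)
        return self.post(shared, prep_res, exec_res)


class Flow:
    """Toy Flow: walks the graph from a start Node, following returned actions."""

    def __init__(self, start):
        self.start = start

    def run(self, shared):
        node = self.start
        while node is not None:
            action = node.run(shared) or "default"
            node = node.successors.get(action)


# Agent-style branching: a Decide node routes to Search or Answer.
class Decide(Node):
    def post(self, shared, prep_res, exec_res):
        return "search" if shared.get("need_info") else "answer"


class Search(Node):
    def post(self, shared, prep_res, exec_res):
        shared.setdefault("visited", []).append("search")
        shared["need_info"] = False  # pretend the lookup succeeded
        return "default"


class Answer(Node):
    def post(self, shared, prep_res, exec_res):
        shared.setdefault("visited", []).append("answer")
        # returning nothing and having no successor ends the flow


decide, search, answer = Decide(), Search(), Answer()
decide.next(search, "search")
decide.next(answer, "answer")
search.next(decide)  # loop back to Decide after gathering info

shared = {"need_info": True}
Flow(decide).run(shared)
```

Running this visits Search once, loops back, then takes the `"answer"` edge and stops, which is the agent loop the bullets describe.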
diff --git a/docs/node.md b/docs/node.md
index 32b20b6..39d96b0 100644
--- a/docs/node.md
+++ b/docs/node.md
@@ -7,7 +7,7 @@ nav_order: 1
# Node
-A **Node** is the smallest building block of Mini LLM Flow. Each Node has three lifecycle methods:
+A **Node** is the smallest building block of Mini LLM Flow. Each Node has 3 steps:
1. **`prep(shared)`**
- Reads and preprocesses data from the `shared` store for LLMs.
@@ -25,6 +25,7 @@ A **Node** is the smallest building block of Mini LLM Flow. Each Node has three
- Examples: finalize outputs, trigger next steps, or log results.
- Returns a **string** to specify the next action (`"default"` if nothing or `None` is returned).
+All 3 steps are optional. For example, you might only need `prep()`, without ever calling the LLM.
## Fault Tolerance & Retries
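
Reviewer note: the 3-step contract added above could be shown with a short sketch. This is a hypothetical node, not library code; `SummarizeNode` and the uppercase stand-in for an LLM call are illustrative assumptions.

```python
class SummarizeNode:
    def prep(self, shared):
        # 1. prep: read and preprocess data from the shared store
        return shared["text"]

    def exec(self, prep_res):
        # 2. exec: do the work (an LLM call in practice; upper() stands in here)
        return prep_res.upper()

    def post(self, shared, prep_res, exec_res):
        # 3. post: write results back and return the next action's name
        shared["summary"] = exec_res
        return "default"


# The three steps run in order, passing results along:
shared = {"text": "hello"}
node = SummarizeNode()
prep_res = node.prep(shared)
exec_res = node.exec(prep_res)
action = node.post(shared, prep_res, exec_res)
```

After the run, `shared["summary"]` holds the result and `action` names the edge a Flow would follow next.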
diff --git a/docs/parallel.md b/docs/parallel.md
index b1b8d3a..b7da09f 100644
--- a/docs/parallel.md
+++ b/docs/parallel.md
@@ -1,11 +1,11 @@
---
layout: default
-title: "Parallel"
+title: "(Advanced) Parallel"
parent: "Core Abstraction"
nav_order: 6
---
-# Parallel
+# (Advanced) Parallel
**Parallel** Nodes and Flows let you run multiple tasks **concurrently**—for example, summarizing multiple texts at once. Unlike a regular **BatchNode**, which processes items sequentially, **AsyncParallelBatchNode** and **AsyncParallelBatchFlow** can fire off tasks in parallel. This can improve performance by overlapping I/O and compute.
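
Reviewer note: the sequential-vs-parallel distinction in the paragraph above can be sketched with plain `asyncio`. The `run_sequential`/`run_parallel` helpers below are hypothetical illustrations of the two execution orders, not the actual `AsyncParallelBatchNode`/`AsyncParallelBatchFlow` implementation.

```python
import asyncio


async def summarize(text):
    # Stands in for an I/O-bound async LLM call.
    await asyncio.sleep(0.01)
    return text[:3]


async def run_sequential(items):
    # BatchNode-style: one item at a time, each await finishes before the next starts.
    return [await summarize(t) for t in items]


async def run_parallel(items):
    # AsyncParallelBatchNode-style: fire all tasks at once and overlap their I/O waits.
    return await asyncio.gather(*(summarize(t) for t in items))


items = ["alpha", "beta", "gamma"]
seq = asyncio.run(run_sequential(items))
par = asyncio.run(run_parallel(items))
```

Both produce the same results; the parallel version simply overlaps the waits, so its wall-clock time approaches that of the single slowest item rather than the sum of all of them.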