update docs

zachary62 2025-01-01 19:37:50 +00:00
parent 976a266cd3
commit 27a86e8568
4 changed files with 16 additions and 16 deletions

View File

@@ -1,11 +1,11 @@
 ---
 layout: default
-title: "Async"
+title: "(Advanced) Async"
 parent: "Core Abstraction"
 nav_order: 5
 ---
-# Async
+# (Advanced) Async
 **Mini LLM Flow** allows fully asynchronous nodes by implementing `prep_async()`, `exec_async()`, `exec_fallback_async()`, and/or `post_async()`. This is useful for:
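The four async hooks named in the hunk above can be sketched as follows. This is a hypothetical, self-contained stand-in for illustration — the `AsyncNode` base class and its `run_async` driver are assumptions, not the actual Mini LLM Flow implementation:

```python
import asyncio

# Hypothetical minimal stand-in for an async node; the real framework
# would drive these hooks for you inside its flow runner.
class AsyncNode:
    async def prep_async(self, shared):
        return None            # read/prepare inputs from the shared store

    async def exec_async(self, prep_res):
        return None            # do the (LLM) work, possibly awaiting I/O

    async def exec_fallback_async(self, prep_res, exc):
        raise exc              # handle failures raised by exec_async

    async def post_async(self, shared, prep_res, exec_res):
        return "default"       # write results; return the next action name

    async def run_async(self, shared):
        prep_res = await self.prep_async(shared)
        try:
            exec_res = await self.exec_async(prep_res)
        except Exception as exc:
            exec_res = await self.exec_fallback_async(prep_res, exc)
        return await self.post_async(shared, prep_res, exec_res)

class SummarizeNode(AsyncNode):
    async def prep_async(self, shared):
        return shared["text"]

    async def exec_async(self, text):
        await asyncio.sleep(0)          # placeholder for an async LLM call
        return text.upper()             # placeholder "summary"

    async def post_async(self, shared, prep_res, exec_res):
        shared["summary"] = exec_res
        return "default"

shared = {"text": "hello"}
action = asyncio.run(SummarizeNode().run_async(shared))
```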

View File

@@ -6,16 +6,15 @@ nav_order: 1
 # Mini LLM Flow
-A [100-line](https://github.com/zachary62/miniLLMFlow/blob/main/minillmflow/__init__.py) minimalist LLM framework for agents, task decomposition, RAG, etc.
-We model the LLM workflow as a **Nested Flow**:
-- Each **Node** handles a simple LLM task.
-- Nodes are chained together to form a **Flow** for compute-intensive tasks.
-- One Node can be chained to multiple Nodes through **Actions** as an agent.
-- A Flow can be treated as a Node for **Nested Flows**.
-- Both Nodes and Flows can be **Batched** for data-intensive tasks.
-- Nodes and Flows can be **Async** for user inputs.
-- **Async** Nodes and Flows can be executed in **Parallel**.
+A [100-line](https://github.com/zachary62/miniLLMFlow/blob/main/minillmflow/__init__.py) minimalist LLM framework for *Agents, Task Decomposition, RAG, etc*.
+We model the LLM workflow as a **Nested Directed Graph**:
+- **Nodes** handle simple (LLM) tasks.
+- Nodes connect through **Actions** (labeled edges) for *Agents*.
+- **Flows** orchestrate a directed graph of Nodes for *Task Decomposition*.
+- A Flow can be used as a Node (for **Nesting**).
+- **Batch** Nodes/Flows for data-intensive tasks.
+- **Async** Nodes/Flows allow waits or **Parallel** execution.
 <div align="center">
   <img src="https://github.com/zachary62/miniLLMFlow/blob/main/assets/minillmflow.jpg?raw=true" width="400"/>
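The Node/Action/Flow graph model described in the hunk above can be sketched with a tiny hypothetical runner. All class and method names here (`Node.next`, `Flow.run`, etc.) are assumptions for illustration, not the library's actual API:

```python
# Hypothetical miniature of the model above: nodes return an action
# string, and labeled edges (Actions) pick the next node in the graph.
class Node:
    def __init__(self):
        self.successors = {}   # action name -> next node

    def next(self, node, action="default"):
        self.successors[action] = node
        return node

    def run(self, shared):
        return "default"

class Flow(Node):
    def __init__(self, start):
        super().__init__()
        self.start = start

    def run(self, shared):
        node = self.start
        while node is not None:
            action = node.run(shared) or "default"
            node = node.successors.get(action)
        return "default"       # a Flow is itself a Node, so Flows can nest

class Decide(Node):
    def run(self, shared):
        return "big" if shared["x"] > 10 else "small"

class Label(Node):
    def __init__(self, text):
        super().__init__()
        self.text = text

    def run(self, shared):
        shared["label"] = self.text

decide = Decide()
decide.next(Label("big number"), "big")       # labeled edge: "big"
decide.next(Label("small number"), "small")   # labeled edge: "small"
shared = {"x": 3}
Flow(decide).run(shared)
```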
@@ -27,8 +26,8 @@ We model the LLM workflow as a **Nested Flow**:
 - [Flow](./flow.md)
 - [Communication](./communication.md)
 - [Batch](./batch.md)
-- [Async](./async.md)
-- [Parallel](./parallel.md)
+- [(Advanced) Async](./async.md)
+- [(Advanced) Parallel](./parallel.md)
 ## Preparation

View File

@@ -7,7 +7,7 @@ nav_order: 1
 # Node
-A **Node** is the smallest building block of Mini LLM Flow. Each Node has three lifecycle methods:
+A **Node** is the smallest building block of Mini LLM Flow. Each Node has 3 steps:
 1. **`prep(shared)`**
    - Reads and preprocesses data from the `shared` store for LLMs.
@@ -25,6 +25,7 @@ A **Node** is the smallest building block of Mini LLM Flow. Each Node has three
    - Examples: finalize outputs, trigger next steps, or log results.
    - Returns a **string** to specify the next action (`"default"` if nothing or `None` is returned).
+All 3 steps are optional. For example, you might only need to run the Prep without calling the LLM.
 ## Fault Tolerance & Retries
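The 3-step lifecycle described in the hunks above (`prep` → `exec` → `post`) can be sketched as follows. The hook names mirror the docs; the `run` driver and the example node are hypothetical stand-ins, not the library's implementation:

```python
# Hypothetical sketch of the 3-step Node lifecycle: prep reads from the
# shared store, exec does the work, post writes back and picks an action.
class Node:
    def prep(self, shared):
        return None

    def exec(self, prep_res):
        return None

    def post(self, shared, prep_res, exec_res):
        return "default"

    def run(self, shared):
        prep_res = self.prep(shared)
        exec_res = self.exec(prep_res)
        # post returns the next action; None falls back to "default"
        return self.post(shared, prep_res, exec_res) or "default"

class WordCount(Node):
    def prep(self, shared):
        return shared["text"]           # read from the shared store

    def exec(self, text):
        return len(text.split())        # stand-in for an LLM call

    def post(self, shared, prep_res, exec_res):
        shared["count"] = exec_res      # write the result back
        # returning None means the "default" action

shared = {"text": "the quick brown fox"}
action = WordCount().run(shared)
```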

View File

@@ -1,11 +1,11 @@
 ---
 layout: default
-title: "Parallel"
+title: "(Advanced) Parallel"
 parent: "Core Abstraction"
 nav_order: 6
 ---
-# Parallel
+# (Advanced) Parallel
 **Parallel** Nodes and Flows let you run multiple tasks **concurrently**—for example, summarizing multiple texts at once. Unlike a regular **BatchNode**, which processes items sequentially, **AsyncParallelBatchNode** and **AsyncParallelBatchFlow** can fire off tasks in parallel. This can improve performance by overlapping I/O and compute.
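The sequential-vs-parallel contrast above can be sketched with plain `asyncio`; the `summarize` helper is a hypothetical placeholder for an async LLM call, not part of Mini LLM Flow:

```python
import asyncio

# Hypothetical sketch of parallel batch execution: unlike a sequential
# BatchNode-style loop, asyncio.gather fires all item tasks concurrently.
async def summarize(text):
    await asyncio.sleep(0)        # placeholder for an async LLM call
    return text.split(".")[0]     # toy "summary": the first sentence

async def run_parallel(texts):
    # the sequential equivalent would be:
    #   [await summarize(t) for t in texts]
    return await asyncio.gather(*(summarize(t) for t in texts))

texts = ["First. More detail.", "Second. More detail."]
summaries = asyncio.run(run_parallel(texts))
```

Concurrency pays off when `summarize` spends its time awaiting I/O (network calls to an LLM); the tasks then overlap instead of queueing.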