node docs

zachary62 2024-12-27 23:23:58 +00:00
parent b0b845728e
commit f1f89d223a
6 changed files with 93 additions and 2 deletions

docs/async.md (new file, empty)

docs/batch.md (new file, empty)

docs/communication.md (new file, empty)

docs/flow.md (new file)

@@ -0,0 +1,7 @@
---
layout: default
title: "Flow"
---
# Flow

docs/index.md
@@ -5,11 +5,22 @@ title: "Home"
# Mini LLM Flow
A 100-line minimalist LLM framework for agents, task decomposition, RAG, etc.
![Mini LLM Flow](/docs/assets/minillmflow.jpg)
## Core Abstraction
We model the LLM workflow as a **Nested Flow** (see the sketch after this list):
- Each **Node** handles a simple LLM task (e.g., text summarization, structure extraction, or question answering).
- Nodes are chained together to form a **Flow** for more complex tasks. One Node can be chained to multiple Nodes based on **Actions**, e.g., for agentic steps.
- A Flow can itself be treated as a Node, enabling **Nested Flows**.
- Both Nodes and Flows can be **Batched** for data-intensive tasks (e.g., processing one file at a time in a directory of files).
- Nodes and Flows can be **Async**, e.g., for user feedback before proceeding.
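As a rough illustration of this model, here is a self-contained sketch. The `Node`/`Flow` classes, the `add_successor` method, and the single `exec()` (which collapses the lifecycle methods described in [Node](./node.md)) are simplified assumptions for this example, not the framework's exact API:

```python
# Toy sketch of Nodes, action-based chaining, and a nested Flow.
class Node:
    def __init__(self):
        self.successors = {}  # action string -> next node

    def add_successor(self, node, action="default"):
        self.successors[action] = node
        return node

    def exec(self, shared):
        return "default"  # subclasses return an action string

class Flow(Node):
    """Runs nodes starting from `start`; being a Node itself, a Flow can nest."""
    def __init__(self, start):
        super().__init__()
        self.start = start

    def exec(self, shared):
        node = self.start
        while node is not None:
            action = node.exec(shared) or "default"
            node = node.successors.get(action)
        return "default"

class Decide(Node):  # agentic step: branch on what we know so far
    def exec(self, shared):
        return "answer" if "context" in shared else "search"

class Search(Node):
    def exec(self, shared):
        shared["context"] = "facts from a (stubbed) web search"
        return "default"

class Answer(Node):
    def exec(self, shared):
        shared["answer"] = f"Answer based on: {shared['context']}"
        return None  # no successor -> the flow ends

decide, search, answer = Decide(), Search(), Answer()
decide.add_successor(search, "search")  # action-based branching
decide.add_successor(answer, "answer")
search.add_successor(decide)            # loop back to re-decide

shared = {}
Flow(start=decide).exec(shared)         # a Flow is used like any Node
print(shared["answer"])
```

The `Decide` node looping between searching and answering is the kind of agentic step the Actions mechanism enables.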
- [Node](./node.md)
- [Flow](./flow.md)
- [Communication](./communication.md)
- [Batch](./batch.md)
- [Async](./async.md)
@@ -20,6 +31,7 @@ A **100-line minimalist LLM framework** for agents, task decomposition, retrieval-augmented generation, and more.
- Agent
- Map Reduce
- RAG
- Structured Output
## Example Use Cases

docs/node.md (new file)

@@ -0,0 +1,72 @@
---
layout: default
title: "Node"
---
# Node
A **Node** is the smallest building block of Mini LLM Flow. Each Node has three lifecycle methods (wired together in the sketch after this list):
1. **`prep(shared)`**
- Optionally preprocess data before calling your LLM or doing heavy computation.
- Often used for tasks like reading files, chunking text, or validation.
- Returns `prep_res`, which will be passed to both `exec()` and `post()`.
2. **`exec(shared, prep_res)`**
- The main execution step, typically where you call your LLM or any external APIs.
- Returns `exec_res`, which is passed to `post()`.
3. **`post(shared, prep_res, exec_res)`**
- Optionally perform post-processing, such as writing results back to the `shared` store or deciding the next action.
- Often used to finalize outputs, trigger next steps, or log results.
- Returns a **string** specifying the next action; returning `None` (or nothing) is treated as `"default"`.
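Putting the three methods together, one run of a Node conceptually flows like the sketch below; `run_node` is only an illustration of the data flow, not the framework's actual driver:

```python
def run_node(node, shared):
    """Illustrative sketch of the lifecycle wiring, not framework source."""
    prep_res = node.prep(shared)                     # 1. gather/validate inputs
    exec_res = node.exec(shared, prep_res)           # 2. main work, e.g., the LLM call
    action = node.post(shared, prep_res, exec_res)   # 3. write results, pick next action
    return action if action is not None else "default"
```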
## Fault Tolerance & Retries
Nodes in Mini LLM Flow can **retry** execution if `exec()` raises an exception. You control this via the `max_retries` parameter when you create the Node. By default, `max_retries = 1`, meaning a single attempt and no retries.
```python
# Example usage:
my_node = SummarizeFile(max_retries=3)
```
When an exception occurs in `exec()`, the Node automatically retries until:
- it succeeds, **or**
- it has already retried `max_retries - 1` times and the final attempt also fails.
If you want to **gracefully handle** the error rather than raising it, you can override:
```python
def process_after_fail(self, shared, prep_res, exc):
    raise exc
```
By **default**, it just re-raises `exc`. But you can return a fallback result instead. That fallback result becomes the `exec_res` passed to `post()`.
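Taken together, the retry-and-fallback semantics above amount to roughly the following sketch; `exec_with_fault_tolerance` is a hypothetical name for logic that lives inside the framework:

```python
def exec_with_fault_tolerance(node, shared, prep_res, max_retries=1):
    # Attempt exec() up to max_retries times in total; if the final
    # attempt also fails, delegate to process_after_fail(), which
    # re-raises by default but may return a fallback exec_res.
    for attempt in range(max_retries):
        try:
            return node.exec(shared, prep_res)
        except Exception as exc:
            if attempt == max_retries - 1:
                return node.process_after_fail(shared, prep_res, exc)
```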
## Minimal Example
```python
class SummarizeFile(Node):
    def prep(self, shared):
        filename = self.params["filename"]
        return shared["data"][filename]

    def exec(self, shared, prep_res):
        if not prep_res:
            raise ValueError("Empty file content!")
        prompt = f"Summarize this text in 10 words: {prep_res}"
        summary = call_llm(prompt)  # might fail
        return summary

    def process_after_fail(self, shared, prep_res, exc):
        # Provide a simple fallback instead of crashing
        return "There was an error processing your request."

    def post(self, shared, prep_res, exec_res):
        filename = self.params["filename"]
        shared["summary"][filename] = exec_res
        # Return "default" by not returning anything
```
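Running the node end to end could then look like the snippet below; `set_params()` and `run()` are assumptions about the Node interface here, and `call_llm()` is a helper you provide:

```python
# Hypothetical usage of SummarizeFile; set_params()/run() are assumed methods.
shared = {
    "data": {"report.txt": "Long report text to summarize..."},
    "summary": {},
}

summarize = SummarizeFile(max_retries=3)
summarize.set_params({"filename": "report.txt"})  # read via self.params
summarize.run(shared)

print(shared["summary"]["report.txt"])
```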