diff --git a/docs/decomp.md b/docs/decomp.md
index 09bac8e..e83ff72 100644
--- a/docs/decomp.md
+++ b/docs/decomp.md
@@ -13,28 +13,29 @@ Many real-world tasks are too complex for one LLM call. The solution is to decom
 
 ```python
 class GenerateOutline(Node):
-    def exec(self, topic):
-        prompt = f"Create a detailed outline for an article about {topic}"
-        return call_llm(prompt)
+    def prep(self, shared): return shared["topic"]
+    def exec(self, topic): return call_llm(f"Create a detailed outline for an article about {topic}")
+    def post(self, shared, prep_res, exec_res): shared["outline"] = exec_res
 
 class WriteSection(Node):
-    def exec(self, section):
-        prompt = f"Write content for this section: {section}"
-        return call_llm(prompt)
+    def prep(self, shared): return shared["outline"]
+    def exec(self, outline): return call_llm(f"Write content based on this outline: {outline}")
+    def post(self, shared, prep_res, exec_res): shared["draft"] = exec_res
 
 class ReviewAndRefine(Node):
-    def exec(self, draft):
-        prompt = f"Review and improve this draft: {draft}"
-        return call_llm(prompt)
+    def prep(self, shared): return shared["draft"]
+    def exec(self, draft): return call_llm(f"Review and improve this draft: {draft}")
+    def post(self, shared, prep_res, exec_res): shared["final_article"] = exec_res
 
-# Connect nodes 
+# Connect nodes
 outline = GenerateOutline()
 write = WriteSection()
 review = ReviewAndRefine()
 outline >> write >> review
 
-# Create flow
+# Create and run flow
 writing_flow = Flow(start=outline)
-writing_flow.run({"topic": "AI Safety"})
+shared = {"topic": "AI Safety"}
+writing_flow.run(shared)
 ```
 
diff --git a/docs/node.md b/docs/node.md
index aeaa61e..47a46b0 100644
--- a/docs/node.md
+++ b/docs/node.md
@@ -9,24 +9,23 @@ nav_order: 1
 
 A **Node** is the smallest building block of Mini LLM Flow. Each Node has 3 steps:
 
-1. **`prep(shared)`**
-   - Reads and preprocesses data from the `shared` store for LLMs.
+1. `prep(shared)`
+   - A reliable step for preprocessing data from the `shared` store.
    - Examples: *query DB, read files, or serialize data into a string*.
-   - Returns `prep_res`, which will be passed to both `exec()` and `post()`.
+   - Returns `prep_res`, which is used by `exec()` and `post()`.
 
-2. **`exec(prep_res)`**
-   - The main execution step where the LLM is called.
-   - Optionally has built-in retry and error handling (below).
-   - ⚠️ If retry enabled, ensure implementation is idempotent.
+2. `exec(prep_res)`
+   - The **main execution** step, with optional retries and error handling (below).
+   - Examples: *primarily for LLMs, but can also be used for remote APIs*.
+   - ⚠️ If retries are enabled, ensure the implementation is idempotent.
    - Returns `exec_res`, which is passed to `post()`.
 
-3. **`post(shared, prep_res, exec_res)`**
-   - Writes results back to the `shared` store or decides the next action.
-   - Examples: *finalize outputs, trigger next steps, or log results*.
-   - Returns a **string** to specify the next action (`"default"` if nothing or `None` is returned).
+3. `post(shared, prep_res, exec_res)`
+   - A reliable postprocessing step to write results back to the `shared` store and decide the next Action.
+   - Examples: *update DB, change states, log results, decide next Action*.
+   - Returns a **string** specifying the next Action (`"default"` if none).
-
-> All 3 steps are optional. For example, you might only need to run the Prep without calling the LLM.
+> All 3 steps are optional. You could run only `prep` if you just need to prepare data without calling the LLM.
 {: .note }
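
For reference, here is a minimal sketch of the three-step lifecycle that the updated `docs/node.md` describes, written against the same API surface the diff itself uses (`Node`, `Flow(start=...)`, `flow.run(shared)`, and a user-supplied `call_llm` helper). The import path and the `max_retries` parameter are assumptions for illustration only; the actual retry configuration lives in the section node.md points to.

```python
# Sketch only: the import path and the `max_retries` knob are assumed, not confirmed.
from mini_llm_flow import Node, Flow

def call_llm(prompt):
    # User-supplied helper, as in the docs' examples; stubbed for this sketch.
    return f"[LLM output for: {prompt[:40]}...]"

class Summarize(Node):
    def prep(self, shared):
        # Reliable preprocessing: read the input from the shared store.
        return shared["text"]

    def exec(self, text):
        # Main execution: the LLM call. Keep it idempotent, since it can
        # run more than once when retries are enabled.
        return call_llm(f"Summarize this in one paragraph: {text}")

    def post(self, shared, prep_res, exec_res):
        # Reliable postprocessing: store the result and pick the next Action.
        shared["summary"] = exec_res
        return "default"  # the Action string; "default" is also the fallback

summarize = Summarize(max_retries=3)  # hypothetical retry parameter
flow = Flow(start=summarize)
shared = {"text": "Mini LLM Flow chains small Nodes into larger Flows."}
flow.run(shared)
print(shared["summary"])
```

Returning an Action string from `post` is what lets a Flow pick the next step; the `outline >> write >> review` chain in `docs/decomp.md` simply relies on the `"default"` Action at every hop.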