From c01c510be7f51777c3a6e7b128bd5d9c98184582 Mon Sep 17 00:00:00 2001
From: zachary62
Date: Sat, 28 Dec 2024 04:17:33 +0000
Subject: [PATCH] fix mermaid

---
 docs/flow.md | 37 +++++++++++++++++++------------------
 docs/llm.md  | 10 ++++------
 2 files changed, 23 insertions(+), 24 deletions(-)

diff --git a/docs/flow.md b/docs/flow.md
index a4008c4..1176173 100644
--- a/docs/flow.md
+++ b/docs/flow.md
@@ -48,7 +48,7 @@ Here's a simple expense approval flow that demonstrates branching and looping. T
 
 - `"approved"`: expense is approved, move to payment processing
 - `"needs_revision"`: expense needs changes, send back for revision
-- `"rejected"`: expense is denied, end the process
+- `"rejected"`: expense is denied, finish the process
 
 We can wire them like this:
 
@@ -56,10 +56,10 @@ We can wire them like this:
 # Define the flow connections
 review - "approved" >> payment  # If approved, process payment
 review - "needs_revision" >> revise  # If needs changes, go to revision
-review - "rejected" >> end  # If rejected, end the process
+review - "rejected" >> finish  # If rejected, finish the process
 
 revise >> review  # After revision, go back for another review
-payment >> end  # After payment, end the process
+payment >> finish  # After payment, finish the process
 
 flow = Flow(start=review)
 ```
@@ -68,16 +68,16 @@ Let's see how it flows:
 
 1. If `review.post()` returns `"approved"`, the expense moves to `payment` node
 2. If `review.post()` returns `"needs_revision"`, it goes to `revise` node, which then loops back to `review`
-3. If `review.post()` returns `"rejected"`, it moves to `end` node and stops
+3. If `review.post()` returns `"rejected"`, it moves to `finish` node and stops
 
 ```mermaid
 flowchart TD
     review[Review Expense] -->|approved| payment[Process Payment]
     review -->|needs_revision| revise[Revise Report]
-    review -->|rejected| end[End Process]
-    
+    review -->|rejected| finish[End Process]
+
     revise --> review
-    payment --> end
+    payment --> finish
 ```
 
 ## Running Individual Nodes vs. Running a Flow
@@ -147,18 +147,19 @@ This creates a clean separation of concerns while maintaining a clear execution
 
 ```mermaid
 flowchart TD
-    subgraph "Payment Flow"
-        A[Validate Payment] --> B[Process Payment] --> C[Payment Confirmation]
+
+    subgraph paymentFlow["Payment Flow"]
+        A[Validate Payment] --> B[Process Payment] --> C[Payment Confirmation]
     end
-    
-    subgraph "Inventory Flow"
-        D[Check Stock] --> E[Reserve Items] --> F[Update Inventory]
+
+    subgraph inventoryFlow["Inventory Flow"]
+        D[Check Stock] --> E[Reserve Items] --> F[Update Inventory]
     end
-    
-    subgraph "Shipping Flow"
-        G[Create Label] --> H[Assign Carrier] --> I[Schedule Pickup]
+
+    subgraph shippingFlow["Shipping Flow"]
+        G[Create Label] --> H[Assign Carrier] --> I[Schedule Pickup]
     end
-    
-    Payment Flow --> Inventory Flow
-    Inventory Flow --> Shipping Flow
+
+    paymentFlow --> inventoryFlow
+    inventoryFlow --> shippingFlow
 ```
diff --git a/docs/llm.md b/docs/llm.md
index fd49f9a..dd0755a 100644
--- a/docs/llm.md
+++ b/docs/llm.md
@@ -28,7 +28,7 @@ call_llm("How are you?")
 
 ## Improvements
 You can enhance the function as needed. Examples:
-1. Handle chat history:
+- Handle chat history:
 
 ```python
 def call_llm(messages):
@@ -41,7 +41,7 @@ def call_llm(messages):
     return r.choices[0].message.content
 ```
 
-2. Add in-memory caching:
+- Add in-memory caching:
 
 ```python
 from functools import lru_cache
@@ -52,7 +52,7 @@ def call_llm(prompt):
     pass
 ```
 
-3. Enable logging:
+- Enable logging:
 
 ```python
 def call_llm(prompt):
@@ -63,12 +63,10 @@ def call_llm(prompt):
     return response
 ```
 
-You can also try libraries like `litellm`
 
 ## Why not provide an LLM call function?
 
 I believe it is a bad practice to provide LLM-specific implementations in a general framework:
 - LLM APIs change frequently. Hardcoding them makes maintenance difficult.
 - You may need flexibility to switch vendors, use fine-tuned models, or deploy local LLMs.
-- Custom optimizations like prompt caching, request batching, or response streaming may be required.
-
+- You may need optimizations like prompt caching, request batching, or response streaming.
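
Note on the rename: `end` is a reserved word in Mermaid flowcharts (it closes `subgraph` blocks), so a node id spelled lowercase `end` breaks rendering, and node references containing spaces such as `Payment Flow --> Inventory Flow` are likewise invalid; hence the `finish` node and the explicit subgraph ids above. Separately, the `node - "action" >> target` wiring that the flow.md hunks touch can be sketched as a toy, self-contained Python snippet — the `Node` and `_Transition` class names and the `"default"` action key are illustrative assumptions here, not the framework's actual API:

```python
class _Transition:
    """Intermediate object produced by `node - "action"`."""
    def __init__(self, node, action):
        self.node, self.action = node, action

    def __rshift__(self, target):          # (node - "action") >> target
        self.node.successors[self.action] = target
        return target


class Node:
    def __init__(self, name):
        self.name = name
        self.successors = {}               # action string -> next Node

    def __sub__(self, action):             # node - "action"
        return _Transition(self, action)

    def __rshift__(self, target):          # node >> target: unconditional hop
        self.successors["default"] = target
        return target


review, revise, payment, finish = (Node(n) for n in
                                   ("review", "revise", "payment", "finish"))

# The exact wiring from the patched docs/flow.md:
review - "approved" >> payment        # If approved, process payment
review - "needs_revision" >> revise   # If needs changes, go to revision
review - "rejected" >> finish         # If rejected, finish the process
revise >> review                      # After revision, go back for review
payment >> finish                     # After payment, finish the process
```

In the real framework, these registered successors are what a `Flow(start=review)` would follow after each node's `post()` returns an action string.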