diff --git a/docs/async.md b/docs/async.md
index 29a4367..c3cc235 100644
--- a/docs/async.md
+++ b/docs/async.md
@@ -7,7 +7,7 @@ nav_order: 5
 
 # (Advanced) Async
 
-**Mini LLM Flow** allows fully asynchronous nodes by implementing `prep_async()`, `exec_async()`, `exec_fallback_async()`, and/or `post_async()`. This is useful for:
+**Async Nodes** implement `prep_async()`, `exec_async()`, `exec_fallback_async()`, and/or `post_async()`. This is useful for:
 
 1. **prep_async()** - For *fetching/reading data (files, APIs, DB)* in an I/O-friendly way.
 
@@ -18,7 +18,6 @@ nav_order: 5
 
 3. **post_async()** - For *awaiting user feedback*, *coordinating across multi-agents* or any additional async steps after `exec_async()`.
 
-Each step can be either sync or async; the framework automatically detects which to call.
 
 **Note**: `AsyncNode` must be wrapped in `AsyncFlow`. `AsyncFlow` can also include regular (sync) nodes.
 
@@ -59,6 +58,4 @@ async def main():
     print("Final Summary:", shared.get("summary"))
 
 asyncio.run(main())
-```
-
-Keep it simple: go async only when needed, handle errors gracefully, and leverage Python’s `asyncio`.
+```
\ No newline at end of file
diff --git a/docs/parallel.md b/docs/parallel.md
index b7da09f..f822fbe 100644
--- a/docs/parallel.md
+++ b/docs/parallel.md
@@ -7,11 +7,11 @@ nav_order: 6
 
 # (Advanced) Parallel
 
-**Parallel** Nodes and Flows let you run multiple tasks **concurrently**—for example, summarizing multiple texts at once. Unlike a regular **BatchNode**, which processes items sequentially, **AsyncParallelBatchNode** and **AsyncParallelBatchFlow** can fire off tasks in parallel. This can improve performance by overlapping I/O and compute.
+**Parallel** Nodes and Flows let you run multiple **Async** Nodes and Flows **concurrently**—for example, summarizing multiple texts at once. This can improve performance by overlapping I/O and compute.
 
 ## AsyncParallelBatchNode
 
-Like **AsyncBatchNode**, but uses `prep_async()`, `exec_async()`, and `post_async()` in **parallel**:
+Like **AsyncBatchNode**, but runs `exec_async()` in **parallel**:
 
 ```python
 class ParallelSummaries(AsyncParallelBatchNode):
@@ -47,11 +47,31 @@ await parallel_flow.run_async(shared)
 
 ## Best Practices
 
-- **Ensure Tasks Are Independent**
-  If each item depends on the output of a previous item, **don’t** parallelize. Parallelizing dependent tasks can lead to inconsistencies or race conditions.
+- **Ensure Tasks Are Independent**: If each item depends on the output of a previous item, **do not** parallelize.
 
-- **Beware Rate Limits**
-  Parallel calls can **quickly** trigger rate limits on LLM services. You may need a **throttling** mechanism (e.g., semaphores or sleep intervals) to avoid hitting vendor limits.
+- **Beware of Rate Limits**: Parallel calls can **quickly** trigger rate limits on LLM services. You may need a **throttling** mechanism (e.g., semaphores or sleep intervals); see the throttling sketch after this list.
 
-- **Consider Single-Node Batch APIs**
-  Some LLMs offer a **batch inference** API where you can send multiple prompts in a single call. This is more complex to implement but can be more efficient than launching many parallel requests. Conceptually, it can look similar to an **AsyncBatchNode** or **BatchNode**, but the underlying call bundles multiple items into **one** request.
\ No newline at end of file
+- **Consider Single-Node Batch APIs**: Some LLMs offer a **batch inference** API where you can send multiple prompts in a single call. This is more complex to implement but can be more efficient than launching many parallel requests, and it also mitigates rate limits; see the conceptual sketch after this list.
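+
+For example, a minimal throttling sketch using `asyncio.Semaphore` (the `ThrottledSummaries` name, the limit of 10, and the `call_llm_async()` helper are illustrative):
+
+```python
+import asyncio
+
+# Allow at most 10 concurrent LLM calls across all items.
+sem = asyncio.Semaphore(10)
+
+class ThrottledSummaries(AsyncParallelBatchNode):
+    async def exec_async(self, text):
+        async with sem:  # wait for a free slot before calling the LLM
+            return await call_llm_async(f"Summarize: {text}")  # assumed async helper
+```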
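+
+Conceptually, a batch call bundles every prompt into **one** request. Below is a sketch assuming a hypothetical `client.batch_generate()` endpoint (real batch APIs differ by vendor):
+
+```python
+async def batch_summarize(texts):
+    # One request carrying all prompts, instead of N parallel requests.
+    prompts = [f"Summarize: {t}" for t in texts]
+    return await client.batch_generate(prompts)  # hypothetical vendor batch API
+```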