improve docs
parent 098480f124 · commit 493a06d915
@@ -7,7 +7,7 @@ nav_order: 5

# (Advanced) Async

**Async Nodes** implement `prep_async()`, `exec_async()`, `exec_fallback_async()`, and/or `post_async()`. This is useful for:

1. **prep_async()**
   - For *fetching/reading data (files, APIs, DB)* in an I/O-friendly way.

@@ -18,7 +18,6 @@ nav_order: 5

3. **post_async()**
   - For *awaiting user feedback*, *coordinating across multiple agents*, or any additional async steps after `exec_async()`.

Each step can be either sync or async; the framework automatically detects which to call.

**Note**: `AsyncNode` must be wrapped in `AsyncFlow`. `AsyncFlow` can also include regular (sync) nodes.
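
For illustration, a minimal sketch of an async node wrapped in an `AsyncFlow`; it assumes the async method signatures mirror their sync counterparts, that `AsyncFlow(start=...)` is how a flow is built, and that `call_llm_async()` is a hypothetical helper rather than part of the library:

```python
import asyncio

class SummarizeDoc(AsyncNode):
    async def prep_async(self, shared):
        # I/O-friendly read: a file, API, or DB call could go here
        return shared.get("doc_text", "")

    async def exec_async(self, doc_text):
        # call_llm_async() is a hypothetical async LLM helper
        return await call_llm_async(f"Summarize: {doc_text}")

    async def exec_fallback_async(self, prep_res, exc):
        # Graceful fallback if exec_async() raises
        return "(summary unavailable)"

    async def post_async(self, shared, prep_res, exec_res):
        # Store the result; any extra async step could also go here
        shared["summary"] = exec_res
        return "default"  # assumed action-string convention for the next node

# Per the note above, the async node must be wrapped in an AsyncFlow
flow = AsyncFlow(start=SummarizeDoc())
asyncio.run(flow.run_async({"doc_text": "..."}))
```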
@@ -59,6 +58,4 @@ async def main():

```python
    print("Final Summary:", shared.get("summary"))

asyncio.run(main())
```

Keep it simple: go async only when needed, handle errors gracefully, and leverage Python’s `asyncio`.

@@ -7,11 +7,11 @@ nav_order: 6

# (Advanced) Parallel

**Parallel** Nodes and Flows let you run multiple **Async** Nodes and Flows **concurrently**—for example, summarizing multiple texts at once. This can improve performance by overlapping I/O and compute.

## AsyncParallelBatchNode

Like **AsyncBatchNode**, but runs `exec_async()` in **parallel**:

```python
class ParallelSummaries(AsyncParallelBatchNode):
    # ... (rest of the class body elided in this hunk)
```
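
The hunk cuts the class body off; purely as an illustration (the real body lives in the full doc, and `call_llm_async()` is again a hypothetical helper), such a node might look like:

```python
class ParallelSummaries(AsyncParallelBatchNode):
    async def prep_async(self, shared):
        # Each returned item becomes one exec_async() call, fired concurrently
        return shared.get("texts", [])

    async def exec_async(self, text):
        # call_llm_async() is a hypothetical async LLM helper
        return await call_llm_async(f"Summarize: {text}")

    async def post_async(self, shared, prep_res, exec_res_list):
        # One result per item, assumed to arrive in input order
        shared["summaries"] = exec_res_list
        return "default"
```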

@@ -47,11 +47,8 @@ await parallel_flow.run_async(shared)

## Best Practices

- **Ensure Tasks Are Independent**: If each item depends on the output of a previous item, **do not** parallelize.

- **Beware of Rate Limits**: Parallel calls can **quickly** trigger rate limits on LLM services. You may need a **throttling** mechanism (e.g., semaphores or sleep intervals); see the semaphore sketch after this list.

- **Consider Single-Node Batch APIs**: Some LLMs offer a **batch inference** API where you can send multiple prompts in a single call. This is more complex to implement but can be more efficient than launching many parallel requests and mitigates rate limits; a batched-call sketch follows the semaphore example below.
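
As a sketch of the throttling idea, an `asyncio.Semaphore` can cap in-flight calls; the limit of 5 and the `call_llm_async()` helper are illustrative assumptions:

```python
import asyncio

# Cap concurrent LLM calls at 5 (illustrative limit)
llm_semaphore = asyncio.Semaphore(5)

async def throttled_llm_call(prompt):
    # Waits here whenever 5 calls are already in flight
    async with llm_semaphore:
        return await call_llm_async(prompt)  # hypothetical helper
```

Calling `throttled_llm_call()` from `exec_async()` keeps a parallel batch node under the cap even when many items run at once.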
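
And for the batch-API idea, a sketch of bundling every prompt into a single request; `client.batch_complete()` is a stand-in for whatever batch endpoint your LLM vendor actually exposes, not a real API:

```python
class BatchedSummaries(AsyncNode):
    async def prep_async(self, shared):
        return shared.get("texts", [])

    async def exec_async(self, texts):
        # One request carrying all prompts, instead of many parallel calls;
        # client.batch_complete() is a hypothetical vendor batch endpoint
        prompts = [f"Summarize: {t}" for t in texts]
        return await client.batch_complete(prompts)

    async def post_async(self, shared, prep_res, exec_res):
        shared["summaries"] = exec_res
        return "default"
```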