update the doc structure

zachary62 2025-03-03 11:27:33 -05:00
parent 3b6200642b
commit 6f20c7d8bc
20 changed files with 25 additions and 23 deletions

@@ -81,7 +81,7 @@ Here:
 ## 2. Params
 **Params** let you store *per-Node* or *per-Flow* config that doesn't need to live in the shared store. They are:
-- **Immutable** during a Nodes run cycle (i.e., they dont change mid-`prep->exec->post`).
+- **Immutable** during a Node's run cycle (i.e., they don't change mid-`prep->exec->post`).
 - **Set** via `set_params()`.
 - **Cleared** and updated each time a parent Flow calls it.
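The params contract described in this hunk can be sketched with a minimal stand-in class (illustrative only, not the library's implementation; the `Summarize` node and the `filename` param are made up for this example):

```python
class BaseNode:
    """Minimal stand-in for a Node: params are set once via set_params()
    and stay fixed for the whole prep->exec->post run cycle."""
    def __init__(self):
        self.params = {}

    def set_params(self, params):
        # A parent Flow would call this, replacing (not merging) old params.
        self.params = dict(params)

    def run(self, shared):
        prep_res = self.prep(shared)
        exec_res = self.exec(prep_res)
        return self.post(shared, prep_res, exec_res)

class Summarize(BaseNode):  # hypothetical node
    def prep(self, shared):
        # Per-node config comes from params; data comes from the shared store.
        return shared["texts"][self.params["filename"]]

    def exec(self, text):
        return text[:10]  # placeholder for an LLM call

    def post(self, shared, prep_res, exec_res):
        shared["summary"] = exec_res

node = Summarize()
node.set_params({"filename": "doc1.txt"})
shared = {"texts": {"doc1.txt": "A long document about params."}}
node.run(shared)
print(shared["summary"])  # "A long doc"
```

The point of the split: `params` identify *which* item this node instance works on, while `shared` holds the data every node can read and write.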

@@ -8,11 +8,11 @@ nav_order: 6
 # Agent
 Agent is a powerful design pattern, where a node can take dynamic actions based on the context it receives.
-To express an agent, create a Node (the agent) with [branching](./flow.md) to other nodes (Actions).
+To express an agent, create a Node (the agent) with [branching](../core_abstraction/flow.md) to other nodes (Actions).
 > The core of building **performant** and **reliable** agents boils down to:
 >
-> 1. **Context Management:** Provide *clear, relevant context* so agents can understand the problem. E.g., rather than dumping an entire chat history or entire files, use a [Workflow](./decomp.md) that filters out and includes only the most relevant information.
+> 1. **Context Management:** Provide *clear, relevant context* so agents can understand the problem. E.g., rather than dumping an entire chat history or entire files, use a [Workflow](./workflow.md) that filters out and includes only the most relevant information.
 >
 > 2. **Action Space:** Define *a well-structured, unambiguous, and easy-to-use* set of actions. For instance, avoid creating overlapping actions like `read_databases` and `read_csvs`. Instead, unify data sources (e.g., move CSVs into a database) and design a single action. The action can be parameterized (e.g., string for search) or programmable (e.g., SQL queries).
 {: .best-practice }
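The branching described in this hunk can be sketched without the library: a decision node returns an action string, and the string selects the successor node. Everything here is illustrative (the `AgentNode` class, node names, and the keyword-based `decide` stand-in for an LLM call are made up):

```python
class AgentNode:
    """One step of an agent: inspect shared context, return an action string."""
    def __init__(self, handle):
        self.handle = handle        # fn(shared) -> action string or None
        self.successors = {}        # action string -> next AgentNode

def run(start, shared):
    node = start
    while node is not None:
        action = node.handle(shared)
        node = node.successors.get(action)   # no successor -> flow ends

# Decision node: a real agent would ask an LLM; a keyword check stands in.
def decide(shared):
    return "search" if "?" in shared["query"] else "answer"

def do_search(shared):
    shared["context"] = f"results for {shared['query']}"
    return "answer"                 # route onward to answering

def do_answer(shared):
    shared["answer"] = "done"
    return None

agent = AgentNode(decide)
search = AgentNode(do_search)
answer = AgentNode(do_answer)
agent.successors = {"search": search, "answer": answer}
search.successors = {"answer": answer}

shared = {"query": "What is RAG?"}
run(agent, shared)
print(shared["answer"])  # "done"
```

Note how the action space is the set of dictionary keys: small, non-overlapping, and easy for the decision step to choose among.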

@@ -12,7 +12,7 @@ MapReduce is a design pattern suitable when you have either:
 - Large output data (e.g., multiple forms to fill)
 and there is a logical way to break the task into smaller, ideally independent parts.
-You first break down the task using [BatchNode](./batch.md) in the map phase, followed by aggregation in the reduce phase.
+You first break down the task using [BatchNode](../core_abstraction/batch.md) in the map phase, followed by aggregation in the reduce phase.
 ### Example: Document Summarization
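The map/reduce shape described in this hunk reduces to two functions: one independent call per item in the map phase (what a BatchNode batches), then one aggregation in the reduce phase. A minimal sketch, with `summarize()` standing in for an LLM call and the document texts made up:

```python
def summarize(text):
    # Pretend-summary: first sentence. A real implementation calls an LLM.
    return text.split(".")[0]

def map_phase(docs):
    # One independent call per document; safe to batch or parallelize.
    return [summarize(d) for d in docs]

def reduce_phase(summaries):
    # Aggregate the per-document results into one final answer.
    return " / ".join(summaries)

docs = ["Alpha is first. More detail.", "Beta is second. Even more."]
final = reduce_phase(map_phase(docs))
print(final)  # "Alpha is first / Beta is second"
```

The independence of the map calls is what makes the decomposition worthwhile: each call sees one small input instead of the whole corpus.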

@@ -8,7 +8,7 @@ nav_order: 4
 # RAG (Retrieval Augmented Generation)
 For certain LLM tasks like answering questions, providing context is essential.
-Use [vector search](./tool.md) to find relevant context for LLM responses.
+Use [vector search](../utility_function/tool.md) to find relevant context for LLM responses.
 ### Example: Question Answering
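The retrieval step referenced in this hunk can be sketched end to end: embed the query, rank stored chunks by cosine similarity, and pass the best chunk to the LLM as context. The toy `embed()` (letter-frequency vectors) stands in for a real embedding model, and the chunk texts are made up:

```python
def embed(text):
    # Toy embedding: 26-dim letter-frequency vector (stand-in for a model).
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

chunks = [
    "Paris is the capital of France.",
    "Photosynthesis converts light into chemical energy.",
]
index = [(embed(c), c) for c in chunks]   # built once, offline

def retrieve(query):
    qv = embed(query)
    return max(index, key=lambda pair: cosine(qv, pair[0]))[1]

context = retrieve("What is the capital of France?")
print(context)  # the most similar chunk becomes the LLM's context
```

With a real embedding model the pipeline is identical; only `embed()` changes, and the index would live in a vector database rather than a Python list.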

@@ -7,7 +7,7 @@ nav_order: 2
 # Workflow
-Many real-world tasks are too complex for one LLM call. The solution is to decompose them into a [chain](./flow.md) of multiple Nodes.
+Many real-world tasks are too complex for one LLM call. The solution is to decompose them into a [chain](../core_abstraction/flow.md) of multiple Nodes.
 > - You don't want to make each task **too coarse**, because it may be *too complex for one LLM call*.
@@ -46,3 +46,5 @@ writing_flow = Flow(start=outline)
 shared = {"topic": "AI Safety"}
 writing_flow.run(shared)
 ```
+For *dynamic cases*, consider using [Agents](./agent.md).

@@ -30,18 +30,18 @@ We model the LLM workflow as a **Nested Directed Graph**:
 ## Core Abstraction
-- [Node](./node.md)
-- [Flow](./flow.md)
-- [Communication](./communication.md)
-- [Batch](./batch.md)
-- [(Advanced) Async](./async.md)
-- [(Advanced) Parallel](./parallel.md)
+- [Node](./core_abstraction/node.md)
+- [Flow](./core_abstraction/flow.md)
+- [Communication](./core_abstraction/communication.md)
+- [Batch](./core_abstraction/batch.md)
+- [(Advanced) Async](./core_abstraction/async.md)
+- [(Advanced) Parallel](./core_abstraction/parallel.md)
 ## Utility Function
-- [LLM Wrapper](./llm.md)
-- [Tool](./tool.md)
-- [Viz and Debug](./viz.md)
+- [LLM Wrapper](./utility_function/llm.md)
+- [Tool](./utility_function/tool.md)
+- [Viz and Debug](./utility_function/viz.md)
 - Chunking
 > We do not provide built-in utility functions. Example implementations are provided as reference.
@@ -50,13 +50,13 @@ We model the LLM workflow as a **Nested Directed Graph**:
 ## Design Pattern
-- [Structured Output](./structure.md)
-- [Workflow](./decomp.md)
-- [Map Reduce](./mapreduce.md)
-- [RAG](./rag.md)
-- [Chat Memory](./memory.md)
-- [Agent](./agent.md)
-- [(Advanced) Multi-Agents](./multi_agent.md)
+- [Structured Output](./design_pattern/structure.md)
+- [Workflow](./design_pattern/workflow.md)
+- [Map Reduce](./design_pattern/mapreduce.md)
+- [RAG](./design_pattern/rag.md)
+- [Chat Memory](./design_pattern/memory.md)
+- [Agent](./design_pattern/agent.md)
+- [(Advanced) Multi-Agents](./design_pattern/multi_agent.md)
 - Evaluation
-## [Build your LLM App](./guide.md)
+## [Develop your LLM Apps](./guide.md)