update guide
parent 0a420b3513
commit b3420359be
@@ -32,13 +32,12 @@ These system designs should be a collaboration between humans and AI assistants:

2. **Flow Design**: Outline at a high level how your AI system orchestrates nodes.
-  - Identify applicable design patterns (e.g., [Map Reduce](./design_pattern/mapreduce.md), [Agent](./design_pattern/agent.md), [RAG](./design_pattern/rag.md)).
-  - Draw the Flow in a mermaid diagram. For example:
+  - Outline the flow and draw it in a mermaid diagram. For example:
   ```mermaid
   flowchart LR
       firstNode[First Node] --> secondNode[Second Node]
       secondNode --> thirdNode[Third Node]
   ```
   - For each node in the flow, provide a high-level purpose description.

3. **Utilities**: Based on the Flow Design, identify and implement necessary utility functions.
   - Think of your AI system as the brain. It needs a body—these *external utility functions*—to interact with the real world:
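As a concrete companion to the diagram in the hunk above, the following is a minimal plain-Python sketch (not taken from the guide; the node names, shared-dict keys, and chaining loop are illustrative) of how three nodes with one-line purpose descriptions can pass data along the flow:

```python
# Illustrative only: three nodes matching firstNode --> secondNode --> thirdNode,
# each carrying a one-line purpose description and handing data on via a shared dict.
def first_node(shared):
    """Purpose: load the raw input into the shared store."""
    shared["input"] = "example input"

def second_node(shared):
    """Purpose: transform the input into an intermediate result."""
    shared["intermediate"] = shared["input"].upper()

def third_node(shared):
    """Purpose: produce the final output from the intermediate result."""
    shared["output"] = f"result: {shared['intermediate']}"

if __name__ == "__main__":
    shared = {}
    for node in (first_node, second_node, third_node):
        node(shared)
    print(shared["output"])
```

In a real project these would normally be framework node classes rather than bare functions, but the handoff pattern through a shared store is the same idea the later hunks describe.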
@@ -47,9 +46,8 @@ These system designs should be a collaboration between humans and AI assistants:

     - Reading inputs (e.g., retrieving Slack messages, reading emails)
     - Writing outputs (e.g., generating reports, sending emails)
     - Using external tools (e.g., calling LLMs, searching the web)

-  - NOTE: *LLM-based tasks* (e.g., summarizing text, analyzing sentiment) are **NOT** utility functions; rather, they are *core functions* internal to the AI system.
-  - For each utility function, implement it and write a simple test (e.g., under `if __name__ == "__main__":`).
+  - **NOTE**: *LLM-based tasks* (e.g., summarizing text, analyzing sentiment) are **NOT** utility functions; rather, they are *core functions* internal to the AI system.
+  - For each utility function, implement it and write a simple test.
   - Document their input/output, as well as why they are necessary. For example:
     - *Name*: Embedding (`utils/get_embedding.py`)
     - *Input*: `str`
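The embedding utility named in this hunk (`utils/get_embedding.py`, input `str`) could be sketched roughly as below. This assumes the OpenAI embeddings client and an `OPENAI_API_KEY` environment variable; the model name is illustrative, and any embedding provider would follow the same shape:

```python
# utils/get_embedding.py — sketch of the embedding utility, with a simple
# self-test so it can be run directly.
import os
from openai import OpenAI

def get_embedding(text: str) -> list[float]:
    """Input: str. Output: embedding vector as a list of floats."""
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])  # assumed env var
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return resp.data[0].embedding

if __name__ == "__main__":
    vec = get_embedding("Hello, world!")
    print(f"dimension: {len(vec)}, first values: {vec[:3]}")
```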
@@ -61,9 +59,8 @@ These system designs should be a collaboration between humans and AI assistants:

   - Start with the shared data design
     - For simple systems, use an in-memory dictionary.
     - For more complex systems or when persistence is required, use a database.
-    - **Remove Data Redundancy**: Don't store the same data twice. Use in-memory references or foreign keys.
-  - For each node, specify its type, how it reads and writes data, and which utility function it uses.
-  - Keep it specific but high-level without code. For example:
+    - **Don't Repeat Yourself**: Use in-memory references or foreign keys.
+  - For each node, describe its type, how it reads and writes data, and which utility function it uses. Keep it specific but high-level without code. For example:
     - `type`: Regular (or Batch, or Async)
     - `prep`: Read "text" from the shared store
     - `exec`: Call the embedding utility function
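To make the node spec above concrete, here is an illustrative Python sketch of a simple in-memory shared store plus a node whose `prep` step reads "text" and whose `exec` step calls the embedding utility. The `post` step, key names, and import path are assumptions rather than part of the guide:

```python
# Illustrative shared store for a simple system: a plain in-memory dict.
from utils.get_embedding import get_embedding  # utility sketched earlier (assumed path)

shared = {
    "text": "Example document text.",  # hypothetical input key
    "embedding": None,                 # filled in by the node below
}

class EmbedTextNode:
    """Regular node: reads "text", calls the embedding utility, stores the result."""

    def prep(self, shared):
        # Read "text" from the shared store
        return shared["text"]

    def exec(self, text):
        # Call the embedding utility function
        return get_embedding(text)

    def post(self, shared, prep_res, exec_res):
        # Write the result back to the shared store (step name assumed;
        # adjust to whatever node interface the project actually uses)
        shared["embedding"] = exec_res
```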
@@ -25,6 +25,7 @@ Here, we provide some minimal example implementations:

       call_llm("How are you?")
   ```
   > Store the API key in an environment variable like OPENAI_API_KEY for security.
   {: .best-practice }

2. Claude (Anthropic)
   ```python
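The excerpt ends before the Claude snippet itself, so as a hedged placeholder, a Claude-backed `call_llm` typically looks like the sketch below. It assumes the official `anthropic` Python SDK, an `ANTHROPIC_API_KEY` environment variable, and an illustrative model name; it is not necessarily the code that appears in the guide:

```python
# Sketch of a Claude-based call_llm; model name and max_tokens are illustrative.
import os
from anthropic import Anthropic

def call_llm(prompt: str) -> str:
    client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

if __name__ == "__main__":
    print(call_llm("How are you?"))
```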