update doc nav

parent 382dfd4a63
commit 10f7464db3
@@ -1,7 +1,8 @@
 ---
 layout: default
 title: "Async"
-nav_order: 7
+parent: Core Abstraction
+nav_order: 5
 ---
 
 # Async

@@ -1,7 +1,8 @@
 ---
 layout: default
 title: "Batch"
-nav_order: 6
+parent: Core Abstraction
+nav_order: 4
 ---
 
 # Batch

@@ -1,7 +1,8 @@
 ---
 layout: default
 title: "Communication"
-nav_order: 5
+parent: Core Abstraction
+nav_order: 3
 ---
 
 # Communication

@@ -1,7 +1,8 @@
 ---
 layout: default
 title: "Flow"
-nav_order: 4
+parent: Core Abstraction
+nav_order: 2
 ---
 
 # Flow

@@ -23,6 +23,7 @@ We model the LLM workflow as a **Nested Flow**:
 ## Preparation
 
 - [LLM Integration](./llm.md)
+- [Tools](./tool.md)
 
 ## Core Abstraction

docs/llm.md (20 lines changed)
@@ -1,14 +1,12 @@
 ---
 layout: default
 title: "LLM Integration"
-nav_order: 3
+nav_order: 2
 ---
 
 # LLM Wrappers
 
-For your LLM app, implement a wrapper function to call LLMs yourself.
-You can ask an assistant like ChatGPT or Claude to implement it.
-For instance, asking ChatGPT to "implement a `call_llm` function that takes a prompt and returns the LLM response" gives:
+We **don't** provide built-in LLM wrappers. Instead, please implement your own, for example by asking an assistant like ChatGPT or Claude. If you ask ChatGPT to "implement a `call_llm` function that takes a prompt and returns the LLM response," you will get something like:
 
 ```python
 def call_llm(prompt):
@@ -26,7 +24,7 @@ call_llm("How are you?")
 ```
 
 ## Improvements
-You can enhance the function as needed. Examples:
+Feel free to enhance your `call_llm` function as needed. Here are examples:
 
 - Handle chat history:
 
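The "handle chat history" improvement mentioned in the hunk above can be sketched as follows. This is not code from the commit: `call_llm` here takes a list of `{role, content}` messages instead of a single string, and `_complete` is a hypothetical stand-in for a real vendor API call.

```python
# Hypothetical sketch: call_llm over a message list rather than one prompt.
# `_complete` stands in for a real LLM API request (an assumption here);
# it echoes the last user message so the sketch is runnable offline.
def _complete(messages):
    return f"echo: {messages[-1]['content']}"

def call_llm(messages):
    return _complete(messages)

# Usage: append each reply so later calls see the whole conversation.
history = [{"role": "user", "content": "How are you?"}]
reply = call_llm(history)
history.append({"role": "assistant", "content": reply})
```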
@@ -58,15 +56,15 @@ def call_llm(prompt):
 def call_llm(prompt):
     import logging
     logging.info(f"Prompt: {prompt}")
-    response = ...
+    response = ...  # Your implementation here
     logging.info(f"Response: {response}")
     return response
 ```
 
 
-## Why not provide LLM Wrappers?
-I believe it is a bad practice to provide LLM-specific implementations in a general framework:
-- LLMs change frequently. Hardcoding them makes maintenance difficult.
-- You may need flexibility to switch vendors, use fine-tuned models, or deploy local LLMs.
-- You may need optimizations like prompt caching, request batching, or response streaming.
+## Why Not Provide a Built-in LLM Wrapper?
+I believe it is a **bad practice** to provide LLM-specific implementations in a general framework:
+- **LLM APIs change frequently**. Hardcoding them makes maintenance a nightmare.
+- You may need **flexibility** to switch vendors, use fine-tuned models, or deploy local LLMs.
+- You may need **optimizations** like prompt caching, request batching, or response streaming.
 
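The prompt-caching optimization named in the last bullet above can be sketched with `functools.lru_cache`; this is an illustration, not part of the commit, and the stubbed `_complete` function is an assumption standing in for a real (billable) API request.

```python
from functools import lru_cache

calls = {"n": 0}  # counts how many real requests were made

def _complete(prompt):
    # Stub standing in for a real LLM API request (an assumption).
    calls["n"] += 1
    return f"response to: {prompt}"

@lru_cache(maxsize=1024)
def call_llm(prompt):
    # Identical prompts are answered from the in-memory cache,
    # so the underlying request fires only once per unique prompt.
    return _complete(prompt)

call_llm("How are you?")
call_llm("How are you?")  # served from cache; no second request
```

Note that a plain `lru_cache` only helps for exactly repeated prompts; request batching and response streaming would each need their own wrapper logic.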
@@ -1,7 +1,8 @@
 ---
 layout: default
 title: "Node"
-nav_order: 3
+parent: Core Abstraction
+nav_order: 1
 ---
 
 # Node
