diff --git a/docs/async.md b/docs/async.md
index e366327..589234e 100644
--- a/docs/async.md
+++ b/docs/async.md
@@ -1,7 +1,8 @@
 ---
 layout: default
 title: "Async"
-nav_order: 7
+parent: Core Abstraction
+nav_order: 5
 ---

 # Async
diff --git a/docs/batch.md b/docs/batch.md
index 540de5a..bc53d00 100644
--- a/docs/batch.md
+++ b/docs/batch.md
@@ -1,7 +1,8 @@
 ---
 layout: default
 title: "Batch"
-nav_order: 6
+parent: Core Abstraction
+nav_order: 4
 ---

 # Batch
diff --git a/docs/communication.md b/docs/communication.md
index b4115f6..3e00552 100644
--- a/docs/communication.md
+++ b/docs/communication.md
@@ -1,7 +1,8 @@
 ---
 layout: default
 title: "Communication"
-nav_order: 5
+parent: Core Abstraction
+nav_order: 3
 ---

 # Communication
diff --git a/docs/flow.md b/docs/flow.md
index 1000ad4..7e2fcb1 100644
--- a/docs/flow.md
+++ b/docs/flow.md
@@ -1,7 +1,8 @@
 ---
 layout: default
 title: "Flow"
-nav_order: 4
+parent: Core Abstraction
+nav_order: 2
 ---

 # Flow
diff --git a/docs/index.md b/docs/index.md
index 6cfe5d7..a6b8913 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -23,6 +23,7 @@ We model the LLM workflow as a **Nested Flow**:
 ## Preparation

 - [LLM Integration](./llm.md)
+- [Tools](./tool.md)

 ## Core Abstraction
diff --git a/docs/llm.md b/docs/llm.md
index 7a0852f..c76c89e 100644
--- a/docs/llm.md
+++ b/docs/llm.md
@@ -1,14 +1,12 @@
 ---
 layout: default
 title: "LLM Integration"
-nav_order: 3
+nav_order: 2
 ---

 # LLM Wrappers

-For your LLM app, implement a wrapper function to call LLMs yourself.
-You can ask an assistant like ChatGPT or Claude to implement it.
-For instance, asking ChatGPT to "implement a `call_llm` function that takes a prompt and returns the LLM response" gives:
+We **don't** provide built-in LLM wrappers. Instead, please implement your own, for example by asking an assistant like ChatGPT or Claude. If you ask ChatGPT to "implement a `call_llm` function that takes a prompt and returns the LLM response," you'll get something like:

 ```python
 def call_llm(prompt):
@@ -26,7 +24,7 @@ call_llm("How are you?")
 ```

 ## Improvements
-You can enhance the function as needed. Examples:
+Feel free to enhance your `call_llm` function as needed. Here are examples:

 - Handle chat history:

@@ -58,15 +56,15 @@ def call_llm(prompt):
 def call_llm(prompt):
     import logging
     logging.info(f"Prompt: {prompt}")
-    response = ...
+    response = ...  # Your implementation here
     logging.info(f"Response: {response}")
     return response
 ```

-## Why not provide LLM Wrappers?
-I believe it is a bad practice to provide LLM-specific implementations in a general framework:
-- LLMs change frequently. Hardcoding them makes maintenance difficult.
-- You may need flexibility to switch vendors, use fine-tuned models, or deploy local LLMs.
-- You may need optimizations like prompt caching, request batching, or response streaming.
+## Why Not Provide a Built-in LLM Wrapper?
+I believe it is a **bad practice** to provide LLM-specific implementations in a general framework:
+- **LLM APIs change frequently**. Hardcoding them makes maintenance a nightmare.
+- You may need **flexibility** to switch vendors, use fine-tuned models, or deploy local LLMs.
+- You may need **optimizations** like prompt caching, request batching, or response streaming.
diff --git a/docs/node.md b/docs/node.md
index 22bc81b..022f235 100644
--- a/docs/node.md
+++ b/docs/node.md
@@ -1,7 +1,8 @@
 ---
 layout: default
 title: "Node"
-nav_order: 3
+parent: Core Abstraction
+nav_order: 1
 ---

 # Node