From 382dfd4a63631134d8e58867cd5f50be0e2c939d Mon Sep 17 00:00:00 2001
From: zachary62
Date: Sat, 28 Dec 2024 04:40:44 +0000
Subject: [PATCH] refine docs

---
 docs/index.md |  1 +
 docs/llm.md   | 10 +++++-----
 2 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/docs/index.md b/docs/index.md
index b2f94ee..6cfe5d7 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -39,6 +39,7 @@ We model the LLM workflow as a **Nested Flow**:
 - Map Reduce
 - RAG
 - Structured Output
+- Evaluation
 
 ## Example Use Cases
 
diff --git a/docs/llm.md b/docs/llm.md
index dd0755a..7a0852f 100644
--- a/docs/llm.md
+++ b/docs/llm.md
@@ -4,10 +4,10 @@ title: "LLM Integration"
 nav_order: 3
 ---
 
-# Call LLM
+# LLM Wrappers
 
-For your LLM application, implement a function to call LLMs yourself.
-You can ask an assistant like ChatGPT or Claude to generate an example.
+For your LLM app, implement a wrapper function to call LLMs yourself.
+You can ask an assistant like ChatGPT or Claude to implement it.
 For instance, asking ChatGPT to "implement a `call_llm` function that takes a prompt and returns the LLM response" gives:
 
 ```python
@@ -64,9 +64,9 @@ def call_llm(prompt):
 
 ```
 
-## Why not provide an LLM call function?
+## Why not provide LLM Wrappers?
 
 I believe it is a bad practice to provide LLM-specific implementations in a general framework:
-- LLM APIs change frequently. Hardcoding them makes maintenance difficult.
+- LLMs change frequently. Hardcoding them makes maintenance difficult.
 - You may need flexibility to switch vendors, use fine-tuned models, or deploy local LLMs.
 - You may need optimizations like prompt caching, request batching, or response streaming.
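
Note for reviewers: the body of `call_llm` is elided above because it falls between the two hunks of docs/llm.md. As a reference, a minimal sketch of the kind of wrapper the doc describes, assuming the OpenAI Python SDK (`pip install openai`) with a placeholder API key and model name (not the repo's actual code), might look like:

```python
# Sketch of the call_llm wrapper described in docs/llm.md.
# Assumptions: the OpenAI Python SDK is installed; the API key and
# model name below are illustrative placeholders, not the repo's code.
from openai import OpenAI

def call_llm(prompt):
    client = OpenAI(api_key="YOUR_API_KEY_HERE")
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```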
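The last bullet in the revised "Why not provide LLM Wrappers?" section mentions optimizations like prompt caching. As an illustration of why the wrapper is left to the user, caching can be layered onto a user-owned `call_llm` with no framework support at all; a sketch assuming the hypothetical wrapper above and the standard library's `functools.lru_cache`:

```python
# Illustrative only: in-memory prompt caching layered on a user-owned wrapper.
from functools import lru_cache

@lru_cache(maxsize=1024)
def cached_call_llm(prompt):
    # Repeated identical prompts are served from the cache, not the API.
    return call_llm(prompt)
```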