From 9227b6381190be68a0af0ef887c0e28c2da83031 Mon Sep 17 00:00:00 2001
From: zachary62
Date: Wed, 1 Jan 2025 22:42:46 +0000
Subject: [PATCH] update docs

---
 docs/index.md | 26 ++++++++------------------
 docs/llm.md   |  5 +++++
 2 files changed, 13 insertions(+), 18 deletions(-)

diff --git a/docs/index.md b/docs/index.md
index cd3c002..207bd93 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -8,11 +8,6 @@ nav_order: 1
 
 A [100-line](https://github.com/zachary62/miniLLMFlow/blob/main/minillmflow/__init__.py) minimalist LLM framework for *Agents, Task Decomposition, RAG, etc*.
 
-<div align="center">
-  <img src="...">
-</div>
-
-## Core Abstraction
 
 We model the LLM workflow as a **Nested Directed Graph**:
 - **Nodes** handle simple (LLM) tasks.
@@ -22,7 +17,12 @@ We model the LLM workflow as a **Nested Directed Graph**:
 - **Batch** Nodes/Flows for data-intensive tasks.
 - **Async** Nodes/Flows allow waits or **Parallel** execution
 
-To learn more:
+<div align="center">
+ +
+
+## Core Abstraction
+
 - [Node](./node.md)
 - [Flow](./flow.md)
 - [Communication](./communication.md)
@@ -30,22 +30,12 @@
 - [(Advanced) Async](./async.md)
 - [(Advanced) Parallel](./parallel.md)
 
-## LLM Wrapper & Tools
+## Low-Level Details (We Do Not Provide)
 
-**We DO NOT provide built-in LLM wrappers and tools!**
-
-I believe it is a *bad practice* to provide low-level implementations in a general framework:
-- **APIs change frequently.** Hardcoding them makes maintenance a nightmare.
-- You may need **flexibility.** E.g., using fine-tunined LLMs or deploying local ones.
-- You may need **optimizations.** E.g., prompt caching, request batching, response streaming...
-
-We provide some simple example implementations:
 - [LLM Wrapper](./llm.md)
 - [Tool](./tool.md)
 
-## Paradigm
-
-Based on the core abstraction, we implement common high-level paradigms:
+## High-Level Paradigm
 
 - [Structured Output](./structure.md)
 - Task Decomposition
diff --git a/docs/llm.md b/docs/llm.md
index 55e44ab..789cd06 100644
--- a/docs/llm.md
+++ b/docs/llm.md
@@ -62,3 +62,8 @@ def call_llm(prompt):
     return response
 ```
 
+## Why Not Provide Built-in LLM Wrappers?
+I believe it is a **bad practice** to provide LLM-specific implementations in a general framework:
+- **LLM APIs change frequently**. Hardcoding them makes maintenance a nightmare.
+- You may need **flexibility** to switch vendors, use fine-tuned models, or deploy local LLMs.
+- You may need **optimizations** like prompt caching, request batching, or response streaming.
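
Two short sketches follow the patch for reviewers. First, the docs/index.md changes above describe the nested-directed-graph abstraction in prose only; here is a minimal illustration of the idea. The `Node`/`Flow` imports and the `prep`/`exec`/`post` hooks follow the pattern the linked node.md and flow.md pages describe, but the exact signatures are assumptions here, not the confirmed minillmflow API.

```python
# Illustrative sketch of the abstraction index.md describes: a Node handles
# one simple (LLM) task, a Flow orchestrates nodes as a directed graph, and
# a Flow can itself be used as a Node (nesting). The prep/exec/post hooks
# and Flow(start=...) are assumptions based on the linked docs pages, not
# the confirmed minillmflow API.
from minillmflow import Node, Flow

def call_llm(prompt):
    # Stand-in for the user-owned wrapper from llm.md.
    return f"<summary of {len(prompt)} chars>"

class Summarize(Node):
    def prep(self, shared):
        return shared["text"]                  # read input from the shared store

    def exec(self, text):
        return call_llm(f"Summarize: {text}")  # the one simple LLM task

    def post(self, shared, prep_res, exec_res):
        shared["summary"] = exec_res           # write the result back
        return "default"                       # action label: which edge to follow next

flow = Flow(start=Summarize())                 # a one-node directed graph
shared = {"text": "Some long document..."}
flow.run(shared)                               # a Flow can itself be nested as a Node
print(shared["summary"])
```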
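Second, the new llm.md section argues for user-owned wrappers in prose only; here is a sketch of the payoff. It assumes the `openai` client that llm.md's example appears to use, and the model name is a placeholder: once you own `call_llm`, an optimization like the prompt caching mentioned above is one decorator away.

```python
# Sketch: with a user-owned call_llm, optimizations like prompt caching are
# trivial to bolt on. Assumes the openai client from llm.md's example; the
# model name is a placeholder. Swap in a fine-tuned, local, or other
# vendor's model without touching the framework.
from functools import lru_cache
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

@lru_cache(maxsize=1024)  # prompt caching: repeated prompts skip the API call
def call_llm(prompt):
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```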