diff --git a/docs/llm.md b/docs/llm.md
index 7ab4d97..b3ffd51 100644
--- a/docs/llm.md
+++ b/docs/llm.md
@@ -91,6 +91,6 @@ def call_llm(prompt):
 ## Why Not Provide Built-in LLM Wrappers?
 
 I believe it is a **bad practice** to provide LLM-specific implementations in a general framework:
-- **LLM APIs change frequently**. Hardcoding them makes maintenance a nighmare.
+- **LLM APIs change frequently**. Hardcoding them makes maintenance a nightmare.
 - You may need **flexibility** to switch vendors, use fine-tuned models, or deploy local LLMs.
 - You may need **optimizations** like prompt caching, request batching, or response streaming.