From 395fb0d81a06e876b29b53e99ccc233387a0e8c7 Mon Sep 17 00:00:00 2001
From: Zuzanna Osborn
Date: Fri, 28 Feb 2025 23:25:05 -0500
Subject: [PATCH] Update llm.md

---
 docs/llm.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/llm.md b/docs/llm.md
index 7ab4d97..b3ffd51 100644
--- a/docs/llm.md
+++ b/docs/llm.md
@@ -91,6 +91,6 @@ def call_llm(prompt):
 ## Why Not Provide Built-in LLM Wrappers?
 
 I believe it is a **bad practice** to provide LLM-specific implementations in a general framework:
-- **LLM APIs change frequently**. Hardcoding them makes maintenance a nighmare.
+- **LLM APIs change frequently**. Hardcoding them makes maintenance a nightmare.
 - You may need **flexibility** to switch vendors, use fine-tuned models, or deploy local LLMs.
 - You may need **optimizations** like prompt caching, request batching, or response streaming.