Merge pull request #7 from mildlyinfuriating514/patch-1

Update llm.md
This commit is contained in:
Zachary Huang 2025-02-28 23:25:56 -05:00 committed by GitHub
commit 4c364aa3aa
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
1 changed file with 1 addition and 1 deletion


@@ -91,6 +91,6 @@ def call_llm(prompt):
 ## Why Not Provide Built-in LLM Wrappers?
 I believe it is a **bad practice** to provide LLM-specific implementations in a general framework:
-- **LLM APIs change frequently**. Hardcoding them makes maintenance a nighmare.
+- **LLM APIs change frequently**. Hardcoding them makes maintenance a nightmare.
 - You may need **flexibility** to switch vendors, use fine-tuned models, or deploy local LLMs.
 - You may need **optimizations** like prompt caching, request batching, or response streaming.
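The recommendation in the patched section (users write their own `call_llm` rather than relying on a framework-provided wrapper) can be sketched as follows. This is a hypothetical illustration, not code from the repository: `_backend` is a stand-in for whatever vendor SDK or local model the user actually calls, and the `lru_cache` wrapper shows the kind of prompt-caching optimization the text mentions.

```python
from functools import lru_cache

def _backend(prompt):
    # Stand-in for a real vendor call (OpenAI SDK, a local model, etc.).
    # Keeping vendor-specific code in one user-owned function means
    # switching providers is a change to this body only.
    return f"echo: {prompt}"

@lru_cache(maxsize=1024)  # prompt caching: identical prompts hit the backend once
def call_llm(prompt):
    return _backend(prompt)
```

Because the wrapper lives in user code, optimizations such as batching or streaming can be layered on the same way, without waiting for a framework release.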