## Why Not Provide Built-in LLM Wrappers?
I believe it is a **bad practice** to provide LLM-specific implementations in a general framework:
- **LLM APIs change frequently**. Hardcoding them makes maintenance a nightmare.
- You may need **flexibility** to switch vendors, use fine-tuned models, or deploy local LLMs.
- You may need **optimizations** like prompt caching, request batching, or response streaming.
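To make the points above concrete, here is a minimal sketch of the kind of `call_llm` wrapper you might write yourself. The `call_llm` name comes from the surrounding docs; the stub backend and cache are hypothetical illustrations — in practice you would replace `_backend` with a call into your vendor's SDK (or a local model), which is exactly why keeping this function in your own code makes switching vendors a one-function change.

```python
from functools import lru_cache

def _backend(prompt: str) -> str:
    # Hypothetical stand-in for a real vendor call; swap in your own SDK here
    # (e.g. an OpenAI chat-completions request or a local model invocation).
    return f"echo: {prompt}"

@lru_cache(maxsize=1024)  # simple prompt caching: repeated prompts skip the API
def call_llm(prompt: str) -> str:
    return _backend(prompt)

print(call_llm("hello"))  # → echo: hello
```

Because the wrapper is yours, optimizations like the `lru_cache` prompt cache above, request batching, or streaming can be layered in without waiting on a framework release.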