## Why Not Provide Built-in LLM Wrappers?

I believe it is a **bad practice** to provide LLM-specific implementations in a general framework:

- **LLM APIs change frequently**. Hardcoding them makes maintenance a nightmare.
- You may need **flexibility** to switch vendors, use fine-tuned models, or deploy local LLMs.
- You may need **optimizations** like prompt caching, request batching, or response streaming (see the sketch after this list).
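Instead, you own the wrapper. Below is a minimal sketch of the kind of `call_llm(prompt)` helper this implies, assuming the official OpenAI Python SDK (`pip install openai`); the model name and the caching layer are illustrative assumptions, and any other vendor, fine-tuned model, or local LLM fits the same one-function shape.

```python
from functools import lru_cache

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def call_llm(prompt: str) -> str:
    """Send a single prompt to the model and return the text reply."""
    response = client.chat.completions.create(
        model="gpt-4o",  # assumption: substitute whatever vendor/model you use
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# One example of an optimization you now control: naive in-process prompt
# caching, so repeated identical prompts skip the API call entirely.
@lru_cache(maxsize=1024)
def cached_call_llm(prompt: str) -> str:
    return call_llm(prompt)
```

Because the wrapper is yours, switching vendors or adding batching and streaming is a local change to this one function rather than a framework upgrade.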