---
layout: default
title: "LLM Wrapper"
parent: "Preparation"
nav_order: 1
---

# LLM Wrappers
We don't provide built-in LLM wrappers. Instead, implement your own by asking an assistant like ChatGPT or Claude. For example, if you ask ChatGPT to "implement a `call_llm` function that takes a prompt and returns the LLM response," you'll get something like:
```python
def call_llm(prompt):
    from openai import OpenAI
    # Set the OpenAI API key (use environment variables, etc.)
    client = OpenAI(api_key="YOUR_API_KEY_HERE")
    r = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}]
    )
    return r.choices[0].message.content

# Example usage
call_llm("How are you?")
```
## Improvements
Feel free to enhance your `call_llm` function as needed. Here are some examples:
- Handle chat history:

```python
def call_llm(messages):
    from openai import OpenAI
    client = OpenAI(api_key="YOUR_API_KEY_HERE")
    r = client.chat.completions.create(
        model="gpt-4",
        messages=messages
    )
    return r.choices[0].message.content
```
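A quick usage sketch for the chat-history version, using the OpenAI chat format (the conversation content here is purely illustrative):

```python
history = [
    {"role": "user", "content": "What is the capital of France?"},
    {"role": "assistant", "content": "The capital of France is Paris."},
    {"role": "user", "content": "What is its population?"},
]
print(call_llm(history))
```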
- Add in-memory caching (a complete sketch follows after this list):

```python
from functools import lru_cache

@lru_cache(maxsize=1000)
def call_llm(prompt):
    # Your implementation here
    pass
```
- Enable logging (also covered in the sketch after this list):

```python
def call_llm(prompt):
    import logging
    logging.info(f"Prompt: {prompt}")
    response = ...  # Your implementation here
    logging.info(f"Response: {response}")
    return response
```
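The caching and logging stubs above can be fleshed out together. Below is one possible sketch, not the only way to do it, reusing the same OpenAI client and placeholder key from the first example. Note that `lru_cache` requires hashable arguments, so this cached version takes a single prompt string rather than a list of messages:

```python
import logging
from functools import lru_cache
from openai import OpenAI

logging.basicConfig(level=logging.INFO)

@lru_cache(maxsize=1000)
def call_llm(prompt):
    # Repeated identical prompts are answered from the cache,
    # skipping both the API call and the log lines below.
    logging.info(f"Prompt: {prompt}")
    client = OpenAI(api_key="YOUR_API_KEY_HERE")
    r = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}]
    )
    response = r.choices[0].message.content
    logging.info(f"Response: {response}")
    return response
```

Because `lru_cache` short-circuits repeated prompts, the log lines only appear on cache misses; if you need every call logged, wrap the cached function in a thin logging wrapper instead.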