Quickstart/Examples

1) Install the package into your environment (see [[Installation]])

2) Create a config file for the LLM or LLMs you want to use:

e.g. myconf.hjson:

{
  llms: [
    {
      llm: "openai/gpt-4o"
      api_key_env: "OPENAI_KEY1"
      temperature: 0
      alias: "openai1"
    }
    { llm: "gemini/gemini-1.5-flash", temperature: 1, alias: "geminiflash1" }
    { llm: "gemini/gemini-1.5-flash", temperature: 0, alias: "geminiflash2" }
  ]
  providers: {
    gemini: {
      api_key_env: "GEMINI_KEY1"
    }
  }
}
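The api_key_env entries name environment variables that must hold your actual API keys at run time, so set them before running your code. The key values below are placeholders, not real keys:

```shell
# Export the API keys under the variable names referenced in the config.
# Replace the placeholder values with your real keys.
export OPENAI_KEY1="sk-your-openai-key"
export GEMINI_KEY1="your-gemini-key"
```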

3) In your code:

from llms_wrapper.llms import LLMS
from llms_wrapper.config import read_config_file

# [....]
config = read_config_file("myconf.hjson")
llms = LLMS(config)

messages = llms.make_messages(query="What is a monoid?")
# Query one of the aliases defined in the config file
response = llms.query("geminiflash1", messages, return_cost=True)
if response["ok"]:
    print("Got the answer:", response["answer"])
    print("Cost:", response["cost"])
else:
    print("Got an error:", response["error"])
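The branching above can be tried without the library or network access. Below is a minimal sketch that assumes the response is a plain dict with the keys shown in the quickstart (ok, answer, cost, error); the report helper is hypothetical, not part of llms_wrapper:

```python
def report(response: dict) -> str:
    # Mirror the branching in the quickstart: a successful response
    # carries "answer" (and "cost" when return_cost=True was passed),
    # a failed one carries "error".
    if response["ok"]:
        return f"Got the answer: {response['answer']} (cost: {response['cost']})"
    return f"Got an error: {response['error']}"

# Simulated responses, since a real call needs valid API keys:
print(report({"ok": True, "answer": "A monoid is ...", "cost": 0.0001}))
print(report({"ok": False, "error": "invalid API key"}))
```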