
Azure AI Studio

Sample Usage

Ensure the following:

  1. The API base you pass ends with the /v1/ suffix (see the helper sketch after this list). Example:

    api_base = "https://Mistral-large-dfgfj-serverless.eastus2.inference.ai.azure.com/v1/"
  2. The model you pass is listed in the supported models below. You do NOT need to pass your deployment name to litellm. Example: model=azure/Mistral-large-nmefg
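If you want to guard against a malformed base URL, a small helper can normalize it before making any calls. This is a minimal sketch; ensure_v1_suffix is a hypothetical convenience function, not part of litellm:

    def ensure_v1_suffix(api_base: str) -> str:
        # Hypothetical helper: make sure the Azure AI endpoint ends with /v1/
        api_base = api_base.rstrip("/")
        if not api_base.endswith("/v1"):
            api_base += "/v1"
        return api_base + "/"

    # ensure_v1_suffix("https://Mistral-large-dfgfj-serverless.eastus2.inference.ai.azure.com")
    # -> "https://Mistral-large-dfgfj-serverless.eastus2.inference.ai.azure.com/v1/"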

Quick Start

import litellm

response = litellm.completion(
    model="azure/command-r-plus",
    api_base="<your-deployment-base>/v1/",  # must end with /v1/
    api_key="eskk******",
    messages=[{"role": "user", "content": "What is the meaning of life?"}],
)
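Streaming goes through the same call; a minimal sketch, assuming the same endpoint and key as above (stream=True makes litellm yield incremental chunks instead of a single response):

response = litellm.completion(
    model="azure/command-r-plus",
    api_base="<your-deployment-base>/v1/",
    api_key="eskk******",
    messages=[{"role": "user", "content": "What is the meaning of life?"}],
    stream=True,  # yield incremental chunks
)
for chunk in response:
    print(chunk)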

Sample Usage - LiteLLM Proxy

  1. Add models to your config.yaml

    model_list:
      - model_name: mistral
        litellm_params:
          model: azure/mistral-large-latest
          api_base: https://Mistral-large-dfgfj-serverless.eastus2.inference.ai.azure.com/v1/
          api_key: JGbKodRcTp****
      - model_name: command-r-plus
        litellm_params:
          model: azure/command-r-plus
          api_key: os.environ/AZURE_COHERE_API_KEY
          api_base: os.environ/AZURE_COHERE_API_BASE
  2. Start the proxy

    $ litellm --config /path/to/config.yaml
  3. Send Request to LiteLLM Proxy Server

    import openai

    client = openai.OpenAI(
        api_key="sk-1234",             # pass your litellm proxy key, if you're using virtual keys
        base_url="http://0.0.0.0:4000" # litellm proxy base url
    )

    response = client.chat.completions.create(
        model="mistral",
        messages=[
            {
                "role": "user",
                "content": "what llm are you"
            }
        ],
    )

    print(response)
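The proxy routes on model_name, so the same client can target the Cohere deployment from the config above. A minimal sketch reusing the client created in the previous step:

    # Route to the command-r-plus entry defined in config.yaml
    response = client.chat.completions.create(
        model="command-r-plus",
        messages=[{"role": "user", "content": "what llm are you"}],
    )
    print(response)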

Supported Models

| Model Name | Function Call |
|------------|---------------|
| Cohere command-r-plus | completion(model="azure/command-r-plus", messages) |
| Cohere command-r | completion(model="azure/command-r", messages) |
| mistral-large-latest | completion(model="azure/mistral-large-latest", messages) |