
Caila Python client

The most convenient way to access Caila services from a Python program is through the dedicated client library. You can use the Caila SDK for:

  • Accessing services on the Caila platform
  • Creating new services for Caila
  • Developing CI/CD pipelines and complex client applications

The SDK is freely available on GitHub: https://github.com/just-ai/mlp-python-sdk

Code examples using the SDK are available here: https://github.com/just-ai/mlp-python-examples

You can read more about the SDK in the corresponding section.

In this article, we will look at a few simple examples of how to call a Caila service from a Python program.

Example of a call through MlpClientSDK

Install the mlp-python-sdk dependency.

pip3 install git+https://git@github.com/just-ai/mlp-python-sdk.git@v1.0.0

Set the MLP_CLIENT_TOKEN environment variable to your Caila API key.

export MLP_CLIENT_TOKEN=<API-token>
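If you prefer not to export the variable in the shell, you can also set it from Python before initializing the client. This is a minimal sketch; it assumes only that the SDK reads the token from the MLP_CLIENT_TOKEN environment variable, as described above.

import os

# Set the token in code instead of exporting it in the shell.
# This must run before the SDK is initialized.
os.environ["MLP_CLIENT_TOKEN"] = "<API-token>"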

Example of accessing the vectorizer-caila-roberta service via the standard Caila gRPC client:

# Client initialization
from mlp_sdk.transport.MlpClientSDK import MlpClientSDK
sdk = MlpClientSDK()
sdk.init()

# Defining the service parameters we are accessing
author = "just-ai"
service = "vectorizer-caila-roberta"
# API key will be taken from the MLP_CLIENT_TOKEN environment variable

# Creating an object with request parameters. For standard service types,
# the SDK provides data classes. If there is none for the service you need,
# you can build the JSON request body yourself
from mlp_sdk.types import TextsCollection
req = TextsCollection(texts=["hello"])

# Sending the request and outputting the response
res = sdk.predict(account=author, model=service, data=req.json())
print(res)

# Shutting down the client
sdk.shutdown()
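The predict call returns the service response as JSON text, so you can parse it with the standard json module. The snippet below is a minimal sketch under that assumption; it also shows building the request body by hand for a service that has no typed data class in the SDK. These lines would go before sdk.shutdown() in the example above.

import json

# Parse the JSON response (assuming predict returns JSON text)
vectors = json.loads(res)

# Alternative for services without a typed data class in the SDK:
# build the JSON request body yourself
raw_req = json.dumps({"texts": ["hello"]})
res = sdk.predict(account=author, model=service, data=raw_req)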

Example of a call through MlpRestClient

Install the mlp-python-sdk dependency.

pip3 install git+https://git@github.com/just-ai/mlp-python-sdk.git@v1.0.0

Example of accessing the openai-proxy service using the REST client:

from mlp_api.api.process_endpoint_api import ProcessEndpointApi
from mlp_api.models.predict_request_data import PredictRequestData
from mlp_sdk.transport.MlpClientSDK import MlpRestClient

mlp_api_key = "<PUT API KEY HERE>"
account_id = "just-ai"
model_name = "openai-proxy"
rest_client = MlpRestClient(url="https://caila.io/", token=mlp_api_key)
model = ProcessEndpointApi(rest_client)

request = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "hello"}],
}
response = model.predict_with_config(
    account_id,
    model_name,
    predict_request_data=PredictRequestData(data=request),
)
print(response)

Please note that you can also access GPT services through an OpenAI-compatible interface using familiar libraries like LangChain. Read about this in the OpenAI Adapter section.
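For illustration, below is a minimal sketch using the official openai package against Caila's OpenAI-compatible interface. The base URL here is an assumption made for the example; check the OpenAI Adapter section for the actual adapter address.

from openai import OpenAI

# NOTE: base_url is an assumption for illustration only;
# see the OpenAI Adapter section for the actual adapter address.
client = OpenAI(
    base_url="https://caila.io/api/adapters/openai",
    api_key="<PUT API KEY HERE>",
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "hello"}],
)
print(response.choices[0].message.content)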

Which option to use

  • Using MlpClientSDK is the preferred way due to slightly better performance and reliability.
  • Use standard third-party libraries such as openai-python or LangChain if you already rely on them and are switching to Caila to access new models.
  • Use MlpRestClient for applications that use not only the Predict API but also other endpoints. For example, as part of model training pipelines or in CI/CD pipelines.