FastAPI (0.1.0)


Authentication

APIKeyHeader

Security scheme type: API Key
Header parameter name: X-API-Key

manage

Health

Responses

Response samples

Content type
application/json
null

Healthz

Duplicate of /health for compatibility

Responses

Response samples

Content type
application/json
null
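The health endpoints carry no Authorizations entry, so a liveness probe needs no API key. A minimal Python sketch (assuming the container listens on localhost:8000, as in the samples below):

```python
import requests

def is_healthy(base_url: str = "http://localhost:8000") -> bool:
    """Return True if the container's /health endpoint answers with a 2xx status."""
    try:
        return requests.get(f"{base_url}/health", timeout=5).ok
    except requests.RequestException:
        # Connection errors and timeouts count as unhealthy.
        return False
```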

Status

Authorizations:

Responses

Response samples

Content type
application/json
{
  "version": "string",
  "whylogs_logger_status": { },
  "config": { }
}
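A hedged sketch of querying the Status endpoint with the documented X-API-Key header (the /status path comes from the Log Debug Info description below; the key value is a placeholder):

```python
import requests

API_KEY = "<password>"  # placeholder: substitute the container's configured API key

def get_status(base_url: str = "http://localhost:8000", api_key: str = API_KEY) -> dict:
    """Fetch container status: version, whylogs logger state, and config."""
    response = requests.get(f"{base_url}/status", headers={"X-API-Key": api_key})
    response.raise_for_status()
    return response.json()
```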

Log Debug Info

Log the output of /status

Authorizations:

Responses

Response samples

Content type
application/json
null

Publish Profiles

Authorizations:

Responses

Response samples

Content type
application/json
null

Deep Health

Authorizations:

Responses

Response samples

Content type
application/json
null

profile

Profile tabular data

Profile tabular data. The Swagger UI currently can't call this endpoint.

Sample curl request:

curl -X 'POST' \
  -H "X-API-Key: <password>" \
  -H "Content-Type: application/json" \
  'http://localhost:8000/log' \
  --data-raw '{
    "datasetId": "model-62",
    "multiple": {
        "columns": [ "age", "workclass", "fnlwgt", "education" ],
        "data": [
            [ 25, "Private", 226802, "11th" ]
        ]
    }
}'

Sample Python client request:

from whylogs_container_client import AuthenticatedClient
import whylogs_container_client.api.profile.log as Log
from whylogs_container_client.models import LogRequest, LogMultiple
from datetime import datetime

client = AuthenticatedClient(base_url="http://localhost:8000", token="password", prefix="", auth_header_name="X-API-Key")

data = LogRequest(
    dataset_id="model-1",
    timestamp=int(datetime.now().timestamp() * 1000),
    multiple=LogMultiple(
        columns=["col1", "col2"],
        data=[[1, 2], [3, 4]],
    )
)

response = Log.sync_detailed(client=client, json_body=data)
if response.status_code != 200:
    raise Exception(f"Failed to log data. Status code: {response.status_code}")
# The API is asynchronous: the request is accepted immediately and returns no body.

Sample Python request (using requests):

import requests

# Define your API key
api_key = "<password>"

# API endpoint
url = 'http://localhost:8000/log'

# Sample data
data = {
    "datasetId": "model-62",
    "multiple": {
        "columns": ["age", "workclass", "fnlwgt", "education"],
        "data": [
            [25, "Private", 226802, "11th"]
        ]
    }
}

# Make the POST request
headers = {"X-API-Key": api_key}
response = requests.post(url, json=data, headers=headers)
Authorizations:
Request Body schema: application/json
datasetId (required): string (Datasetid)
multiple (required): object (LogMultiple)
timestamp: integer or null (Timestamp)

Responses

Request samples

Content type
application/json
{
  "datasetId": "string",
  "multiple": { },
  "timestamp": 0
}

Response samples

Content type
application/json
null

Profile embeddings

This endpoint requires a custom configuration to be set up beforehand. See https://docs.whylabs.ai/docs/integrations-whylogs-container/ for setting up embeddings support.

Log embeddings data. The Swagger UI currently can't call this endpoint.

Sample curl request:

curl -X 'POST' \
  -H "X-API-Key: <password>" \
  -H "Content-Type: application/json" \
  'http://localhost:8000/log-embeddings' \
  --data-raw '{
    "datasetId": "model-62",
    "timestamp": 1634235000,
    "embeddings": {
        "embeddings": [[0.12, 0.45, 0.33, 0.92]]
    }
}'

Sample Python request (using requests):

import requests

# Define your API key
api_key = "<password>"

# API endpoint
url = 'http://localhost:8000/log-embeddings'

# Sample data
data = {
    "datasetId": "model-62",
    "timestamp": 1634235000,  # an example timestamp
    "embeddings": {
        "embeddings": [[0.12, 0.45, 0.33, 0.92]]
    }
}

# Make the POST request. The body schema is application/json, and requests
# sets that Content-Type automatically when json= is used.
headers = {"X-API-Key": api_key}
response = requests.post(url, json=data, headers=headers)
Authorizations:
Request Body schema: application/json
dataset_id (required): string (Dataset Id)
timestamp (required): integer (Timestamp)
embeddings (required): object (Embeddings)

Responses

Request samples

Content type
application/json
{
  "dataset_id": "string",
  "timestamp": 0,
  "embeddings": { }
}

Response samples

Content type
application/json
null

Log Pubsub

Authorizations:

Responses

Response samples

Content type
application/json
null

Log Pubsub Embeddings

Authorizations:

Responses

Response samples

Content type
application/json
null

llm

Evaluate and log a single prompt/response pair using langkit asynchronously.

This is a convenience wrapper around the llm request type for calling /log, which accepts bulk data.

Authorizations:
Request Body schema: application/json
prompt: string or null (Prompt)
response: string or null (Response)
context: InputContext (object) or null
id: string or null (Id)
datasetId (required): string (Datasetid)
timestamp: integer (Timestamp)
additional_data: object (Additional Data)
options: RunOptions (object) or null
metadata: object or null (Metadata)

Responses

Request samples

Content type
application/json
{
  "prompt": "string",
  "response": "string",
  "context": { },
  "id": "string",
  "datasetId": "string",
  "timestamp": 0,
  "additional_data": { },
  "options": { },
  "metadata": { }
}

Response samples

Content type
application/json
null
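No sample request is shown for this wrapper, so here is a minimal sketch with requests. The /log/llm path is an assumption (this rendering does not show the route), and the epoch-millisecond timestamp convention follows the tabular /log example above:

```python
import requests
from datetime import datetime, timezone

API_KEY = "<password>"  # placeholder API key
URL = "http://localhost:8000/log/llm"  # assumed path; confirm against your deployment

payload = {
    "prompt": "What is the capital of France?",
    "response": "The capital of France is Paris.",
    "datasetId": "model-62",
    "timestamp": int(datetime.now(timezone.utc).timestamp() * 1000),  # epoch millis
}

def log_llm(url: str = URL, api_key: str = API_KEY) -> None:
    """Submit the pair for asynchronous evaluation; the response has no body."""
    r = requests.post(url, json=payload, headers={"X-API-Key": api_key})
    r.raise_for_status()
```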

Evaluate and log a single prompt/response pair asynchronously using SQS.

Request Body schema: application/json
prompt: string or null (Prompt)
response: string or null (Response)
context: InputContext (object) or null
id: string or null (Id)
datasetId (required): string (Datasetid)
timestamp: integer (Timestamp)
additional_data: object (Additional Data)
options: RunOptions (object) or null
metadata: object or null (Metadata)

Responses

Request samples

Content type
application/json
{
  "prompt": "string",
  "response": "string",
  "context": { },
  "id": "string",
  "datasetId": "string",
  "timestamp": 0,
  "additional_data": { },
  "options": { },
  "metadata": { }
}

Response samples

Content type
application/json
null

Evaluate and log a single prompt/response pair using langkit.

Run langkit evaluation and return the validation results, as well as the generated metrics.

Args: log (bool, optional): Determines if logging to WhyLabs is enabled for the request. Defaults to True.

Authorizations:
query Parameters
log: boolean (Log). Default: true
perf_info: boolean (Perf Info). Default: false
trace: boolean (Trace). Default: true
metadata_info: boolean (Metadata Info). Default: false
Request Body schema: application/json
prompt: string or null (Prompt)
response: string or null (Response)
context: InputContext (object) or null
id: string or null (Id)
datasetId (required): string (Datasetid)
timestamp: integer (Timestamp)
additional_data: object (Additional Data)
options: RunOptions (object) or null
metadata: object or null (Metadata)

Responses

Request samples

Content type
application/json
{
  "prompt": "string",
  "response": "string",
  "context": { },
  "id": "string",
  "datasetId": "string",
  "timestamp": 0,
  "additional_data": { },
  "options": { },
  "metadata": { }
}

Response samples

Content type
application/json
{
  "metrics": [ ],
  "validation_results": { },
  "perf_info": { },
  "action": { },
  "score_perf_info": { },
  "scores": [ ],
  "metadata": { }
}
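The synchronous variant returns the metrics and validation results directly. A sketch with requests, assuming the route is /evaluate (the path is not visible in this rendering) and showing the documented query parameters at their defaults:

```python
import requests

API_KEY = "<password>"  # placeholder API key
URL = "http://localhost:8000/evaluate"  # assumed path; confirm against your deployment

# Query parameters documented above, set to their defaults.
params = {"log": "true", "perf_info": "false", "trace": "true", "metadata_info": "false"}

payload = {
    "prompt": "Summarize our refund policy.",
    "response": "Refunds are available within 30 days of purchase.",
    "datasetId": "model-62",
}

def evaluate(url: str = URL, api_key: str = API_KEY) -> dict:
    """Run langkit evaluation; the JSON body includes metrics and validation_results."""
    r = requests.post(url, params=params, json=payload, headers={"X-API-Key": api_key})
    r.raise_for_status()
    return r.json()
```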

Get a list of available metrics that can be referenced in policies.

Get a list of available metrics that can be referenced in policies.

Responses

Response samples

Content type
application/json
{
  "metrics_names": [ ]
}

Get the JSON schema for policy files

query Parameters
string or null (Schema Version)

Responses

Response samples

Content type
application/json
{ }

Dump debug information about the known policies.

Authorizations:

Responses

Response samples

Content type
application/json
{ }

Evaluate and log a single prompt/response pair from lightllm.

Run langkit evaluation and return the validation results, as well as the generated metrics.

Args: log (bool, optional): Determines if logging to WhyLabs is enabled for the request. Defaults to True.

Authorizations:
query Parameters
log: boolean (Log). Default: true
perf_info: boolean (Perf Info). Default: false
trace: boolean (Trace). Default: true
metadata_info: boolean (Metadata Info). Default: false
Request Body schema: application/json
prompt: string or null (Prompt)
response: string or null (Response)
context: InputContext (object) or null
id: string or null (Id)
datasetId (required): string (Datasetid)
timestamp: integer (Timestamp)
additional_data: object (Additional Data)
options: RunOptions (object) or null
metadata: object or null (Metadata)

Responses

Request samples

Content type
application/json
{
  "prompt": "string",
  "response": "string",
  "context": { },
  "id": "string",
  "datasetId": "string",
  "timestamp": 0,
  "additional_data": { },
  "options": { },
  "metadata": { }
}

Response samples

Content type
application/json
{
  "metrics": [ ],
  "validation_results": { },
  "perf_info": { },
  "action": { },
  "score_perf_info": { },
  "scores": [ ],
  "metadata": { }
}

debug

Evaluate and log a single prompt/response pair using langkit and a policy file.

Authorizations:
Request Body schema: application/json
prompt: string or null (Prompt)
response: string or null (Response)
context: InputContext (object) or null
id: string or null (Id)
datasetId (required): string (Datasetid)
timestamp: integer (Timestamp)
additional_data: object (Additional Data)
options: RunOptions (object) or null
metadata: object or null (Metadata)
policy (required): string (Policy)

Responses

Request samples

Content type
application/json
{
  "prompt": "string",
  "response": "string",
  "context": { },
  "id": "string",
  "datasetId": "string",
  "timestamp": 0,
  "additional_data": { },
  "options": { },
  "metadata": { },
  "policy": "string"
}

Response samples

Content type
application/json
{
  "metrics": [ ],
  "validation_results": { },
  "perf_info": { },
  "action": { },
  "score_perf_info": { },
  "scores": [ ],
  "metadata": { }
}

Generate embeddings for inputs

Authorizations:
Request Body schema: application/json
prompt: string or null (Prompt)
response: string or null (Response)
context: InputContext (object) or null

Responses

Request samples

Content type
application/json
{
  "prompt": "string",
  "response": "string",
  "context": { }
}

Response samples

Content type
application/json
{
  "metrics": [ ],
  "validation_results": { },
  "perf_info": { },
  "action": { },
  "score_perf_info": { },
  "scores": [ ],
  "metadata": { }
}

Dump debug information about the known policies.

Authorizations:

Responses

Response samples

Content type
application/json
{ }

ui

Interactive policy editor. Visit this endpoint in a browser.

Responses

Response samples

Content type
application/json
null

lightllm

Evaluate and log a single prompt/response pair from lightllm.

Run langkit evaluation and return the validation results, as well as the generated metrics.

Args: log (bool, optional): Determines if logging to WhyLabs is enabled for the request. Defaults to True.

Authorizations:
query Parameters
log: boolean (Log). Default: true
perf_info: boolean (Perf Info). Default: false
trace: boolean (Trace). Default: true
metadata_info: boolean (Metadata Info). Default: false
Request Body schema: application/json
prompt: string or null (Prompt)
response: string or null (Response)
context: InputContext (object) or null
id: string or null (Id)
datasetId (required): string (Datasetid)
timestamp: integer (Timestamp)
additional_data: object (Additional Data)
options: RunOptions (object) or null
metadata: object or null (Metadata)

Responses

Request samples

Content type
application/json
{
  "prompt": "string",
  "response": "string",
  "context": { },
  "id": "string",
  "datasetId": "string",
  "timestamp": 0,
  "additional_data": { },
  "options": { },
  "metadata": { }
}

Response samples

Content type
application/json
{
  "metrics": [ ],
  "validation_results": { },
  "perf_info": { },
  "action": { },
  "score_perf_info": { },
  "scores": [ ],
  "metadata": { }
}

integrations

Evaluate and log a single prompt/response pair from lightllm.

Run langkit evaluation and return the validation results, as well as the generated metrics.

Args: log (bool, optional): Determines if logging to WhyLabs is enabled for the request. Defaults to True.

Authorizations:
query Parameters
log: boolean (Log). Default: true
perf_info: boolean (Perf Info). Default: false
trace: boolean (Trace). Default: true
metadata_info: boolean (Metadata Info). Default: false
Request Body schema: application/json
prompt: string or null (Prompt)
response: string or null (Response)
context: InputContext (object) or null
id: string or null (Id)
datasetId (required): string (Datasetid)
timestamp: integer (Timestamp)
additional_data: object (Additional Data)
options: RunOptions (object) or null
metadata: object or null (Metadata)

Responses

Request samples

Content type
application/json
{
  "prompt": "string",
  "response": "string",
  "context": { },
  "id": "string",
  "datasetId": "string",
  "timestamp": 0,
  "additional_data": { },
  "options": { },
  "metadata": { }
}

Response samples

Content type
application/json
{
  "metrics": [ ],
  "validation_results": { },
  "perf_info": { },
  "action": { },
  "score_perf_info": { },
  "scores": [ ],
  "metadata": { }
}