
TrustyAI Kubernetes Operator


Overview

The TrustyAI Kubernetes Operator aims to simplify the deployment and management of various TrustyAI components on Kubernetes, such as:

  • TrustyAI Service: A service that deploys alongside KServe models and collects inference data to enable model explainability, fairness monitoring, and drift tracking (a minimal example custom resource is sketched after this list).
  • FMS-Guardrails: A modular framework for applying guardrails to LLMs.
  • LM-Eval: A job-based architecture for deploying and managing LLM evaluations, based on EleutherAI's lm-evaluation-harness library.
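
As an illustration of the first item, once the operator is installed a TrustyAI Service is requested by creating a TrustyAIService custom resource in the namespace of the models you want to monitor. The sketch below is only indicative: the namespace is a placeholder, and the API version and spec fields (storage, data, metrics) are assumptions taken from upstream examples and may differ between releases, so check the TrustyAI documentation for your version before applying it.

MODEL_NAMESPACE=my-model-namespace   # placeholder: namespace containing your KServe models
oc apply -n $MODEL_NAMESPACE -f - <<EOF
apiVersion: trustyai.opendatahub.io/v1alpha1
kind: TrustyAIService
metadata:
  name: trustyai-service
spec:
  storage:
    format: "PVC"        # persist collected inference data on a PersistentVolumeClaim
    folder: "/inputs"
    size: "1Gi"
  data:
    filename: "data.csv"
    format: "CSV"
  metrics:
    schedule: "5s"       # how often fairness/drift metrics are recomputed
EOF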

Prerequisites

  • Kubernetes cluster v1.19+ or OpenShift cluster v4.6+
  • kubectl v1.19+ or oc client v4.6+
  • kustomize v5+

Installation

This operator is available as an image on Quay.io. To deploy it on your cluster:

OPERATOR_NAMESPACE=opendatahub
make manifest-gen NAMESPACE=$OPERATOR_NAMESPACE KUSTOMIZE=kustomize
oc apply -f release/trustyai_bundle.yaml -n $OPERATOR_NAMESPACE
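
Once the bundle is applied, you can watch the operator come up before creating any TrustyAI resources. The deployment name below is an assumption; list the deployments in the namespace if yours differs.

# Watch the operator pods start in the target namespace
oc get pods -n $OPERATOR_NAMESPACE -w

# If the rollout stalls, inspect the controller logs (deployment name may vary)
oc logs -n $OPERATOR_NAMESPACE deployment/trustyai-service-operator-controller-manager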

You can also build your own image and use it as your TrustyAI operator:

OPERATOR_NAMESPACE=opendatahub
OPERATOR_IMAGE=quay.io/yourorg/your-image-name:latest
podman build -t $OPERATOR_IMAGE --platform linux/amd64 -f Dockerfile .
podman push $OPERATOR_IMAGE
make manifest-gen NAMESPACE=$OPERATOR_NAMESPACE OPERATOR_IMAGE=$OPERATOR_IMAGE KUSTOMIZE=kustomize
oc apply -f release/trustyai_bundle.yaml -n $OPERATOR_NAMESPACE
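
Because the operator ships as a single bundle manifest, it can be removed the same way it was applied:

oc delete -f release/trustyai_bundle.yaml -n $OPERATOR_NAMESPACE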

Usage

For usage information, please see the OpenDataHub documentation for TrustyAI.

Contributing

Please see the CONTRIBUTING.md file for more details on how to contribute to this project.

License

This project is licensed under the Apache License Version 2.0 - see the LICENSE file for details.