Integrate Red Hat Developer Hub with {openshift-ai-connector-name} to leverage AI models
Installing, configuring, and troubleshooting OpenShift AI Connector for Red Hat Developer Hub
Abstract
1. Understand how AI assets map to the Red Hat Developer Hub Catalog
This section describes Developer Preview features in the OpenShift AI Connector for Red Hat Developer Hub plugin. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to functionality in advance of possible inclusion in a Red Hat product offering. Customers can use these features to test functionality and provide feedback during the development process. Developer Preview features might not have any documentation, are subject to change or removal at any time, and have received limited testing. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.
For more information about the support scope of Red Hat Developer Preview features, see Developer Preview Support Scope.
The OpenShift AI Connector for Red Hat Developer Hub (OpenShift AI Connector for RHDH) serves as a crucial link, enabling the discovery and accessibility of AI assets managed within the Red Hat OpenShift AI offering directly within your RHDH instance.
For more information on model registry components, see Overview of model registries and model catalog.
1.1. Model-to-Entity mapping
The OpenShift AI Connector for RHDH interfaces with the RHOAI model registry, model catalog, and KServe-based model deployments (InferenceServices) to create familiar Backstage entities.
| RHOAI Artifact | RHDH/Backstage Entity Kind | RHDH/Backstage Entity Type | Purpose |
|---|---|---|---|
| Model Server (InferenceService) | Component | model-server | Represents a running, accessible AI model endpoint. See Configuring your model-serving platform. |
| AI Model (Model Registry Version) | Resource | ai-model | Represents the specific AI model artifact. |
| Model Server API Details | API | | Provides the OpenAPI/Swagger specification for the REST endpoint of the model. See Red Hat OpenShift AI: API Tiers. |
| Model Cards | TechDocs | N/A | Model cards from the RHOAI model catalog are associated with the Component and Resource entities. See Registering a model from the model catalog. |
Once the OpenShift AI Connector for RHDH is installed and connected with RHOAI, the transfer of information commences automatically.
1.2. Out-of-the-box AI asset details synced from RHOAI
The connector propagates the following key data:
`InferenceServices` (Component type `model-server`):
- URL of the OpenShift Route (if exposed).
- URL of the Kubernetes Service.
- Authentication requirement status.

Model registry (Resource type `ai-model`):
- Model description, artifact URIs, and author/owner information.
Model catalog:
- Links to the Model Card (as RHDH TechDocs).
- Model license URL.
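For orientation, the entities the connector emits are ordinary Backstage catalog entities. The following is a hypothetical sketch of a Component for a deployed model server; only the `kind` and `spec.type` come from the mapping above, and every other value is invented for illustration:

```yaml
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: example-granite-model-server   # hypothetical name
  description: Example deployed model  # hypothetical description
  links:
    - url: https://example-model.apps.example.com   # Route URL synced from RHOAI
      title: Model endpoint
spec:
  type: model-server
  lifecycle: production   # hypothetical
  owner: example-team     # hypothetical
```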
2. Setting up OpenShift AI Connector for Red Hat Developer Hub with Red Hat OpenShift AI
The installation of the OpenShift AI Connector for Red Hat Developer Hub requires manual updates to RHDH-related Kubernetes resources.
RHOAI Prerequisites
To import model cards from the model catalog into TechDocs, you need to use RHOAI 2.25.
Note: If you upgraded to RHOAI 2.25 from an earlier version, you must manually enable the model catalog dashboard and model registry before you can import model cards.
- If you used the model catalog in earlier versions of RHOAI, TechDocs propagation does not work for any models you registered into the model registry while at those earlier versions; only models registered into the model registry from a RHOAI 2.25 model catalog have their model cards transferred to RHDH as TechDocs.
- All other features work with RHOAI 2.20 or later. Enabling the model registry and its associated dashboard gives users a more direct way to customize AI Model metadata. For the best overall experience, RHOAI 2.25 is recommended.
For more details, see Enabling the model registry component.
Procedure
Configure RHOAI-related RBAC and credentials. A Kubernetes `ServiceAccount` and a `service-account-token` Secret are required for the connector to retrieve data from RHOAI. The following resources must be created, replacing the namespace names (`ai-rhdh` for RHDH, `rhoai-model-registries` for RHOAI) as needed:
- `ServiceAccount` (`rhdh-rhoai-bridge`)
- `ClusterRole` and `ClusterRoleBinding` (`rhdh-rhoai-bridge`) to allow access to OCP resources such as `routes`, `services`, and `inferenceservices`.
- `Role` and `RoleBinding` to allow ConfigMap updates within the RHDH namespace.
- `RoleBinding` in the RHOAI namespace to grant the RHDH `ServiceAccount` read permissions to the Model Registry data (binding to `registry-user-modelregistry-public`).
- Secret (`rhdh-rhoai-bridge-token`) of type `kubernetes.io/service-account-token` associated with the `rhdh-rhoai-bridge` `ServiceAccount`.
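As a starting point, the following sketch shows the `ServiceAccount` and token Secret from the list above, using standard Kubernetes conventions for a `service-account-token` Secret (the `ClusterRole`, `Role`, and binding rules are not reproduced here):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rhdh-rhoai-bridge
  namespace: ai-rhdh   # replace with your RHDH namespace
---
apiVersion: v1
kind: Secret
metadata:
  name: rhdh-rhoai-bridge-token
  namespace: ai-rhdh   # replace with your RHDH namespace
  annotations:
    # Standard annotation that ties the token Secret to the ServiceAccount
    kubernetes.io/service-account.name: rhdh-rhoai-bridge
type: kubernetes.io/service-account-token
```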
Update your RHDH dynamic plugin configuration. The RHDH Pod requires two dynamic plugins. In your RHDH dynamic plugins ConfigMap, add the following code:

```yaml
plugins:
  - disabled: false
    package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/red-hat-developer-hub-backstage-plugin-catalog-backend-module-model-catalog:bs_1.42.5__0.7.0!red-hat-developer-hub-backstage-plugin-catalog-backend-module-model-catalog
  - disabled: false
    package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/red-hat-developer-hub-backstage-plugin-catalog-techdoc-url-reader-backend:bs_1.42.5__0.3.0!red-hat-developer-hub-backstage-plugin-catalog-techdoc-url-reader-backend
```
Add the Connector sidecar containers to the RHDH Pod.
- If RHDH was installed using the Operator, modify your RHDH custom resource (CR) instance.
- If RHDH was installed using the Helm charts, modify the Deployment specification.

The system relies on three sidecar containers (the model catalog bridge) running alongside the `backstage-backend` container. Add these sidecar containers to your configuration, referencing the `rhdh-rhoai-bridge-token` Secret:
- `location`: Provides the REST API for RHDH plugins to fetch model metadata.
- `storage-rest`: Maintains a cache of AI Model metadata in a ConfigMap called `bac-import-model`.
- `rhoai-normalizer`: Acts as a Kubernetes controller and RHOAI client, normalizing RHOAI metadata for the connector.

The following code block is an example:

```yaml
spec:
  template:
    spec:
      containers:
        - name: backstage-backend
        - env:
            - name: NORMALIZER_FORMAT
              value: JsonArrayFormat
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          envFrom:
            - secretRef:
                name: rhdh-rhoai-bridge-token
          image: quay.io/redhat-ai-dev/model-catalog-location-service@sha256:4f6ab6624a29f627f9f861cfcd5d18177d46aa2c67a81a75a1502c49bc2ff012
          imagePullPolicy: Always
          name: location
          ports:
            - containerPort: 9090
              name: location
              protocol: TCP
          volumeMounts:
            - mountPath: /opt/app-root/src/dynamic-plugins-root
              name: dynamic-plugins-root
          workingDir: /opt/app-root/src
        - env:
            - name: NORMALIZER_FORMAT
              value: JsonArrayFormat
            - name: STORAGE_TYPE
              value: ConfigMap
            - name: BRIDGE_URL
              value: http://localhost:9090
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          envFrom:
            - secretRef:
                name: rhdh-rhoai-bridge-token
          image: quay.io/redhat-ai-dev/model-catalog-storage-rest@sha256:398095e7469e86d84b1196371286363f4b7668aa3e26370b4d78cb8d4ace1dc9
          imagePullPolicy: Always
          name: storage-rest
          volumeMounts:
            - mountPath: /opt/app-root/src/dynamic-plugins-root
              name: dynamic-plugins-root
          workingDir: /opt/app-root/src
        - env:
            - name: NORMALIZER_FORMAT
              value: JsonArrayFormat
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          envFrom:
            - secretRef:
                name: rhdh-rhoai-bridge-token
          image: quay.io/redhat-ai-dev/model-catalog-rhoai-normalizer@sha256:fe6c05d57495d6217c4d584940ec552c3727847ff60f39f5d04f94be024576d8
          imagePullPolicy: Always
          name: rhoai-normalizer
          volumeMounts:
            - mountPath: /opt/app-root/src/dynamic-plugins-root
              name: dynamic-plugins-root
          workingDir: /opt/app-root/src
```
Enable the Connector in your RHDH `app-config.yaml` file. In your Backstage `app-config.extra.yaml` file, configure the Entity Provider under the `catalog.providers` section:

```yaml
providers:
  modelCatalog:
    development:
      baseUrl: http://localhost:9090
```

where:
- `modelCatalog`: Specifies the name of the provider.
- `development`: Defines future connector capability beyond a single `baseUrl`.
- `baseUrl`: For Developer Preview, this value is the only one supported. Future releases might support external routes.
3. Enrich AI model metadata for enhanced Red Hat Developer Hub experience
While RHOAI provides essential data, an AI platform engineer can enrich the Backstage experience by adding custom properties to the ModelVersion or RegisteredModel (or annotations to the KServe InferenceService if the Model Registry is not used). The OpenShift AI Connector for Red Hat Developer Hub adds this information to the RHDH entities it creates.
| Property Key | Entity Field Populated | Description |
|---|---|---|
| API Spec | API Definition Tab | The OpenAPI / Swagger JSON specification for the model REST API. |
| | API Type | Correlates to supported RHDH/Backstage API types (defaults to |
| | TechDocs | URL pointing to a Git repository that follows RHDH TechDocs conventions for the Model Card. Use this setting only if the Model Card to TechDocs mapping is not active. |
| | Links | A URL considered the home page for the model. |
| | Owner | Overrides the default OpenShift user as the entity owner. |
| | Lifecycle | Provides a means to express the RHDH/Backstage notion of lifecycle. |
| | Links | A URL that points to usage documentation. |
| | Links | A URL to the license file of the model. |
3.1. Populating the API Definition tab
Because RHOAI does not expose the OpenAPI specification by default, an AI platform engineer must follow these steps to provide this information.
Procedure
Retrieve the OpenAPI JSON: Use a tool such as `curl` to fetch the specification directly from the running endpoint of the AI model server. The following command uses the precise endpoint (`/openapi.json`) and shows how to include a Bearer token if the model requires authentication for access:

```shell
curl -k -H "Authorization: Bearer $MODEL_API_KEY" https://$MODEL_ROOT_URL_INCLUDING_PORT/openapi.json | jq > open-api.json
```
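Before pasting the fetched file into RHOAI, it can help to confirm that it parses and looks like an OpenAPI document. The following is a small sketch; the `summarize_spec` helper is illustrative, and the inline sample stands in for the `open-api.json` file produced by the step above:

```python
import json

def summarize_spec(spec: dict) -> str:
    """Return a one-line summary of an OpenAPI document, or raise if it is not one."""
    version = spec.get("openapi") or spec.get("swagger")
    if not version:
        raise ValueError("not an OpenAPI/Swagger document")
    return f"OpenAPI {version}, {len(spec.get('paths', {}))} paths"

# In practice, load the file saved by the curl command above:
#   spec = json.load(open("open-api.json"))
# A minimal inline sample is used here instead:
sample = {"openapi": "3.1.0", "paths": {"/v1/completions": {}}}
print(summarize_spec(sample))
```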
Set the property in RHOAI:
- In the RHOAI dashboard, go to Model Registry and select the appropriate Model Version.

Note: We recommend using a Model Version instead of a Registered Model to maintain stability if the API changes between versions.

- In the Properties section, set a key/value pair where the key is `API Spec` and the value is the entire JSON content from the `open-api.json` file.

Propagation: The OpenShift AI Connector for Red Hat Developer Hub periodically polls the RHOAI Model Registry, propagates this JSON, and renders the interactive API documentation in the Definition tab of the RHDH API entity.
4. Troubleshooting Connector functionality
The connector system consists of the two dynamic plugins and the three Model Catalog Bridge sidecar containers. In general, provide the collected logs to Red Hat Support for analysis.
The actual contents of the diagnostic data are not part of any guaranteed product specification and can change at any time.
4.1. Checking Dynamic Plugins status
Validate that the dynamic plugins have been successfully installed into your RHDH project Pod by using the following command:

```shell
oc logs -c install-dynamic-plugins deployment/<your RHDH deployment>
```

In the install-dynamic-plugins logs, check for successful installation of the following plugins:
- `red-hat-developer-hub-backstage-plugin-catalog-backend-module-model-catalog` (Entity Provider)
- `red-hat-developer-hub-backstage-plugin-catalog-techdoc-url-reader-backend` (TechDoc URL Reader)
4.2. Inspecting plugin logs
View the logs of the OpenShift AI Connector for Red Hat Developer Hub plugins in the `backstage-backend` container. Items to look for:
| Plugin Component | Logger Service Target | Common Log Text |
|---|---|---|
| Model Catalog Entity Provider | | |
| Model Catalog TechDoc URL Reader | | |
To enable debug logging, set the `LOG_LEVEL` environment variable to `debug` on the `backstage-backend` container. For more information, see Monitoring and logging.
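For example, in a Deployment-style container spec (or the equivalent section of the RHDH CR), the debug setting looks like this sketch:

```yaml
spec:
  template:
    spec:
      containers:
        - name: backstage-backend
          env:
            - name: LOG_LEVEL
              value: debug
```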
4.3. Inspecting the Model Catalog Bridge
The Model Catalog Bridge sidecars manage the data fetching and storage:
Check cached data (ConfigMap): The processed AI Model metadata is stored in a ConfigMap named `bac-import-model`:

```shell
oc get configmap bac-import-model -o json | jq -r '.binaryData | to_entries[] | "=== \(.key) ===\n" + (.value | @base64d | fromjson | .body | @base64d | fromjson | tostring)' | jq -R 'if startswith("=== ") then . else (. | fromjson) end'
```

Check the location service API: Confirm that the location service is providing data to the RHDH Entity Provider:

```shell
oc rsh -c backstage-backend deployment/<your RHDH deployment> curl http://localhost:9090/list
```
Check the sidecar container logs:

```shell
oc logs -c rhoai-normalizer deployment/<your RHDH deployment>
oc logs -c storage-rest deployment/<your RHDH deployment>
oc logs -c location deployment/<your RHDH deployment>
```
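As an alternative to the `jq` pipeline for the `bac-import-model` ConfigMap, the nested base64/JSON layout can be unpacked in Python. This is a sketch; the double-nested structure (base64 JSON envelope whose `body` field is itself base64-encoded JSON) is inferred from that `jq` expression, and the sample entry below is synthetic:

```python
import base64
import json

def decode_entry(value: str) -> dict:
    """Decode one binaryData value: base64 -> JSON envelope whose 'body'
    field is itself base64-encoded JSON."""
    envelope = json.loads(base64.b64decode(value))
    return json.loads(base64.b64decode(envelope["body"]))

# Synthetic entry shaped like the ConfigMap data; in practice, read the
# values from `oc get configmap bac-import-model -o json` under .binaryData.
inner = base64.b64encode(json.dumps({"name": "example-model"}).encode()).decode()
outer = base64.b64encode(json.dumps({"body": inner}).encode()).decode()
print(decode_entry(outer))
```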
4.4. OpenShift AI model registry and model catalog queries
To access the same RHOAI data as the connector, use `curl` to query the RHOAI model registry and model catalog APIs, ensuring that the `ServiceAccount` token has the correct access permissions:
Example: Fetch registered models

```shell
curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_REGISTRY_URL/api/model_registry/v1alpha3/registered_models | jq
```

Example: Fetch model versions

```shell
curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_REGISTRY_URL/api/model_registry/v1alpha3/model_versions | jq
```

Example: Fetch model artifacts

```shell
curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_REGISTRY_URL/api/model_registry/v1alpha3/model_artifacts | jq
```

Example: Fetch inference services

```shell
curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_REGISTRY_URL/api/model_registry/v1alpha3/inference_services | jq
```

Example: Fetch serving environments

```shell
curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_REGISTRY_URL/api/model_registry/v1alpha3/serving_environments | jq
```

Example: Fetch catalog sources

```shell
curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_CATALOG_URL/api/model_catalog/v1alpha1/sources | jq
```
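The model registry queries above can also be scripted. The following Python sketch uses only the standard library, takes the base URL and token from the same environment variables as the `curl` examples, and mirrors `curl -k` with an unverified SSL context; the `build_request` and `registry_get` helpers are illustrative, not part of any product API:

```python
import json
import os
import ssl
import urllib.request

API_PREFIX = "/api/model_registry/v1alpha3"

def build_request(base_url: str, token: str, resource: str) -> urllib.request.Request:
    """Build an authenticated GET request for one registry resource,
    mirroring the curl examples above."""
    return urllib.request.Request(
        f"{base_url}{API_PREFIX}/{resource}",
        headers={"Authorization": f"Bearer {token}"},
    )

def registry_get(req: urllib.request.Request) -> dict:
    # The unverified SSL context mirrors curl's -k flag.
    ctx = ssl._create_unverified_context()
    with urllib.request.urlopen(req, context=ctx) as resp:
        return json.load(resp)

if __name__ == "__main__" and "RHOAI_MODEL_REGISTRY_URL" in os.environ:
    base = os.environ["RHOAI_MODEL_REGISTRY_URL"]
    token = os.environ["TOKEN"]
    for resource in ("registered_models", "model_versions", "model_artifacts"):
        print(resource, registry_get(build_request(base, token, resource)))
```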