Red Hat Developer Hub 1.8

Integrate Red Hat Developer Hub with the OpenShift AI Connector to leverage AI models

Installing, configuring, and troubleshooting OpenShift AI Connector for Red Hat Developer Hub

Red Hat Customer Content Services

Abstract

As a developer, when you require access to centralized AI/ML services, you can integrate AI models and model servers from Red Hat OpenShift AI directly into the Red Hat Developer Hub (RHDH) Catalog, so that you can provide a single, consistent hub for discovering, managing, and consuming all components, accelerating time-to-market.

1. Understand how AI assets map to the Red Hat Developer Hub Catalog

Important

This section describes Developer Preview features in the OpenShift AI Connector for Red Hat Developer Hub plugin. Developer Preview features are not supported by Red Hat in any way and are not functionally complete or production-ready. Do not use Developer Preview features for production or business-critical workloads. Developer Preview features provide early access to functionality in advance of possible inclusion in a Red Hat product offering. Customers can use these features to test functionality and provide feedback during the development process. Developer Preview features might not have any documentation, are subject to change or removal at any time, and have received limited testing. Red Hat might provide ways to submit feedback on Developer Preview features without an associated SLA.

For more information about the support scope of Red Hat Developer Preview features, see Developer Preview Support Scope.

The OpenShift AI Connector for Red Hat Developer Hub (OpenShift AI Connector for RHDH) serves as a crucial link, enabling the discovery and accessibility of AI assets managed within the Red Hat OpenShift AI offering directly within your RHDH instance.

For more information on model registry components, see Overview of model registries and model catalog.

1.1. Model-to-Entity mapping

The OpenShift AI Connector for RHDH interfaces with the RHOAI model registry, model catalog, and KServe-based model deployments (InferenceServices) to create familiar Backstage entities.

RHOAI Artifact | RHDH/Backstage Entity Kind | RHDH/Backstage Entity Type | Purpose
---------------------------------------------------------------------------------
Model Server (InferenceService) | Component | model-server | Represents a running, accessible AI model endpoint. See Configuring your model-serving platform.
AI Model (Model Registry Version) | Resource | ai-model | Represents the specific AI model artifact, for example, Llama-3-8B.
Model Server API Details | API | openapi (default) | Provides the OpenAPI/Swagger specification for the REST endpoint of the model. See Red Hat OpenShift AI: API Tiers.
Model Cards | TechDocs | N/A | Model cards from the RHOAI model catalog are associated with the Component and Resource entities. See Registering a model from the model catalog.

Once the OpenShift AI Connector for RHDH is installed and connected with RHOAI, the transfer of information commences automatically.
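To illustrate the mapping, the following is a hypothetical sketch of the kind of Resource entity the connector might produce for a registered model; the entity name, description, and owner shown here are invented for illustration, and the exact fields the connector emits depend on your RHOAI metadata.

```yaml
# Hypothetical catalog entity for an AI model (Resource kind, ai-model type).
# Field values are illustrative only; the connector derives real values
# from the RHOAI model registry.
apiVersion: backstage.io/v1alpha1
kind: Resource
metadata:
  name: llama-3-8b
  description: Llama-3-8B model artifact registered in the RHOAI model registry
spec:
  type: ai-model
  owner: user:default/example-user
```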

1.2. Out-of-the-box AI asset details synced from RHOAI

The connector propagates the following key data:

  • InferenceServices (Component type model-server):

    • URL of the OpenShift Route (if exposed).
    • URL of the Kubernetes Service.
    • Authentication requirement status.
  • Model registry (Resource type ai-model):

    • Model description, artifact URIs, and author/owner information.
  • Model catalog:

    • Links to the Model Card (as RHDH TechDocs).
    • Model license URL.

2. Setting up OpenShift AI Connector for Red Hat Developer Hub with Red Hat OpenShift AI

The installation of the OpenShift AI Connector for Red Hat Developer Hub requires manual updates to RHDH-related Kubernetes resources.

RHOAI Prerequisites

  • To import model cards from the model catalog into TechDocs, you need to use RHOAI 2.25.

    Note

    If you upgraded to RHOAI 2.25 from an earlier version, you must manually enable the model catalog dashboard and model registry before you can import model cards.

  • If you used the model catalog in earlier versions of RHOAI, TechDocs propagation does not work for any models you registered into the model registry while at those earlier versions; only models registered into model registry from a RHOAI 2.25 model catalog have their model cards transferred to RHDH as TechDocs.
  • All other features work with RHOAI version 2.20 or later. Enabling the model registry and its associated dashboard provides a more direct way to customize AI model metadata. For the best overall experience, RHOAI 2.25 is recommended.

For more details, see Enabling the model registry component.

Procedure

  1. Configure RHOAI-related RBAC and credentials. A Kubernetes ServiceAccount and a service-account-token Secret are required for the connector to retrieve data from RHOAI. The following resources must be created, replacing namespace names (ai-rhdh for RHDH, rhoai-model-registries for RHOAI) as needed:

    • ServiceAccount (rhdh-rhoai-bridge)
    • ClusterRole and ClusterRoleBinding (rhdh-rhoai-bridge) to allow access to OCP resources like routes, services, and inferenceservices.
    • Role and RoleBinding to allow ConfigMap updates within the RHDH namespace.
    • RoleBinding in the RHOAI namespace to grant the RHDH ServiceAccount read permissions to the Model Registry data (binding to registry-user-modelregistry-public).
    • Secret (rhdh-rhoai-bridge-token) of type kubernetes.io/service-account-token that goes along with the rhdh-rhoai-bridge ServiceAccount.
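    As a sketch of the ServiceAccount and its token Secret (assuming the ai-rhdh namespace used in this example; adjust names and namespaces to match your cluster):

```yaml
# ServiceAccount the connector uses to query RHOAI.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rhdh-rhoai-bridge
  namespace: ai-rhdh
---
# Long-lived token Secret bound to the ServiceAccount via the
# kubernetes.io/service-account.name annotation.
apiVersion: v1
kind: Secret
metadata:
  name: rhdh-rhoai-bridge-token
  namespace: ai-rhdh
  annotations:
    kubernetes.io/service-account.name: rhdh-rhoai-bridge
type: kubernetes.io/service-account-token
```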
  2. Update your RHDH dynamic plugin configuration. The RHDH Pod requires two dynamic plugins.

    1. In your RHDH dynamic plugins ConfigMap, add the following code:

      plugins:
        - disabled: false
          package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/red-hat-developer-hub-backstage-plugin-catalog-backend-module-model-catalog:bs_1.42.5__0.7.0!red-hat-developer-hub-backstage-plugin-catalog-backend-module-model-catalog
        - disabled: false
          package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/red-hat-developer-hub-backstage-plugin-catalog-techdoc-url-reader-backend:bs_1.42.5__0.3.0!red-hat-developer-hub-backstage-plugin-catalog-techdoc-url-reader-backend
  3. Add the Connector sidecar containers to the RHDH Pod.

    • If RHDH was installed using the Operator, modify your RHDH custom resource (CR) instance.
    • If RHDH was installed using the Helm charts, modify the Deployment specification.
  4. The system relies on three sidecar containers (model catalog bridge) running alongside the backstage-backend container. Add these sidecar containers to your configuration referencing the rhdh-rhoai-bridge-token Secret:

    • location: Provides the REST API for RHDH plugins to fetch model metadata.
    • storage-rest: Maintains a cache of AI Model metadata in a ConfigMap called bac-import-model.
    • rhoai-normalizer: Acts as a Kubernetes controller and RHOAI client, normalizing RHOAI metadata for the connector. The following code block is an example:

      spec:
        template:
          spec:
            containers:
              - name: backstage-backend
              - env:
                  - name: NORMALIZER_FORMAT
                    value: JsonArrayFormat
                  - name: POD_IP
                    valueFrom:
                      fieldRef:
                        fieldPath: status.podIP
                  - name: POD_NAMESPACE
                    valueFrom:
                      fieldRef:
                        fieldPath: metadata.namespace
                envFrom:
                  - secretRef:
                      name: rhdh-rhoai-bridge-token
                image: quay.io/redhat-ai-dev/model-catalog-location-service@sha256:4f6ab6624a29f627f9f861cfcd5d18177d46aa2c67a81a75a1502c49bc2ff012
                imagePullPolicy: Always
                name: location
                ports:
                  - containerPort: 9090
                    name: location
                    protocol: TCP
                volumeMounts:
                  - mountPath: /opt/app-root/src/dynamic-plugins-root
                    name: dynamic-plugins-root
                workingDir: /opt/app-root/src
              - env:
                  - name: NORMALIZER_FORMAT
                    value: JsonArrayFormat
                  - name: STORAGE_TYPE
                    value: ConfigMap
                  - name: BRIDGE_URL
                    value: http://localhost:9090
                  - name: POD_IP
                    valueFrom:
                      fieldRef:
                        fieldPath: status.podIP
                  - name: POD_NAMESPACE
                    valueFrom:
                      fieldRef:
                        fieldPath: metadata.namespace
                envFrom:
                  - secretRef:
                      name: rhdh-rhoai-bridge-token
                image: quay.io/redhat-ai-dev/model-catalog-storage-rest@sha256:398095e7469e86d84b1196371286363f4b7668aa3e26370b4d78cb8d4ace1dc9
                imagePullPolicy: Always
                name: storage-rest
                volumeMounts:
                  - mountPath: /opt/app-root/src/dynamic-plugins-root
                    name: dynamic-plugins-root
                workingDir: /opt/app-root/src
              - env:
                  - name: NORMALIZER_FORMAT
                    value: JsonArrayFormat
                  - name: POD_IP
                    valueFrom:
                      fieldRef:
                        fieldPath: status.podIP
                  - name: POD_NAMESPACE
                    valueFrom:
                      fieldRef:
                        fieldPath: metadata.namespace
                envFrom:
                  - secretRef:
                      name: rhdh-rhoai-bridge-token
                image: quay.io/redhat-ai-dev/model-catalog-rhoai-normalizer@sha256:fe6c05d57495d6217c4d584940ec552c3727847ff60f39f5d04f94be024576d8
                imagePullPolicy: Always
                name: rhoai-normalizer
                volumeMounts:
                  - mountPath: /opt/app-root/src/dynamic-plugins-root
                    name: dynamic-plugins-root
                workingDir: /opt/app-root/src
  5. Enable the Connector in your RHDH app-config.yaml file. In your Backstage app-config.extra.yaml file, configure the Entity Provider under the catalog.providers section:

    providers:
      modelCatalog:
        development:
          baseUrl: http://localhost:9090

where:

modelCatalog
Specifies the name of the provider.
development
Defines future connector capability beyond a single baseUrl.
baseUrl
For Developer Preview, this value is the only one supported. Future releases might support external routes.
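Showing the fragment in context, the provider entry sits under the top-level catalog key of the app-config file:

```yaml
catalog:
  providers:
    modelCatalog:
      development:
        baseUrl: http://localhost:9090
```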

3. Enrich AI model metadata for enhanced Red Hat Developer Hub experience

While RHOAI provides essential data, an AI platform engineer can enrich the Backstage experience by adding custom properties to the ModelVersion or RegisteredModel (or annotations to the KServe InferenceService if the Model Registry is not used) so that the OpenShift AI Connector for Red Hat Developer Hub can add the information to the RHDH entities it creates.

Property Key | Entity Field Populated | Description
---------------------------------------------------
API Spec | API Definition tab | The OpenAPI/Swagger JSON specification for the model REST API.
API Type | API Type | Correlates to supported RHDH/Backstage API types (defaults to openapi).
TechDocs | TechDocs | URL pointing to a Git repository that follows RHDH TechDocs conventions for the Model Card. Use this setting only if the Model Card to TechDocs mapping is not active.
Homepage URL | Links | A URL considered the home page for the model.
Owner | Owner | Overrides the default OpenShift user as the entity owner.
Lifecycle | Lifecycle | Serves as a means to express the RHDH/Backstage notion of lifecycle.
How to use | Links | A URL that points to usage documentation.
License | Links | A URL to the license file of the model.

3.1. Populating the API Definition tab

Because RHOAI does not expose the OpenAPI specification by default, the AI platform engineer must follow these steps to provide this information.

Procedure

  1. Retrieve OpenAPI JSON: Use a tool like curl to fetch the specification from the /openapi.json endpoint of the running AI model server. The following command shows how to include a Bearer token if the model requires authentication for access.

    curl -k -H "Authorization: Bearer $MODEL_API_KEY" https://$MODEL_ROOT_URL_INCLUDING_PORT/openapi.json | jq > open-api.json
  2. Set Property in RHOAI.

    1. In the RHOAI dashboard, go to Model Registry and select the appropriate Model Version.

      Note

      We recommend using Model Version instead of Registered Model to maintain stability if the API changes between versions.

    2. In the Properties section, set a key/value pair where the key is API Spec and the value is the entire JSON content from the open-api.json file.
  3. Propagation: The OpenShift AI Connector for Red Hat Developer Hub periodically polls the RHOAI Model Registry, propagates this JSON, and renders the interactive API documentation in the RHDH API Entity Definition tab.

4. Troubleshooting Connector functionality

The connector system consists of the two dynamic plugins and the three Model Catalog Bridge sidecar containers. Generally speaking, the logs collected should be provided to Red Hat Support for analysis.

The actual contents of the diagnostic data are not part of any product guaranteed specification, and can change at any time.

4.1. Checking Dynamic Plugins status

Validate that the dynamic plugins have been successfully installed into your RHDH project Pod by using the following command:


oc logs -c install-dynamic-plugins deployment/<your RHDH deployment>

In the install-dynamic-plugins logs, check that the following plugins installed successfully:

  • red-hat-developer-hub-backstage-plugin-catalog-backend-module-model-catalog (Entity Provider)
  • red-hat-developer-hub-backstage-plugin-catalog-techdoc-url-reader-backend (TechDoc URL Reader)

4.2. Inspecting plugin logs

View the OpenShift AI Connector for Red Hat Developer Hub plugin logs in the backstage-backend container. Items to look for:

Plugin Component | Logger Service Target | Common Log Text
----------------------------------------------------------
Model Catalog Entity Provider | ModelCatalogResourceEntityProvider | Discovering ResourceEntities from Model Server…​
Model Catalog TechDoc URL Reader | ModelCatalogBridgeTechdocUrlReader | ModelCatalogBridgeTechdocUrlReader.readUrl

To enable debug logging, set the LOG_LEVEL environment variable to debug on the backstage-backend container. For more information, see Monitoring and logging.
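As a sketch, the variable can be set on the backstage-backend container in your Deployment specification (or the equivalent container spec in the RHDH Operator custom resource):

```yaml
# Enable debug logging for the RHDH backend and its plugins.
- name: backstage-backend
  env:
    - name: LOG_LEVEL
      value: debug
```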

4.3. Inspecting the Model Catalog Bridge

The Model Catalog Bridge sidecars manage the data fetching and storage:

Important

OpenShift AI Connector for Red Hat Developer Hub collects feedback from users who engage with the feedback feature. If a user submits feedback, the feedback score (thumbs up or down), text feedback (if entered), the user query, and the LLM provider response are stored locally in the file system of the Pod. Red Hat does not have access to the collected feedback data.

  1. Check Cached Data (ConfigMap): The processed AI Model metadata is stored in a ConfigMap.

    oc get configmap bac-import-model -o json | jq -r '.binaryData | to_entries[] | "=== \(.key) ===\n" + (.value | @base64d | fromjson | .body | @base64d | fromjson | tostring)' | jq -R 'if startswith("=== ") then . else (. | fromjson) end'
  2. Check Location Service API: Confirm the location service is providing data to the RHDH Entity Provider.

    oc rsh -c backstage-backend deployment/<your RHDH deployment>
    curl http://localhost:9090/list
  3. Check Sidecar Container Logs:

    oc logs -c rhoai-normalizer deployment/<your RHDH deployment>
    oc logs -c storage-rest deployment/<your RHDH deployment>
    oc logs -c location deployment/<your RHDH deployment>

4.4. OpenShift AI model registry and model catalog queries

To access the same RHOAI data as the connector, use curl to query the RHOAI model registry and model catalog APIs, ensuring the ServiceAccount token has correct access control:

  • Example: Fetch registered models

    curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_REGISTRY_URL/api/model_registry/v1alpha3/registered_models | jq
  • Example: Fetch model versions

    curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_REGISTRY_URL/api/model_registry/v1alpha3/model_versions | jq
  • Example: Fetch model artifacts

    curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_REGISTRY_URL/api/model_registry/v1alpha3/model_artifacts | jq
  • Example: Fetch inference services

    curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_REGISTRY_URL/api/model_registry/v1alpha3/inference_services | jq
  • Example: Fetch serving environments

    curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_REGISTRY_URL/api/model_registry/v1alpha3/serving_environments | jq
  • Example: Fetch catalog sources

    curl -k -H "Authorization: Bearer $TOKEN" $RHOAI_MODEL_CATALOG_URL/api/model_catalog/v1alpha1/sources | jq

Legal Notice

Copyright © 2025 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.