Red Hat Developer Hub 1.9

Configuring dynamic plugins

Configuring dynamic plugins in Red Hat Developer Hub

Red Hat Customer Content Services

Abstract

As a platform engineer, you can configure dynamic plugins in Red Hat Developer Hub (RHDH) to access your development infrastructure or software development tools.

1. Installing Ansible plugins for Red Hat Developer Hub

Access an Ansible-specific portal experience with curated learning paths, push-button content creation, and integrated development tools.

Ansible plugins for Red Hat Developer Hub deliver an Ansible-specific portal experience with curated learning paths, push-button content creation, integrated development tools, and other opinionated resources.

2. Install and configure Argo CD

You can use the Argo CD plugin to visualize the Continuous Delivery (CD) workflows in OpenShift GitOps.

2.1. Enable the Argo CD plugin

The Argo CD plugin provides a visual overview of the application's status, deployment details, commit message, author of the commit, container image promoted to environment and deployment history.

Prerequisites

  • Add Argo CD instance information to your app-config.yaml configmap as shown in the following example:

    argocd:
      appLocatorMethods:
        - type: 'config'
          instances:
            - name: argoInstance1
              url: https://argoInstance1.com
              username: ${ARGOCD_USERNAME}
              password: ${ARGOCD_PASSWORD}
            - name: argoInstance2
              url: https://argoInstance2.com
              username: ${ARGOCD_USERNAME}
              password: ${ARGOCD_PASSWORD}
    Note

    Avoid using a trailing slash in the url, as it might cause unexpected behavior.

  • Add the following annotation to the entity's catalog-info.yaml file to identify the Argo CD applications.

    annotations:
      ...
      # The label that Argo CD uses to fetch all the applications. The format to be used is label.key=label.value. For example, rht-gitops.com/janus-argocd=quarkus-app.
    
      argocd/app-selector: '${ARGOCD_LABEL_SELECTOR}'
  • (Optional) Add the following annotation to the entity's catalog-info.yaml file to switch between Argo CD instances as shown in the following example:

     annotations:
       ...
        # The Argo CD instance name used in `app-config.yaml`.
    
        argocd/instance-name: '${ARGOCD_INSTANCE}'
    Note

    If you do not set this annotation, the Argo CD plugin defaults to the first Argo CD instance configured in app-config.yaml.
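Taken together, the prerequisite annotations might appear in an entity's catalog-info.yaml as in the following sketch. The entity name and owner are placeholders; the app-selector value reuses the example label from the comment above, and the instance name refers to the second instance from the example app-config.yaml:

```yaml
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: my-quarkus-app   # placeholder entity name
  annotations:
    # Label that Argo CD uses to fetch the applications
    argocd/app-selector: 'rht-gitops.com/janus-argocd=quarkus-app'
    # Optional: pin this entity to a specific Argo CD instance from app-config.yaml
    argocd/instance-name: 'argoInstance2'
spec:
  type: service
  owner: my-team   # placeholder owner
```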

Procedure

  • Add the following to your dynamic-plugins ConfigMap to enable the Argo CD plugin.

    global:
      dynamic:
        includes:
          - dynamic-plugins.default.yaml
        plugins:
          - package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/roadiehq-backstage-plugin-argo-cd-backend:<tag>
            disabled: false
          - package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/backstage-community-plugin-argocd:<tag>
            disabled: false

The <tag> variable is your RHDH application's version of Backstage and the plugin version, in the format: bs_<backstage-version>__<plugin-version> (note the double underscore delimiter).

For example, because RHDH 1.9 is based on Backstage 1.45.3, the tag will have the format bs_1.45.3__<plugin-version>.

Tip

To ensure environment stability, use a SHA256 digest instead of a version tag. See Determining SHA256 Digests.
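As a sketch, a digest-pinned plugin reference replaces the tag with an `@sha256:` suffix. The digest value below is a placeholder, not a real value:

```yaml
global:
  dynamic:
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      # Pinning by digest instead of a mutable tag keeps deployments reproducible
      - package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/backstage-community-plugin-argocd@sha256:<digest>
        disabled: false
```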

2.2. Enable Argo CD Rollouts

Enable advanced deployment strategies such as blue-green and canary deployments by integrating Argo CD Rollouts with the Red Hat Developer Hub Kubernetes plugin.

The optional Argo CD Rollouts feature enhances Kubernetes by providing advanced deployment strategies, such as blue-green and canary deployments, for your applications. When integrated into the Backstage Kubernetes plugin, it allows developers and operations teams to visualize and manage Argo CD Rollouts seamlessly within the Developer Hub interface.

Prerequisites

  • The Developer Hub Kubernetes plugin (@backstage/plugin-kubernetes) is installed and configured.

  • You have access to the Kubernetes cluster with the necessary permissions to create and manage custom resources and ClusterRoles.
  • The Kubernetes cluster has the argoproj.io group resources (for example, Rollouts and Analysis Runs) installed.

Procedure

  1. In the app-config.yaml file in your Developer Hub instance, add the following customResources component under the kubernetes configuration to enable Argo Rollouts and Analysis Runs:

    kubernetes:
      ...
      customResources:
        - group: 'argoproj.io'
          apiVersion: 'v1alpha1'
          plural: 'rollouts'
        - group: 'argoproj.io'
          apiVersion: 'v1alpha1'
          plural: 'analysisruns'
  2. Grant ClusterRole permissions for custom resources.

    Note

    If the Developer Hub Kubernetes plugin is already configured, the ClusterRole permissions for Rollouts and AnalysisRuns might already be granted. You can use the prepared manifest to give read-only ClusterRole access to both the Kubernetes and Argo CD plugins.

    1. If the ClusterRole permission is not granted, use the following YAML manifest to create the ClusterRole:
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: backstage-read-only
    rules:
      - apiGroups:
          - argoproj.io
        resources:
          - rollouts
          - analysisruns
        verbs:
          - get
          - list
    2. Apply the manifest to the cluster using kubectl:

      $ kubectl apply -f <your_cluster_role_file>.yaml
    3. Ensure the ServiceAccount accessing the cluster has this ClusterRole assigned.
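One way to assign the ClusterRole is a ClusterRoleBinding, sketched below. The ServiceAccount name and namespace are placeholders for the account that the Kubernetes plugin uses to access the cluster:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: backstage-read-only
subjects:
  # Placeholder: the ServiceAccount configured for the Kubernetes plugin
  - kind: ServiceAccount
    name: <service_account_name>
    namespace: <service_account_namespace>
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: backstage-read-only
```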
  3. Add annotations to catalog-info.yaml to identify Kubernetes resources for Backstage.

    1. For identifying resources by entity ID:

      annotations:
        ...
        backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>
    2. (Optional) For identifying resources by namespace:

      annotations:
        ...
        backstage.io/kubernetes-namespace: <RESOURCE_NAMESPACE>
    3. For using custom label selectors, which override resource identification by entity ID or namespace:

      annotations:
        ...
        backstage.io/kubernetes-label-selector: 'app=my-app,component=front-end'
      Note

      Ensure you specify the labels declared in backstage.io/kubernetes-label-selector on your Kubernetes resources. This annotation overrides entity-based or namespace-based identification annotations, such as backstage.io/kubernetes-id and backstage.io/kubernetes-namespace.

  4. Add labels to Kubernetes resources to enable Developer Hub to find the appropriate Kubernetes resources.

    1. Developer Hub Kubernetes plugin label: Add this label to map resources to specific Developer Hub entities.

      labels:
        ...
        backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>
    2. GitOps application mapping: Add this label to map Argo CD Rollouts to a specific GitOps application:

      labels:
        ...
        app.kubernetes.io/instance: <GITOPS_APPLICATION_NAME>
    Note

    If using the label selector annotation (backstage.io/kubernetes-label-selector), ensure the specified labels are present on the resources. The label selector will override other annotations such as kubernetes-id or kubernetes-namespace.
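Combining the labeling steps above, the metadata of a workload resource might be sketched as follows (the kind, name, and label values are placeholders):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app   # placeholder workload name
  labels:
    # Maps this resource to a specific Developer Hub entity
    backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>
    # Maps this resource to a specific GitOps application
    app.kubernetes.io/instance: <GITOPS_APPLICATION_NAME>
```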

Verification

  1. Push the updated configuration to your GitOps repository to trigger a rollout.
  2. Open Red Hat Developer Hub interface and navigate to the entity you configured.
  3. Select the CD tab and then select the GitOps application. The side panel opens.
  4. In the Resources table of the side panel, verify that the following resources are displayed:
     • Rollouts
     • Analysis Runs (optional)
  5. Expand a rollout resource and review the following details:
     • The Revisions row displays traffic distribution details for different rollout versions.
     • The Analysis Runs row displays the status of analysis tasks that evaluate rollout success.

3. Enable and configure the JFrog plugin

Enable and configure the JFrog Artifactory plugin to display container images from your repository in Red Hat Developer Hub.

3.1. Enable the JFrog Artifactory plugin

To enable the JFrog Artifactory plugin, set the disabled property to false.

Procedure

  • Set the value to false as follows:

    global:
      dynamic:
        includes:
          - dynamic-plugins.default.yaml
        plugins:
          - package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/backstage-community-plugin-jfrog-artifactory:<tag>
            disabled: false

    The <tag> variable is your RHDH application's version of Backstage and the plugin version, in the format: bs_<backstage-version>__<plugin-version> (note the double underscore delimiter).

  • To find the correct image tag for <tag>:

    1. Look in the RHDH release notes preface for your Backstage version.
    2. Locate the plugin version for paths starting with oci://ghcr.io within one of the tables in the Dynamic Plugins Reference guide.

For example, because RHDH 1.9 is based on Backstage 1.45.3, the tag will have the format bs_1.45.3__<plugin-version>.

Tip

To ensure environment stability, use a SHA256 digest instead of a version tag. See Determining SHA256 Digests.

3.2. Configure the JFrog Artifactory plugin

Configure proxy settings and annotations to display container images stored in your JFrog Artifactory repository.

Procedure

  1. Set the proxy to the required JFrog Artifactory server in the app-config.yaml file as follows:

    proxy:
      endpoints:
        '/jfrog-artifactory/api':
          target: http://<hostname>:8082 # or https://<customer>.jfrog.io
          headers:
          # Authorization: 'Bearer <YOUR TOKEN>'
          # Change to "false" in case of using a self-hosted Artifactory instance with a self-signed certificate
          secure: true
  2. Add the following annotation to the entity’s catalog-info.yaml file to enable the JFrog Artifactory plugin features in RHDH components:

    metadata:
        annotations:
          'jfrog-artifactory/image-name': '<IMAGE-NAME>'

4. Enable and configure the Keycloak plugin

Integrate Keycloak into Red Hat Developer Hub to synchronize users and groups from your Red Hat Build of Keycloak (RHBK) realm. The supported RHBK version is 26.0.

4.1. Enable the Keycloak plugin

Enable the Keycloak plugin to synchronize users and groups from your Red Hat Build of Keycloak realm into Red Hat Developer Hub.

Prerequisites

  • To enable the Keycloak plugin, you must set the following environment variables:

    • KEYCLOAK_BASE_URL
    • KEYCLOAK_LOGIN_REALM
    • KEYCLOAK_REALM
    • KEYCLOAK_CLIENT_ID
    • KEYCLOAK_CLIENT_SECRET

Procedure

  • The Keycloak plugin is pre-loaded in Developer Hub with basic configuration properties. To enable it, set the disabled property to false as follows:

    global:
      dynamic:
        includes:
          - dynamic-plugins.default.yaml
        plugins:
          - package: ./dynamic-plugins/dist/backstage-community-plugin-catalog-backend-module-keycloak-dynamic
            disabled: false

4.2. Configure the Keycloak plugin

Configure schedule frequency, query parameters, and authentication methods for synchronizing Keycloak users and groups.

Procedure

  1. To configure the Keycloak plugin, add the following in your app-config.yaml file:

    schedule

    Configure the schedule frequency, timeout, and initial delay. These fields support cron expressions, ISO 8601 durations, and human-readable duration objects, as shown in the following example:

         catalog:
           providers:
             keycloakOrg:
               default:
                 schedule:
                   frequency: { minutes: 1 }
                   timeout: { minutes: 1 }
                   initialDelay: { seconds: 15 }
    userQuerySize and groupQuerySize

    Optionally, configure the Keycloak query parameters to define the number of users and groups to query at a time. Default values are 100 for both fields.

       catalog:
         providers:
           keycloakOrg:
             default:
               userQuerySize: 100
               groupQuerySize: 100
    Authentication

    Communication between Developer Hub and Keycloak is enabled by using the Keycloak API. The supported authentication methods are username and password, or client credentials.

    The following table describes the parameters that you can configure to enable the plugin under catalog.providers.keycloakOrg.<ENVIRONMENT_NAME> object in the app-config.yaml file:

    Name           | Description                                                           | Default value | Required
    baseUrl        | Location of the Keycloak server, such as https://localhost:8443/auth. | ""            | Yes
    realm          | Realm to synchronize                                                  | master        | No
    loginRealm     | Realm used to authenticate                                            | master        | No
    username       | Username to authenticate                                              | ""            | Yes, for password-based authentication
    password       | Password to authenticate                                              | ""            | Yes, for password-based authentication
    clientId       | Client ID to authenticate                                             | ""            | Yes, for client credentials-based authentication
    clientSecret   | Client secret to authenticate                                         | ""            | Yes, for client credentials-based authentication
    userQuerySize  | Number of users to query at a time                                    | 100           | No
    groupQuerySize | Number of groups to query at a time                                   | 100           | No

  2. When you use client credentials, complete the following steps:

    1. Set the access type to confidential.
    2. Enable service accounts.
    3. Add the following roles from the realm-management client role:
       • query-groups
       • query-users
       • view-users
  3. Optionally, if you have self-signed or corporate certificate issues, you can set the following environment variable before starting Developer Hub:

    NODE_TLS_REJECT_UNAUTHORIZED=0
    Warning

    Setting this environment variable disables TLS certificate validation and is not recommended.
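As an illustration of the client credentials method, the provider entry might be sketched as follows. The environment variable names match the prerequisites listed earlier; the `default` environment name is an example:

```yaml
catalog:
  providers:
    keycloakOrg:
      default:
        baseUrl: ${KEYCLOAK_BASE_URL}
        loginRealm: ${KEYCLOAK_LOGIN_REALM}
        realm: ${KEYCLOAK_REALM}
        # Client credentials authentication; alternatively, set username and password
        clientId: ${KEYCLOAK_CLIENT_ID}
        clientSecret: ${KEYCLOAK_CLIENT_SECRET}
```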

4.3. Keycloak plugin metrics

Monitor Keycloak fetch operations and diagnose issues by using OpenTelemetry metrics with Prometheus or Grafana.

The Keycloak backend plugin supports OpenTelemetry metrics that you can use to monitor fetch operations and diagnose potential issues.

4.3.1. Available Counters

Table 1. Keycloak metrics

Metric name                                           | Description
backend_keycloak_fetch_task_failure_count_total       | Counts fetch task failures where no data was returned due to an error.
backend_keycloak_fetch_data_batch_failure_count_total | Counts partial data batch failures. Even if some batches fail, the plugin continues fetching others.

4.3.2. Labels

All counters include the taskInstanceId label, which uniquely identifies each scheduled fetch task. You can use this label to trace failures back to individual task executions.

Users can enter queries in the Prometheus UI or Grafana to explore and manipulate metric data.

In the following examples, a Prometheus Query Language (PromQL) expression returns the number of backend failures.

Example to get the number of backend failures associated with a taskInstanceId

backend_keycloak_fetch_data_batch_failure_count_total{taskInstanceId="df040f82-2e80-44bd-83b0-06a984ca05ba"} 1

Example to get the number of backend failures during the last hour

sum(backend_keycloak_fetch_data_batch_failure_count_total) - sum(backend_keycloak_fetch_data_batch_failure_count_total offset 1h)

Note

PromQL supports arithmetic operations, comparison operators, logical/set operations, aggregation, and various functions. Users can combine these features to analyze time-series data effectively.

Additionally, the results can be visualized using Grafana.

4.3.3. Exporting Metrics

You can export metrics by using any OpenTelemetry-compatible backend, such as Prometheus.

5. Enable and configure the Nexus Repository Manager plugin

Use the Nexus Repository Manager plugin to view build artifacts in your Developer Hub application. You can find this community-sourced plugin in the Community plugins migration table.

5.1. Enable the Nexus Repository Manager plugin

To enable the Nexus Repository Manager plugin, set the disabled property to false.

Procedure

  • Set the value to false as follows:

    global:
      dynamic:
        includes:
          - dynamic-plugins.default.yaml
        plugins:
          - package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/backstage-community-plugin-nexus-repository-manager:<tag>
            disabled: false

    The <tag> variable is your RHDH application's version of Backstage and the plugin version, in the format: bs_<backstage-version>__<plugin-version> (note the double underscore delimiter).

  • To find the correct image tag for <tag>:

    1. Look in the RHDH release notes preface for your Backstage version.
    2. Locate the plugin version for paths starting with oci://ghcr.io within one of the tables in the Dynamic Plugins Reference guide.

For example, because RHDH 1.9 is based on Backstage 1.45.3, the tag will have the format bs_1.45.3__<plugin-version>.

Tip

To ensure environment stability, use a SHA256 digest instead of a version tag. See Determining SHA256 Digests.

5.2. Configure the Nexus Repository Manager plugin

Configure the Nexus Repository Manager plugin to display artifact information from your Nexus Repository Manager instance.

Procedure

  1. Set the proxy to the required Nexus Repository Manager server in the app-config.yaml file as follows:

    proxy:
      endpoints:
        '/nexus-repository-manager':
          target: 'https://<NEXUS_REPOSITORY_MANAGER_URL>'
          headers:
            X-Requested-With: 'XMLHttpRequest'
            # Uncomment the following line to access a private Nexus Repository Manager using a token
            # Authorization: 'Bearer <YOUR TOKEN>'
          changeOrigin: true
          # Change to "false" in case of using a self-hosted Nexus Repository Manager instance with a self-signed certificate
          secure: true
  2. Optional: Change the base URL of Nexus Repository Manager proxy as follows:

    nexusRepositoryManager:
        # default path is `/nexus-repository-manager`
        proxyPath: /custom-path
  3. Optional: Enable the following experimental annotations:

    nexusRepositoryManager:
        experimentalAnnotations: true
  4. Annotate your entity using the following annotations:

    metadata:
      annotations:
        # Insert the chosen annotations here. For example:
        nexus-repository-manager/docker.image-name: '<ORGANIZATION>/<REPOSITORY>'

6. Enable the Tekton plugin

The Tekton plugin enables you to monitor CI/CD pipeline results across your Kubernetes or OpenShift clusters. It provides a high-level overview of all associated tasks, allowing you to track the real-time status of your application pipelines.

Prerequisites

  • You have installed and configured the @backstage/plugin-kubernetes and @backstage/plugin-kubernetes-backend dynamic plugins.
  • You have configured the Kubernetes plugin to connect to the cluster using a ServiceAccount.
  • The ClusterRole must be granted for custom resources (PipelineRuns and TaskRuns) to the ServiceAccount accessing the cluster.

    Note

    If you have the RHDH Kubernetes plugin configured, then the ClusterRole is already granted.

  • To view the pod logs, you have granted permissions for pods/log.
  • You can use the following code to grant the ClusterRole for custom resources and pod logs:

    kubernetes:
       ...
       customResources:
         - group: 'tekton.dev'
           apiVersion: 'v1'
           plural: 'pipelineruns'
         - group: 'tekton.dev'
           apiVersion: 'v1'
           plural: 'taskruns'

     ...
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: backstage-read-only
      rules:
        - apiGroups:
            - ""
          resources:
            - pods/log
          verbs:
            - get
            - list
            - watch
        ...
        - apiGroups:
            - tekton.dev
          resources:
            - pipelineruns
            - taskruns
          verbs:
            - get
            - list

    You can use the prepared manifest for a read-only ClusterRole, which provides access for both Kubernetes plugin and Tekton plugin.

  • Add the following annotation to the entity’s catalog-info.yaml file to identify whether an entity contains the Kubernetes resources:

    annotations:
      ...
    
      backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>
  • You can also add the backstage.io/kubernetes-namespace annotation to identify the Kubernetes resources using the defined namespace.

    annotations:
      ...
    
      backstage.io/kubernetes-namespace: <RESOURCE_NS>
  • Add the following annotation to the catalog-info.yaml file of the entity to enable the Tekton related features in RHDH. The value of the annotation identifies the name of the RHDH entity:

    annotations:
      ...
    
      janus-idp.io/tekton: <BACKSTAGE_ENTITY_NAME>
  • Add a custom label selector, which RHDH uses to find the Kubernetes resources. The label selector takes precedence over the ID annotations.

    annotations:
      ...
    
      backstage.io/kubernetes-label-selector: 'app=my-app,component=front-end'
  • Add the following label to the resources so that the Kubernetes plugin gets the Kubernetes resources from the requested entity:

    labels:
      ...
    
      backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>
    Note

    When you use the label selector, the mentioned labels must be present on the resource.
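Put together, a minimal catalog-info.yaml annotation block for the Tekton plugin might look like the following sketch (the entity name and namespace are placeholders):

```yaml
metadata:
  annotations:
    # Identifies the Kubernetes resources that belong to this entity
    backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>
    # Optional: restrict the lookup to a defined namespace
    backstage.io/kubernetes-namespace: <RESOURCE_NS>
    # Enables the Tekton-related features in RHDH for this entity
    janus-idp.io/tekton: <BACKSTAGE_ENTITY_NAME>
```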

Procedure

  • To enable the Tekton plugin, set the disabled property to false as follows:

    global:
      dynamic:
        includes:
          - dynamic-plugins.default.yaml
        plugins:
          - package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/backstage-community-plugin-tekton:<tag>
            disabled: false

The <tag> variable is your RHDH application's version of Backstage and the plugin version, in the format: bs_<backstage-version>__<plugin-version> (note the double underscore delimiter).

For example, because RHDH 1.9 is based on Backstage 1.45.3, the tag will have the format bs_1.45.3__<plugin-version>.

Tip

To ensure environment stability, use a SHA256 digest instead of a version tag. See Determining SHA256 Digests.

7. Install the Topology plugin

Install and configure the Topology plugin to visualize Kubernetes workloads and manage labels and annotations.

7.1. Install the Topology plugin

Visualize Kubernetes workloads like Deployments, Pods, and Virtual Machines by enabling the Topology plugin.

The Topology plugin enables you to visualize workloads such as Deployments, Jobs, DaemonSets, StatefulSets, CronJobs, Pods, and VirtualMachines powering any service on your Kubernetes cluster.

Prerequisites

  • You have installed and configured the @backstage/plugin-kubernetes-backend dynamic plugins.
  • You have configured the Kubernetes plugin to connect to the cluster using a ServiceAccount.
  • The ClusterRole must be granted to the ServiceAccount accessing the cluster.

    Note

    If you have the Developer Hub Kubernetes plugin configured, then the ClusterRole is already granted.

Procedure

  • The Topology plugin is pre-loaded in Developer Hub with basic configuration properties. To enable it, set the disabled property to false as follows:

    app-config.yaml fragment

    global:
      dynamic:
        includes:
          - dynamic-plugins.default.yaml
        plugins:
          - package: ./dynamic-plugins/dist/backstage-community-plugin-topology
            disabled: false

7.2. Configure the Topology plugin

Configure the Topology plugin to view OpenShift routes, pod logs, Tekton PipelineRuns, and virtual machines.

7.2.1. View OpenShift routes

Grant read access to routes resource in ClusterRole to view OpenShift routes in the Topology plugin.

Procedure

  1. To view OpenShift routes, grant read access to the routes resource in the ClusterRole:

      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: backstage-read-only
      rules:
        ...
        - apiGroups:
            - route.openshift.io
          resources:
            - routes
          verbs:
            - get
            - list
  2. Also add the following in kubernetes.customResources property in your app-config.yaml file:

    kubernetes:
        ...
        customResources:
          - group: 'route.openshift.io'
            apiVersion: 'v1'
            plural: 'routes'

7.2.2. View pod logs

Grant ClusterRole permissions to pods and pods/log resources to view pod logs in the Topology plugin.

Procedure

  • To view pod logs, you must grant the following permission to the ClusterRole:

      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: backstage-read-only
      rules:
        ...
        - apiGroups:
            - ''
          resources:
            - pods
            - pods/log
          verbs:
            - get
            - list
            - watch

7.2.3. View Tekton PipelineRuns

Grant ClusterRole access to Tekton resources to view PipelineRuns status in the Topology plugin.

Procedure

  1. To view the Tekton PipelineRuns, grant read access to the pipelines, pipelineruns, and taskruns resources in the ClusterRole:

     ...
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: backstage-read-only
      rules:
        ...
        - apiGroups:
            - tekton.dev
          resources:
            - pipelines
            - pipelineruns
            - taskruns
          verbs:
            - get
            - list
  2. To view the Tekton PipelineRuns list in the side panel and the latest PipelineRuns status in the Topology node decorator, add the following code to the kubernetes.customResources property in your app-config.yaml file:

    kubernetes:
        ...
        customResources:
          - group: 'tekton.dev'
            apiVersion: 'v1'
            plural: 'pipelines'
          - group: 'tekton.dev'
            apiVersion: 'v1'
            plural: 'pipelineruns'
          - group: 'tekton.dev'
            apiVersion: 'v1'
            plural: 'taskruns'

7.2.4. View virtual machines

Grant ClusterRole access to VirtualMachines resources to view virtual machine nodes in the Topology plugin.

Prerequisites

  • The OpenShift Virtualization operator is installed and configured on a Kubernetes cluster.

Procedure

  1. Grant read access to the VirtualMachines resource in the ClusterRole:

     ...
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: backstage-read-only
      rules:
        ...
        - apiGroups:
            - kubevirt.io
          resources:
            - virtualmachines
            - virtualmachineinstances
          verbs:
            - get
            - list
  2. To view the virtual machine nodes on the topology plugin, add the following code to the kubernetes.customResources property in the app-config.yaml file:

    kubernetes:
        ...
        customResources:
          - group: 'kubevirt.io'
            apiVersion: 'v1'
            plural: 'virtualmachines'
          - group: 'kubevirt.io'
            apiVersion: 'v1'
            plural: 'virtualmachineinstances'

7.2.5. Enable the source code editor

Enable the source code editor to allow developers to open source code directly from RHDH.

Procedure

  1. Grant read access to the CheClusters resource in the ClusterRole:

     ...
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: backstage-read-only
      rules:
        ...
        - apiGroups:
            - org.eclipse.che
          resources:
            - checlusters
          verbs:
            - get
            - list
  2. Add the following configuration to the kubernetes.customResources property in your app-config.yaml file:

     kubernetes:
        ...
        customResources:
          - group: 'org.eclipse.che'
            apiVersion: 'v2'
            plural: 'checlusters'

7.3. Manage labels and annotations for Topology plugins

Configure labels and annotations to customize Kubernetes resource identification and visualization in the Topology plugin.

7.3.2. Add entity annotations and labels for the Kubernetes plugin

Add annotations and labels to enable RHDH to detect that an entity has Kubernetes components.

Procedure

  1. Add the following annotation to the catalog-info.yaml file of the entity:

    annotations:
      backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>
  2. Add the following label to the resources so that the Kubernetes plugin gets the Kubernetes resources from the requested entity:

    labels:
      backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>
    Note

    When using the label selector, the mentioned labels must be present on the resource.

7.3.3. Namespace annotation

Identify Kubernetes resources by namespace using the backstage.io/kubernetes-namespace annotation.

Procedure

  • To identify the Kubernetes resources using the defined namespace, add the backstage.io/kubernetes-namespace annotation:

    annotations:
      backstage.io/kubernetes-namespace: <RESOURCE_NS>

    The Red Hat OpenShift Dev Spaces instance is not accessible using the source code editor if the backstage.io/kubernetes-namespace annotation is added to the catalog-info.yaml file.

    To retrieve the instance URL, you require the CheCluster custom resource (CR). As the CheCluster CR is created in the openshift-devspaces namespace, the instance URL is not retrieved if the namespace annotation value is not openshift-devspaces.

7.3.4. Add a label selector query annotation

Add a custom label selector annotation so that RHDH uses your custom labels to find Kubernetes resources.

Procedure

  1. Add the backstage.io/kubernetes-label-selector annotation to the catalog-info.yaml file of the entity. The label selector takes precedence over the ID annotations:

    annotations:
      backstage.io/kubernetes-label-selector: 'app=my-app,component=front-end'
  2. Optional: If Red Hat Dev Spaces is configured and you want multiple entities to support the edit code decorator that redirects to the Red Hat Dev Spaces instance, add the backstage.io/kubernetes-label-selector annotation to the catalog-info.yaml file for each entity:

    annotations:
      backstage.io/kubernetes-label-selector: 'component in (<BACKSTAGE_ENTITY_NAME>,che)'
  3. If you are using the previous label selector, add the following labels to your resources so that the Kubernetes plugin gets the Kubernetes resources from the requested entity:

    labels:
      component: che # add this label to your che cluster instance
    labels:
      component: <BACKSTAGE_ENTITY_NAME> # add this label to the other resources associated with your entity

    You can also write your own custom query for the label selector with unique labels to differentiate your entities. However, you need to ensure that you add those labels to the resources associated with your entities including your CheCluster instance.

7.3.5. Display a runtime icon in the topology node

Add a label to workload resources to display a runtime icon in the topology nodes.

Procedure

  • Add the following label to workload resources, such as Deployments:

    labels:
      app.openshift.io/runtime: <RUNTIME_NAME>

    Alternatively, you can include the following label to display the runtime icon:

    labels:
      app.kubernetes.io/name: <RUNTIME_NAME>

    Supported values of <RUNTIME_NAME> include django, dotnet, drupal, go-gopher, golang, grails, jboss, jruby, js, nginx, nodejs, openjdk, perl, phalcon, php, python, quarkus, rails, redis, rh-spring-boot, rust, java, rh-openjdk, ruby, spring, and spring-boot. Other values result in icons not being rendered for the node.

7.3.6. Group applications in the topology view

Add a label to workload resources to display them in a visual group in the topology view.

Procedure

  • Add the following label to workload resources, such as deployments or pods, to display them in a visual group:

    labels:
      app.kubernetes.io/part-of: <GROUP_NAME>

7.3.7. Node connector

Display visual connectors between workload resources like deployments and pods using annotations.

Procedure

  • To display the workload resources such as deployments or pods with a visual connector, add the following annotation:

    annotations:
      app.openshift.io/connects-to: '[{"apiVersion": <RESOURCE_APIVERSION>,"kind": <RESOURCE_KIND>,"name": <RESOURCE_NAME>}]'
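For example, to draw a connector from a workload to a Deployment named my-database (the resource name is illustrative), quote the JSON values inside the annotation:

```yaml
metadata:
  annotations:
    app.openshift.io/connects-to: '[{"apiVersion": "apps/v1", "kind": "Deployment", "name": "my-database"}]'
```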

8. Bulk importing in Red Hat Developer Hub

Automate onboarding of GitHub repositories and GitLab projects to the Red Hat Developer Hub catalog, and monitor import status by using bulk import capabilities.

Important

These features are for Technology Preview only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information on Red Hat Technology Preview features, see Technology Preview Features Scope.

8.1. Enable and authorize Bulk Import capabilities in Red Hat Developer Hub

Enable Bulk Import plugins and configure RBAC permissions to allow users to import multiple GitHub repositories and GitLab projects into the catalog.

Procedure

  1. The Bulk Import plugins are installed but disabled by default. To enable the ./dynamic-plugins/dist/red-hat-developer-hub-backstage-plugin-bulk-import-backend-dynamic and ./dynamic-plugins/dist/red-hat-developer-hub-backstage-plugin-bulk-import plugins, edit your dynamic-plugins.yaml with the following content:

    plugins:
      - package: ./dynamic-plugins/dist/red-hat-developer-hub-backstage-plugin-bulk-import-backend-dynamic
        disabled: false
      - package: ./dynamic-plugins/dist/red-hat-developer-hub-backstage-plugin-bulk-import
        disabled: false

    See Installing and viewing plugins in Red Hat Developer Hub.

  2. Configure the required bulk.import RBAC permission for the users who are not administrators as shown in the following code:

    rbac-policy.csv fragment

    p, role:default/bulk-import, bulk.import, use, allow
    g, user:default/<your_user>, role:default/bulk-import

    Note that only Developer Hub administrators or users with the bulk.import permission can use the Bulk Import feature. See Permission policies in Red Hat Developer Hub.

Verification

  1. The sidebar displays a Bulk Import option.
  2. The Bulk Import page shows a list of added GitHub repositories and GitLab projects.

8.2. Import multiple GitHub repositories

Select and import multiple GitHub repositories to the Red Hat Developer Hub catalog, automatically creating pull requests with required catalog-info.yaml files.

Procedure

  1. Click Bulk Import in the Developer Hub left sidebar.
  2. If your RHDH instance has multiple source control tools configured, select GitHub from the Source control tool list.
  3. Select the repositories to import, and validate.

    Developer Hub creates a pull request in each selected repository to add the required catalog-info.yaml file.

  4. For each repository to import, click PR to review and merge the changes in GitHub.

Verification

  1. Click Bulk Import in the Developer Hub left sidebar.
  2. Verify that each imported GitHub repository in the Selected repositories list has the status Waiting for approval or Imported.
  3. For each Waiting for approval repository, click the pull request link to review and merge the catalog-info.yaml file in the corresponding repository.

8.3. Import multiple GitLab repositories

Select and import multiple GitLab projects to the Red Hat Developer Hub catalog by using Technology Preview bulk import capabilities.

Important

These features are for Technology Preview only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information on Red Hat Technology Preview features, see Technology Preview Features Scope.

Prerequisites

  • You have enabled the Bulk Import feature and have been granted access to it.
  • You have set up a GitLab personal access token (PAT).
  • You configured the GitLab integration by adding the following section to your RHDH app-config.yaml file:

    integrations:
      gitlab:
        - host: ${GITLAB_HOST}
          token: ${GITLAB_TOKEN}
  • You enabled the GitLab catalog provider plugin in your dynamic-plugins.yaml file to import GitLab users and groups:

    plugins:
      - package: './dynamic-plugins/dist/backstage-plugin-catalog-backend-module-gitlab-org-dynamic'
        disabled: false

Procedure

  1. In the Developer Hub left sidebar, click Bulk Import.
  2. If your RHDH instance has multiple source control tools configured, select GitLab as your Source control tool option.
  3. Select the projects to import, and validate.

    Developer Hub creates a merge request in each selected project to add the required catalog-info.yaml file.

  4. For each project to import, click PR to review and merge the changes in GitLab.

Verification

  1. Click Bulk Import in the Developer Hub left sidebar.
  2. Verify that each imported GitLab project in the Selected projects list has the status Waiting for approval or Imported.
  3. For projects with the Waiting for approval status, click the merge request link to add the catalog-info.yaml file to the project repository.

8.4. Monitor Bulk Import actions using audit logs

Review Bulk Import backend plugin audit log events to monitor repository import operations, track API requests, and troubleshoot import issues.

Procedure

  1. Access your Developer Hub backend logs where audit log events are recorded.
  2. Review the following Bulk Import audit log events to monitor repository operations:

    BulkImportUnknownEndpoint
    Tracks requests to unknown endpoints.
    BulkImportPing
    Tracks GET requests to the /ping endpoint, which verifies that the bulk import backend is up and running.
    BulkImportFindAllOrganizations
    Tracks GET requests to the /organizations endpoint, which returns the list of organizations accessible from all configured GitHub Integrations.
    BulkImportFindRepositoriesByOrganization
    Tracks GET requests to the /organizations/:orgName/repositories endpoint, which returns the list of repositories for the specified organization (accessible from any of the configured GitHub Integrations).
    BulkImportFindAllRepositories
    Tracks GET requests to the /repositories endpoint, which returns the list of repositories accessible from all configured GitHub Integrations.
    BulkImportFindAllImports
    Tracks GET requests to the /imports endpoint, which returns the list of existing import jobs along with their statuses.
    BulkImportCreateImportJobs
    Tracks POST requests to the /imports endpoint, which submits requests to bulk-import one or more repositories into the Developer Hub catalog by creating import pull requests in the target repositories.
    BulkImportFindImportStatusByRepo
    Tracks GET requests to the /import/by-repo endpoint, which fetches details about the import job for the specified repository.
    BulkImportDeleteImportByRepo

    Tracks DELETE requests to the /import/by-repo endpoint, which deletes any existing import job for the specified repository, by closing any open import pull request that could have been created.

    Example audit log output:

    {
      "actor": {
        "actorId": "user:default/myuser",
        "hostname": "localhost",
        "ip": "::1",
        "userAgent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36"
      },
      "eventName": "BulkImportFindAllOrganizations",
      "isAuditLog": true,
      "level": "info",
      "message": "'get /organizations' endpoint hit by user:default/myuser",
      "meta": {},
      "plugin": "bulk-import",
      "request": {
        "body": {},
        "method": "GET",
        "params": {},
        "query": {
          "pagePerIntegration": "1",
          "sizePerIntegration": "5"
        },
        "url": "/api/bulk-import/organizations?pagePerIntegration=1&sizePerIntegration=5"
      },
      "response": {
        "status": 200
      },
      "service": "backstage",
      "stage": "completion",
      "status": "succeeded",
      "timestamp": "2024-08-26 16:41:02"
    }
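The audit log records shown above are JSON objects, so they can be filtered programmatically. The following sketch (not part of RHDH; the function name is illustrative) extracts bulk-import events from a stream of log lines:

```python
import json

def filter_bulk_import_events(log_lines, event_name=None):
    """Return parsed bulk-import audit log records, optionally
    filtered by event name (e.g. "BulkImportCreateImportJobs")."""
    events = []
    for line in log_lines:
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue  # skip non-JSON log lines
        if record.get("plugin") != "bulk-import" or not record.get("isAuditLog"):
            continue
        if event_name and record.get("eventName") != event_name:
            continue
        events.append(record)
    return events

# Example: one matching audit record and one unrelated plain-text line
lines = [
    '{"plugin": "bulk-import", "isAuditLog": true, "eventName": "BulkImportPing", "status": "succeeded"}',
    'plain text startup message',
]
matches = filter_bulk_import_events(lines, event_name="BulkImportPing")
print(len(matches), matches[0]["status"])  # 1 succeeded
```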

8.5. Input parameters for Bulk Import Scaffolder template

Define Scaffolder template parameters such as repository URL, name, organization, and branch details to customize bulk import automation workflows for your repositories.

As an administrator, you can use the Bulk Import plugin to run a Scaffolder template task with specified parameters, which you must define within the template.

The Bulk Import plugin analyzes Git repository information and provides the following parameters for the Scaffolder template task:

repoUrl

Normalized repository URL in the following format:

  ${gitProviderHost}?owner=${owner}&repo=${repository-name}
name
The repository name.
organization
The repository owner, which can be a user nickname or organization name.
branchName
The proposed repository branch. By default, the proposed repository branch is bulk-import-catalog-entity.
targetBranchName
The default branch of the Git repository.
gitProviderHost
The Git provider host parsed from the repository URL. You can use this parameter to write Git-provider-agnostic templates.
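The normalized repoUrl can be derived from an ordinary clone URL. The following sketch illustrates the transformation; it is not the plugin's actual implementation:

```python
from urllib.parse import urlparse

def normalize_repo_url(clone_url):
    """Convert e.g. https://github.com/my-org/my-repo into the
    Bulk Import repoUrl format: <host>?owner=<owner>&repo=<name>."""
    parsed = urlparse(clone_url)
    owner, _, name = parsed.path.strip("/").partition("/")
    name = name.removesuffix(".git")  # drop a trailing .git if present
    return f"{parsed.netloc}?owner={owner}&repo={name}"

print(normalize_repo_url("https://github.com/my-org/my-repo.git"))
# github.com?owner=my-org&repo=my-repo
```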

Example of a Scaffolder template:

parameters:
  - title: Repository details
    required:
      - repoUrl
      - branchName
      - targetBranchName
      - name
      - organization
    properties:
      repoUrl:
        type: string
        title: Repository URL (RHDH format)
        description: github.com?owner=Org&repo=repoName
      organization:
        type: string
        title: Owner of the repository
      name:
        type: string
        title: Name of the repository
      branchName:
        type: string
        title: Branch to add the catalog entity to
      targetBranchName:
        type: string
        title: Branch to target the PR/MR to
      gitProviderHost:
        type: string
        title: Git provider host

8.6. Set up a custom Scaffolder workflow for Bulk Import

Create custom Scaffolder templates aligned with your organization’s repository conventions to automate bulk import tasks such as entity imports, pull request creation, and webhook integration.

As an administrator, you can create a custom Scaffolder template inline with the repository conventions of your organization and add the template into the Red Hat Developer Hub catalog for use by the Bulk Import plugin on many selected repositories.

You can define various custom tasks, including, but not limited to the following:

  • Importing existing catalog entities from a repository
  • Creating pull requests for cleanup
  • Calling webhooks for external system integration

Prerequisites

  • You created a custom Scaffolder template for the Bulk Import plugin.
  • You have run your RHDH instance with the following environment variable enabled to allow the use of the Scaffolder functionality:

    export NODE_OPTIONS=--no-node-snapshot

Procedure

  • Configure your app-config.yaml configuration to instruct the Bulk Import plugin to use your custom template as shown in the following example:

    bulkImport:
      importTemplate: <your_template_entity_reference_or_template_name>
      importAPI: scaffolder # one of: open-pull-requests, scaffolder

    where:

    importTemplate
    Enter your Scaffolder template entity reference.
    importAPI
    Set the API to scaffolder to trigger the defined workflow for high-fidelity automation. This field defines the import workflow and currently supports the following two options:
    open-pull-requests
    This is the default import workflow, which includes the logic for creating pull requests for every selected repository.
    scaffolder

    This workflow uses an import scenario defined in the Scaffolder template to create import jobs. Select this option to use the custom import scenario defined in your Scaffolder template.

    Optional: You can direct the Bulk Import plugin to hand off the entire list of selected repositories to a custom Orchestrator workflow.

    Important

    The Scaffolder template must be generic and not specific to a single repository if you want your custom Scaffolder template to run successfully for every repository in the bulk list.

Verification

  • The Bulk Import plugin runs the custom Scaffolder template for the list of repositories using the /task-imports API endpoint.

8.7. Run Orchestrator workflows for bulk imports

Configure Bulk Import to use Orchestrator workflows for advanced bulk operations across multiple repositories, enabling automated pull request creation and configuration publishing at scale.

As a platform engineer, you can configure the Bulk Import plugin to run Orchestrator workflows for bulk import operations. This mode uses the Orchestrator engine to provide advanced capabilities, such as creating pull requests or publishing configurations across multiple repositories.

Prerequisites

  • You have installed and configured the Orchestrator plugin in your Developer Hub instance.
  • You have registered a generic custom workflow (for example, universal-pr) in the Orchestrator plugin.
  • You have role-based access control (RBAC) permissions to configure the Bulk Import plugin.

Procedure

  1. Configure the Bulk Import plugin by editing your app-config.yaml file to enable Orchestrator mode.

    bulkImport:
      orchestratorWorkflow: your_workflow_id
      importAPI: 'orchestrator'

    where:

    orchestratorWorkflow
    The ID of the workflow to run for each repository.
    importAPI
    The execution mode for the workflow. Enter orchestrator to enable workflow execution.
  2. Verify that the Orchestrator workflow receives the following input:

    {
      "inputData": {
        "owner": "redhat-developer",
        "repo": "rhdh-plugins",
        "baseBranch": "main",
        "targetBranch": "bulk-import-orchestrator"
      },
      "authTokens": [
        {
          "token": "<github_token>",
          "provider": "github"
        }
      ]
    }

    where:

    owner
    Specifies the repository owner (organization or user name).
    repo
    Specifies the repository name.
    baseBranch
    Specifies the default branch of the Git repository (for example, main).
    targetBranch
    Specifies the target branch for the import operation. By default, this is set to bulk-import-orchestrator.
    authTokens
    Specifies the authentication tokens for the Git provider:
      • For GitHub: { token: <github_token>, provider: github }
      • For GitLab: { token: <gitlab_token>, provider: gitlab }
  3. Navigate to the Bulk Import page in the sidebar and complete the following steps:

    1. Select your Git provider (for example, GitHub or GitLab).
    2. Select the projects you want to import.
  4. Click Import to run the workflow.

Verification

  • Locate your repository and confirm that its status is COMPLETED.

8.8. Data handoff and custom workflow design

Design Scaffolder templates to receive repository data as parameters and automate repository-specific tasks when using Scaffolder mode for bulk imports.

When you configure the Bulk Import plugin by setting the importAPI field to scaffolder, the Bulk Import Backend passes all necessary context directly to the Scaffolder API.

As an administrator, you can define the Scaffolder template workflow and structure the workflow to do the following:

Define template parameters to consume input
Structure the Scaffolder template to receive the repository data as template parameters for the current workflow run. The template must be generic, and not specific to a single repository, so that it can successfully run for every repository in the bulk list.
Automate processing for each repository
Implement the custom logic needed for a single repository within the template. The Orchestrator iterates through the repository list, launching the template once for each repository and passes only the data for that single repository to the template run. This allows you to automate tasks such as creating the catalog-info.yaml, running compliance checks, or registering the entity with the catalog.

9. ServiceNow custom actions in Red Hat Developer Hub

In Red Hat Developer Hub, you can use ServiceNow custom actions to fetch and register resources within the catalog.

The custom actions in Developer Hub help you automate the management of records. By using the custom actions, you can:

  • Create, update, or delete a record
  • Retrieve information about a single record or many records

The ServiceNow custom actions plugin is community-sourced.

9.1. Enable ServiceNow custom actions plugin in Red Hat Developer Hub

To use ServiceNow custom actions, you must first activate the plugin.

Prerequisites

  • Red Hat Developer Hub is installed and running.
  • You have created a project in the Developer Hub.

Procedure

  1. Add a package with the plugin name and update the disabled field in your Helm chart as follows:

    global:
      dynamic:
        includes:
          - dynamic-plugins.default.yaml
        plugins:
          - package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/backstage-community-plugin-scaffolder-backend-module-servicenow:<tag>
            disabled: false

    The <tag> value combines the Backstage version that your RHDH application is based on with the plugin version, in the format bs_<backstage-version>__<plugin-version> (note the double underscore delimiter).

    For example, because RHDH 1.9 is based on Backstage 1.45.3, the tag will have the format bs_1.45.3__<plugin-version>.

    Tip

    To ensure environment stability, use a SHA256 digest instead of a version tag. See Determining SHA256 Digests.

    Note

    The default configuration for a plugin is extracted from the dynamic-plugins.default.yaml file, however, you can use a pluginConfig entry to override the default configuration.

  2. Set the following variables in the Helm chart to access the custom actions:

    servicenow:
      # The base url of the ServiceNow instance.
      baseUrl: ${SERVICENOW_BASE_URL}
      # The username to use for authentication.
      username: ${SERVICENOW_USERNAME}
      # The password to use for authentication.
      password: ${SERVICENOW_PASSWORD}
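As a quick sanity check before updating the Helm chart, the tag convention from step 1 can be validated with a small script. This is a convenience sketch, not part of RHDH, and it assumes the Backstage version always has three numeric components:

```python
import re

# Matches bs_<backstage-version>__<plugin-version>, e.g. bs_1.45.3__1.2.0
TAG_PATTERN = re.compile(r"^bs_\d+\.\d+\.\d+__.+$")

def is_valid_plugin_tag(tag):
    """Check a tag against the bs_<backstage-version>__<plugin-version> convention."""
    return bool(TAG_PATTERN.match(tag))

print(is_valid_plugin_tag("bs_1.45.3__1.2.0"))  # True
print(is_valid_plugin_tag("bs_1.45.3_1.2.0"))   # False: single underscore delimiter
```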

9.2. Supported ServiceNow custom actions in Red Hat Developer Hub

The ServiceNow custom actions enable you to manage records in the Red Hat Developer Hub.

The custom actions support the following HTTP methods for API requests:

  • GET: Retrieves specified information from a specified resource endpoint
  • POST: Creates or updates a resource
  • PUT: Modifies a resource
  • PATCH: Updates a resource
  • DELETE: Deletes a resource

    [GET] servicenow:now:table:retrieveRecord

    Retrieves information of a specified record from a table in the Developer Hub.

    Table 2. Input parameters

    Name | Type | Requirement | Description

    tableName

    string

    Required

    Name of the table to retrieve the record from

    sysId

    string

    Required

    Unique identifier of the record to retrieve

    sysparmDisplayValue

    enum("true", "false", "all")

    Optional

    Returns field display values such as true, actual values as false, or both. The default value is false.

    sysparmExcludeReferenceLink

    boolean

    Optional

    Set as true to exclude Table API links for reference fields. The default value is false.

    sysparmFields

    string[]

    Optional

    Array of fields to return in the response

    sysparmView

    string

    Optional

    Renders the response according to the specified UI view. You can override this parameter using sysparm_fields.

    sysparmQueryNoDomain

    boolean

    Optional

    Set as true to access data across domains if authorized. The default value is false.

    Table 3. Output parameters

    Name | Type | Description

    result

    Record<PropertyKey, unknown>

    The response body of the request

    [GET] servicenow:now:table:retrieveRecords

    Retrieves information about multiple records from a table in the Developer Hub.

    Table 4. Input parameters

    Name | Type | Requirement | Description

    tableName

    string

    Required

    Name of the table to retrieve the records from

    sysparamQuery

    string

    Optional

    Encoded query string used to filter the results

    sysparmDisplayValue

    enum("true", "false", "all")

    Optional

    Returns field display values such as true, actual values as false, or both. The default value is false.

    sysparmExcludeReferenceLink

    boolean

    Optional

    Set as true to exclude Table API links for reference fields. The default value is false.

    sysparmSuppressPaginationHeader

    boolean

    Optional

    Set as true to suppress pagination header. The default value is false.

    sysparmFields

    string[]

    Optional

    Array of fields to return in the response

    sysparmLimit

    int

    Optional

    Maximum number of results returned per page. The default value is 10,000.

    sysparmView

    string

    Optional

    Renders the response according to the specified UI view. You can override this parameter using sysparm_fields.

    sysparmQueryCategory

    string

    Optional

    Name of the query category to use for queries

    sysparmQueryNoDomain

    boolean

    Optional

    Set as true to access data across domains if authorized. The default value is false.

    sysparmNoCount

    boolean

    Optional

    Set as true to skip running a select count(*) query on the table. The default value is false.

    Table 5. Output parameters

    Name | Type | Description

    result

    Record<PropertyKey, unknown>

    The response body of the request

    [POST] servicenow:now:table:createRecord

    Creates a record in a table in the Developer Hub.

    Table 6. Input parameters

    Name | Type | Requirement | Description

    tableName

    string

    Required

    Name of the table to save the record in

    requestBody

    Record<PropertyKey, unknown>

    Optional

    Field name and associated value for each parameter to define in the specified record

    sysparmDisplayValue

    enum("true", "false", "all")

    Optional

    Returns field display values such as true, actual values as false, or both. The default value is false.

    sysparmExcludeReferenceLink

    boolean

    Optional

    Set as true to exclude Table API links for reference fields. The default value is false.

    sysparmFields

    string[]

    Optional

    Array of fields to return in the response

    sysparmInputDisplayValue

    boolean

    Optional

    Set field values using their display value such as true or actual value as false. The default value is false.

    sysparmSuppressAutoSysField

    boolean

    Optional

    Set as true to suppress auto-generation of system fields. The default value is false.

    sysparmView

    string

    Optional

    Renders the response according to the specified UI view. You can override this parameter using sysparm_fields.

    Table 7. Output parameters

    Name | Type | Description

    result

    Record<PropertyKey, unknown>

    The response body of the request

    [PUT] servicenow:now:table:modifyRecord

    Modifies a record in a table in the Developer Hub.

    Table 8. Input parameters

    Name | Type | Requirement | Description

    tableName

    string

    Required

    Name of the table to change the record from

    sysId

    string

    Required

    Unique identifier of the record to change

    requestBody

    Record<PropertyKey, unknown>

    Optional

    Field name and associated value for each parameter to define in the specified record

    sysparmDisplayValue

    enum("true", "false", "all")

    Optional

    Returns field display values such as true, actual values as false, or both. The default value is false.

    sysparmExcludeReferenceLink

    boolean

    Optional

    Set as true to exclude Table API links for reference fields. The default value is false.

    sysparmFields

    string[]

    Optional

    Array of fields to return in the response

    sysparmInputDisplayValue

    boolean

    Optional

    Set field values using their display value such as true or actual value as false. The default value is false.

    sysparmSuppressAutoSysField

    boolean

    Optional

    Set as true to suppress auto-generation of system fields. The default value is false.

    sysparmView

    string

    Optional

    Renders the response according to the specified UI view. You can override this parameter using sysparm_fields.

    sysparmQueryNoDomain

    boolean

    Optional

    Set as true to access data across domains if authorized. The default value is false.

    Table 9. Output parameters

    Name | Type | Description

    result

    Record<PropertyKey, unknown>

    The response body of the request

    [PATCH] servicenow:now:table:updateRecord

    Updates a record in a table in the Developer Hub.

    Table 10. Input parameters

    Name | Type | Requirement | Description

    tableName

    string

    Required

    Name of the table to update the record in

    sysId

    string

    Required

    Unique identifier of the record to update

    requestBody

    Record<PropertyKey, unknown>

    Optional

    Field name and associated value for each parameter to define in the specified record

    sysparmDisplayValue

    enum("true", "false", "all")

    Optional

    Returns field display values such as true, actual values as false, or both. The default value is false.

    sysparmExcludeReferenceLink

    boolean

    Optional

    Set as true to exclude Table API links for reference fields. The default value is false.

    sysparmFields

    string[]

    Optional

    Array of fields to return in the response

    sysparmInputDisplayValue

    boolean

    Optional

    Set field values using their display value such as true or actual value as false. The default value is false.

    sysparmSuppressAutoSysField

    boolean

    Optional

    Set as true to suppress auto-generation of system fields. The default value is false.

    sysparmView

    string

    Optional

    Renders the response according to the specified UI view. You can override this parameter using sysparm_fields.

    sysparmQueryNoDomain

    boolean

    Optional

    Set as true to access data across domains if authorized. The default value is false.

    Table 11. Output parameters

    Name | Type | Description

    result

    Record<PropertyKey, unknown>

    The response body of the request

    [DELETE] servicenow:now:table:deleteRecord

    Deletes a record from a table in the Developer Hub.

    Table 12. Input parameters

    Name | Type | Requirement | Description

    tableName

    string

    Required

    Name of the table to delete the record from

    sysId

    string

    Required

    Unique identifier of the record to delete

    sysparmQueryNoDomain

    boolean

    Optional

    Set as true to access data across domains if authorized. The default value is false.
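For example, a Scaffolder template step that creates an incident record might look like the following sketch (the table name and field values are illustrative):

```yaml
steps:
  - id: create-incident
    name: Create an incident record
    action: servicenow:now:table:createRecord
    input:
      tableName: incident
      requestBody:
        short_description: 'Onboard new service to the catalog'
      sysparmDisplayValue: 'true'
      sysparmFields:
        - sys_id
        - number
```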

10. Kubernetes custom actions in Red Hat Developer Hub

You can create and manage Kubernetes resources by using custom scaffolder actions in Red Hat Developer Hub templates. The Kubernetes custom actions plugin is preinstalled in a disabled state.

10.1. Enable Kubernetes custom actions plugin in Red Hat Developer Hub

Enable the preinstalled Kubernetes custom actions plugin by updating the Helm chart configuration.

In Red Hat Developer Hub, the Kubernetes custom actions are provided as a preinstalled plugin, which is disabled by default. You can enable the Kubernetes custom actions plugin by updating the disabled key value in your Helm chart.

Prerequisites

  • You have installed Red Hat Developer Hub with the Helm chart.

Procedure

  • In your Helm chart, add a package with the Kubernetes custom action plugin name and update the disabled field to enable the plugin. For example:

    global:
      dynamic:
        includes:
          - dynamic-plugins.default.yaml
        plugins:
          - package: ./dynamic-plugins/dist/backstage-community-plugin-scaffolder-backend-module-kubernetes-dynamic
            disabled: false
    Note

    The default configuration for a plugin is extracted from the dynamic-plugins.default.yaml file, however, you can use a pluginConfig entry to override the default configuration.

10.2. Use Kubernetes custom actions plugin in Red Hat Developer Hub

Add Kubernetes actions to your custom templates to create namespaces and manage cluster resources.

In Red Hat Developer Hub, the Kubernetes custom actions enable you to run template actions for Kubernetes.

Procedure

  • To use a Kubernetes custom action in your custom template, add the following Kubernetes actions to your template:

    action: kubernetes:create-namespace
    id: create-kubernetes-namespace
    name: Create kubernetes namespace
    input:
      namespace: my-rhdh-project
      clusterRef: bar
      token: TOKEN
      skipTLSVerify: false
      caData: Zm9v
      labels: app.io/type=ns; app.io/managed-by=org;

10.3. Create a template using Kubernetes custom actions in Red Hat Developer Hub

Define a Template object with Kubernetes actions to automate namespace creation and resource management.

Procedure

  • To create a template, define a Template object as a YAML file.

    The Template object describes the template and its metadata. It also contains required input variables and a list of actions that are executed by the scaffolding service.

    apiVersion: scaffolder.backstage.io/v1beta3
    kind: Template
    metadata:
      name: create-kubernetes-namespace
      title: Create a kubernetes namespace
      description: Create a kubernetes namespace
    spec:
      type: service
      parameters:
        - title: Information
          required: [namespace, token]
          properties:
            namespace:
              title: Namespace name
              type: string
              description: Name of the namespace to be created
            clusterRef:
              title: Cluster reference
              type: string
              description: Cluster resource entity reference from the catalog
              ui:field: EntityPicker
              ui:options:
                catalogFilter:
                  kind: Resource
            url:
              title: Url
              type: string
              description: Url of the kubernetes API, will be used if clusterRef is not provided
            token:
              title: Token
              type: string
              ui:field: Secret
              description: Bearer token to authenticate with
            skipTLSVerify:
              title: Skip TLS verification
              type: boolean
              description: Skip TLS certificate verification, not recommended to use in production environment, default to false
            caData:
              title: CA data
              type: string
              ui:field: Secret
              description: Certificate Authority base64 encoded certificate
            labels:
              title: Labels
              type: string
              description: Labels to be applied to the namespace
              ui:widget: textarea
              ui:options:
                rows: 3
              ui:help: 'Hint: Separate multiple labels with a semicolon!'
              ui:placeholder: 'kubernetes.io/type=namespace; app.io/managed-by=org'
      steps:
        - id: create-kubernetes-namespace
          name: Create kubernetes namespace
          action: kubernetes:create-namespace
          input:
            namespace: ${{ parameters.namespace }}
            clusterRef: ${{ parameters.clusterRef }}
            url: ${{ parameters.url }}
            token: ${{ secrets.token }}
            skipTLSVerify: ${{ parameters.skipTLSVerify }}
            caData: ${{ secrets.caData }}
            labels: ${{ parameters.labels }}
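
After saving the template file, you can register it in the software catalog so that it appears in the Developer Hub template list. The following app-config.yaml fragment is a sketch: the repository URL and file name are placeholders, and the catalog rules shown assume you want to allow entities of kind Template from that location.

```yaml
catalog:
  locations:
    # Placeholder URL: point this at your published template file
    - type: url
      target: https://github.com/<your_org>/<your_repo>/blob/main/create-kubernetes-namespace.yaml
      rules:
        # Permit ingesting Template entities from this location
        - allow: [Template]
```

Alternatively, a template stored alongside Developer Hub configuration can be registered with a `type: file` location; the mechanism is the same.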

10.4. Supported Kubernetes custom actions in Red Hat Developer Hub

Access parameter specifications and requirements for the kubernetes:create-namespace scaffolder action.

In Red Hat Developer Hub, you can use custom Kubernetes actions in Scaffolder templates.

Action: kubernetes:create-namespace
Creates a namespace in the Kubernetes cluster from the Developer Hub.

Parameters:

  • namespace (string, required): Name of the Kubernetes namespace. Example: my-rhdh-project
  • clusterRef (string, required only if url is not defined; you cannot specify both url and clusterRef): Cluster resource entity reference from the catalog. Example: bar
  • url (string, required only if clusterRef is not defined; you cannot specify both url and clusterRef): API URL of the Kubernetes cluster. Example: https://api.example.com:6443
  • token (string, required): Kubernetes API bearer token used for authentication.
  • skipTLSVerify (boolean, optional): If true, certificate verification is skipped. Example: false
  • caData (string, optional): Base64-encoded Certificate Authority data.
  • labels (string, optional): Labels applied to the namespace. Example: app.io/type=ns; app.io/managed-by=org
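
Because url and clusterRef are mutually exclusive, a template step targets a cluster in one of two ways. The following sketch shows both variants side by side for illustration; a real template would contain only one of them, and all parameter values come from the template's own inputs.

```yaml
steps:
  # Variant A: reference a Cluster resource entity from the catalog
  - id: create-ns-by-ref
    name: Create namespace via catalog cluster reference
    action: kubernetes:create-namespace
    input:
      namespace: ${{ parameters.namespace }}
      clusterRef: ${{ parameters.clusterRef }}
      token: ${{ secrets.token }}

  # Variant B: connect directly to the Kubernetes API URL
  - id: create-ns-by-url
    name: Create namespace via API URL
    action: kubernetes:create-namespace
    input:
      namespace: ${{ parameters.namespace }}
      url: ${{ parameters.url }}
      token: ${{ secrets.token }}
      caData: ${{ secrets.caData }}
```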

11. Configure Red Hat Developer Hub events module

You can enable real-time updates for GitHub entities by configuring the events module to use webhooks alongside scheduled updates.

Important

These features are for Technology Preview only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information on Red Hat Technology Preview features, see Technology Preview Features Scope.

11.1. Configure the GitHub Events Module plugin

Configure GitHub webhooks to trigger real-time updates for GitHub Discovery and organizational data.

Learn how to configure the events module for use with the RHDH GitHub Discovery feature and GitHub organizational data.

Prerequisites

  • You have added your GitHub integration credentials in the app-config.yaml file.
  • You have defined the schedule.frequency in the app-config.yaml file as a longer time period, such as 24 hours.
  • For GitHub Discovery only: You have enabled GitHub Discovery.
  • For GitHub Organizational Data only: You have enabled GitHub authentication with user ingestion.
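
For reference, the schedule.frequency prerequisite might look like the following app-config.yaml fragment for a GitHub discovery provider. This is a sketch: the provider ID, organization name, and interval values are placeholders to adapt to your setup.

```yaml
catalog:
  providers:
    github:
      providerId: # placeholder provider name
        organization: my-github-org # placeholder organization
        schedule:
          # Webhooks deliver real-time updates; keep the full
          # re-scan as a long-interval fallback
          frequency: { hours: 24 }
          timeout: { minutes: 3 }
```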

Procedure

  1. Add the GitHub Events Module to your dynamic-plugins.yaml configuration file as follows:

    data:
      dynamic-plugins.yaml: |
        includes:
          - dynamic-plugins.default.yaml
        plugins:
          - package: oci://registry.access.redhat.com/rhdh/backstage-plugin-events-backend-module-github@sha256:2c1ccc4fb01883dc4da1aa0c417d6e28d944c6ce941454ee41698f2c1812035c
            disabled: false
  2. To create an HTTP endpoint that receives events for the github topic, add the following to your app-config.yaml file:

    events:
      http:
        topics:
          - github
      modules:
        github:
          webhookSecret: ${GITHUB_WEBHOOK_SECRET}
    Important

    Secure your workflow by adding a webhook secret token to validate webhook deliveries.
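
    The GITHUB_WEBHOOK_SECRET environment variable can be supplied from a Kubernetes Secret. The following fragment is a sketch: the Secret name is a placeholder, and how the Secret is attached to the RHDH deployment depends on your installation method (Helm chart or Operator).

    ```yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: github-webhook-secret # placeholder name
    type: Opaque
    stringData:
      # Use a high-entropy random value; the same value must be
      # entered as the secret when creating the GitHub webhook
      GITHUB_WEBHOOK_SECRET: <your_random_token>
    ```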

  3. Create a GitHub webhook with the following specifications:

    • For GitHub Discovery, subscribe to these events: push, repository
    • For GitHub Organizational Data, subscribe to these events: organization, team, and membership
    • Content type: application/json
    • Payload URL: https://<my_developer_hub_domain>/api/events/http/github

      Note

      Payload URL is the URL exposed after configuring the HTTP endpoint.

Verification

  • Check the log for an entry confirming that the HTTP endpoint was set up successfully to receive events from the GitHub webhook.

    Example log entry for a successfully set up HTTP endpoint
    {"level":"\u001b[32minfo\u001b[39m","message":"Registered /api/events/http/github to receive events","plugin":"events","service":"backstage","timestamp":"2025-11-03 02:19:12"}
  • For GitHub Discovery only:

    • Trigger a GitHub push event by adding, modifying, or deleting the catalog-info.yaml file in the repository where you set up your webhook. A record of this event appears in the pod logs of your RHDH instance.

      Example of a log with changes to catalog-info.yaml file
      {"level":"\u001b[32minfo\u001b[39m","message":"Processed Github push event: added 0 - removed 0 - modified 1","plugin":"catalog","service":"backstage","span_id":"47534b96c4afc654","target":"github-provider:providerId","timestamp":"2025-06-15 21:33:14","trace_flags":"01","trace_id":"ecc782deb86aed2027da0ae6b1999e5c"}
  • For GitHub Organizational Data only:

    • Newly added users and teams appear in the RHDH catalog.

12. Override Core Backend Service Configuration

Customize core backend services by installing them as BackendFeatures using dynamic plugin functionality.

The Red Hat Developer Hub (RHDH) backend platform consists of several core services that are well encapsulated. The RHDH backend installs these default core services statically during initialization.

You can customize a core service by installing it as a BackendFeature using the dynamic plugin functionality.

Procedure

  1. Configure Developer Hub to allow a core service override by setting the corresponding core service ID environment variable to true in the Developer Hub app-config.yaml configuration file.

    Table 13. Environment variables and core service IDs

    Set the variable to allow overriding the related service:

      • ENABLE_CORE_AUTH_OVERRIDE overrides core.auth
      • ENABLE_CORE_CACHE_OVERRIDE overrides core.cache
      • ENABLE_CORE_ROOTCONFIG_OVERRIDE overrides core.rootConfig
      • ENABLE_CORE_DATABASE_OVERRIDE overrides core.database
      • ENABLE_CORE_DISCOVERY_OVERRIDE overrides core.discovery
      • ENABLE_CORE_HTTPAUTH_OVERRIDE overrides core.httpAuth
      • ENABLE_CORE_HTTPROUTER_OVERRIDE overrides core.httpRouter
      • ENABLE_CORE_LIFECYCLE_OVERRIDE overrides core.lifecycle
      • ENABLE_CORE_LOGGER_OVERRIDE overrides core.logger
      • ENABLE_CORE_PERMISSIONS_OVERRIDE overrides core.permissions
      • ENABLE_CORE_ROOTHEALTH_OVERRIDE overrides core.rootHealth
      • ENABLE_CORE_ROOTHTTPROUTER_OVERRIDE overrides core.rootHttpRouter
      • ENABLE_CORE_ROOTLIFECYCLE_OVERRIDE overrides core.rootLifecycle
      • ENABLE_CORE_SCHEDULER_OVERRIDE overrides core.scheduler
      • ENABLE_CORE_USERINFO_OVERRIDE overrides core.userInfo
      • ENABLE_CORE_URLREADER_OVERRIDE overrides core.urlReader
      • ENABLE_EVENTS_SERVICE_OVERRIDE overrides events.service

  2. Install your custom core service as a BackendFeature as shown in the following example:

    Example of a BackendFeature middleware function to handle incoming HTTP requests

    // Imports for the new backend system (exact paths may vary
    // with your Backstage version)
    import { BackendFeature } from '@backstage/backend-plugin-api';
    import { rootHttpRouterServiceFactory } from '@backstage/backend-defaults/rootHttpRouter';

    // Create the BackendFeature
    export const customRootHttpServerFactory: BackendFeature =
      rootHttpRouterServiceFactory({
        configure: ({ app, routes, middleware, logger }) => {
          logger.info(
            'Using custom root HttpRouterServiceFactory configure function',
          );
          app.use(middleware.helmet());
          app.use(middleware.cors());
          app.use(middleware.compression());
          app.use(middleware.logging());
          // Add the custom middleware function before all
          // of the route handlers
          app.use(addTestHeaderMiddleware({ logger }));
          app.use(routes);
          app.use(middleware.notFound());
          app.use(middleware.error());
        },
      });

    // Export the BackendFeature as the default entrypoint
    export default customRootHttpServerFactory;

    In the previous example, because the BackendFeature overrides the default implementation of the HTTP router service, you must set the ENABLE_CORE_ROOTHTTPROUTER_OVERRIDE environment variable to true so that Developer Hub does not install the default implementation automatically.
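
For example, when deploying with the Red Hat Developer Hub Helm chart, the override variable from step 1 can be set on the backend container. The following fragment is a sketch, assuming the chart exposes an extraEnvVars list under upstream.backstage; adjust the values path to your chart version and installation method.

```yaml
upstream:
  backstage:
    extraEnvVars:
      # Allow the dynamic plugin to replace the default root HTTP router
      - name: ENABLE_CORE_ROOTHTTPROUTER_OVERRIDE
        value: "true"
```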

Legal Notice

Copyright © 2026 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.