Red Hat Developer Hub 1.9

Configuring Red Hat Developer Hub

Adding custom config maps and secrets to configure your Red Hat Developer Hub instance to work in your IT ecosystem.

Red Hat Customer Content Services

Abstract

Configure Red Hat Developer Hub for production by adding custom config maps and secrets to work in your IT ecosystem.


1. Provision and use your custom Red Hat Developer Hub configuration

Configure Red Hat Developer Hub by using config maps to mount files and directories and secrets to inject environment variables into your Red Hat OpenShift Container Platform application.

1.1. Provision your custom Red Hat Developer Hub configuration

Provision custom config maps and secrets on Red Hat OpenShift Container Platform (RHOCP) to configure Red Hat Developer Hub before running the application.

Tip

On Red Hat OpenShift Container Platform, you can skip this step and run Developer Hub with the default config map and secret. However, any changes that you make to this default configuration might be reverted when Developer Hub restarts.

Prerequisites

  • By using the OpenShift CLI (oc), you have access, with developer permissions, to the OpenShift Container Platform cluster that will contain your Developer Hub instance.

Procedure

  1. For security, store your secrets as environment variable values in an OpenShift Container Platform secret rather than in plain text in your configuration files. Collect all your secrets in a secrets.txt file, with one secret per line in KEY=value form.

    Enter your authentication secrets.
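For example, a minimal sketch of building secrets.txt: BACKEND_SECRET is referenced as ${BACKEND_SECRET} in the app-config.yaml example in the next step, and openssl is one common way to generate a random value for it; GITHUB_TOKEN is a hypothetical additional key — use whichever keys your integrations need.

```shell
# Generate a random backend authentication secret and start secrets.txt with it.
# BACKEND_SECRET is referenced as ${BACKEND_SECRET} in app-config.yaml.
echo "BACKEND_SECRET=$(openssl rand -base64 24)" > secrets.txt

# Append further secrets, one per line, in KEY=value form.
# GITHUB_TOKEN is a hypothetical example key.
echo "GITHUB_TOKEN=<my_github_token>" >> secrets.txt
```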

  2. Author your custom app-config.yaml file. This is the main Developer Hub configuration file. You need a custom app-config.yaml file to prevent the Developer Hub installer from reverting user edits during upgrades. When your custom app-config.yaml file is empty, Developer Hub uses default values.

    • To prepare a deployment with the Red Hat Developer Hub Operator on OpenShift Container Platform, you can start with an empty file.
    • To prepare a deployment with the Red Hat Developer Hub Helm chart, or on Kubernetes, enter the Developer Hub base URL in the relevant fields in your app-config.yaml file to ensure proper functionality of Developer Hub. The base URL is what a Developer Hub user sees in their browser when accessing Developer Hub. The relevant fields are baseUrl in the app and backend sections, and origin in the backend.cors subsection:

      Configuring the baseUrl in app-config.yaml:

      app:
        title: Red Hat Developer Hub
        baseUrl: https://<my_developer_hub_domain>
      
      backend:
        auth:
          externalAccess:
            - type: legacy
              options:
                subject: legacy-default-config
                secret: "${BACKEND_SECRET}"
        baseUrl: https://<my_developer_hub_domain>
        cors:
          origin: https://<my_developer_hub_domain>
    • Optionally, enter your additional configuration.

  3. Author your custom dynamic-plugins.yaml file to enable plugins. By default, Developer Hub enables a minimal plugin set, and disables plugins that require configuration or secrets, such as the GitHub repository discovery plugin and the Role-based access control (RBAC) plugin.

    Enable the GitHub repository discovery and the RBAC features:

    dynamic-plugins.yaml

    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - package: ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github
        disabled: false
      - package: ./dynamic-plugins/dist/backstage-community-plugin-rbac
        disabled: false

  4. Provision your custom configuration files to your OpenShift Container Platform cluster.

    1. Create the <my-rhdh-project> namespace that will contain your Developer Hub instance.

      $ oc create namespace my-rhdh-project
    2. Create config maps for your app-config.yaml and dynamic-plugins.yaml files in the <my-rhdh-project> project.

      $ oc create configmap my-rhdh-app-config --from-file=app-config.yaml --namespace=my-rhdh-project
      $ oc create configmap dynamic-plugins-rhdh --from-file=dynamic-plugins.yaml --namespace=my-rhdh-project

      You can also create the config maps by using the web console.

    3. Provision your secrets.txt file to the my-rhdh-secrets secret in the <my-rhdh-project> project.

      $ oc create secret generic my-rhdh-secrets --from-file=secrets.txt --namespace=my-rhdh-project

      You can also create the secret by using the web console.

1.2. Use the Red Hat Developer Hub Operator to run Developer Hub with your custom configuration

Use the Red Hat Developer Hub Operator to deploy Developer Hub with custom configuration by creating a custom resource that mounts config maps and injects secrets.

Prerequisites

  • By using the OpenShift CLI (oc), you have access, with developer permissions, to the OpenShift Container Platform cluster that will contain your Developer Hub instance.
  • Your administrator has installed the Red Hat Developer Hub Operator in the cluster.
  • You have provisioned your custom config maps and secrets in your <my-rhdh-project> project.
  • You have a working default storage class configured in your cluster.

Procedure

  1. Author your Backstage CR in a my-rhdh-custom-resource.yaml file to use your custom config maps and secrets.

    Minimal my-rhdh-custom-resource.yaml custom resource example:

    apiVersion: rhdh.redhat.com/v1alpha5
    kind: Backstage
    metadata:
      name: my-rhdh-custom-resource
    spec:
      application:
        appConfig:
          mountPath: /opt/app-root/src
          configMaps:
             - name: my-rhdh-app-config
        extraEnvs:
          secrets:
             - name: <my_product_secrets>
        extraFiles:
          mountPath: /opt/app-root/src
        route:
          enabled: true
      database:
        enableLocalDb: true

    my-rhdh-custom-resource.yaml custom resource example with dynamic plugins and RBAC policies config maps, and external PostgreSQL database secrets:

    apiVersion: rhdh.redhat.com/v1alpha5
    kind: Backstage
    metadata:
      name: <my-rhdh-custom-resource>
    spec:
      application:
        appConfig:
          mountPath: /opt/app-root/src
          configMaps:
             - name: my-rhdh-app-config
             - name: rbac-policies
        dynamicPluginsConfigMapName: dynamic-plugins-rhdh
        extraEnvs:
          secrets:
             - name: <my_product_secrets>
             - name: my-rhdh-database-secrets
        extraFiles:
          mountPath: /opt/app-root/src
          secrets:
            - name: my-rhdh-database-certificates-secrets
              key: postgres-crt.pem, postgres-ca.pem, postgres-key.key
        route:
          enabled: true
      database:
        enableLocalDb: false
    Mandatory fields
    No fields are mandatory. You can create an empty Backstage CR and run Developer Hub with the default configuration.
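    For example, the smallest possible Backstage CR, which runs Developer Hub with the default configuration:

```yaml
apiVersion: rhdh.redhat.com/v1alpha5
kind: Backstage
metadata:
  name: my-rhdh-custom-resource
```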
    Optional fields
    spec.application.appConfig.configMaps
    Enter your config map name list.

    Mount files in the my-rhdh-app-config config map:

    spec:
      application:
        appConfig:
          mountPath: /opt/app-root/src
          configMaps:
             - name: my-rhdh-app-config

    Mount files in the my-rhdh-app-config and rbac-policies config maps:

    spec:
      application:
        appConfig:
          mountPath: /opt/app-root/src
          configMaps:
             - name: my-rhdh-app-config
             - name: rbac-policies
    spec.application.extraEnvs.envs

    Optionally, enter your additional environment variables that are not secrets, such as your proxy environment variables.

    Inject your HTTP_PROXY, HTTPS_PROXY and NO_PROXY environment variables:

    spec:
      application:
        extraEnvs:
          envs:
            - name: HTTP_PROXY
              value: 'http://10.10.10.105:3128'
            - name: HTTPS_PROXY
              value: 'http://10.10.10.106:3128'
            - name: NO_PROXY
              value: 'localhost,example.org'
    spec.application.extraEnvs.secrets

    Enter your environment variables secret name list.

    Inject the environment variables in your Red Hat Developer Hub secret:

    spec:
      application:
        extraEnvs:
          secrets:
             - name: <my_product_secrets>

    Inject the environment variables in the Red Hat Developer Hub and my-rhdh-database-secrets secrets:

    spec:
      application:
        extraEnvs:
          secrets:
             - name: <my_product_secrets>
             - name: my-rhdh-database-secrets
    Note

    <my_product_secrets> is a placeholder for your preferred Developer Hub secret name, which identifies your secret configuration within Developer Hub.

    spec.application.extraFiles.secrets

    Enter your certificates files secret name and files list.

    Mount the postgres-crt.pem, postgres-ca.pem, and postgres-key.key files contained in the my-rhdh-database-certificates-secrets secret:

    spec:
      application:
        extraFiles:
          mountPath: /opt/app-root/src
          secrets:
            - name: my-rhdh-database-certificates-secrets
              key: postgres-crt.pem, postgres-ca.pem, postgres-key.key
    spec.database.enableLocalDb

    Enable or disable the local PostgreSQL database.

    Disable the local PostgreSQL database generation to use an external PostgreSQL database:

    spec:
      database:
        enableLocalDb: false

    In a development environment, use the local PostgreSQL database:

    spec:
      database:
        enableLocalDb: true
    spec.deployment
    Optionally, enter your deployment configuration.
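    For illustration, a sketch that raises the main container's resource requests through spec.deployment. The patch field and the container name backstage-backend are assumptions based on the Operator's default deployment; verify them against your Operator version.

```yaml
spec:
  deployment:
    patch:
      spec:
        template:
          spec:
            containers:
              - name: backstage-backend   # assumed default container name
                resources:
                  requests:
                    cpu: 250m
                    memory: 1Gi
```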
  2. Apply your Backstage CR to start or update your Developer Hub instance:

    $ oc apply --filename=my-rhdh-custom-resource.yaml --namespace=my-rhdh-project

1.3. Use the Red Hat Developer Hub Helm chart to run Developer Hub with your custom configuration

Use the Red Hat Developer Hub Helm chart to deploy Developer Hub with a custom application configuration file on OpenShift Container Platform.

Prerequisites

  • You have provisioned your custom config maps, such as my-rhdh-app-config, in your project.

Procedure

  1. Configure Helm to use your custom configuration files in Developer Hub.

    1. Go to the Helm tab to see the list of Helm releases.
    2. Click the overflow menu on the Helm release that you want to use and select Upgrade.
    3. Use the YAML view to edit the Helm configuration.
    4. Set the value of the upstream.backstage.extraAppConfig.configMapRef and upstream.backstage.extraAppConfig.filename parameters as follows:

      Helm configuration excerpt

      upstream:
        backstage:
          extraAppConfig:
            - configMapRef: my-rhdh-app-config
              filename: app-config.yaml

    5. Click Upgrade.

Next steps

  • Install Developer Hub by using Helm.

2. Red Hat Developer Hub default configuration

Deploy a standard Red Hat Developer Hub instance, understand its structure, and tailor the instance to meet your needs.

2.1. Red Hat Developer Hub default configuration guide

The Operator creates Kubernetes resources with default configuration that you can customize using the Backstage Custom Resource.

The Operator stores the default configuration in a ConfigMap named rhdh-default-config in the rhdh-operator namespace on OpenShift. This ConfigMap has the YAML manifests that define the foundational structure of the RHDH instance.

You can create a basic RHDH instance by applying an empty Backstage Custom Resource as follows:

Example: Creating an RHDH instance

apiVersion: rhdh.redhat.com/v1alpha5
kind: Backstage
metadata:
  name: my-rhdh-instance
  namespace: rhdh

Based on the default configuration, the Operator automatically creates the following resources in the specified RHDH namespace:

Table 1. Default configuration files

File Name | Resource Group/Version/Kind (GVK) | Resource Name | Description
deployment.yaml | apps/v1/Deployment | backstage-{cr-name} | (Mandatory) The main Backstage application deployment.
service.yaml | v1/Service | backstage-{cr-name} | (Mandatory) The Backstage application service.
db-statefulset.yaml | apps/v1/StatefulSet | backstage-psql-{cr-name} | The PostgreSQL database stateful set. Needed if spec.database.enableLocalDb is true.
db-service.yaml | v1/Service | backstage-psql-{cr-name} | The PostgreSQL database service. Needed if spec.database.enableLocalDb is true.
db-secret.yaml | v1/Secret | backstage-psql-{cr-name} | The PostgreSQL database credentials secret. Needed if spec.database.enableLocalDb is true.
route.yaml | route.openshift.io/v1/Route | backstage-{cr-name} | (Optional) The OpenShift Route to expose Backstage externally. Applied on OpenShift only.
app-config.yaml | v1/ConfigMap | backstage-config-{cr-name} | (Optional) Specifies one or more Backstage app-config.yaml files.
configmap-files.yaml | v1/ConfigMap | backstage-files-{cr-name} | (Optional) Specifies additional ConfigMaps to mount as files into the Backstage Pod.
configmap-envs.yaml | v1/ConfigMap | backstage-envs-{cr-name} | (Optional) Specifies additional ConfigMaps to expose as environment variables in the Backstage Pod.
secret-files.yaml | v1/Secret or list of v1/Secret | backstage-files-{cr-name}-{secret-name} | (Optional) Specifies additional Secrets to mount as files into the Backstage Pod.
secret-envs.yaml | v1/Secret or list of v1/Secret | backstage-envs-{cr-name} | (Optional) Specifies additional Secrets to expose as environment variables in the Backstage Pod.
dynamic-plugins.yaml | v1/ConfigMap | backstage-dynamic-plugins-{cr-name} | (Optional) Specifies the dynamic plugins that the Operator installs into the Backstage instance.
pvcs.yaml | list of v1/PersistentVolumeClaim | backstage-{cr-name}-{pvc-name} | (Optional) Additional Persistent Volume Claims to create and mount into the Backstage container.

Note

{cr-name} is the name of the Backstage Custom Resource, for example my-rhdh-instance in the example above.

2.2. Automated Operator features

Use the Operator to automate key configuration processes for your Backstage application.

2.2.1. Metadata generation

The Operator automatically generates metadata values for default resources at runtime to ensure proper application function.

For all the default resources, the Operator generates metadata.name according to the rules defined in the Default Configuration files, particularly the Resource name column. For example, a Backstage Custom Resource (CR) named mybackstage creates a Kubernetes Deployment resource called backstage-mybackstage.

The Operator generates the following metadata for each resource:

  • deployment.yaml

    • spec.selector.matchLabels[rhdh.redhat.com/app] = backstage-{cr-name}
    • spec.template.metadata.labels[rhdh.redhat.com/app] = backstage-{cr-name}
  • service.yaml

    • spec.selector[rhdh.redhat.com/app] = backstage-{cr-name}
  • db-statefulset.yaml

    • spec.selector.matchLabels[rhdh.redhat.com/app] = backstage-psql-{cr-name}
    • spec.template.metadata.labels[rhdh.redhat.com/app] = backstage-psql-{cr-name}
  • db-service.yaml

    • spec.selector[rhdh.redhat.com/app] = backstage-psql-{cr-name}
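Putting these rules together, a sketch of the metadata the Operator would generate for a Backstage CR named mybackstage, as an excerpt of the resulting Deployment:

```yaml
# Generated Deployment excerpt for a Backstage CR named mybackstage
metadata:
  name: backstage-mybackstage
spec:
  selector:
    matchLabels:
      rhdh.redhat.com/app: backstage-mybackstage
  template:
    metadata:
      labels:
        rhdh.redhat.com/app: backstage-mybackstage
```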

2.2.2. Many resources

Define and create many resources of the same type in a single YAML file by using the --- delimiter to separate resource definitions.

For example, adding the following code snippet to pvcs.yaml creates two PersistentVolumeClaims (PVCs), called backstage-{cr-name}-myclaim1 and backstage-{cr-name}-myclaim2, and mounts them to the Backstage container.

Example creating many PVCs

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim1
...
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim2
...

2.2.3. Default base URLs

The Operator automatically sets base URLs for your application based on Route parameters and OpenShift cluster ingress domain.

The Operator follows these rules to set the base URLs for your application:

  • If the cluster is not OpenShift, the Operator makes no changes.
  • If you explicitly set the spec.application.route.enabled field in your Custom Resource (CR) to false, the Operator makes no changes.
  • If you define spec.application.route.host in the Backstage CR, the Operator sets the base URLs to https://<spec.application.route.host>.
  • If you specify the spec.application.route.subdomain in the Backstage CR, the Operator sets the base URLs to https://<spec.application.route.subdomain>.<cluster_ingress_domain>.
  • If you do not set a custom host or subdomain, the Operator sets the base URLs to https://backstage-<cr_name>-<namespace>.<cluster_ingress_domain>, which is the default domain for the created Route resource.
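For example, a minimal sketch of the subdomain rule: with the following CR excerpt, on a cluster whose ingress domain is apps.example.com (a hypothetical domain), the Operator would set the base URLs to https://my-rhdh.apps.example.com.

```yaml
spec:
  application:
    route:
      enabled: true
      subdomain: my-rhdh
```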

The Operator updates the following base URLs in the default app-config ConfigMap:

  • app.baseUrl
  • backend.baseUrl
  • backend.cors.origin
Note

The Operator performs these actions on a best-effort basis and only on OpenShift. If an error occurs, or on other Kubernetes clusters, you can still override these defaults by providing a custom app-config ConfigMap.

2.3. Time syntax in Red Hat Developer Hub

Use supported time duration formats in Red Hat Developer Hub, including human-readable strings, duration objects, ISO 8601 strings, and cron expressions.

Table 2. Generally supported time formats

Format

Description

Example

Compound values

Human-readable strings

Simple strings compatible with the ms library.

30m

No

Duration objects

A structured object specifying time units. Matches the HumanDuration TypeScript interface.

  timeout:
    minutes: 30

Yes

ISO 8601 duration strings

Standard ISO 8601 duration strings.

PT30M

Yes

Table 3. Context-dependent time formats

Format

Description

Example

Cron

An object containing a cron key with a crontab-style string. Used primarily by Scheduler services for settings such as frequency.

  frequency:
    cron: '*/30 * * * *'
Warning

The RHDH configuration reader (readDurationFromConfig) explicitly disallows plain numbers to prevent ambiguity.

However, specific raw configuration fields, such as direct Node.js HTTP server settings, might strictly require numbers. Always check the specific documentation for the field you are configuring.
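As an illustration, a hypothetical scheduler configuration combining these formats; the field names frequency, timeout, and initialDelay are assumptions based on common Backstage scheduler settings, so check the documentation for the plugin you are configuring:

```yaml
schedule:
  frequency:
    cron: '*/30 * * * *'   # cron expression
  timeout:
    minutes: 3             # duration object (HumanDuration)
  initialDelay: PT30S      # ISO 8601 duration string
```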

3. Configure external PostgreSQL databases

Configure an external PostgreSQL database for production environments instead of using the default local database created by the Red Hat Developer Hub Operator or Helm chart.

Important

Configure your database to use the date format of the International Organization for Standardization (ISO) through the DateStyle setting. Other formats are incompatible with the internal tracking of the software catalog, which causes scheduling tasks to fail and prevents your catalog items from refreshing.
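For example, a sketch of enforcing the ISO date format at the server level through postgresql.conf; the exact mechanism depends on how your PostgreSQL instance is managed, and ISO is already the PostgreSQL default:

```ini
# postgresql.conf excerpt: use the ISO date output format
datestyle = 'iso, mdy'
```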

3.1. Configure an external PostgreSQL instance using the Operator

Configure an external PostgreSQL instance by using the Red Hat Developer Hub Operator instead of the default local PostgreSQL instance.

Prerequisites

  • You meet the Sizing requirements for external PostgreSQL deployments.
  • You are using a supported version of PostgreSQL. For more information, see the Product life cycle page.
  • You have the following details:

    • db_host: Denotes your PostgreSQL instance Domain Name System (DNS) or IP address
    • db_port: Denotes your PostgreSQL instance port number, such as 5432
    • username: Denotes the user name to connect to your PostgreSQL instance
    • password: Denotes the password to connect to your PostgreSQL instance
  • You have installed the Red Hat Developer Hub Operator.
  • Optional: You have a CA certificate, Transport Layer Security (TLS) private key, and TLS certificate so that you can secure your database connection by using the TLS protocol. For more information, refer to your PostgreSQL vendor documentation.
Note

By default, Developer Hub uses a database for each plugin and automatically creates it if none exists. You might need the Create Database privilege in addition to PostgreSQL Database privileges for configuring an external PostgreSQL instance.

Procedure

  1. Optional: Create a certificate secret to configure your PostgreSQL instance with a TLS connection:

    $ cat <<EOF | oc -n my-rhdh-project create -f -
    apiVersion: v1
    kind: Secret
    metadata:
     name: my-rhdh-database-certificates-secrets
    type: Opaque
    stringData:
     postgres-ca.pem: |-
      -----BEGIN CERTIFICATE-----
      <ca_certificate_key>
     postgres-key.key: |-
      -----BEGIN CERTIFICATE-----
      <tls_private_key>
     postgres-crt.pem: |-
      -----BEGIN CERTIFICATE-----
      <tls_certificate_key>
      # ...
    EOF

    Where:

    my-rhdh-database-certificates-secrets
    The certificate secret name.
    <ca_certificate_key>
    The CA certificate key.
    <tls_private_key>
    Optional: The TLS private key.
    <tls_certificate_key>
    Optional: The TLS certificate key.
  2. Create a credential secret to connect to the PostgreSQL instance:

    $ cat <<EOF | oc -n my-rhdh-project create -f -
    apiVersion: v1
    kind: Secret
    metadata:
     name: my-rhdh-database-secrets
    type: Opaque
    stringData:
     POSTGRES_PASSWORD: <password>
     POSTGRES_PORT: "<db_port>"
     POSTGRES_USER: <username>
     POSTGRES_HOST: <db_host>
     PGSSLMODE: <ssl_mode>
     NODE_EXTRA_CA_CERTS: <abs_path_to_pem_file>
    EOF

    Where:

    my-rhdh-database-secrets
    The credential secret name.
    <password>
    The password to connect to your PostgreSQL instance.
    <db_port>
    Your PostgreSQL instance port number, such as 5432.
    <username>
    The user name to connect to your PostgreSQL instance.
    <db_host>
    Your PostgreSQL instance DNS or IP address.
    <ssl_mode>
    Optional: For TLS connections, the required SSL mode.
    <abs_path_to_pem_file>
    Optional: For TLS connections, the absolute path to the Privacy-Enhanced Mail (PEM) file, for example /opt/app-root/src/postgres-crt.pem.
  3. Create your Backstage custom resource (CR):

    $ cat <<EOF | oc -n my-rhdh-project create -f -
    apiVersion: rhdh.redhat.com/v1alpha5
    kind: Backstage
    metadata:
      name: <backstage_instance_name>
    spec:
      database:
        enableLocalDb: false
      application:
        extraFiles:
          mountPath: <path>
          secrets:
            - name: my-rhdh-database-certificates-secrets
              key: postgres-crt.pem, postgres-ca.pem, postgres-key.key
        extraEnvs:
          secrets:
            - name: my-rhdh-database-secrets
            # ...
    EOF

    Where:

    spec.database.enableLocalDb
    Set to false to disable creating local PostgreSQL instances.
    <path>
    The mount path for certificate files, for example /opt/app-root/src.
    my-rhdh-database-certificates-secrets
    The certificate secret name, required if you configure a TLS connection.
    key
    The key names as defined in the my-rhdh-database-certificates-secrets Secret.
    my-rhdh-database-secrets

    The credential secret name.

    Note

    The environment variables listed in the Backstage CR work with the Operator default configuration. If you have changed the Operator default configuration, you must reconfigure the Backstage CR accordingly.

  4. Apply the Backstage CR to the namespace where you have deployed the Developer Hub instance.

3.2. Configure an external PostgreSQL instance using the Helm Chart

Configure an external PostgreSQL instance by using the Helm Chart instead of the default local PostgreSQL instance.

Prerequisites

  • You meet the Sizing requirements for external PostgreSQL deployments.
  • You are using a supported version of PostgreSQL. For more information, see the Product life cycle page.
  • You have the following details:

    • db_host: Denotes your PostgreSQL instance Domain Name System (DNS) or IP address
    • db_port: Denotes your PostgreSQL instance port number, such as 5432
    • username: Denotes the user name to connect to your PostgreSQL instance
    • password: Denotes the password to connect to your PostgreSQL instance
  • You have installed the RHDH application by using the Helm Chart.
  • Optional: You have a CA certificate, Transport Layer Security (TLS) private key, and TLS certificate so that you can secure your database connection by using the TLS protocol. For more information, refer to your PostgreSQL vendor documentation.
Note

By default, Developer Hub uses a database for each plugin and automatically creates it if none exists. You might need the Create Database privilege in addition to PostgreSQL Database privileges for configuring an external PostgreSQL instance.

Procedure

  1. Optional: Create a certificate secret to configure your PostgreSQL instance with a TLS connection:

    $ cat <<EOF | oc -n <your_namespace> create -f -
    apiVersion: v1
    kind: Secret
    metadata:
     name: my-rhdh-database-certificates-secrets
    type: Opaque
    stringData:
     postgres-ca.pem: |-
      -----BEGIN CERTIFICATE-----
      <ca_certificate_key>
     postgres-key.key: |-
      -----BEGIN CERTIFICATE-----
      <tls_private_key>
     postgres-crt.pem: |-
      -----BEGIN CERTIFICATE-----
      <tls_certificate_key>
      # ...
    EOF

    Where:

    my-rhdh-database-certificates-secrets
    The certificate secret name.
    <ca_certificate_key>
    The CA certificate key.
    <tls_private_key>
    Optional: The TLS private key.
    <tls_certificate_key>
    Optional: The TLS certificate key.
  2. Create a credential secret to connect to the PostgreSQL instance:

    $ cat <<EOF | oc -n <your_namespace> create -f -
    apiVersion: v1
    kind: Secret
    metadata:
     name: my-rhdh-database-secrets
    type: Opaque
    stringData:
     POSTGRES_PASSWORD: <password>
     POSTGRES_PORT: "<db_port>"
     POSTGRES_USER: <username>
     POSTGRES_HOST: <db_host>
     PGSSLMODE: <ssl_mode>
     NODE_EXTRA_CA_CERTS: <abs_path_to_pem_file>
    EOF

    Where:

    my-rhdh-database-secrets
    The credential secret name.
    <password>
    The password to connect to your PostgreSQL instance.
    <db_port>
    Your PostgreSQL instance port number, such as 5432.
    <username>
    The user name to connect to your PostgreSQL instance.
    <db_host>
    Your PostgreSQL instance DNS or IP address.
    <ssl_mode>
    Optional: For TLS connections, the required SSL mode.
    <abs_path_to_pem_file>
    Optional: For TLS connections, the absolute path to the Privacy-Enhanced Mail (PEM) file, for example /opt/app-root/src/postgres-crt.pem.
  3. Configure your PostgreSQL instance in the Helm configuration file named values.yaml:

    # ...
    upstream:
      postgresql:
        enabled: false
        auth:
          existingSecret: my-rhdh-database-secrets
      backstage:
        appConfig:
          backend:
            database:
              connection:
                host: ${POSTGRES_HOST}
                port: ${POSTGRES_PORT}
                user: ${POSTGRES_USER}
                password: ${POSTGRES_PASSWORD}
                ssl:
                  rejectUnauthorized: true
                  ca:
                    $file: /opt/app-root/src/postgres-ca.pem
                  key:
                    $file: /opt/app-root/src/postgres-key.key
                  cert:
                    $file: /opt/app-root/src/postgres-crt.pem
      extraEnvVarsSecrets:
        - my-rhdh-database-secrets
      extraEnvVars:
        - name: BACKEND_SECRET
          valueFrom:
            secretKeyRef:
              key: backend-secret
              name: '{{ include "janus-idp.backend-secret-name" $ }}'
      extraVolumeMounts:
        - mountPath: /opt/app-root/src/dynamic-plugins-root
          name: dynamic-plugins-root
        - mountPath: /opt/app-root/src/postgres-crt.pem
          name: postgres-crt
          subPath: postgres-crt.pem
        - mountPath: /opt/app-root/src/postgres-ca.pem
          name: postgres-ca
          subPath: postgres-ca.pem
        - mountPath: /opt/app-root/src/postgres-key.key
          name: postgres-key
          subPath: postgres-key.key
      extraVolumes:
        - ephemeral:
            volumeClaimTemplate:
              spec:
                accessModes:
                  - ReadWriteOnce
                resources:
                  requests:
                    storage: 1Gi
          name: dynamic-plugins-root
        - configMap:
            defaultMode: 420
            name: dynamic-plugins
            optional: true
          name: dynamic-plugins
        - name: dynamic-plugins-npmrc
          secret:
            defaultMode: 420
            optional: true
            secretName: '{{ printf "%s-dynamic-plugins-npmrc" .Release.Name }}'
        - name: postgres-crt
          secret:
            secretName: my-rhdh-database-certificates-secrets
            # ...

    Where:

    upstream.postgresql.enabled
    Set to false to disable the local PostgreSQL instance creation.
    upstream.postgresql.auth.existingSecret
    The credentials secret to inject into Backstage.
    upstream.backstage.appConfig.backend.database.connection
    The Backstage database connection parameters.
    upstream.backstage.extraEnvVarsSecrets
    The credentials secret to inject as environment variables into Backstage.
    extraVolumeMounts (postgres-crt, postgres-ca, postgres-key)
    Optional: Inject TLS certificate, CA certificate, and TLS private key into the Backstage container.
    extraVolumes (postgres-crt)
    The certificate secret name, required if you configure TLS.
  4. Apply the configuration changes in your Helm configuration file named values.yaml:

    $ helm upgrade -n <your_namespace> <your_deploy_name> openshift-helm-charts/redhat-developer-hub -f values.yaml --version 1.9.0

3.3. Migrate local databases to an external database server using the Operator

Migrate data from a local PostgreSQL server to an external PostgreSQL service by using PostgreSQL utilities such as pg_dump and psql.

Note

The following procedure uses a database copy script to do a quick migration.

Prerequisites

  • You have installed the pg_dump and psql utilities on your local machine.
  • For data export, you have the PostgreSQL user privileges to make a full dump of local databases.
  • For data import, you have the PostgreSQL admin privileges to create an external database and populate it with database dumps.

Procedure

  1. Configure port forwarding for the local PostgreSQL database pod by running the following command on a terminal:

    $ oc port-forward -n <your_namespace> <pgsql_pod_name> <forward_to_port>:<forward_from_port>

    Where:

  2. The <pgsql_pod_name> variable denotes the name of a PostgreSQL pod with the format backstage-psql-<deployment_name>-<_index>.
  3. The <forward_to_port> variable denotes the port of your choice to forward PostgreSQL data to.
  4. The <forward_from_port> variable denotes the local PostgreSQL instance port, such as 5432.

    Example: Configuring port forwarding

    $ oc port-forward -n developer-hub backstage-psql-developer-hub-0 15432:5432

  5. Make a copy of the following db_copy.sh script and edit the details based on your configuration:

    #!/bin/bash
    
    to_host=<db_service_host>
    to_port=5432
    to_user=postgres
    
    from_host=127.0.0.1
    from_port=15432
    from_user=postgres
    
    allDB=("backstage_plugin_app" "backstage_plugin_auth" "backstage_plugin_catalog" "backstage_plugin_permission" "backstage_plugin_scaffolder" "backstage_plugin_search")
    
    for db in ${!allDB[@]};
    do
      db=${allDB[$db]}
      echo Copying database: $db
      PGPASSWORD=$TO_PSW psql -h $to_host -p $to_port -U $to_user -c "create database $db;"
      pg_dump -h $from_host -p $from_port -U $from_user -d $db | PGPASSWORD=$TO_PSW psql -h $to_host -p $to_port -U $to_user -d $db
    done

    Where:

    to_host
    The destination hostname, for example <db_instance_name>.rds.amazonaws.com.
    to_port
    The destination port, such as 5432.
    to_user
    The destination server username, for example postgres.
    from_host
    The source hostname, such as 127.0.0.1.
    from_port
    The source port number, such as the <forward_to_port> variable.
    from_user
    The source server username, for example postgres.
    allDB
    Database names to import, in double quotes separated by spaces.
  3. Create a destination database for copying the data:

    TO_PSW=<destination_db_password> /bin/bash /path/to/db_copy.sh

    Replace <destination_db_password> with the password to connect to the destination database.

    Note

    You can stop port forwarding when the copying of the data is complete. For more information about handling large databases and using the compression tools, see the Handling Large Databases section on the PostgreSQL website.

  4. Reconfigure your Backstage custom resource (CR). For more information, see Configure an external PostgreSQL instance using the Operator.
  5. Check that the following code is present at the end of your Backstage CR after reconfiguration:

    # ...
    spec:
      database:
        enableLocalDb: false
      application:
      # ...
        extraFiles:
          secrets:
            - name: my-rhdh-database-certificates-secrets
              key: postgres-crt.pem
        extraEnvs:
          secrets:
            - name: my-rhdh-database-secrets
    # ...
    Note

    Reconfiguring the Backstage CR deletes the corresponding StatefulSet and Pod objects, but does not delete the PersistentVolumeClaim object. Use the following command to delete the local PersistentVolumeClaim object:

    oc -n developer-hub delete pvc <local_psql_pvc_name>

    Where the <local_psql_pvc_name> variable is in the data-<psql_pod_name> format.

  6. Apply the configuration changes.

Verification

  1. Verify that your RHDH instance is running with the migrated data and does not contain the local PostgreSQL database by running the following command:

    oc get pods -n <your_namespace>
  2. Check the output for the following details:

    • The backstage-developer-hub-xxx pod is in the Running state.
    • The backstage-psql-developer-hub-0 pod is not available.

    You can also verify these details by using the Topology view in the OpenShift Container Platform web console.

4. Configure high availability in Red Hat Developer Hub

Configure high availability to ensure continuous service accessibility by eliminating single points of failure through redundancy and failover mechanisms.

Red Hat Developer Hub supports HA deployments on the following platforms:

  • Red Hat OpenShift Container Platform
  • Azure Kubernetes Service
  • Elastic Kubernetes Service
  • Google Kubernetes Engine

The HA deployments enable more resilient and reliable service availability across supported environments.

In a single instance deployment, a failure makes the entire service unavailable. Software crashes, hardware issues, or other disruptions can interrupt development workflows and access to key resources.

With HA enabled, you can scale the number of backend replicas to introduce redundancy. This setup ensures that if one pod or component fails, others continue to serve requests without disruption. The built-in load balancer manages ingress traffic and distributes the load across the available pods. Meanwhile, the RHDH backend manages concurrent requests and resolves resource-level conflicts effectively.

As an administrator, you can configure high availability by adjusting replica values in your configuration file:

  • If you installed using the Operator, configure the replica values in your Backstage custom resource.
  • If you used the Helm chart, set the replica values in the Helm configuration.

4.1. Configure high availability in a Red Hat Developer Hub Operator deployment

Configure high availability for Operator deployments by setting the replicas field to a value greater than 1 in the custom resource.

Procedure

  • In your Backstage custom resource (CR), set replicas to a value greater than 1.

    For example, to configure two replicas for redundancy:

    apiVersion: rhdh.redhat.com/v1alpha5
    kind: Backstage
    metadata:
      name: <your_cr_name>
    spec:
      deployment:
        patch:
          spec:
            replicas: 2

4.2. Configure high availability in a Red Hat Developer Hub Helm chart deployment

Configure high availability for Helm deployments by setting the replicas value to greater than 1 in the Helm configuration file.

Procedure

  • In your Helm chart configuration file, set replicas to a value greater than 1.

    For example, to configure two replicas for redundancy:

    upstream:
      backstage:
        replicas: 2

5. Run Red Hat Developer Hub behind a corporate proxy

In a network restricted environment, configure Red Hat Developer Hub to use your proxy to access remote network resources.

You can run the Developer Hub application behind a corporate proxy by setting any of the following environment variables before starting the application:

HTTP_PROXY
Denotes the proxy to use for HTTP requests.
HTTPS_PROXY
Denotes the proxy to use for HTTPS requests.
NO_PROXY
Set the environment variable to bypass the proxy for certain domains. The variable value is a comma-separated list of hostnames or IP addresses that do not require the proxy, even if you specify one.

5.1. The NO_PROXY exclusion rules

Configure NO_PROXY to bypass the proxy for specific hostnames, IP addresses, and port numbers when using Developer Hub.

Note

The default value for NO_PROXY in RHDH is localhost,127.0.0.1. If you want to override it, include at least localhost or localhost:7007 in the list. Otherwise, the RHDH backend might fail.

Matching follows these rules:

  • NO_PROXY=* will bypass the proxy for all requests.
  • Spaces and commas can separate the entries in the NO_PROXY list. For example, NO_PROXY="localhost,example.com", NO_PROXY="localhost example.com", and NO_PROXY="localhost, example.com" have the same effect.
  • If NO_PROXY has no entries, configuring the HTTP(S)_PROXY settings makes the backend send all requests through the proxy.
  • The backend does not perform a DNS lookup to decide if a request should bypass the proxy or not. For example, if DNS resolves example.com to 1.2.3.4, setting NO_PROXY=1.2.3.4 has no effect on requests sent to example.com. Only requests sent to the IP address 1.2.3.4 bypass the proxy.
  • If you add a port after the hostname or IP address, the request must match both the host/IP and port to bypass the proxy. For example, NO_PROXY=example.com:1234 would bypass the proxy for requests to http(s)://example.com:1234, but not for requests on other ports, such as http(s)://example.com.
  • If you do not specify a port after the hostname or IP address, all requests to that host/IP address will bypass the proxy regardless of the port. For example, NO_PROXY=localhost would bypass the proxy for requests sent to URLs such as http(s)://localhost:7077 and http(s)://localhost:8888.
  • IP address blocks in CIDR notation do not work. For example, setting NO_PROXY=10.11.0.0/16 has no effect, even if the backend sends a request to an IP address in that block.
  • NO_PROXY supports only IPv4 addresses. IPv6 addresses such as ::1 do not work.
  • Generally, the proxy is only bypassed if the hostname is an exact match for an entry in the NO_PROXY list. The only exceptions are entries that start with a dot (.) or with a wildcard (*); in those cases, the proxy is bypassed if the hostname ends with the entry.
Note

List the domain and the wildcard domain if you want to exclude a given domain and all its subdomains. For example, you would set NO_PROXY=example.com,.example.com to bypass the proxy for requests sent to http(s)://example.com and http(s)://subdomain.example.com.
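The exact-match, dot-prefix, and wildcard rules above can be sketched as a small shell function. This is an illustration of the matching logic for a single NO_PROXY entry, not RHDH source code; it deliberately ignores port handling and comma splitting:

```shell
# Illustrative matcher for one NO_PROXY entry (a sketch, not RHDH code).
# Returns 0 (bypass the proxy) or 1 (use the proxy).
no_proxy_match() {
  local host=$1 entry=$2
  case $entry in
    "*") return 0 ;;                               # wildcard: bypass everything
    .*)  [[ $host == *"$entry" ]] && return 0 ;;   # dot prefix: suffix match
    *)   [[ $host == "$entry" ]] && return 0 ;;    # otherwise: exact match only
  esac
  return 1
}

# example.com does not end with ".example.com", so list both forms
# (NO_PROXY=example.com,.example.com) to cover a domain and its subdomains.
no_proxy_match example.com .example.com || echo "proxied"
no_proxy_match sub.example.com .example.com && echo "bypassed"
```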

5.2. Configure proxy information in Operator deployment

Configure proxy settings for Operator-based deployments by setting environment variables in the ConfigMap or custom resource file.

  • As a cluster administrator with access to the Operator namespace, you can configure the proxy variables in the Operator’s default ConfigMap file. This configuration applies the proxy settings to all the users of the Operator.
  • As a developer, you can configure the proxy variables in a custom resource (CR) file. This configuration applies the proxy settings to the RHDH application created from that CR.

Prerequisites

  • You have installed the Red Hat Developer Hub application.

Procedure

  1. Perform one of the following steps based on your role:

    • As an administrator, set the proxy information in the Operator’s default ConfigMap file:

    1. Search for a ConfigMap file named backstage-default-config in the default namespace rhdh-operator and open it.
    2. Find the deployment.yaml key.
    3. Set the value of the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables in the Deployment spec as shown in the following example:

      Example: Setting proxy variables in a ConfigMap file

      # ...
        deployment.yaml: |-
          apiVersion: apps/v1
          kind: Deployment
          spec:
            template:
              spec:
                # ...
                initContainers:
                  - name: install-dynamic-plugins
                    # ...
                    env:
                      - name: NPM_CONFIG_USERCONFIG
                        value: /opt/app-root/src/.npmrc.dynamic-plugins
                      - name: HTTP_PROXY
                        value: 'http://10.10.10.105:3128'
                      - name: HTTPS_PROXY
                        value: 'http://10.10.10.106:3128'
                      - name: NO_PROXY
                        value: 'localhost,example.org'
                    # ...
                containers:
                  - name: backstage-backend
                    # ...
                    env:
                      - name: APP_CONFIG_backend_listen_port
                        value: "7007"
                      - name: HTTP_PROXY
                        value: 'http://10.10.10.105:3128'
                      - name: HTTPS_PROXY
                        value: 'http://10.10.10.106:3128'
                      - name: NO_PROXY
                        value: 'localhost,example.org'

    • As a developer, set the proxy information in your Backstage CR file as shown in the following example:

    Example: Setting proxy variables in a CR file

    spec:
      # ...
      application:
        extraEnvs:
          envs:
            - name: HTTP_PROXY
              value: 'http://10.10.10.105:3128'
            - name: HTTPS_PROXY
              value: 'http://10.10.10.106:3128'
            - name: NO_PROXY
              value: 'localhost,example.org'

  2. Save the configuration changes.

5.3. Configure proxy information in Helm deployment

Configure proxy settings for Helm-based deployments by setting environment variables in the Helm configuration file.

Prerequisites

  • You have installed the Red Hat Developer Hub application.

Procedure

  1. Set the proxy information in your Helm configuration file:

    upstream:
      backstage:
        extraEnvVars:
          - name: HTTP_PROXY
            value: '<http_proxy_url>'
          - name: HTTPS_PROXY
            value: '<https_proxy_url>'
          - name: NO_PROXY
            value: '<no_proxy_settings>'

    Where,

    <http_proxy_url>
    Denotes a variable that you must replace with the HTTP proxy URL.
    <https_proxy_url>
    Denotes a variable that you must replace with the HTTPS proxy URL.
    <no_proxy_settings>

    Denotes a variable that you must replace with comma-separated URLs, which you want to exclude from proxying, for example, <example1>.com,<example2>.com.

    Example: Setting proxy variables by using the Helm Chart

    upstream:
      backstage:
        extraEnvVars:
          - name: HTTP_PROXY
            value: 'http://10.10.10.105:3128'
          - name: HTTPS_PROXY
            value: 'http://10.10.10.106:3128'
          - name: NO_PROXY
            value: 'localhost,example.org'

  2. Save the configuration changes.

6. Use the dynamic plugins cache

Use the dynamic plugins cache to reduce platform boot time by storing already-installed plugins and avoiding redundant downloads.

6.1. Dynamic plugins cache

The dynamic plugins cache reduces platform boot time by storing already-installed plugins and skipping redundant downloads when the configuration does not change.

When you enable dynamic plugins cache:

  • The system calculates a checksum of each plugin’s YAML configuration (excluding pluginConfig).
  • The system stores the checksum in a file named dynamic-plugin-config.hash within the plugin’s directory.
  • During boot, if a plugin’s package reference matches the earlier installation and the checksum does not change, the system skips the download.
  • The system automatically removes plugins that you disabled since the earlier boot.
Note

To enable the dynamic plugins cache in RHDH, the plugins directory dynamic-plugins-root must be a persistent volume.
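Conceptually, the checksum step works like the following sketch. The dynamic-plugin-config.hash file name comes from the list above; the specific choice of sha256sum here is an assumed detail for illustration, not the verified RHDH implementation:

```shell
# Sketch: compute a checksum of a plugin's YAML entry (with pluginConfig
# excluded) and store it next to the installed plugin. The hashing tool
# shown (sha256sum) is an assumption for illustration purposes.
plugin_entry='package: oci://quay.io/example-org/example-plugin:v1.0.0!internal-backstage-plugin-example
disabled: false'

checksum=$(printf '%s' "$plugin_entry" | sha256sum | awk '{print $1}')
echo "$checksum" > dynamic-plugin-config.hash

# On the next boot, an unchanged entry produces the same checksum,
# so the download can be skipped.
cat dynamic-plugin-config.hash
```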

6.2. Create a PVC for the dynamic plugin cache by using the Operator

Create a persistent volume claim for the dynamic plugin cache in Operator deployments by replacing the default dynamic-plugins-root volume.

Prerequisites

  • You have installed Red Hat Developer Hub on OpenShift Container Platform using the Red Hat Developer Hub Operator.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create the persistent volume definition and save it to a file, such as pvc.yaml. For example:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: dynamic-plugins-root
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
    Note

    This example uses ReadWriteOnce as the access mode, which prevents multiple replicas from sharing the PVC across different nodes. To run multiple replicas on different nodes, you must use an access mode such as ReadWriteMany, depending on your storage driver.

  2. To apply this PVC to your cluster, run the following command:

    $ oc apply -f pvc.yaml
  3. Replace the default dynamic-plugins-root volume with a PVC named dynamic-plugins-root. For example:

    apiVersion: rhdh.redhat.com/v1alpha5
    kind: Backstage
    metadata:
      name: developer-hub
    spec:
      deployment:
        patch:
          spec:
            template:
              spec:
                volumes:
                  - $patch: replace
                    name: dynamic-plugins-root
                    persistentVolumeClaim:
                      claimName: dynamic-plugins-root
    Note

    To avoid adding a new volume, you must use the $patch: replace directive.

6.3. Fix 404 error after cached dynamic plugins configuration change

When many Developer Hub replicas share a single dynamic plugins cache PVC, updating configurations with the Operator can trigger temporary 404 errors. This occurs because the replicas might access inconsistent cache states during the update process, before all replicas have synchronized.

The solution is to use an individual cache per pod.

Prerequisites

  • Your API version is v1alpha5 or later.

Procedure

  1. In the Backstage Custom Resource (CR) file, set spec.deployment to use the optional StatefulSet as a resource kind. For example:

    apiVersion: rhdh.redhat.com/v1alpha5
    kind: Backstage
    metadata:
      name: <CR_name>
    ...
    spec:
     deployment:
      kind: StatefulSet
      patch:
       spec:
         replicas: 2
         template:
           spec:
             volumes:
               - $patch: replace
                 name: dynamic-plugins-root
                 persistentVolumeClaim:
                   claimName: dynamic-plugins-root
         volumeClaimTemplates:
           - apiVersion: v1
             kind: PersistentVolumeClaim
             metadata:
               name: dynamic-plugins-root
             spec:
               accessModes:
                 - ReadWriteOnce
               resources:
                 requests:
                   storage: 1Gi
    Note

    Using a StatefulSet with a single replica can lead to downtime while the old pod is deleted and the new pod is created.

  2. Wait a few minutes until the Operator reconciles the CR and the StatefulSet resource is ready.
  3. If you are updating an existing CR, remove the earlier Deployment resource from the cluster:

    oc delete deployment -l app.kubernetes.io/instance=<CR_name>
    Note

    The same requirement applies for changing the resource kind from StatefulSet to Deployment. You must manually delete the resource created before from the cluster, because the Operator does not automatically remove the legacy resource.

6.4. Create a PVC for the dynamic plugin cache using the Helm Chart

Create a persistent volume claim for the dynamic plugin cache in Helm deployments to persist the cache across pod restarts.

Prerequisites

  • You have installed Red Hat Developer Hub using the Helm chart.
  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create the persistent volume definition. For example:

    kind: PersistentVolumeClaim
    apiVersion: v1
    metadata:
      name: dynamic-plugins-root
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi
    Note

    This example uses ReadWriteOnce as the access mode, which prevents multiple replicas from sharing the PVC across different nodes. To run multiple replicas on different nodes, you must use an access mode such as ReadWriteMany, depending on your storage driver.

  2. To apply this PVC to your cluster, run the following command:

    $ oc apply -f pvc.yaml
  3. Configure the Helm chart to use the PVC. For example:

    upstream:
      backstage:
        extraVolumes:
          - name: dynamic-plugins-root
            persistentVolumeClaim:
              claimName: dynamic-plugins-root
          - name: dynamic-plugins
            configMap:
              defaultMode: 420
              name: '{{ printf "%s-dynamic-plugins" .Release.Name }}'
              optional: true
          - name: dynamic-plugins-npmrc
            secret:
              defaultMode: 420
              optional: true
              secretName: '{{ printf "%s-dynamic-plugins-npmrc" .Release.Name }}'
          - name: dynamic-plugins-registry-auth
            secret:
              defaultMode: 416
              optional: true
              secretName: '{{ printf "%s-dynamic-plugins-registry-auth" .Release.Name }}'
          - name: npmcacache
            emptyDir: {}
          - name: temp
            emptyDir: {}
    Note

    When you configure the Helm chart to use the PVC, you must also include the extraVolumes section defined in the default Helm chart values.

6.5. Configure the dynamic plugins cache

Configure the dynamic plugins cache by setting pull policy and download parameters in the dynamic-plugins.yaml file.

Procedure

  • To configure the dynamic plugins cache, set the following optional dynamic plugin cache parameters in your dynamic-plugins.yaml file:

    pullPolicy: IfNotPresent (default)
    Download the artifact if it is not already present in the dynamic-plugins-root folder, without checking image digests.
    pullPolicy: Always

    Compare the image digest in the remote registry and download the artifact if it has changed, even if Developer Hub has already downloaded the plugin before.

    When applied to the Node Package Manager (NPM) downloading method, download the remote artifact without a digest check.

    Example dynamic-plugins.yaml file configuration to download the remote artifact without a digest check:

    plugins:
      - disabled: false
        pullPolicy: Always
        package: 'oci://quay.io/example-org/example-plugin:v1.0.0!internal-backstage-plugin-example'
    forceDownload: false (default)
    Older option to download the artifact if it is not already present in the dynamic-plugins-root folder, without checking image digests.
    forceDownload: true

    Older option to force a reinstall of the plugin, bypassing the cache.

    Note

    The pullPolicy option takes precedence over the forceDownload option.

    The forceDownload option might become deprecated in a future Developer Hub release.

7. Enable the Red Hat Developer Hub plugin assets cache

Use a Redis cache store to improve Developer Hub performance and reliability by caching plugin assets.

Prerequisites

  • You have installed Red Hat Developer Hub.
  • You have an active Redis server. For more information on setting up an external Redis server, see the official Redis documentation.

Procedure

  1. Enable the Developer Hub cache by defining Redis as the cache store type and entering your Redis server connection URL in your app-config.yaml file.

    app-config.yaml file fragment

    backend:
      cache:
        store: redis
        connection: redis://user:pass@cache.example.com:6379

  2. Enable the cache for TechDocs by adding the techdocs.cache.ttl setting in your app-config.yaml file. This setting specifies how long, in milliseconds, a statically built asset should stay in the cache.

    app-config.yaml file fragment

    techdocs:
      cache:
        ttl: 3600000

    Tip

    Optionally, enable the cache for other plugins that support this feature. See the documentation for each plugin for details.
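For reference, the backend cache and the TechDocs TTL settings shown above can live together in a single app-config.yaml fragment (values repeated from the earlier examples):

```yaml
backend:
  cache:
    store: redis
    connection: redis://user:pass@cache.example.com:6379

techdocs:
  cache:
    ttl: 3600000  # one hour, in milliseconds
```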

8. Inject extra files and environment variables into Backstage containers

Inject extra files and environment variables into Backstage containers by mounting ConfigMaps and Secrets by using the mountPath field.

  • If you do not specify key and mountPath: The system mounts each key or value as a filename or content with a subPath.
  • If you specify key with or without mountPath: The system mounts the specified key or value with a subPath.
  • If you specify only mountPath: The system mounts a directory containing all the keys or values without a subPath.
  • If you do not specify the containers field: The volume mounts only to the backstage-backend container by default. You can also specify other targets, such as a list of containers by name (for example, install-dynamic-plugins or custom sidecars), or select all containers in the Backstage Pod.

    Note
    • OpenShift Container Platform does not automatically update a volume mounted with subPath. By default, the RHDH Operator monitors these ConfigMaps or Secrets and refreshes the RHDH Pod when changes occur.
    • For security purposes, Red Hat Developer Hub does not give the Operator Service Account read access to Secrets. As a result, mounting files from Secrets without specifying both mountPath and key is not supported.

Procedure

  1. Apply the configuration to your Backstage custom resource (CR). The following code block is an example:

    spec:
      application:
        extraFiles:
          mountPath: <default_mount_path>
          configMaps:
            - name: <configmap_name_all_entries>
            - name: <configmap_name_single_key>
              key: <specific_file_key>
              containers:
                - "*"
            - name: <configmap_name_custom_path>
              mountPath: <custom_cm_mount_path>
              containers:
                - backstage-backend
                - install-dynamic-plugins
          secrets:
            - name: <secret_name_single_key>
              key: <specific_secret_key>
              containers:
                - install-dynamic-plugins
            - name: <secret_name_custom_path>
              mountPath: <custom_secret_mount_path>
          pvcs:
            - name: <pvc_name_default_path>
            - name: <pvc_name_custom_path>
              mountPath: <custom_pvc_mount_path>
        extraEnvs:
          configMaps:
            - name: <configmap_name_env_var>
              key: <env_var_key>
              containers:
                - "*"
          secrets:
            - name: <secret_name_all_envs>
          envs:
            - name: <static_env_var_name>
              value: "<static_env_var_value>"
              containers:
                - install-dynamic-plugins

    where:

    spec.application.extraFiles.mountPath
    Specifies the default base mount path for files if you do not set a specific mountPath for a resource (for example, /<default_mount_path>).
    spec.application.extraFiles.configMaps.name
    Mounts all entries from <configmap_name_all_entries> to the default mount path.
    spec.application.extraFiles.configMaps.key
    Mounts only the specified key (for example, <specific_file_key>.txt) from the ConfigMap.
    spec.application.extraFiles.configMaps.containers
    Targets all containers ("*") for mounting.
    spec.application.extraFiles.configMaps.mountPath
    Overrides the default and mounts all ConfigMap entries as a directory at the specified path (for example, /<custom_cm_mount_path>).
    spec.application.extraFiles.secrets.key
    Mounts only a specific key from the Secret.
    spec.application.extraFiles.secrets.mountPath
    Overrides the default and mounts all Secret entries as a directory at the specified path (for example, /<custom_secret_mount_path>).
    spec.application.extraFiles.pvcs.name
    Mounts the PVC to the default mount path, appending the PVC name (for example, /<default_mount_path>/<pvc_name_default_path>).
    spec.application.extraFiles.pvcs.mountPath
    Overrides the default and mounts the PVC to the specified path (for example, /<custom_pvc_mount_path>).
    spec.application.extraEnvs.configMaps.containers
    Injects the specified ConfigMap key as an environment variable into all containers ("*").
    spec.application.extraEnvs.secrets.name
    Injects all keys from the Secret as environment variables into the default container.
    spec.application.extraEnvs.envs.containers

    Targets only the listed container for the static environment variable injection.

    Note

    The following explicit options are supported:

    1. An absent or empty containers field: Mounts only to the backstage-backend container.
    2. * (asterisk) as the first and only array element: Mounts to all containers.
    3. Explicit container names, for example, install-dynamic-plugins: Mounts only to the listed containers.

Verification

Verify that the files are mounted with the correct paths and container targets:

Resource | Target type | Path(s) or name(s) | Container(s)
ConfigMap (<configmap_name_all_entries>) | File | /<default_mount_path>/<file_1_key>, /<default_mount_path>/<file_2_key> | backstage-backend
ConfigMap (<configmap_name_single_key>) | File | /<default_mount_path>/<specific_file_key>.txt | All
ConfigMap (<configmap_name_custom_path>) | Directory | /<custom_cm_mount_path>/ | backstage-backend, install-dynamic-plugins
Secret (<secret_name_single_key>) | File | /<default_mount_path>/<specific_secret_key>.txt | install-dynamic-plugins
Secret (<secret_name_custom_path>) | Directory | /<custom_secret_mount_path>/ | backstage-backend
PVC (<pvc_name_default_path>) | Directory | /<default_mount_path>/<pvc_name_default_path> | backstage-backend
ConfigMap (<configmap_name_env_var>) | Environment variable | <env_var_key> | All
Secret (<secret_name_all_envs>) | Environment variable | <secret_key_a>, <secret_key_b> | backstage-backend
Custom Resource Definition (CRD) (envs) | Environment variable | <static_env_var_name> = <static_env_var_value> | install-dynamic-plugins

9. Configure mount paths for default Secrets and Persistent Volume Claims (PVCs)

Configure custom mount paths for Secrets and PVCs by adding the rhdh.redhat.com/mount-path annotation to your resource.

Procedure

  1. To specify a PVC mount path, add the rhdh.redhat.com/mount-path annotation to your configuration file as shown in the following example:

    Example specifying the PVC mount path

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: <my_claim>
      annotations:
        rhdh.redhat.com/mount-path: /mount/path/from/annotation

    Where:

    <my_claim>
    The PVC to mount.
    rhdh.redhat.com/mount-path
    The mount path for the PVC, in this case the /mount/path/from/annotation directory.
  2. To specify a Secret mount path, add the rhdh.redhat.com/mount-path annotation to your configuration file as shown in the following example:

    Example specifying where the Secret mounts

    apiVersion: v1
    kind: Secret
    metadata:
      name: <my_secret>
      annotations:
        rhdh.redhat.com/mount-path: /mount/path/from/annotation

10. Mount secrets and PVCs to specific containers

Mount secrets and PVCs to specific containers by adding the rhdh.redhat.com/containers annotation to your configuration file.

Procedure

  1. To mount Secrets to all containers, set the rhdh.redhat.com/containers annotation to * in your configuration file:

    Example mounting to all containers

    apiVersion: v1
    kind: Secret
    metadata:
      name: <my_secret>
      annotations:
        rhdh.redhat.com/containers: "*"

    Important

    Set rhdh.redhat.com/containers to * to mount it to all containers in the deployment.

  2. To mount to specific containers, separate the names with commas:

    Example separating the list of containers

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: <my_claim>
      annotations:
        rhdh.redhat.com/containers: "init-dynamic-plugins,backstage-backend"

    Note

    This configuration mounts the <my_claim> PVC to the init-dynamic-plugins and backstage-backend containers.

11. Configure Red Hat Developer Hub deployment when using the Operator

Configure Red Hat Developer Hub deployment by using the spec.deployment.patch field in the Red Hat Developer Hub Operator custom resource to control the Deployment resource.

Create a Backstage CR with the following fields:

apiVersion: rhdh.redhat.com/v1alpha5
kind: Backstage
metadata:
  name: developer-hub
spec:
  deployment:
    patch:
      spec:
        template:
          # ...

labels

Add labels to the Developer Hub pod.

Example adding the label my=true

apiVersion: rhdh.redhat.com/v1alpha5
kind: Backstage
metadata:
  name: developer-hub
spec:
  deployment:
    patch:
      spec:
        template:
          metadata:
            labels:
              my: "true"

volumes

Add an additional volume named my-volume and mount it under /my/path in the Developer Hub application container.

apiVersion: rhdh.redhat.com/v1alpha5
kind: Backstage
metadata:
  name: developer-hub
spec:
  deployment:
    patch:
      spec:
        template:
          spec:
            containers:
              - name: backstage-backend
                volumeMounts:
                  - mountPath: /my/path
                    name: my-volume
            volumes:
              - ephemeral:
                  volumeClaimTemplate:
                    spec:
                      storageClassName: "special"
                name: my-volume

Replace the default dynamic-plugins-root volume with a persistent volume claim (PVC) named dynamic-plugins-root. Note the $patch: replace directive, otherwise the system adds a new volume.

apiVersion: rhdh.redhat.com/v1alpha5
kind: Backstage
metadata:
  name: developer-hub
spec:
  deployment:
    patch:
      spec:
        template:
          spec:
            volumes:
              - $patch: replace
                name: dynamic-plugins-root
                persistentVolumeClaim:
                  claimName: dynamic-plugins-root
cpu request

Set the CPU request for the Developer Hub application container to 250m.

Example CPU request

apiVersion: rhdh.redhat.com/v1alpha5
kind: Backstage
metadata:
  name: developer-hub
spec:
  deployment:
    patch:
      spec:
        template:
          spec:
            containers:
              - name: backstage-backend
                resources:
                  requests:
                    cpu: 250m

my-sidecar container

Add a new sidecar container named my-sidecar to the Developer Hub pod.

Example sidecar container

apiVersion: rhdh.redhat.com/v1alpha5
kind: Backstage
metadata:
  name: developer-hub
spec:
  deployment:
    patch:
      spec:
        template:
          spec:
            containers:
              - name: my-sidecar
                image: quay.io/my-org/my-sidecar:latest


12. Configure an RHDH instance with a TLS connection in Kubernetes

Configure RHDH with a TLS connection in Kubernetes to ensure secure connections with third-party applications and external databases.

Prerequisites

  • You have set up an Azure Red Hat OpenShift (ARO) cluster with a public CA-signed certificate. For more information about obtaining CA certificates, refer to your vendor documentation.
  • You have created a namespace and set up a service account with the proper read permissions on resources.

    Example: Kubernetes manifest for role-based access control

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: backstage-read-only
    rules:
      - apiGroups:
          - '*'
        resources:
          - pods
          - configmaps
          - services
          - deployments
          - replicasets
          - horizontalpodautoscalers
          - ingresses
          - statefulsets
          - limitranges
          - resourcequotas
          - daemonsets
        verbs:
          - get
          - list
          - watch
    #...

  • You have obtained the secret and the service CA certificate associated with your service account.
  • You have created some resources and added annotations to them so the Kubernetes plugin can discover them. You can apply these Kubernetes annotations:

    • backstage.io/kubernetes-id to label components
    • backstage.io/kubernetes-namespace to label namespaces
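
For example, a component's catalog-info.yaml entry can carry these annotations so that the Kubernetes plugin discovers the matching workloads. The component name my-service and namespace my-namespace are illustrative; the same backstage.io/kubernetes-id value must also be set as a label on the Kubernetes resources themselves:

```yaml
# catalog-info.yaml (component and namespace names are illustrative)
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: my-service
  annotations:
    backstage.io/kubernetes-id: my-service
    backstage.io/kubernetes-namespace: my-namespace
spec:
  type: service
  lifecycle: production
  owner: guests
```
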

Procedure

  1. Enable the Kubernetes plugins in the dynamic-plugins-rhdh.yaml file by setting disabled to false:

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: dynamic-plugins-rhdh
    data:
      dynamic-plugins.yaml: |
        includes:
          - dynamic-plugins.default.yaml
        plugins:
          - package: ./dynamic-plugins/dist/backstage-plugin-kubernetes-backend-dynamic
            disabled: false
          - package: ./dynamic-plugins/dist/backstage-plugin-kubernetes
            disabled: false
            # ...
    Note

    The backstage-plugin-kubernetes plugin is currently in Technology Preview. As an alternative, you can use the ./dynamic-plugins/dist/backstage-plugin-topology-dynamic plugin, which is Generally Available (GA).

  2. Set the Kubernetes cluster details and configure the catalog sync options in the app-config.yaml configuration file:

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: my-rhdh-app-config
    data:
      "app-config.yaml": |
        # ...
        catalog:
          rules:
            - allow: [Component, System, API, Resource, Location]
          providers:
            kubernetes:
              openshift:
                cluster: openshift
                processor:
                  namespaceOverride: default
                  defaultOwner: guests
                schedule:
                  frequency:
                    seconds: 30
                  timeout:
                    seconds: 5
        kubernetes:
          serviceLocatorMethod:
            type: 'multiTenant'
          clusterLocatorMethods:
            - type: 'config'
              clusters:
                - url: <target_cluster_api_server_url>
                  name: openshift
                  authProvider: 'serviceAccount'
                  skipTLSVerify: false
                  skipMetricsLookup: true
                  dashboardUrl: <target_cluster_console_url>
                  dashboardApp: openshift
                  serviceAccountToken: ${K8S_SERVICE_ACCOUNT_TOKEN}
                  caData: ${K8S_CONFIG_CA_DATA}
                  # ...
    url
    The base URL to the Kubernetes control plane. You can run the kubectl cluster-info command to get the base URL.
    skipTLSVerify
    Set the value of this parameter to false to enable the verification of the TLS certificate.
    dashboardUrl
    (Optional) The link to the Kubernetes dashboard managing the ARO cluster.
    serviceAccountToken
    (Optional) Pass the service account token by using a K8S_SERVICE_ACCOUNT_TOKEN environment variable that you define in your <my_product_secrets> secret.
    caData
    Pass the CA data by using a K8S_CONFIG_CA_DATA environment variable that you define in your <my_product_secrets> secret.
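
The two environment variables can be populated from the service account secret. A sketch with the OpenShift CLI, assuming an illustrative service account secret name my-sa-secret and the <my_product_secrets> secret referenced above; note that caData expects the base64-encoded certificate, so only the token is decoded:

```shell
# Decode the long-lived service account token; keep the CA certificate
# base64-encoded, because caData expects base64-encoded PEM data.
TOKEN=$(oc get secret my-sa-secret -o jsonpath='{.data.token}' | base64 -d)
CA_DATA=$(oc get secret my-sa-secret -o jsonpath='{.data.ca\.crt}')

# Store both values in the secret that Developer Hub reads as environment variables.
oc create secret generic <my_product_secrets> \
  --from-literal=K8S_SERVICE_ACCOUNT_TOKEN="$TOKEN" \
  --from-literal=K8S_CONFIG_CA_DATA="$CA_DATA"
```
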
  3. Save the configuration changes.

Verification

  1. Start the RHDH application and watch the pods to confirm that it is running and ready to import your catalog:

    $ kubectl -n rhdh-operator get pods -w
  2. Verify that the pod log shows no errors for your configuration.
  3. Go to Catalog and check the component page in the Developer Hub instance to verify the cluster connection and the presence of your created resources.
Note

If you encounter connection errors, such as certificate issues or insufficient permissions, check the message box on the component page or view the logs of the pod.

13. Troubleshoot Developer Hub configuration issues

Resolve common configuration issues in Developer Hub, such as Helm overwriting predefined array values.

13.1. Fix Helm overwriting predefined arrays

If you use Helm to install dynamic plugins, you might encounter an issue where predefined values in array fields are overwritten when you add new values. The issue affects fields such as:

  • extraEnvVars
  • extraVolumeMounts
  • extraVolumes

Fix this issue by duplicating the predefined values from the RHDH Helm chart's values.yaml file into your own version of the file.

Procedure

  1. For extraEnvVars, add the following content to your values.yaml file:

    extraEnvVars:
      - name: BACKEND_SECRET
        valueFrom:
          secretKeyRef:
            key: backend-secret
            name: '{{ include "janus-idp.backend-secret-name" $ }}'
      - name: POSTGRESQL_ADMIN_PASSWORD
        valueFrom:
          secretKeyRef:
            key: postgres-password
            name: '{{- include "janus-idp.postgresql.secretName" . }}'
  2. For extraVolumeMounts, add the following content to your values.yaml file:

    extraVolumeMounts:
      - name: dynamic-plugins-root
        mountPath: /opt/app-root/src/dynamic-plugins-root
      - name: temp
        mountPath: /tmp
  3. For extraVolumes, add the following content to your values.yaml file:

    extraVolumes:
      - name: dynamic-plugins-root
        ephemeral:
          volumeClaimTemplate:
            spec:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 5Gi
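
After duplicating the predefined values, apply your values.yaml with a Helm upgrade. A sketch, assuming an illustrative release name my-rhdh and an illustrative chart reference openshift-helm-charts/redhat-developer-hub already configured in your Helm repositories:

```shell
# Upgrade (or install) the release with your customized values file,
# so the duplicated predefined entries and your additions are applied together.
helm upgrade --install my-rhdh openshift-helm-charts/redhat-developer-hub \
  -f values.yaml
```
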

Legal Notice

Copyright © 2026 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.