Configuring Red Hat Developer Hub
Adding custom config maps and secrets to configure your Red Hat Developer Hub instance to work in your IT ecosystem.
Abstract
- 1. Provision and use your custom Red Hat Developer Hub configuration
- 2. Red Hat Developer Hub default configuration
- 3. Configure external PostgreSQL databases
- 4. Configure high availability in Red Hat Developer Hub
- 5. Run Red Hat Developer Hub behind a corporate proxy
- 6. Use the dynamic plugins cache
- 7. Enable the Red Hat Developer Hub plugin assets cache
- 8. Inject extra files and environment variables into Backstage containers
- 9. Configure mount paths for default Secrets and Persistent Volume Claims (PVCs)
- 10. Mount secrets and PVCs to specific containers
- 11. Configure Red Hat Developer Hub deployment when using the Operator
- 12. Configure an RHDH instance with a TLS connection in Kubernetes
- 13. Troubleshoot Developer Hub configuration issues
Configure Red Hat Developer Hub for production by adding custom config maps and secrets to work in your IT ecosystem.
1. Provision and use your custom Red Hat Developer Hub configuration
Configure Red Hat Developer Hub by using config maps to mount files and directories and secrets to inject environment variables into your Red Hat OpenShift Container Platform application.
1.1. Provision your custom Red Hat Developer Hub configuration
Provision custom config maps and secrets on Red Hat OpenShift Container Platform (RHOCP) to configure Red Hat Developer Hub before running the application.
On Red Hat OpenShift Container Platform, you can skip this step and run Developer Hub with the default config map and secret; however, your changes to this configuration might be reverted when Developer Hub restarts.
Prerequisites
- By using the OpenShift CLI (`oc`), you have access, with developer permissions, to the OpenShift cluster aimed at containing your Developer Hub instance.
Procedure
For security, store your secrets as environment variable values in an OpenShift Container Platform secret, rather than in plain text in your configuration files. Collect all your secrets in a `secrets.txt` file, with one secret per line in `KEY=value` form.
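For example, a hypothetical `secrets.txt` might look like the following. The key names and values here are placeholders, not required settings; `BACKEND_SECRET` is referenced later from `app-config.yaml`:

```
BACKEND_SECRET=<my_backend_secret_value>
GITHUB_TOKEN=<my_github_token>
```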
Author your custom `app-config.yaml` file. This is the main Developer Hub configuration file. You need a custom `app-config.yaml` file to prevent the Developer Hub installer from reverting user edits during upgrades. When your custom `app-config.yaml` file is empty, Developer Hub uses default values.

- To prepare a deployment with the Red Hat Developer Hub Operator on OpenShift Container Platform, you can start with an empty file.
- To prepare a deployment with the Red Hat Developer Hub Helm chart, or on Kubernetes, enter the Developer Hub base URL in the relevant fields in your `app-config.yaml` file to ensure proper functionality of Developer Hub. The base URL is what a Developer Hub user sees in their browser when accessing Developer Hub. The relevant fields are `baseUrl` in the `app` and `backend` sections, and `origin` in the `backend.cors` subsection.

Configuring the `baseUrl` in `app-config.yaml`:

```yaml
app:
  title: Red Hat Developer Hub
  baseUrl: https://<my_developer_hub_domain>
backend:
  auth:
    externalAccess:
      - type: legacy
        options:
          subject: legacy-default-config
          secret: "${BACKEND_SECRET}"
  baseUrl: https://<my_developer_hub_domain>
  cors:
    origin: https://<my_developer_hub_domain>
```
Optionally, enter further configuration.
Author your custom `dynamic-plugins.yaml` file to enable plugins. By default, Developer Hub enables a minimal plugin set, and disables plugins that require configuration or secrets, such as the GitHub repository discovery plugin and the Role-Based Access Control (RBAC) plugin.

Enable the GitHub repository discovery and RBAC features in `dynamic-plugins.yaml`:

```yaml
includes:
  - dynamic-plugins.default.yaml
plugins:
  - package: ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github
    disabled: false
  - package: ./dynamic-plugins/dist/backstage-community-plugin-rbac
    disabled: false
```

Provision your custom configuration files to your OpenShift Container Platform cluster.
Create the <my-rhdh-project> namespace aimed at containing your Developer Hub instance:
$ oc create namespace my-rhdh-project
Create config maps for your `app-config.yaml` and `dynamic-plugins.yaml` files in the <my-rhdh-project> project:

```shell
$ oc create configmap my-rhdh-app-config --from-file=app-config.yaml --namespace=my-rhdh-project
$ oc create configmap dynamic-plugins-rhdh --from-file=dynamic-plugins.yaml --namespace=my-rhdh-project
```
You can also create the config maps by using the web console.
Provision your `secrets.txt` file to the `my-rhdh-secrets` secret in the <my-rhdh-project> project:

```shell
$ oc create secret generic my-rhdh-secrets --from-file=secrets.txt --namespace=my-rhdh-project
```
You can also create the secret by using the web console.
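Before deploying, you can confirm that the provisioned resources exist. The resource names below match the ones created in the previous steps:

```
$ oc get configmap my-rhdh-app-config dynamic-plugins-rhdh --namespace=my-rhdh-project
$ oc get secret my-rhdh-secrets --namespace=my-rhdh-project
```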
1.2. Use the Red Hat Developer Hub Operator to run Developer Hub with your custom configuration
Use the Red Hat Developer Hub Operator to deploy Developer Hub with custom configuration by creating a custom resource that mounts config maps and injects secrets.
Prerequisites
- By using the OpenShift CLI (`oc`), you have access, with developer permissions, to the OpenShift Container Platform cluster aimed at containing your Developer Hub instance.
- Your administrator has installed the Red Hat Developer Hub Operator in the cluster.
- You have provisioned your custom config maps and secrets in your <my-rhdh-project> project.
- You have a working default storage class configured in your cluster.
Procedure
Author your Backstage CR in a `my-rhdh-custom-resource.yaml` file to use your custom config maps and secrets.

Minimal `my-rhdh-custom-resource.yaml` custom resource example:

```yaml
apiVersion: rhdh.redhat.com/v1alpha5
kind: Backstage
metadata:
  name: my-rhdh-custom-resource
spec:
  application:
    appConfig:
      mountPath: /opt/app-root/src
      configMaps:
        - name: my-rhdh-app-config
    extraEnvs:
      secrets:
        - name: <my_product_secrets>
    extraFiles:
      mountPath: /opt/app-root/src
    route:
      enabled: true
  database:
    enableLocalDb: true
```

`my-rhdh-custom-resource.yaml` custom resource example with dynamic plugins and RBAC policies config maps, and external PostgreSQL database secrets:

```yaml
apiVersion: rhdh.redhat.com/v1alpha5
kind: Backstage
metadata:
  name: <my-rhdh-custom-resource>
spec:
  application:
    appConfig:
      mountPath: /opt/app-root/src
      configMaps:
        - name: my-rhdh-app-config
        - name: rbac-policies
    dynamicPluginsConfigMapName: dynamic-plugins-rhdh
    extraEnvs:
      secrets:
        - name: <my_product_secrets>
        - name: my-rhdh-database-secrets
    extraFiles:
      mountPath: /opt/app-root/src
      secrets:
        - name: my-rhdh-database-certificates-secrets
          key: postgres-crt.pem, postgres-ca.pem, postgres-key.key
    route:
      enabled: true
  database:
    enableLocalDb: false
```
- Mandatory fields
- No fields are mandatory. You can create an empty Backstage CR and run Developer Hub with the default configuration.
- Optional fields
`spec.application.appConfig.configMaps`: Enter your config map name list.

Mount files in the `my-rhdh-app-config` config map:

```yaml
spec:
  application:
    appConfig:
      mountPath: /opt/app-root/src
      configMaps:
        - name: my-rhdh-app-config
```

Mount files in the `my-rhdh-app-config` and `rbac-policies` config maps:

```yaml
spec:
  application:
    appConfig:
      mountPath: /opt/app-root/src
      configMaps:
        - name: my-rhdh-app-config
        - name: rbac-policies
```

`spec.application.extraEnvs.envs`: Optionally, enter your additional environment variables that are not secrets, such as your proxy environment variables.
Inject your `HTTP_PROXY`, `HTTPS_PROXY`, and `NO_PROXY` environment variables:

```yaml
spec:
  application:
    extraEnvs:
      envs:
        - name: HTTP_PROXY
          value: 'http://10.10.10.105:3128'
        - name: HTTPS_PROXY
          value: 'http://10.10.10.106:3128'
        - name: NO_PROXY
          value: 'localhost,example.org'
```

`spec.application.extraEnvs.secrets`: Enter your environment variables secret name list.
Inject the environment variables in your Red Hat Developer Hub secret:

```yaml
spec:
  application:
    extraEnvs:
      secrets:
        - name: <my_product_secrets>
```

Inject the environment variables in the Red Hat Developer Hub and `my-rhdh-database-secrets` secrets:

```yaml
spec:
  application:
    extraEnvs:
      secrets:
        - name: <my_product_secrets>
        - name: my-rhdh-database-secrets
```

Note: `<my_product_secrets>` is your preferred Developer Hub secret name, specifying the identifier for your secret configuration within Developer Hub.

`spec.application.extraFiles.secrets`: Enter your certificates files secret name and files list.
Mount the `postgres-crt.pem`, `postgres-ca.pem`, and `postgres-key.key` files contained in the `my-rhdh-database-certificates-secrets` secret:

```yaml
spec:
  application:
    extraFiles:
      mountPath: /opt/app-root/src
      secrets:
        - name: my-rhdh-database-certificates-secrets
          key: postgres-crt.pem, postgres-ca.pem, postgres-key.key
```

`spec.database.enableLocalDb`: Enable or disable the local PostgreSQL database.

Disable the local PostgreSQL database generation to use an external PostgreSQL database:

```yaml
spec:
  database:
    enableLocalDb: false
```

On a development environment, use the local PostgreSQL database:

```yaml
spec:
  database:
    enableLocalDb: true
```

`spec.deployment`: Optionally, enter your deployment configuration.
Apply your Backstage CR to start or update your Developer Hub instance:
$ oc apply --filename=my-rhdh-custom-resource.yaml --namespace=my-rhdh-project
1.3. Use the Red Hat Developer Hub Helm chart to run Developer Hub with your custom configuration
Use the Red Hat Developer Hub Helm chart to deploy Developer Hub with a custom application configuration file on OpenShift Container Platform.
Prerequisites
- By using the OpenShift Container Platform web console, you have access, with developer permissions, to an OpenShift Container Platform project named <my-rhdh-project>, aimed at containing your Developer Hub instance.
- You have uploaded your custom configuration files and secrets in your <my-rhdh-project> project.
Procedure
Configure Helm to use your custom configuration files in Developer Hub.
- Go to the Helm tab to see the list of Helm releases.
- Click the overflow menu on the Helm release that you want to use and select Upgrade.
- Use the YAML view to edit the Helm configuration.
Set the value of the `upstream.backstage.extraAppConfig.configMapRef` and `upstream.backstage.extraAppConfig.filename` parameters as follows:

Helm configuration excerpt:

```yaml
upstream:
  backstage:
    extraAppConfig:
      - configMapRef: my-rhdh-app-config
        filename: app-config.yaml
```

- Click Upgrade.
Next steps
- Install Developer Hub by using Helm.
2. Red Hat Developer Hub default configuration
Deploy a standard Red Hat Developer Hub instance, understand its structure, and tailor the instance to meet your needs.
2.1. Red Hat Developer Hub default configuration guide
The Operator creates Kubernetes resources with default configuration that you can customize using the Backstage Custom Resource.
The Operator stores the default configuration in a ConfigMap named rhdh-default-config in the rhdh-operator namespace on OpenShift. This ConfigMap has the YAML manifests that define the foundational structure of the RHDH instance.
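To inspect these defaults on your cluster, you can view the ConfigMap directly, assuming the default Operator namespace stated above:

```
$ oc get configmap rhdh-default-config --namespace=rhdh-operator -o yaml
```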
You can create a basic RHDH instance by applying an empty Backstage Custom Resource as follows:
Example creating an RHDH instance:
```yaml
apiVersion: backstage.redhat.com/v1alpha4
kind: Backstage
metadata:
  name: my-rhdh-instance
  namespace: rhdh
```
The Operator automatically creates the following resources in the specified RHDH namespace by default based on the default configuration:
Table 1. Default configuration files
| File name | Resource Group/Version/Kind (GVK) | Resource name | Description |
|---|---|---|---|
| `deployment.yaml` | `apps/v1` `Deployment` | `backstage-{cr-name}` | (Mandatory) The main Backstage application deployment. |
| `service.yaml` | `v1` `Service` | `backstage-{cr-name}` | (Mandatory) The Backstage application service. |
| `db-statefulset.yaml` | `apps/v1` `StatefulSet` | `backstage-psql-{cr-name}` | The PostgreSQL database stateful set. Needed if `spec.database.enableLocalDb` is enabled. |
| `db-service.yaml` | `v1` `Service` | `backstage-psql-{cr-name}` | The PostgreSQL database service. Needed if `spec.database.enableLocalDb` is enabled. |
| `db-secret.yaml` | `v1` `Secret` | `backstage-psql-secret-{cr-name}` | The PostgreSQL database credentials secret. Needed if `spec.database.enableLocalDb` is enabled. |
| `route.yaml` | `route.openshift.io/v1` `Route` | `backstage-{cr-name}` | The OpenShift Route to expose Backstage externally. (Optional) Applied to OpenShift only. |
| `app-config.yaml` | `v1` `ConfigMap` | `backstage-appconfig-{cr-name}` | (Optional) Specifies one or more Backstage `app-config.yaml` files. |
| `configmap-files.yaml` | `v1` `ConfigMap` | `backstage-files-{cr-name}` | (Optional) Specifies additional ConfigMaps to mount as files into the Backstage Pod. |
| `configmap-envs.yaml` | `v1` `ConfigMap` | `backstage-envs-{cr-name}` | (Optional) Specifies additional ConfigMaps to expose as environment variables in the Backstage Pod. |
| `secret-files.yaml` | `v1` `Secret` | `backstage-files-{cr-name}` | (Optional) Specifies additional Secrets to mount as files into the Backstage Pod. |
| `secret-envs.yaml` | `v1` `Secret` | `backstage-envs-{cr-name}` | (Optional) Specifies additional Secrets to expose as environment variables in the Backstage Pod. |
| `dynamic-plugins.yaml` | `v1` `ConfigMap` | `backstage-dynamic-plugins-{cr-name}` | (Optional) Specifies the dynamic plugins that the Operator installs into the Backstage instance. |
| `pvcs.yaml` | list of `v1` `PersistentVolumeClaim` | `backstage-{cr-name}-<pvc-name>` | (Optional) The Persistent Volume Claim for the PostgreSQL database. |
`{cr-name}` is the name of the Backstage Custom Resource, for example `my-rhdh-instance` in the above example.
2.2. Automated Operator features
Use the Operator to automate key configuration processes for your Backstage application.
2.2.1. Metadata generation
The Operator automatically generates metadata values for default resources at runtime to ensure proper application function.
For all the default resources, the Operator generates metadata.name according to the rules defined in the Default Configuration files, particularly the Resource name column. For example, a Backstage Custom Resource (CR) named mybackstage creates a Kubernetes Deployment resource called backstage-mybackstage.
The Operator generates the following metadata for each resource:
- `deployment.yaml`:
  - `spec.selector.matchLabels[rhdh.redhat.com/app]` = `backstage-{cr-name}`
  - `spec.template.metadata.labels[rhdh.redhat.com/app]` = `backstage-{cr-name}`
- `service.yaml`:
  - `spec.selector[rhdh.redhat.com/app]` = `backstage-{cr-name}`
- `db-statefulset.yaml`:
  - `spec.selector.matchLabels[rhdh.redhat.com/app]` = `backstage-psql-{cr-name}`
  - `spec.template.metadata.labels[rhdh.redhat.com/app]` = `backstage-psql-{cr-name}`
- `db-service.yaml`:
  - `spec.selector[rhdh.redhat.com/app]` = `backstage-psql-{cr-name}`
2.2.2. Many resources
Define and create many resources of the same type in a single YAML file by using the --- delimiter to separate resource definitions.
For example, adding the following code snippet to `pvcs.yaml` creates two PersistentVolumeClaims (PVCs) called `backstage-{cr-name}-myclaim1` and `backstage-{cr-name}-myclaim2` and mounts them to the Backstage container.
Example creating many PVCs:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim1
...
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim2
...
```
2.2.3. Default base URLs
The Operator automatically sets base URLs for your application based on Route parameters and OpenShift cluster ingress domain.
The Operator follows these rules to set the base URLs for your application:
- If the cluster is not OpenShift, the Operator makes no changes.
- If you explicitly set the `spec.application.route.enabled` field in your Custom Resource (CR) to `false`, the Operator makes no changes.
- If you define `spec.application.route.host` in the Backstage CR, the Operator sets the base URLs to `https://<spec.application.route.host>`.
- If you specify `spec.application.route.subdomain` in the Backstage CR, the Operator sets the base URLs to `https://<spec.application.route.subdomain>.<cluster_ingress_domain>`.
- If you do not set a custom host or subdomain, the Operator sets the base URLs to `https://backstage-<cr_name>-<namespace>.<cluster_ingress_domain>`, which is the default domain for the created Route resource.
The Operator updates the following base URLs in the default app-config ConfigMap:

- `app.baseUrl`
- `backend.baseUrl`
- `backend.cors.origin`
The Operator performs these actions on a best-effort basis, and only on OpenShift. If an error occurs, or on non-OpenShift clusters, you can still override these defaults by providing a custom app-config ConfigMap.
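The resolution order above can be sketched as a small function. This is a hypothetical illustration of the rules, not the Operator's actual code:

```python
def resolve_base_url(is_openshift, route_enabled, host, subdomain,
                     cr_name, namespace, ingress_domain):
    """Return the base URL the Operator would set, or None when it makes no changes."""
    if not is_openshift:
        return None  # rule 1: not an OpenShift cluster, no changes
    if route_enabled is False:
        return None  # rule 2: route explicitly disabled, no changes
    if host:
        return f"https://{host}"  # rule 3: explicit spec.application.route.host
    if subdomain:
        # rule 4: spec.application.route.subdomain plus the cluster ingress domain
        return f"https://{subdomain}.{ingress_domain}"
    # rule 5: default Route domain
    return f"https://backstage-{cr_name}-{namespace}.{ingress_domain}"
```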
2.3. Time syntax in Red Hat Developer Hub
Use supported time duration formats in Red Hat Developer Hub, including human-readable strings, duration objects, ISO 8601 strings, and cron expressions.
Table 2. Generally supported time formats

| Format | Description | Example | Compound values |
|---|---|---|---|
| Human-readable strings | Simple strings compatible with the `ms` library. | `30 minutes` | No |
| Duration objects | A structured object specifying time units. Matches the `HumanDuration` type. | `timeout: { minutes: 30 }` | Yes |
| ISO 8601 duration strings | Standard ISO 8601 duration strings. | `PT30M` | Yes |
Table 3. Context-dependent time formats

| Format | Description | Example |
|---|---|---|
| Cron | An object containing a cron expression. | `frequency: { cron: '*/30 * * * *' }` |
The RHDH configuration reader, `readDurationFromConfig`, explicitly disallows plain numbers to prevent ambiguity. However, specific raw configuration fields, such as direct Node.js HTTP server settings, might strictly require numbers. Always check the documentation for the specific field you are configuring.
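As an illustration of why explicit units matter, the following sketch converts a duration object such as `{minutes: 30}` to milliseconds and rejects bare numbers, mirroring the behavior described for `readDurationFromConfig`. This is a hypothetical helper, not RHDH code:

```python
# Milliseconds per supported unit.
UNIT_MS = {
    "milliseconds": 1,
    "seconds": 1000,
    "minutes": 60 * 1000,
    "hours": 60 * 60 * 1000,
    "days": 24 * 60 * 60 * 1000,
}

def duration_to_ms(value):
    if isinstance(value, (int, float)):
        # A bare 30 could mean seconds, minutes, or milliseconds: reject it.
        raise ValueError("plain numbers are ambiguous; specify explicit units")
    # Compound values are allowed, for example {"hours": 1, "minutes": 30}.
    return sum(UNIT_MS[unit] * amount for unit, amount in value.items())
```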
3. Configure external PostgreSQL databases
Configure an external PostgreSQL database for production environments instead of using the default local database created by the Red Hat Developer Hub Operator or Helm chart.
Configure your database to use the date format of the International Organization for Standardization (ISO) through the DateStyle setting. Other formats are incompatible with the internal tracking of the software catalog, which causes scheduling tasks to fail and prevents your catalog items from refreshing.
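To check the current setting, and to set the ISO style where needed, you can use SQL statements such as the following sketch. The database name is an example; apply the setting to each Developer Hub database:

```
SHOW DateStyle;
ALTER DATABASE backstage_plugin_catalog SET DateStyle = 'ISO, MDY';
```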
3.1. Configure an external PostgreSQL instance using the Operator
Configure an external PostgreSQL instance by using the Red Hat Developer Hub Operator instead of the default local PostgreSQL instance.
Prerequisites
- You meet the Sizing requirements for external PostgreSQL deployments.
- You are using a supported version of PostgreSQL. For more information, see the Product life cycle page.
You have the following details:
- `db_host`: Denotes your PostgreSQL instance Domain Name System (DNS) or IP address
- `db_port`: Denotes your PostgreSQL instance port number, such as `5432`
- `username`: Denotes the user name to connect to your PostgreSQL instance
- `password`: Denotes the password to connect to your PostgreSQL instance
- You have installed the Red Hat Developer Hub Operator.
- Optional: You have a CA certificate, Transport Layer Security (TLS) private key, and TLS certificate so that you can secure your database connection by using the TLS protocol. For more information, refer to your PostgreSQL vendor documentation.
By default, Developer Hub uses a database for each plugin and automatically creates it if none exists. You might need the Create Database privilege in addition to PostgreSQL Database privileges for configuring an external PostgreSQL instance.
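For example, a dedicated database user with the privilege to create databases could be set up with a statement like the following sketch. The role name is hypothetical, and your database administrator might prefer a different scheme:

```
CREATE ROLE rhdh_user WITH LOGIN CREATEDB PASSWORD '<password>';
```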
Procedure
Optional: Create a certificate secret to configure your PostgreSQL instance with a TLS connection:
```shell
$ cat <<EOF | oc -n my-rhdh-project create -f -
apiVersion: v1
kind: Secret
metadata:
  name: my-rhdh-database-certificates-secrets
type: Opaque
stringData:
  postgres-ca.pem: |-
    -----BEGIN CERTIFICATE-----
    <ca_certificate_key>
  postgres-key.key: |-
    -----BEGIN PRIVATE KEY-----
    <tls_private_key>
  postgres-crt.pem: |-
    -----BEGIN CERTIFICATE-----
    <tls_certificate_key>
# ...
EOF
```
Where:
- `my-rhdh-database-certificates-secrets`: The certificate secret name.
- `<ca_certificate_key>`: The CA certificate key.
- `<tls_private_key>`: Optional: The TLS private key.
- `<tls_certificate_key>`: Optional: The TLS certificate key.
Create a credential secret to connect to the PostgreSQL instance:
```shell
$ cat <<EOF | oc -n my-rhdh-project create -f -
apiVersion: v1
kind: Secret
metadata:
  name: my-rhdh-database-secrets
type: Opaque
stringData:
  POSTGRES_PASSWORD: <password>
  POSTGRES_PORT: "<db_port>"
  POSTGRES_USER: <username>
  POSTGRES_HOST: <db_host>
  PGSSLMODE: <ssl_mode>
  NODE_EXTRA_CA_CERTS: <abs_path_to_pem_file>
EOF
```
Where:
- `my-rhdh-database-secrets`: The credential secret name.
- `<password>`: The password to connect to your PostgreSQL instance.
- `<db_port>`: Your PostgreSQL instance port number, such as `5432`.
- `<username>`: The user name to connect to your PostgreSQL instance.
- `<db_host>`: Your PostgreSQL instance DNS or IP address.
- `<ssl_mode>`: Optional: For TLS connections, the required SSL mode.
- `<abs_path_to_pem_file>`: Optional: For TLS connections, the absolute path to the Privacy-Enhanced Mail (PEM) file, for example `/opt/app-root/src/postgres-crt.pem`.
Create your `Backstage` custom resource (CR):

```shell
$ cat <<EOF | oc -n my-rhdh-project create -f -
apiVersion: rhdh.redhat.com/v1alpha5
kind: Backstage
metadata:
  name: <backstage_instance_name>
spec:
  database:
    enableLocalDb: false
  application:
    extraFiles:
      mountPath: <path>
      secrets:
        - name: my-rhdh-database-certificates-secrets
          key: postgres-crt.pem, postgres-ca.pem, postgres-key.key
    extraEnvs:
      secrets:
        - name: my-rhdh-database-secrets
# ...
EOF
```

Where:
- `spec.database.enableLocalDb`: Set to `false` to disable creating local PostgreSQL instances.
- `<path>`: The mount path for certificate files, for example `/opt/app-root/src`.
- `my-rhdh-database-certificates-secrets`: The certificate secret name, required if you configure a TLS connection.
- `key`: The key names as defined in the `my-rhdh-database-certificates-secrets` secret.
- `my-rhdh-database-secrets`: The credential secret name.
Note: The environment variables listed in the `Backstage` CR work with the Operator default configuration. If you have changed the Operator default configuration, you must reconfigure the `Backstage` CR accordingly.
Apply the `Backstage` CR to the namespace where you have deployed the Developer Hub instance.
3.2. Configure an external PostgreSQL instance using the Helm Chart
Configure an external PostgreSQL instance by using the Helm Chart instead of the default local PostgreSQL instance.
Prerequisites
- You meet the Sizing requirements for external PostgreSQL deployments.
- You are using a supported version of PostgreSQL. For more information, see the Product life cycle page.
You have the following details:
- `db_host`: Denotes your PostgreSQL instance Domain Name System (DNS) or IP address
- `db_port`: Denotes your PostgreSQL instance port number, such as `5432`
- `username`: Denotes the user name to connect to your PostgreSQL instance
- `password`: Denotes the password to connect to your PostgreSQL instance
- You have installed the RHDH application by using the Helm Chart.
- Optional: You have a CA certificate, Transport Layer Security (TLS) private key, and TLS certificate so that you can secure your database connection by using the TLS protocol. For more information, refer to your PostgreSQL vendor documentation.
By default, Developer Hub uses a database for each plugin and automatically creates it if none exists. You might need the Create Database privilege in addition to PostgreSQL Database privileges for configuring an external PostgreSQL instance.
Procedure
Optional: Create a certificate secret to configure your PostgreSQL instance with a TLS connection:
```shell
$ cat <<EOF | oc -n <your_namespace> create -f -
apiVersion: v1
kind: Secret
metadata:
  name: my-rhdh-database-certificates-secrets
type: Opaque
stringData:
  postgres-ca.pem: |-
    -----BEGIN CERTIFICATE-----
    <ca_certificate_key>
  postgres-key.key: |-
    -----BEGIN PRIVATE KEY-----
    <tls_private_key>
  postgres-crt.pem: |-
    -----BEGIN CERTIFICATE-----
    <tls_certificate_key>
# ...
EOF
```
Where:
- `my-rhdh-database-certificates-secrets`: The certificate secret name.
- `<ca_certificate_key>`: The CA certificate key.
- `<tls_private_key>`: Optional: The TLS private key.
- `<tls_certificate_key>`: Optional: The TLS certificate key.
Create a credential secret to connect to the PostgreSQL instance:
```shell
$ cat <<EOF | oc -n <your_namespace> create -f -
apiVersion: v1
kind: Secret
metadata:
  name: my-rhdh-database-secrets
type: Opaque
stringData:
  POSTGRES_PASSWORD: <password>
  POSTGRES_PORT: "<db_port>"
  POSTGRES_USER: <username>
  POSTGRES_HOST: <db_host>
  PGSSLMODE: <ssl_mode>
  NODE_EXTRA_CA_CERTS: <abs_path_to_pem_file>
EOF
```
Where:
- `my-rhdh-database-secrets`: The credential secret name.
- `<password>`: The password to connect to your PostgreSQL instance.
- `<db_port>`: Your PostgreSQL instance port number, such as `5432`.
- `<username>`: The user name to connect to your PostgreSQL instance.
- `<db_host>`: Your PostgreSQL instance DNS or IP address.
- `<ssl_mode>`: Optional: For TLS connections, the required SSL mode.
- `<abs_path_to_pem_file>`: Optional: For TLS connections, the absolute path to the Privacy-Enhanced Mail (PEM) file, for example `/opt/app-root/src/postgres-crt.pem`.
Configure your PostgreSQL instance in the Helm configuration file named `values.yaml`:

```yaml
# ...
upstream:
  postgresql:
    enabled: false
    auth:
      existingSecret: my-rhdh-database-secrets
  backstage:
    appConfig:
      backend:
        database:
          connection:
            host: ${POSTGRES_HOST}
            port: ${POSTGRES_PORT}
            user: ${POSTGRES_USER}
            password: ${POSTGRES_PASSWORD}
            ssl:
              rejectUnauthorized: true
              ca:
                $file: /opt/app-root/src/postgres-ca.pem
              key:
                $file: /opt/app-root/src/postgres-key.key
              cert:
                $file: /opt/app-root/src/postgres-crt.pem
    extraEnvVarsSecrets:
      - my-rhdh-database-secrets
    extraEnvVars:
      - name: BACKEND_SECRET
        valueFrom:
          secretKeyRef:
            key: backend-secret
            name: '{{ include "janus-idp.backend-secret-name" $ }}'
    extraVolumeMounts:
      - mountPath: /opt/app-root/src/dynamic-plugins-root
        name: dynamic-plugins-root
      - mountPath: /opt/app-root/src/postgres-crt.pem
        name: postgres-crt
        subPath: postgres-crt.pem
      - mountPath: /opt/app-root/src/postgres-ca.pem
        name: postgres-ca
        subPath: postgres-ca.pem
      - mountPath: /opt/app-root/src/postgres-key.key
        name: postgres-key
        subPath: postgres-key.key
    extraVolumes:
      - ephemeral:
          volumeClaimTemplate:
            spec:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 1Gi
        name: dynamic-plugins-root
      - configMap:
          defaultMode: 420
          name: dynamic-plugins
          optional: true
        name: dynamic-plugins
      - name: dynamic-plugins-npmrc
        secret:
          defaultMode: 420
          optional: true
          secretName: '{{ printf "%s-dynamic-plugins-npmrc" .Release.Name }}'
      - name: postgres-crt
        secret:
          secretName: my-rhdh-database-certificates-secrets
# ...
```

Where:
- `upstream.postgresql.enabled`: Set to `false` to disable the local PostgreSQL instance creation.
- `upstream.postgresql.auth.existingSecret`: The credentials secret to inject into Backstage.
- `upstream.backstage.appConfig.backend.database.connection`: The Backstage database connection parameters.
- `upstream.backstage.extraEnvVarsSecrets`: The credentials secret to inject as environment variables into Backstage.
- `extraVolumeMounts` (`postgres-crt`, `postgres-ca`, `postgres-key`): Optional: Inject the TLS certificate, CA certificate, and TLS private key into the Backstage container.
- `extraVolumes` (`postgres-crt`): The certificate secret name, required if you configure TLS.
Apply the configuration changes in your Helm configuration file named `values.yaml`:

```shell
$ helm upgrade -n <your_namespace> <your_deploy_name> openshift-helm-charts/redhat-developer-hub -f values.yaml --version 1.9.0
```
3.3. Migrate local databases to an external database server using the Operator
Migrate data from a local PostgreSQL server to an external PostgreSQL service by using PostgreSQL utilities such as pg_dump and psql.
The following procedure uses a database copy script to do a quick migration.
Prerequisites
Procedure
Configure port forwarding for the local PostgreSQL database pod by running the following command on a terminal:
$ oc port-forward -n <your_namespace> <pgsql_pod_name> <forward_to_port>:<forward_from_port>
Where:

- `<pgsql_pod_name>`: The name of a PostgreSQL pod in the format `backstage-psql-<deployment_name>-<_index>`.
- `<forward_to_port>`: The port of your choice to forward PostgreSQL data to.
- `<forward_from_port>`: The local PostgreSQL instance port, such as `5432`.

Example: Configuring port forwarding:

```shell
$ oc port-forward -n developer-hub backstage-psql-developer-hub-0 15432:5432
```
Make a copy of the following `db_copy.sh` script and edit the details based on your configuration:

```bash
#!/bin/bash

to_host=<db_service_host>
to_port=5432
to_user=postgres

from_host=127.0.0.1
from_port=15432
from_user=postgres

allDB=("backstage_plugin_app" "backstage_plugin_auth" "backstage_plugin_catalog" "backstage_plugin_permission" "backstage_plugin_scaffolder" "backstage_plugin_search")

for db in ${!allDB[@]}; do
  db=${allDB[$db]}
  echo Copying database: $db
  PGPASSWORD=$TO_PSW psql -h $to_host -p $to_port -U $to_user -c "create database $db;"
  pg_dump -h $from_host -p $from_port -U $from_user -d $db | PGPASSWORD=$TO_PSW psql -h $to_host -p $to_port -U $to_user -d $db
done
```

Where:

- `to_host`: The destination hostname, for example `<db_instance_name>.rds.amazonaws.com`.
- `to_port`: The destination port, such as `5432`.
- `to_user`: The destination server username, for example `postgres`.
- `from_host`: The source hostname, such as `127.0.0.1`.
- `from_port`: The source port number, such as the `<forward_to_port>` value.
- `from_user`: The source server username, for example `postgres`.
- `allDB`: Database names to import, in double quotes separated by spaces.
Create a destination database for copying the data:
```shell
$ TO_PSW=<destination_db_password> /bin/bash /path/to/db_copy.sh
```

Replace `<destination_db_password>` with the password to connect to the destination database.

Note: You can stop port forwarding when the copying of the data is complete. For more information about handling large databases and using the compression tools, see the Handling Large Databases section on the PostgreSQL website.
Reconfigure your `Backstage` custom resource (CR). For more information, see Configure an external PostgreSQL instance using the Operator.

Check that the following code is present at the end of your `Backstage` CR after reconfiguration:

```yaml
# ...
spec:
  database:
    enableLocalDb: false
  application:
    # ...
    extraFiles:
      secrets:
        - name: my-rhdh-database-certificates-secrets
          key: postgres-crt.pem
    extraEnvs:
      secrets:
        - name: my-rhdh-database-secrets
# ...
```

Note: Reconfiguring the `Backstage` CR deletes the corresponding `StatefulSet` and `Pod` objects, but does not delete the `PersistentVolumeClaim` object. Use the following command to delete the local `PersistentVolumeClaim` object:

```shell
$ oc -n developer-hub delete pvc <local_psql_pvc_name>
```

where the `<local_psql_pvc_name>` variable is in the `data-<psql_pod_name>` format.

Apply the configuration changes.
Verification
Verify that your RHDH instance is running with the migrated data and does not contain the local PostgreSQL database by running the following command:

```shell
$ oc get pods -n <your_namespace>
```
Check the output for the following details:

- The `backstage-developer-hub-xxx` pod is in running state.
- The `backstage-psql-developer-hub-0` pod is not available.

You can also verify these details by using the Topology view in the OpenShift Container Platform web console.
4. Configure high availability in Red Hat Developer Hub
Configure high availability to ensure continuous service accessibility by eliminating single points of failure through redundancy and failover mechanisms.
Red Hat Developer Hub supports HA deployments on the following platforms:
- Red Hat OpenShift Container Platform
- Azure Kubernetes Service
- Elastic Kubernetes Service
- Google Kubernetes Engine
The HA deployments enable more resilient and reliable service availability across supported environments.
In a single instance deployment, a failure makes the entire service unavailable. Software crashes, hardware issues, or other disruptions can interrupt development workflows and access to key resources.
With HA enabled, you can scale the number of backend replicas to introduce redundancy. This setup ensures that if one pod or component fails, others continue to serve requests without disruption. The built-in load balancer manages ingress traffic and distributes the load across the available pods. Meanwhile, the RHDH backend manages concurrent requests and resolves resource-level conflicts effectively.
As an administrator, you can configure high availability by adjusting replica values in your configuration file:
- If you installed by using the Operator, configure the replica values in your `Backstage` custom resource.
- If you used the Helm chart, set the replica values in the Helm configuration.
4.1. Configure high availability in a Red Hat Developer Hub Operator deployment
Configure high availability for Operator deployments by setting the replicas field to a value greater than 1 in the custom resource.
Procedure
In your `Backstage` custom resource (CR), set `replicas` to a value greater than `1`.

For example, to configure two replicas (one backup instance):

```yaml
apiVersion: rhdh.redhat.com/v1alpha5
kind: Backstage
metadata:
  name: <your_backstage_cr_name>
spec:
  deployment:
    patch:
      spec:
        replicas: 2
```
4.2. Configure high availability in a Red Hat Developer Hub Helm chart deployment
Configure high availability for Helm deployments by setting the replicas value to greater than 1 in the Helm configuration file.
Procedure
In your Helm chart configuration file, set replicas to a value greater than 1. For example, to configure two replicas (one backup instance):

upstream:
  backstage:
    replicas: 2
5. Run Red Hat Developer Hub behind a corporate proxy
In a network-restricted environment, configure Red Hat Developer Hub to use your proxy to access remote network resources.
You can run the Developer Hub application behind a corporate proxy by setting any of the following environment variables before starting the application:
- HTTP_PROXY: Denotes the proxy to use for HTTP requests.
- HTTPS_PROXY: Denotes the proxy to use for HTTPS requests.
- NO_PROXY: Bypasses the proxy for certain domains. The variable value is a comma-separated list of hostnames or IP addresses that do not require the proxy, even if you specify one.
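For illustration, the following Python sketch shows how a typical HTTP client library resolves these variables from the environment. RHDH itself is a Node.js application, but the environment-variable convention is the same; Python's urllib is used here only as a stand-in:

```python
# Demonstrate how *_PROXY environment variables are picked up by a client.
import os
import urllib.request

os.environ["HTTP_PROXY"] = "http://10.10.10.105:3128"
os.environ["HTTPS_PROXY"] = "http://10.10.10.106:3128"
os.environ["NO_PROXY"] = "localhost,example.org"

# getproxies_environment() collects the *_PROXY variables into a scheme map.
proxies = urllib.request.getproxies_environment()
print(proxies["http"])   # http://10.10.10.105:3128
print(proxies["https"])  # http://10.10.10.106:3128
```

Any process started with these variables in its environment, including the Developer Hub backend, sees the same values.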
5.1. The NO_PROXY exclusion rules
Configure NO_PROXY to bypass the proxy for specific hostnames, IP addresses, and port numbers when using Developer Hub.
The default value for NO_PROXY in RHDH is localhost,127.0.0.1. If you want to override it, include at least localhost or localhost:7007 in the list. Otherwise, the RHDH backend might fail.
Matching follows these rules:
- NO_PROXY=* bypasses the proxy for all requests.
- Spaces and commas can separate the entries in the NO_PROXY list. For example, NO_PROXY="localhost,example.com", NO_PROXY="localhost example.com", and NO_PROXY="localhost, example.com" have the same effect.
- If NO_PROXY has no entries, configuring the HTTP(S)_PROXY settings makes the backend send all requests through the proxy.
- The backend does not perform a DNS lookup to decide whether a request should bypass the proxy. For example, if DNS resolves example.com to 1.2.3.4, setting NO_PROXY=1.2.3.4 has no effect on requests sent to example.com. Only requests sent to the IP address 1.2.3.4 bypass the proxy.
- If you add a port after the hostname or IP address, the request must match both the host and the port to bypass the proxy. For example, NO_PROXY=example.com:1234 bypasses the proxy for requests to http(s)://example.com:1234, but not for requests on other ports, such as http(s)://example.com.
- If you do not specify a port after the hostname or IP address, all requests to that host bypass the proxy regardless of the port. For example, NO_PROXY=localhost bypasses the proxy for requests sent to URLs such as http(s)://localhost:7077 and http(s)://localhost:8888.
- IP address blocks in CIDR notation do not work. For example, setting NO_PROXY=10.11.0.0/16 has no effect, even if the backend sends a request to an IP address in that block.
- Only IPv4 addresses are supported. IPv6 addresses such as ::1 do not work.
- Generally, the proxy is bypassed only if the hostname is an exact match for an entry in the NO_PROXY list. The only exceptions are entries that start with a dot (.) or with a wildcard (*); in that case, the proxy is bypassed if the hostname ends with the entry.
List the domain and the wildcard domain if you want to exclude a given domain and all its subdomains. For example, you would set NO_PROXY=example.com,.example.com to bypass the proxy for requests sent to http(s)://example.com and http(s)://subdomain.example.com.
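The rules above can be condensed into a small matcher. The following is a simplified re-implementation for illustration only, not the actual RHDH backend code:

```python
# Sketch of the NO_PROXY matching rules described above.
def bypasses_proxy(host: str, port: int, no_proxy: str) -> bool:
    """Return True if a request to host:port should skip the proxy."""
    for raw in no_proxy.replace(",", " ").split():
        entry = raw.strip()
        if not entry:
            continue
        if entry == "*":
            return True  # wildcard entry bypasses everything
        entry_host, _, entry_port = entry.partition(":")
        if entry_port and int(entry_port) != port:
            continue  # an entry with a port must match the request port
        if entry_host.startswith((".", "*")):
            # leading-dot or wildcard entries match any subdomain suffix
            if host.endswith(entry_host.lstrip("*")):
                return True
        elif host == entry_host:
            return True  # otherwise the hostname must match exactly
    return False

print(bypasses_proxy("example.com", 1234, "example.com:1234"))  # True
print(bypasses_proxy("example.com", 443, "example.com:1234"))   # False (port mismatch)
print(bypasses_proxy("sub.example.com", 443, ".example.com"))   # True (suffix match)
print(bypasses_proxy("example.com", 443, "1.2.3.4"))            # False (no DNS lookup)
```

Note that, as in the real backend, no DNS resolution happens: an IP entry only matches requests addressed to that IP literally.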
5.2. Configure proxy information in Operator deployment
Configure proxy settings for Operator-based deployments by setting environment variables in the ConfigMap or custom resource file.
- As a cluster administrator with access to the Operator namespace, you can configure the proxy variables in the Operator’s default ConfigMap file. This configuration applies the proxy settings to all the users of the Operator.
- As a developer, you can configure the proxy variables in a custom resource (CR) file. This configuration applies the proxy settings to the RHDH application created from that CR.
Prerequisites
- You have installed the Red Hat Developer Hub application.
Procedure
- Perform one of the following steps based on your role:

  As an administrator, set the proxy information in the Operator’s default ConfigMap file:

  - Search for a ConfigMap named backstage-default-config in the default namespace rhdh-operator and open it.
  - Find the deployment.yaml key.
  - Set the value of the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables in the Deployment spec as shown in the following example:

    Example: Setting proxy variables in a ConfigMap file

    # ...
    deployment.yaml: |-
      apiVersion: apps/v1
      kind: Deployment
      spec:
        template:
          spec:
            # ...
            initContainers:
              - name: install-dynamic-plugins
                # ...
                env:
                  - name: NPM_CONFIG_USERCONFIG
                    value: /opt/app-root/src/.npmrc.dynamic-plugins
                  - name: HTTP_PROXY
                    value: 'http://10.10.10.105:3128'
                  - name: HTTPS_PROXY
                    value: 'http://10.10.10.106:3128'
                  - name: NO_PROXY
                    value: 'localhost,example.org'
            # ...
            containers:
              - name: backstage-backend
                # ...
                env:
                  - name: APP_CONFIG_backend_listen_port
                    value: "7007"
                  - name: HTTP_PROXY
                    value: 'http://10.10.10.105:3128'
                  - name: HTTPS_PROXY
                    value: 'http://10.10.10.106:3128'
                  - name: NO_PROXY
                    value: 'localhost,example.org'
  As a developer, set the proxy information in your Backstage CR file as shown in the following example:

    Example: Setting proxy variables in a CR file

    spec:
      # ...
      application:
        extraEnvs:
          envs:
            - name: HTTP_PROXY
              value: 'http://10.10.10.105:3128'
            - name: HTTPS_PROXY
              value: 'http://10.10.10.106:3128'
            - name: NO_PROXY
              value: 'localhost,example.org'

- Save the configuration changes.
5.3. Configure proxy information in Helm deployment
Configure proxy settings for Helm-based deployments by setting environment variables in the Helm configuration file.
Prerequisites
- You have installed the Red Hat Developer Hub application.
Procedure
Set the proxy information in your Helm configuration file:

upstream:
  backstage:
    extraEnvVars:
      - name: HTTP_PROXY
        value: '<http_proxy_url>'
      - name: HTTPS_PROXY
        value: '<https_proxy_url>'
      - name: NO_PROXY
        value: '<no_proxy_settings>'

Where:

- <http_proxy_url>: Replace with the HTTP proxy URL.
- <https_proxy_url>: Replace with the HTTPS proxy URL.
- <no_proxy_settings>: Replace with a comma-separated list of URLs that you want to exclude from proxying, for example, <example1>.com,<example2>.com.

Example: Setting proxy variables by using the Helm chart

upstream:
  backstage:
    extraEnvVars:
      - name: HTTP_PROXY
        value: 'http://10.10.10.105:3128'
      - name: HTTPS_PROXY
        value: 'http://10.10.10.106:3128'
      - name: NO_PROXY
        value: 'localhost,example.org'
- Save the configuration changes.
6. Use the dynamic plugins cache
Use the dynamic plugins cache to reduce platform boot time by storing already-installed plugins and avoiding redundant downloads.
6.1. Dynamic plugins cache
The dynamic plugins cache reduces platform boot time by storing already-installed plugins and skipping redundant downloads when the configuration does not change.
When you enable dynamic plugins cache:
- The system calculates a checksum of each plugin’s YAML configuration (excluding pluginConfig).
- The system stores the checksum in a file named dynamic-plugin-config.hash within the plugin’s directory.
- During boot, if a plugin’s package reference matches the earlier installation and the checksum has not changed, the system skips the download.
- The system automatically removes plugins that you disabled since the earlier boot.
To enable the dynamic plugins cache in RHDH, the plugins directory dynamic-plugins-root must be a persistent volume.
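The cache decision described above can be sketched as follows. This assumes the checksum is a SHA-256 over the plugin's YAML entry minus its pluginConfig key; the real installer's hashing details may differ:

```python
# Sketch of the dynamic plugins cache decision (illustrative only).
import hashlib
import json

def plugin_checksum(plugin_entry):
    # pluginConfig is excluded, so configuration-only edits keep the cache valid.
    significant = {k: v for k, v in plugin_entry.items() if k != "pluginConfig"}
    return hashlib.sha256(json.dumps(significant, sort_keys=True).encode()).hexdigest()

def needs_download(plugin_entry, cached_hash):
    # Skip the download only when a stored hash exists and still matches.
    return cached_hash != plugin_checksum(plugin_entry)

entry = {"package": "oci://quay.io/example/plugin:v1", "disabled": False,
         "pluginConfig": {"theme": "dark"}}
h = plugin_checksum(entry)
print(needs_download(entry, None))  # True  (first boot: no cached hash yet)
print(needs_download(entry, h))     # False (unchanged: download skipped)
entry["pluginConfig"] = {"theme": "light"}
print(needs_download(entry, h))     # False (pluginConfig changes do not invalidate)
```

Changing the package reference itself, by contrast, produces a different checksum and triggers a fresh download.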
6.2. Create a PVC for the dynamic plugin cache by using the Operator
Create a persistent volume claim for the dynamic plugin cache in Operator deployments by replacing the default dynamic-plugins-root volume.
Prerequisites
- You have installed Red Hat Developer Hub on OpenShift Container Platform using the Red Hat Developer Hub Operator.
- You have installed the OpenShift CLI (oc).
Procedure
- Create the persistent volume claim definition and save it to a file, such as pvc.yaml. For example:

  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: dynamic-plugins-root
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 5Gi

  Note: This example uses ReadWriteOnce as the access mode, which prevents multiple replicas from sharing the PVC across different nodes. To run multiple replicas on different nodes, depending on your storage driver, you must use an access mode such as ReadWriteMany.

- To apply this PVC to your cluster, run the following command:

  $ oc apply -f pvc.yaml

- Replace the default dynamic-plugins-root volume with a PVC named dynamic-plugins-root. For example:

  apiVersion: rhdh.redhat.com/v1alpha5
  kind: Backstage
  metadata:
    name: developer-hub
  spec:
    deployment:
      patch:
        spec:
          template:
            spec:
              volumes:
                - $patch: replace
                  name: dynamic-plugins-root
                  persistentVolumeClaim:
                    claimName: dynamic-plugins-root

  Note: To avoid adding a new volume, you must use the $patch: replace directive.
6.3. Fix 404 error after cached dynamic plugins configuration change
When many Developer Hub replicas share a single dynamic plugins cache PVC, updating configurations with the Operator can trigger temporary 404 errors. This occurs because the replicas might access inconsistent cache states during the update process, before all replicas have synchronized.
The solution is to use an individual cache per pod.
Prerequisites
- Your API version is v1alpha5 or later.
Procedure
- In the Backstage Custom Resource (CR) file, set spec.deployment to use the optional StatefulSet resource kind. For example:

  apiVersion: rhdh.redhat.com/v1alpha5
  kind: Backstage
  metadata:
    name: <CR_name>
  ...
  spec:
    deployment:
      kind: StatefulSet
      patch:
        spec:
          replicas: 2
          template:
            spec:
              volumes:
                - $patch: replace
                  name: dynamic-plugins-root
                  persistentVolumeClaim:
                    claimName: dynamic-plugins-root
          volumeClaimTemplates:
            - apiVersion: v1
              kind: PersistentVolumeClaim
              metadata:
                name: dynamic-plugins-root
              spec:
                accessModes:
                  - ReadWriteOnce
                resources:
                  requests:
                    storage: 1Gi

  Note: Using StatefulSet with a single replica can lead to downtime while the application deletes the old pod and creates a new one.

- Wait a few minutes until the Operator reconciles the CR and the StatefulSet resource is ready.
- If you are updating an existing CR, remove the earlier Deployment resource from the cluster:

  oc delete deployment -l app.kubernetes.io/instance=<CR_name>

  Note: The same requirement applies when changing the resource kind from StatefulSet back to Deployment. You must manually delete the previously created resource from the cluster, because the Operator does not automatically remove the legacy resource.
6.4. Create a PVC for the dynamic plugin cache using the Helm Chart
Create a persistent volume claim for the dynamic plugin cache in Helm deployments to persist the cache across pod restarts.
Prerequisites
- You have installed Red Hat Developer Hub using the Helm chart.
- You have installed the OpenShift CLI (oc).
Procedure
- Create the persistent volume claim definition. For example:

  kind: PersistentVolumeClaim
  apiVersion: v1
  metadata:
    name: dynamic-plugins-root
  spec:
    accessModes:
      - ReadWriteOnce
    resources:
      requests:
        storage: 5Gi

  Note: This example uses ReadWriteOnce as the access mode, which prevents multiple replicas from sharing the PVC across different nodes. To run multiple replicas on different nodes, depending on your storage driver, you must use an access mode such as ReadWriteMany.

- To apply this PVC to your cluster, run the following command:

  $ oc apply -f pvc.yaml

- Configure the Helm chart to use the PVC. For example:

  upstream:
    backstage:
      extraVolumes:
        - name: dynamic-plugins-root
          persistentVolumeClaim:
            claimName: dynamic-plugins-root
        - name: dynamic-plugins
          configMap:
            defaultMode: 420
            name: '{{ printf "%s-dynamic-plugins" .Release.Name }}'
            optional: true
        - name: dynamic-plugins-npmrc
          secret:
            defaultMode: 420
            optional: true
            secretName: '{{ printf "%s-dynamic-plugins-npmrc" .Release.Name }}'
        - name: dynamic-plugins-registry-auth
          secret:
            defaultMode: 416
            optional: true
            secretName: '{{ printf "%s-dynamic-plugins-registry-auth" .Release.Name }}'
        - name: npmcacache
          emptyDir: {}
        - name: temp
          emptyDir: {}

  Note: When you configure the Helm chart to use the PVC, you must also include the extraVolumes section defined in the default Helm chart values.
6.5. Configure the dynamic plugins cache
Configure the dynamic plugins cache by setting pull policy and download parameters in the dynamic-plugins.yaml file.
Procedure
To configure the dynamic plugins cache, set the following optional parameters in your dynamic-plugins.yaml file:

- pullPolicy: IfNotPresent (default): Downloads the artifact only if it is not already present in the dynamic-plugins-root folder, without checking image digests.
- pullPolicy: Always: Compares the image digest in the remote registry and downloads the artifact if it has changed, even if Developer Hub has already downloaded the plugin before. When applied to the Node Package Manager (NPM) downloading method, downloads the remote artifact without a digest check.

  Example dynamic-plugins.yaml file configuration to download the remote artifact without a digest check:

  plugins:
    - disabled: false
      pullPolicy: Always
      package: 'oci://quay.io/example-org/example-plugin:v1.0.0!internal-backstage-plugin-example'

- forceDownload: false (default): Older option that downloads the artifact only if it is not already present in the dynamic-plugins-root folder, without checking image digests.
- forceDownload: true: Older option that forces a reinstall of the plugin, bypassing the cache.

Note: The pullPolicy option takes precedence over the forceDownload option. The forceDownload option might be deprecated in a future Developer Hub release.
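The interplay between pullPolicy, local presence, and digest changes can be sketched as a small decision function. This is illustrative only; the real dynamic-plugins installer handles more cases:

```python
# Sketch of the pullPolicy decision described above.
def should_pull(pull_policy, present_locally, digest_changed):
    if pull_policy == "Always":
        # Re-pull whenever the remote digest differs from the local copy.
        # (For NPM packages, where no digest exists, this always re-downloads.)
        return digest_changed
    # IfNotPresent: pull only when the artifact is missing locally.
    return not present_locally

print(should_pull("IfNotPresent", present_locally=True, digest_changed=True))   # False
print(should_pull("IfNotPresent", present_locally=False, digest_changed=False)) # True
print(should_pull("Always", present_locally=True, digest_changed=True))         # True
```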
7. Enable the Red Hat Developer Hub plugin assets cache
Use a Redis cache store to improve Developer Hub performance and reliability by caching plugin assets.
Prerequisites
- You have installed Red Hat Developer Hub.
- You have an active Redis server. For more information about setting up an external Redis server, see the official Redis documentation.
Procedure
- Enable the Developer Hub cache by defining Redis as the cache store type and entering your Redis server connection URL in your app-config.yaml file.

  app-config.yaml file fragment

  backend:
    cache:
      store: redis
      connection: redis://user:pass@cache.example.com:6379

- Enable the cache for TechDocs by adding the techdocs.cache.ttl setting in your app-config.yaml file. This setting specifies how long, in milliseconds, a statically built asset stays in the cache.

  app-config.yaml file fragment

  techdocs:
    cache:
      ttl: 3600000

Tip: Optionally, enable the cache for other plugins that support this feature. See the documentation for each plugin for details.
8. Inject extra files and environment variables into Backstage containers
Inject extra files and environment variables into Backstage containers by mounting ConfigMaps and Secrets with the mountPath field.
- If you do not specify key and mountPath: The system mounts each key or value as a filename or content with a subPath.
- If you specify key with or without mountPath: The system mounts the specified key or value with a subPath.
- If you specify only mountPath: The system mounts a directory containing all the keys or values without a subPath.
- If you do not specify the containers field: The volume mounts only to the backstage-backend container. By default, files mount only to the backstage-backend container. You can also specify other targets, including a list of containers by name (such as dynamic-plugin-install or select custom sidecars), or select all containers in the Backstage Pod.

Note:
- OpenShift Container Platform does not automatically update a volume mounted with subPath. By default, the RHDH Operator monitors these ConfigMaps or Secrets and refreshes the RHDH Pod when changes occur.
- For security purposes, Red Hat Developer Hub does not give the Operator Service Account read access to Secrets. As a result, mounting files from Secrets without specifying both mountPath and key is not supported.
Procedure
Apply the configuration to your Backstage custom resource (CR). The following code block is an example:

spec:
  application:
    extraFiles:
      mountPath: <default_mount_path>
      configMaps:
        - name: <configmap_name_all_entries>
        - name: <configmap_name_single_key>
          key: <specific_file_key>
          containers:
            - "*"
        - name: <configmap_name_custom_path>
          mountPath: <custom_cm_mount_path>
          containers:
            - backstage-backend
            - install-dynamic-plugins
      secrets:
        - name: <secret_name_single_key>
          key: <specific_secret_key>
          containers:
            - install-dynamic-plugins
        - name: <secret_name_custom_path>
          mountPath: <custom_secret_mount_path>
      pvcs:
        - name: <pvc_name_default_path>
        - name: <pvc_name_custom_path>
          mountPath: <custom_pvc_mount_path>
    extraEnvs:
      configMaps:
        - name: <configmap_name_env_var>
          key: <env_var_key>
          containers:
            - "*"
      secrets:
        - name: <secret_name_all_envs>
      envs:
        - name: <static_env_var_name>
          value: "<static_env_var_value>"
          containers:
            - install-dynamic-plugins

where:
- spec.application.extraFiles.mountPath: Specifies the default base mount path for files if you do not set a specific mountPath for a resource (for example, /<default_mount_path>).
- spec.application.extraFiles.configMaps.name: Mounts all entries from <configmap_name_all_entries> to the default mount path.
- spec.application.extraFiles.configMaps.key: Mounts only the specified key (for example, <specific_file_key>.txt) from the ConfigMap.
- spec.application.extraFiles.configMaps.containers: Targets all containers ("*") for mounting.
- spec.application.extraFiles.configMaps.mountPath: Overrides the default and mounts all ConfigMap entries as a directory at the specified path (for example, /<custom_cm_mount_path>).
- spec.application.extraFiles.secrets.key: Mounts only a specific key from the Secret.
- spec.application.extraFiles.secrets.mountPath: Overrides the default and mounts all Secret entries as a directory at the specified path (for example, /<custom_secret_mount_path>).
- spec.application.extraFiles.pvcs.name: Mounts the PVC to the default mount path, appending the PVC name (for example, /<default_mount_path>/<pvc_name_default_path>).
- spec.application.extraFiles.pvcs.mountPath: Overrides the default and mounts the PVC to the specified path (for example, /<custom_pvc_mount_path>).
- spec.application.extraEnvs.configMaps.containers: Injects the specified ConfigMap key as an environment variable into all containers ("*").
- spec.application.extraEnvs.secrets.name: Injects all keys from the Secret as environment variables into the default container.
- spec.application.extraEnvs.envs.containers: Targets only the listed container for the static environment variable injection.
Note: The following explicit options are supported:

- No containers field, or an empty one: Mounts only to the backstage-backend container.
- "*" (asterisk) as the first and only array element: Mounts to all containers.
- Explicit container names, for example, install-dynamic-plugins: Mounts only to the listed containers.
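The container-targeting options above can be sketched as a small resolver. This is a hypothetical helper for illustration; the Operator's actual implementation is not shown in this document:

```python
# Sketch of the containers-field targeting rules described above.
def resolve_targets(containers_field, all_containers):
    if not containers_field:                 # missing or empty: backend only
        return ["backstage-backend"]
    if containers_field[0] == "*":           # "*" as first and only element: all
        return list(all_containers)
    # explicit names: mount only to listed containers present in the pod
    return [c for c in containers_field if c in all_containers]

pod = ["backstage-backend", "install-dynamic-plugins", "my-sidecar"]
print(resolve_targets(None, pod))
# ['backstage-backend']
print(resolve_targets(["*"], pod))
# ['backstage-backend', 'install-dynamic-plugins', 'my-sidecar']
print(resolve_targets(["install-dynamic-plugins"], pod))
# ['install-dynamic-plugins']
```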
Verification
Verify that the files mount with the correct paths and container targets:

| Resource | Target type | Path(s) or name(s) | Container(s) |
|---|---|---|---|
| ConfigMap (<configmap_name_all_entries>) | File | /<default_mount_path>/<file_name> | backstage-backend |
| ConfigMap (<configmap_name_single_key>) | File | /<default_mount_path>/<specific_file_key> | All |
| ConfigMap (<configmap_name_custom_path>) | Directory | /<custom_cm_mount_path> | backstage-backend, install-dynamic-plugins |
| Secret (<secret_name_single_key>) | File | /<default_mount_path>/<specific_secret_key> | install-dynamic-plugins |
| Secret (<secret_name_custom_path>) | Directory | /<custom_secret_mount_path> | backstage-backend |
| PVC (<pvc_name_default_path>) | Directory | /<default_mount_path>/<pvc_name_default_path> | backstage-backend |
| ConfigMap (<configmap_name_env_var>) | Environment variable | <env_var_key> | All |
| Secret (<secret_name_all_envs>) | Environment variable | All keys from the Secret | backstage-backend |
| Custom Resource Definition (CRD) (envs) | Environment variable | <static_env_var_name> | install-dynamic-plugins |
9. Configure mount paths for default Secrets and Persistent Volume Claims (PVCs)
Configure custom mount paths for Secrets and PVCs by adding the rhdh.redhat.com/mount-path annotation to your resource.
Procedure
- To specify a PVC mount path, add the rhdh.redhat.com/mount-path annotation to your configuration file as shown in the following example:

  Example specifying the PVC mount path

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: <my_claim>
    annotations:
      rhdh.redhat.com/mount-path: /mount/path/from/annotation

  Where:
  - <my_claim>: The PVC to mount.
  - rhdh.redhat.com/mount-path: The mount path for the PVC, in this case the /mount/path/from/annotation directory.
- To specify a Secret mount path, add the rhdh.redhat.com/mount-path annotation to your configuration file as shown in the following example:

  Example specifying where the Secret mounts

  apiVersion: v1
  kind: Secret
  metadata:
    name: <my_secret>
    annotations:
      rhdh.redhat.com/mount-path: /mount/path/from/annotation
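The way such an annotation could be resolved can be sketched as follows. Both the helper name and the fallback path of <default base>/<resource name> are assumptions for illustration, not documented Operator behavior:

```python
# Hypothetical sketch of resolving a mount path from the annotation.
def resolve_mount_path(metadata, default_base):
    annotations = metadata.get("annotations", {})
    # The annotation, when present, wins; otherwise fall back to a
    # default of <default_base>/<resource name> (assumed for this sketch).
    return annotations.get(
        "rhdh.redhat.com/mount-path",
        f"{default_base}/{metadata['name']}",
    )

pvc = {"name": "my-claim",
       "annotations": {"rhdh.redhat.com/mount-path": "/mount/path/from/annotation"}}
print(resolve_mount_path(pvc, "/opt/app-root/src"))
# /mount/path/from/annotation
print(resolve_mount_path({"name": "other-claim"}, "/opt/app-root/src"))
# /opt/app-root/src/other-claim
```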
10. Mount secrets and PVCs to specific containers
Mount secrets and PVCs to specific containers by adding the rhdh.redhat.com/containers annotation to your configuration file.
Procedure
- To mount Secrets to all containers, set the rhdh.redhat.com/containers annotation to "*" in your configuration file:

  Example mounting to all containers

  apiVersion: v1
  kind: Secret
  metadata:
    name: <my_secret>
    annotations:
      rhdh.redhat.com/containers: "*"

  Important: Set rhdh.redhat.com/containers to "*" to mount the Secret to all containers in the deployment.

- To mount to specific containers, separate the names with commas:

  Example separating the list of containers

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: <my_claim>
    annotations:
      rhdh.redhat.com/containers: "init-dynamic-plugins,backstage-backend"

  Note: This configuration mounts the <my_claim> PVC to the init-dynamic-plugins and backstage-backend containers.
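Parsing the annotation value can be sketched as follows (an illustrative re-implementation, not the Operator's actual code):

```python
# Sketch of parsing the rhdh.redhat.com/containers annotation value.
def parse_containers_annotation(value):
    value = value.strip()
    if value == "*":
        return "*"  # mount into every container in the pod
    # otherwise: a comma-separated list of container names
    return [name.strip() for name in value.split(",") if name.strip()]

print(parse_containers_annotation("*"))
# *
print(parse_containers_annotation("init-dynamic-plugins,backstage-backend"))
# ['init-dynamic-plugins', 'backstage-backend']
```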
11. Configure Red Hat Developer Hub deployment when using the Operator
Configure Red Hat Developer Hub deployment by using the spec.deployment.patch field in the Red Hat Developer Hub Operator custom resource to control the Deployment resource.
Create a Backstage CR with the following fields:
apiVersion: rhdh.redhat.com/v1alpha5
kind: Backstage
metadata:
  name: developer-hub
spec:
  deployment:
    patch:
      spec:
        template:

- labels: Add labels to the Developer Hub pod.

  Example adding the label my=true

  apiVersion: rhdh.redhat.com/v1alpha5
  kind: Backstage
  metadata:
    name: developer-hub
  spec:
    deployment:
      patch:
        spec:
          template:
            metadata:
              labels:
                my: "true"

- volumes:
Add an additional volume named my-volume and mount it under /my/path in the Developer Hub application container.
  apiVersion: rhdh.redhat.com/v1alpha5
  kind: Backstage
  metadata:
    name: developer-hub
  spec:
    deployment:
      patch:
        spec:
          template:
            spec:
              containers:
                - name: backstage-backend
                  volumeMounts:
                    - mountPath: /my/path
                      name: my-volume
              volumes:
                - ephemeral:
                    volumeClaimTemplate:
                      spec:
                        storageClassName: "special"
                  name: my-volume
Replace the default dynamic-plugins-root volume with a persistent volume claim (PVC) named dynamic-plugins-root. Note the $patch: replace directive, otherwise the system adds a new volume.
  apiVersion: rhdh.redhat.com/v1alpha5
  kind: Backstage
  metadata:
    name: developer-hub
  spec:
    deployment:
      patch:
        spec:
          template:
            spec:
              volumes:
                - $patch: replace
                  name: dynamic-plugins-root
                  persistentVolumeClaim:
                    claimName: dynamic-plugins-root

- cpu request: Set the CPU request for the Developer Hub application container to 250m.
Example CPU request
  apiVersion: rhdh.redhat.com/v1alpha5
  kind: Backstage
  metadata:
    name: developer-hub
  spec:
    deployment:
      patch:
        spec:
          template:
            spec:
              containers:
                - name: backstage-backend
                  resources:
                    requests:
                      cpu: 250m

- my-sidecar container: Add a new my-sidecar sidecar container into the Developer Hub Pod.

  Example sidecar container

  apiVersion: rhdh.redhat.com/v1alpha5
  kind: Backstage
  metadata:
    name: developer-hub
  spec:
    deployment:
      patch:
        spec:
          template:
            spec:
              containers:
                - name: my-sidecar
                  image: quay.io/my-org/my-sidecar:latest
12. Configure an RHDH instance with a TLS connection in Kubernetes
Configure RHDH with a TLS connection in Kubernetes to ensure secure connections with third-party applications and external databases.
Prerequisites
- You have set up an Azure Red Hat OpenShift (ARO) cluster with a public CA-signed certificate. For more information about obtaining CA certificates, refer to your vendor documentation.
- You have created a namespace and set up a service account with proper read permissions on resources.

  Example: Kubernetes manifest for role-based access control

  apiVersion: rbac.authorization.k8s.io/v1
  kind: ClusterRole
  metadata:
    name: backstage-read-only
  rules:
    - apiGroups:
        - '*'
      resources:
        - pods
        - configmaps
        - services
        - deployments
        - replicasets
        - horizontalpodautoscalers
        - ingresses
        - statefulsets
        - limitranges
        - resourcequotas
        - daemonsets
      verbs:
        - get
        - list
        - watch
  #...

- You have obtained the secret and the service CA certificate associated with your service account.
- You have created some resources and added annotations to them so that the Kubernetes plugin can discover them. You can apply these Kubernetes annotations:
  - backstage.io/kubernetes-id to label components
  - backstage.io/kubernetes-namespace to label namespaces
Procedure
- Enable the Kubernetes plugins in the dynamic-plugins-rhdh.yaml file by setting disabled to false:

  kind: ConfigMap
  apiVersion: v1
  metadata:
    name: dynamic-plugins-rhdh
  data:
    dynamic-plugins.yaml: |
      includes:
        - dynamic-plugins.default.yaml
      plugins:
        - package: ./dynamic-plugins/dist/backstage-plugin-kubernetes-backend-dynamic
          disabled: false
        - package: ./dynamic-plugins/dist/backstage-plugin-kubernetes
          disabled: false
      # ...

  Note: The backstage-plugin-kubernetes plugin is currently in Technology Preview. As an alternative, you can use the ./dynamic-plugins/dist/backstage-plugin-topology-dynamic plugin, which is Generally Available (GA).

- Set the Kubernetes cluster details and configure the catalog sync options in the app-config.yaml configuration file:

  kind: ConfigMap
  apiVersion: v1
  metadata:
    name: my-rhdh-app-config
  data:
    "app-config.yaml": |
      # ...
      catalog:
        rules:
          - allow: [Component, System, API, Resource, Location]
        providers:
          kubernetes:
            openshift:
              cluster: openshift
              processor:
                namespaceOverride: default
                defaultOwner: guests
              schedule:
                frequency:
                  seconds: 30
                timeout:
                  seconds: 5
      kubernetes:
        serviceLocatorMethod:
          type: 'multiTenant'
        clusterLocatorMethods:
          - type: 'config'
            clusters:
              - url: <target_cluster_api_server_url>
                name: openshift
                authProvider: 'serviceAccount'
                skipTLSVerify: false
                skipMetricsLookup: true
                dashboardUrl: <target_cluster_console_url>
                dashboardApp: openshift
                serviceAccountToken: ${K8S_SERVICE_ACCOUNT_TOKEN}
                caData: ${K8S_CONFIG_CA_DATA}
      # ...

  Where:

  - url: The base URL to the Kubernetes control plane. You can run the kubectl cluster-info command to get the base URL.
  - skipTLSVerify: Set the value of this parameter to false to enable verification of the TLS certificate.
  - dashboardUrl: (Optional) The link to the Kubernetes dashboard managing the ARO cluster.
  - serviceAccountToken: (Optional) Pass the service account token by using a K8S_SERVICE_ACCOUNT_TOKEN environment variable that you define in your <my_product_secrets> secret.
  - caData: Pass the CA data by using a K8S_CONFIG_CA_DATA environment variable that you define in your <my_product_secrets> secret.
- Save the configuration changes.
Verification
Run the RHDH application to import your catalog:
$ kubectl -n rhdh-operator get pods -w
- Verify that the pod log shows no errors for your configuration.
- Go to Catalog and check the component page in the Developer Hub instance to verify the cluster connection and the presence of your created resources.
If you encounter connection errors, such as certificate issues or permission errors, check the message box on the component page or view the pod logs.
13. Troubleshoot Developer Hub configuration issues
Resolve common configuration issues in Developer Hub, such as Helm overwriting predefined array values.
13.1. Fix Helm overwriting predefined arrays
If you use Helm to install dynamic plugins, you might encounter an issue where predefined values in fields with arrays are overwritten after you add new values. The issue affects fields such as:

- extraEnvVars
- extraVolumeMounts
- extraVolumes
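The overwrite happens because Helm merges value maps key by key, but a user-supplied list replaces the chart's default list wholesale rather than being appended to it. The following is a simplified model of that coalescing behavior, not Helm's actual implementation:

```python
# Simplified model of Helm-style values merging: maps merge deeply,
# lists (and scalars) replace the default value entirely.
def merge_values(defaults, overrides):
    merged = dict(defaults)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_values(merged[key], value)  # maps merge key by key
        else:
            merged[key] = value  # lists replace wholesale, never append
    return merged

defaults = {"extraEnvVars": [{"name": "BACKEND_SECRET"}]}
overrides = {"extraEnvVars": [{"name": "MY_VAR"}]}
print(merge_values(defaults, overrides)["extraEnvVars"])
# [{'name': 'MY_VAR'}]  -- the predefined entry is gone unless you copy it too
```

This is why the fix below duplicates the chart's predefined entries into your own values.yaml: your list must carry the defaults along with your additions.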
Fix this issue by duplicating the predefined values from RHDH Helm Chart’s values.yaml file into your own version of the file.
Procedure
- For extraEnvVars, add the following content to your values.yaml file:

  extraEnvVars:
    - name: BACKEND_SECRET
      valueFrom:
        secretKeyRef:
          key: backend-secret
          name: '{{ include "janus-idp.backend-secret-name" $ }}'
    - name: POSTGRESQL_ADMIN_PASSWORD
      valueFrom:
        secretKeyRef:
          key: postgres-password
          name: '{{- include "janus-idp.postgresql.secretName" . }}'

- For extraVolumeMounts, add the following content to your values.yaml file:

  extraVolumeMounts:
    - name: dynamic-plugins-root
      mountPath: /opt/app-root/src/dynamic-plugins-root
    - name: temp
      mountPath: /tmp

- For extraVolumes, add the following content to your values.yaml file:

  extraVolumes:
    - name: dynamic-plugins-root
      ephemeral:
        volumeClaimTemplate:
          spec:
            accessModes:
              - ReadWriteOnce
            resources:
              requests:
                storage: 5Gi