Configuring Red Hat Developer Hub
Adding custom config maps and secrets to configure your Red Hat Developer Hub instance to work in your IT ecosystem
Abstract
- 1. Provisioning and using your custom Red Hat Developer Hub configuration
- 2. Red Hat Developer Hub default configuration
- 3. Configuring external PostgreSQL databases
- 4. Configuring Red Hat Developer Hub deployment when using the Operator
- 5. Configuring high availability in Red Hat Developer Hub
- 6. Running Red Hat Developer Hub behind a corporate proxy
- 7. Configuring an RHDH instance with a TLS connection in Kubernetes
- 8. Using the dynamic plugins cache
- 9. Enabling the Red Hat Developer Hub plugin assets cache
Learn how to configure Red Hat Developer Hub (RHDH) for production to work in your IT ecosystem by adding custom config maps and secrets.
1. Provisioning and using your custom Red Hat Developer Hub configuration
To configure Red Hat Developer Hub, use these methods, which are widely used to configure a Red Hat OpenShift Container Platform application:
- Use config maps to mount files and directories.
- Use secrets to inject environment variables.
Learn to apply these methods to Developer Hub:
- Provision your custom config maps and secrets to OpenShift Container Platform.
- Use your selected deployment method to mount the config maps and inject the secrets.
1.1. Provisioning your custom Red Hat Developer Hub configuration
To configure Red Hat Developer Hub, provision your custom Red Hat Developer Hub config maps and secrets to Red Hat OpenShift Container Platform (RHOCP) before running Red Hat Developer Hub.
On Red Hat OpenShift Container Platform, you can skip this step to run Developer Hub with the default config map and secret. In that case, your changes to this configuration might be reverted when Developer Hub restarts.
Prerequisites
- By using the OpenShift CLI (oc), you have access, with developer permissions, to the OpenShift cluster aimed at containing your Developer Hub instance.
Procedure
For security, store your secrets as environment variable values in an OpenShift Container Platform secret, rather than in clear text in your configuration files. Collect all your secrets in the secrets.txt file, with one secret per line in KEY=value form, as shown in the sketch after this step.
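For illustration, a minimal sketch of a secrets.txt file; BACKEND_SECRET is referenced by the app-config.yaml example below, while GITHUB_TOKEN is a hypothetical entry shown only to illustrate the KEY=value form:

BACKEND_SECRET=<generated_random_string>
GITHUB_TOKEN=<your_github_token>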
Author your custom app-config.yaml file. This is the main Developer Hub configuration file. You need a custom app-config.yaml file to prevent the Developer Hub installer from reverting user edits during upgrades. When your custom app-config.yaml file is empty, Developer Hub uses default values.

- To prepare a deployment with the Red Hat Developer Hub Operator on OpenShift Container Platform, you can start with an empty file.
- To prepare a deployment with the Red Hat Developer Hub Helm chart, or on Kubernetes, enter the Developer Hub base URL in the relevant fields in your app-config.yaml file to ensure proper functionality of Developer Hub. The base URL is what a Developer Hub user sees in their browser when accessing Developer Hub. The relevant fields are baseUrl in the app and backend sections, and origin in the backend.cors subsection:

Example 1. Configuring the baseUrl in app-config.yaml

app:
  title: Red Hat Developer Hub
  baseUrl: https://<my_developer_hub_domain>
backend:
  auth:
    externalAccess:
      - type: legacy
        options:
          subject: legacy-default-config
          secret: "${BACKEND_SECRET}"
  baseUrl: https://<my_developer_hub_domain>
  cors:
    origin: https://<my_developer_hub_domain>
Optionally, enter your additional configuration in this file.
Author your custom dynamic-plugins.yaml file to enable plugins. By default, Developer Hub enables a minimal plugin set, and disables plugins that require configuration or secrets, such as the GitHub repository discovery plugin and the Role-based access control (RBAC) plugin.

Enable the GitHub repository discovery and the RBAC features in dynamic-plugins.yaml:

includes:
  - dynamic-plugins.default.yaml
plugins:
  - package: ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github
    disabled: false
  - package: ./dynamic-plugins/dist/backstage-community-plugin-rbac
    disabled: false

Provision your custom configuration files to your OpenShift Container Platform cluster.
Create the <my-rhdh-project> namespace aimed at containing your Developer Hub instance.
$ oc create namespace my-rhdh-project
Provision your app-config.yaml and dynamic-plugins.yaml files respectively to the my-rhdh-app-config and dynamic-plugins-rhdh config maps in the <my-rhdh-project> project.

$ oc create configmap my-rhdh-app-config --from-file=app-config.yaml --namespace=my-rhdh-project
$ oc create configmap dynamic-plugins-rhdh --from-file=dynamic-plugins.yaml --namespace=my-rhdh-project
Alternatively, create the config maps by using the web console.
Provision your secrets.txt file to the my-rhdh-secrets secret in the <my-rhdh-project> project.

$ oc create secret generic my-rhdh-secrets --from-file=secrets.txt --namespace=my-rhdh-project
Alternatively, create the secret by using the web console.
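Optionally, verify that the resources were provisioned. This is a sketch using standard oc commands and the names created above:

$ oc get configmap my-rhdh-app-config dynamic-plugins-rhdh --namespace=my-rhdh-project
$ oc get secret my-rhdh-secrets --namespace=my-rhdh-project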
1.2. Using the Red Hat Developer Hub Operator to run Developer Hub with your custom configuration
To use the Developer Hub Operator to run Red Hat Developer Hub with your custom configuration, create your Backstage custom resource (CR) that:
- Mounts files provisioned in your custom config maps.
- Injects environment variables provisioned in your custom secrets.
Prerequisites
- By using the OpenShift CLI (oc), you have access, with developer permissions, to the OpenShift Container Platform cluster aimed at containing your Developer Hub instance.
- Your administrator has installed the Red Hat Developer Hub Operator in the cluster.
- You have provisioned your custom config maps and secrets in your <my-rhdh-project> project.
Procedure
Author your Backstage CR in a my-rhdh-custom-resource.yaml file to use your custom config maps and secrets.

Minimal my-rhdh-custom-resource.yaml custom resource example:

apiVersion: rhdh.redhat.com/v1alpha3
kind: Backstage
metadata:
  name: my-rhdh-custom-resource
spec:
  application:
    appConfig:
      mountPath: /opt/app-root/src
      configMaps:
        - name: my-rhdh-app-config
    extraEnvs:
      secrets:
        - name: <my_product_secrets>
    extraFiles:
      mountPath: /opt/app-root/src
    route:
      enabled: true
  database:
    enableLocalDb: true

my-rhdh-custom-resource.yaml custom resource example with dynamic plugins and RBAC policies config maps, and external PostgreSQL database secrets:

apiVersion: rhdh.redhat.com/v1alpha3
kind: Backstage
metadata:
  name: <my-rhdh-custom-resource>
spec:
  application:
    appConfig:
      mountPath: /opt/app-root/src
      configMaps:
        - name: my-rhdh-app-config
        - name: rbac-policies
    dynamicPluginsConfigMapName: dynamic-plugins-rhdh
    extraEnvs:
      secrets:
        - name: <my_product_secrets>
        - name: my-rhdh-database-secrets
    extraFiles:
      mountPath: /opt/app-root/src
      secrets:
        - name: my-rhdh-database-certificates-secrets
          key: postgres-crt.pem, postgres-ca.pem, postgres-key.key
    route:
      enabled: true
  database:
    enableLocalDb: false
Mandatory fields
- No fields are mandatory. You can create an empty Backstage CR and run Developer Hub with the default configuration.

Optional fields
spec.application.appConfig.configMaps
- Enter your config map name list.

Mount files in the my-rhdh-app-config config map:

spec:
  application:
    appConfig:
      mountPath: /opt/app-root/src
      configMaps:
        - name: my-rhdh-app-config

Mount files in the my-rhdh-app-config and rbac-policies config maps:

spec:
  application:
    appConfig:
      mountPath: /opt/app-root/src
      configMaps:
        - name: my-rhdh-app-config
        - name: rbac-policies

spec.application.extraEnvs.envs
- Optionally, enter your additional environment variables that are not secrets, such as your proxy environment variables.

Inject your HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables:

spec:
  application:
    extraEnvs:
      envs:
        - name: HTTP_PROXY
          value: 'http://10.10.10.105:3128'
        - name: HTTPS_PROXY
          value: 'http://10.10.10.106:3128'
        - name: NO_PROXY
          value: 'localhost,example.org'

spec.application.extraEnvs.secrets
- Enter your environment variables secret name list.

Inject the environment variables in your Red Hat Developer Hub secret:

spec:
  application:
    extraEnvs:
      secrets:
        - name: <my_product_secrets>

Inject the environment variables in the Red Hat Developer Hub and my-rhdh-database-secrets secrets:

spec:
  application:
    extraEnvs:
      secrets:
        - name: <my_product_secrets>
        - name: my-rhdh-database-secrets

Note: <my_product_secrets> is your preferred Developer Hub secret name, specifying the identifier for your secret configuration within Developer Hub.

spec.application.extraFiles.secrets
- Enter your certificates files secret name and files list.

Mount the postgres-crt.pem, postgres-ca.pem, and postgres-key.key files contained in the my-rhdh-database-certificates-secrets secret:

spec:
  application:
    extraFiles:
      mountPath: /opt/app-root/src
      secrets:
        - name: my-rhdh-database-certificates-secrets
          key: postgres-crt.pem, postgres-ca.pem, postgres-key.key

spec.database.enableLocalDb
- Enable or disable the local PostgreSQL database.

Disable the local PostgreSQL database generation to use an external PostgreSQL database:

spec:
  database:
    enableLocalDb: false

On a development environment, use the local PostgreSQL database:

spec:
  database:
    enableLocalDb: true

spec.deployment
- Optionally, enter your deployment configuration.
Apply your Backstage CR to start or update your Developer Hub instance:
$ oc apply --filename=my-rhdh-custom-resource.yaml --namespace=my-rhdh-project
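Optionally, confirm that the Operator reconciles your instance; a sketch using standard oc commands and the names used above:

$ oc get backstage my-rhdh-custom-resource --namespace=my-rhdh-project
$ oc get pods --namespace=my-rhdh-project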
1.2.1. Mounting additional files in your custom configuration by using the Red Hat Developer Hub Operator
You can use the Developer Hub Operator to mount extra files, such as a ConfigMap or Secret, to the container in a preferred location.
The mountPath field specifies the location where a ConfigMap or Secret is mounted. The behavior of the mount, whether it includes or excludes a subPath, depends on the specification of the key or mountPath fields.
- If key and mountPath are not specified: Each key or value is mounted as a filename or content with a subPath.
- If key is specified with or without mountPath: The specified key or value is mounted with a subPath.
- If only mountPath is specified: A directory containing all the keys or values is mounted without a subPath.
- OpenShift Container Platform does not automatically update a volume mounted with subPath. By default, the RHDH Operator monitors these ConfigMaps or Secrets and refreshes the RHDH Pod when changes occur.
- For security purposes, Red Hat Developer Hub does not give the Operator Service Account read access to Secrets. As a result, mounting files from Secrets without specifying both mountPath and key is not supported.
Prerequisites
- You have developer permissions to access the OpenShift Container Platform cluster containing your Developer Hub instance using the OpenShift CLI (oc).
- Your OpenShift Container Platform administrator has installed the Red Hat Developer Hub Operator in OpenShift Container Platform.
Procedure
In OpenShift Container Platform, create your ConfigMap or Secret with the following YAML code:
Minimal my-project-configmap ConfigMap example:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-project-configmap
data:
  file11.txt: |
    My file11 content
  file12.txt: |
    My file12 content

Minimal Red Hat Developer Hub Secret example:

apiVersion: v1
kind: Secret
metadata:
  name: <my_product_secrets>
stringData:
  secret11.txt: |
    secret-content

For more information, see Provisioning and using your custom Red Hat Developer Hub configuration.
Set the value of the configMaps name to the name of the ConfigMap, or of the secrets name to the name of the Secret, in your Backstage CR. For example:

spec:
  application:
    extraFiles:
      mountPath: /my/path
      configMaps:
        - name: my-project-configmap
          key: file12.txt
          mountPath: /my/my-rhdh-config-map/path
      secrets:
        - name: <my_product_secrets>
          key: secret11.txt
          mountPath: /my/my-rhdh-secret/path

Note: <my_product_secrets> is your preferred Developer Hub secret name, specifying the identifier for your secret configuration within Developer Hub.
1.3. Using the Red Hat Developer Hub Helm chart to run Developer Hub with your custom configuration
You can use the Red Hat Developer Hub Helm chart to add a custom application configuration file to your OpenShift Container Platform instance.
Prerequisites
- By using the OpenShift Container Platform web console, you have access, with developer permissions, to an OpenShift Container Platform project named <my-rhdh-project>, aimed at containing your Developer Hub instance.
- You have uploaded your custom configuration files and secrets in your <my-rhdh-project> project.
Procedure
Configure Helm to use your custom configuration files in Developer Hub.
- Go to the Helm tab to see the list of Helm releases.
- Click the overflow menu on the Helm release that you want to use and select Upgrade.
- Use the YAML view to edit the Helm configuration.
Set the value of the upstream.backstage.extraAppConfig.configMapRef and upstream.backstage.extraAppConfig.filename parameters as follows:

Helm configuration excerpt

upstream:
  backstage:
    extraAppConfig:
      - configMapRef: my-rhdh-app-config
        filename: app-config.yaml

- Click Upgrade.
Next steps
- Install Developer Hub by using Helm.
2. Red Hat Developer Hub default configuration
You can deploy a standard Red Hat Developer Hub (RHDH) instance, understand its structure, and tailor the RHDH instance to meet your needs.
2.1. Red Hat Developer Hub default configuration guide
The Red Hat Developer Hub (RHDH) Operator creates a set of Kubernetes resources to deploy and manage a Backstage instance. The default configuration for these default resources is defined at the Operator level and can be customized for a specific instance using the Backstage Custom Resource (CR). This approach provides a clear starting point while offering flexibility to tailor each deployment.
The default configuration is stored in a ConfigMap named rhdh-default-config located in the rhdh-operator namespace on OpenShift. This ConfigMap contains the YAML manifests that define the foundational structure of the RHDH instance.
You can create a basic RHDH instance by applying an empty Backstage Custom Resource as follows:
Example creating an RHDH instance

apiVersion: rhdh.redhat.com/v1alpha4
kind: Backstage
metadata:
  name: my-rhdh-instance
  namespace: rhdh
The Operator automatically creates the following resources in the specified RHDH namespace by default based on the default configuration:
Table 1. Default resources created by the Operator

| File Name | Resource GVK | Resource Name | Description |
|---|---|---|---|
| deployment.yaml | apps.Deployment | backstage-{cr-name} | (Mandatory) The main Backstage application deployment. |
| service.yaml | core.Service | backstage-{cr-name} | (Mandatory) The Backstage application service. |
| db-statefulset.yaml | apps.StatefulSet | backstage-psql-{cr-name} | The PostgreSQL database stateful set. Needed if spec.database.enableLocalDb is true (default). |
| db-service.yaml | core.Service | backstage-psql-{cr-name} | The PostgreSQL database service. Needed if spec.database.enableLocalDb is true (default). |
| db-secret.yaml | core.Secret | backstage-psql-secret-{cr-name} | The PostgreSQL database credentials secret. Needed if spec.database.enableLocalDb is true (default). |
| route.yaml | openshift.Route | backstage-{cr-name} | The OpenShift Route to expose Backstage externally. (Optional) Applied to OpenShift only. |
| app-config.yaml | core.ConfigMap | backstage-appconfig-{cr-name} | (Optional) Specifies one or more Backstage app-config files. |
| configmap-files.yaml | core.ConfigMap | backstage-files-{cr-name} | (Optional) Specifies additional ConfigMaps to be mounted as files into the Backstage Pod. |
| configmap-envs.yaml | core.ConfigMap | backstage-envs-{cr-name} | (Optional) Specifies additional ConfigMaps to be exposed as environment variables into the Backstage Pod. |
| secret-files.yaml | core.Secret | backstage-files-{cr-name} | (Optional) Specifies additional Secrets to be mounted as files into the Backstage Pod. |
| secret-envs.yaml | core.Secret | backstage-envs-{cr-name} | (Optional) Specifies additional Secrets to be exposed as environment variables into the Backstage Pod. |
| dynamic-plugins.yaml | core.ConfigMap | backstage-dynamic-plugins-{cr-name} | (Optional) Specifies the dynamic plugins that the Operator installs into the Backstage instance. |
| pvcs.yaml | core.PersistentVolumeClaim | list of backstage-{cr-name}-<pvc-name> | (Optional) The Persistent Volume Claim for the PostgreSQL database. |

{cr-name} is the name of the Backstage Custom Resource, for example my-rhdh-instance in the above example.
2.2. Automated Operator features
You can use the Operator to automate several key processes to effectively configure your Backstage application.
2.2.1. Metadata generation
The Operator automatically generates specific metadata values for all default resources at runtime to ensure your Backstage application functions properly.
For all the default resources, metadata.name is generated according to the rules defined in the Default Configuration files, particularly the Resource name column. For example, a Backstage Custom Resource (CR) named mybackstage creates a Kubernetes Deployment resource called backstage-mybackstage; a fuller sketch follows the list below.
The following metadata is generated for each resource:
deployment.yaml
- spec.selector.matchLabels[rhdh.redhat.com/app] = backstage-{cr-name}
- spec.template.metadata.labels[rhdh.redhat.com/app] = backstage-{cr-name}

service.yaml
- spec.selector[rhdh.redhat.com/app] = backstage-{cr-name}

db-statefulset.yaml
- spec.selector.matchLabels[rhdh.redhat.com/app] = backstage-psql-{cr-name}
- spec.template.metadata.labels[rhdh.redhat.com/app] = backstage-psql-{cr-name}

db-service.yaml
- spec.selector[rhdh.redhat.com/app] = backstage-psql-{cr-name}
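For illustration, a minimal sketch of the selector metadata that these rules produce for a CR named mybackstage; only the generated fields are shown:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: backstage-mybackstage
spec:
  selector:
    matchLabels:
      rhdh.redhat.com/app: backstage-mybackstage
  template:
    metadata:
      labels:
        rhdh.redhat.com/app: backstage-mybackstage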
2.2.2. Multiple resources
You can define and create multiple resources of the same type in a single YAML file. This is applicable to any resource type that is a list in the resource table. To define multiple resources, use the --- delimiter to separate each resource definition.
For example, adding the following code snippet to pvcs.yaml creates two PersistentVolumeClaims (PVCs) called backstage-{cr-name}-myclaim1 and backstage-{cr-name}-myclaim2 and mounts them to the Backstage container accordingly.
Example creating multiple PVCs
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim1
...
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim2
...
2.2.3. Default base URLs
The Operator automatically sets the base URLs for your Backstage application in the default app-config ConfigMap, named backstage-appconfig-{cr-name}. The Operator does so based on your Route parameters and the OpenShift cluster ingress domain.
The Operator follows these rules to set the base URLs for your application:
- If the cluster is not OpenShift, the Operator makes no changes.
- If you explicitly set the spec.application.route.enabled field in your Custom Resource (CR) to false, no changes are made.
- If you define spec.application.route.host in the Backstage CR, the base URLs are set to https://<spec.application.route.host>.
- If you specify the spec.application.route.subdomain in the Backstage CR, the base URLs are set to https://<spec.application.route.subdomain>.<cluster_ingress_domain>, as shown in the sketch after this list.
- If no custom host or subdomain is provided, the Operator sets the base URLs to https://backstage-{cr-name}-<namespace>.<cluster_ingress_domain>, which is the default domain for the created Route resource.
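As a sketch of the subdomain rule, assuming a hypothetical cluster ingress domain of apps.example.com:

apiVersion: rhdh.redhat.com/v1alpha3
kind: Backstage
metadata:
  name: my-rhdh
spec:
  application:
    route:
      subdomain: rhdh

With this CR, the Operator sets the base URLs to https://rhdh.apps.example.com.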
The Operator updates the following base URLs in the default app-config ConfigMap:
- app.baseUrl
- backend.baseUrl
- backend.cors.origin
The Operator performs these actions on a best-effort basis, and only on OpenShift. In case of an error, or on non-OpenShift clusters, you can still override these defaults by providing a custom app-config ConfigMap.
2.3. Mounts for default Secrets and Persistent Volume Claims (PVCs)
You can use annotations to configure mount paths and specify containers for Secrets and Persistent Volume Claims (PVCs) that are attached to the Operator default resources in your Red Hat Developer Hub deployment. This method is specific for default objects, for instance, the Backstage Deployment that the Operator manages.
2.3.1. Configuring mount paths for default Secrets and Persistent Volume Claims (PVCs)
By default, the mount path is /opt/app-root/src. To specify a different path, add the rhdh.redhat.com/mount-path annotation to your resource.
Procedure
To specify a PVC mount path, add the rhdh.redhat.com/mount-path annotation to your configuration file as shown in the following example:

Example specifying the PVC mount path

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <my_claim> # Specifies the PVC to mount
  annotations:
    # Specifies which mount path the PVC mounts to
    # (in this case, the /mount/path/from/annotation directory)
    rhdh.redhat.com/mount-path: /mount/path/from/annotation

To specify a Secret mount path, add the rhdh.redhat.com/mount-path annotation to your configuration file as shown in the following example:

Example specifying where the Secret mounts

apiVersion: v1
kind: Secret
metadata:
  name: <my_secret> # Specifies the Secret name
  annotations:
    rhdh.redhat.com/mount-path: /mount/path/from/annotation
2.3.2. Mounting Secrets and PVCs to specific containers
By default, Secrets and PVCs mount only to the Red Hat Developer Hub backstage-backend container. You can add the rhdh.redhat.com/containers annotation to your configuration file to specify the containers to mount to.
Procedure
To mount Secrets to all containers, set the rhdh.redhat.com/containers annotation to * in your configuration file:

Example mounting to all containers

apiVersion: v1
kind: Secret
metadata:
  name: <my_secret>
  annotations:
    rhdh.redhat.com/containers: '*'

Important: Set rhdh.redhat.com/containers to * to mount it to all containers in the deployment.

To mount to specific containers, separate the names with commas:

Example separating the list of containers

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: <my_claim>
  annotations:
    rhdh.redhat.com/containers: "init-dynamic-plugins,backstage-backend"

Note: This configuration mounts the <my_claim> PVC to the init-dynamic-plugins and backstage-backend containers.
3. Configuring external PostgreSQL databases
As an administrator, you can configure and use external PostgreSQL databases in Red Hat Developer Hub. You can use a PostgreSQL certificate file to configure an external PostgreSQL instance using the Operator or Helm Chart.
Developer Hub supports the configuration of external PostgreSQL databases. You can perform maintenance activities, such as backing up your data or configuring high availability (HA) for the external PostgreSQL databases.
By default, the Red Hat Developer Hub Operator or Helm Chart creates a local PostgreSQL database. However, this configuration is not suitable for production environments. For production deployments, disable the creation of the local database and configure Developer Hub to connect to an external PostgreSQL instance instead.
3.1. Configuring an external PostgreSQL instance using the Operator
You can configure an external PostgreSQL instance using the Red Hat Developer Hub Operator. By default, the Operator creates and manages a local instance of PostgreSQL in the same namespace where you have deployed the RHDH instance. However, you can change this default setting to configure an external PostgreSQL database server, for example, Amazon Web Services (AWS) Relational Database Service (RDS) or Azure database.
Prerequisites
- You are using a supported version of PostgreSQL. For more information, see the Product life cycle page.
You have the following details:
- db-host: Denotes your PostgreSQL instance Domain Name System (DNS) or IP address
- db-port: Denotes your PostgreSQL instance port number, such as 5432
- username: Denotes the user name to connect to your PostgreSQL instance
- password: Denotes the password to connect to your PostgreSQL instance
- You have installed the Red Hat Developer Hub Operator.
- Optional: You have a CA certificate, Transport Layer Security (TLS) private key, and TLS certificate so that you can secure your database connection by using the TLS protocol. For more information, refer to your PostgreSQL vendor documentation.
By default, Developer Hub uses a database for each plugin and automatically creates it if none is found. You might need the Create Database privilege in addition to PSQL Database privileges for configuring an external PostgreSQL instance.
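A sketch of granting that privilege with psql, assuming a PostgreSQL superuser session and the <username> placeholder from the prerequisites:

psql -h <db-host> -p <db-port> -U postgres -c 'ALTER ROLE "<username>" CREATEDB;'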
Procedure
Optional: Create a certificate secret to configure your PostgreSQL instance with a TLS connection:
cat <<EOF | oc -n my-rhdh-project create -f -
apiVersion: v1
kind: Secret
metadata:
  name: my-rhdh-database-certificates-secrets 1
type: Opaque
stringData:
  postgres-ca.pem: |-
    -----BEGIN CERTIFICATE-----
    <ca-certificate-key> 2
  postgres-key.key: |-
    -----BEGIN CERTIFICATE-----
    <tls-private-key> 3
  postgres-crt.pem: |-
    -----BEGIN CERTIFICATE-----
    <tls-certificate-key> 4
# ...
EOF
Create a credential secret to connect with the PostgreSQL instance:
cat <<EOF | oc -n my-rhdh-project create -f -
apiVersion: v1
kind: Secret
metadata:
  name: my-rhdh-database-secrets 1
type: Opaque
stringData: 2
  POSTGRES_PASSWORD: <password>
  POSTGRES_PORT: "<db-port>"
  POSTGRES_USER: <username>
  POSTGRES_HOST: <db-host>
  PGSSLMODE: <ssl-mode> # for TLS connection 3
  NODE_EXTRA_CA_CERTS: <abs-path-to-pem-file> # for TLS connection, e.g. /opt/app-root/src/postgres-crt.pem 4
EOF
- 1
- Provide the name of the credential secret.
- 2
- Provide credential data to connect with your PostgreSQL instance.
- 3
- Optional: Provide the value based on the required Secure Sockets Layer (SSL) mode.
- 4
- Optional: Provide the value only if you need a TLS connection for your PostgreSQL instance.
Create your Backstage custom resource (CR):

cat <<EOF | oc -n my-rhdh-project create -f -
apiVersion: rhdh.redhat.com/v1alpha3
kind: Backstage
metadata:
  name: <backstage-instance-name>
spec:
  database:
    enableLocalDb: false 1
  application:
    extraFiles:
      mountPath: <path> # e.g. /opt/app-root/src
      secrets:
        - name: my-rhdh-database-certificates-secrets 2
          key: postgres-crt.pem, postgres-ca.pem, postgres-key.key # key name as in my-rhdh-database-certificates-secrets Secret
    extraEnvs:
      secrets:
        - name: my-rhdh-database-secrets 3
# ...
EOF

Note: The environment variables listed in the Backstage CR work with the Operator default configuration. If you have changed the Operator default configuration, you must reconfigure the Backstage CR accordingly.

- Apply the Backstage CR to the namespace where you have deployed the Developer Hub instance.
3.2. Configuring an external PostgreSQL instance using the Helm Chart
You can configure an external PostgreSQL instance by using the Helm Chart. By default, the Helm Chart creates and manages a local instance of PostgreSQL in the same namespace where you have deployed the RHDH instance. However, you can change this default setting to configure an external PostgreSQL database server, for example, Amazon Web Services (AWS) Relational Database Service (RDS) or Azure database.
Prerequisites
- You are using a supported version of PostgreSQL. For more information, see the Product life cycle page.
You have the following details:
- db-host: Denotes your PostgreSQL instance Domain Name System (DNS) or IP address
- db-port: Denotes your PostgreSQL instance port number, such as 5432
- username: Denotes the user name to connect to your PostgreSQL instance
- password: Denotes the password to connect to your PostgreSQL instance
- You have installed the RHDH application by using the Helm Chart.
- Optional: You have a CA certificate, Transport Layer Security (TLS) private key, and TLS certificate so that you can secure your database connection by using the TLS protocol. For more information, refer to your PostgreSQL vendor documentation.
By default, Developer Hub uses a database for each plugin and automatically creates it if none is found. You might need the Create Database privilege in addition to PSQL Database privileges for configuring an external PostgreSQL instance.
Procedure
Optional: Create a certificate secret to configure your PostgreSQL instance with a TLS connection:
cat <<EOF | oc -n <your-namespace> create -f -
apiVersion: v1
kind: Secret
metadata:
  name: my-rhdh-database-certificates-secrets 1
type: Opaque
stringData:
  postgres-ca.pem: |-
    -----BEGIN CERTIFICATE-----
    <ca-certificate-key> 2
  postgres-key.key: |-
    -----BEGIN CERTIFICATE-----
    <tls-private-key> 3
  postgres-crt.pem: |-
    -----BEGIN CERTIFICATE-----
    <tls-certificate-key> 4
# ...
EOF
Create a credential secret to connect with the PostgreSQL instance:
cat <<EOF | oc -n <your-namespace> create -f -
apiVersion: v1
kind: Secret
metadata:
  name: my-rhdh-database-secrets 1
type: Opaque
stringData: 2
  POSTGRES_PASSWORD: <password>
  POSTGRES_PORT: "<db-port>"
  POSTGRES_USER: <username>
  POSTGRES_HOST: <db-host>
  PGSSLMODE: <ssl-mode> # for TLS connection 3
  NODE_EXTRA_CA_CERTS: <abs-path-to-pem-file> # for TLS connection, e.g. /opt/app-root/src/postgres-crt.pem 4
EOF
- 1
- Provide the name of the credential secret.
- 2
- Provide credential data to connect with your PostgreSQL instance.
- 3
- Optional: Provide the value based on the required Secure Sockets Layer (SSL) mode.
- 4
- Optional: Provide the value only if you need a TLS connection for your PostgreSQL instance.
Configure your PostgreSQL instance in the Helm configuration file named values.yaml:

# ...
upstream:
  postgresql:
    enabled: false # disable PostgreSQL instance creation 1
    auth:
      existingSecret: my-rhdh-database-secrets # inject credentials secret to Backstage 2
  backstage:
    appConfig:
      backend:
        database:
          connection: # configure Backstage DB connection parameters
            host: ${POSTGRES_HOST}
            port: ${POSTGRES_PORT}
            user: ${POSTGRES_USER}
            password: ${POSTGRES_PASSWORD}
            ssl:
              rejectUnauthorized: true
              ca:
                $file: /opt/app-root/src/postgres-ca.pem
              key:
                $file: /opt/app-root/src/postgres-key.key
              cert:
                $file: /opt/app-root/src/postgres-crt.pem
    extraEnvVarsSecrets:
      - my-rhdh-database-secrets # inject credentials secret to Backstage 3
    extraEnvVars:
      - name: BACKEND_SECRET
        valueFrom:
          secretKeyRef:
            key: backend-secret
            name: '{{ include "janus-idp.backend-secret-name" $ }}'
    extraVolumeMounts:
      - mountPath: /opt/app-root/src/dynamic-plugins-root
        name: dynamic-plugins-root
      - mountPath: /opt/app-root/src/postgres-crt.pem
        name: postgres-crt # inject TLS certificate to Backstage container 4
        subPath: postgres-crt.pem
      - mountPath: /opt/app-root/src/postgres-ca.pem
        name: postgres-ca # inject CA certificate to Backstage container 5
        subPath: postgres-ca.pem
      - mountPath: /opt/app-root/src/postgres-key.key
        name: postgres-key # inject TLS private key to Backstage container 6
        subPath: postgres-key.key
    extraVolumes:
      - ephemeral:
          volumeClaimTemplate:
            spec:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 1Gi
        name: dynamic-plugins-root
      - configMap:
          defaultMode: 420
          name: dynamic-plugins
          optional: true
        name: dynamic-plugins
      - name: dynamic-plugins-npmrc
        secret:
          defaultMode: 420
          optional: true
          secretName: '{{ printf "%s-dynamic-plugins-npmrc" .Release.Name }}'
      - name: postgres-crt
        secret:
          secretName: my-rhdh-database-certificates-secrets 7
# ...

- 1
- Set the value of the upstream.postgresql.enabled parameter to false to disable creating local PostgreSQL instances.
- 2
- Provide the name of the credential secret.
- 3
- Provide the name of the credential secret.
- 4
- Optional: Provide the name of the TLS certificate only for a TLS connection.
- 5
- Optional: Provide the name of the CA certificate only for a TLS connection.
- 6
- Optional: Provide the name of the TLS private key only if your TLS connection requires a private key.
- 7
- Provide the name of the certificate secret if you have configured a TLS connection.
Apply the configuration changes in your Helm configuration file named values.yaml:

helm upgrade -n <your-namespace> <your-deploy-name> openshift-helm-charts/redhat-developer-hub -f values.yaml --version 1.8.0
3.3. Migrating local databases to an external database server using the Operator
By default, Red Hat Developer Hub hosts the data for each plugin in a PostgreSQL database. When you fetch the list of databases, you might see multiple databases based on the number of plugins configured in Developer Hub. You can migrate the data from an RHDH instance hosted on a local PostgreSQL server to an external PostgreSQL service, such as AWS RDS, Azure database, or Crunchy database. To migrate the data from each RHDH instance, you can use PostgreSQL utilities, such as pg_dump with psql or pgAdmin.
The following procedure uses a database copy script to do a quick migration.
Prerequisites
Procedure
Configure port forwarding for the local PostgreSQL database pod by running the following command on a terminal:
oc port-forward -n <your-namespace> <pgsql-pod-name> <forward-to-port>:<forward-from-port>
Where:
- The <pgsql-pod-name> variable denotes the name of a PostgreSQL pod with the format backstage-psql-<deployment-name>-<_index>.
- The <forward-to-port> variable denotes the port of your choice to forward PostgreSQL data to.
- The <forward-from-port> variable denotes the local PostgreSQL instance port, such as 5432.

Example: Configuring port forwarding

oc port-forward -n developer-hub backstage-psql-developer-hub-0 15432:5432
Make a copy of the following db_copy.sh script and edit the details based on your configuration:

#!/bin/bash
to_host=<db-service-host> 1
to_port=5432 2
to_user=postgres 3
from_host=127.0.0.1 4
from_port=15432 5
from_user=postgres 6
allDB=("backstage_plugin_app" "backstage_plugin_auth" "backstage_plugin_catalog" "backstage_plugin_permission" "backstage_plugin_scaffolder" "backstage_plugin_search") 7
for db in ${!allDB[@]}; do
  db=${allDB[$db]}
  echo Copying database: $db
  PGPASSWORD=$TO_PSW psql -h $to_host -p $to_port -U $to_user -c "create database $db;"
  pg_dump -h $from_host -p $from_port -U $from_user -d $db | PGPASSWORD=$TO_PSW psql -h $to_host -p $to_port -U $to_user -d $db
done
- 1
- The destination host name, for example, <db-instance-name>.rds.amazonaws.com.
- 2
- The destination port, such as 5432.
- 3
- The destination server username, for example, postgres.
- 4
- The source host name, such as 127.0.0.1.
- 5
- The source port number, such as the <forward-to-port> variable.
- 6
- The source server username, for example, postgres.
- 7
- The name of the databases to import, in double quotes and separated by spaces, for example, ("backstage_plugin_app" "backstage_plugin_auth" "backstage_plugin_catalog" "backstage_plugin_permission" "backstage_plugin_scaffolder" "backstage_plugin_search").
Create a destination database for copying the data:
/bin/bash TO_PSW=<destination-db-password> /path/to/db_copy.sh 1

- 1
- The <destination-db-password> variable denotes the password to connect to the destination database.
Note: You can stop port forwarding when the copying of the data is complete. For more information about handling large databases and using compression tools, see the Handling Large Databases section on the PostgreSQL website.
- Reconfigure your Backstage custom resource (CR). For more information, see Configuring an external PostgreSQL instance using the Operator.
- Check that the following code is present at the end of your Backstage CR after reconfiguration:

# ...
spec:
  database:
    enableLocalDb: false
  application:
    # ...
    extraFiles:
      secrets:
        - name: my-rhdh-database-certificates-secrets
          key: postgres-crt.pem # key name as in my-rhdh-database-certificates-secrets Secret
    extraEnvs:
      secrets:
        - name: my-rhdh-database-secrets
# ...

Note: Reconfiguring the Backstage CR deletes the corresponding StatefulSet and Pod objects, but does not delete the PersistentVolumeClaim object. Use the following command to delete the local PersistentVolumeClaim object:

oc -n developer-hub delete pvc <local-psql-pvc-name>

where the <local-psql-pvc-name> variable is in the data-<psql-pod-name> format.

- Apply the configuration changes.
Verification
Verify that your RHDH instance is running with the migrated data and does not contain the local PostgreSQL database by running the following command:
oc get pods -n <your-namespace>
Check the output for the following details:
- The backstage-developer-hub-xxx pod is in running state.
- The backstage-psql-developer-hub-0 pod is not available.

You can also verify these details using the Topology view in the OpenShift Container Platform web console.
4. Configuring Red Hat Developer Hub deployment when using the Operator
The Red Hat Developer Hub Operator exposes a rhdh.redhat.com/v1alpha3 API Version of its custom resource (CR). This CR exposes a generic spec.deployment.patch field, which gives you full control over the Developer Hub Deployment resource. This field can be a fragment of the standard apps.Deployment Kubernetes object.
Procedure
- Create a Backstage CR with the following fields:

Example

apiVersion: rhdh.redhat.com/v1alpha3
kind: Backstage
metadata:
  name: developer-hub
spec:
  deployment:
    patch:
      spec:
        template:

labels
- Add labels to the Developer Hub pod.

Example adding the label my=true

apiVersion: rhdh.redhat.com/v1alpha3
kind: Backstage
metadata:
  name: developer-hub
spec:
  deployment:
    patch:
      spec:
        template:
          metadata:
            labels:
              my: true

volumes
- Add an additional volume named my-volume and mount it under /my/path in the Developer Hub application container.
Example additional volume
apiVersion: rhdh.redhat.com/v1alpha3
kind: Backstage
metadata:
  name: developer-hub
spec:
  deployment:
    patch:
      spec:
        template:
          spec:
            containers:
              - name: backstage-backend
                volumeMounts:
                  - mountPath: /my/path
                    name: my-volume
            volumes:
              - ephemeral:
                  volumeClaimTemplate:
                    spec:
                      storageClassName: "special"
                name: my-volume
Replace the default dynamic-plugins-root volume with a persistent volume claim (PVC) named dynamic-plugins-root. Note the $patch: replace directive, otherwise a new volume will be added.
Example dynamic-plugins-root volume replacement
apiVersion: rhdh.redhat.com/v1alpha3
kind: Backstage
metadata:
  name: developer-hub
spec:
  deployment:
    patch:
      spec:
        template:
          spec:
            volumes:
              - $patch: replace
                name: dynamic-plugins-root
                persistentVolumeClaim:
                  claimName: dynamic-plugins-root
cpu request
- Set the CPU request for the Developer Hub application container to 250m.

Example CPU request

apiVersion: rhdh.redhat.com/v1alpha3
kind: Backstage
metadata:
  name: developer-hub
spec:
  deployment:
    patch:
      spec:
        template:
          spec:
            containers:
              - name: backstage-backend
                resources:
                  requests:
                    cpu: 250m

my-sidecar container
- Add a new my-sidecar sidecar container into the Developer Hub Pod.

Example sidecar container

apiVersion: rhdh.redhat.com/v1alpha3
kind: Backstage
metadata:
  name: developer-hub
spec:
  deployment:
    patch:
      spec:
        template:
          spec:
            containers:
              - name: my-sidecar
                image: quay.io/my-org/my-sidecar:latest
5. Configuring high availability in Red Hat Developer Hub
High availability (HA) is a system design approach that ensures a service remains continuously accessible, even during failures of individual components, by eliminating single points of failure. It introduces redundancy and failover mechanisms to minimize downtime and maintain operational continuity.
Red Hat Developer Hub supports HA deployments on the following platforms:
- Red Hat OpenShift Container Platform
- Azure Kubernetes Service
- Elastic Kubernetes Service
- Google Kubernetes Engine
The HA deployments enable more resilient and reliable service availability across supported environments.
In a single-instance deployment, a failure, whether due to a software crash, hardware issue, or other unexpected disruption, makes the entire service unavailable, interrupting development workflows and access to key resources.
With HA enabled, you can scale the number of backend replicas to introduce redundancy. This setup ensures that if one pod or component fails, others continue to serve requests without disruption. The built-in load balancer manages ingress traffic and distributes the load across the available pods. Meanwhile, the RHDH backend manages concurrent requests and resolves resource-level conflicts effectively.
As an administrator, you can configure high availability by adjusting replica values in your configuration file:
- If you installed using the Operator, configure the replica values in your Backstage custom resource.
- If you used the Helm chart, set the replica values in the Helm configuration.
5.1. Configuring high availability in a Red Hat Developer Hub Operator deployment
RHDH instances that are deployed with the Operator use configurations in the Backstage custom resource (CR). In the Backstage CR, the default value for the replicas field is 1. If you want to configure your RHDH instance for high availability, you must set replicas to a value greater than 1.
Procedure
In your Backstage custom resource (CR), set replicas to a value greater than 1.

For example, to configure two replicas (one backup instance):

apiVersion: rhdh.redhat.com/v1alpha3
kind: Backstage
metadata:
  name: <your_yaml_file>
spec:
  deployment:
    patch:
      spec:
        replicas: 2
5.2. Configuring high availability in a Red Hat Developer Hub Helm chart deployment
When you are deploying Developer Hub using the Helm chart, you must set replicas to a value greater than 1 in your Helm chart. The default value for replicas is 1.
Procedure
In your Helm chart configuration file, set replicas to a value greater than 1.

For example, to configure two replicas (one backup instance):

upstream:
  backstage:
    replicas: 2
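To roll out the change, apply your updated Helm configuration, reusing the upgrade command shown earlier in this guide with your own namespace and release name:

helm upgrade -n <your-namespace> <your-deploy-name> openshift-helm-charts/redhat-developer-hub -f values.yaml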
6. Running Red Hat Developer Hub behind a corporate proxy
In a network restricted environment, configure Red Hat Developer Hub to use your proxy to access remote network resources.
You can run the Developer Hub application behind a corporate proxy by setting any of the following environment variables before starting the application:
HTTP_PROXY
- Denotes the proxy to use for HTTP requests.

HTTPS_PROXY
- Denotes the proxy to use for HTTPS requests.

NO_PROXY
- Set the environment variable to bypass the proxy for certain domains. The variable value is a comma-separated list of hostnames or IP addresses that can be accessed without the proxy, even if one is specified.
6.1. The NO_PROXY exclusion rules
NO_PROXY is a comma- or space-separated list of hostnames or IP addresses, with optional port numbers. If the input URL matches any of the entries listed in NO_PROXY, a direct request fetches that URL, bypassing the proxy settings.
The default value for NO_PROXY in RHDH is localhost,127.0.0.1. If you want to override it, include at least localhost or localhost:7007 in the list. Otherwise, the RHDH backend might fail.
Matching follows the rules below:
- NO_PROXY=* bypasses the proxy for all requests.
- Spaces and commas can separate the entries in the NO_PROXY list. For example, NO_PROXY="localhost,example.com", NO_PROXY="localhost example.com", and NO_PROXY="localhost, example.com" have the same effect.
- If NO_PROXY contains no entries, configuring the HTTP(S)_PROXY settings makes the backend send all requests through the proxy.
- The backend does not perform a DNS lookup to determine whether a request should bypass the proxy. For example, if DNS resolves example.com to 1.2.3.4, setting NO_PROXY=1.2.3.4 has no effect on requests sent to example.com. Only requests sent to the IP address 1.2.3.4 bypass the proxy.
- If you add a port after the hostname or IP address, the request must match both the host/IP and port to bypass the proxy. For example, NO_PROXY=example.com:1234 bypasses the proxy for requests to http(s)://example.com:1234, but not for requests on other ports, like http(s)://example.com.
- If you do not specify a port after the hostname or IP address, all requests to that host/IP address bypass the proxy regardless of the port. For example, NO_PROXY=localhost bypasses the proxy for requests sent to URLs like http(s)://localhost:7077 and http(s)://localhost:8888.
- IP address blocks in CIDR notation do not work. Setting NO_PROXY=10.11.0.0/16 has no effect, even if the backend sends a request to an IP address in that block.
- Only IPv4 addresses are supported. IPv6 addresses like ::1 do not work.
- Generally, the proxy is only bypassed if the hostname is an exact match for an entry in the NO_PROXY list. The only exceptions are entries that start with a dot (.) or with a wildcard (*); in that case, the proxy is bypassed if the hostname ends with the entry.
List the domain and the wildcard domain if you want to exclude a given domain and all its subdomains. For example, you would set NO_PROXY=example.com,.example.com to bypass the proxy for requests sent to http(s)://example.com and http(s)://subdomain.example.com.
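For illustration, a sketch combining these rules with the hypothetical proxy addresses used elsewhere in this chapter; note the paired example.com,.example.com entries that cover the domain and all its subdomains:

HTTP_PROXY=http://10.10.10.105:3128
HTTPS_PROXY=http://10.10.10.106:3128
NO_PROXY=localhost,127.0.0.1,example.com,.example.com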
6.2. Configuring proxy information in Operator deployment
For Operator-based deployment, the approach you use for proxy configuration is based on your role:
- As a cluster administrator with access to the Operator namespace, you can configure the proxy variables in the Operator’s default ConfigMap file. This configuration applies the proxy settings to all the users of the Operator.
- As a developer, you can configure the proxy variables in a custom resource (CR) file. This configuration applies the proxy settings to the RHDH application created from that CR.
Prerequisites
- You have installed the Red Hat Developer Hub application.
Procedure
Perform one of the following steps based on your role:
As an administrator, set the proxy information in the Operator’s default ConfigMap file:
- Search for a ConfigMap file named backstage-default-config in the default namespace rhdh-operator and open it.
- Find the deployment.yaml key.
- Set the value of the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables in the Deployment spec as shown in the following example:

Example: Setting proxy variables in a ConfigMap file

# Other fields omitted
deployment.yaml: |-
  apiVersion: apps/v1
  kind: Deployment
  spec:
    template:
      spec:
        # Other fields omitted
        initContainers:
          - name: install-dynamic-plugins
            # command omitted
            env:
              - name: NPM_CONFIG_USERCONFIG
                value: /opt/app-root/src/.npmrc.dynamic-plugins
              - name: HTTP_PROXY
                value: 'http://10.10.10.105:3128'
              - name: HTTPS_PROXY
                value: 'http://10.10.10.106:3128'
              - name: NO_PROXY
                value: 'localhost,example.org'
        # Other fields omitted
        containers:
          - name: backstage-backend
            # Other fields omitted
            env:
              - name: APP_CONFIG_backend_listen_port
                value: "7007"
              - name: HTTP_PROXY
                value: 'http://10.10.10.105:3128'
              - name: HTTPS_PROXY
                value: 'http://10.10.10.106:3128'
              - name: NO_PROXY
                value: 'localhost,example.org'
As a developer, set the proxy information in your Backstage CR file as shown in the following example:

Example: Setting proxy variables in a CR file

spec:
  # Other fields omitted
  application:
    extraEnvs:
      envs:
        - name: HTTP_PROXY
          value: 'http://10.10.10.105:3128'
        - name: HTTPS_PROXY
          value: 'http://10.10.10.106:3128'
        - name: NO_PROXY
          value: 'localhost,example.org'
- Save the configuration changes.
6.3. Configuring proxy information in Helm deployment
For Helm-based deployment, either a developer or a cluster administrator with permissions to create resources in the cluster can configure the proxy variables in a values.yaml Helm configuration file.
Prerequisites
- You have installed the Red Hat Developer Hub application.
Procedure
Set the proxy information in your Helm configuration file:

upstream:
  backstage:
    extraEnvVars:
      - name: HTTP_PROXY
        value: '<http_proxy_url>'
      - name: HTTPS_PROXY
        value: '<https_proxy_url>'
      - name: NO_PROXY
        value: '<no_proxy_settings>'

Where:

<http_proxy_url>
- Denotes a variable that you must replace with the HTTP proxy URL.

<https_proxy_url>
- Denotes a variable that you must replace with the HTTPS proxy URL.

<no_proxy_settings>
- Denotes a variable that you must replace with comma-separated URLs that you want to exclude from proxying, for example, foo.com,baz.com.

Example: Setting proxy variables using Helm Chart

upstream:
  backstage:
    extraEnvVars:
      - name: HTTP_PROXY
        value: 'http://10.10.10.105:3128'
      - name: HTTPS_PROXY
        value: 'http://10.10.10.106:3128'
      - name: NO_PROXY
        value: 'localhost,example.org'
- Save the configuration changes.
7. Configuring an RHDH instance with a TLS connection in Kubernetes
You can configure an RHDH instance with a Transport Layer Security (TLS) connection in a Kubernetes cluster, such as an Azure Red Hat OpenShift (ARO) cluster, any cluster from a supported cloud provider, or your own cluster with proper configuration. TLS ensures a secure connection for the RHDH instance with other entities, such as third-party applications or external databases. However, you must use a public Certificate Authority (CA)-signed certificate to configure your Kubernetes cluster.
Prerequisites
- You have set up an Azure Red Hat OpenShift (ARO) cluster with a public CA-signed certificate. For more information about obtaining CA certificates, refer to your vendor documentation.
You have created a namespace and set up a service account with proper read permissions on resources.

Example: Kubernetes manifest for role-based access control

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backstage-read-only
rules:
  - apiGroups:
      - '*'
    resources:
      - pods
      - configmaps
      - services
      - deployments
      - replicasets
      - horizontalpodautoscalers
      - ingresses
      - statefulsets
      - limitranges
      - resourcequotas
      - daemonsets
    verbs:
      - get
      - list
      - watch
#...

- You have obtained the secret and the service CA certificate associated with your service account.
You have created some resources and added annotations to them so they can be discovered by the Kubernetes plugin, as shown in the sketch after this list. You can apply these Kubernetes annotations:

- backstage.io/kubernetes-id to label components
- backstage.io/kubernetes-namespace to label namespaces
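For illustration, a minimal sketch of a catalog entity carrying the backstage.io/kubernetes-id annotation; the my-service component and its values are hypothetical:

apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: my-service
  annotations:
    backstage.io/kubernetes-id: my-service
spec:
  type: service
  lifecycle: production
  owner: guests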
Procedure
Enable the Kubernetes plugins in the dynamic-plugins-rhdh.yaml file:

kind: ConfigMap
apiVersion: v1
metadata:
  name: dynamic-plugins-rhdh
data:
  dynamic-plugins.yaml: |
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - package: ./dynamic-plugins/dist/backstage-plugin-kubernetes-backend-dynamic
        disabled: false 1
      - package: ./dynamic-plugins/dist/backstage-plugin-kubernetes
        disabled: false 2
# ...

Note: The backstage-plugin-kubernetes plugin is currently in Technology Preview. As an alternative, you can use the ./dynamic-plugins/dist/backstage-plugin-topology-dynamic plugin, which is Generally Available (GA).

Set the Kubernetes cluster details and configure the catalog sync options in the app-config.yaml configuration file:

kind: ConfigMap
apiVersion: v1
metadata:
  name: my-rhdh-app-config
data:
  "app-config.yaml": |
    # ...
    catalog:
      rules:
        - allow: [Component, System, API, Resource, Location]
      providers:
        kubernetes:
          openshift:
            cluster: openshift
            processor:
              namespaceOverride: default
              defaultOwner: guests
            schedule:
              frequency:
                seconds: 30
              timeout:
                seconds: 5
    kubernetes:
      serviceLocatorMethod:
        type: 'multiTenant'
      clusterLocatorMethods:
        - type: 'config'
          clusters:
            - url: <target-cluster-api-server-url> 1
              name: openshift
              authProvider: 'serviceAccount'
              skipTLSVerify: false 2
              skipMetricsLookup: true
              dashboardUrl: <target-cluster-console-url> 3
              dashboardApp: openshift
              serviceAccountToken: ${K8S_SERVICE_ACCOUNT_TOKEN} 4
              caData: ${K8S_CONFIG_CA_DATA} 5
    # ...
- 1
- The base URL to the Kubernetes control plane. You can run the kubectl cluster-info command to get the base URL.
- 2
- Set the value of this parameter to false to enable the verification of the TLS certificate.
- 3
- Optional: The link to the Kubernetes dashboard managing the ARO cluster.
- 4
- Optional: Pass the service account token using a K8S_SERVICE_ACCOUNT_TOKEN environment variable that you define in your <my_product_secrets> secret.
- 5
- Pass the CA data using a K8S_CONFIG_CA_DATA environment variable that you define in your <my_product_secrets> secret.
- Save the configuration changes.
Verification
Run the RHDH application to import your catalog:
kubectl -n rhdh-operator get pods -w
- Verify that the pod log shows no errors for your configuration.
- Go to Catalog and check the component page in the Developer Hub instance to verify the cluster connection and the presence of your created resources.
If you encounter connection errors, such as certificate issues or permissions, check the message box in the component page or view the logs of the pod.
8. Using the dynamic plugins cache
8.1. Using the dynamic plugins cache
The dynamic plugins cache in Red Hat Developer Hub (RHDH) enhances the installation process and reduces platform boot time by storing previously installed plugins. If the configuration remains unchanged, this feature prevents the need to re-download plugins on subsequent boots.
When you enable the dynamic plugins cache:

- The system calculates a checksum of each plugin’s YAML configuration (excluding pluginConfig).
- The checksum is stored in a file named dynamic-plugin-config.hash within the plugin’s directory.
- During boot, if a plugin’s package reference matches the previous installation and the checksum is unchanged, the download is skipped.
- Plugins that have been disabled since the previous boot are automatically removed.
To enable the dynamic plugins cache in RHDH, the plugins directory dynamic-plugins-root must be a persistent volume.
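For illustration, a sketch of the resulting layout inside the persistent dynamic-plugins-root volume; the plugin directory name is hypothetical:

dynamic-plugins-root/
└── backstage-plugin-kubernetes-backend-dynamic/
    ├── dynamic-plugin-config.hash   (checksum of the plugin's YAML configuration)
    └── ...                          (extracted plugin files)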
8.2. Creating a PVC for the dynamic plugin cache by using the Operator
For operator-based installations, you must manually create the persistent volume claim (PVC) by replacing the default dynamic-plugins-root volume with a PVC named dynamic-plugins-root.
Prerequisites
- You have installed Red Hat Developer Hub on OpenShift Container Platform using the Red Hat Developer Hub Operator.
- You have installed the OpenShift CLI (oc).
Procedure
Create the persistent volume definition and save it to a file, such as pvc.yaml. For example:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dynamic-plugins-root
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

Note: This example uses ReadWriteOnce as the access mode, which prevents multiple replicas from sharing the PVC across different nodes. To run multiple replicas on different nodes, depending on your storage driver, you must use an access mode such as ReadWriteMany.

To apply this PVC to your cluster, run the following command:
oc apply -f pvc.yaml
Replace the default dynamic-plugins-root volume with a PVC named dynamic-plugins-root. For example:

apiVersion: rhdh.redhat.com/v1alpha3
kind: Backstage
metadata:
  name: developer-hub
spec:
  deployment:
    patch:
      spec:
        template:
          spec:
            volumes:
              - $patch: replace
                name: dynamic-plugins-root
                persistentVolumeClaim:
                  claimName: dynamic-plugins-root

Note: To avoid adding a new volume, you must use the $patch: replace directive.
8.3. Creating a PVC for the dynamic plugin cache using the Helm Chart
For Helm chart installations, if you require the dynamic plugin cache to persist across pod restarts, you must create a persistent volume claim (PVC) and configure the Helm chart to use it.
Prerequisites
- You have installed Red Hat Developer Hub using the Helm chart.
- You have installed the OpenShift CLI (oc).
Procedure
Create the persistent volume definition. For example:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dynamic-plugins-root
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi

Note: This example uses ReadWriteOnce as the access mode, which prevents multiple replicas from sharing the PVC across different nodes. To run multiple replicas on different nodes, depending on your storage driver, you must use an access mode such as ReadWriteMany.

To apply this PVC to your cluster, run the following command:
oc apply -f pvc.yaml
Configure the Helm chart to use the PVC. For example:

upstream:
  backstage:
    extraVolumes:
      - name: dynamic-plugins-root
        persistentVolumeClaim:
          claimName: dynamic-plugins-root
      - name: dynamic-plugins
        configMap:
          defaultMode: 420
          name: '{{ printf "%s-dynamic-plugins" .Release.Name }}'
          optional: true
      - name: dynamic-plugins-npmrc
        secret:
          defaultMode: 420
          optional: true
          secretName: '{{ printf "%s-dynamic-plugins-npmrc" .Release.Name }}'
      - name: dynamic-plugins-registry-auth
        secret:
          defaultMode: 416
          optional: true
          secretName: '{{ printf "%s-dynamic-plugins-registry-auth" .Release.Name }}'
      - name: npmcacache
        emptyDir: {}
      - name: temp
        emptyDir: {}

Note: When you configure the Helm chart to use the PVC, you must also include the extraVolumes section defined in the default Helm chart values.
8.4. Configuring the dynamic plugins cache
Procedure
To configure the dynamic plugins cache, set the following optional dynamic plugin cache parameters in your dynamic-plugins.yaml file:

pullPolicy: IfNotPresent (default)
- Download the artifact if it is not already present in the dynamic-plugins-root folder, without checking image digests.

pullPolicy: Always
- Compare the image digest in the remote registry and download the artifact if it has changed, even if Developer Hub has already downloaded the plugin before. When applied to the Node Package Manager (NPM) downloading method, download the remote artifact without a digest check.

Example dynamic-plugins.yaml file configuration to download the remote artifact without a digest check:

plugins:
  - disabled: false
    pullPolicy: Always
    package: 'oci://quay.io/example-org/example-plugin:v1.0.0!internal-backstage-plugin-example'

forceDownload: false (default)
- Older option to download the artifact if it is not already present in the dynamic-plugins-root folder, without checking image digests.

forceDownload: true
- Older option to force a reinstall of the plugin, bypassing the cache.

Note: The pullPolicy option takes precedence over the forceDownload option. The forceDownload option might become deprecated in a future Developer Hub release.
9. Enabling the Red Hat Developer Hub plugin assets cache
By default, Red Hat Developer Hub does not cache plugin assets. You can use a Redis cache store to improve Developer Hub performance and reliability. Configured plugins in Developer Hub receive dedicated cache connections, which are powered by the Keyv Redis client.
Prerequisites
- You have installed Red Hat Developer Hub.
- You have an active Redis server. For more information on setting up an external Redis server, see the official Redis documentation.
Procedure
Enable the Developer Hub cache by defining Redis as the cache store type and entering your Redis server connection URL in your app-config.yaml file.

app-config.yaml file fragment

backend:
  cache:
    store: redis
    connection: redis://user:pass@cache.example.com:6379

Enable the cache for TechDocs by adding the techdocs.cache.ttl setting in your app-config.yaml file. This setting specifies how long, in milliseconds, a statically built asset should stay in the cache.

app-config.yaml file fragment

techdocs:
  cache:
    ttl: 3600000
Optionally, enable the cache for other installed plugins that support this feature. See their respective documentation for details.