Administration guide for Red Hat Developer Hub
Abstract
- Preface
- Red Hat Developer Hub support
- 1. Adding a custom application configuration file to Red Hat OpenShift Container Platform
- 2. Configuring external PostgreSQL databases
- 3. Configuring an RHDH instance with a TLS connection in Kubernetes
- 4. Running the RHDH application behind a corporate proxy
- 5. Red Hat Developer Hub integration with Amazon Web Services (AWS)
- 6. Managing templates
- 7. Configuring the TechDocs plugin in Red Hat Developer Hub
- 8. Configuring Red Hat Developer Hub deployment
Preface
Red Hat Developer Hub is an enterprise-grade, open developer platform that you can use to build developer portals. This platform contains a supported and opinionated framework that helps reduce the friction and frustration of developers while boosting their productivity.
Red Hat Developer Hub support
If you experience difficulty with a procedure described in this documentation, visit the Red Hat Customer Portal. You can use the Red Hat Customer Portal for the following purposes:
- To search or browse through the Red Hat Knowledgebase of technical support articles about Red Hat products.
- To create a support case for Red Hat Global Support Services (GSS). For support case creation, select Red Hat Developer Hub as the product and select the appropriate product version. For detailed information about supported platforms, see Supported Platforms and the Red Hat Developer Hub Life Cycle.
Chapter 1. Adding a custom application configuration file to Red Hat OpenShift Container Platform
To access the Red Hat Developer Hub, you must add a custom application configuration file to Red Hat OpenShift Container Platform. In OpenShift Container Platform, you can use the following content as a base template to create a ConfigMap named app-config-rhdh:
kind: ConfigMap
apiVersion: v1
metadata:
  name: app-config-rhdh
data:
  app-config-rhdh.yaml: |
    app:
      title: Red Hat Developer Hub
You can add the custom application configuration file to OpenShift Container Platform in one of the following ways:
- The Red Hat Developer Hub Operator
- The Red Hat Developer Hub Helm chart
1.1. Adding a custom application configuration file to OpenShift Container Platform using the Helm chart
You can use the Red Hat Developer Hub Helm chart to add a custom application configuration file to your OpenShift Container Platform instance.
Prerequisites
- You have a Red Hat OpenShift Container Platform account.
Procedure
- From the OpenShift Container Platform web console, select the ConfigMaps tab.
- Click Create ConfigMap.
- From the Create ConfigMap page, select the YAML view option in Configure via and make changes to the file, if needed.
- Click Create.
- Go to the Helm tab to see the list of Helm releases.
- Click the overflow menu on the Helm release that you want to use and select Upgrade.
Use either the Form view or YAML view to edit the Helm configuration.
Using Form view
- Expand Root Schema → Backstage chart schema → Backstage parameters → Extra app configuration files to inline into command arguments.
- Click the Add Extra app configuration files to inline into command arguments link.
Enter the value in the following fields:
- configMapRef: app-config-rhdh
- filename: app-config-rhdh.yaml
- Click Upgrade.
Using YAML view
Set the value of the upstream.backstage.extraAppConfig.configMapRef and upstream.backstage.extraAppConfig.filename parameters as follows:
# ... other Red Hat Developer Hub Helm Chart configurations
upstream:
  backstage:
    extraAppConfig:
      - configMapRef: app-config-rhdh
        filename: app-config-rhdh.yaml
# ... other Red Hat Developer Hub Helm Chart configurations
- Click Upgrade.
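Alternatively, if you manage the release from a terminal rather than the web console, you can apply the same values file with the helm CLI. This is a minimal sketch; the namespace and release name are placeholders for your deployment:
# Upgrade the existing RHDH Helm release with the edited values file
helm upgrade -n <your-namespace> <your-deploy-name> openshift-helm-charts/redhat-developer-hub -f values.yaml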
1.2. Adding a custom application configuration file to OpenShift Container Platform using the Operator
A custom application configuration file is a ConfigMap object that you can use to change the configuration of your Red Hat Developer Hub instance. If you are deploying your Developer Hub instance on Red Hat OpenShift Container Platform, you can use the Red Hat Developer Hub Operator to add a custom application configuration file to your OpenShift Container Platform instance by creating the ConfigMap object and referencing it in the Developer Hub custom resource (CR).
The custom application configuration file contains a sensitive environment variable, named BACKEND_SECRET. This variable contains a mandatory backend authentication key that Developer Hub uses to reference an environment variable defined in an OpenShift Container Platform secret. You must create a secret, named secrets-rhdh, and reference it in the Developer Hub CR.
You are responsible for protecting your Red Hat Developer Hub installation from external and unauthorized access. Manage the backend authentication key like any other secret. Use a value that meets strong password requirements, do not expose it in any configuration files, and only inject it into configuration files as an environment variable.
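As an alternative to the web console steps in the procedure below, you can create the secrets-rhdh secret from a terminal. This is a minimal sketch using the oc CLI; the namespace is a placeholder, and the key-generation command is the same one used later in this procedure:
# Generate a random backend key and store it in the secrets-rhdh secret
# (assumes you are logged in with oc and targeting your RHDH namespace)
oc -n <your-namespace> create secret generic secrets-rhdh \
  --from-literal=BACKEND_SECRET="$(node -p 'require("crypto").randomBytes(24).toString("base64")')"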
Prerequisites
- You have an active Red Hat OpenShift Container Platform account.
- Your administrator has installed the Red Hat Developer Hub Operator in OpenShift Container Platform. For more information, see Installing the Red Hat Developer Hub Operator.
- You have created the Red Hat Developer Hub CR in OpenShift Container Platform.
Procedure
- From the Developer perspective in the OpenShift Container Platform web console, select the Topology view, and click the Open URL icon on the Developer Hub pod to identify your Developer Hub external URL: <RHDH_URL>.
- From the Developer perspective in the OpenShift Container Platform web console, select the ConfigMaps view.
- Click Create ConfigMap.
Select the YAML view option in Configure via and use the following example as a base template to create a ConfigMap object, such as app-config-rhdh.yaml:
kind: ConfigMap
apiVersion: v1
metadata:
  name: app-config-rhdh
data:
  "app-config-rhdh.yaml": |
    app:
      title: Red Hat Developer Hub
      baseUrl: <RHDH_URL> 1
    backend:
      auth:
        externalAccess:
          - type: legacy
            options:
              subject: legacy-default-config
              secret: "${BACKEND_SECRET}" 2
      baseUrl: <RHDH_URL> 3
      cors:
        origin: <RHDH_URL> 4
- 1
- Set the external URL of your Red Hat Developer Hub instance.
- 2
- Use an environment variable exposing an OpenShift Container Platform secret to define the mandatory Developer Hub backend authentication key.
- 3
- Set the external URL of your Red Hat Developer Hub instance.
- 4
- Set the external URL of your Red Hat Developer Hub instance.
- Click Create.
- Select the Secrets view.
- Click Create Key/value Secret.
- Create a secret named secrets-rhdh. Add a key named BACKEND_SECRET and a base64 encoded string as a value. Use a unique value for each Red Hat Developer Hub instance. For example, you can use the following command to generate a key from your terminal:
node -p 'require("crypto").randomBytes(24).toString("base64")'
- Click Create.
- Select the Topology view.
Click the overflow menu for the Red Hat Developer Hub instance that you want to use and select Edit Backstage to load the YAML view of the Red Hat Developer Hub instance.
In the CR, enter the name of the custom application configuration config map as the value for the spec.application.appConfig.configMaps field, and enter the name of your secret as the value for the spec.application.extraEnvs.secrets field. For example:
apiVersion: rhdh.redhat.com/v1alpha1
kind: Backstage
metadata:
  name: developer-hub
spec:
  application:
    appConfig:
      mountPath: /opt/app-root/src
      configMaps:
        - name: app-config-rhdh
    extraEnvs:
      secrets:
        - name: secrets-rhdh
    extraFiles:
      mountPath: /opt/app-root/src
    replicas: 1
    route:
      enabled: true
  database:
    enableLocalDb: true
- Click Save.
- Navigate back to the Topology view and wait for the Red Hat Developer Hub pod to start.
- Click the Open URL icon to use the Red Hat Developer Hub platform with the configuration changes.
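If you prefer to check the rollout from a terminal, a minimal sketch of verifying the pod status with the oc CLI (the namespace is a placeholder):
# Confirm the Developer Hub pod restarted and is running after the configuration change
oc -n <your-namespace> get pods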
Additional resources
- For more information about roles and responsibilities in Developer Hub, see Role-Based Access Control (RBAC) in Red Hat Developer Hub.
Chapter 2. Configuring external PostgreSQL databases
As an administrator, you can configure and use external PostgreSQL databases in Red Hat Developer Hub. You can use a PostgreSQL certificate file to configure an external PostgreSQL instance using the Operator or Helm Chart.
Developer Hub supports the configuration of external PostgreSQL databases. You can perform maintenance activities, such as backing up your data or configuring high availability (HA) for the external PostgreSQL databases.
By default, the Red Hat Developer Hub Operator or Helm Chart creates a local PostgreSQL database. However, this configuration is not suitable for production environments. For production deployments, disable the creation of the local database and configure Developer Hub to connect to an external PostgreSQL instance instead.
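For reference, the two toggles that disable the local database are described in the sections that follow; in the Operator CR it is spec.database.enableLocalDb, and in the Helm chart it is upstream.postgresql.enabled:
# Operator (Backstage CR)
spec:
  database:
    enableLocalDb: false

# Helm chart (values.yaml)
upstream:
  postgresql:
    enabled: false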
2.1. Configuring an external PostgreSQL instance using the Operator
You can configure an external PostgreSQL instance using the Red Hat Developer Hub Operator. By default, the Operator creates and manages a local instance of PostgreSQL in the same namespace where you have deployed the RHDH instance. However, you can change this default setting to configure an external PostgreSQL database server, for example, Amazon Web Services (AWS) Relational Database Service (RDS) or Azure database.
Prerequisites
- You are using a supported version of PostgreSQL. For more information, see the Product life cycle page.
You have the following details:
- db-host: Denotes your PostgreSQL instance Domain Name System (DNS) or IP address
- db-port: Denotes your PostgreSQL instance port number, such as 5432
- username: Denotes the user name to connect to your PostgreSQL instance
- password: Denotes the password to connect to your PostgreSQL instance
- You have installed the Red Hat Developer Hub Operator.
- Optional: You have a CA certificate, Transport Layer Security (TLS) private key, and TLS certificate so that you can secure your database connection by using the TLS protocol. For more information, refer to your PostgreSQL vendor documentation.
By default, Developer Hub uses a database for each plugin and automatically creates it if none is found. You might need the Create Database privilege in addition to PSQL Database privileges for configuring an external PostgreSQL instance.
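For example, a minimal sketch of granting the database-creation privilege to the Developer Hub user with psql (host and user name are placeholders; consult your PostgreSQL vendor documentation for the exact privileges required):
# Allow the Developer Hub database user to create per-plugin databases
psql -h <db-host> -p <db-port> -U postgres -c 'ALTER ROLE <username> WITH CREATEDB;'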
Procedure
Optional: Create a certificate secret to configure your PostgreSQL instance with a TLS connection:
cat <<EOF | oc -n <your-namespace> create -f -
apiVersion: v1
kind: Secret
metadata:
  name: <crt-secret> 1
type: Opaque
stringData:
  postgres-ca.pem: |-
    -----BEGIN CERTIFICATE-----
    <ca-certificate-key> 2
  postgres-key.key: |-
    -----BEGIN CERTIFICATE-----
    <tls-private-key> 3
  postgres-crt.pem: |-
    -----BEGIN CERTIFICATE-----
    <tls-certificate-key> 4
  # ...
EOF
Create a credential secret to connect with the PostgreSQL instance:
cat <<EOF | oc -n <your-namespace> create -f -
apiVersion: v1
kind: Secret
metadata:
  name: <cred-secret> 1
type: Opaque
stringData: 2
  POSTGRES_PASSWORD: <password>
  POSTGRES_PORT: "<db-port>"
  POSTGRES_USER: <username>
  POSTGRES_HOST: <db-host>
  PGSSLMODE: <ssl-mode> # for TLS connection 3
  NODE_EXTRA_CA_CERTS: <abs-path-to-pem-file> # for TLS connection, e.g. /opt/app-root/src/postgres-crt.pem 4
EOF
- 1
- Provide the name of the credential secret.
- 2
- Provide credential data to connect with your PostgreSQL instance.
- 3
- Optional: Provide the value based on the required Secure Sockets Layer (SSL) mode.
- 4
- Optional: Provide the value only if you need a TLS connection for your PostgreSQL instance.
Create a Backstage custom resource (CR):
cat <<EOF | oc -n <your-namespace> create -f -
apiVersion: rhdh.redhat.com/v1alpha1
kind: Backstage
metadata:
  name: <backstage-instance-name>
spec:
  database:
    enableLocalDb: false 1
  application:
    extraFiles:
      mountPath: <path> # e.g. /opt/app-root/src
      secrets:
        - name: <crt-secret> 2
          key: postgres-crt.pem, postgres-ca.pem, postgres-key.key # key name as in <crt-secret> Secret
    extraEnvs:
      secrets:
        - name: <cred-secret> 3
  # ...
EOF
Note: The environment variables listed in the Backstage CR work with the Operator default configuration. If you have changed the Operator default configuration, you must reconfigure the Backstage CR accordingly.
- Apply the Backstage CR to the namespace where you have deployed the RHDH instance.
2.2. Configuring an external PostgreSQL instance using the Helm Chart
You can configure an external PostgreSQL instance by using the Helm Chart. By default, the Helm Chart creates and manages a local instance of PostgreSQL in the same namespace where you have deployed the RHDH instance. However, you can change this default setting to configure an external PostgreSQL database server, for example, Amazon Web Services (AWS) Relational Database Service (RDS) or Azure database.
Prerequisites
- You are using a supported version of PostgreSQL. For more information, see the Product life cycle page.
You have the following details:
- db-host: Denotes your PostgreSQL instance Domain Name System (DNS) or IP address
- db-port: Denotes your PostgreSQL instance port number, such as 5432
- username: Denotes the user name to connect to your PostgreSQL instance
- password: Denotes the password to connect to your PostgreSQL instance
- You have installed the RHDH application by using the Helm Chart.
- Optional: You have a CA certificate, Transport Layer Security (TLS) private key, and TLS certificate so that you can secure your database connection by using the TLS protocol. For more information, refer to your PostgreSQL vendor documentation.
By default, Developer Hub uses a database for each plugin and automatically creates it if none is found. You might need the Create Database privilege in addition to PSQL Database privileges for configuring an external PostgreSQL instance.
Procedure
Optional: Create a certificate secret to configure your PostgreSQL instance with a TLS connection:
cat <<EOF | oc -n <your-namespace> create -f -
apiVersion: v1
kind: Secret
metadata:
  name: <crt-secret> 1
type: Opaque
stringData:
  postgres-ca.pem: |-
    -----BEGIN CERTIFICATE-----
    <ca-certificate-key> 2
  postgres-key.key: |-
    -----BEGIN CERTIFICATE-----
    <tls-private-key> 3
  postgres-crt.pem: |-
    -----BEGIN CERTIFICATE-----
    <tls-certificate-key> 4
  # ...
EOF
Create a credential secret to connect with the PostgreSQL instance:
cat <<EOF | oc -n <your-namespace> create -f -
apiVersion: v1
kind: Secret
metadata:
  name: <cred-secret> 1
type: Opaque
stringData: 2
  POSTGRES_PASSWORD: <password>
  POSTGRES_PORT: "<db-port>"
  POSTGRES_USER: <username>
  POSTGRES_HOST: <db-host>
  PGSSLMODE: <ssl-mode> # for TLS connection 3
  NODE_EXTRA_CA_CERTS: <abs-path-to-pem-file> # for TLS connection, e.g. /opt/app-root/src/postgres-crt.pem 4
EOF
- 1
- Provide the name of the credential secret.
- 2
- Provide credential data to connect with your PostgreSQL instance.
- 3
- Optional: Provide the value based on the required Secure Sockets Layer (SSL) mode.
- 4
- Optional: Provide the value only if you need a TLS connection for your PostgreSQL instance.
Configure your PostgreSQL instance in the Helm configuration file named values.yaml:
# ...
upstream:
  postgresql:
    enabled: false  # disable PostgreSQL instance creation 1
    auth:
      existingSecret: <cred-secret>  # inject credentials secret to Backstage 2
  backstage:
    appConfig:
      backend:
        database:
          connection:  # configure Backstage DB connection parameters
            host: ${POSTGRES_HOST}
            port: ${POSTGRES_PORT}
            user: ${POSTGRES_USER}
            password: ${POSTGRES_PASSWORD}
            ssl:
              rejectUnauthorized: true
              ca:
                $file: /opt/app-root/src/postgres-ca.pem
              key:
                $file: /opt/app-root/src/postgres-key.key
              cert:
                $file: /opt/app-root/src/postgres-crt.pem
    extraEnvVarsSecrets:
      - <cred-secret>  # inject credentials secret to Backstage 3
    extraEnvVars:
      - name: BACKEND_SECRET
        valueFrom:
          secretKeyRef:
            key: backend-secret
            name: '{{ include "janus-idp.backend-secret-name" $ }}'
    extraVolumeMounts:
      - mountPath: /opt/app-root/src/dynamic-plugins-root
        name: dynamic-plugins-root
      - mountPath: /opt/app-root/src/postgres-crt.pem
        name: postgres-crt  # inject TLS certificate to Backstage cont. 4
        subPath: postgres-crt.pem
      - mountPath: /opt/app-root/src/postgres-ca.pem
        name: postgres-ca  # inject CA certificate to Backstage cont. 5
        subPath: postgres-ca.pem
      - mountPath: /opt/app-root/src/postgres-key.key
        name: postgres-key  # inject TLS private key to Backstage cont. 6
        subPath: postgres-key.key
    extraVolumes:
      - ephemeral:
          volumeClaimTemplate:
            spec:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 1Gi
        name: dynamic-plugins-root
      - configMap:
          defaultMode: 420
          name: dynamic-plugins
          optional: true
        name: dynamic-plugins
      - name: dynamic-plugins-npmrc
        secret:
          defaultMode: 420
          optional: true
          secretName: dynamic-plugins-npmrc
      - name: postgres-crt
        secret:
          secretName: <crt-secret> 7
# ...
- 1
- Set the value of the upstream.postgresql.enabled parameter to false to disable creating local PostgreSQL instances.
- 2
- Provide the name of the credential secret.
- 3
- Provide the name of the credential secret.
- 4
- Optional: Provide the name of the TLS certificate only for a TLS connection.
- 5
- Optional: Provide the name of the CA certificate only for a TLS connection.
- 6
- Optional: Provide the name of the TLS private key only if your TLS connection requires a private key.
- 7
- Provide the name of the certificate secret if you have configured a TLS connection.
Apply the configuration changes in your Helm configuration file named values.yaml:
helm upgrade -n <your-namespace> <your-deploy-name> openshift-helm-charts/redhat-developer-hub -f values.yaml --version 1.4.0
2.3. Migrating local databases to an external database server using the Operator
By default, Red Hat Developer Hub hosts the data for each plugin in a PostgreSQL database. When you fetch the list of databases, you might see multiple databases based on the number of plugins configured in Developer Hub. You can migrate the data from an RHDH instance hosted on a local PostgreSQL server to an external PostgreSQL service, such as AWS RDS, Azure database, or Crunchy database. To migrate the data from each RHDH instance, you can use PostgreSQL utilities, such as pg_dump with psql or pgAdmin.
The following procedure uses a database copy script to do a quick migration.
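For reference, the copy operation that the script in the procedure performs for each plugin database is roughly equivalent to the following pg_dump piped into psql. This is a sketch for a single database; the destination host and password are placeholders, and the source port matches the port forwarding example below:
# Create the destination database, then stream the dump from the forwarded local instance into it
PGPASSWORD=<destination-db-password> psql -h <db-service-host> -p 5432 -U postgres -c "create database backstage_plugin_catalog;"
pg_dump -h 127.0.0.1 -p 15432 -U postgres -d backstage_plugin_catalog \
  | PGPASSWORD=<destination-db-password> psql -h <db-service-host> -p 5432 -U postgres -d backstage_plugin_catalog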
Prerequisites
Procedure
Configure port forwarding for the local PostgreSQL database pod by running the following command on a terminal:
oc port-forward -n <your-namespace> <pgsql-pod-name> <forward-to-port>:<forward-from-port>
Where:
- The <pgsql-pod-name> variable denotes the name of a PostgreSQL pod with the format backstage-psql-<deployment-name>-<_index>.
- The <forward-to-port> variable denotes the port of your choice to forward PostgreSQL data to.
- The <forward-from-port> variable denotes the local PostgreSQL instance port, such as 5432.
Example: Configuring port forwarding
oc port-forward -n developer-hub backstage-psql-developer-hub-0 15432:5432
- Make a copy of the following db_copy.sh script and edit the details based on your configuration:
#!/bin/bash
to_host=<db-service-host> 1
to_port=5432 2
to_user=postgres 3
from_host=127.0.0.1 4
from_port=15432 5
from_user=postgres 6
allDB=("backstage_plugin_app" "backstage_plugin_auth" "backstage_plugin_catalog" "backstage_plugin_permission" "backstage_plugin_scaffolder" "backstage_plugin_search") 7
for db in ${!allDB[@]};
do
  db=${allDB[$db]}
  echo Copying database: $db
  PGPASSWORD=$TO_PSW psql -h $to_host -p $to_port -U $to_user -c "create database $db;"
  pg_dump -h $from_host -p $from_port -U $from_user -d $db | PGPASSWORD=$TO_PSW psql -h $to_host -p $to_port -U $to_user -d $db
done
- 1
- The destination host name, for example, <db-instance-name>.rds.amazonaws.com.
- 2
- The destination port, such as 5432.
- 3
- The destination server username, for example, postgres.
- 4
- The source host name, such as 127.0.0.1.
- 5
- The source port number, such as the <forward-to-port> variable.
- 6
- The source server username, for example, postgres.
- 7
- The name of databases to import in double quotes separated by spaces, for example, ("backstage_plugin_app" "backstage_plugin_auth" "backstage_plugin_catalog" "backstage_plugin_permission" "backstage_plugin_scaffolder" "backstage_plugin_search").
Create a destination database for copying the data:
/bin/bash TO_PSW=<destination-db-password> /path/to/db_copy.sh 1
- 1
- The <destination-db-password> variable denotes the password to connect to the destination database.
Note: You can stop port forwarding when the copying of the data is complete. For more information about handling large databases and using compression tools, see the Handling Large Databases section on the PostgreSQL website.
- Reconfigure your Backstage custom resource (CR). For more information, see Configuring an external PostgreSQL instance using the Operator.
Check that the following code is present at the end of your Backstage CR after reconfiguration:
# ...
spec:
  database:
    enableLocalDb: false
  application:
    # ...
    extraFiles:
      secrets:
        - name: <crt-secret>
          key: postgres-crt.pem # key name as in <crt-secret> Secret
    extraEnvs:
      secrets:
        - name: <cred-secret>
# ...
Note: Reconfiguring the Backstage CR deletes the corresponding StatefulSet and Pod objects, but does not delete the PersistentVolumeClaim object. Use the following command to delete the local PersistentVolumeClaim object:
oc -n developer-hub delete pvc <local-psql-pvc-name>
where the <local-psql-pvc-name> variable is in the data-<psql-pod-name> format.
- Apply the configuration changes.
Verification
Verify that your RHDH instance is running with the migrated data and does not contain the local PostgreSQL database by running the following command:
oc get pods -n <your-namespace>
Check the output for the following details:
- The backstage-developer-hub-xxx pod is in the Running state.
- The backstage-psql-developer-hub-0 pod is not available.
You can also verify these details using the Topology view in the OpenShift Container Platform web console.
Chapter 3. Configuring an RHDH instance with a TLS connection in Kubernetes
You can configure an RHDH instance with a Transport Layer Security (TLS) connection in a Kubernetes cluster, such as an Azure Red Hat OpenShift (ARO) cluster, any cluster from a supported cloud provider, or your own cluster with proper configuration. However, you must use a public Certificate Authority (CA)-signed certificate to configure your Kubernetes cluster.
Prerequisites
- You have set up an Azure Red Hat OpenShift (ARO) cluster with a public CA-signed certificate. For more information about obtaining CA certificates, refer to your vendor documentation.
- You have created a namespace and set up a service account with proper read permissions on resources.
Example: Kubernetes manifest for role-based access control
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backstage-read-only
rules:
  - apiGroups:
      - '*'
    resources:
      - pods
      - configmaps
      - services
      - deployments
      - replicasets
      - horizontalpodautoscalers
      - ingresses
      - statefulsets
      - limitranges
      - resourcequotas
      - daemonsets
    verbs:
      - get
      - list
      - watch
#...
- You have obtained the secret and the service CA certificate associated with your service account.
- You have created some resources and added annotations to them so they can be discovered by the Kubernetes plugin. You can apply these Kubernetes annotations:
- backstage.io/kubernetes-id to label components
- backstage.io/kubernetes-namespace to label namespaces
Procedure
Enable the Kubernetes plugins in the dynamic-plugins-rhdh.yaml file:
kind: ConfigMap
apiVersion: v1
metadata:
  name: dynamic-plugins-rhdh
data:
  dynamic-plugins.yaml: |
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - package: ./dynamic-plugins/dist/backstage-plugin-kubernetes-backend-dynamic
        disabled: false 1
      - package: ./dynamic-plugins/dist/backstage-plugin-kubernetes
        disabled: false 2
# ...
Note: The backstage-plugin-kubernetes plugin is currently in Technology Preview. As an alternative, you can use the ./dynamic-plugins/dist/backstage-plugin-topology-dynamic plugin, which is Generally Available (GA).
Set the Kubernetes cluster details and configure the catalog sync options in the app-config-rhdh.yaml file:
kind: ConfigMap
apiVersion: v1
metadata:
  name: app-config-rhdh
data:
  "app-config-rhdh.yaml": |
    # ...
    catalog:
      rules:
        - allow: [Component, System, API, Resource, Location]
      providers:
        kubernetes:
          openshift:
            cluster: openshift
            processor:
              namespaceOverride: default
              defaultOwner: guests
            schedule:
              frequency:
                seconds: 30
              timeout:
                seconds: 5
    kubernetes:
      serviceLocatorMethod:
        type: 'multiTenant'
      clusterLocatorMethods:
        - type: 'config'
          clusters:
            - url: <target-cluster-api-server-url> 1
              name: openshift
              authProvider: 'serviceAccount'
              skipTLSVerify: false 2
              skipMetricsLookup: true
              dashboardUrl: <target-cluster-console-url> 3
              dashboardApp: openshift
              serviceAccountToken: ${K8S_SERVICE_ACCOUNT_TOKEN} 4
              caData: ${K8S_CONFIG_CA_DATA} 5
    # ...
- 1
- The base URL to the Kubernetes control plane. You can run the kubectl cluster-info command to get the base URL.
- 2
- Set the value of this parameter to false to enable the verification of the TLS certificate.
- 3
- Optional: The link to the Kubernetes dashboard managing the ARO cluster.
- 4
- Optional: Pass the service account token using a K8S_SERVICE_ACCOUNT_TOKEN environment variable that you can define in your secrets-rhdh secret.
- 5
- Pass the CA data using a K8S_CONFIG_CA_DATA environment variable that you can define in your secrets-rhdh secret.
- Save the configuration changes.
Verification
Run the RHDH application to import your catalog:
kubectl -n rhdh-operator get pods -w
- Verify that the pod log shows no errors for your configuration.
- Go to Catalog and check the component page in the Developer Hub instance to verify the cluster connection and the presence of your created resources.
If you encounter connection errors, such as certificate issues or permissions, check the message box in the component page or view the logs of the pod.
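For example, a quick way to inspect the backend logs from a terminal is a sketch like the following; the namespace and pod name are placeholders for your deployment:
# View the logs of the RHDH backend pod to diagnose certificate or permission errors
kubectl -n <your-namespace> logs <rhdh-backend-pod-name>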
Chapter 4. Running the RHDH application behind a corporate proxy
You can run the RHDH application behind a corporate proxy by setting any of the following environment variables before starting the application:
- HTTP_PROXY: Denotes the proxy to use for HTTP requests.
- HTTPS_PROXY: Denotes the proxy to use for HTTPS requests.
Additionally, set the NO_PROXY environment variable to bypass the proxy for certain domains. The variable value is a comma-separated list of hostnames or IP addresses that can be accessed without the proxy, even if one is specified.
4.1. Understanding the NO_PROXY exclusion rules
NO_PROXY is a comma or space-separated list of hostnames or IP addresses, with optional port numbers. If the input URL matches any of the entries listed in NO_PROXY, a direct request fetches that URL, bypassing the proxy settings.
The default value for NO_PROXY in RHDH is localhost,127.0.0.1. If you want to override it, include at least localhost or localhost:7007 in the list. Otherwise, the RHDH backend might fail.
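For example, a NO_PROXY value that keeps the default local entries and also bypasses the proxy for an internal domain and its subdomains (the internal domain is a placeholder) could look like this:
NO_PROXY=localhost,127.0.0.1,internal.example.com,.internal.example.com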
Matching follows the rules below:
- NO_PROXY=* will bypass the proxy for all requests.
- Spaces and commas might separate the entries in the NO_PROXY list. For example, NO_PROXY="localhost,example.com", NO_PROXY="localhost example.com", and NO_PROXY="localhost, example.com" would have the same effect.
- If NO_PROXY contains no entries, configuring the HTTP(S)_PROXY settings makes the backend send all requests through the proxy.
- The backend does not perform a DNS lookup to determine if a request should bypass the proxy or not. For example, if DNS resolves example.com to 1.2.3.4, setting NO_PROXY=1.2.3.4 has no effect on requests sent to example.com. Only requests sent to the IP address 1.2.3.4 bypass the proxy.
- If you add a port after the hostname or IP address, the request must match both the host/IP and the port to bypass the proxy. For example, NO_PROXY=example.com:1234 would bypass the proxy for requests to http(s)://example.com:1234, but not for requests on other ports, like http(s)://example.com.
- If you do not specify a port after the hostname or IP address, all requests to that host/IP address will bypass the proxy regardless of the port. For example, NO_PROXY=localhost would bypass the proxy for requests sent to URLs like http(s)://localhost:7077 and http(s)://localhost:8888.
- IP address blocks in CIDR notation will not work. So setting NO_PROXY=10.11.0.0/16 will not have any effect, even if the backend sends a request to an IP address in that block.
- Only IPv4 addresses are supported. IPv6 addresses like ::1 will not work.
- Generally, the proxy is only bypassed if the hostname is an exact match for an entry in the NO_PROXY list. The only exceptions are entries that start with a dot (.) or with a wildcard (*). In such a case, the proxy is bypassed if the hostname ends with the entry.
List the domain and the wildcard domain if you want to exclude a given domain and all its subdomains. For example, you would set NO_PROXY=example.com,.example.com to bypass the proxy for requests sent to http(s)://example.com and http(s)://subdomain.example.com.
4.2. Configuring proxy information in Helm deployment
For Helm-based deployment, either a developer or a cluster administrator with permissions to create resources in the cluster can configure the proxy variables in a values.yaml Helm configuration file.
Prerequisites
- You have installed the Red Hat Developer Hub application.
Procedure
Set the proxy information in your Helm configuration file:
upstream:
  backstage:
    extraEnvVars:
      - name: HTTP_PROXY
        value: '<http_proxy_url>'
      - name: HTTPS_PROXY
        value: '<https_proxy_url>'
      - name: NO_PROXY
        value: '<no_proxy_settings>'
Where:
<http_proxy_url>
- Denotes a variable that you must replace with the HTTP proxy URL.
<https_proxy_url>
- Denotes a variable that you must replace with the HTTPS proxy URL.
<no_proxy_settings>
- Denotes a variable that you must replace with comma-separated URLs that you want to exclude from proxying, for example, foo.com,baz.com.
Example: Setting proxy variables using Helm Chart
upstream:
  backstage:
    extraEnvVars:
      - name: HTTP_PROXY
        value: 'http://10.10.10.105:3128'
      - name: HTTPS_PROXY
        value: 'http://10.10.10.106:3128'
      - name: NO_PROXY
        value: 'localhost,example.org'
- Save the configuration changes.
4.3. Configuring proxy information in Operator deployment
For Operator-based deployment, the approach you use for proxy configuration is based on your role:
- As a cluster administrator with access to the Operator namespace, you can configure the proxy variables in the Operator’s default ConfigMap file. This configuration applies the proxy settings to all the users of the Operator.
- As a developer, you can configure the proxy variables in a custom resource (CR) file. This configuration applies the proxy settings to the RHDH application created from that CR.
Prerequisites
- You have installed the Red Hat Developer Hub application.
Procedure
Perform one of the following steps based on your role:
As an administrator, set the proxy information in the Operator’s default ConfigMap file:
- Search for a ConfigMap file named backstage-default-config in the default namespace rhdh-operator and open it.
- Find the deployment.yaml key.
- Set the value of the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables in the Deployment spec as shown in the following example:
Example: Setting proxy variables in a ConfigMap file
# Other fields omitted
deployment.yaml: |-
  apiVersion: apps/v1
  kind: Deployment
  spec:
    template:
      spec:
        # Other fields omitted
        initContainers:
          - name: install-dynamic-plugins
            # command omitted
            env:
              - name: NPM_CONFIG_USERCONFIG
                value: /opt/app-root/src/.npmrc.dynamic-plugins
              - name: HTTP_PROXY
                value: 'http://10.10.10.105:3128'
              - name: HTTPS_PROXY
                value: 'http://10.10.10.106:3128'
              - name: NO_PROXY
                value: 'localhost,example.org'
        # Other fields omitted
        containers:
          - name: backstage-backend
            # Other fields omitted
            env:
              - name: APP_CONFIG_backend_listen_port
                value: "7007"
              - name: HTTP_PROXY
                value: 'http://10.10.10.105:3128'
              - name: HTTPS_PROXY
                value: 'http://10.10.10.106:3128'
              - name: NO_PROXY
                value: 'localhost,example.org'
As a developer, set the proxy information in your custom resource (CR) file as shown in the following example:
Example: Setting proxy variables in a CR file
spec:
  # Other fields omitted
  application:
    extraEnvs:
      envs:
        - name: HTTP_PROXY
          value: 'http://10.10.10.105:3128'
        - name: HTTPS_PROXY
          value: 'http://10.10.10.106:3128'
        - name: NO_PROXY
          value: 'localhost,example.org'
- Save the configuration changes.
Chapter 5. Red Hat Developer Hub integration with Amazon Web Services (AWS)
You can integrate your Red Hat Developer Hub application with Amazon Web Services (AWS), which can help you streamline your workflows within the AWS ecosystem. Integrating the Developer Hub resources with AWS provides access to a comprehensive suite of tools, services, and solutions.
The integration with AWS requires the deployment of Developer Hub in Elastic Kubernetes Service (EKS) using one of the following methods:
- The Helm chart
- The Red Hat Developer Hub Operator
Chapter 6. Managing templates
A template is a form composed of different UI fields that is defined in a YAML file. Templates include actions, which are steps that are executed in sequential order and can be executed conditionally.
You can use templates to easily create Red Hat Developer Hub components, and then publish these components to different locations, such as the Red Hat Developer Hub software catalog, or repositories in GitHub or GitLab.
6.1. Creating a template by using the Template Editor
You can create a template by using the Template Editor.
Procedure
Access the Template Editor by using one of the following options:
- Open the URL https://<rhdh_url>/create/edit for your Red Hat Developer Hub instance.
- Click Create… in the navigation menu of the Red Hat Developer Hub console, then click the overflow menu button and select Template editor.
- Click Edit Template Form.
- Optional: Modify the YAML definition for the parameters of your template. For more information about these parameters, see Section 6.2, “Creating a template as a YAML file”.
- In the Name * field, enter a unique name for your template.
- From the Owner drop-down menu, choose an owner for the template.
- Click Next.
In the Repository Location view, enter the following information about the hosted repository that you want to publish the template to:
Select an available Host from the drop-down menu.
Note: Available hosts are defined in the YAML parameters by the allowedHosts field:
Example YAML
# ...
    ui:options:
      allowedHosts:
        - github.com
# ...
- In the Owner * field, enter an organization, user or project that the hosted repository belongs to.
- In the Repository * field, enter the name of the hosted repository.
- Click Review.
- Review the information for accuracy, then click Create.
Verification
- Click the Catalog tab in the navigation panel.
- In the Kind drop-down menu, select Template.
- Confirm that your template is shown in the list of existing templates.
6.2. Creating a template as a YAML file
You can create a template by defining a Template object as a YAML file.
The Template object describes the template and its metadata. It also contains required input variables and a list of actions that are executed by the scaffolding service.
Template object example
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: template-name 1
  title: Example template 2
  description: An example template for v1beta3 scaffolder. 3
spec:
  owner: backstage/techdocs-core 4
  type: service 5
  parameters: 6
    - title: Fill in some steps
      required:
        - name
      properties:
        name:
          title: Name
          type: string
          description: Unique name of the component
        owner:
          title: Owner
          type: string
          description: Owner of the component
    - title: Choose a location
      required:
        - repoUrl
      properties:
        repoUrl:
          title: Repository Location
          type: string
  steps: 7
    - id: fetch-base
      name: Fetch Base
      action: fetch:template
      # ...
  output: 8
    links:
      - title: Repository 9
        url: ${{ steps['publish'].output.remoteUrl }}
      - title: Open in catalog 10
        icon: catalog
        entityRef: ${{ steps['register'].output.entityRef }}
# ...
- 1
- Specify a name for the template.
- 2
- Specify a title for the template. This is the title that is visible on the template tile in the Create… view.
- 3
- Specify a description for the template. This is the description that is visible on the template tile in the Create… view.
- 4
- Specify the ownership of the template. The owner field provides information about who is responsible for maintaining or overseeing the template within the system or organization. In the provided example, the owner field is set to backstage/techdocs-core. This means that this template belongs to the techdocs-core project in the backstage namespace.
- 5
- Specify the component type. Any string value is accepted for this required field, but your organization should establish a proper taxonomy for these. Red Hat Developer Hub instances may read this field and behave differently depending on its value. For example, a website type component may present tooling in the Red Hat Developer Hub interface that is specific to just websites.
The following values are common for this field:
service
- A backend service, typically exposing an API.
website
- A website.
library
- A software library, such as an npm module or a Java library.
- 6
- Use the parameters section to specify parameters for user input that are shown in a form view when a user creates a component by using the template in the Red Hat Developer Hub console. Each parameters subsection, defined by a title and properties, creates a new form page with that definition.
- 7
- Use the steps section to specify steps that are executed in the backend. These steps must be defined by using a unique step ID, a name, and an action. You can view actions that are available on your Red Hat Developer Hub instance by visiting the URL https://<rhdh_url>/create/actions.
- 8
- Use the output section to specify the structure of output data that is created when the template is used. The output section, particularly the links subsection, provides valuable references and URLs that users can utilize to access and interact with components that are created from the template.
- 9
- Provides a reference or URL to the repository associated with the generated component.
- 10
- Provides a reference or URL that allows users to open the generated component in a catalog or directory where various components are listed.
6.3. Importing an existing template to Red Hat Developer Hub
You can add an existing template to your Red Hat Developer Hub instance by using the Catalog Processor.
Prerequisites
- You have created a directory or repository that contains at least one template YAML file.
- If you want to use a template that is stored in a repository such as GitHub or GitLab, you must configure a Red Hat Developer Hub integration for your provider.
Procedure
In the app-config.yaml configuration file, modify the catalog.rules section to include a rule for templates, and configure the catalog.locations section to point to the template that you want to add, as shown in the following example:
# ...
catalog:
  rules:
    - allow: [Template] 1
  locations:
    - type: url 2
      target: https://<repository_url>/example-template.yaml 3
# ...
Verification
- Click the Catalog tab in the navigation panel.
- In the Kind drop-down menu, select Template.
- Confirm that your template is shown in the list of existing templates.
Additional resources
Chapter 7. Configuring the TechDocs plugin in Red Hat Developer Hub
The Red Hat Developer Hub TechDocs plugin helps your organization create, find, and use documentation in a central location and in a standardized way. For example:
- Docs-like-code approach
- Write your technical documentation in Markdown files that are stored inside your project repository along with your code.
- Documentation site generation
- Use MkDocs to create a full-featured, Markdown-based, static HTML site for your documentation that is rendered centrally in Developer Hub.
- Documentation site metadata and integrations
- See additional metadata about the documentation site alongside the static documentation, such as the date of the last update, the site owner, top contributors, open GitHub issues, Slack support channels, and Stack Overflow Enterprise tags.
- Built-in navigation and search
- Find the information that you want from a document more quickly and easily.
- Add-ons
- Customize your TechDocs experience with Add-ons to address higher-order documentation needs.
The TechDocs plugin is preinstalled and enabled on a Developer Hub instance by default. You can disable or enable the TechDocs plugin, and change other parameters, by configuring the Red Hat Developer Hub Helm chart or the Red Hat Developer Hub Operator config map.
Red Hat Developer Hub includes a built-in TechDocs builder that generates static HTML documentation from your codebase. However, the default basic setup of the local builder is not intended for production.
You can use a CI/CD pipeline with the repository that has a dedicated job to generate docs for TechDocs. The generated static files are stored in OpenShift Data Foundation or in a cloud storage solution of your choice and published to a static HTML documentation site.
After you configure OpenShift Data Foundation to store the files that TechDocs generates, you can configure the TechDocs plugin to use the OpenShift Data Foundation for cloud storage.
Additional resources
- For more information, see Configuring plugins in Red Hat Developer Hub.
7.1. Configuring storage for TechDocs files
The TechDocs publisher stores generated files in local storage or in cloud storage, such as OpenShift Data Foundation, Google GCS, AWS S3, or Azure Blob Storage.
7.1.1. Using OpenShift Data Foundation for file storage
You can configure OpenShift Data Foundation to store the files that TechDocs generates instead of relying on other cloud storage solutions.
OpenShift Data Foundation provides an ObjectBucketClaim custom resource (CR) that you can use to request an S3 compatible bucket backend. You must install the OpenShift Data Foundation Operator to use this feature.
Prerequisites
- An OpenShift Container Platform administrator has installed the OpenShift Data Foundation Operator in Red Hat OpenShift Container Platform. For more information, see OpenShift Container Platform - Installing Red Hat OpenShift Data Foundation Operator.
- An OpenShift Container Platform administrator has created an OpenShift Data Foundation cluster and configured the StorageSystem schema. For more information, see OpenShift Container Platform - Creating an OpenShift Data Foundation cluster.
Procedure
Create an ObjectBucketClaim CR where the generated TechDocs files are stored. For example:
apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: <rhdh_bucket_claim_name>
spec:
  generateBucketName: <rhdh_bucket_claim_name>
  storageClassName: openshift-storage.noobaa.io
Note: Creating the Developer Hub ObjectBucketClaim CR automatically creates both the Developer Hub ObjectBucketClaim config map and secret. The config map and secret have the same name as the ObjectBucketClaim CR.
After you create the ObjectBucketClaim CR, you can use the information stored in the config map and secret to make the information accessible to the Developer Hub container as environment variables. Depending on the method that you used to install Developer Hub, you add the access information to either the Red Hat Developer Hub Helm chart or Operator configuration.
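For example, you can inspect the generated access information from a terminal before wiring it into your deployment. This is a sketch; the name is the one generated from your ObjectBucketClaim CR and the namespace is a placeholder:
# The config map holds bucket connection details, the secret holds the access keys
oc -n <your-namespace> get configmap <rhdh_bucket_claim_name> -o yaml
oc -n <your-namespace> get secret <rhdh_bucket_claim_name> -o yaml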
Additional resources
- For more information about the Object Bucket Claim, see OpenShift Container Platform - Object Bucket Claim.
7.1.2. Making object storage accessible to containers by using the Helm chart
Creating an ObjectBucketClaim custom resource (CR) automatically generates both the Developer Hub ObjectBucketClaim config map and secret. The config map and secret contain ObjectBucket access information. Adding the access information to the Helm chart configuration makes it accessible to the Developer Hub container by adding the following environment variables to the container:
- BUCKET_NAME
- BUCKET_HOST
- BUCKET_PORT
- BUCKET_REGION
- BUCKET_SUBREGION
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
These variables are then used in the TechDocs plugin configuration.
Prerequisites
- You have installed Red Hat Developer Hub on OpenShift Container Platform using the Helm chart.
- You have created an ObjectBucketClaim CR for storing files generated by TechDocs. For more information, see Using OpenShift Data Foundation for file storage.
Procedure
In the upstream.backstage key in the Helm chart values, enter the name of the Developer Hub ObjectBucketClaim secret as the value for the extraEnvVarsSecrets field and the extraEnvVarsCM field. For example:
upstream:
  backstage:
    extraEnvVarsSecrets:
      - <rhdh_bucket_claim_name>
    extraEnvVarsCM:
      - <rhdh_bucket_claim_name>
7.1.2.1. Example TechDocs Plugin configuration for the Helm chart
The following example shows a Developer Hub Helm chart configuration for the TechDocs plugin:
global:
  dynamic:
    includes:
      - 'dynamic-plugins.default.yaml'
    plugins:
      - disabled: false
        package: ./dynamic-plugins/dist/backstage-plugin-techdocs-backend-dynamic
        pluginConfig:
          techdocs:
            builder: external
            generator:
              runIn: local
            publisher:
              awsS3:
                bucketName: '${BUCKET_NAME}'
                credentials:
                  accessKeyId: '${AWS_ACCESS_KEY_ID}'
                  secretAccessKey: '${AWS_SECRET_ACCESS_KEY}'
                endpoint: 'https://${BUCKET_HOST}'
                region: '${BUCKET_REGION}'
                s3ForcePathStyle: true
              type: awsS3
7.1.3. Making object storage accessible to containers by using the Operator
Creating an ObjectBucketClaim custom resource (CR) automatically generates both the Developer Hub ObjectBucketClaim config map and secret. The config map and secret contain ObjectBucket access information. Adding the access information to the Operator configuration makes it accessible to the Developer Hub container by adding the following environment variables to the container:
- BUCKET_NAME
- BUCKET_HOST
- BUCKET_PORT
- BUCKET_REGION
- BUCKET_SUBREGION
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
These variables are then used in the TechDocs plugin configuration.
Prerequisites
- You have installed Red Hat Developer Hub on OpenShift Container Platform using the Operator.
- You have created an ObjectBucketClaim CR for storing files generated by TechDocs.
Procedure
In the Developer Hub Backstage CR, enter the name of the Developer Hub ObjectBucketClaim config map as the value for the spec.application.extraEnvs.configMaps field and enter the Developer Hub ObjectBucketClaim secret name as the value for the spec.application.extraEnvs.secrets field. For example:
apiVersion: rhdh.redhat.com/v1alpha1
kind: Backstage
metadata:
  name: <name>
spec:
  application:
    extraEnvs:
      configMaps:
        - name: <rhdh_bucket_claim_name>
      secrets:
        - name: <rhdh_bucket_claim_name>
7.1.3.1. Example TechDocs Plugin configuration for the Operator
The following example shows a Red Hat Developer Hub Operator config map configuration for the TechDocs plugin:
kind: ConfigMap
apiVersion: v1
metadata:
  name: dynamic-plugins-rhdh
data:
  dynamic-plugins.yaml: |
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - disabled: false
        package: ./dynamic-plugins/dist/backstage-plugin-techdocs-backend-dynamic
        pluginConfig:
          techdocs:
            builder: external
            generator:
              runIn: local
            publisher:
              awsS3:
                bucketName: '${BUCKET_NAME}'
                credentials:
                  accessKeyId: '${AWS_ACCESS_KEY_ID}'
                  secretAccessKey: '${AWS_SECRET_ACCESS_KEY}'
                endpoint: 'https://${BUCKET_HOST}'
                region: '${BUCKET_REGION}'
                s3ForcePathStyle: true
              type: awsS3
7.2. Configuring CI/CD to generate and publish TechDocs sites
TechDocs reads the static generated documentation files from a cloud storage bucket, such as OpenShift Data Foundation. The documentation site is generated on the CI/CD workflow associated with the repository containing the documentation files. You can generate docs on CI and publish to a cloud storage using the techdocs-cli CLI tool.
You can use the following example to create a script for TechDocs publication:
# Prepare
REPOSITORY_URL='https://github.com/org/repo'
git clone $REPOSITORY_URL
cd repo

# Install @techdocs/cli, mkdocs and mkdocs plugins
npm install -g @techdocs/cli
pip install "mkdocs-techdocs-core==1.*"

# Generate
techdocs-cli generate --no-docker

# Publish
techdocs-cli publish --publisher-type awsS3 --storage-name <bucket/container> --entity <Namespace/Kind/Name>
The TechDocs workflow starts the CI when a user makes changes in the repository containing the documentation files. You can configure the workflow to start only when files inside the docs/ directory or mkdocs.yml are changed.
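For example, a trigger restricted to documentation changes could look like the following fragment. This is a sketch of the commented-out option shown in the full workflow later in this section:
on:
  push:
    branches: [main]
    paths:
      - "docs/**"
      - "mkdocs.yml"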
7.2.1. Preparing your repository for CI
The first step on the CI is to clone your documentation source repository in a working directory.
Procedure
To clone your documentation source repository in a working directory, enter the following command:
git clone <https://path/to/docs-repository/>
7.2.2. Generating the TechDocs site
Procedure
To configure CI/CD to generate your techdocs, complete the following steps:
Install the npx package to run techdocs-cli using the following command:
npm install -g npx
Install the techdocs-cli tool using the following command:
npm install -g @techdocs/cli
Install the mkdocs plugins using the following command:
pip install "mkdocs-techdocs-core==1.*"
Generate your techdocs site using the following command:
npx @techdocs/cli generate --no-docker --source-dir <path_to_repo> --output-dir ./site
Where <path_to_repo> is the location in the file path that you used to clone your repository.
7.2.3. Publishing the TechDocs site
Procedure
To publish your techdocs site, complete the following steps:
- Set the necessary authentication environment variables for your cloud storage provider.
Publish your techdocs using the following command:
npx @techdocs/cli publish --publisher-type <awsS3|googleGcs> --storage-name <bucket/container> --entity <namespace/kind/name> --directory ./site
Add a .github/workflows/techdocs.yml file in your Software Template(s). For example:
name: Publish TechDocs Site

on:
  push:
    branches: [main]
    # You can even set it to run only when TechDocs related files are updated.
    # paths:
    #   - "docs/**"
    #   - "mkdocs.yml"

jobs:
  publish-techdocs-site:
    runs-on: ubuntu-latest

    # The following secrets are required in your CI environment for publishing files to AWS S3.
    # e.g. You can use GitHub Organization secrets to set them for all existing and new repositories.
    env:
      TECHDOCS_S3_BUCKET_NAME: ${{ secrets.TECHDOCS_S3_BUCKET_NAME }}
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_REGION: ${{ secrets.AWS_REGION }}
      ENTITY_NAMESPACE: 'default'
      ENTITY_KIND: 'Component'
      ENTITY_NAME: 'my-doc-entity'
      # In a Software template, Scaffolder will replace {{cookiecutter.component_id | jsonify}}
      # with the correct entity name. This is same as metadata.name in the entity's catalog-info.yaml
      # ENTITY_NAME: '{{ cookiecutter.component_id | jsonify }}'

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - uses: actions/setup-node@v3

      - uses: actions/setup-python@v4
        with:
          python-version: '3.9'

      - name: Install techdocs-cli
        run: sudo npm install -g @techdocs/cli

      - name: Install mkdocs and mkdocs plugins
        run: python -m pip install mkdocs-techdocs-core==1.*

      - name: Generate docs site
        run: techdocs-cli generate --no-docker --verbose

      - name: Publish docs site
        run: techdocs-cli publish --publisher-type awsS3 --storage-name $TECHDOCS_S3_BUCKET_NAME --entity $ENTITY_NAMESPACE/$ENTITY_KIND/$ENTITY_NAME
Chapter 8. Configuring Red Hat Developer Hub deployment
The Red Hat Developer Hub Operator exposes a rhdh.redhat.com/v1alpha2 API version of its Custom Resource Definition (CRD). This CRD exposes a generic spec.deployment.patch field, which gives you full control over the Developer Hub Deployment resource. This field can be a fragment of the standard apps.Deployment Kubernetes object.
Procedure
- Create a Developer Hub custom resource with the following fields:
Example
apiVersion: rhdh.redhat.com/v1alpha2
kind: Backstage
metadata:
  name: developer-hub
spec:
  deployment:
    patch:
      spec:
        template:
labels
Add labels to the Developer Hub pod.
Example adding the label my=true
apiVersion: rhdh.redhat.com/v1alpha2
kind: Backstage
metadata:
  name: developer-hub
spec:
  deployment:
    patch:
      spec:
        template:
          metadata:
            labels:
              my: true
volumes
Add an additional volume named my-volume and mount it under /my/path in the Developer Hub application container.
Example additional volume
apiVersion: rhdh.redhat.com/v1alpha2
kind: Backstage
metadata:
  name: developer-hub
spec:
  deployment:
    patch:
      spec:
        template:
          spec:
            containers:
              - name: backstage-backend
                volumeMounts:
                  - mountPath: /my/path
                    name: my-volume
            volumes:
              - ephemeral:
                  volumeClaimTemplate:
                    spec:
                      storageClassName: "special"
                name: my-volume
Replace the default dynamic-plugins-root volume with a persistent volume claim (PVC) named dynamic-plugins-root. Note the $patch: replace directive; otherwise, a new volume will be added.
Example dynamic-plugins-root volume replacement
apiVersion: rhdh.redhat.com/v1alpha2
kind: Backstage
metadata:
  name: developer-hub
spec:
  deployment:
    patch:
      spec:
        template:
          spec:
            volumes:
              - $patch: replace
                name: dynamic-plugins-root
                persistentVolumeClaim:
                  claimName: dynamic-plugins-root
cpu request
Set the CPU request for the Developer Hub application container to 250m.
Example CPU request
apiVersion: rhdh.redhat.com/v1alpha2
kind: Backstage
metadata:
  name: developer-hub
spec:
  deployment:
    patch:
      spec:
        template:
          spec:
            containers:
              - name: backstage-backend
                resources:
                  requests:
                    cpu: 250m
my-sidecar container
Add a new my-sidecar sidecar container into the Developer Hub Pod.
Example sidecar container
apiVersion: rhdh.redhat.com/v1alpha2
kind: Backstage
metadata:
  name: developer-hub
spec:
  deployment:
    patch:
      spec:
        template:
          spec:
            containers:
              - name: my-sidecar
                image: quay.io/my-org/my-sidecar:latest
Additional resources
- To learn more about merging, see Strategic Merge Patch.
8.1. Mounting directories from pre-created PVCs
Starting from v1alpha3, you can mount directories from pre-created PersistentVolumeClaims (PVCs) using the spec.application.extraFiles.pvcs field.
Prerequisites
You have an understanding of the mounting logic:
- If spec.application.extraFiles.pvcs[].mountPath is defined, the PVC is mounted to the specified path.
- If spec.application.extraFiles.pvcs[].mountPath is not defined, the PVC is mounted under the path specified in spec.application.extraFiles.mountPath.
- If neither mountPath is defined, the PVC is mounted under /opt/app-root/src or under the RHDH container working directory, if one is defined.
Procedure
Define the PVCs using the following YAML example:
Example YAML file to define PVCs
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim2
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
Use the following configuration to specify how the PVCs are mounted in the container:
Example configuration for mounting PVCs in a container
spec:
  application:
    extraFiles:
      mountPath: /my/path
      pvcs:
        - name: myclaim1
        - name: myclaim2
          mountPath: /vol/my/claim
Verification
Based on the configuration, the following directories are mounted in the container:
- /my/path/myclaim1
- /vol/my/claim