Administration guide for Red Hat Developer Hub
- Preface
- Red Hat Developer Hub support
- 1. Adding a custom application configuration file to Red Hat OpenShift Container Platform
- 2. Configuring external PostgreSQL databases
- 3. Configuring an RHDH instance with a TLS connection in Kubernetes
- 4. Telemetry data collection
- 5. Enabling observability for Red Hat Developer Hub on OpenShift Container Platform
- 6. Running the RHDH application behind a corporate proxy
- 7. Red Hat Developer Hub integration with Amazon Web Services (AWS)
- 8. Red Hat Developer Hub integration with Microsoft Azure Kubernetes Service (AKS)
- 9. Managing templates
- 10. Configuring the TechDocs plugin in Red Hat Developer Hub
- 11. Configuring Red Hat Developer Hub deployment
Preface
Red Hat Developer Hub is an enterprise-grade, open developer platform that you can use to build developer portals. This platform contains a supported and opinionated framework that helps reduce the friction and frustration of developers while boosting their productivity.
Red Hat Developer Hub support
If you experience difficulty with a procedure described in this documentation, visit the Red Hat Customer Portal. You can use the Red Hat Customer Portal for the following purposes:
- To search or browse through the Red Hat Knowledgebase of technical support articles about Red Hat products.
- To create a support case for Red Hat Global Support Services (GSS). For support case creation, select Red Hat Developer Hub as the product and select the appropriate product version. For detailed information about supported platforms, see Supported Platforms and the Red Hat Developer Hub Life Cycle.
Chapter 1. Adding a custom application configuration file to Red Hat OpenShift Container Platform
To access the Red Hat Developer Hub, you must add a custom application configuration file to Red Hat OpenShift Container Platform. In OpenShift Container Platform, you can use the following content as a base template to create a ConfigMap named app-config-rhdh:
```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: app-config-rhdh
data:
  app-config-rhdh.yaml: |
    app:
      title: Red Hat Developer Hub
```
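The procedures in this chapter create this ConfigMap through the web console. As a minimal CLI sketch, assuming you saved the template above to a local file named app-config-rhdh.yaml:

```bash
# Create the ConfigMap from the template file; replace <your-namespace>
# with the project where Developer Hub is deployed.
oc apply -f app-config-rhdh.yaml -n <your-namespace>
```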
You can add the custom application configuration file to OpenShift Container Platform in one of the following ways:
- The Red Hat Developer Hub Operator
- The Red Hat Developer Hub Helm chart
1.1. Adding a custom application configuration file to OpenShift Container Platform using the Helm chart
You can use the Red Hat Developer Hub Helm chart to add a custom application configuration file to your OpenShift Container Platform instance.
Prerequisites
- You have created a Red Hat OpenShift Container Platform account.
Procedure
- From the OpenShift Container Platform web console, select the ConfigMaps tab.
- Click Create ConfigMap.
- From the Create ConfigMap page, select the YAML view option in Configure via and make changes to the file, if needed.
- Click Create.
- Go to the Helm tab to see the list of Helm releases.
- Click the overflow menu on the Helm release that you want to use and select Upgrade.
Use either the Form view or YAML view to edit the Helm configuration.
Using Form view
- Expand Root Schema → Backstage chart schema → Backstage parameters → Extra app configuration files to inline into command arguments.
- Click the Add Extra app configuration files to inline into command arguments link.
Enter the value in the following fields:
- configMapRef: app-config-rhdh
- filename: app-config-rhdh.yaml
- Click Upgrade.
Using YAML view
Set the value of the upstream.backstage.extraAppConfig.configMapRef and upstream.backstage.extraAppConfig.filename parameters as follows:
```yaml
# ... other Red Hat Developer Hub Helm Chart configurations
upstream:
  backstage:
    extraAppConfig:
      - configMapRef: app-config-rhdh
        filename: app-config-rhdh.yaml
# ... other Red Hat Developer Hub Helm Chart configurations
```
- Click Upgrade.
1.2. Adding a custom application configuration file to OpenShift Container Platform using the Operator
A custom application configuration file is a ConfigMap object that you can use to change the configuration of your Red Hat Developer Hub instance. If you are deploying your Developer Hub instance on Red Hat OpenShift Container Platform, you can use the Red Hat Developer Hub Operator to add a custom application configuration file to your OpenShift Container Platform instance by creating the ConfigMap object and referencing it in the Developer Hub custom resource (CR).
The custom application configuration file contains a sensitive environment variable, named BACKEND_SECRET. This variable contains a mandatory backend authentication key that Developer Hub uses to reference an environment variable defined in an OpenShift Container Platform secret. You must create a secret named secrets-rhdh and reference it in the Developer Hub CR.
You are responsible for protecting your Red Hat Developer Hub installation from external and unauthorized access. Manage the backend authentication key like any other secret. Meet strong password requirements, do not expose it in any configuration files, and only inject it into configuration files as an environment variable.
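For example, a minimal CLI sketch of creating such a secret, combining the secret name and key used in the procedure below with the key-generation command shown there:

```bash
# Generate a random backend authentication key and store it in a Secret named
# secrets-rhdh under the BACKEND_SECRET key; use a unique value per instance.
oc create secret generic secrets-rhdh -n <your-namespace> \
  --from-literal=BACKEND_SECRET="$(node -p 'require("crypto").randomBytes(24).toString("base64")')"
```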
Prerequisites
- You have an active Red Hat OpenShift Container Platform account.
- Your administrator has installed the Red Hat Developer Hub Operator in OpenShift Container Platform. For more information, see Installing the Red Hat Developer Hub Operator.
- You have created the Red Hat Developer Hub CR in OpenShift Container Platform.
Procedure
- From the Developer perspective in the OpenShift Container Platform web console, select the Topology view, and click the Open URL icon on the Developer Hub pod to identify your Developer Hub external URL: <RHDH_URL>.
- From the Developer perspective in the OpenShift Container Platform web console, select the ConfigMaps view.
- Click Create ConfigMap.
Select the YAML view option in Configure via and use the following example as a base template to create a ConfigMap object, such as app-config-rhdh.yaml:
```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: app-config-rhdh
data:
  "app-config-rhdh.yaml": |
    app:
      title: Red Hat Developer Hub
      baseUrl: <RHDH_URL> 1
    backend:
      auth:
        externalAccess:
          - type: legacy
            options:
              subject: legacy-default-config
              secret: "${BACKEND_SECRET}" 2
      baseUrl: <RHDH_URL> 3
      cors:
        origin: <RHDH_URL> 4
```
1. Set the external URL of your Red Hat Developer Hub instance.
2. Use an environment variable exposing an OpenShift Container Platform secret to define the mandatory Developer Hub backend authentication key.
3. Set the external URL of your Red Hat Developer Hub instance.
4. Set the external URL of your Red Hat Developer Hub instance.
- Click Create.
- Select the Secrets view.
- Click Create Key/value Secret.
- Create a secret named secrets-rhdh.
- Add a key named BACKEND_SECRET and a base64 encoded string as a value. Use a unique value for each Red Hat Developer Hub instance. For example, you can use the following command to generate a key from your terminal:
```bash
node -p 'require("crypto").randomBytes(24).toString("base64")'
```
- Click Create.
- Select the Topology view.
Click the overflow menu for the Red Hat Developer Hub instance that you want to use and select Edit Backstage to load the YAML view of the Red Hat Developer Hub instance.
In the CR, enter the name of the custom application configuration config map as the value for the spec.application.appConfig.configMaps field, and enter the name of your secret as the value for the spec.application.extraEnvs.secrets field. For example:
```yaml
apiVersion: rhdh.redhat.com/v1alpha1
kind: Backstage
metadata:
  name: example
spec:
  application:
    appConfig:
      mountPath: /opt/app-root/src
      configMaps:
        - name: app-config-rhdh
    extraEnvs:
      secrets:
        - name: secrets-rhdh
    extraFiles:
      mountPath: /opt/app-root/src
    replicas: 1
    route:
      enabled: true
  database:
    enableLocalDb: true
```
- Click Save.
- Navigate back to the Topology view and wait for the Red Hat Developer Hub pod to start.
- Click the Open URL icon to use the Red Hat Developer Hub platform with the configuration changes.
Additional resources
- For more information about roles and responsibilities in Developer Hub, see Role-Based Access Control (RBAC) in Red Hat Developer Hub.
Chapter 2. Configuring external PostgreSQL databases
As an administrator, you can configure and use external PostgreSQL databases in Red Hat Developer Hub. You can use a PostgreSQL certificate file to configure an external PostgreSQL instance using the Operator or Helm Chart.
Developer Hub supports the configuration of external PostgreSQL databases. You can perform maintenance activities, such as backing up your data or configuring high availability (HA) for the external PostgreSQL databases.
By default, the Red Hat Developer Hub Operator or Helm Chart creates a local PostgreSQL database. However, this configuration is not suitable for production environments. For production deployments, disable the creation of the local database and configure Developer Hub to connect to an external PostgreSQL instance instead.
2.1. Configuring an external PostgreSQL instance using the Operator
You can configure an external PostgreSQL instance using the Red Hat Developer Hub Operator. By default, the Operator creates and manages a local instance of PostgreSQL in the same namespace where you have deployed the RHDH instance. However, you can change this default setting to configure an external PostgreSQL database server, for example, Amazon Web Services (AWS) Relational Database Service (RDS) or Azure database.
Prerequisites
- You are using a supported version of PostgreSQL. For more information, see the Product life cycle page.
You have the following details:
- db-host: Denotes your PostgreSQL instance Domain Name System (DNS) or IP address
- db-port: Denotes your PostgreSQL instance port number, such as 5432
- username: Denotes the user name to connect to your PostgreSQL instance
- password: Denotes the password to connect to your PostgreSQL instance
- You have installed the Red Hat Developer Hub Operator.
- Optional: You have a CA certificate, Transport Layer Security (TLS) private key, and TLS certificate so that you can secure your database connection by using the TLS protocol. For more information, refer to your PostgreSQL vendor documentation.
By default, Developer Hub uses a database for each plugin and automatically creates it if none is found. You might need the Create Database privilege in addition to PSQL Database privileges for configuring an external PostgreSQL instance.
Procedure
Optional: Create a certificate secret to configure your PostgreSQL instance with a TLS connection:
```bash
cat <<EOF | oc -n <your-namespace> create -f -
apiVersion: v1
kind: Secret
metadata:
  name: <crt-secret> 1
type: Opaque
stringData:
  postgres-ca.pem: |-
    -----BEGIN CERTIFICATE-----
    <ca-certificate-key> 2
  postgres-key.key: |-
    -----BEGIN CERTIFICATE-----
    <tls-private-key> 3
  postgres-crt.pem: |-
    -----BEGIN CERTIFICATE-----
    <tls-certificate-key> 4
  # ...
EOF
```
Create a credential secret to connect with the PostgreSQL instance:
```bash
cat <<EOF | oc -n <your-namespace> create -f -
apiVersion: v1
kind: Secret
metadata:
  name: <cred-secret> 1
type: Opaque
stringData: 2
  POSTGRES_PASSWORD: <password>
  POSTGRES_PORT: "<db-port>"
  POSTGRES_USER: <username>
  POSTGRES_HOST: <db-host>
  PGSSLMODE: <ssl-mode> # for TLS connection 3
  NODE_EXTRA_CA_CERTS: <abs-path-to-pem-file> # for TLS connection, e.g. /opt/app-root/src/postgres-crt.pem 4
EOF
```
1. Provide the name of the credential secret.
2. Provide credential data to connect with your PostgreSQL instance.
3. Optional: Provide the value based on the required Secure Sockets Layer (SSL) mode.
4. Optional: Provide the value only if you need a TLS connection for your PostgreSQL instance.
Create a Backstage custom resource (CR):
```bash
cat <<EOF | oc -n <your-namespace> create -f -
apiVersion: rhdh.redhat.com/v1alpha1
kind: Backstage
metadata:
  name: <backstage-instance-name>
spec:
  database:
    enableLocalDb: false 1
  application:
    extraFiles:
      mountPath: <path> # e.g. /opt/app-root/src
      secrets:
        - name: <crt-secret> 2
          key: postgres-crt.pem, postgres-ca.pem, postgres-key.key # key name as in <crt-secret> Secret
    extraEnvs:
      secrets:
        - name: <cred-secret> 3
  # ...
EOF
```
Note: The environment variables listed in the Backstage CR work with the Operator default configuration. If you have changed the Operator default configuration, you must reconfigure the Backstage CR accordingly.
- Apply the Backstage CR to the namespace where you have deployed the RHDH instance.
2.2. Configuring an external PostgreSQL instance using the Helm Chart
You can configure an external PostgreSQL instance by using the Helm Chart. By default, the Helm Chart creates and manages a local instance of PostgreSQL in the same namespace where you have deployed the RHDH instance. However, you can change this default setting to configure an external PostgreSQL database server, for example, Amazon Web Services (AWS) Relational Database Service (RDS) or Azure database.
Prerequisites
- You are using a supported version of PostgreSQL. For more information, see the Product life cycle page.
You have the following details:
- db-host: Denotes your PostgreSQL instance Domain Name System (DNS) or IP address
- db-port: Denotes your PostgreSQL instance port number, such as 5432
- username: Denotes the user name to connect to your PostgreSQL instance
- password: Denotes the password to connect to your PostgreSQL instance
- You have installed the RHDH application by using the Helm Chart.
- Optional: You have a CA certificate, Transport Layer Security (TLS) private key, and TLS certificate so that you can secure your database connection by using the TLS protocol. For more information, refer to your PostgreSQL vendor documentation.
By default, Developer Hub uses a database for each plugin and automatically creates it if none is found. You might need the Create Database privilege in addition to PSQL Database privileges for configuring an external PostgreSQL instance.
Procedure
Optional: Create a certificate secret to configure your PostgreSQL instance with a TLS connection:
```bash
cat <<EOF | oc -n <your-namespace> create -f -
apiVersion: v1
kind: Secret
metadata:
  name: <crt-secret> 1
type: Opaque
stringData:
  postgres-ca.pem: |-
    -----BEGIN CERTIFICATE-----
    <ca-certificate-key> 2
  postgres-key.key: |-
    -----BEGIN CERTIFICATE-----
    <tls-private-key> 3
  postgres-crt.pem: |-
    -----BEGIN CERTIFICATE-----
    <tls-certificate-key> 4
  # ...
EOF
```
Create a credential secret to connect with the PostgreSQL instance:
```bash
cat <<EOF | oc -n <your-namespace> create -f -
apiVersion: v1
kind: Secret
metadata:
  name: <cred-secret> 1
type: Opaque
stringData: 2
  POSTGRES_PASSWORD: <password>
  POSTGRES_PORT: "<db-port>"
  POSTGRES_USER: <username>
  POSTGRES_HOST: <db-host>
  PGSSLMODE: <ssl-mode> # for TLS connection 3
  NODE_EXTRA_CA_CERTS: <abs-path-to-pem-file> # for TLS connection, e.g. /opt/app-root/src/postgres-crt.pem 4
EOF
```
1. Provide the name of the credential secret.
2. Provide credential data to connect with your PostgreSQL instance.
3. Optional: Provide the value based on the required Secure Sockets Layer (SSL) mode.
4. Optional: Provide the value only if you need a TLS connection for your PostgreSQL instance.
Configure your PostgreSQL instance in the Helm configuration file named values.yaml:
```yaml
# ...
upstream:
  postgresql:
    enabled: false  # disable PostgreSQL instance creation 1
    auth:
      existingSecret: <cred-secret>  # inject credentials secret to Backstage 2
  backstage:
    appConfig:
      backend:
        database:
          connection:  # configure Backstage DB connection parameters
            host: ${POSTGRES_HOST}
            port: ${POSTGRES_PORT}
            user: ${POSTGRES_USER}
            password: ${POSTGRES_PASSWORD}
            ssl:
              rejectUnauthorized: true
              ca:
                $file: /opt/app-root/src/postgres-ca.pem
              key:
                $file: /opt/app-root/src/postgres-key.key
              cert:
                $file: /opt/app-root/src/postgres-crt.pem
    extraEnvVarsSecrets:
      - <cred-secret>  # inject credentials secret to Backstage 3
    extraEnvVars:
      - name: BACKEND_SECRET
        valueFrom:
          secretKeyRef:
            key: backend-secret
            name: '{{ include "janus-idp.backend-secret-name" $ }}'
    extraVolumeMounts:
      - mountPath: /opt/app-root/src/dynamic-plugins-root
        name: dynamic-plugins-root
      - mountPath: /opt/app-root/src/postgres-crt.pem
        name: postgres-crt  # inject TLS certificate to Backstage cont. 4
        subPath: postgres-crt.pem
      - mountPath: /opt/app-root/src/postgres-ca.pem
        name: postgres-ca  # inject CA certificate to Backstage cont. 5
        subPath: postgres-ca.pem
      - mountPath: /opt/app-root/src/postgres-key.key
        name: postgres-key  # inject TLS private key to Backstage cont. 6
        subPath: postgres-key.key
    extraVolumes:
      - ephemeral:
          volumeClaimTemplate:
            spec:
              accessModes:
                - ReadWriteOnce
              resources:
                requests:
                  storage: 1Gi
        name: dynamic-plugins-root
      - configMap:
          defaultMode: 420
          name: dynamic-plugins
          optional: true
        name: dynamic-plugins
      - name: dynamic-plugins-npmrc
        secret:
          defaultMode: 420
          optional: true
          secretName: dynamic-plugins-npmrc
      - name: postgres-crt
        secret:
          secretName: <crt-secret> 7
# ...
```
1. Set the value of the upstream.postgresql.enabled parameter to false to disable creating local PostgreSQL instances.
2. Provide the name of the credential secret.
3. Provide the name of the credential secret.
4. Optional: Provide the name of the TLS certificate only for a TLS connection.
5. Optional: Provide the name of the CA certificate only for a TLS connection.
6. Optional: Provide the name of the TLS private key only if your TLS connection requires a private key.
7. Provide the name of the certificate secret if you have configured a TLS connection.
Apply the configuration changes in your Helm configuration file named values.yaml:
```bash
helm upgrade -n <your-namespace> <your-deploy-name> openshift-helm-charts/redhat-developer-hub -f values.yaml --version 1.3.0
```
2.3. Migrating local databases to an external database server using the Operator
By default, Red Hat Developer Hub hosts the data for each plugin in a PostgreSQL database. When you fetch the list of databases, you might see multiple databases based on the number of plugins configured in Developer Hub. You can migrate the data from an RHDH instance hosted on a local PostgreSQL server to an external PostgreSQL service, such as AWS RDS, Azure database, or Crunchy database. To migrate the data from each RHDH instance, you can use PostgreSQL utilities, such as pg_dump with psql or pgAdmin.
The following procedure uses a database copy script to do a quick migration.
Procedure
Configure port forwarding for the local PostgreSQL database pod by running the following command on a terminal:
```bash
oc port-forward -n <your-namespace> <pgsql-pod-name> <forward-to-port>:<forward-from-port>
```
Where:
- The <pgsql-pod-name> variable denotes the name of a PostgreSQL pod with the format backstage-psql-<deployment-name>-<_index>.
- The <forward-to-port> variable denotes the port of your choice to forward PostgreSQL data to.
- The <forward-from-port> variable denotes the local PostgreSQL instance port, such as 5432.
Example: Configuring port forwarding
```bash
oc port-forward -n developer-hub backstage-psql-developer-hub-0 15432:5432
```
Make a copy of the following db_copy.sh script and edit the details based on your configuration:
```bash
#!/bin/bash

to_host=<db-service-host> 1
to_port=5432 2
to_user=postgres 3

from_host=127.0.0.1 4
from_port=15432 5
from_user=postgres 6

allDB=("backstage_plugin_app" "backstage_plugin_auth" "backstage_plugin_catalog" "backstage_plugin_permission" "backstage_plugin_scaffolder" "backstage_plugin_search") 7

for db in ${!allDB[@]}; do
  db=${allDB[$db]}
  echo Copying database: $db
  PGPASSWORD=$TO_PSW psql -h $to_host -p $to_port -U $to_user -c "create database $db;"
  pg_dump -h $from_host -p $from_port -U $from_user -d $db | PGPASSWORD=$TO_PSW psql -h $to_host -p $to_port -U $to_user -d $db
done
```
1. The destination host name, for example, <db-instance-name>.rds.amazonaws.com.
2. The destination port, such as 5432.
3. The destination server username, for example, postgres.
4. The source host name, such as 127.0.0.1.
5. The source port number, such as the <forward-to-port> variable.
6. The source server username, for example, postgres.
7. The names of the databases to import, in double quotes and separated by spaces, for example, ("backstage_plugin_app" "backstage_plugin_auth" "backstage_plugin_catalog" "backstage_plugin_permission" "backstage_plugin_scaffolder" "backstage_plugin_search").
Create a destination database for copying the data:
```bash
TO_PSW=<destination-db-password> /bin/bash /path/to/db_copy.sh 1
```
1. The <destination-db-password> variable denotes the password to connect to the destination database.
Note: You can stop port forwarding when the copying of the data is complete. For more information about handling large databases and using compression tools, see the Handling Large Databases section on the PostgreSQL website.
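For instance, a minimal sketch of dumping and restoring a single plugin database with compression, reusing the hosts and ports from the db_copy.sh script above (the archive file name is illustrative):

```bash
# Dump one database through gzip to keep the transfer small, then restore it
# on the destination server; values mirror the db_copy.sh defaults above.
pg_dump -h 127.0.0.1 -p 15432 -U postgres -d backstage_plugin_catalog | gzip > backstage_plugin_catalog.sql.gz
gunzip -c backstage_plugin_catalog.sql.gz | PGPASSWORD=$TO_PSW psql -h <db-service-host> -p 5432 -U postgres -d backstage_plugin_catalog
```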
- Reconfigure your Backstage custom resource (CR). For more information, see Configuring an external PostgreSQL instance using the Operator.
Check that the following code is present at the end of your Backstage CR after reconfiguration:
```yaml
# ...
spec:
  database:
    enableLocalDb: false
  application:
    # ...
    extraFiles:
      secrets:
        - name: <crt-secret>
          key: postgres-crt.pem # key name as in <crt-secret> Secret
    extraEnvs:
      secrets:
        - name: <cred-secret>
# ...
```
Note: Reconfiguring the Backstage CR deletes the corresponding StatefulSet and Pod objects, but does not delete the PersistentVolumeClaim object. Use the following command to delete the local PersistentVolumeClaim object:
```bash
oc -n developer-hub delete pvc <local-psql-pvc-name>
```
where the <local-psql-pvc-name> variable is in the data-<psql-pod-name> format.
format.- Apply the configuration changes.
Verification
Verify that your RHDH instance is running with the migrated data and does not contain the local PostgreSQL database by running the following command:
```bash
oc get pods -n <your-namespace>
```
Check the output for the following details:
- The backstage-developer-hub-xxx pod is in running state.
- The backstage-psql-developer-hub-0 pod is not available.
You can also verify these details using the Topology view in the OpenShift Container Platform web console.
Chapter 3. Configuring an RHDH instance with a TLS connection in Kubernetes
You can configure an RHDH instance with a Transport Layer Security (TLS) connection in a Kubernetes cluster, such as an Azure Red Hat OpenShift (ARO) cluster, any cluster from a supported cloud provider, or your own cluster with proper configuration. However, you must use a public Certificate Authority (CA)-signed certificate to configure your Kubernetes cluster.
Prerequisites
- You have set up an Azure Red Hat OpenShift (ARO) cluster with a public CA-signed certificate. For more information about obtaining CA certificates, refer to your vendor documentation.
You have created a namespace and set up a service account with proper read permissions on resources.
Example: Kubernetes manifest for role-based access control
```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backstage-read-only
rules:
  - apiGroups:
      - '*'
    resources:
      - pods
      - configmaps
      - services
      - deployments
      - replicasets
      - horizontalpodautoscalers
      - ingresses
      - statefulsets
      - limitranges
      - resourcequotas
      - daemonsets
    verbs:
      - get
      - list
      - watch
#...
```
- You have obtained the secret and the service CA certificate associated with your service account.
You have created some resources and added annotations to them so they can be discovered by the Kubernetes plugin. You can apply these Kubernetes annotations, as shown in the sketch after this list:
- backstage.io/kubernetes-id to label components
- backstage.io/kubernetes-namespace to label namespaces
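For illustration, a minimal sketch of a workload labeled so the plugin can match it to a catalog component; the workload name, component name, and image are hypothetical, and you should check the plugin documentation for whether your setup matches on labels or annotations:

```yaml
# Hypothetical Deployment carrying the backstage.io/kubernetes-id label on
# both the workload and its pods, pointing at a catalog component named
# "my-component".
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
  labels:
    backstage.io/kubernetes-id: my-component
spec:
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
        backstage.io/kubernetes-id: my-component
    spec:
      containers:
        - name: my-service
          image: registry.example.com/my-service:latest  # hypothetical image
```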
Procedure
Enable the Kubernetes plugins in the dynamic-plugins-rhdh.yaml file:
```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: dynamic-plugins-rhdh
data:
  dynamic-plugins.yaml: |
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - package: ./dynamic-plugins/dist/backstage-plugin-kubernetes-backend-dynamic
        disabled: false 1
      - package: ./dynamic-plugins/dist/backstage-plugin-kubernetes
        disabled: false 2
# ...
```
Note: The backstage-plugin-kubernetes plugin is currently in Technology Preview. As an alternative, you can use the ./dynamic-plugins/dist/backstage-plugin-topology-dynamic plugin, which is Generally Available (GA).
Set the Kubernetes cluster details and configure the catalog sync options in the app-config-rhdh.yaml file:
```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: app-config-rhdh
data:
  "app-config-rhdh.yaml": |
    # ...
    catalog:
      rules:
        - allow: [Component, System, API, Resource, Location]
      providers:
        kubernetes:
          openshift:
            cluster: openshift
            processor:
              namespaceOverride: default
              defaultOwner: guests
            schedule:
              frequency:
                seconds: 30
              timeout:
                seconds: 5
    kubernetes:
      serviceLocatorMethod:
        type: 'multiTenant'
      clusterLocatorMethods:
        - type: 'config'
          clusters:
            - url: <target-cluster-api-server-url> 1
              name: openshift
              authProvider: 'serviceAccount'
              skipTLSVerify: false 2
              skipMetricsLookup: true
              dashboardUrl: <target-cluster-console-url> 3
              dashboardApp: openshift
              serviceAccountToken: ${K8S_SERVICE_ACCOUNT_TOKEN} 4
              caData: ${K8S_CONFIG_CA_DATA} 5
    # ...
```
1. The base URL to the Kubernetes control plane. You can run the kubectl cluster-info command to get the base URL.
2. Set the value of this parameter to false to enable the verification of the TLS certificate.
3. Optional: The link to the Kubernetes dashboard managing the ARO cluster.
4. Optional: Pass the service account token using a K8S_SERVICE_ACCOUNT_TOKEN environment variable that you can define in your secrets-rhdh secret.
5. Pass the CA data using a K8S_CONFIG_CA_DATA environment variable that you can define in your secrets-rhdh secret.
- Save the configuration changes.
Verification
Run the RHDH application to import your catalog:
```bash
kubectl -n rhdh-operator get pods -w
```
- Verify that the pod log shows no errors for your configuration.
- Go to Catalog and check the component page in the Developer Hub instance to verify the cluster connection and the presence of your created resources.
If you encounter connection errors, such as certificate issues or permissions errors, check the message box on the component page or view the logs of the pod.
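For example, a hedged sketch of pulling the pod logs, assuming the deployment and container names follow the patterns used elsewhere in this guide; adjust them to your instance:

```bash
# Stream the backend container logs of the Developer Hub deployment.
kubectl logs deploy/<your-rhdh-cr>-backstage -c backstage-backend -n <your-namespace> -f
```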
Chapter 4. Telemetry data collection
The telemetry data collection feature helps in collecting and analyzing the telemetry data to improve your experience with Red Hat Developer Hub. This feature is enabled by default.
As an administrator, you can disable the telemetry data collection feature based on your needs. For example, in an air-gapped environment, you can disable this feature to avoid needless outbound requests affecting the responsiveness of the RHDH application. For more details, see the Disabling telemetry data collection in RHDH section.
Red Hat collects and analyzes the following data:
- Events of page visits and clicks on links or buttons.
- System-related information, for example, locale, timezone, user agent including browser and OS details.
- Page-related information, for example, title, category, extension name, URL, path, referrer, and search parameters.
- Anonymized IP addresses, recorded as 0.0.0.0.
- Anonymized username hashes, which are unique identifiers used solely to identify the number of unique users of the RHDH application.
With RHDH, you can customize the telemetry data collection feature and the telemetry Segment source configuration based on your needs.
4.1. Disabling telemetry data collection in RHDH
To disable telemetry data collection, you must disable the analytics-provider-segment plugin either using the Helm Chart or the Red Hat Developer Hub Operator configuration.
4.1.1. Disabling telemetry data collection using the Helm Chart
You can disable the telemetry data collection feature by using the Helm Chart.
Prerequisites
- You have logged in as an administrator in the OpenShift Container Platform web console.
- You have installed Red Hat Developer Hub on OpenShift Container Platform using the Helm Chart.
Procedure
- In the Developer perspective of the OpenShift Container Platform web console, go to the Helm view to see the list of Helm releases.
Click the overflow menu on the Helm release that you want to use and select Upgrade.
NoteYou can also create a new Helm release by clicking the Create button and edit the configuration to disable telemetry.
Use either the Form view or YAML view to edit the Helm configuration:
Using Form view
- Expand Root Schema → global → Dynamic plugins configuration. → List of dynamic plugins that should be installed in the backstage application.
- Click the Add list of dynamic plugins that should be installed in the backstage application. link.
Perform one of the following steps:
If you have not configured the plugin, add the following value in the Package specification of the dynamic plugin to install. It should be usable by the npm pack command. field:
./dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment
- If you have configured the plugin, find the Package specification of the dynamic plugin to install. It should be usable by the npm pack command. field with the ./dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment value.
- Select the Disable the plugin checkbox.
- Click Upgrade.
Using YAML view
Perform one of the following steps:
If you have not configured the plugin, add the following YAML code in your values.yaml Helm configuration file:
```yaml
# ...
global:
  dynamic:
    plugins:
      - package: './dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment'
        disabled: true
# ...
```
- If you have configured the plugin, search it in your Helm configuration and set the value of the plugins.disabled parameter to true.
- Click Upgrade.
4.1.2. Disabling telemetry data collection using the Operator
You can disable the telemetry data collection feature by using the Operator.
Prerequisites
- You have logged in as an administrator in the OpenShift Container Platform web console.
- You have installed Red Hat Developer Hub on OpenShift Container Platform using the Operator.
Procedure
Perform one of the following steps:
- If you have created the dynamic-plugins-rhdh ConfigMap file and not configured the analytics-provider-segment plugin, add the plugin to the list of plugins and set its plugins.disabled parameter to true.
- If you have created the dynamic-plugins-rhdh ConfigMap file and configured the analytics-provider-segment plugin, search the plugin in the list of plugins and set its plugins.disabled parameter to true.
- If you have not created the ConfigMap file, create it with the following YAML code:
```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: dynamic-plugins-rhdh
data:
  dynamic-plugins.yaml: |
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - package: './dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment'
        disabled: true
```
Set the value of the dynamicPluginsConfigMapName parameter to the name of the ConfigMap file in your Backstage custom resource:
```yaml
# ...
spec:
  application:
    dynamicPluginsConfigMapName: dynamic-plugins-rhdh
# ...
```
- Save the configuration changes.
4.2. Enabling telemetry data collection in RHDH
The telemetry data collection feature is enabled by default. However, if you have disabled the feature and want to re-enable it, you must enable the analytics-provider-segment plugin either by using the Helm Chart or the Red Hat Developer Hub Operator configuration.
4.2.1. Enabling telemetry data collection using the Helm Chart
You can enable the telemetry data collection feature by using the Helm Chart.
Prerequisites
- You have logged in as an administrator in the OpenShift Container Platform web console.
- You have installed Red Hat Developer Hub on OpenShift Container Platform using the Helm Chart.
Procedure
- In the Developer perspective of the OpenShift Container Platform web console, go to the Helm view to see the list of Helm releases.
Click the overflow menu on the Helm release that you want to use and select Upgrade.
NoteYou can also create a new Helm release by clicking the Create button and edit the configuration to enable telemetry.
Use either the Form view or YAML view to edit the Helm configuration:
Using Form view
- Expand Root Schema → global → Dynamic plugins configuration. → List of dynamic plugins that should be installed in the backstage application.
- Click the Add list of dynamic plugins that should be installed in the backstage application. link.
Perform one of the following steps:
If you have not configured the plugin, add the following value in the Package specification of the dynamic plugin to install. It should be usable by the npm pack command. field:
./dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment
- If you have configured the plugin, find the Package specification of the dynamic plugin to install. It should be usable by the npm pack command. field with the ./dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment value.
- Clear the Disable the plugin checkbox.
- Click Upgrade.
Using YAML view
Perform one of the following steps:
If you have not configured the plugin, add the following YAML code in your Helm configuration file:
```yaml
# ...
global:
  dynamic:
    plugins:
      - package: './dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment'
        disabled: false
# ...
```
- If you have configured the plugin, search it in your Helm configuration and set the value of the plugins.disabled parameter to false.
- Click Upgrade.
4.2.2. Enabling telemetry data collection using the Operator
You can enable the telemetry data collection feature by using the Operator.
Prerequisites
- You have logged in as an administrator in the OpenShift Container Platform web console.
- You have installed Red Hat Developer Hub on OpenShift Container Platform using the Operator.
Procedure
Perform one of the following steps:
- If you have created the dynamic-plugins-rhdh ConfigMap file and not configured the analytics-provider-segment plugin, add the plugin to the list of plugins and set its plugins.disabled parameter to false.
- If you have created the dynamic-plugins-rhdh ConfigMap file and configured the analytics-provider-segment plugin, search the plugin in the list of plugins and set its plugins.disabled parameter to false.
- If you have not created the ConfigMap file, create it with the following YAML code:
```yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: dynamic-plugins-rhdh
data:
  dynamic-plugins.yaml: |
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - package: './dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment'
        disabled: false
```
Set the value of the dynamicPluginsConfigMapName parameter to the name of the ConfigMap file in your Backstage custom resource:
```yaml
# ...
spec:
  application:
    dynamicPluginsConfigMapName: dynamic-plugins-rhdh
# ...
```
- Save the configuration changes.
4.3. Customizing telemetry Segment source
The analytics-provider-segment plugin sends the collected telemetry data to Red Hat by default. However, you can configure a new Segment source that receives telemetry data based on your needs. For configuration, you need a unique Segment write key that points to the Segment source.
By configuring a new Segment source, you can collect and analyze the same set of data that is mentioned in the Telemetry data collection section. You might also need to create your own telemetry data collection notice for your application users.
4.3.1. Customizing telemetry Segment source using the Helm Chart
You can configure integration with your Segment source by using the Helm Chart.
Prerequisites
- You have logged in as an administrator in the OpenShift Container Platform web console.
- You have installed Red Hat Developer Hub on OpenShift Container Platform using the Helm Chart.
Procedure
- In the Developer perspective of the OpenShift Container Platform web console, go to the Helm view to see the list of Helm releases.
- Click the overflow menu on the Helm release that you want to use and select Upgrade.
Use either the Form view or YAML view to edit the Helm configuration:
Using Form view
- Expand Root Schema → Backstage Chart Schema → Backstage Parameters → Backstage container environment variables.
- Click the Add Backstage container environment variables link.
Enter the name and value of the Segment key.
- Click Upgrade.
Using YAML view
Add the following YAML code in your Helm configuration file:
```yaml
# ...
upstream:
  backstage:
    extraEnvVars:
      - name: SEGMENT_WRITE_KEY
        value: <segment_key> 1
# ...
```
1. Replace <segment_key> with a unique identifier for your Segment source.
- Click Upgrade.
4.3.2. Customizing telemetry Segment source using the Operator
You can configure integration with your Segment source by using the Operator.
Prerequisites
- You have logged in as an administrator in the OpenShift Container Platform web console.
- You have installed Red Hat Developer Hub on OpenShift Container Platform using the Operator.
Procedure
Add the following YAML code in your Backstage custom resource (CR):
```yaml
# ...
spec:
  application:
    extraEnvs:
      envs:
        - name: SEGMENT_WRITE_KEY
          value: <segment_key> 1
# ...
```
1. Replace <segment_key> with a unique identifier for your Segment source.
- Save the configuration changes.
Chapter 5. Enabling observability for Red Hat Developer Hub on OpenShift Container Platform
In OpenShift Container Platform, metrics are exposed through an HTTP service endpoint under the /metrics canonical name. You can create a ServiceMonitor custom resource (CR) to scrape metrics from a service endpoint in a user-defined project.
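Before wiring up monitoring, you can sanity-check the endpoint yourself. A hedged sketch, assuming the service name follows the backstage-<custom_resource_name> pattern used by the labels later in this chapter and that the backend listens on port 7007 as configured elsewhere in this guide:

```bash
# Forward the backend port locally and fetch a few metrics lines; adjust the
# service name, namespace, and port to your deployment.
oc port-forward -n <project_name> svc/backstage-<custom_resource_name> 7007:7007 &
curl -s http://localhost:7007/metrics | head
```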
5.1. Enabling metrics monitoring in a Helm chart installation on an OpenShift Container Platform cluster
You can enable and view metrics for a Red Hat Developer Hub Helm deployment from the Developer perspective of the OpenShift Container Platform web console.
Prerequisites
- Your OpenShift Container Platform cluster has monitoring for user-defined projects enabled.
- You have installed Red Hat Developer Hub on OpenShift Container Platform using the Helm chart.
Procedure
- From the Developer perspective in the OpenShift Container Platform web console, select the Topology view.
Click the overflow menu of the Red Hat Developer Hub Helm chart, and select Upgrade.
On the Upgrade Helm Release page, select the YAML view option in Configure via, then configure the metrics section in the YAML, as shown in the following example:
```yaml
upstream:
  # ...
  metrics:
    serviceMonitor:
      enabled: true
      path: /metrics
# ...
```
- Click Upgrade.
Verification
- From the Developer perspective in the OpenShift Container Platform web console, select the Observe view.
- Click the Metrics tab to view metrics for Red Hat Developer Hub pods.
5.2. Enabling metrics monitoring in a Red Hat Developer Hub Operator installation on an OpenShift Container Platform cluster
You can enable and view metrics for an Operator-installed Red Hat Developer Hub instance from the Developer perspective of the OpenShift Container Platform web console.
Prerequisites
- Your OpenShift Container Platform cluster has monitoring for user-defined projects enabled.
- You have installed Red Hat Developer Hub on OpenShift Container Platform using the Red Hat Developer Hub Operator.
- You have installed the OpenShift CLI (oc).
Procedure
Currently, the Red Hat Developer Hub Operator does not support creating a ServiceMonitor custom resource (CR) by default. You must complete the following steps to create a ServiceMonitor CR to scrape metrics from the endpoint.
Create the ServiceMonitor CR as a YAML file:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: <custom_resource_name> 1
  namespace: <project_name> 2
  labels:
    app.kubernetes.io/instance: <custom_resource_name>
    app.kubernetes.io/name: backstage
spec:
  namespaceSelector:
    matchNames:
      - <project_name>
  selector:
    matchLabels:
      rhdh.redhat.com/app: backstage-<custom_resource_name>
  endpoints:
    - port: backend
      path: '/metrics'
```
Apply the ServiceMonitor CR by running the following command:
```bash
oc apply -f <filename>
```
Verification
- From the Developer perspective in the OpenShift Container Platform web console, select the Observe view.
- Click the Metrics tab to view metrics for Red Hat Developer Hub pods.
Chapter 6. Running the RHDH application behind a corporate proxy
You can run the RHDH application behind a corporate proxy by setting any of the following environment variables before starting the application:
- HTTP_PROXY: Denotes the proxy to use for HTTP requests.
- HTTPS_PROXY: Denotes the proxy to use for HTTPS requests.
Additionally, you can set the NO_PROXY environment variable to exclude certain domains from proxying. The variable value is a comma-separated list of hostnames that do not require a proxy to be reached, even if one is specified.
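For example, when starting the application manually from a shell, you might export the variables first; the proxy addresses below reuse the example values from the Helm section that follows:

```bash
# Set proxy variables in the environment before launching the application.
export HTTP_PROXY='http://10.10.10.105:3128'
export HTTPS_PROXY='http://10.10.10.106:3128'
export NO_PROXY='localhost,example.org'
```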
6.1. Configuring proxy information in Helm deployment
For Helm-based deployment, either a developer or a cluster administrator with permissions to create resources in the cluster can configure the proxy variables in a values.yaml Helm configuration file.
Prerequisites
- You have installed the Red Hat Developer Hub application.
Procedure
Set the proxy information in your Helm configuration file:
```yaml
upstream:
  backstage:
    extraEnvVars:
      - name: HTTP_PROXY
        value: '<http_proxy_url>'
      - name: HTTPS_PROXY
        value: '<https_proxy_url>'
      - name: NO_PROXY
        value: '<no_proxy_settings>'
```
Where:
- <http_proxy_url>: Denotes a variable that you must replace with the HTTP proxy URL.
- <https_proxy_url>: Denotes a variable that you must replace with the HTTPS proxy URL.
- <no_proxy_settings>: Denotes a variable that you must replace with comma-separated URLs, which you want to exclude from proxying, for example, foo.com,baz.com.
Example: Setting proxy variables using Helm Chart
```yaml
upstream:
  backstage:
    extraEnvVars:
      - name: HTTP_PROXY
        value: 'http://10.10.10.105:3128'
      - name: HTTPS_PROXY
        value: 'http://10.10.10.106:3128'
      - name: NO_PROXY
        value: 'localhost,example.org'
```
- Save the configuration changes.
6.2. Configuring proxy information in Operator deployment
For Operator-based deployment, the approach you use for proxy configuration is based on your role:
- As a cluster administrator with access to the Operator namespace, you can configure the proxy variables in the Operator’s default ConfigMap file. This configuration applies the proxy settings to all the users of the Operator.
- As a developer, you can configure the proxy variables in a custom resource (CR) file. This configuration applies the proxy settings to the RHDH application created from that CR.
Prerequisites
- You have installed the Red Hat Developer Hub application.
Procedure
Perform one of the following steps based on your role:
As an administrator, set the proxy information in the Operator’s default ConfigMap file:
- Search for a ConfigMap file named backstage-default-config in the default namespace rhdh-operator and open it.
- Find the deployment.yaml key.
- Set the value of the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables in the Deployment spec as shown in the following example:
Example: Setting proxy variables in a ConfigMap file
```yaml
# Other fields omitted
deployment.yaml: |-
  apiVersion: apps/v1
  kind: Deployment
  spec:
    template:
      spec:
        # Other fields omitted
        initContainers:
          - name: install-dynamic-plugins
            # command omitted
            env:
              - name: NPM_CONFIG_USERCONFIG
                value: /opt/app-root/src/.npmrc.dynamic-plugins
              - name: HTTP_PROXY
                value: 'http://10.10.10.105:3128'
              - name: HTTPS_PROXY
                value: 'http://10.10.10.106:3128'
              - name: NO_PROXY
                value: 'localhost,example.org'
        # Other fields omitted
        containers:
          - name: backstage-backend
            # Other fields omitted
            env:
              - name: APP_CONFIG_backend_listen_port
                value: "7007"
              - name: HTTP_PROXY
                value: 'http://10.10.10.105:3128'
              - name: HTTPS_PROXY
                value: 'http://10.10.10.106:3128'
              - name: NO_PROXY
                value: 'localhost,example.org'
```
-
Search for a ConfigMap file named
As a developer, set the proxy information in your custom resource (CR) file as shown in the following example:
Example: Setting proxy variables in a CR file
```yaml
spec:
  # Other fields omitted
  application:
    extraEnvs:
      envs:
        - name: HTTP_PROXY
          value: 'http://10.10.10.105:3128'
        - name: HTTPS_PROXY
          value: 'http://10.10.10.106:3128'
        - name: NO_PROXY
          value: 'localhost,example.org'
```
- Save the configuration changes.
Chapter 7. Red Hat Developer Hub integration with Amazon Web Services (AWS)
You can integrate your Red Hat Developer Hub application with Amazon Web Services (AWS), which can help you streamline your workflows within the AWS ecosystem. Integrating the Developer Hub resources with AWS provides access to a comprehensive suite of tools, services, and solutions.
The integration with AWS requires the deployment of Developer Hub in Elastic Kubernetes Service (EKS) using one of the following methods:
- The Helm chart
- The Red Hat Developer Hub Operator
7.1. Monitoring and logging with Amazon Web Services (AWS) in Red Hat Developer Hub
In the Red Hat Developer Hub, monitoring and logging are facilitated through Amazon Web Services (AWS) integration. With features like Amazon Prometheus for real-time monitoring and Amazon CloudWatch for comprehensive logging, you can ensure the reliability, scalability, and compliance of your Developer Hub application hosted on AWS infrastructure.
This integration enables you to oversee, diagnose, and refine your applications in the Red Hat ecosystem, leading to an improved development and operational journey.
7.1.1. Monitoring with Amazon Prometheus
Red Hat Developer Hub provides Prometheus metrics related to the running application. For more information about enabling or deploying Prometheus for EKS clusters, see Prometheus metrics in the Amazon documentation.
To monitor Developer Hub using Amazon Prometheus, you need to create an Amazon managed service for the Prometheus workspace and configure the ingestion of the Developer Hub Prometheus metrics. For more information, see Create a workspace and Ingest Prometheus metrics to the workspace sections in the Amazon documentation.
After ingesting Prometheus metrics into the created workspace, you can configure the metrics scraping to extract data from pods based on specific pod annotations.
7.1.1.1. Configuring annotations for monitoring
You can configure the annotations for monitoring in both Helm deployment and Operator-backed deployment.
- Helm deployment
To annotate the backstage pod for monitoring, update your values.yaml file as follows:
```yaml
upstream:
  backstage:
    # --- TRUNCATED ---
    podAnnotations:
      prometheus.io/scrape: 'true'
      prometheus.io/path: '/metrics'
      prometheus.io/port: '7007'
      prometheus.io/scheme: 'http'
```
- Operator-backed deployment
Procedure
As an administrator of the operator, edit the default configuration to add Prometheus annotations as follows:
```bash
# Update OPERATOR_NS accordingly
OPERATOR_NS=rhdh-operator
kubectl edit configmap backstage-default-config -n "${OPERATOR_NS}"
```
Find the deployment.yaml key in the ConfigMap and add the annotations to the spec.template.metadata.annotations field as follows:
```yaml
deployment.yaml: |-
  apiVersion: apps/v1
  kind: Deployment
  # --- truncated ---
  spec:
    template:
      # --- truncated ---
      metadata:
        labels:
          rhdh.redhat.com/app:  # placeholder for 'backstage-<cr-name>'
        # --- truncated ---
        annotations:
          prometheus.io/scrape: 'true'
          prometheus.io/path: '/metrics'
          prometheus.io/port: '7007'
          prometheus.io/scheme: 'http'
  # --- truncated ---
```
- Save your changes.
Verification
To verify if the scraping works:
Use kubectl to port-forward the Prometheus console to your local machine as follows:
```bash
kubectl --namespace=prometheus port-forward deploy/prometheus-server 9090
```
- Open your web browser and navigate to http://localhost:9090 to access the Prometheus console.
- Monitor relevant metrics, such as process_cpu_user_seconds_total.
7.1.2. Logging with Amazon CloudWatch logs
Logging within the Red Hat Developer Hub relies on the winston library. By default, logs at the debug level are not recorded. To activate debug logs, you must set the environment variable LOG_LEVEL
to debug in your Red Hat Developer Hub instance.
7.1.2.1. Configuring the application log level
You can configure the application log level in both Helm deployment and Operator-backed deployment.
- Helm deployment
To update the logging level, add the environment variable LOG_LEVEL to your Helm chart’s values.yaml file:
```yaml
upstream:
  backstage:
    # --- Truncated ---
    extraEnvVars:
      - name: LOG_LEVEL
        value: debug
```
- Operator-backed deployment
You can modify the logging level by including the environment variable LOG_LEVEL in your custom resource as follows:
```yaml
spec:
  # Other fields omitted
  application:
    extraEnvs:
      envs:
        - name: LOG_LEVEL
          value: debug
```
7.1.2.2. Retrieving logs from Amazon CloudWatch
CloudWatch Container Insights is used to capture logs and metrics for Amazon EKS. For more information, see the Logging for Amazon EKS documentation.
To capture the logs and metrics, install the Amazon CloudWatch Observability EKS add-on in your cluster. Following the setup of Container Insights, you can access container logs using Logs Insights or Live Tail views.
CloudWatch names the log group where all container logs are consolidated in the following manner:
```
/aws/containerinsights/<ClusterName>/application
```
Following is an example query to retrieve logs from the Developer Hub instance:
```
fields @timestamp, @message, kubernetes.container_name
| filter kubernetes.container_name in ["install-dynamic-plugins", "backstage-backend"]
```
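If you prefer the command line, a hedged sketch of running the same query with the AWS CLI; the cluster name and time window are placeholders you must supply:

```bash
# Start a Logs Insights query against the Container Insights log group and
# fetch its results; replace <ClusterName> and the epoch timestamps.
QUERY_ID=$(aws logs start-query \
  --log-group-name "/aws/containerinsights/<ClusterName>/application" \
  --start-time <start-epoch-seconds> --end-time <end-epoch-seconds> \
  --query-string 'fields @timestamp, @message, kubernetes.container_name | filter kubernetes.container_name in ["install-dynamic-plugins", "backstage-backend"]' \
  --output text --query queryId)
aws logs get-query-results --query-id "$QUERY_ID"
```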
7.2. Using Amazon Cognito as an authentication provider in Red Hat Developer Hub
In this section, Amazon Cognito is an AWS service for adding an authentication layer to Developer Hub. You can sign in directly to Developer Hub using a user pool or federate through a third-party identity provider.
Although Amazon Cognito is not part of the core authentication providers for the Developer Hub, it can be integrated using the generic OpenID Connect (OIDC) provider.
You can configure your Developer Hub in both Helm Chart and Operator-backed deployments.
Prerequisites
You have a User Pool or you have created a new one. For more information about user pools, see Amazon Cognito user pools documentation.
NoteEnsure that you have noted the AWS region where the user pool is located and the user pool ID.
You have created an App Client within your user pool for integrating the hosted UI. For more information, see Setting up the hosted UI with the Amazon Cognito console.
When setting up the hosted UI using the Amazon Cognito console, ensure to make the following adjustments:
- In the Allowed callback URL(s) section, include the URL https://<rhdh_url>/api/auth/oidc/handler/frame. Replace <rhdh_url> with your Developer Hub application’s URL, such as my.rhdh.example.com.
- Similarly, in the Allowed sign-out URL(s) section, add https://<rhdh_url>. Replace <rhdh_url> with your Developer Hub application’s URL, such as my.rhdh.example.com.
- OpenID
- Profile
- Helm deployment
Procedure
Edit or create your custom app-config-rhdh ConfigMap as follows:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-rhdh
data:
  "app-config-rhdh.yaml": |
    # --- Truncated ---
    app:
      title: Red Hat Developer Hub
      signInPage: oidc
    auth:
      environment: production
      session:
        secret: ${AUTH_SESSION_SECRET}
      providers:
        oidc:
          production:
            clientId: ${AWS_COGNITO_APP_CLIENT_ID}
            clientSecret: ${AWS_COGNITO_APP_CLIENT_SECRET}
            metadataUrl: ${AWS_COGNITO_APP_METADATA_URL}
            callbackUrl: ${AWS_COGNITO_APP_CALLBACK_URL}
            scope: 'openid profile email'
            prompt: auto
```
Edit or create your custom secrets-rhdh Secret using the following template:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secrets-rhdh
stringData:
  AUTH_SESSION_SECRET: "my super auth session secret - change me!!!"
  AWS_COGNITO_APP_CLIENT_ID: "my-aws-cognito-app-client-id"
  AWS_COGNITO_APP_CLIENT_SECRET: "my-aws-cognito-app-client-secret"
  AWS_COGNITO_APP_METADATA_URL: "https://cognito-idp.[region].amazonaws.com/[userPoolId]/.well-known/openid-configuration"
  AWS_COGNITO_APP_CALLBACK_URL: "https://[rhdh_dns]/api/auth/oidc/handler/frame"
```
Add references of both the ConfigMap and Secret resources in your values.yaml file:
```yaml
upstream:
  backstage:
    image:
      pullSecrets:
        - rhdh-pull-secret
    podSecurityContext:
      fsGroup: 2000
    extraAppConfig:
      - filename: app-config-rhdh.yaml
        configMapRef: app-config-rhdh
    extraEnvVarsSecrets:
      - secrets-rhdh
```
Upgrade the Helm deployment:
```bash
helm upgrade rhdh \
  openshift-helm-charts/redhat-developer-hub \
  [--version 1.3.0] \
  --values /path/to/values.yaml
```
- Operator-backed deployment
Add the following code to your app-config-rhdh ConfigMap:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-rhdh
data:
  "app-config-rhdh.yaml": |
    # --- Truncated ---
    signInPage: oidc
    auth:
      # Production to disable guest user login
      environment: production
      # Providing an auth.session.secret is needed because the oidc provider requires session support.
      session:
        secret: ${AUTH_SESSION_SECRET}
      providers:
        oidc:
          production:
            # See https://github.com/backstage/backstage/blob/master/plugins/auth-backend-module-oidc-provider/config.d.ts
            clientId: ${AWS_COGNITO_APP_CLIENT_ID}
            clientSecret: ${AWS_COGNITO_APP_CLIENT_SECRET}
            metadataUrl: ${AWS_COGNITO_APP_METADATA_URL}
            callbackUrl: ${AWS_COGNITO_APP_CALLBACK_URL}
            # Minimal set of scopes needed. Feel free to add more if needed.
            scope: 'openid profile email'
            # Note that by default, this provider will use the 'none' prompt, which assumes that you are already logged on in the IDP.
            # You should set prompt to:
            # - auto: will let the IDP decide if you need to log on or if you can skip login when you have an active SSO session
            # - login: will force the IDP to always present a login form to the user
            prompt: auto
```
Add the following code to your secrets-rhdh Secret:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secrets-rhdh
stringData:
  # --- Truncated ---
  # TODO: Change auth session secret.
  AUTH_SESSION_SECRET: "my super auth session secret - change me!!!"
  # TODO: user pool app client ID
  AWS_COGNITO_APP_CLIENT_ID: "my-aws-cognito-app-client-id"
  # TODO: user pool app client Secret
  AWS_COGNITO_APP_CLIENT_SECRET: "my-aws-cognito-app-client-secret"
  # TODO: Replace region and user pool ID
  AWS_COGNITO_APP_METADATA_URL: "https://cognito-idp.[region].amazonaws.com/[userPoolId]/.well-known/openid-configuration"
  # TODO: Replace <rhdh_dns>
  AWS_COGNITO_APP_CALLBACK_URL: "https://[rhdh_dns]/api/auth/oidc/handler/frame"
```
Ensure your Custom Resource contains references to both the app-config-rhdh ConfigMap and secrets-rhdh Secret:
```yaml
apiVersion: rhdh.redhat.com/v1alpha1
kind: Backstage
metadata:
  # TODO: this is the name of your Developer Hub instance
  name: my-rhdh
spec:
  application:
    imagePullSecrets:
      - "rhdh-pull-secret"
    route:
      enabled: false
    appConfig:
      configMaps:
        - name: "app-config-rhdh"
    extraEnvs:
      secrets:
        - name: "secrets-rhdh"
```
Optional: If you have an existing Developer Hub instance backed by the Custom Resource and you have not edited it, you can manually delete the Developer Hub deployment to recreate it using the operator. Run the following command to delete the Developer Hub deployment:
```bash
kubectl delete deployment -l app.kubernetes.io/instance=<CR_NAME>
```
Verification
- Navigate to your Developer Hub web URL and sign in using OIDC authentication, which prompts you to authenticate through the configured AWS Cognito user pool.
- Once logged in, access Settings and verify user details.
Chapter 8. Red Hat Developer Hub integration with Microsoft Azure Kubernetes Service (AKS)
You can integrate Developer Hub with Microsoft Azure Kubernetes Service (AKS), which provides a significant advancement in development, offering a streamlined environment for building, deploying, and managing your applications.
This integration requires the deployment of Developer Hub on AKS using one of the following methods:
- The Helm chart
- The Red Hat Developer Hub Operator
8.1. Monitoring and logging with Azure Kubernetes Services (AKS) in Red Hat Developer Hub
Monitoring and logging are integral aspects of managing and maintaining Azure Kubernetes Services (AKS) in Red Hat Developer Hub. With features like Managed Prometheus Monitoring and Azure Monitor integration, administrators can efficiently monitor resource utilization, diagnose issues, and ensure the reliability of their containerized workloads.
To enable Managed Prometheus Monitoring, use the --enable-azure-monitor-metrics option within either the az aks create or az aks update command, depending on whether you’re creating a new cluster or updating an existing one, such as:
```bash
az aks create/update --resource-group <your-ResourceGroup> --name <your-Cluster> --enable-azure-monitor-metrics
```
The previous command installs the metrics add-on, which gathers Prometheus metrics. Using the previous command, you can enable monitoring of Azure resources through both native Azure Monitor metrics and Prometheus metrics. You can also view the results in the portal under Monitoring → Insights. For more information, see Monitor Azure resources with Azure Monitor.
Furthermore, metrics from both the Managed Prometheus service and Azure Monitor can be accessed through the Azure Managed Grafana service. For more information, see the Link a Grafana workspace section.
By default, Prometheus uses the minimum ingesting profile, which optimizes ingestion volume and sets default configurations for scrape frequency, targets, and metrics collected. The default settings can be customized through custom configuration. Azure offers various methods, including using different ConfigMaps, to provide scrape configuration and other metric add-on settings. For more information about default configuration, see Default Prometheus metrics configuration in Azure Monitor and Customize scraping of Prometheus metrics in Azure Monitor managed service for Prometheus documentation.
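As a minimal sketch of such a customization, the settings ConfigMap documented by Azure Monitor lives in the kube-system namespace; the name and data key below are taken from that documentation and should be verified against the current schema before you apply it:

apiVersion: v1
kind: ConfigMap
metadata:
  # Name and namespace expected by the Azure Monitor metrics add-on (per Azure documentation).
  name: ama-metrics-settings-configmap
  namespace: kube-system
data:
  # Override the default scrape interval for the kubelet target.
  default-targets-scrape-interval-settings: |-
    kubelet = "30s"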
8.1.1. Viewing logs with Azure Kubernetes Services (AKS)
You can access live data logs generated by Kubernetes objects and collect log data in Container Insights within AKS.
Prerequisites
- You have deployed Developer Hub on AKS.
For more information, see Installing Red Hat Developer Hub on Microsoft Azure Kubernetes Service.
Procedure
- View live logs from your Developer Hub instance
- Navigate to the Azure Portal.
-
Search for the resource group
<your-ResourceGroup>
and locate your AKS cluster <your-Cluster>
. - Select Kubernetes resources → Workloads from the menu.
-
Select the
<your-rhdh-cr>-developer-hub
(in case of Helm Chart installation) or <your-rhdh-cr>-backstage
(in case of Operator-backed installation) deployment. - Click Live Logs in the left menu.
Select the pod.
Note: There must be only a single pod.
Live log data is collected and displayed.
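If you prefer the command line over the portal, a sketch of the equivalent check with the Azure CLI and kubectl follows; the deployment name assumes the Helm chart naming shown in the previous step:

# Fetch credentials for the AKS cluster into your kubeconfig.
az aks get-credentials --resource-group <your-ResourceGroup> --name <your-Cluster>
# Stream logs from the Developer Hub deployment.
kubectl logs -f deployment/<your-rhdh-cr>-developer-hub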
- View real-time log data from the Container Engine
- Navigate to the Azure Portal.
-
Search for the resource group
<your-ResourceGroup>
and locate your AKS cluster <your-Cluster>
. - Select Monitoring → Insights from the menu.
- Go to the Containers tab.
- Find the backstage-backend container and click it to view real-time log data as it is generated by the Container Engine.
8.2. Using Microsoft Azure as an authentication provider in Red Hat Developer Hub
The core-plugin-api
package in Developer Hub comes integrated with the Microsoft Azure authentication provider, which authenticates sign-in by using Azure OAuth.
Prerequisites
- You have deployed Developer Hub on AKS.
For more information, see Installing Red Hat Developer Hub on Azure Kubernetes Service (AKS).
- You have registered your application in the Azure portal. For more information, see Register an application with the Microsoft identity platform.
8.2.1. Using Microsoft Azure as an authentication provider in Helm deployment
You can use Microsoft Azure as an authentication provider in Red Hat Developer Hub when it is installed by using the Helm chart.
For more information, see Deploying Developer Hub on AKS with the Helm chart.
Procedure
After the application is registered, note down the following:
-
clientId
: Application (client) ID, found under App Registration → Overview. -
clientSecret
: Secret, found under App Registration → Certificates & secrets (create a new one if needed).
-
tenantId
: Directory (tenant) ID, found under App Registration → Overview.
-
Ensure the following fragment is included in your Developer Hub ConfigMap:
auth:
  environment: production
  providers:
    microsoft:
      production:
        clientId: ${AZURE_CLIENT_ID}
        clientSecret: ${AZURE_CLIENT_SECRET}
        tenantId: ${AZURE_TENANT_ID}
        domainHint: ${AZURE_TENANT_ID}
        additionalScopes:
          - Mail.Send
You can either create a new file or add it to an existing one.
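For reference, a sketch of a complete ConfigMap wrapping this fragment; the ConfigMap name matches the values.yaml reference used later in this procedure, and the app-config-azure.yaml key name is illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: <app-config-containing-azure>
data:
  # Illustrative file name for the embedded app-config fragment.
  app-config-azure.yaml: |
    auth:
      environment: production
      providers:
        microsoft:
          production:
            clientId: ${AZURE_CLIENT_ID}
            clientSecret: ${AZURE_CLIENT_SECRET}
            tenantId: ${AZURE_TENANT_ID}
            domainHint: ${AZURE_TENANT_ID}
            additionalScopes:
              - Mail.Send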
Apply the ConfigMap to your Kubernetes cluster:
kubectl -n <your_namespace> apply -f <app-config>.yaml
Create or reuse an existing Secret containing Azure credentials and add the following fragment:
stringData:
  AZURE_CLIENT_ID: <value-of-clientId>
  AZURE_CLIENT_SECRET: <value-of-clientSecret>
  AZURE_TENANT_ID: <value-of-tenantId>
Apply the secret to your Kubernetes cluster:
kubectl -n <your_namespace> apply -f <azure-secrets>.yaml
Ensure your
values.yaml
file references the previously created ConfigMap and Secret:

upstream:
  backstage:
    ...
    extraAppConfig:
      - filename: ...
        configMapRef: <app-config-containing-azure>
    extraEnvVarsSecrets:
      - <secret-containing-azure>
Optional: If the Helm Chart is already installed, upgrade it:
helm -n <your_namespace> upgrade -f <your-values.yaml> <your_deploy_name> redhat-developer/backstage --version 1.3.0
Optional: If your Helm configuration is not changed, for example, you only updated the ConfigMap and Secret referenced from it, refresh your Developer Hub deployment by removing the corresponding pods:

kubectl -n <your_namespace> delete pods -l backstage.io/app=backstage-<your-rhdh-cr>
8.2.2. Using Microsoft Azure as an authentication provider in Operator-backed deployment
You can use Microsoft Azure as an authentication provider in Red Hat Developer Hub when it is installed by using the Operator.
For more information, see Installing Red Hat Developer Hub on OpenShift Container Platform with the Operator.
Procedure
After the application is registered, note down the following:
-
clientId
: Application (client) ID, found under App Registration → Overview. -
clientSecret
: Secret, found under App Registration → Certificates & secrets (create a new one if needed).
-
tenantId
: Directory (tenant) ID, found under App Registration → Overview.
-
Ensure the following fragment is included in your Developer Hub ConfigMap:
auth:
  environment: production
  providers:
    microsoft:
      production:
        clientId: ${AZURE_CLIENT_ID}
        clientSecret: ${AZURE_CLIENT_SECRET}
        tenantId: ${AZURE_TENANT_ID}
        domainHint: ${AZURE_TENANT_ID}
        additionalScopes:
          - Mail.Send
You can either create a new file or add it to an existing one.
Apply the ConfigMap to your Kubernetes cluster:
kubectl -n <your_namespace> apply -f <app-config>.yaml
Create or reuse an existing Secret containing Azure credentials and add the following fragment:
stringData:
  AZURE_CLIENT_ID: <value-of-clientId>
  AZURE_CLIENT_SECRET: <value-of-clientSecret>
  AZURE_TENANT_ID: <value-of-tenantId>
Apply the secret to your Kubernetes cluster:
kubectl -n <your_namespace> apply -f <azure-secrets>.yaml
Ensure your Custom Resource manifest contains references to the previously created ConfigMap and Secret:
apiVersion: rhdh.redhat.com/v1alpha1
kind: Backstage
metadata:
  name: <your-rhdh-cr>
spec:
  application:
    imagePullSecrets:
      - rhdh-pull-secret
    route:
      enabled: false
    appConfig:
      configMaps:
        - name: <app-config-containing-azure>
    extraEnvs:
      secrets:
        - name: <secret-containing-azure>
Apply your Custom Resource manifest:
kubectl -n <your_namespace> apply -f rhdh.yaml
Optional: If your
rhdh.yaml
file is not changed, for example, you only updated the ConfigMap and Secret referenced from it, refresh your Developer Hub deployment by removing the corresponding pods:

kubectl -n <your_namespace> delete pods -l backstage.io/app=backstage-<your-rhdh-cr>
Chapter 9. Managing templates
A template is a form composed of different UI fields that is defined in a YAML file. Templates include actions, which are steps that are executed in sequential order and can be executed conditionally.
You can use templates to easily create Red Hat Developer Hub components, and then publish these components to different locations, such as the Red Hat Developer Hub software catalog, or repositories in GitHub or GitLab.
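For example, the scaffolder supports an if keyword on a step, so that an action runs only when a condition on the user input holds; in this sketch the repoHost parameter name is illustrative:

steps:
  - id: publish
    name: Publish to GitHub
    action: publish:github
    # Run this step only when the user chose GitHub in the form.
    if: ${{ parameters.repoHost === "github.com" }}
    input:
      repoUrl: ${{ parameters.repoUrl }}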
9.1. Creating a template by using the Template Editor
You can create a template by using the Template Editor.
Procedure
Access the Template Editor by using one of the following options:
-
Open the URL
https://<rhdh_url>/create/edit
for your Red Hat Developer Hub instance. - Click Create… in the navigation menu of the Red Hat Developer Hub console, then click the overflow menu button and select Template editor.
- Click Edit Template Form.
- Optional: Modify the YAML definition for the parameters of your template. For more information about these parameters, see Section 9.2, “Creating a template as a YAML file”.
- In the Name * field, enter a unique name for your template.
- From the Owner drop-down menu, choose an owner for the template.
- Click Next.
In the Repository Location view, enter the following information about the hosted repository that you want to publish the template to:
Select an available Host from the drop-down menu.
Note: Available hosts are defined in the YAML parameters by the
allowedHosts
field:
Example YAML
# ...
ui:options:
  allowedHosts:
    - github.com
# ...
- In the Owner * field, enter an organization, user or project that the hosted repository belongs to.
- In the Repository * field, enter the name of the hosted repository.
- Click Review.
- Review the information for accuracy, then click Create.
Verification
- Click the Catalog tab in the navigation panel.
- In the Kind drop-down menu, select Template.
- Confirm that your template is shown in the list of existing templates.
9.2. Creating a template as a YAML file
You can create a template by defining a Template
object as a YAML file.
The Template
object describes the template and its metadata. It also contains required input variables and a list of actions that are executed by the scaffolding service.
Template
object example
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: template-name 1
  title: Example template 2
  description: An example template for v1beta3 scaffolder. 3
spec:
  owner: backstage/techdocs-core 4
  type: service 5
  parameters: 6
    - title: Fill in some steps
      required:
        - name
      properties:
        name:
          title: Name
          type: string
          description: Unique name of the component
        owner:
          title: Owner
          type: string
          description: Owner of the component
    - title: Choose a location
      required:
        - repoUrl
      properties:
        repoUrl:
          title: Repository Location
          type: string
  steps: 7
    - id: fetch-base
      name: Fetch Base
      action: fetch:template
      # ...
  output: 8
    links:
      - title: Repository 9
        url: ${{ steps['publish'].output.remoteUrl }}
      - title: Open in catalog 10
        icon: catalog
        entityRef: ${{ steps['register'].output.entityRef }}
# ...
- 1
- Specify a name for the template.
- 2
- Specify a title for the template. This is the title that is visible on the template tile in the Create… view.
- 3
- Specify a description for the template. This is the description that is visible on the template tile in the Create… view.
- 4
- Specify the ownership of the template. The
owner
field provides information about who is responsible for maintaining or overseeing the template within the system or organization. In the provided example, theowner
field is set tobackstage/techdocs-core
. This means that this template belongs to thetechdocs-core
project in thebackstage
namespace. - 5
- Specify the component type. Any string value is accepted for this required field, but your organization should establish a proper taxonomy for these. Red Hat Developer Hub instances may read this field and behave differently depending on its value. For example, a
website
type component may present tooling in the Red Hat Developer Hub interface that is specific to just websites.
The following values are common for this field:
service
- A backend service, typically exposing an API.
website
- A website.
library
- A software library, such as an npm module or a Java library.
- 6
- Use the
parameters
section to specify parameters for user input that are shown in a form view when a user creates a component by using the template in the Red Hat Developer Hub console. Eachparameters
subsection, defined by a title and properties, creates a new form page with that definition. - 7
- Use the
steps
section to specify steps that are executed in the backend. These steps must be defined by using a unique step ID, a name, and an action. You can view actions that are available on your Red Hat Developer Hub instance by visiting the URLhttps://<rhdh_url>/create/actions
. - 8
- Use the
output
section to specify the structure of output data that is created when the template is used. Theoutput
section, particularly thelinks
subsection, provides valuable references and URLs that users can utilize to access and interact with components that are created from the template. - 9
- Provides a reference or URL to the repository associated with the generated component.
- 10
- Provides a reference or URL that allows users to open the generated component in a catalog or directory where various components are listed.
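The output section in this example references publish and register step IDs that the truncated steps list does not show. A minimal sketch of those steps, using the upstream Backstage scaffolder actions; the inputs shown are typical rather than exhaustive:

steps:
  # ...
  - id: publish
    name: Publish
    action: publish:github
    input:
      repoUrl: ${{ parameters.repoUrl }}
  - id: register
    name: Register
    action: catalog:register
    input:
      repoContentsUrl: ${{ steps['publish'].output.repoContentsUrl }}
      catalogInfoPath: '/catalog-info.yaml'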
9.3. Importing an existing template to Red Hat Developer Hub
You can add an existing template to your Red Hat Developer Hub instance by using the Catalog Processor.
Prerequisites
- You have created a directory or repository that contains at least one template YAML file.
- If you want to use a template that is stored in a repository such as GitHub or GitLab, you must configure a Red Hat Developer Hub integration for your provider.
Procedure
In the
app-config.yaml
configuration file, modify thecatalog.rules
section to include a rule for templates, and configure thecatalog.locations
section to point to the template that you want to add, as shown in the following example:# ... catalog: rules: - allow: [Template] 1 locations: - type: url 2 target: https://<repository_url>/example-template.yaml 3 # ...
Verification
- Click the Catalog tab in the navigation panel.
- In the Kind drop-down menu, select Template.
- Confirm that your template is shown in the list of existing templates.
Chapter 10. Configuring the TechDocs plugin in Red Hat Developer Hub
The Red Hat Developer Hub TechDocs plugin helps your organization create, find, and use documentation in a central location and in a standardized way. For example:
- Docs-like-code approach
- Write your technical documentation in Markdown files that are stored inside your project repository along with your code.
- Documentation site generation
- Use MkDocs to create a full-featured, Markdown-based, static HTML site for your documentation that is rendered centrally in Developer Hub.
- Documentation site metadata and integrations
- See additional metadata about the documentation site alongside the static documentation, such as the date of the last update, the site owner, top contributors, open GitHub issues, Slack support channels, and Stack Overflow Enterprise tags.
- Built-in navigation and search
- Find the information that you want from a document more quickly and easily.
- Add-ons
- Customize your TechDocs experience with Add-ons to address higher-order documentation needs.
The TechDocs plugin is preinstalled and enabled on a Developer Hub instance by default. You can disable or enable the TechDocs plugin, and change other parameters, by configuring the Red Hat Developer Hub Helm chart or the Red Hat Developer Hub Operator config map.
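For example, in a Helm-based deployment, disabling the plugin amounts to flipping its disabled flag in the dynamic plugins list; this sketch mirrors the plugin package path used in the storage examples later in this chapter:

global:
  dynamic:
    includes:
      - 'dynamic-plugins.default.yaml'
    plugins:
      # Disable the preinstalled TechDocs backend plugin.
      - package: ./dynamic-plugins/dist/backstage-plugin-techdocs-backend-dynamic
        disabled: true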
Red Hat Developer Hub includes a built-in TechDocs builder that generates static HTML documentation from your codebase. However, the default basic setup of the local builder is not intended for production.
You can use a CI/CD pipeline with the repository that has a dedicated job to generate docs for TechDocs. The generated static files are stored in OpenShift Data Foundation or in a cloud storage solution of your choice and published to a static HTML documentation site.
After you configure OpenShift Data Foundation to store the files that TechDocs generates, you can configure the TechDocs plugin to use the OpenShift Data Foundation for cloud storage.
Additional resources
- For more information, see Configuring plugins in Red Hat Developer Hub.
10.1. Configuring storage for TechDocs files
The TechDocs publisher stores generated files in local storage or in cloud storage, such as OpenShift Data Foundation, Google GCS, AWS S3, or Azure Blob Storage.
10.1.1. Using OpenShift Data Foundation for file storage
You can configure OpenShift Data Foundation to store the files that TechDocs generates instead of relying on other cloud storage solutions.
OpenShift Data Foundation provides an ObjectBucketClaim
custom resource (CR) that you can use to request an S3 compatible bucket backend. You must install the OpenShift Data Foundation Operator to use this feature.
Prerequisites
- An OpenShift Container Platform administrator has installed the OpenShift Data Foundation Operator in Red Hat OpenShift Container Platform. For more information, see OpenShift Container Platform - Installing Red Hat OpenShift Data Foundation Operator.
-
An OpenShift Container Platform administrator has created an OpenShift Data Foundation cluster and configured the
StorageSystem
schema. For more information, see OpenShift Container Platform - Creating an OpenShift Data Foundation cluster.
Procedure
Create an
ObjectBucketClaim
CR where the generated TechDocs files are stored. For example:

apiVersion: objectbucket.io/v1alpha1
kind: ObjectBucketClaim
metadata:
  name: <rhdh_bucket_claim_name>
spec:
  generateBucketName: <rhdh_bucket_claim_name>
  storageClassName: openshift-storage.noobaa.io
Note: Creating the Developer Hub ObjectBucketClaim CR automatically creates both the Developer Hub ObjectBucketClaim config map and secret. The config map and secret have the same name as the ObjectBucketClaim CR.
After you create the ObjectBucketClaim
CR, you can use the information stored in the config map and secret to make it accessible to the Developer Hub container as environment variables. Depending on the method that you used to install Developer Hub, you add the access information to either the Red Hat Developer Hub Helm chart or Operator configuration.
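For example, you can inspect the generated access information before wiring it into your deployment; the object names match the ObjectBucketClaim CR name from the previous procedure:

kubectl get configmap <rhdh_bucket_claim_name> -o yaml
kubectl get secret <rhdh_bucket_claim_name> -o yaml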
Additional resources
- For more information about the Object Bucket Claim, see OpenShift Container Platform - Object Bucket Claim.
10.1.2. Making object storage accessible to containers by using the Helm chart
Creating an
custom resource (CR) automatically generates both the Developer Hub ObjectBucketClaim
config map and secret. The config map and secret contain ObjectBucket
access information. Adding the access information to the Helm chart configuration makes it accessible to the Developer Hub container by adding the following environment variables to the container:
-
BUCKET_NAME
-
BUCKET_HOST
-
BUCKET_PORT
-
BUCKET_REGION
-
BUCKET_SUBREGION
-
AWS_ACCESS_KEY_ID
-
AWS_SECRET_ACCESS_KEY
These variables are then used in the TechDocs plugin configuration.
Prerequisites
- You have installed Red Hat Developer Hub on OpenShift Container Platform using the Helm chart.
-
You have created an
ObjectBucketClaim
CR for storing files generated by TechDocs. For more information, see Using OpenShift Data Foundation for file storage.
Procedure
In the
upstream.backstage
key in the Helm chart values, enter the name of the Developer HubObjectBucketClaim
secret as the value for theextraEnvVarsSecrets
field and theextraEnvVarsCM
field. For example:upstream: backstage: extraEnvVarsSecrets: - <rhdh_bucket_claim_name> extraEnvVarsCM: - <rhdh_bucket_claim_name>
10.1.2.1. Example TechDocs Plugin configuration for the Helm chart
The following example shows a Developer Hub Helm chart configuration for the TechDocs plugin:
global:
  dynamic:
    includes:
      - 'dynamic-plugins.default.yaml'
    plugins:
      - disabled: false
        package: ./dynamic-plugins/dist/backstage-plugin-techdocs-backend-dynamic
        pluginConfig:
          techdocs:
            builder: external
            generator:
              runIn: local
            publisher:
              awsS3:
                bucketName: '${BUCKET_NAME}'
                credentials:
                  accessKeyId: '${AWS_ACCESS_KEY_ID}'
                  secretAccessKey: '${AWS_SECRET_ACCESS_KEY}'
                endpoint: 'https://${BUCKET_HOST}'
                region: '${BUCKET_REGION}'
                s3ForcePathStyle: true
              type: awsS3
10.1.3. Making object storage accessible to containers by using the Operator
Creating an
ObjectBucketClaim
custom resource (CR) automatically generates both the Developer Hub
ObjectBucketClaim
access information. Adding the access information to the Operator configuration makes it accessible to the Developer Hub container by adding the following environment variables to the container:
-
BUCKET_NAME
-
BUCKET_HOST
-
BUCKET_PORT
-
BUCKET_REGION
-
BUCKET_SUBREGION
-
AWS_ACCESS_KEY_ID
-
AWS_SECRET_ACCESS_KEY
These variables are then used in the TechDocs plugin configuration.
Prerequisites
- You have installed Red Hat Developer Hub on OpenShift Container Platform using the Operator.
-
You have created an
ObjectBucketClaim
CR for storing files generated by TechDocs.
Procedure
In the Developer Hub
Backstage
CR, enter the name of the Developer HubObjectBucketClaim
config map as the value for thespec.application.extraEnvs.configMaps
field and enter the Developer HubObjectBucketClaim
secret name as the value for thespec.application.extraEnvs.secrets
field. For example:

apiVersion: rhdh.redhat.com/v1alpha1
kind: Backstage
metadata:
  name: <name>
spec:
  application:
    extraEnvs:
      configMaps:
        - name: <rhdh_bucket_claim_name>
      secrets:
        - name: <rhdh_bucket_claim_name>
10.1.3.1. Example TechDocs Plugin configuration for the Operator
The following example shows a Red Hat Developer Hub Operator config map configuration for the TechDocs plugin:
kind: ConfigMap
apiVersion: v1
metadata:
  name: dynamic-plugins-rhdh
data:
  dynamic-plugins.yaml: |
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - disabled: false
        package: ./dynamic-plugins/dist/backstage-plugin-techdocs-backend-dynamic
        pluginConfig:
          techdocs:
            builder: external
            generator:
              runIn: local
            publisher:
              awsS3:
                bucketName: '${BUCKET_NAME}'
                credentials:
                  accessKeyId: '${AWS_ACCESS_KEY_ID}'
                  secretAccessKey: '${AWS_SECRET_ACCESS_KEY}'
                endpoint: 'https://${BUCKET_HOST}'
                region: '${BUCKET_REGION}'
                s3ForcePathStyle: true
              type: awsS3
10.2. Configuring CI/CD to generate and publish TechDocs sites
TechDocs reads the static generated documentation files from a cloud storage bucket, such as OpenShift Data Foundation. The documentation site is generated on the CI/CD workflow associated with the repository containing the documentation files. You can generate docs on CI and publish them to cloud storage by using the techdocs-cli
CLI tool.
You can use the following example to create a script for TechDocs publication:
# Prepare
REPOSITORY_URL='https://github.com/org/repo'
git clone $REPOSITORY_URL
cd repo

# Install @techdocs/cli, mkdocs and mkdocs plugins
npm install -g @techdocs/cli
pip install "mkdocs-techdocs-core==1.*"

# Generate
techdocs-cli generate --no-docker

# Publish
techdocs-cli publish --publisher-type awsS3 --storage-name <bucket/container> --entity <Namespace/Kind/Name>
The TechDocs workflow starts the CI when a user makes changes in the repository containing the documentation files. You can configure the workflow to start only when files inside the docs/
directory or mkdocs.yml
are changed.
10.2.1. Preparing your repository for CI
The first CI step is to clone your documentation source repository into a working directory.
Procedure
To clone your documentation source repository in a working directory, enter the following command:
git clone <https://path/to/docs-repository/>
10.2.2. Generating the TechDocs site
Procedure
To configure CI/CD to generate your techdocs, complete the following steps:
Install the
npx
package to runtechdocs-cli
using the following command:npm install -g npx
Install the
techdocs-cli
tool using the following command:

npm install -g @techdocs/cli
Install the
mkdocs
plugins using the following command:

pip install "mkdocs-techdocs-core==1.*"
Generate your techdocs site using the following command:
npx @techdocs/cli generate --no-docker --source-dir <path_to_repo> --output-dir ./site
Where
<path_to_repo>
is the path to the directory where you cloned your repository.
10.2.3. Publishing the TechDocs site
Procedure
To publish your techdocs site, complete the following steps:
- Set the necessary authentication environment variables for your cloud storage provider.
Publish your techdocs using the following command:
npx @techdocs/cli publish --publisher-type <awsS3|googleGcs> --storage-name <bucket/container> --entity <namespace/kind/name> --directory ./site
Add a
.github/workflows/techdocs.yml
file in your Software Template(s). For example:

name: Publish TechDocs Site

on:
  push:
    branches: [main]
    # You can even set it to run only when TechDocs related files are updated.
    # paths:
    #   - "docs/**"
    #   - "mkdocs.yml"

jobs:
  publish-techdocs-site:
    runs-on: ubuntu-latest

    # The following secrets are required in your CI environment for publishing files to AWS S3.
    # e.g. You can use GitHub Organization secrets to set them for all existing and new repositories.
    env:
      TECHDOCS_S3_BUCKET_NAME: ${{ secrets.TECHDOCS_S3_BUCKET_NAME }}
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_REGION: ${{ secrets.AWS_REGION }}
      ENTITY_NAMESPACE: 'default'
      ENTITY_KIND: 'Component'
      ENTITY_NAME: 'my-doc-entity'
      # In a Software template, Scaffolder will replace {{cookiecutter.component_id | jsonify}}
      # with the correct entity name. This is same as metadata.name in the entity's catalog-info.yaml
      # ENTITY_NAME: '{{ cookiecutter.component_id | jsonify }}'

    steps:
      - name: Checkout code
        uses: actions/checkout@v3

      - uses: actions/setup-node@v3
      - uses: actions/setup-python@v4
        with:
          python-version: '3.9'

      - name: Install techdocs-cli
        run: sudo npm install -g @techdocs/cli

      - name: Install mkdocs and mkdocs plugins
        run: python -m pip install mkdocs-techdocs-core==1.*

      - name: Generate docs site
        run: techdocs-cli generate --no-docker --verbose

      - name: Publish docs site
        run: techdocs-cli publish --publisher-type awsS3 --storage-name $TECHDOCS_S3_BUCKET_NAME --entity $ENTITY_NAMESPACE/$ENTITY_KIND/$ENTITY_NAME
Chapter 11. Configuring Red Hat Developer Hub deployment
The Red Hat Developer Hub operator exposes a rhdh.redhat.com/v1alpha2
API Version of its Custom Resource Definition (CRD). This CRD exposes a generic spec.deployment.patch
field, which gives you full control over the Developer Hub Deployment resource. This field can be a fragment of the standard apps.Deployment
Kubernetes object.
Procedure
- Create a Developer Hub Custom Resource with the following fields:
Example
apiVersion: rhdh.redhat.com/v1alpha2
kind: Backstage
metadata:
  name: developer-hub
spec:
  deployment:
    patch:
      spec:
        template:
labels
Add labels to the Developer Hub pod.
Example adding the label
my=true
apiVersion: rhdh.redhat.com/v1alpha2
kind: Backstage
metadata:
  name: developer-hub
spec:
  deployment:
    patch:
      spec:
        template:
          metadata:
            labels:
              # Label values must be strings, so the value is quoted.
              my: "true"
volumes
Add an additional volume named
my-volume
and mount it under/my/path
in the Developer Hub application container.Example additional volume
apiVersion: rhdh.redhat.com/v1alpha2
kind: Backstage
metadata:
  name: developer-hub
spec:
  deployment:
    patch:
      spec:
        template:
          spec:
            containers:
              - name: backstage-backend
                volumeMounts:
                  - mountPath: /my/path
                    name: my-volume
            volumes:
              - ephemeral:
                  volumeClaimTemplate:
                    spec:
                      storageClassName: "special"
                name: my-volume
Replace the default
dynamic-plugins-root
volume with a persistent volume claim (PVC) nameddynamic-plugins-root
. Note the$patch: replace
directive; otherwise, a new volume is added.
Example
dynamic-plugins-root
volume replacement

apiVersion: rhdh.redhat.com/v1alpha2
kind: Backstage
metadata:
  name: developer-hub
spec:
  deployment:
    patch:
      spec:
        template:
          spec:
            volumes:
              - $patch: replace
                name: dynamic-plugins-root
                persistentVolumeClaim:
                  claimName: dynamic-plugins-root
cpu
request
Set the CPU request for the Developer Hub application container to 250m.
Example CPU request
apiVersion: rhdh.redhat.com/v1alpha2
kind: Backstage
metadata:
  name: developer-hub
spec:
  deployment:
    patch:
      spec:
        template:
          spec:
            containers:
              - name: backstage-backend
                resources:
                  requests:
                    cpu: 250m
my-sidecar
container
Add a new
my-sidecar
sidecar container into the Developer Hub Pod.
Example sidecar container
apiVersion: rhdh.redhat.com/v1alpha2
kind: Backstage
metadata:
  name: developer-hub
spec:
  deployment:
    patch:
      spec:
        template:
          spec:
            containers:
              - name: my-sidecar
                image: quay.io/my-org/my-sidecar:latest
Additional resources
- To learn more about merging, see Strategic Merge Patch.