Red Hat Developer Hub is an enterprise-grade, open developer platform that you can use to build developer portals. This platform contains a supported and opinionated framework that helps reduce the friction and frustration of developers while boosting their productivity.

Red Hat Developer Hub support

If you experience difficulty with a procedure described in this documentation, visit the Red Hat Customer Portal. You can use the Red Hat Customer Portal for the following purposes:

  • To search or browse through the Red Hat Knowledgebase of technical support articles about Red Hat products.

  • To create a support case for Red Hat Global Support Services (GSS). For support case creation, select Red Hat Developer Hub as the product and select the appropriate product version.

1. Adding a custom application configuration file to Red Hat OpenShift Container Platform

To access the Red Hat Developer Hub, you must add a custom application configuration file to Red Hat OpenShift Container Platform. In OpenShift Container Platform, you can use the following content as a base template to create a ConfigMap named app-config-rhdh:

kind: ConfigMap
apiVersion: v1
metadata:
  name: app-config-rhdh
data:
  app-config-rhdh.yaml: |
    app:
      title: Red Hat Developer Hub

You can add the custom application configuration file to OpenShift Container Platform in one of the following ways:

  • The Red Hat Developer Hub Operator

  • The Red Hat Developer Hub Helm chart
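Whichever method you choose, you can also create the ConfigMap object itself from the command line instead of the web console. The following is a minimal sketch, assuming you saved the template above as a file named app-config-rhdh-configmap.yaml (an illustrative name) and are logged in with the oc CLI:

# Create or update the ConfigMap in the project where Developer Hub runs
oc apply -f app-config-rhdh-configmap.yaml -n <your-namespace>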

1.1. Adding a custom application configuration file to OpenShift Container Platform using the Helm chart

You can use the Red Hat Developer Hub Helm chart to add a custom application configuration file to your OpenShift Container Platform instance.

Prerequisites
  • You have created a Red Hat OpenShift Container Platform account.

Procedure
  1. From the OpenShift Container Platform web console, select the ConfigMaps tab.

  2. Click Create ConfigMap.

  3. From the Create ConfigMap page, select the YAML view option in Configure via and make changes to the file, if needed.

  4. Click Create.

  5. Go to the Helm tab to see the list of Helm releases.

  6. Click the overflow menu on the Helm release that you want to use and select Upgrade.

  7. Use either the Form view or YAML view to edit the Helm configuration.

    • Using Form view

      1. Expand Root Schema → Backstage chart schema → Backstage parameters → Extra app configuration files to inline into command arguments.

      2. Click the Add Extra app configuration files to inline into command arguments link.

      3. Enter the value in the following fields:

        • configMapRef: app-config-rhdh

        • filename: app-config-rhdh.yaml

      4. Click Upgrade.

    • Using YAML view

      1. Set the value of the upstream.backstage.extraAppConfig.configMapRef and upstream.backstage.extraAppConfig.filename parameters as follows:

        # ... other Red Hat Developer Hub Helm Chart configurations
        upstream:
          backstage:
            extraAppConfig:
              - configMapRef: app-config-rhdh
                filename: app-config-rhdh.yaml
        # ... other Red Hat Developer Hub Helm Chart configurations
      2. Click Upgrade.
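If you manage the release from the command line instead of the web console, you can apply the same change with a helm upgrade after adding the extraAppConfig entry to your values file. The following is a minimal sketch; the release name, namespace, and values file name are assumptions:

helm upgrade <your-deploy-name> openshift-helm-charts/redhat-developer-hub \
  -n <your-namespace> -f values.yaml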

1.2. Adding a custom application configuration file to OpenShift Container Platform using the Operator

A custom application configuration file is a ConfigMap object that you can use to change the configuration of your Red Hat Developer Hub instance. If you are deploying your Developer Hub instance on Red Hat OpenShift Container Platform, you can use the Red Hat Developer Hub Operator to add a custom application configuration file to your OpenShift Container Platform instance by creating the ConfigMap object and referencing it in the Developer Hub custom resource (CR).

The custom application configuration file references a sensitive environment variable named BACKEND_SECRET. This variable holds the mandatory backend authentication key that Developer Hub requires, and its value must be defined in an OpenShift Container Platform secret. You must create a secret named secrets-rhdh and reference it in the Developer Hub CR.

Note

You are responsible for protecting your Red Hat Developer Hub installation from external and unauthorized access. Manage the backend authentication key like any other secret. Meet strong password requirements, do not expose it in any configuration files, and only inject it into configuration files as an environment variable.

Prerequisites
  • You have an active Red Hat OpenShift Container Platform account.

  • Your administrator has installed the Red Hat Developer Hub Operator in OpenShift Container Platform. For more information, see Installing the Red Hat Developer Hub Operator.

  • You have created the Red Hat Developer Hub CR in OpenShift Container Platform.

Procedure
  1. From the Developer perspective in the OpenShift Container Platform web console, select the Topology view, and click the Open URL icon on the Developer Hub pod to identify your Developer Hub external URL: <RHDH_URL>.

  2. From the Developer perspective in the OpenShift Container Platform web console, select the ConfigMaps view.

  3. Click Create ConfigMap.

  4. Select the YAML view option in Configure via and use the following example as a base template to create a ConfigMap object, such as app-config-rhdh.yaml:

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: app-config-rhdh
    data:
      "app-config-rhdh.yaml": |
        app:
          title: Red Hat Developer Hub
          baseUrl: <RHDH_URL> # (1)
        backend:
          auth:
            keys:
              - secret: "${BACKEND_SECRET}" # (2)
          baseUrl: <RHDH_URL> # (3)
          cors:
            origin: <RHDH_URL> # (4)
    1. Set the external URL of your Red Hat Developer Hub instance.

    2. Use an environment variable exposing an OpenShift Container Platform secret to define the mandatory Developer Hub backend authentication key.

    3. Set the external URL of your Red Hat Developer Hub instance.

    4. Set the external URL of your Red Hat Developer Hub instance.

  5. Click Create.

  6. Select the Secrets view.

  7. Click Create Key/value Secret.

  8. Create a secret named secrets-rhdh.

  9. Add a key named BACKEND_SECRET and a base64 encoded string as a value. Use a unique value for each Red Hat Developer Hub instance. For example, you can use the following command to generate a key from your terminal:

    node -p 'require("crypto").randomBytes(24).toString("base64")'
  10. Click Create.

  11. Select the Topology view.

  12. Click the overflow menu for the Red Hat Developer Hub instance that you want to use and select Edit Backstage to load the YAML view of the Red Hat Developer Hub instance.

  13. In the CR, enter the name of the custom application configuration ConfigMap as the value for the spec.application.appConfig.configMaps field, and enter the name of your secret as the value for the spec.application.extraEnvs.secrets field. For example:

    apiVersion: rhdh.redhat.com/v1alpha1
    kind: Backstage
    metadata:
      name: developer-hub
    spec:
      application:
        appConfig:
          mountPath: /opt/app-root/src
          configMaps:
             - name: app-config-rhdh
        extraEnvs:
          secrets:
             - name: secrets-rhdh
        extraFiles:
          mountPath: /opt/app-root/src
        replicas: 1
        route:
          enabled: true
      database:
        enableLocalDb: true
  14. Click Save.

  15. Navigate back to the Topology view and wait for the Red Hat Developer Hub pod to start.

  16. Click the Open URL icon to use the Red Hat Developer Hub platform with the configuration changes.
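The console steps above can also be scripted. The following is a minimal, non-authoritative sketch of the same flow with the oc CLI, assuming the ConfigMap manifest from step 4 is saved as app-config-rhdh.yaml and that your Backstage CR is named developer-hub:

# Create the application configuration ConfigMap
oc apply -f app-config-rhdh.yaml -n <your-namespace>

# Generate a backend authentication key and store it in the secrets-rhdh secret
BACKEND_SECRET=$(node -p 'require("crypto").randomBytes(24).toString("base64")')
oc create secret generic secrets-rhdh -n <your-namespace> \
  --from-literal=BACKEND_SECRET="$BACKEND_SECRET"

# Reference the ConfigMap and secret in the Backstage CR, as shown in step 13
oc edit backstage developer-hub -n <your-namespace>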

2. Configuring external PostgreSQL databases

As an administrator, you can configure and use external PostgreSQL databases in Red Hat Developer Hub. You can use a PostgreSQL certificate file to configure an external PostgreSQL instance using the Operator or Helm Chart.

Note

Developer Hub supports the configuration of external PostgreSQL databases. You can perform maintenance activities, such as backing up your data or configuring high availability (HA) for the external PostgreSQL databases.

By default, the Red Hat Developer Hub Operator or Helm Chart creates a local PostgreSQL database. However, this configuration is not suitable for production environments. For production deployments, disable the creation of the local database and configure Developer Hub to connect to an external PostgreSQL instance instead.

2.1. Configuring an external PostgreSQL instance using the Operator

You can configure an external PostgreSQL instance using the Red Hat Developer Hub Operator. By default, the Operator creates and manages a local instance of PostgreSQL in the same namespace where you have deployed the RHDH instance. However, you can change this default setting to configure an external PostgreSQL database server, for example, Amazon Web Services (AWS) Relational Database Service (RDS) or Azure database.

Prerequisites
  • You are using a supported version of PostgreSQL. For more information, see the Product life cycle page.

  • You have the following details:

    • db-host: Denotes your PostgreSQL instance Domain Name System (DNS) name or IP address

    • db-port: Denotes your PostgreSQL instance port number, such as 5432

    • username: Denotes the user name to connect to your PostgreSQL instance

    • password: Denotes the password to connect to your PostgreSQL instance

  • You have installed the Red Hat Developer Hub Operator.

  • Optional: You have a CA certificate, Transport Layer Security (TLS) private key, and TLS certificate so that you can secure your database connection by using the TLS protocol. For more information, refer to your PostgreSQL vendor documentation.

Note

By default, Developer Hub uses a database for each plugin and automatically creates it if none is found. You might need the Create Database privilege in addition to PSQL Database privileges for configuring an external PostgreSQL instance.
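For example, granting that privilege on the external server might look like the following, assuming the connecting role is the <username> from the prerequisites and that you can connect as an administrative role such as postgres:

# <admin-password> is illustrative: the password of the administrative role you connect as
PGPASSWORD=<admin-password> psql -h <db-host> -p <db-port> -U postgres \
  -c 'ALTER ROLE "<username>" CREATEDB;'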

Procedure
  1. Optional: Create a certificate secret to configure your PostgreSQL instance with a TLS connection:

    cat <<EOF | oc -n <your-namespace> create -f -
    apiVersion: v1
    kind: Secret
    metadata:
     name: <crt-secret> (1)
    type: Opaque
    stringData:
     postgres-ca.pem: |-
      -----BEGIN CERTIFICATE-----
      <ca-certificate-key> (2)
     postgres-key.key: |-
      -----BEGIN CERTIFICATE-----
      <tls-private-key> (3)
     postgres-crt.pem: |-
      -----BEGIN CERTIFICATE-----
      <tls-certificate-key> (4)
      # ...
    EOF
    1. Provide the name of the certificate secret.

    2. Provide the CA certificate key.

    3. Optional: Provide the TLS private key.

    4. Optional: Provide the TLS certificate key.

  2. Create a credential secret to connect with the PostgreSQL instance:

    cat <<EOF | oc -n <your-namespace> create -f -
    apiVersion: v1
    kind: Secret
    metadata:
     name: <cred-secret> (1)
    type: Opaque
    stringData: (2)
     POSTGRES_PASSWORD: <password>
     POSTGRES_PORT: "<db-port>"
     POSTGRES_USER: <username>
     POSTGRES_HOST: <db-host>
     PGSSLMODE: <ssl-mode> # for TLS connection (3)
     NODE_EXTRA_CA_CERTS: <abs-path-to-pem-file> # for TLS connection, e.g. /opt/app-root/src/postgres-crt.pem (4)
    EOF
    1. Provide the name of the credential secret.

    2. Provide credential data to connect with your PostgreSQL instance.

    3. Optional: Provide the value based on the required Secure Sockets Layer (SSL) mode.

    4. Optional: Provide the value only if you need a TLS connection for your PostgreSQL instance.

  3. Create a Backstage custom resource (CR):

    cat <<EOF | oc -n <your-namespace> create -f -
    apiVersion: rhdh.redhat.com/v1alpha1
    kind: Backstage
    metadata:
      name: <backstage-instance-name>
    spec:
      database:
        enableLocalDb: false (1)
      application:
        extraFiles:
          mountPath: <path> # e.g., /opt/app-root/src
          secrets:
            - name: <crt-secret> (2)
              key: postgres-crt.pem, postgres-ca.pem, postgres-key.key # key name as in <crt-secret> Secret
        extraEnvs:
          secrets:
            - name: <cred-secret> (3)
            # ...
    1. Set the value of the enableLocalDb parameter to false to disable creating local PostgreSQL instances.

    2. Provide the name of the certificate secret if you have configured a TLS connection.

    3. Provide the name of the credential secret that you created.

    Note

    The environment variables listed in the Backstage CR work with the Operator default configuration. If you have changed the Operator default configuration, you must reconfigure the Backstage CR accordingly.

  4. Apply the Backstage CR to the namespace where you have deployed the RHDH instance.

2.2. Configuring an external PostgreSQL instance using the Helm Chart

You can configure an external PostgreSQL instance by using the Helm Chart. By default, the Helm Chart creates and manages a local instance of PostgreSQL in the same namespace where you have deployed the RHDH instance. However, you can change this default setting to configure an external PostgreSQL database server, for example, Amazon Web Services (AWS) Relational Database Service (RDS) or Azure database.

Prerequisites
  • You are using a supported version of PostgreSQL. For more information, see the Product life cycle page.

  • You have the following details:

    • db-host: Denotes your PostgreSQL instance Domain Name System (DNS) name or IP address

    • db-port: Denotes your PostgreSQL instance port number, such as 5432

    • username: Denotes the user name to connect to your PostgreSQL instance

    • password: Denotes the password to connect to your PostgreSQL instance

  • You have installed the RHDH application by using the Helm Chart.

  • Optional: You have a CA certificate, Transport Layer Security (TLS) private key, and TLS certificate so that you can secure your database connection by using the TLS protocol. For more information, refer to your PostgreSQL vendor documentation.

Note

By default, Developer Hub uses a database for each plugin and automatically creates it if none is found. You might need the Create Database privilege in addition to PSQL Database privileges for configuring an external PostgreSQL instance.

Procedure
  1. Optional: Create a certificate secret to configure your PostgreSQL instance with a TLS connection:

    cat <<EOF | oc -n <your-namespace> create -f -
    apiVersion: v1
    kind: Secret
    metadata:
     name: <crt-secret> (1)
    type: Opaque
    stringData:
     postgres-ca.pem: |-
      -----BEGIN CERTIFICATE-----
      <ca-certificate-key> (2)
     postgres-key.key: |-
      -----BEGIN CERTIFICATE-----
      <tls-private-key> (3)
     postgres-crt.pem: |-
      -----BEGIN CERTIFICATE-----
      <tls-certificate-key> (4)
      # ...
    EOF
    1. Provide the name of the certificate secret.

    2. Provide the CA certificate key.

    3. Optional: Provide the TLS private key.

    4. Optional: Provide the TLS certificate key.

  2. Create a credential secret to connect with the PostgreSQL instance:

    cat <<EOF | oc -n <your-namespace> create -f -
    apiVersion: v1
    kind: Secret
    metadata:
     name: <cred-secret> (1)
    type: Opaque
    stringData: (2)
     POSTGRES_PASSWORD: <password>
     POSTGRES_PORT: "<db-port>"
     POSTGRES_USER: <username>
     POSTGRES_HOST: <db-host>
     PGSSLMODE: <ssl-mode> # for TLS connection (3)
     NODE_EXTRA_CA_CERTS: <abs-path-to-pem-file> # for TLS connection, e.g. /opt/app-root/src/postgres-crt.pem (4)
    EOF
    1. Provide the name of the credential secret.

    2. Provide credential data to connect with your PostgreSQL instance.

    3. Optional: Provide the value based on the required Secure Sockets Layer (SSL) mode.

    4. Optional: Provide the value only if you need a TLS connection for your PostgreSQL instance.

  3. Configure your PostgreSQL instance in the Helm configuration file named values.yaml:

    # ...
    upstream:
      postgresql:
        enabled: false  # disable PostgreSQL instance creation (1)
        auth:
          existingSecret: <cred-secret> # inject credentials secret to Backstage (2)
      backstage:
        appConfig:
          backend:
            database:
              connection:  # configure Backstage DB connection parameters
                host: ${POSTGRES_HOST}
                port: ${POSTGRES_PORT}
                user: ${POSTGRES_USER}
                password: ${POSTGRES_PASSWORD}
                ssl:
                  rejectUnauthorized: true
                  ca:
                    $file: /opt/app-root/src/postgres-ca.pem
                  key:
                    $file: /opt/app-root/src/postgres-key.key
                  cert:
                    $file: /opt/app-root/src/postgres-crt.pem
        extraEnvVarsSecrets:
          - <cred-secret> # inject credentials secret to Backstage (3)
        extraEnvVars:
          - name: BACKEND_SECRET
            valueFrom:
              secretKeyRef:
                key: backend-secret
                name: '{{ include "janus-idp.backend-secret-name" $ }}'
        extraVolumeMounts:
          - mountPath: /opt/app-root/src/dynamic-plugins-root
            name: dynamic-plugins-root
          - mountPath: /opt/app-root/src/postgres-crt.pem
            name: postgres-crt # inject TLS certificate to Backstage cont. (4)
            subPath: postgres-crt.pem
          - mountPath: /opt/app-root/src/postgres-ca.pem
            name: postgres-ca # inject CA certificate to Backstage cont. (5)
            subPath: postgres-ca.pem
          - mountPath: /opt/app-root/src/postgres-key.key
            name: postgres-key # inject TLS private key to Backstage cont. (6)
            subPath: postgres-key.key
        extraVolumes:
          - ephemeral:
              volumeClaimTemplate:
                spec:
                  accessModes:
                    - ReadWriteOnce
                  resources:
                    requests:
                      storage: 1Gi
            name: dynamic-plugins-root
          - configMap:
              defaultMode: 420
              name: dynamic-plugins
              optional: true
            name: dynamic-plugins
          - name: dynamic-plugins-npmrc
            secret:
              defaultMode: 420
              optional: true
              secretName: dynamic-plugins-npmrc
          - name: postgres-crt
            secret:
              secretName: <crt-secret> (7)
              # ...
    1. Set the value of the upstream.postgresql.enabled parameter to false to disable creating local PostgreSQL instances.

    2. Provide the name of the credential secret.

    3. Provide the name of the credential secret.

    4. Optional: Provide the name of the TLS certificate only for a TLS connection.

    5. Optional: Provide the name of the CA certificate only for a TLS connection.

    6. Optional: Provide the name of the TLS private key only if your TLS connection requires a private key.

    7. Provide the name of the certificate secret if you have configured a TLS connection.

  4. Apply the configuration changes in your Helm configuration file named values.yaml:

    helm upgrade -n <your-namespace> <your-deploy-name> openshift-helm-charts/redhat-developer-hub -f values.yaml --version 1.2.5

2.3. Migrating local databases to an external database server using the Operator

By default, Red Hat Developer Hub hosts the data for each plugin in a PostgreSQL database. When you fetch the list of databases, you might see multiple databases based on the number of plugins configured in Developer Hub. You can migrate the data from an RHDH instance hosted on a local PostgreSQL server to an external PostgreSQL service, such as AWS RDS, Azure database, or Crunchy database. To migrate the data from each RHDH instance, you can use PostgreSQL utilities, such as pg_dump with psql or pgAdmin.

Note

The following procedure uses a database copy script to do a quick migration.

Prerequisites
  • You have installed the pg_dump and psql utilities on your local machine.

  • For data export, you have the PGSQL user privileges to make a full dump of local databases.

  • For data import, you have the PGSQL admin privileges to create an external database and populate it with database dumps.

Procedure
  1. Configure port forwarding for the local PostgreSQL database pod by running the following command on a terminal:

    oc port-forward -n <your-namespace> <pgsql-pod-name> <forward-to-port>:<forward-from-port>

    Where:

    • The <pgsql-pod-name> variable denotes the name of a PostgreSQL pod with the format backstage-psql-<deployment-name>-<_index>.

    • The <forward-to-port> variable denotes the port of your choice to forward PostgreSQL data to.

    • The <forward-from-port> variable denotes the local PostgreSQL instance port, such as 5432.

      Example: Configuring port forwarding
      oc port-forward -n developer-hub backstage-psql-developer-hub-0 15432:5432
  2. Make a copy of the following db_copy.sh script and edit the details based on your configuration:

    #!/bin/bash
    
    to_host=<db-service-host> (1)
    to_port=5432 (2)
    to_user=postgres (3)
    
    from_host=127.0.0.1 (4)
    from_port=15432 (5)
    from_user=postgres (6)
    
    allDB=("backstage_plugin_app" "backstage_plugin_auth" "backstage_plugin_catalog" "backstage_plugin_permission" "backstage_plugin_scaffolder" "backstage_plugin_search") (7)
    
    for db in ${!allDB[@]};
    do
      db=${allDB[$db]}
      echo Copying database: $db
      PGPASSWORD=$TO_PSW psql -h $to_host -p $to_port -U $to_user -c "create database $db;"
      pg_dump -h $from_host -p $from_port -U $from_user -d $db | PGPASSWORD=$TO_PSW psql -h $to_host -p $to_port -U $to_user -d $db
    done
    1. The destination host name, for example, <db-instance-name>.rds.amazonaws.com.

    2. The destination port, such as 5432.

    3. The destination server username, for example, postgres.

    4. The source host name, such as 127.0.0.1.

    5. The source port number, such as the <forward-to-port> variable.

    6. The source server username, for example, postgres.

    7. The name of databases to import in double quotes separated by spaces, for example, ("backstage_plugin_app" "backstage_plugin_auth" "backstage_plugin_catalog" "backstage_plugin_permission" "backstage_plugin_scaffolder" "backstage_plugin_search").

  3. Create a destination database for copying the data:

    TO_PSW=<destination-db-password> /bin/bash /path/to/db_copy.sh (1)
    1. The <destination-db-password> variable denotes the password to connect to the destination database.

    Note

    You can stop port forwarding when the copying of the data is complete. For more information about handling large databases and using the compression tools, see the Handling Large Databases section on the PostgreSQL website.

  4. Reconfigure your Backstage custom resource (CR). For more information, see Configuring an external PostgreSQL instance using the Operator.

  5. Check that the following code is present at the end of your Backstage CR after reconfiguration:

    # ...
    spec:
      database:
        enableLocalDb: false
      application:
      # ...
        extraFiles:
          secrets:
            - name: <crt-secret>
              key: postgres-crt.pem # key name as in <crt-secret> Secret
        extraEnvs:
          secrets:
            - name: <cred-secret>
    # ...
    Note

    Reconfiguring the Backstage CR deletes the corresponding StatefulSet and Pod objects, but does not delete the PersistentVolumeClaim object. Use the following command to delete the local PersistentVolumeClaim object:

    oc -n developer-hub delete pvc <local-psql-pvc-name>

    where the <local-psql-pvc-name> variable is in the data-<psql-pod-name> format.

  6. Apply the configuration changes.

Verification
  1. Verify that your RHDH instance is running with the migrated data and does not contain the local PostgreSQL database by running the following command:

    oc get pods -n <your-namespace>
  2. Check the output for the following details:

    • The backstage-developer-hub-xxx pod is in the Running state.

    • The backstage-psql-developer-hub-0 pod is not available.

      You can also verify these details using the Topology view in the OpenShift Container Platform web console.
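In addition to checking the pods, you can confirm that the copied databases exist on the external server. The following is a minimal check, assuming psql access to the destination host with the credentials used by the copy script:

PGPASSWORD=<destination-db-password> psql -h <db-service-host> -p <db-port> -U postgres -c '\l'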

3. Configuring an RHDH instance with a TLS connection in Kubernetes

You can configure an RHDH instance with a Transport Layer Security (TLS) connection in a Kubernetes cluster, such as an Azure Red Hat OpenShift (ARO) cluster, any cluster from a supported cloud provider, or your own cluster with proper configuration. However, you must use a public Certificate Authority (CA)-signed certificate to configure your Kubernetes cluster.

Prerequisites
  • You have set up an Azure Red Hat OpenShift (ARO) cluster with a public CA-signed certificate. For more information about obtaining CA certificates, refer to your vendor documentation.

  • You have created a namespace and set up a service account with proper read permissions on resources.

    Example: Kubernetes manifest for role-based access control
    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      name: backstage-read-only
    rules:
      - apiGroups:
          - '*'
        resources:
          - pods
          - configmaps
          - services
          - deployments
          - replicasets
          - horizontalpodautoscalers
          - ingresses
          - statefulsets
          - limitranges
          - resourcequotas
          - daemonsets
        verbs:
          - get
          - list
          - watch
    #...
  • You have obtained the secret and the service CA certificate associated with your service account. For one way to read these values with kubectl, see the sketch after this list.

  • You have created some resources and added annotations to them so they can be discovered by the Kubernetes plugin. You can apply these Kubernetes annotations:

    • backstage.io/kubernetes-id to label components

    • backstage.io/kubernetes-namespace to label namespaces
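For the prerequisite about the service account secret and CA certificate, the following is a minimal sketch of how you might read them with kubectl, assuming a long-lived token Secret named <sa-token-secret> (an illustrative name) is bound to your service account:

# Service account token (value for the K8S_SERVICE_ACCOUNT_TOKEN environment variable)
kubectl -n <your-namespace> get secret <sa-token-secret> -o jsonpath='{.data.token}' | base64 -d

# Service CA certificate (value for K8S_CONFIG_CA_DATA; typically kept base64-encoded)
kubectl -n <your-namespace> get secret <sa-token-secret> -o jsonpath='{.data.ca\.crt}'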

Procedure
  1. Enable the Kubernetes plugins in the dynamic-plugins-rhdh.yaml file:

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: dynamic-plugins-rhdh
    data:
      dynamic-plugins.yaml: |
        includes:
          - dynamic-plugins.default.yaml
        plugins:
          - package: ./dynamic-plugins/dist/backstage-plugin-kubernetes-backend-dynamic
            disabled: false (1)
          - package: ./dynamic-plugins/dist/backstage-plugin-kubernetes
            disabled: false (2)
            # ...
    1. Set the value to false to enable the backstage-plugin-kubernetes-backend-dynamic plugin.

    2. Set the value to false to enable the backstage-plugin-kubernetes plugin.

    Note

    The backstage-plugin-kubernetes plugin is currently in Technology Preview. As an alternative, you can use the ./dynamic-plugins/dist/backstage-plugin-topology-dynamic plugin, which is Generally Available (GA).

  2. Set the Kubernetes cluster details and configure the catalog sync options in the app-config-rhdh.yaml file:

    kind: ConfigMap
    apiVersion: v1
    metadata:
      name: app-config-rhdh
    data:
      "app-config-rhdh.yaml": |
        # ...
        catalog:
          rules:
            - allow: [Component, System, API, Resource, Location]
          providers:
            kubernetes:
              openshift:
                cluster: openshift
                processor:
                  namespaceOverride: default
                  defaultOwner: guests
                schedule:
                  frequency:
                    seconds: 30
                  timeout:
                    seconds: 5
        kubernetes:
          serviceLocatorMethod:
            type: 'multiTenant'
          clusterLocatorMethods:
            - type: 'config'
              clusters:
                - url: <target-cluster-api-server-url> (1)
                  name: openshift
                  authProvider: 'serviceAccount'
                  skipTLSVerify: false (2)
                  skipMetricsLookup: true
                  dashboardUrl: <target-cluster-console-url> (3)
                  dashboardApp: openshift
                  serviceAccountToken: ${K8S_SERVICE_ACCOUNT_TOKEN} (4)
                  caData: ${K8S_CONFIG_CA_DATA} (5)
                  # ...
    1. The base URL to the Kubernetes control plane. You can run the kubectl cluster-info command to get the base URL.

    2. Set the value of this parameter to false to enable the verification of the TLS certificate.

    3. Optional: The link to the Kubernetes dashboard managing the ARO cluster.

    4. Optional: Pass the service account token using a K8S_SERVICE_ACCOUNT_TOKEN environment variable that you can define in your secrets-rhdh secret.

    5. Pass the CA data using a K8S_CONFIG_CA_DATA environment variable that you can define in your secrets-rhdh secret.

  3. Save the configuration changes.

Verification
  1. Run the RHDH application to import your catalog:

    kubectl -n rhdh-operator get pods -w
  2. Verify that the pod log shows no errors for your configuration.

  3. Go to Catalog and check the component page in the Developer Hub instance to verify the cluster connection and the presence of your created resources.

Note

If you encounter connection errors, such as certificate issues or permissions, check the message box in the component page or view the logs of the pod.

4. Telemetry data collection

The telemetry data collection feature helps in collecting and analyzing telemetry data to improve your experience with Red Hat Developer Hub. This feature is enabled by default.

Important

As an administrator, you can disable the telemetry data collection feature based on your needs. For example, in an air-gapped environment, you can disable this feature to avoid needless outbound requests affecting the responsiveness of the RHDH application. For more details, see the Disabling telemetry data collection in RHDH section.

Red Hat collects and analyzes the following data:

  • Events of page visits and clicks on links or buttons.

  • System-related information, for example, locale, timezone, user agent including browser and OS details.

  • Page-related information, for example, title, category, extension name, URL, path, referrer, and search parameters.

  • Anonymized IP addresses, recorded as 0.0.0.0.

  • Anonymized username hashes, which are unique identifiers used solely to identify the number of unique users of the RHDH application.

With RHDH, you can customize the telemetry data collection feature and the telemetry Segment source configuration based on your needs.

4.1. Disabling telemetry data collection in RHDH

To disable telemetry data collection, you must disable the analytics-provider-segment plugin either using the Helm Chart or the Red Hat Developer Hub Operator configuration.

4.1.1. Disabling telemetry data collection using the Helm Chart

You can disable the telemetry data collection feature by using the Helm Chart.

Prerequisites
Procedure
  1. In the Developer perspective of the OpenShift Container Platform web console, go to the Helm view to see the list of Helm releases.

  2. Click the overflow menu on the Helm release that you want to use and select Upgrade.

    Note

    You can also create a new Helm release by clicking the Create button and editing the configuration to disable telemetry.

  3. Use either the Form view or YAML view to edit the Helm configuration:

    • Using Form view

      1. Expand Root Schema → global → Dynamic plugins configuration. → List of dynamic plugins that should be installed in the backstage application.

      2. Click the Add list of dynamic plugins that should be installed in the backstage application. link.

      3. Perform one of the following steps:

        • If you have not configured the plugin, add the following value in the Package specification of the dynamic plugin to install. It should be usable by the npm pack command. field:

          ./dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment

        • If you have configured the plugin, find the Package specification of the dynamic plugin to install. It should be usable by the npm pack command. field with the ./dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment value.

      4. Select the Disable the plugin checkbox.

      5. Click Upgrade.

    • Using YAML view

      1. Perform one of the following steps:

        • If you have not configured the plugin, add the following YAML code in your values.yaml Helm configuration file:

          # ...
          global:
            dynamic:
              plugins:
                - package: './dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment'
                  disabled: true
          # ...
        • If you have configured the plugin, search it in your Helm configuration and set the value of the plugins.disabled parameter to true.

      2. Click Upgrade.

4.1.2. Disabling telemetry data collection using the Operator

You can disable the telemetry data collection feature by using the Operator.

Prerequisites
Procedure
  1. Perform one of the following steps:

    • If you have created the dynamic-plugins-rhdh ConfigMap file and not configured the analytics-provider-segment plugin, add the plugin to the list of plugins and set its plugins.disabled parameter to true.

    • If you have created the dynamic-plugins-rhdh ConfigMap file and configured the analytics-provider-segment plugin, search the plugin in the list of plugins and set its plugins.disabled parameter to true.

    • If you have not created the ConfigMap file, create it with the following YAML code:

      kind: ConfigMap
      apiVersion: v1
      metadata:
        name: dynamic-plugins-rhdh
      data:
        dynamic-plugins.yaml: |
          includes:
            - dynamic-plugins.default.yaml
          plugins:
            - package: './dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment'
              disabled: true
  2. Set the value of the dynamicPluginsConfigMapName parameter to the name of the ConfigMap file in your Backstage custom resource:

    # ...
    spec:
      application:
        dynamicPluginsConfigMapName: dynamic-plugins-rhdh
    # ...
  3. Save the configuration changes.

4.2. Enabling telemetry data collection in RHDH

The telemetry data collection feature is enabled by default. However, if you have disabled the feature and want to re-enable it, you must enable the analytics-provider-segment plugin either by using the Helm Chart or the Red Hat Developer Hub Operator configuration.

4.2.1. Enabling telemetry data collection using the Helm Chart

You can enable the telemetry data collection feature by using the Helm Chart.

Prerequisites
Procedure
  1. In the Developer perspective of the OpenShift Container Platform web console, go to the Helm view to see the list of Helm releases.

  2. Click the overflow menu on the Helm release that you want to use and select Upgrade.

    Note

    You can also create a new Helm release by clicking the Create button and editing the configuration to enable telemetry.

  3. Use either the Form view or YAML view to edit the Helm configuration:

    • Using Form view

      1. Expand Root Schema → global → Dynamic plugins configuration. → List of dynamic plugins that should be installed in the backstage application.

      2. Click the Add list of dynamic plugins that should be installed in the backstage application. link.

      3. Perform one of the following steps:

        • If you have not configured the plugin, add the following value in the Package specification of the dynamic plugin to install. It should be usable by the npm pack command. field:

          ./dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment

        • If you have configured the plugin, find the Package specification of the dynamic plugin to install. It should be usable by the npm pack command. field with the ./dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment value.

      4. Clear the Disable the plugin checkbox.

      5. Click Upgrade.

    • Using YAML view

      1. Perform one of the following steps:

        • If you have not configured the plugin, add the following YAML code in your Helm configuration file:

          # ...
          global:
            dynamic:
              plugins:
                - package: './dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment'
                  disabled: false
          # ...
        • If you have configured the plugin, search it in your Helm configuration and set the value of the plugins.disabled parameter to false.

      2. Click Upgrade.

4.2.2. Enabling telemetry data collection using the Operator

You can enable the telemetry data collection feature by using the Operator.

Prerequisites
Procedure
  1. Perform one of the following steps:

    • If you have created the dynamic-plugins-rhdh ConfigMap file and not configured the analytics-provider-segment plugin, add the plugin to the list of plugins and set its plugins.disabled parameter to false.

    • If you have created the dynamic-plugins-rhdh ConfigMap file and configured the analytics-provider-segment plugin, search the plugin in the list of plugins and set its plugins.disabled parameter to false.

    • If you have not created the ConfigMap file, create it with the following YAML code:

      kind: ConfigMap
      apiVersion: v1
      metadata:
        name: dynamic-plugins-rhdh
      data:
        dynamic-plugins.yaml: |
          includes:
            - dynamic-plugins.default.yaml
          plugins:
            - package: './dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment'
              disabled: false
  2. Set the value of the dynamicPluginsConfigMapName parameter to the name of the ConfigMap file in your Backstage custom resource:

    # ...
    spec:
      application:
        dynamicPluginsConfigMapName: dynamic-plugins-rhdh
    # ...
  3. Save the configuration changes.

4.3. Customizing telemetry Segment source

The analytics-provider-segment plugin sends the collected telemetry data to Red Hat by default. However, you can configure a new Segment source that receives telemetry data based on your needs. For configuration, you need a unique Segment write key that points to the Segment source.

Note

By configuring a new Segment source, you can collect and analyze the same set of data that is mentioned in the Telemetry data collection section. You might also need to create your own telemetry data collection notice for your application users.

4.3.1. Customizing telemetry Segment source using the Helm Chart

You can configure integration with your Segment source by using the Helm Chart.

Prerequisites
Procedure
  1. In the Developer perspective of the OpenShift Container Platform web console, go to the Helm view to see the list of Helm releases.

  2. Click the overflow menu on the Helm release that you want to use and select Upgrade.

  3. Use either the Form view or YAML view to edit the Helm configuration:

    • Using Form view

      1. Expand Root Schema → Backstage Chart Schema → Backstage Parameters → Backstage container environment variables.

      2. Click the Add Backstage container environment variables link.

      3. Enter the name and value of the Segment key.

      4. Click Upgrade.

    • Using YAML view

      1. Add the following YAML code in your Helm configuration file:

        # ...
        upstream:
          backstage:
            extraEnvVars:
              - name: SEGMENT_WRITE_KEY
                value: <segment_key> # (1)
        # ...
        1. Replace <segment_key> with a unique identifier for your Segment source.

      2. Click Upgrade.

4.3.2. Customizing telemetry Segment source using the Operator

You can configure integration with your Segment source by using the Operator.

Prerequisites
Procedure
  1. Add the following YAML code in your Backstage custom resource (CR):

    # ...
    spec:
      application:
        extraEnvs:
          envs:
            - name: SEGMENT_WRITE_KEY
              value: <segment_key> # (1)
    # ...
    1. Replace <segment_key> with a unique identifier for your Segment source.

  2. Save the configuration changes.

5. Enabling observability for Red Hat Developer Hub on OpenShift Container Platform

In OpenShift Container Platform, metrics are exposed through an HTTP service endpoint under the /metrics canonical name. You can create a ServiceMonitor custom resource (CR) to scrape metrics from a service endpoint in a user-defined project.

5.1. Enabling metrics monitoring in a Helm chart installation on an OpenShift Container Platform cluster

You can enable and view metrics for a Red Hat Developer Hub Helm deployment from the Developer perspective of the OpenShift Container Platform web console.

Prerequisites
  • Your OpenShift Container Platform cluster has monitoring for user-defined projects enabled.

  • You have installed Red Hat Developer Hub on OpenShift Container Platform using the Helm chart.

Procedure
  1. From the Developer perspective in the OpenShift Container Platform web console, select the Topology view.

  2. Click the overflow menu of the Red Hat Developer Hub Helm chart, and select Upgrade.

  3. On the Upgrade Helm Release page, select the YAML view option in Configure via, then configure the metrics section in the YAML, as shown in the following example:

    upstream:
    # ...
      metrics:
        serviceMonitor:
          enabled: true
          path: /metrics
    # ...
  4. Click Upgrade.

Verification
  1. From the Developer perspective in the OpenShift Container Platform web console, select the Observe view.

  2. Click the Metrics tab to view metrics for Red Hat Developer Hub pods.
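You can also confirm from the CLI that the chart created the ServiceMonitor resource; this assumes monitoring for user-defined projects is enabled and that you are logged in with the oc CLI:

oc get servicemonitor -n <your-namespace>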

5.2. Enabling metrics monitoring in a Red Hat Developer Hub Operator installation on an OpenShift Container Platform cluster

You can enable and view metrics for an Operator-installed Red Hat Developer Hub instance from the Developer perspective of the OpenShift Container Platform web console.

Prerequisites
  • Your OpenShift Container Platform cluster has monitoring for user-defined projects enabled.

  • You have installed Red Hat Developer Hub on OpenShift Container Platform using the Red Hat Developer Hub Operator.

  • You have installed the OpenShift CLI (oc).

Procedure

Currently, the Red Hat Developer Hub Operator does not support creating a ServiceMonitor custom resource (CR) by default. You must complete the following steps to create a ServiceMonitor CR to scrape metrics from the endpoint.

  1. Create the ServiceMonitor CR as a YAML file:

    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      name: <custom_resource_name> # (1)
      namespace: <project_name> # (2)
      labels:
        app.kubernetes.io/instance: <custom_resource_name>
        app.kubernetes.io/name: backstage
    spec:
      namespaceSelector:
        matchNames:
          - <project_name>
      selector:
        matchLabels:
          rhdh.redhat.com/app: backstage-<custom_resource_name>
      endpoints:
      - port: backend
        path: '/metrics'
    1. Replace <custom_resource_name> with the name of your Red Hat Developer Hub CR.

    2. Replace <project_name> with the name of the OpenShift Container Platform project where your Red Hat Developer Hub instance is running.

  2. Apply the ServiceMonitor CR by running the following command:

    oc apply -f <filename>
Verification
  1. From the Developer perspective in the OpenShift Container Platform web console, select the Observe view.

  2. Click the Metrics tab to view metrics for Red Hat Developer Hub pods.
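If you want to check the endpoint itself before looking at the console, you can port-forward the Developer Hub backend service and request /metrics directly. This is a minimal sketch; the service name backstage-<custom_resource_name> and the backend port 7007 are assumptions based on the default Operator configuration:

oc -n <project_name> port-forward svc/backstage-<custom_resource_name> 7007:7007 &
curl -s http://localhost:7007/metrics | head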

6. Running the RHDH application behind a corporate proxy

You can run the RHDH application behind a corporate proxy by setting any of the following environment variables before starting the application:

  • HTTP_PROXY: Denotes the proxy to use for HTTP requests.

  • HTTPS_PROXY: Denotes the proxy to use for HTTPS requests.

Additionally, you can set the NO_PROXY environment variable to exclude certain domains from proxying. The variable value is a comma-separated list of hostnames that can be reached without going through the proxy, even if one is specified.

6.1. Configuring proxy information in Helm deployment

For Helm-based deployment, either a developer or a cluster administrator with permissions to create resources in the cluster can configure the proxy variables in a values.yaml Helm configuration file.

Prerequisites
  • You have installed the Red Hat Developer Hub application.

Procedure
  1. Set the proxy information in your Helm configuration file:

    upstream:
      backstage:
        extraEnvVars:
          - name: HTTP_PROXY
            value: '<http_proxy_url>'
          - name: HTTPS_PROXY
            value: '<https_proxy_url>'
          - name: NO_PROXY
            value: '<no_proxy_settings>'

    Where,

    <http_proxy_url>

    Denotes a variable that you must replace with the HTTP proxy URL.

    <https_proxy_url>

    Denotes a variable that you must replace with the HTTPS proxy URL.

    <no_proxy_settings>

    Denotes a variable that you must replace with comma-separated URLs, which you want to exclude from proxying, for example, foo.com,baz.com.

    Example: Setting proxy variables using Helm Chart
    upstream:
      backstage:
        extraEnvVars:
          - name: HTTP_PROXY
            value: 'http://10.10.10.105:3128'
          - name: HTTPS_PROXY
            value: 'http://10.10.10.106:3128'
          - name: NO_PROXY
            value: 'localhost,example.org'
  2. Save the configuration changes.

6.2. Configuring proxy information in Operator deployment

For Operator-based deployment, the approach you use for proxy configuration is based on your role:

  • As a cluster administrator with access to the Operator namespace, you can configure the proxy variables in the Operator’s default ConfigMap file. This configuration applies the proxy settings to all the users of the Operator.

  • As a developer, you can configure the proxy variables in a custom resource (CR) file. This configuration applies the proxy settings to the RHDH application created from that CR.

Prerequisites
  • You have installed the Red Hat Developer Hub application.

Procedure
  1. Perform one of the following steps based on your role:

    • As an administrator, set the proxy information in the Operator’s default ConfigMap file:

      1. Search for a ConfigMap file named backstage-default-config in the default namespace rhdh-operator and open it.

      2. Find the deployment.yaml key.

      3. Set the value of the HTTP_PROXY, HTTPS_PROXY, and NO_PROXY environment variables in the Deployment spec as shown in the following example:

        Example: Setting proxy variables in a ConfigMap file
        # Other fields omitted
          deployment.yaml: |-
            apiVersion: apps/v1
            kind: Deployment
            spec:
              template:
                spec:
                  # Other fields omitted
                  initContainers:
                    - name: install-dynamic-plugins
                      # command omitted
                      env:
                        - name: NPM_CONFIG_USERCONFIG
                          value: /opt/app-root/src/.npmrc.dynamic-plugins
                        - name: HTTP_PROXY
                          value: 'http://10.10.10.105:3128'
                        - name: HTTPS_PROXY
                          value: 'http://10.10.10.106:3128'
                        - name: NO_PROXY
                          value: 'localhost,example.org'
                      # Other fields omitted
                  containers:
                    - name: backstage-backend
                      # Other fields omitted
                      env:
                        - name: APP_CONFIG_backend_listen_port
                          value: "7007"
                        - name: HTTP_PROXY
                          value: 'http://10.10.10.105:3128'
                        - name: HTTPS_PROXY
                          value: 'http://10.10.10.106:3128'
                        - name: NO_PROXY
                          value: 'localhost,example.org'
    • As a developer, set the proxy information in your custom resource (CR) file as shown in the following example:

      Example: Setting proxy variables in a CR file
      spec:
        # Other fields omitted
        application:
          extraEnvs:
            envs:
              - name: HTTP_PROXY
                value: 'http://10.10.10.105:3128'
              - name: HTTPS_PROXY
                value: 'http://10.10.10.106:3128'
              - name: NO_PROXY
                value: 'localhost,example.org'
  2. Save the configuration changes.

7. Red Hat Developer Hub integration with Amazon Web Services (AWS)

You can integrate your Red Hat Developer Hub application with Amazon Web Services (AWS), which can help you streamline your workflows within the AWS ecosystem. Integrating the Developer Hub resources with AWS provides access to a comprehensive suite of tools, services, and solutions.

The integration with AWS requires deploying Developer Hub on Amazon Elastic Kubernetes Service (EKS) using one of the following methods:

  • The Helm chart

  • The Red Hat Developer Hub Operator

7.1. Monitoring and logging with Amazon Web Services (AWS) in Red Hat Developer Hub

In Red Hat Developer Hub, monitoring and logging are facilitated through Amazon Web Services (AWS) integration. With features like Amazon Managed Service for Prometheus for real-time monitoring and Amazon CloudWatch for comprehensive logging, you can ensure the reliability, scalability, and compliance of your Developer Hub application hosted on AWS infrastructure.

This integration enables you to oversee, diagnose, and refine your applications in the Red Hat ecosystem, leading to an improved development and operational journey.

7.1.1. Monitoring with Amazon Prometheus

Red Hat Developer Hub provides Prometheus metrics related to the running application. For more information about enabling or deploying Prometheus for EKS clusters, see Prometheus metrics in the Amazon documentation.

To monitor Developer Hub using Amazon Prometheus, you need to create an Amazon Managed Service for Prometheus workspace and configure the ingestion of the Developer Hub Prometheus metrics. For more information, see the Create a workspace and Ingest Prometheus metrics to the workspace sections in the Amazon documentation.

After ingesting Prometheus metrics into the created workspace, you can configure the metrics scraping to extract data from pods based on specific pod annotations.

Configuring annotations for monitoring

You can configure the annotations for monitoring in both Helm deployment and Operator-backed deployment.

Helm deployment

To annotate the backstage pod for monitoring, update your values.yaml file as follows:

upstream:
  backstage:
    # --- TRUNCATED ---
    podAnnotations:
      prometheus.io/scrape: 'true'
      prometheus.io/path: '/metrics'
      prometheus.io/port: '7007'
      prometheus.io/scheme: 'http'
Operator-backed deployment
Procedure
  1. As an administrator of the operator, edit the default configuration to add Prometheus annotations as follows:

    # Update OPERATOR_NS accordingly
    OPERATOR_NS=rhdh-operator
    kubectl edit configmap backstage-default-config -n "${OPERATOR_NS}"
  2. Find the deployment.yaml key in the ConfigMap and add the annotations to the spec.template.metadata.annotations field as follows:

    deployment.yaml: |-
      apiVersion: apps/v1
      kind: Deployment
      # --- truncated ---
      spec:
        template:
          # --- truncated ---
          metadata:
            labels:
             rhdh.redhat.com/app:  # placeholder for 'backstage-<cr-name>'
            # --- truncated ---
            annotations:
              prometheus.io/scrape: 'true'
              prometheus.io/path: '/metrics'
              prometheus.io/port: '7007'
              prometheus.io/scheme: 'http'
      # --- truncated ---
  3. Save your changes.

Verification

To verify if the scraping works:

  1. Use kubectl to port-forward the Prometheus console to your local machine as follows:

    kubectl --namespace=prometheus port-forward deploy/prometheus-server 9090
  2. Open your web browser and navigate to http://localhost:9090 to access the Prometheus console.

  3. Monitor relevant metrics, such as process_cpu_user_seconds_total.
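As an illustration, the following PromQL expression plots the CPU usage rate for the metric named in the previous step; the container label is an assumption and depends on your scrape configuration:

rate(process_cpu_user_seconds_total{container="backstage-backend"}[5m])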

7.1.2. Logging with Amazon CloudWatch logs

Logging within the Red Hat Developer Hub relies on the winston library. By default, logs at the debug level are not recorded. To activate debug logs, you must set the environment variable LOG_LEVEL to debug in your Red Hat Developer Hub instance.

Configuring the application log level

You can configure the application log level in both Helm deployment and Operator-backed deployment.

Helm deployment

To update the logging level, add the environment variable LOG_LEVEL to your Helm chart’s values.yaml file:

upstream:
  backstage:
    # --- Truncated ---
    extraEnvVars:
      - name: LOG_LEVEL
        value: debug
Operator-backed deployment

You can modify the logging level by including the environment variable LOG_LEVEL in your custom resource as follows:

spec:
  # Other fields omitted
  application:
    extraEnvs:
      envs:
        - name: LOG_LEVEL
          value: debug
Retrieving logs from Amazon CloudWatch

CloudWatch Container Insights is used to capture logs and metrics for Amazon EKS. For more information, see the Logging for Amazon EKS documentation.

To capture the logs and metrics, install the Amazon CloudWatch Observability EKS add-on in your cluster. Following the setup of Container Insights, you can access container logs using Logs Insights or Live Tail views.
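One way to install the add-on is with the AWS CLI; this is a sketch, assuming your CLI is configured for the account and region that host the cluster and that <ClusterName> is your EKS cluster name:

aws eks create-addon --cluster-name <ClusterName> --addon-name amazon-cloudwatch-observability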

CloudWatch names the log group where all container logs are consolidated in the following manner:

/aws/containerinsights/<ClusterName>/application

Following is an example query to retrieve logs from the Developer Hub instance:

fields @timestamp, @message, kubernetes.container_name
| filter kubernetes.container_name in ["install-dynamic-plugins", "backstage-backend"]

7.2. Using Amazon Cognito as an authentication provider in Red Hat Developer Hub

Amazon Cognito is an AWS service for adding an authentication layer to Developer Hub. You can sign in directly to Developer Hub using a user pool or federate through a third-party identity provider.

Although Amazon Cognito is not part of the core authentication providers for the Developer Hub, it can be integrated using the generic OpenID Connect (OIDC) provider.

You can configure your Developer Hub in both Helm Chart and Operator-backed deployments.

Prerequisites
  • You have a User Pool or you have created a new one. For more information about user pools, see Amazon Cognito user pools documentation.

    Note

    Ensure that you have noted the AWS region where the user pool is located and the user pool ID.

  • You have created an App Client within your user pool for integrating the hosted UI. For more information, see Setting up the hosted UI with the Amazon Cognito console.

    When setting up the hosted UI using the Amazon Cognito console, ensure to make the following adjustments:

    1. In the Allowed callback URL(s) section, include the URL https://<rhdh_url>/api/auth/oidc/handler/frame. Make sure to replace <rhdh_url> with your Developer Hub application’s URL, such as my.rhdh.example.com.

    2. Similarly, in the Allowed sign-out URL(s) section, add https://<rhdh_url>. Replace <rhdh_url> with your Developer Hub application’s URL, such as my.rhdh.example.com.

    3. Under OAuth 2.0 grant types, select Authorization code grant to return an authorization code.

    4. Under OpenID Connect scopes, ensure to select at least the following scopes:

      • OpenID

      • Profile

      • Email

    Helm deployment
    Procedure
    1. Edit or create your custom app-config-rhdh ConfigMap as follows:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: app-config-rhdh
      data:
        "app-config-rhdh.yaml": |
          # --- Truncated ---
          app:
            title: Red Hat Developer Hub
      
          signInPage: oidc
          auth:
            environment: production
            session:
              secret: ${AUTH_SESSION_SECRET}
            providers:
              oidc:
                production:
                  clientId: ${AWS_COGNITO_APP_CLIENT_ID}
                  clientSecret: ${AWS_COGNITO_APP_CLIENT_SECRET}
                  metadataUrl: ${AWS_COGNITO_APP_METADATA_URL}
                  callbackUrl: ${AWS_COGNITO_APP_CALLBACK_URL}
                  scope: 'openid profile email'
                  prompt: auto
    2. Edit or create your custom secrets-rhdh Secret using the following template:

      apiVersion: v1
      kind: Secret
      metadata:
        name: secrets-rhdh
      stringData:
        AUTH_SESSION_SECRET: "my super auth session secret - change me!!!"
        AWS_COGNITO_APP_CLIENT_ID: "my-aws-cognito-app-client-id"
        AWS_COGNITO_APP_CLIENT_SECRET: "my-aws-cognito-app-client-secret"
        AWS_COGNITO_APP_METADATA_URL: "https://cognito-idp.[region].amazonaws.com/[userPoolId]/.well-known/openid-configuration"
        AWS_COGNITO_APP_CALLBACK_URL: "https://[rhdh_dns]/api/auth/oidc/handler/frame"
    3. Add references of both the ConfigMap and Secret resources in your values.yaml file:

      upstream:
        backstage:
          image:
            pullSecrets:
            - rhdh-pull-secret
          podSecurityContext:
            fsGroup: 2000
          extraAppConfig:
            - filename: app-config-rhdh.yaml
              configMapRef: app-config-rhdh
          extraEnvVarsSecrets:
            - secrets-rhdh
    4. Upgrade the Helm deployment:

      helm upgrade rhdh \
        openshift-helm-charts/redhat-developer-hub \
        [--version 1.2.5] \
        --values /path/to/values.yaml
    Operator-backed deployment
    1. Add the following code to your app-config-rhdh ConfigMap:

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: app-config-rhdh
      data:
        "app-config-rhdh.yaml": |
          # --- Truncated ---
      
          signInPage: oidc
          auth:
            # Production to disable guest user login
            environment: production
            # Providing an auth.session.secret is needed because the oidc provider requires session support.
            session:
              secret: ${AUTH_SESSION_SECRET}
            providers:
              oidc:
                production:
                  # See https://github.com/backstage/backstage/blob/master/plugins/auth-backend-module-oidc-provider/config.d.ts
                  clientId: ${AWS_COGNITO_APP_CLIENT_ID}
                  clientSecret: ${AWS_COGNITO_APP_CLIENT_SECRET}
                  metadataUrl: ${AWS_COGNITO_APP_METADATA_URL}
                  callbackUrl: ${AWS_COGNITO_APP_CALLBACK_URL}
                  # Minimal set of scopes needed. Feel free to add more if needed.
                  scope: 'openid profile email'
      
                  # Note that by default, this provider will use the 'none' prompt, which assumes that you are already logged in to the IDP.
                  # You should set prompt to:
                  # - auto: will let the IDP decide if you need to log on or if you can skip login when you have an active SSO session
                  # - login: will force the IDP to always present a login form to the user
                  prompt: auto
    2. Add the following code to your secrets-rhdh Secret:

      apiVersion: v1
      kind: Secret
      metadata:
        name: secrets-rhdh
      stringData:
        # --- Truncated ---
      
        # TODO: Change auth session secret.
        AUTH_SESSION_SECRET: "my super auth session secret - change me!!!"
      
        # TODO: user pool app client ID
        AWS_COGNITO_APP_CLIENT_ID: "my-aws-cognito-app-client-id"
      
        # TODO: user pool app client Secret
        AWS_COGNITO_APP_CLIENT_SECRET: "my-aws-cognito-app-client-secret"
      
        # TODO: Replace region and user pool ID
        AWS_COGNITO_APP_METADATA_URL: "https://cognito-idp.[region].amazonaws.com/[userPoolId]/.well-known/openid-configuration"
      
        # TODO: Replace <rhdh_dns>
        AWS_COGNITO_APP_CALLBACK_URL: "https://[rhdh_dns]/api/auth/oidc/handler/frame"
    3. Ensure your Custom Resource contains references to both the app-config-rhdh ConfigMap and secrets-rhdh Secret:

      apiVersion: rhdh.redhat.com/v1alpha1
      kind: Backstage
      metadata:
        # TODO: this is the name of your Developer Hub instance
        name: my-rhdh
      spec:
        application:
          imagePullSecrets:
          - "rhdh-pull-secret"
          route:
            enabled: false
          appConfig:
            configMaps:
              - name: "app-config-rhdh"
          extraEnvs:
            secrets:
              - name: "secrets-rhdh"
  4. Optional: If you have an existing Developer Hub instance backed by the Custom Resource and you have not edited the Custom Resource, you can manually delete the Developer Hub deployment to recreate it by using the Operator. Run the following command to delete the Developer Hub deployment:

      kubectl delete deployment -l app.kubernetes.io/instance=<CR_NAME>
Verification
  1. Navigate to your Developer Hub web URL and sign in using OIDC authentication, which prompts you to authenticate through the configured AWS Cognito user pool.

  2. Once logged in, access Settings and verify user details.

8. Red Hat Developer Hub integration with Microsoft Azure Kubernetes Service (AKS)

You can integrate Developer Hub with Microsoft Azure Kubernetes Service (AKS), which provides a significant advancement in development, offering a streamlined environment for building, deploying, and managing your applications.

This integration requires the deployment of Developer Hub on AKS using one of the following methods:

  • The Helm chart

  • The Red Hat Developer Hub Operator

8.1. Monitoring and logging with Azure Kubernetes Services (AKS) in Red Hat Developer Hub

Monitoring and logging are integral aspects of managing and maintaining Azure Kubernetes Services (AKS) in Red Hat Developer Hub. With features like Managed Prometheus Monitoring and Azure Monitor integration, administrators can efficiently monitor resource utilization, diagnose issues, and ensure the reliability of their containerized workloads.

To enable Managed Prometheus Monitoring, use the --enable-azure-monitor-metrics option with either the az aks create or az aks update command, depending on whether you are creating a new cluster or updating an existing one, as shown in the following example:

az aks create/update --resource-group <your-ResourceGroup> --name <your-Cluster> --enable-azure-monitor-metrics

The previous command installs the metrics add-on, which gathers Prometheus metrics, and enables monitoring of Azure resources through both native Azure Monitor metrics and Prometheus metrics. You can view the results in the portal under Monitoring → Insights. For more information, see Monitor Azure resources with Azure Monitor.

Furthermore, metrics from both the Managed Prometheus service and Azure Monitor can be accessed through the Azure Managed Grafana service. For more information, see the Link a Grafana workspace section.

By default, Prometheus uses the minimum ingesting profile, which optimizes ingestion volume and sets default configurations for scrape frequency, targets, and metrics collected. The default settings can be customized through custom configuration. Azure offers various methods, including using different ConfigMaps, to provide scrape configuration and other metric add-on settings. For more information about default configuration, see Default Prometheus metrics configuration in Azure Monitor and Customize scraping of Prometheus metrics in Azure Monitor managed service for Prometheus documentation.
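
For example, the Azure documentation referenced above describes a ConfigMap named ama-metrics-prometheus-config in the kube-system namespace for supplying a custom scrape configuration. The following manifest is a minimal sketch; the job name, target, and port are illustrative assumptions, so verify the ConfigMap name and keys against the Azure documentation before applying it:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ama-metrics-prometheus-config
  namespace: kube-system
data:
  prometheus-config: |
    scrape_configs:
      # Hypothetical job that scrapes a Developer Hub metrics endpoint; adjust the target to your Service and metrics port.
      - job_name: rhdh
        scrape_interval: 30s
        static_configs:
          - targets: ['<rhdh_service>.<your_namespace>.svc.cluster.local:<metrics_port>']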

8.1.1. Viewing logs with Azure Kubernetes Services (AKS)

You can access live data logs generated by Kubernetes objects and collect log data in Container Insights within AKS.

Prerequisites
  • You have deployed Developer Hub on AKS.


Procedure
View live logs from your Developer Hub instance
  1. Navigate to the Azure Portal.

  2. Search for the resource group <your-ResourceGroup> and locate your AKS cluster <your-Cluster>.

  3. Select Kubernetes resources → Workloads from the menu.

  4. Select the <your-rhdh-cr>-developer-hub deployment (for a Helm chart installation) or the <your-rhdh-cr>-backstage deployment (for an Operator-backed installation).

  5. Click Live Logs in the left menu.

  6. Select the pod.

    Note
    There should be only a single pod.

Live log data is collected and displayed.

View real-time log data from the Container Engine
  1. Navigate to the Azure Portal.

  2. Search for the resource group <your-ResourceGroup> and locate your AKS cluster <your-Cluster>.

  3. Select Monitoring → Insights from the menu.

  4. Go to the Containers tab.

  5. Find the backstage-backend container and click it to view real-time log data as it is generated by the Container Engine.

8.2. Using Microsoft Azure as an authentication provider in Red Hat Developer Hub

The core-plugin-api package in Developer Hub comes integrated with the Microsoft Azure authentication provider, which authenticates sign-in by using Azure OAuth.

Prerequisites
  • You have deployed Developer Hub on AKS.

8.2.1. Using Microsoft Azure as an authentication provider in Helm deployment

You can use Microsoft Azure as an authentication provider in Red Hat Developer Hub, when installed using the Helm Chart.

Procedure
  1. After the application is registered, note down the following:

    • clientId: Application (client) ID, found under App Registration → Overview.

    • clientSecret: Secret, found under App Registration → Certificates & secrets (create a new secret if needed).

    • tenantId: Directory (tenant) ID, found under App Registration → Overview.

  2. Ensure the following fragment is included in your Developer Hub ConfigMap:

    auth:
      environment: production
      providers:
        microsoft:
          production:
            clientId: ${AZURE_CLIENT_ID}
            clientSecret: ${AZURE_CLIENT_SECRET}
            tenantId: ${AZURE_TENANT_ID}
            domainHint: ${AZURE_TENANT_ID}
            additionalScopes:
              - Mail.Send

    You can either create a new file or add it to an existing one.
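
    For example, if you keep this fragment in its own application configuration ConfigMap, a complete manifest might look like the following sketch. The ConfigMap name and file name are illustrative placeholders; reuse your existing app-config ConfigMap if you already have one. You can wrap the Secret fragment in step 4 in a complete Secret manifest in the same way.

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: <app-config-containing-azure>
    data:
      "app-config-azure.yaml": |
        auth:
          environment: production
          providers:
            microsoft:
              production:
                clientId: ${AZURE_CLIENT_ID}
                clientSecret: ${AZURE_CLIENT_SECRET}
                tenantId: ${AZURE_TENANT_ID}
                domainHint: ${AZURE_TENANT_ID}
                additionalScopes:
                  - Mail.Send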

  3. Apply the ConfigMap to your Kubernetes cluster:

    kubectl -n <your_namespace> apply -f <app-config>.yaml
  4. Create or reuse an existing Secret containing Azure credentials and add the following fragment:

    stringData:
      AZURE_CLIENT_ID: <value-of-clientId>
      AZURE_CLIENT_SECRET: <value-of-clientSecret>
      AZURE_TENANT_ID: <value-of-tenantId>
  5. Apply the secret to your Kubernetes cluster:

    kubectl -n <your_namespace> apply -f <azure-secrets>.yaml
  6. Ensure your values.yaml file references the previously created ConfigMap and Secret:

    upstream:
      backstage:
      ...
        extraAppConfig:
          - filename: ...
            configMapRef: <app-config-containing-azure>
        extraEnvVarsSecrets:
          - <secret-containing-azure>
  7. Optional: If the Helm Chart is already installed, upgrade it:

    helm -n <your_namespace> upgrade -f <your-values.yaml> <your_deploy_name> redhat-developer/backstage --version 1.2.5
  8. Optional: If you did not change your Helm configuration, for example, if you only updated the ConfigMap and Secret that it references, refresh your Developer Hub deployment by removing the corresponding pods:

    kubectl -n <your_namespace> delete pods -l backstage.io/app=backstage-<your-rhdh-cr>

8.2.2. Using Microsoft Azure as an authentication provider in Operator-backed deployment

You can use Microsoft Azure as an authentication provider in Red Hat Developer Hub, when installed using the Operator.

Procedure
  1. After the application is registered, note down the following:

    • clientId: Application (client) ID, found under App Registration → Overview.

    • clientSecret: Secret, found under App Registration → Certificates & secrets (create a new secret if needed).

    • tenantId: Directory (tenant) ID, found under App Registration → Overview.

  2. Ensure the following fragment is included in your Developer Hub ConfigMap:

    auth:
      environment: production
      providers:
        microsoft:
          production:
            clientId: ${AZURE_CLIENT_ID}
            clientSecret: ${AZURE_CLIENT_SECRET}
            tenantId: ${AZURE_TENANT_ID}
            domainHint: ${AZURE_TENANT_ID}
            additionalScopes:
              - Mail.Send

    You can either create a new file or add it to an existing one.

  3. Apply the ConfigMap to your Kubernetes cluster:

    kubectl -n <your_namespace> apply -f <app-config>.yaml
  4. Create or reuse an existing Secret containing Azure credentials and add the following fragment:

    stringData:
      AZURE_CLIENT_ID: <value-of-clientId>
      AZURE_CLIENT_SECRET: <value-of-clientSecret>
      AZURE_TENANT_ID: <value-of-tenantId>
  5. Apply the secret to your Kubernetes cluster:

    kubectl -n <your_namespace> apply -f <azure-secrets>.yaml
  6. Ensure your Custom Resource manifest contains references to the previously created ConfigMap and Secret:

    apiVersion: rhdh.redhat.com/v1alpha1
    kind: Backstage
    metadata:
      name: <your-rhdh-cr>
    spec:
      application:
        imagePullSecrets:
        - rhdh-pull-secret
        route:
          enabled: false
        appConfig:
          configMaps:
            - name: <app-config-containing-azure>
        extraEnvs:
          secrets:
            - name: <secret-containing-azure>
  7. Apply your Custom Resource manifest:

    kubectl -n <your_namespace> apply -f rhdh.yaml
  8. Optional: If your rhdh.yaml file has not changed, for example, if you only updated the ConfigMap and Secret that it references, refresh your Developer Hub deployment by removing the corresponding pods:

    kubectl -n <your_namespace> delete pods -l backstage.io/app=backstage-<your-rhdh-cr>

9. Managing templates

A template is a form, composed of different UI fields, that is defined in a YAML file. Templates include actions, which are steps that are executed in sequential order and can be executed conditionally.

You can use templates to easily create Red Hat Developer Hub components, and then publish these components to different locations, such as the Red Hat Developer Hub software catalog, or repositories in GitHub or GitLab.

9.1. Creating a template by using the Template Editor

You can create a template by using the Template Editor.

Procedure
  1. Access the Template Editor by using one of the following options:

    Template Editor
    • Open the URL https://<rhdh_url>/create/edit for your Red Hat Developer Hub instance.

    • Click Create…​ in the navigation menu of the Red Hat Developer Hub console, then click the overflow menu button and select Template editor.

  2. Click Edit Template Form.

  3. Optional: Modify the YAML definition for the parameters of your template. For more information about these parameters, see Creating a template as a YAML file.

  4. In the Name * field, enter a unique name for your template.

  5. From the Owner drop-down menu, choose an owner for the template.

  6. Click Next.

  7. In the Repository Location view, enter the following information about the hosted repository that you want to publish the template to:

    1. Select an available Host from the drop-down menu.

      Note

      Available hosts are defined in the YAML parameters by the allowedHosts field:

      Example YAML
      # ...
              ui:options:
                allowedHosts:
                  - github.com
      # ...
    2. In the Owner * field, enter an organization, user or project that the hosted repository belongs to.

    3. In the Repository * field, enter the name of the hosted repository.

    4. Click Review.

  8. Review the information for accuracy, then click Create.

Verification
  1. Click the Catalog tab in the navigation panel.

  2. In the Kind drop-down menu, select Template.

  3. Confirm that your template is shown in the list of existing templates.

9.2. Creating a template as a YAML file

You can create a template by defining a Template object as a YAML file.

The Template object describes the template and its metadata. It also contains required input variables and a list of actions that are executed by the scaffolding service.

Template object example
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: template-name # (1)
  title: Example template # (2)
  description: An example template for v1beta3 scaffolder. # (3)
spec:
  owner: backstage/techdocs-core # (4)
  type: service # (5)
  parameters: # (6)
    - title: Fill in some steps
      required:
        - name
      properties:
        name:
          title: Name
          type: string
          description: Unique name of the component
        owner:
          title: Owner
          type: string
          description: Owner of the component
    - title: Choose a location
      required:
        - repoUrl
      properties:
        repoUrl:
          title: Repository Location
          type: string
  steps: # (7)
    - id: fetch-base
      name: Fetch Base
      action: fetch:template
      # ...
  output: # (8)
    links:
      - title: Repository # (9)
        url: ${{ steps['publish'].output.remoteUrl }}
      - title: Open in catalog # (10)
        icon: catalog
        entityRef: ${{ steps['register'].output.entityRef }}
# ...
  1. Specify a name for the template.

  2. Specify a title for the template. This is the title that is visible on the template tile in the Create…​ view.

  3. Specify a description for the template. This is the description that is visible on the template tile in the Create…​ view.

  4. Specify the ownership of the template. The owner field provides information about who is responsible for maintaining or overseeing the template within the system or organization. In the provided example, the owner field is set to backstage/techdocs-core. This means that this template belongs to the techdocs-core project in the backstage namespace.

  5. Specify the component type. Any string value is accepted for this required field, but your organization should establish a proper taxonomy for these. Red Hat Developer Hub instances may read this field and behave differently depending on its value. For example, a website type component may present tooling in the Red Hat Developer Hub interface that is specific to just websites.

    The following values are common for this field:

    service

    A backend service, typically exposing an API.

    website

    A website.

    library

    A software library, such as an npm module or a Java library.

  6. Use the parameters section to specify parameters for user input that are shown in a form view when a user creates a component by using the template in the Red Hat Developer Hub console. Each parameters subsection, defined by a title and properties, creates a new form page with that definition.

  7. Use the steps section to specify steps that are executed in the backend. These steps must be defined by using a unique step ID, a name, and an action. You can view actions that are available on your Red Hat Developer Hub instance by visiting the URL https://<rhdh_url>/create/actions.

  8. Use the output section to specify the structure of output data that is created when the template is used. The output section, particularly the links subsection, provides valuable references and URLs that users can utilize to access and interact with components that are created from the template.

  9. Provides a reference or URL to the repository associated with the generated component.

  10. Provides a reference or URL that allows users to open the generated component in a catalog or directory where various components are listed.
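
Example publish and register steps

The output links in the previous example reference publish and register steps. The following sketch shows how such steps are commonly defined by using the upstream Backstage scaffolder actions fetch:template, publish:github, and catalog:register. Treat the action names and input fields as assumptions and confirm them against the actions listed at https://<rhdh_url>/create/actions.

spec:
  # ...
  steps:
    - id: fetch-base
      name: Fetch Base
      action: fetch:template
      input:
        url: ./skeleton
        values:
          name: ${{ parameters.name }}
    - id: publish
      name: Publish to GitHub
      action: publish:github
      input:
        repoUrl: ${{ parameters.repoUrl }}
    - id: register
      name: Register in catalog
      action: catalog:register
      input:
        repoContentsUrl: ${{ steps['publish'].output.repoContentsUrl }}
        catalogInfoPath: '/catalog-info.yaml'
# ...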

9.3. Importing an existing template to Red Hat Developer Hub

You can add an existing template to your Red Hat Developer Hub instance by using the Catalog Processor.

Prerequisites
  • You have created a directory or repository that contains at least one template YAML file.

  • If you want to use a template that is stored in a repository such as GitHub or GitLab, you must configure a Red Hat Developer Hub integration for your provider.

Procedure
  • In the app-config.yaml configuration file, modify the catalog.rules section to include a rule for templates, and configure the catalog.locations section to point to the template that you want to add, as shown in the following example:

    # ...
    catalog:
      rules:
        - allow: [Template] # (1)
      locations:
        - type: url # (2)
          target: https://<repository_url>/example-template.yaml # (3)
    # ...
    1. To allow new templates to be added to the catalog, you must add a Template rule.

    2. If you are importing templates from a repository, such as GitHub or GitLab, use the url type.

    3. Specify the URL for the template.

Verification
  1. Click the Catalog tab in the navigation panel.

  2. In the Kind drop-down menu, select Template.

  3. Confirm that your template is shown in the list of existing templates.

10. Configuring the TechDocs plugin in Red Hat Developer Hub

The Red Hat Developer Hub TechDocs plugin helps your organization create, find, and use documentation in a central location and in a standardized way. For example:

Docs-like-code approach

Write your technical documentation in Markdown files that are stored inside your project repository along with your code.

Documentation site generation

Use MkDocs to create a full-featured, Markdown-based, static HTML site for your documentation that is rendered centrally in Developer Hub.

Documentation site metadata and integrations

See additional metadata about the documentation site alongside the static documentation, such as the date of the last update, the site owner, top contributors, open GitHub issues, Slack support channels, and Stack Overflow Enterprise tags.

Built-in navigation and search

Find the information that you want from a document more quickly and easily.

Add-ons

Customize your TechDocs experience with Add-ons to address higher-order documentation needs.

The TechDocs plugin is preinstalled and enabled on a Developer Hub instance by default. You can disable or enable the TechDocs plugin, and change other parameters, by configuring the Red Hat Developer Hub Helm chart or the Red Hat Developer Hub Operator config map.

Important

Red Hat Developer Hub includes a built-in TechDocs builder that generates static HTML documentation from your codebase. However, the default basic setup of the local builder is not intended for production.

You can use a CI/CD pipeline with the repository that has a dedicated job to generate docs for TechDocs. The generated static files are stored in OpenShift Data Foundation or in a cloud storage solution of your choice and published to a static HTML documentation site.

After you configure OpenShift Data Foundation to store the files that TechDocs generates, you can configure the TechDocs plugin to use the OpenShift Data Foundation for cloud storage.


10.1. Configuring storage for TechDocs files

The TechDocs publisher stores generated files in local storage or in cloud storage, such as OpenShift Data Foundation, Google GCS, AWS S3, or Azure Blob Storage.

10.1.1. Using OpenShift Data Foundation for file storage

You can configure OpenShift Data Foundation to store the files that TechDocs generates instead of relying on other cloud storage solutions.

OpenShift Data Foundation provides an ObjectBucketClaim custom resource (CR) that you can use to request an S3 compatible bucket backend. You must install the OpenShift Data Foundation Operator to use this feature.

Prerequisites
  • You have installed the OpenShift Data Foundation Operator.

Procedure
  • Create an ObjectBucketClaim CR where the generated TechDocs files are stored. For example:

    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: <rhdh_bucket_claim_name>
    spec:
      generateBucketName: <rhdh_bucket_claim_name>
      storageClassName: openshift-storage.noobaa.io
    Note

    Creating the Developer Hub ObjectBucketClaim CR automatically creates both the Developer Hub ObjectBucketClaim config map and secret. The config map and secret have the same name as the ObjectBucketClaim CR.

After you create the ObjectBucketClaim CR, you can use the information stored in the config map and secret to make the information accessible to the Developer Hub container as environment variables. Depending on the method that you used to install Developer Hub, you add the access information to either the Red Hat Developer Hub Helm chart or Operator configuration.


10.1.2. Making object storage accessible to containers by using the Helm chart

Creating an ObjectBucketClaim custom resource (CR) automatically generates both the Developer Hub ObjectBucketClaim config map and secret. The config map and secret contain ObjectBucket access information. Adding the access information to the Helm chart configuration makes it accessible to the Developer Hub container by adding the following environment variables to the container:

  • BUCKET_NAME

  • BUCKET_HOST

  • BUCKET_PORT

  • BUCKET_REGION

  • BUCKET_SUBREGION

  • AWS_ACCESS_KEY_ID

  • AWS_SECRET_ACCESS_KEY

These variables are then used in the TechDocs plugin configuration.

Prerequisites
  • You have installed Red Hat Developer Hub on OpenShift Container Platform using the Helm chart.

  • You have created an ObjectBucketClaim CR for storing files generated by TechDocs. For more information, see Using OpenShift Data Foundation for file storage.

Procedure
  • In the upstream.backstage key in the Helm chart values, enter the name of the Developer Hub ObjectBucketClaim secret as the value for the extraEnvVarsSecrets field, and enter the name of the ObjectBucketClaim config map as the value for the extraEnvVarsCM field. Both names are the same as the name of the ObjectBucketClaim CR. For example:

    upstream:
      backstage:
        extraEnvVarsSecrets:
          - <rhdh_bucket_claim_name>
        extraEnvVarsCM:
          - <rhdh_bucket_claim_name>
Example TechDocs Plugin configuration for the Helm chart

The following example shows a Developer Hub Helm chart configuration for the TechDocs plugin:

global:
  dynamic:
    includes:
      - 'dynamic-plugins.default.yaml'
    plugins:
    - disabled: false
      package: ./dynamic-plugins/dist/backstage-plugin-techdocs-backend-dynamic
      pluginConfig:
        techdocs:
          builder: external
          generator:
            runIn: local
          publisher:
            awsS3:
              bucketName: '${BUCKET_NAME}'
              credentials:
                accessKeyId: '${AWS_ACCESS_KEY_ID}'
                secretAccessKey: '${AWS_SECRET_ACCESS_KEY}'
              endpoint: 'https://${BUCKET_HOST}'
              region: '${BUCKET_REGION}'
              s3ForcePathStyle: true
            type: awsS3

10.1.3. Making object storage accessible to containers by using the Operator

Creating an ObjectBucketClaim custom resource (CR) automatically generates both the Developer Hub ObjectBucketClaim config map and secret. The config map and secret contain ObjectBucket access information. Adding the access information to the Operator configuration makes it accessible to the Developer Hub container by adding the following environment variables to the container:

  • BUCKET_NAME

  • BUCKET_HOST

  • BUCKET_PORT

  • BUCKET_REGION

  • BUCKET_SUBREGION

  • AWS_ACCESS_KEY_ID

  • AWS_SECRET_ACCESS_KEY

These variables are then used in the TechDocs plugin configuration.

Prerequisites
  • You have installed Red Hat Developer Hub on OpenShift Container Platform using the Operator.

  • You have created an ObjectBucketClaim CR for storing files generated by TechDocs.

Procedure
  • In the Developer Hub Backstage CR, enter the name of the Developer Hub ObjectBucketClaim config map as the value for the spec.application.extraEnvs.configMaps field and enter the Developer Hub ObjectBucketClaim secret name as the value for the spec.application.extraEnvs.secrets field. For example:

    apiVersion: rhdh.redhat.com/v1alpha1
    kind: Backstage
    metadata:
      name: <name>
    spec:
      application:
        extraEnvs:
          configMaps:
            - name: <rhdh_bucket_claim_name>
          secrets:
            - name: <rhdh_bucket_claim_name>
Example TechDocs Plugin configuration for the Operator

The following example shows a Red Hat Developer Hub Operator config map configuration for the TechDocs plugin:

kind: ConfigMap
apiVersion: v1
metadata:
  name: dynamic-plugins-rhdh
data:
  dynamic-plugins.yaml: |
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - disabled: false
        package: ./dynamic-plugins/dist/backstage-plugin-techdocs-backend-dynamic
        pluginConfig:
          techdocs:
            builder: external
            generator:
              runIn: local
            publisher:
              awsS3:
                bucketName: '${BUCKET_NAME}'
                credentials:
                  accessKeyId: '${AWS_ACCESS_KEY_ID}'
                  secretAccessKey: '${AWS_SECRET_ACCESS_KEY}'
                endpoint: 'https://${BUCKET_HOST}'
                region: '${BUCKET_REGION}'
                s3ForcePathStyle: true
              type: awsS3

10.2. Configuring CI/CD to generate and publish TechDocs sites

TechDocs reads the generated static documentation files from a cloud storage bucket, such as OpenShift Data Foundation. The documentation site is generated in the CI/CD workflow associated with the repository that contains the documentation files. You can generate docs on CI and publish them to cloud storage by using the techdocs-cli tool.

You can use the following example to create a script for TechDocs publication:

# Prepare
REPOSITORY_URL='https://github.com/org/repo'
git clone $REPOSITORY_URL
cd repo

# Install @techdocs/cli, mkdocs and mkdocs plugins
npm install -g @techdocs/cli
pip install "mkdocs-techdocs-core==1.*"

# Generate
techdocs-cli generate --no-docker

# Publish
techdocs-cli publish --publisher-type awsS3 --storage-name <bucket/container> --entity <Namespace/Kind/Name>

The TechDocs workflow starts the CI when a user makes changes in the repository containing the documentation files. You can configure the workflow to start only when files inside the docs/ directory or mkdocs.yml are changed.

10.2.1. Preparing your repository for CI

The first step in the CI workflow is to clone your documentation source repository into a working directory.

Procedure
  • To clone your documentation source repository into a working directory, enter the following command:

    git clone <https://path/to/docs-repository/>

10.2.2. Generating the TechDocs site

Procedure

To configure CI/CD to generate your techdocs, complete the following steps:

  1. Install the npx package to run techdocs-cli using the following command:

    npm install -g npx
  2. Install the techdocs-cli tool using the following command:

    npm install -g @techdocs/cli
  3. Install the mkdocs plugins using the following command:

    pip install "mkdocs-techdocs-core==1.*"
  4. Generate your techdocs site using the following command:

    npx @techdocs/cli generate --no-docker --source-dir <path_to_repo> --output-dir ./site

    Where <path_to_repo> is the location in the file path that you used to clone your repository.

10.2.3. Publishing the TechDocs site

Procedure

To publish your techdocs site, complete the following steps:

  1. Set the necessary authentication environment variables for your cloud storage provider.

  2. Publish your techdocs using the following command:

    npx @techdocs/cli publish --publisher-type <awsS3|googleGcs> --storage-name <bucket/container> --entity <namespace/kind/name> --directory ./site
  3. Add a .github/workflows/techdocs.yml file in your Software Template(s). For example:

    name: Publish TechDocs Site
    
    on:
     push:
       branches: [main]
       # You can even set it to run only when TechDocs related files are updated.
       # paths:
       #   - "docs/**"
       #   - "mkdocs.yml"
    
    jobs:
     publish-techdocs-site:
       runs-on: ubuntu-latest
    
       # The following secrets are required in your CI environment for publishing files to AWS S3.
       # e.g. You can use GitHub Organization secrets to set them for all existing and new repositories.
       env:
         TECHDOCS_S3_BUCKET_NAME: ${{ secrets.TECHDOCS_S3_BUCKET_NAME }}
         AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
         AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
         AWS_REGION: ${{ secrets.AWS_REGION }}
         ENTITY_NAMESPACE: 'default'
         ENTITY_KIND: 'Component'
         ENTITY_NAME: 'my-doc-entity'
         # In a Software template, Scaffolder will replace {{cookiecutter.component_id | jsonify}}
         # with the correct entity name. This is same as metadata.name in the entity's catalog-info.yaml
         # ENTITY_NAME: '{{ cookiecutter.component_id | jsonify }}'
    
       steps:
         - name: Checkout code
           uses: actions/checkout@v3
    
         - uses: actions/setup-node@v3
         - uses: actions/setup-python@v4
           with:
             python-version: '3.9'
    
         - name: Install techdocs-cli
           run: sudo npm install -g @techdocs/cli
    
         - name: Install mkdocs and mkdocs plugins
           run: python -m pip install mkdocs-techdocs-core==1.*
    
         - name: Generate docs site
           run: techdocs-cli generate --no-docker --verbose
    
         - name: Publish docs site
           run: techdocs-cli publish --publisher-type awsS3 --storage-name $TECHDOCS_S3_BUCKET_NAME --entity $ENTITY_NAMESPACE/$ENTITY_KIND/$ENTITY_NAME