The Red Hat Developer Hub is an enterprise-grade, open developer platform that is used for building developer portals. This platform contains a supported and opinionated framework that helps reduce the friction and frustration of developers while boosting their productivity.
Red Hat Developer Hub support
If you experience difficulty with a procedure described in this documentation, visit the Red Hat Customer Portal. You can use the Red Hat Customer Portal for the following purposes:
-
To search or browse through the Red Hat Knowledgebase of technical support articles about Red Hat products.
-
To create a support case for Red Hat Global Support Services (GSS). For support case creation, select Red Hat Developer Hub as the product and select the appropriate product version.
1. Overview of Red Hat Developer Hub
Red Hat Developer Hub (Developer Hub) serves as an open developer platform designed for building developer portals. Using Developer Hub, the engineering teams can access a unified platform that streamlines the development process and provides a variety of tools and resources to build high-quality software efficiently.
The goal of Developer Hub is to address the difficulties associated with creating and sustaining developer portals by providing:
-
A centralized dashboard to view all available developer tools and resources to increase productivity
-
Self-service capabilities, along with guardrails, for cloud-native application development that complies with enterprise-class best practices
-
Proper security and governance for all developers across the enterprise
The Red Hat Developer Hub simplifies decision-making by providing a developer experience that presents a selection of internally approved tools, programming languages, and various developer resources within a self-managed portal. This approach contributes to the acceleration of application development and the maintenance of code quality, all while fostering innovation.
2. Installing Red Hat Developer Hub using Helm Chart
You can install Red Hat Developer Hub on Red Hat OpenShift (OpenShift) by using a Helm Chart, which is a flexible installation method.
Helm is a package manager on OpenShift that provides the following features:
-
Applies regular application updates using custom hooks
-
Manages the installation of complex applications
-
Provides charts that you can host on public and private servers
-
Supports rolling back to previous application versions
The Red Hat Developer Hub Helm Chart is available in the Helm catalog in Red Hat OpenShift Dedicated and OpenShift Container Platform (OCP).
-
You are logged in to the OCP using the OpenShift web console.
-
You have configured the appropriate roles and permissions within your project to create an application.
-
Create a project in OpenShift, if one does not exist.
For more information about creating a project in OpenShift, see Red Hat OpenShift documentation.
-
Switch to the Developer perspective in your Red Hat OpenShift web console.
-
Click +Add.
-
From the Developer Catalog panel, click Helm Chart.
-
Search for Developer Hub in the search bar and select the Red Hat Developer Hub card.
-
Click Create.
-
Copy the OpenShift router host (for example: apps.<clusterName>.com) to Root Schema → global → clusterRouterBase, and adjust the other values if needed. In the form view, this field is described as a shorthand for users who do not want to specify a custom HOSTNAME; it is used only with the default upstream.backstage.appConfig value and with the OCP Route enabled.
-
Alternatively, copy the OpenShift router host (for example: apps.<clusterName>.com) to global.clusterRouterBase in the YAML view, and adjust other values if needed, for example:
global.clusterRouterBase: apps.example.com
In the previous steps, you copy the host information, which is accessed by the Developer Hub backend.
When an OCP route is generated automatically, the host value for the route is inferred and the same host information is sent to the Developer Hub. Also, if the Developer Hub is hosted on a custom domain and you set the host manually in the values, the custom host takes precedence.
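For reference, the router-host setting from the previous steps corresponds to the following chart values fragment; apps.example.com is a placeholder for your cluster's router base:

```yaml
global:
  # Shorthand for the cluster's router base; the chart derives the
  # Developer Hub hostname from this value when the OCP Route is enabled
  clusterRouterBase: apps.example.com
```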
-
Click Create and wait for the database and Red Hat Developer Hub to start.
-
Click the Open URL option to start using the Red Hat Developer Hub platform.
Note: If your Red Hat Developer Hub instance does not start correctly, verify the configuration files, because the RHDH container might not be able to access them.
2.1. Installing Red Hat Developer Hub in an air-gapped environment
An air-gapped environment, also known as an air-gapped network or isolated network, ensures security by physically segregating the system or network. This isolation is established to prevent unauthorized access, data transfer, or communication between the air-gapped system and external sources.
You can install the Red Hat Developer Hub in an air-gapped environment to ensure security and meet specific regulatory requirements.
To install the Developer Hub in an air-gapped environment, you must have access to registry.redhat.io and to the registry for the air-gapped environment.
-
You have installed OpenShift Container Platform (OCP) 4.10 or later.
-
You have access to the
registry.redhat.io
. -
You have access to the OpenShift image registry of your cluster. For more information about exposing the OpenShift image registry, see Exposing the registry in OpenShift documentation.
-
You have installed the
oc
command line tool on your workstation. -
You have installed the
podman
command line tools on your workstation. -
You have an account in the Red Hat Developer portal.
-
Log in to the OCP using oc:
oc login -u <user> -p <password> https://api.<HOSTNAME>:6443
-
Log in to the OCP image registry using podman:
podman login -u kubeadmin -p $(oc whoami -t) default-route-openshift-image-registry.<HOSTNAME>
Note: You can use the following commands to get the full host name of the OpenShift image registry and then use that host name to log in:
REGISTRY_HOST=$(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')
podman login -u kubeadmin -p $(oc whoami -t) $REGISTRY_HOST
-
Log in to registry.redhat.io in podman using the following command:
podman login registry.redhat.io
For more information about registry authentication, see Red Hat Container Registry Authentication.
-
Pull Developer Hub and PostgreSQL images from Red Hat Image registry to your workstation:
podman pull <DEVELOPERHUBIMAGE>
podman pull registry.redhat.io/rhel9/postgresql-15:latest
-
Push both images to the internal OCP registry.
For more information about pushing images directly to OCP registry, see How do I push an Image directly into the OpenShift 4 registry.
podman push --remove-signatures registry.redhat.io/rhel9/postgresql-15:latest default-route-openshift-image-registry.<hostname>/<yourProject>/postgresql-15:latest
Tip: If an x509 error occurs, ensure that you install the CA certificate used for OpenShift routes on your system. Optionally, you can append --tls-verify=false to the podman push command; however, this approach is not recommended.
-
Use the following command to verify that both images are present in the internal OCP registry:
oc get imagestream -n <projectName>
-
Enable local image lookup for both images using the following commands:
oc set image-lookup postgresql-15
oc set image-lookup rhdh-hub-rhel9
-
Go to YAML view and update the image section for backstage and postgresql using the following values:
Example values for Developer Hub image:
upstream:
  backstage:
    image:
      registry: ""
      repository: rhdh-hub-rhel9
      tag: latest
Example values for PostgreSQL image:
upstream:
  postgresql:
    image:
      registry: ""
      repository: postgresql-15
      tag: latest
-
Install the Red Hat Developer Hub using Helm Chart. For more information about installing Developer Hub, see Installing Red Hat Developer Hub using Helm Chart.
3. Installing Red Hat Developer Hub using the Operator
3.1. As an administrator
As an administrator, you can install Red Hat Developer Hub on your OpenShift Container Platform using the Operator.
-
You are logged in as an administrator on the OpenShift web console.
-
You have configured the appropriate roles and permissions within your project to create an application. See the Red Hat OpenShift documentation on Building applications for more details.
-
In the Administrator perspective of the OpenShift web console, navigate to Operators → OperatorHub.
-
Use the Filter by keyword box to search for the Red Hat Developer Hub Operator in the catalog, and then click the Red Hat Developer Hub tile.
-
Install the Red Hat Developer Hub Operator. For more information, see Installing from OperatorHub using the web console.
Note: For enhanced security, you should deploy the Red Hat Developer Hub Operator in a dedicated default namespace such as rhdh-operator. The cluster administrator can restrict other users' access to the operator resources through role bindings or cluster role bindings. You can choose to deploy the operator in the openshift-operators namespace instead; however, the Red Hat Developer Hub operator then shares the namespace with other operators, and any user who can create workloads in that namespace can gain escalated privileges from any operator's service account.
-
See the “As a developer” section to continue setting up your Red Hat Developer Hub instance.
3.2. As a developer
As a developer, you can install Red Hat Developer Hub on your OpenShift Container Platform using the Operator.
-
Your administrator has installed the Red Hat Developer Hub Operator. For more information see the "As an administrator" section.
-
Create a project in OpenShift for your Red Hat Developer Hub instance. For more information about creating a project in OpenShift, see Red Hat OpenShift documentation.
-
From the Developer perspective in the Red Hat OpenShift web console, click the +Add tab.
-
From the Developer Catalog panel, click Operator Backed.
-
Search for Developer Hub or Backstage in the search bar and select the Developer Hub card.
-
Click Create.
-
Optionally, configure the Red Hat Developer Hub Backstage instance with non-default settings.
-
Click Create.
-
From the Topology tab, wait for the database and Red Hat Developer Hub to start.
-
Click the Open URL option from the Developer Hub pod to start using the Red Hat Developer Hub platform.
3.3. Configuring the Developer Hub Custom Resource
Note: Updates to the Backstage Custom Resource (CR) are automatically handled by the Operator. However, updates to resources referenced by the CR, such as ConfigMaps or Secrets, are not updated automatically unless the CR itself is updated. If you want to update resources referenced by the CR, then you must manually delete the Backstage Deployment so that the Operator can re-create it with the updated resources.
3.3.1. Adding a custom application configuration file to Red Hat OpenShift
To change the configuration of your Red Hat Developer Hub instance, you must add a custom application configuration file to OpenShift and reference it in the Custom Resource. In OpenShift, you can use the following content as a base template to create a ConfigMap such as app-config-rhdh.yaml
:
kind: ConfigMap
apiVersion: v1
metadata:
  name: app-config-rhdh
data:
  app-config-rhdh.yaml: |
    app:
      title: Red Hat Developer Hub
    backend:
      auth:
        keys:
          - secret: "${BACKEND_SECRET}"
      baseUrl: https://backstage-<CUSTOM_RESOURCE_NAME>-<NAMESPACE_NAME>.<OPENSHIFT_INGRESS_DOMAIN>
      cors:
        origin: https://backstage-<CUSTOM_RESOURCE_NAME>-<NAMESPACE_NAME>.<OPENSHIFT_INGRESS_DOMAIN>
For example:
kind: ConfigMap
apiVersion: v1
metadata:
  name: app-config-rhdh
data:
  "app-config-rhdh.yaml": |
    app:
      title: Red Hat Developer Hub
      baseUrl: https://backstage-developer-hub-my-ns.apps.ci-ln-vtkzr22-72292.origin-ci-int-gce.dev.rhcloud.com
    backend:
      auth:
        keys:
          - secret: "${BACKEND_SECRET}"
      baseUrl: https://backstage-backstage-sample-my-ns.apps.ci-ln-vtkzr22-72292.origin-ci-int-gce.dev.rhcloud.com
      cors:
        origin: https://backstage-backstage-sample-my-ns.apps.ci-ln-vtkzr22-72292.origin-ci-int-gce.dev.rhcloud.com
There is a mandatory Backend Auth Key for Red Hat Developer Hub. This references an environment variable defined in an OpenShift Secret.
Note: You are responsible for protecting your Red Hat Developer Hub installation from external and unauthorized access. Manage the Backend Auth Key like any other secret: it should meet strong password requirements, you should not expose it in any configuration files, and you should inject it into configuration files only as an environment variable. For more information about roles and responsibilities in Developer Hub, see the Role-Based Access Control (RBAC) in Red Hat Developer Hub section in the Administration Guide for Red Hat Developer Hub.
You need to know the external URL of your Red Hat Developer Hub instance and set it in the app.baseUrl, backend.baseUrl, and backend.cors.origin fields of the application configuration. By default, the URL is named as follows: https://backstage-<CUSTOM_RESOURCE_NAME>-<NAMESPACE_NAME>.<OPENSHIFT_INGRESS_DOMAIN>. You can use the oc get ingresses.config/cluster -o jsonpath='{.spec.domain}' command to display your ingress domain. If you are using a different host or sub-domain, which is customizable in the spec.application.route field of the Custom Resource, you must adjust the application configuration accordingly.
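The default URL pattern described above can be sketched in shell; developer-hub, my-ns, and apps.example.com are placeholder values, and on a live cluster the ingress domain would come from the oc command mentioned earlier:

```shell
# Placeholder values: substitute your Custom Resource name, namespace, and ingress domain.
CR_NAME=developer-hub
NAMESPACE=my-ns
# On a live cluster: INGRESS_DOMAIN=$(oc get ingresses.config/cluster -o jsonpath='{.spec.domain}')
INGRESS_DOMAIN=apps.example.com

# Default external URL of the Developer Hub instance
BASE_URL="https://backstage-${CR_NAME}-${NAMESPACE}.${INGRESS_DOMAIN}"
echo "$BASE_URL"
```

Use the resulting value for the app.baseUrl, backend.baseUrl, and backend.cors.origin fields.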
-
You have created an account in Red Hat OpenShift.
-
From the Developer perspective, select the ConfigMaps tab.
-
Click Create ConfigMap.
-
Select the YAML view option in Configure via and make the changes to the file, if necessary.
-
Click Create.
-
Select the Secrets tab.
-
Click Create Key/value Secret.
-
Name the secret
secrets-rhdh
. -
Add a key named BACKEND_SECRET and a base64-encoded string as a value. Use a unique value for each Red Hat Developer Hub instance. For example, you can use the following command to generate a key from your terminal:
node -p 'require("crypto").randomBytes(24).toString("base64")'
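If node is not available on your workstation, an equivalent 24-byte base64 value can be generated with openssl; this alternative is a sketch and is not part of the original procedure:

```shell
# Generate a 24-byte random value and base64-encode it,
# equivalent to the node one-liner
BACKEND_SECRET=$(openssl rand -base64 24)
# A 24-byte value always encodes to a 32-character base64 string
echo "${#BACKEND_SECRET}"
```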
-
Click Create.
-
Select the Topology tab.
-
Click on the three dots menu of a Red Hat Developer Hub instance and select Edit Backstage to load the YAML view of the Red Hat Developer Hub instance.
-
Add the spec.application.appConfig.configMaps and spec.application.extraEnvs.secrets fields to the Custom Resource, as follows:
spec:
  application:
    appConfig:
      mountPath: /opt/app-root/src
      configMaps:
        - name: app-config-rhdh
    extraEnvs:
      secrets:
        - name: secrets-rhdh
    extraFiles:
      mountPath: /opt/app-root/src
    replicas: 1
    route:
      enabled: true
  database:
    enableLocalDb: true
-
Click Save.
-
Navigate back to the Topology view and wait for the Red Hat Developer Hub pod to start.
-
Click the Open URL option to start using the Red Hat Developer Hub platform with the new configuration changes.
3.4. Configuring dynamic plugins with the Operator
You can store the configuration for dynamic plugins in a ConfigMap object that the Custom Resource can reference.
In OpenShift, you can use the following content as a base template to create a ConfigMap named dynamic-plugins-rhdh
:
kind: ConfigMap
apiVersion: v1
metadata:
name: dynamic-plugins-rhdh
data:
dynamic-plugins.yaml: |
includes:
- dynamic-plugins.default.yaml
plugins:
- package: './dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-dynamic'
disabled: false
pluginConfig: {}
Note
|
If the |
-
Navigate to OpenShift and select the ConfigMaps tab.
-
Click Create ConfigMap.
The Create ConfigMap page appears.
-
Select the YAML view option in Configure via and edit the file, if needed.
-
Click Create.
-
Go to the Topology tab.
-
Click on the three dots menu of a Red Hat Developer Hub instance and select Edit Backstage to load the YAML view of the Red Hat Developer Hub instance.
-
Add the spec.application.dynamicPluginsConfigMapName field to the Custom Resource, as follows:
spec:
  application:
    appConfig:
      mountPath: /opt/app-root/src
      configMaps:
        - name: app-config-rhdh
    dynamicPluginsConfigMapName: dynamic-plugins-rhdh
    extraEnvs:
      secrets:
        - name: secrets-rhdh
    extraFiles:
      mountPath: /opt/app-root/src
    replicas: 1
    route:
      enabled: true
  database:
    enableLocalDb: true
-
Click Save.
-
Navigate back to the Topology view and wait for the Red Hat Developer Hub pod to start.
-
Click the Open URL option to start using the Red Hat Developer Hub platform with the new configuration changes.
To check that the dynamic plugins configuration has been loaded, append the following to your Red Hat Developer Hub root URL: /api/dynamic-plugins-info/loaded-plugins
and check the list of plugins.
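As a sketch, this verification can be scripted as follows; the host is a placeholder, and the curl call is shown as a comment because it requires a running instance:

```shell
# Placeholder host: replace with your Developer Hub hostname
RHDH_HOST="backstage-developer-hub-my-ns.apps.example.com"
# Endpoint that lists the dynamic plugins that were loaded
URL="https://${RHDH_HOST}/api/dynamic-plugins-info/loaded-plugins"
echo "$URL"
# On a live instance: curl -s "$URL"
```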
3.5. Installing Red Hat Developer Hub using a custom Backstage image
You can install Red Hat Developer Hub that uses a custom Backstage image in one of the following ways:
-
Use the Form view and enter the image in application → image
-
Use the YAML view to enter the image directly in the Backstage Custom Resource specification, as follows:
spec:
  application:
    image: <your custom image>
Warning: Installing a Red Hat Developer Hub application with a custom Backstage image might pose security risks to your organization. It is your responsibility to ensure that the image is from trusted sources, and has been tested and validated for security compliance. Red Hat only supports the images shipped within the Red Hat Developer Hub Operator.
3.6. Installing Red Hat Developer Hub using the operator in an air-gapped environment
On an OpenShift cluster operating on a restricted network, public resources are not available. However, deploying the Red Hat Developer Hub (RHDH) Operator and running RHDH requires the following public resources:
-
Operator images (bundle, operator, catalog)
-
Operands images (RHDH, PostgreSQL)
To make these resources available, replace these resources with their equivalent resources in a mirror registry accessible to the OpenShift cluster.
You can use a helper script that mirrors the necessary images and provides the necessary configuration to ensure those images will be used when installing the RHDH Operator and creating RHDH instances.
Note: This script requires a target mirror registry, which you should already have installed if your OpenShift cluster is ready to operate on a restricted network. However, if you are preparing your cluster for disconnected usage, you can use the script to deploy a mirror registry in the cluster and use it for the mirroring process.
-
An active
oc
session with administrative permissions to the OpenShift cluster. See Getting started with the OpenShift CLI. -
An active
oc registry
session to theregistry.redhat.io
Red Hat Ecosystem Catalog. See Red Hat Container Registry Authentication. -
The
opm
CLI tool is installed. See Installing the opm CLI. -
The jq package is installed. See Download jq.
-
Podman is installed. See Podman Installation Instructions.
-
Skopeo version 1.14 or higher is installed. See Installing Skopeo.
-
If you already have a mirror registry for your cluster, an active Skopeo session with administrative access to this registry is required. See Authenticating to a registry and Mirroring images for a disconnected installation.
Note: The internal OpenShift cluster image registry cannot be used as a target mirror registry. See About the mirror registry.
-
If you prefer to create your own mirror registry, see Creating a mirror registry with mirror registry for Red Hat OpenShift.
-
If you do not already have a mirror registry, you can use the helper script to create one for you and you need the following additional prerequisites:
-
The cURL package is installed. For Red Hat Enterprise Linux, the curl command is available by installing the curl package. To use curl for other platforms, see the cURL website.
-
The
htpasswd
command is available. For Red Hat Enterprise Linux, thehtpasswd
command is available by installing thehttpd-tools
package.
-
Download and run the mirroring script to install a custom Operator catalog and mirror the related images:
prepare-restricted-environment.sh
(source).
# if you do not already have a target mirror registry
# and want the script to create one for you
bash prepare-restricted-environment.sh \
  --prod_operator_index "registry.redhat.io/redhat/redhat-operator-index:v4.14" \
  --prod_operator_package_name "rhdh" \
  --prod_operator_bundle_name "rhdh-operator" \
  --prod_operator_version "v1.1.0"

# or, if you already have a target mirror registry
bash prepare-restricted-environment.sh \
  --prod_operator_index "registry.redhat.io/redhat/redhat-operator-index:v4.14" \
  --prod_operator_package_name "rhdh" \
  --prod_operator_bundle_name "rhdh-operator" \
  --prod_operator_version "v1.1.0" \
  --use_existing_mirror_registry "my_registry"
Note: The script can take several minutes to complete as it copies multiple images to the mirror registry.
-
Refer to the Installing Red Hat Developer Hub using the operator as an administrator section to install the operator and configure your Red Hat Developer Hub instance.
4. Red Hat Developer Hub integration with Amazon Web Services (AWS)
You can integrate your Red Hat Developer Hub application with Amazon Web Services (AWS), which can help you streamline your workflows within the AWS ecosystem. Integrating the Developer Hub resources with AWS provides access to a comprehensive suite of tools, services, and solutions.
The integration with AWS requires the deployment of Developer Hub in Elastic Kubernetes Services (EKS) using Helm Chart or Operator.
4.1. Deploying Red Hat Developer Hub in Elastic Kubernetes Services (EKS) using Helm Chart
When you deploy Developer Hub in Elastic Kubernetes Services (EKS) using Helm Chart, it orchestrates a robust development environment within the AWS ecosystem.
-
You have an EKS cluster with AWS Application Load Balancer (ALB) add-on installed. For more information, see Application load balancing on Amazon EKS and Installing the AWS Load Balancer Controller add-on.
-
You have configured a domain name for your Developer Hub instance. The domain name can be a hosted zone entry on Route 53 or managed outside of AWS. For more information, see Configuring Amazon Route 53 as your DNS service documentation.
-
You have an entry in the AWS Certificate Manager (ACM) for your preferred domain name. Make sure to keep a record of your Certificate ARN.
-
You have subscribed to
registry.redhat.io
. For more information, see Red Hat Container Registry Authentication. -
You have set the context to the EKS cluster in your current
kubeconfig
. For more information, see Creating or updating a kubeconfig file for an Amazon EKS cluster. -
You have installed
kubectl
. For more information, see Installing or updating kubectl. -
You have installed Helm 3 or the latest. For more information, see Using Helm with Amazon EKS.
-
Go to your terminal and run the following command to add the Helm chart repository containing the Developer Hub chart to your local Helm registry:
$ helm repo add openshift-helm-charts https://charts.openshift.io/
-
Create a pull secret using the following command:
$ kubectl create secret docker-registry rhdh-pull-secret \
    --docker-server=registry.redhat.io \
    --docker-username=<user_name> \ (1)
    --docker-password=<password> \ (2)
    --docker-email=<email> (3)
(1) Enter your username in the command.
(2) Enter your password in the command.
(3) Enter your email address in the command.
The created pull secret is used to pull the Developer Hub images from the Red Hat Ecosystem.
-
-
Create a file named values.yaml using the following template:
global:
  # TODO: Set your application domain name.
  host: <your rhdh domain name>
route:
  enabled: false
upstream:
  service:
    # NodePort is required for the ALB to route to the Service
    type: NodePort
  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: alb
      alb.ingress.kubernetes.io/scheme: internet-facing
      # TODO: Using an ALB HTTPS Listener requires a certificate for your own domain. Fill in the ARN of your certificate, e.g.:
      alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:xxx:xxxx:certificate/xxxxxx
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
      alb.ingress.kubernetes.io/ssl-redirect: '443'
      # TODO: Set your application domain name.
      external-dns.alpha.kubernetes.io/hostname: <your rhdh domain name>
  backstage:
    image:
      pullSecrets:
        - rhdh-pull-secret
    podSecurityContext:
      # you can assign any random value as fsGroup
      fsGroup: 2000
  postgresql:
    image:
      pullSecrets:
        - rhdh-pull-secret
    primary:
      podSecurityContext:
        enabled: true
        # you can assign any random value as fsGroup
        fsGroup: 3000
  volumePermissions:
    enabled: true
-
Run the following command in your terminal to deploy Developer Hub using the latest version of Helm Chart and using the values.yaml file created in the previous step:
helm install rhdh \
  openshift-helm-charts/redhat-developer-hub \
  [--version 1.0.0-1] \ (1)
  --values /path/to/values.yaml
(1) Version 1.0.0-1 is the latest version of the Helm Chart.
-
Wait until the DNS name is responsive, indicating that your Developer Hub instance is ready for use.
4.2. Deploying Red Hat Developer Hub on Elastic Kubernetes Services (EKS) using Operator
You can deploy the Developer Hub on EKS using the Operator with or without Operator Lifecycle Manager (OLM) framework. Following that, you can proceed to install your Developer Hub instance in EKS.
4.2.1. Installing the Operator with the OLM framework
-
You have set the context to the EKS cluster in your current
kubeconfig
. For more information, see Creating or updating a kubeconfig file for an Amazon EKS cluster. -
You have installed
kubectl
. For more information, see Installing or updating kubectl. -
You have subscribed to
registry.redhat.io
. For more information, see Red Hat Container Registry Authentication. -
You have installed the Operator Lifecycle Manager (OLM). For more information about installation and troubleshooting, see How do I get Operator Lifecycle Manager?
-
Run the following command in your terminal to create the rhdh-operator namespace where the Operator is installed:
$ kubectl create namespace rhdh-operator
-
Create a pull secret using the following command:
$ kubectl -n rhdh-operator create secret docker-registry rhdh-pull-secret \
    --docker-server=registry.redhat.io \
    --docker-username=<user_name> \ (1)
    --docker-password=<password> \ (2)
    --docker-email=<email> (3)
(1) Enter your username in the command.
(2) Enter your password in the command.
(3) Enter your email address in the command.
The created pull secret is used to pull the Developer Hub images from the Red Hat Ecosystem.
-
-
Create a CatalogSource resource that contains the Operators from the Red Hat Ecosystem:
$ cat <<EOF | kubectl -n rhdh-operator apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: redhat-catalog
spec:
  sourceType: grpc
  image: registry.redhat.io/redhat/redhat-operator-index:v4.14
  secrets:
  - "rhdh-pull-secret"
  displayName: Red Hat Operators
EOF
-
Create an OperatorGroup resource as follows:
$ cat <<EOF | kubectl apply -n rhdh-operator -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: rhdh-operator-group
EOF
-
Create a Subscription resource using the following code:
$ cat <<EOF | kubectl apply -n rhdh-operator -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: rhdh
  namespace: rhdh-operator
spec:
  channel: fast
  installPlanApproval: Automatic
  name: rhdh
  source: redhat-catalog
  sourceNamespace: rhdh-operator
  startingCSV: rhdh-operator.v1.1.0
EOF
-
Run the following command to verify that the created Operator is running:
$ kubectl -n rhdh-operator get pods -w
If the operator pod shows ImagePullBackOff status, then you might need to grant permissions to pull the image by referencing the pull secret directly within the Operator deployment's manifest.
Tip: You can include the required secret name in the deployment.spec.template.spec.imagePullSecrets list and verify the deployment name using the kubectl get deployment -n rhdh-operator command:
$ kubectl -n rhdh-operator patch deployment \
    rhdh.fast --patch '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"rhdh-pull-secret"}]}}}}' \
    --type=merge
-
Update the default configuration of the operator to ensure that Developer Hub resources can start correctly in EKS using the following steps:
-
Edit the backstage-default-config ConfigMap in the rhdh-operator namespace using the following command:
$ kubectl -n rhdh-operator edit configmap backstage-default-config
-
Locate the db-statefulset.yaml string and add the fsGroup to its spec.template.spec.securityContext, as shown in the following example:
db-statefulset.yaml: |
  apiVersion: apps/v1
  kind: StatefulSet
  --- TRUNCATED ---
  spec:
    --- TRUNCATED ---
      restartPolicy: Always
      securityContext:
        # You can assign any random value as fsGroup
        fsGroup: 2000
      serviceAccount: default
      serviceAccountName: default
  --- TRUNCATED ---
-
Locate the deployment.yaml string and add the fsGroup to its specification, as shown in the following example:
deployment.yaml: |
  apiVersion: apps/v1
  kind: Deployment
  --- TRUNCATED ---
  spec:
    securityContext:
      # You can assign any random value as fsGroup
      fsGroup: 3000
    automountServiceAccountToken: false
  --- TRUNCATED ---
-
Locate the service.yaml string and change the type to NodePort as follows:
service.yaml: |
  apiVersion: v1
  kind: Service
  spec:
    # NodePort is required for the ALB to route to the Service
    type: NodePort
  --- TRUNCATED ---
-
Save and exit.
Wait for a few minutes until the changes are automatically applied to the operator pods.
-
4.2.2. Installing the Operator without the OLM framework
-
You have installed the following commands:
-
git
-
make
-
sed
-
-
Clone the Operator repository to your local machine using the following command:
$ git clone --depth=1 https://github.com/janus-idp/operator.git rhdh-operator && cd rhdh-operator
-
Run the following command to generate the deployment manifest:
$ make deployment-manifest
The previous command generates a file named rhdh-operator-<VERSION>.yaml, which you update manually in the following steps.
-
Run the following command to apply replacements in the generated deployment manifest:
$ sed -i "s/backstage-operator/rhdh-operator/g" rhdh-operator-*.yaml
$ sed -i "s/backstage-system/rhdh-operator/g" rhdh-operator-*.yaml
$ sed -i "s/backstage-controller-manager/rhdh-controller-manager/g" rhdh-operator-*.yaml
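To illustrate what these substitutions do, here is the same sed expression applied to a sample line; the line itself is illustrative and is not taken from the generated manifest:

```shell
# A sample manifest line containing the upstream names
line="name: backstage-operator  namespace: backstage-system"
# Apply the same renames that the procedure applies to the manifest file
result=$(echo "$line" | sed -e "s/backstage-operator/rhdh-operator/g" -e "s/backstage-system/rhdh-operator/g")
echo "$result"
```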
-
Open the generated deployment manifest file in an editor and perform the following steps:
-
Locate the db-statefulset.yaml string and add the fsGroup to its spec.template.spec.securityContext, as shown in the following example:
db-statefulset.yaml: |
  apiVersion: apps/v1
  kind: StatefulSet
  --- TRUNCATED ---
  spec:
    --- TRUNCATED ---
      restartPolicy: Always
      securityContext:
        # You can assign any random value as fsGroup
        fsGroup: 2000
      serviceAccount: default
      serviceAccountName: default
  --- TRUNCATED ---
-
Locate the deployment.yaml string and add the fsGroup to its specification, as shown in the following example:
deployment.yaml: |
  apiVersion: apps/v1
  kind: Deployment
  --- TRUNCATED ---
  spec:
    securityContext:
      # You can assign any random value as fsGroup
      fsGroup: 3000
    automountServiceAccountToken: false
  --- TRUNCATED ---
-
Locate the service.yaml string and change the type to NodePort as follows:
service.yaml: |
  apiVersion: v1
  kind: Service
  spec:
    # NodePort is required for the ALB to route to the Service
    type: NodePort
  --- TRUNCATED ---
-
Replace the default images with the images that are pulled from the Red Hat Ecosystem:
$ sed -i "s#gcr.io/kubebuilder/kube-rbac-proxy:.*#registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.15#g" rhdh-operator-*.yaml
$ sed -i "s#quay.io/janus-idp/operator:.*#registry.redhat.io/rhdh/rhdh-rhel9-operator:1.1#g" rhdh-operator-*.yaml
$ sed -i "s#quay.io/janus-idp/backstage-showcase:.*#registry.redhat.io/rhdh/rhdh-hub-rhel9:1.1#g" rhdh-operator-*.yaml
$ sed -i "s#quay.io/fedora/postgresql-15:.*#registry.redhat.io/rhel9/postgresql-15:latest#g" rhdh-operator-*.yaml
-
-
Add the image pull secret to the manifest in the Deployment resource as follows:
--- TRUNCATED ---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: manager
    app.kubernetes.io/created-by: rhdh-operator
    app.kubernetes.io/instance: controller-manager
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/name: deployment
    app.kubernetes.io/part-of: rhdh-operator
    control-plane: controller-manager
  name: rhdh-controller-manager
  namespace: rhdh-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      control-plane: controller-manager
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/default-container: manager
      labels:
        control-plane: controller-manager
    spec:
      imagePullSecrets:
        - name: rhdh-pull-secret
--- TRUNCATED ---
-
Apply the manifest to deploy the operator using the following command:
$ kubectl apply -f rhdh-operator-VERSION.yaml
-
Run the following command to verify that the Operator is running:
$ kubectl -n rhdh-operator get pods -w
4.2.3. Installing the Developer Hub instance in EKS
Once the Operator is installed and running, you can create a Developer Hub instance in EKS.
-
You have an EKS cluster with AWS Application Load Balancer (ALB) add-on installed. For more information, see Application load balancing on Amazon EKS and Installing the AWS Load Balancer Controller add-on.
-
You have configured a domain name for your Developer Hub instance. The domain name can be a hosted zone entry on Route 53 or managed outside of AWS. For more information, see Configuring Amazon Route 53 as your DNS service documentation.
-
You have an entry in the AWS Certificate Manager (ACM) for your preferred domain name. Make sure to keep a record of your Certificate ARN.
-
You have subscribed to
registry.redhat.io
. For more information, see Red Hat Container Registry Authentication. -
You have set the context to the EKS cluster in your current
kubeconfig
. For more information, see Creating or updating a kubeconfig file for an Amazon EKS cluster. -
You have installed
kubectl
. For more information, see Installing or updating kubectl.
-
Create a ConfigMap named
app-config-rhdh
containing the Developer Hub configuration using the following template:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-rhdh
data:
  "app-config-rhdh.yaml": |
    app:
      title: Red Hat Developer Hub
      baseUrl: https://<rhdh_dns_name>
    backend:
      auth:
        keys:
          - secret: "${BACKEND_SECRET}"
      baseUrl: https://<rhdh_dns_name>
      cors:
        origin: https://<rhdh_dns_name>
-
Create a Secret named
secrets-rhdh
and add a key named BACKEND_SECRET
with a Base64-encoded
string as value:

apiVersion: v1
kind: Secret
metadata:
  name: secrets-rhdh
stringData:
  # TODO: See https://backstage.io/docs/auth/service-to-service-auth/#setup
  BACKEND_SECRET: "xxx"
Important: Ensure that you use a unique value of
BACKEND_SECRET
for each Developer Hub instance.

You can use the following command to generate a key:

node -p 'require("crypto").randomBytes(24).toString("base64")'
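If Node.js is not available, a key of the same shape (24 random bytes, Base64-encoded) can be generated with Python's standard library. This is a hedged equivalent of the Node.js one-liner above, not a command from the product documentation:

```python
import base64
import secrets

# Generate 24 cryptographically secure random bytes and Base64-encode
# them, matching the output shape of the Node.js one-liner above.
backend_secret = base64.b64encode(secrets.token_bytes(24)).decode("ascii")
print(backend_secret)
```

Any source of cryptographically secure random bytes works here; the important part is that each Developer Hub instance gets its own unique value.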
-
To enable pulling the PostgreSQL image from the Red Hat Ecosystem Catalog, add the image pull secret in the default service account within the namespace where the Developer Hub instance is being deployed:
$ kubectl patch serviceaccount default \ -p '{"imagePullSecrets": [{"name": "rhdh-pull-secret"}]}' \ -n <your_namespace>
-
Create a Custom Resource file using the following template:
apiVersion: rhdh.redhat.com/v1alpha1
kind: Backstage
metadata:
  # TODO: this is the name of your RHDH instance
  name: my-rhdh
spec:
  application:
    imagePullSecrets:
      - "rhdh-pull-secret"
    route:
      enabled: false
    appConfig:
      configMaps:
        - name: "app-config-rhdh"
    extraEnvs:
      secrets:
        - name: "secrets-rhdh"
-
Create an Ingress resource using the following template, ensuring to customize the names as needed:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  # TODO: this is the name of your RHDH Ingress
  name: my-rhdh
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    # TODO: Using an ALB HTTPS Listener requires a certificate for your own domain. Fill in the ARN of your certificate, e.g.:
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-xxx:xxxx:certificate/xxxxxx
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    # TODO: Set your application domain name.
    external-dns.alpha.kubernetes.io/hostname: <rhdh_dns_name>
spec:
  ingressClassName: alb
  rules:
    # TODO: Set your application domain name.
    - host: <rhdh_dns_name>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                # TODO: my-rhdh is the name of your Backstage Custom Resource.
                # Adjust if you changed it!
                name: backstage-my-rhdh
                port:
                  name: http-backend
In the previous template, replace <rhdh_dns_name> with your Developer Hub domain name and update the value of
alb.ingress.kubernetes.io/certificate-arn
with your certificate ARN.
Wait until the DNS name is responsive, indicating that your Developer Hub instance is ready for use.
4.3. Monitoring and logging with Amazon Web Services (AWS) in Red Hat Developer Hub
In the Red Hat Developer Hub, monitoring and logging are facilitated through Amazon Web Services (AWS) integration. With features such as Amazon Managed Service for Prometheus for real-time monitoring and Amazon CloudWatch for comprehensive logging, you can ensure the reliability, scalability, and compliance of your Developer Hub application hosted on AWS infrastructure.
This integration enables you to oversee, diagnose, and refine your applications in the Red Hat ecosystem, leading to an improved development and operational journey.
4.3.1. Monitoring with Amazon Prometheus
Red Hat Developer Hub provides Prometheus metrics related to the running application. For more information about enabling or deploying Prometheus for EKS clusters, see Prometheus metrics in the Amazon documentation.
To monitor Developer Hub using Amazon Prometheus, you need to create an Amazon managed service for the Prometheus workspace and configure the ingestion of the Developer Hub Prometheus metrics. For more information, see Create a workspace and Ingest Prometheus metrics to the workspace sections in the Amazon documentation.
After ingesting Prometheus metrics into the created workspace, you can configure the metrics scraping to extract data from pods based on specific pod annotations.
Configuring annotations for monitoring
You can configure the annotations for monitoring in both Helm deployment and Operator-backed deployment.
- Helm deployment
-
To annotate the backstage pod for monitoring, update your
values.yaml
file as follows:

upstream:
  backstage:
    # --- TRUNCATED ---
    podAnnotations:
      prometheus.io/scrape: 'true'
      prometheus.io/path: '/metrics'
      prometheus.io/port: '7007'
      prometheus.io/scheme: 'http'
- Operator-backed deployment
-
Procedure
-
As an administrator of the operator, edit the default configuration to add Prometheus annotations as follows:
# Update OPERATOR_NS accordingly
$ OPERATOR_NS=rhdh-operator
$ kubectl edit configmap backstage-default-config -n "${OPERATOR_NS}"
-
Find the
deployment.yaml
key in the ConfigMap and add the annotations to the spec.template.metadata.annotations
field as follows:

deployment.yaml: |-
  apiVersion: apps/v1
  kind: Deployment
  # --- truncated ---
  spec:
    template:
      # --- truncated ---
      metadata:
        labels:
          rhdh.redhat.com/app:  # placeholder for 'backstage-<cr-name>'
        # --- truncated ---
        annotations:
          prometheus.io/scrape: 'true'
          prometheus.io/path: '/metrics'
          prometheus.io/port: '7007'
          prometheus.io/scheme: 'http'
  # --- truncated ---
-
Save your changes.
-
To verify if the scraping works:
-
Use
kubectl
to port-forward the Prometheus console to your local machine as follows:

$ kubectl --namespace=prometheus port-forward deploy/prometheus-server 9090
-
Open your web browser and navigate to
http://localhost:9090
to access the Prometheus console. -
Monitor relevant metrics, such as
process_cpu_user_seconds_total
.
4.3.2. Logging with Amazon CloudWatch logs
Logging within the Red Hat Developer Hub relies on the winston library. By default, logs at the debug level are not recorded. To activate debug logs, you must set the environment variable LOG_LEVEL
to debug in your Red Hat Developer Hub instance.
Configuring the application log level
You can configure the application log level in both Helm deployment and Operator-backed deployment.
- Helm deployment
-
To update the logging level, add the environment variable
LOG_LEVEL
to your Helm chart’s values.yaml
file:

upstream:
  backstage:
    # --- Truncated ---
    extraEnvVars:
      - name: LOG_LEVEL
        value: debug
- Operator-backed deployment
-
You can modify the logging level by including the environment variable
LOG_LEVEL
in your custom resource as follows:

spec:
  # Other fields omitted
  application:
    extraEnvs:
      envs:
        - name: LOG_LEVEL
          value: debug
Retrieving logs from Amazon CloudWatch
CloudWatch Container Insights is used to capture logs and metrics for Amazon EKS. For more information, see the Logging for Amazon EKS documentation.
To capture the logs and metrics, install the Amazon CloudWatch Observability EKS add-on in your cluster. Following the setup of Container Insights, you can access container logs using Logs Insights or Live Tail views.
CloudWatch names the log group where all container logs are consolidated in the following manner:
/aws/containerinsights/<ClusterName>/application
Following is an example query to retrieve logs from the Developer Hub instance:
fields @timestamp, @message, kubernetes.container_name
| filter kubernetes.container_name in ["install-dynamic-plugins", "backstage-backend"]
4.4. Using Amazon Cognito as an authentication provider in Red Hat Developer Hub
Amazon Cognito is an AWS service for adding an authentication layer to Developer Hub. You can sign in directly to Developer Hub using a user pool or federate through a third-party identity provider.
Although Amazon Cognito is not part of the core authentication providers for the Developer Hub, it can be integrated using the generic OpenID Connect (OIDC) provider.
You can configure your Developer Hub in both Helm Chart and Operator-backed deployments.
-
You have a User Pool or you have created a new one. For more information about user pools, see Amazon Cognito user pools documentation.
NoteEnsure that you have noted the AWS region where the user pool is located and the user pool ID.
-
You have created an App Client within your user pool for integrating the hosted UI. For more information, see Setting up the hosted UI with the Amazon Cognito console.
When setting up the hosted UI using the Amazon Cognito console, ensure to make the following adjustments:
-
In the Allowed callback URL(s) section, include the URL https://<rhdh_url>/api/auth/oidc/handler/frame. Ensure to replace <rhdh_url> with your Developer Hub application’s URL, such as my.rhdh.example.com.
-
Similarly, in the Allowed sign-out URL(s) section, add https://<rhdh_url>. Replace <rhdh_url> with your Developer Hub application’s URL, such as my.rhdh.example.com.
-
Under OAuth 2.0 grant types, select Authorization code grant to return an authorization code.
-
Under OpenID Connect scopes, ensure to select at least the following scopes:
-
OpenID
-
Profile
-
Email
-
- Helm deployment
-
Procedure
-
Edit or create your custom
app-config-rhdh
ConfigMap as follows:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-rhdh
data:
  "app-config-rhdh.yaml": |
    # --- Truncated ---
    app:
      title: Red Hat Developer Hub
      signInPage: oidc
    auth:
      environment: production
      session:
        secret: ${AUTH_SESSION_SECRET}
      providers:
        oidc:
          production:
            clientId: ${AWS_COGNITO_APP_CLIENT_ID}
            clientSecret: ${AWS_COGNITO_APP_CLIENT_SECRET}
            metadataUrl: ${AWS_COGNITO_APP_METADATA_URL}
            callbackUrl: ${AWS_COGNITO_APP_CALLBACK_URL}
            scope: 'openid profile email'
            prompt: auto
-
Edit or create your custom
secrets-rhdh
Secret using the following template:

apiVersion: v1
kind: Secret
metadata:
  name: secrets-rhdh
stringData:
  AUTH_SESSION_SECRET: "my super auth session secret - change me!!!"
  AWS_COGNITO_APP_CLIENT_ID: "my-aws-cognito-app-client-id"
  AWS_COGNITO_APP_CLIENT_SECRET: "my-aws-cognito-app-client-secret"
  AWS_COGNITO_APP_METADATA_URL: "https://cognito-idp.[region].amazonaws.com/[userPoolId]/.well-known/openid-configuration"
  AWS_COGNITO_APP_CALLBACK_URL: "https://[rhdh_dns]/api/auth/oidc/handler/frame"
-
Add references of both the ConfigMap and Secret resources in your
values.yaml
file:

upstream:
  backstage:
    image:
      pullSecrets:
        - rhdh-pull-secret
    podSecurityContext:
      fsGroup: 2000
    extraAppConfig:
      - filename: app-config-rhdh.yaml
        configMapRef: app-config-rhdh
    extraEnvVarsSecrets:
      - secrets-rhdh
-
Upgrade the Helm deployment:
helm upgrade rhdh \
  openshift-helm-charts/redhat-developer-hub \
  [--version 1.0.0-1] \
  --values /path/to/values.yaml
-
- Operator-backed deployment
-
-
Add the following code to your
app-config-rhdh
ConfigMap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-rhdh
data:
  "app-config-rhdh.yaml": |
    # --- Truncated ---
    signInPage: oidc
    auth:
      # Production to disable guest user login
      environment: production
      # Providing an auth.session.secret is needed because the oidc provider requires session support.
      session:
        secret: ${AUTH_SESSION_SECRET}
      providers:
        oidc:
          production:
            # See https://github.com/backstage/backstage/blob/master/plugins/auth-backend-module-oidc-provider/config.d.ts
            clientId: ${AWS_COGNITO_APP_CLIENT_ID}
            clientSecret: ${AWS_COGNITO_APP_CLIENT_SECRET}
            metadataUrl: ${AWS_COGNITO_APP_METADATA_URL}
            callbackUrl: ${AWS_COGNITO_APP_CALLBACK_URL}
            # Minimal set of scopes needed. Feel free to add more if needed.
            scope: 'openid profile email'
            # Note that by default, this provider will use the 'none' prompt which assumes that you are already logged on in the IDP.
            # You should set prompt to:
            # - auto: will let the IDP decide if you need to log on or if you can skip login when you have an active SSO session
            # - login: will force the IDP to always present a login form to the user
            prompt: auto
-
Add the following code to your
secrets-rhdh
Secret:

apiVersion: v1
kind: Secret
metadata:
  name: secrets-rhdh
stringData:
  # --- Truncated ---
  # TODO: Change auth session secret.
  AUTH_SESSION_SECRET: "my super auth session secret - change me!!!"
  # TODO: user pool app client ID
  AWS_COGNITO_APP_CLIENT_ID: "my-aws-cognito-app-client-id"
  # TODO: user pool app client Secret
  AWS_COGNITO_APP_CLIENT_SECRET: "my-aws-cognito-app-client-secret"
  # TODO: Replace region and user pool ID
  AWS_COGNITO_APP_METADATA_URL: "https://cognito-idp.[region].amazonaws.com/[userPoolId]/.well-known/openid-configuration"
  # TODO: Replace <rhdh_dns>
  AWS_COGNITO_APP_CALLBACK_URL: "https://[rhdh_dns]/api/auth/oidc/handler/frame"
-
Ensure your Custom Resource contains references to both the
app-config-rhdh
ConfigMap and secrets-rhdh
Secret:

apiVersion: rhdh.redhat.com/v1alpha1
kind: Backstage
metadata:
  # TODO: this is the name of your RHDH instance
  name: my-rhdh
spec:
  application:
    imagePullSecrets:
      - "rhdh-pull-secret"
    route:
      enabled: false
    appConfig:
      configMaps:
        - name: "app-config-rhdh"
    extraEnvs:
      secrets:
        - name: "secrets-rhdh"
-
Optional: If you have an existing Developer Hub instance backed by the Custom Resource and you have not edited it, you can manually delete the Developer Hub deployment to recreate it using the operator. Run the following command to delete the Developer Hub deployment:
$ kubectl delete deployment -l app.kubernetes.io/instance=<CR_NAME>
-
-
-
Navigate to your Developer Hub web URL and sign in using OIDC authentication, which prompts you to authenticate through the configured AWS Cognito user pool.
-
Once logged in, access Settings and verify user details.
5. Role-Based Access Control (RBAC) in Red Hat Developer Hub
Role-Based Access Control is a security paradigm that restricts access to authorized users. This feature includes defining roles with specific permissions and then assigning those roles to the users.
The Red Hat Developer Hub uses RBAC to improve the permission system within the platform. The RBAC feature in Developer Hub introduces an administrator role and leverages the organizational structure including teams, groups, and users by facilitating efficient access control.
5.1. Permission policies configuration
You can configure the permission policies in Red Hat Developer Hub in the following two ways:
-
Configuration of permission policies administrators
-
Configuration of permission policies defined in an external file
5.1.1. Configuration of permission policies administrators
The permission policies for users and groups in the Developer Hub are managed by permission policy administrators. Only permission policy administrators can access the Role-Based Access Control REST API.
You can set the credentials of a permission policy administrator in the app-config.yaml
file as follows:
permission:
enabled: true
rbac:
admin:
users:
- name: user:default/joeuser
5.1.2. Configuration of permission policies defined in an external file
Use this approach to configure the permission policies before starting the Red Hat Developer Hub. If permission policies are defined in an external file, you can import that file into the Developer Hub. The permission policies must be defined in the Casbin rules format. For information about the Casbin rules format, see Basics of Casbin rules.
The following is an example of permission policies configuration:
p, role:default/guests, catalog-entity, read, deny
p, role:default/guests, catalog.entity.create, create, deny
g, user:default/<USER_TO_ROLE>, role:default/guests
If a defined permission does not contain an action associated with it, then add use
as a policy. See the following example:
p, role:default/guests, kubernetes.proxy, use, deny
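To make the Casbin rule format concrete, the following illustrative sketch (not the Casbin library itself, and not part of Developer Hub; the helper name is hypothetical) splits rule lines like the examples above into named fields:

```python
def parse_rule(line: str) -> dict:
    """Parse a Casbin-style CSV rule line into named fields.

    'p' lines define a permission policy: role, permission, action, effect.
    'g' lines assign a user or group to a role.
    """
    fields = [field.strip() for field in line.split(",")]
    if fields[0] == "p" and len(fields) == 5:
        return {"type": "policy", "role": fields[1], "permission": fields[2],
                "action": fields[3], "effect": fields[4]}
    if fields[0] == "g" and len(fields) == 3:
        return {"type": "grouping", "member": fields[1], "role": fields[2]}
    raise ValueError(f"unsupported rule line: {line!r}")

print(parse_rule("p, role:default/guests, kubernetes.proxy, use, deny"))
print(parse_rule("g, user:default/alice, role:default/guests"))
```

In a real deployment, the RBAC backend plugin performs this parsing and policy evaluation for you; the sketch only shows how each comma-separated field is interpreted.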
You can define the policy.csv
file path in the app-config.yaml
file:
permission:
enabled: true
rbac:
policies-csv-file: /some/path/rbac-policy.csv
Mounting policy.csv
file to the Developer Hub Helm Chart
When the Red Hat Developer Hub is deployed with the Helm Chart, you must define the policy.csv
file by mounting it to the Developer Hub Helm Chart.
You can add your policy.csv
file to the Developer Hub Helm Chart by creating a configMap
and mounting it.
-
You are logged in to OpenShift Container Platform (OCP) using the OpenShift web console.
-
Red Hat Developer Hub is installed and deployed using Helm Chart.
For more information about installing the Red Hat Developer Hub using Helm Chart, see Installing Red Hat Developer Hub using Helm Chart.
-
In Red Hat OpenShift, create a
configMap
to hold the policies as shown in the following example:

Example ConfigMap

kind: ConfigMap
apiVersion: v1
metadata:
  name: rbac-policy
  namespace: rhdh
data:
  rbac-policy.csv: |
    p, role:default/guests, catalog-entity, read, allow
    p, role:default/guests, catalog.entity.create, create, allow
    g, user:default/<YOUR_USER>, role:default/guests
-
In the Developer Hub Helm Chart, go to Root Schema → Backstage chart schema → Backstage parameters → Backstage container additional volume mounts.
-
Select Add Backstage container additional volume mounts and add the following values:
-
mountPath:
opt/app-root/src/rbac
-
Name:
rbac-policy
-
-
Add the RBAC policy to the Backstage container additional volumes in the Developer Hub Helm Chart:
-
name:
rbac-policy
-
configMap
-
defaultMode:
420
-
name:
rbac-policy
-
-
-
Update the policy path in the
app-config.yaml
file as follows:

Example app-config.yaml file

permission:
  enabled: true
  rbac:
    policies-csv-file: ./rbac/rbac-policy.csv
5.1.3. Permission policies in Red Hat Developer Hub
Permission policies in Red Hat Developer Hub are a set of rules to govern access to resources or functionalities. These policies state the authorization level that is granted to users based on their roles. The permission policies are implemented to maintain security and confidentiality within a given environment.
The following permission policies are supported in the Developer Hub:
- Catalog permissions

Name | Resource type | Policy | Description
---|---|---|---
catalog.entity.read | catalog-entity | read | Allows user or role to read from the catalog
catalog.entity.create | | create | Allows user or role to create catalog entities, including registering an existing component in the catalog
catalog.entity.refresh | catalog-entity | update | Allows user or role to refresh a single or multiple entities from the catalog
catalog.entity.delete | catalog-entity | delete | Allows user or role to delete a single or multiple entities from the catalog
catalog.location.read | | read | Allows user or role to read a single or multiple locations from the catalog
catalog.location.create | | create | Allows user or role to create locations within the catalog
catalog.location.delete | | delete | Allows user or role to delete locations from the catalog
- Scaffolder permissions

Name | Resource type | Policy | Description
---|---|---|---
scaffolder.action.execute | scaffolder-action | | Allows the execution of an action from a template
scaffolder.template.parameter.read | scaffolder-template | read | Allows user or role to read a single or multiple parameters from a template
scaffolder.template.step.read | scaffolder-template | read | Allows user or role to read a single or multiple steps from a template
- RBAC permissions

Name | Resource type | Policy | Description
---|---|---|---
policy.entity.read | policy-entity | read | Allows user or role to read permission policies and roles
policy.entity.create | policy-entity | create | Allows user or role to create a single or multiple permission policies and roles
policy.entity.update | policy-entity | update | Allows user or role to update a single or multiple permission policies and roles
policy.entity.delete | policy-entity | delete | Allows user or role to delete a single or multiple permission policies and roles
- Kubernetes permissions

Name | Resource type | Policy | Description
---|---|---|---
kubernetes.proxy | | use | Allows user or role to access the proxy endpoint
5.2. Managing role-based access controls (RBAC) using the Red Hat Developer Hub Web UI
Administrators can use the Developer Hub web interface (Web UI) to allocate specific roles and permissions to individual users or groups. Allocating roles ensures that access to resources and functionalities is regulated across the Developer Hub.
With the administrator role in Developer Hub, you can assign permissions to users and groups, which allow users or groups to view, create, modify, and delete the roles using the Developer Hub Web UI.
To access the RBAC features in the Web UI, you must install and configure the @janus-idp/backstage-plugin-rbac
plugin as a dynamic plugin. For more information about installing a dynamic plugin, see Dynamic plugin installation.
After you install the @janus-idp/backstage-plugin-rbac
plugin, the Administration option appears at the bottom of the sidebar. When you click Administration, the RBAC tab appears by default, displaying all of the existing roles created in the Developer Hub. In the RBAC tab, you can also view the total number of users and groups, and the total number of permission policies associated with a role. You can also edit or delete a role using the Actions column.
5.2.1. Creating a role in the Red Hat Developer Hub Web UI
You can create a role in the Red Hat Developer Hub using the Web UI.
-
You have an administrator role in the Developer Hub.
-
You have installed the
@janus-idp/backstage-plugin-rbac
plugin in Developer Hub. For more information, see Dynamic plugin installation. -
You have configured the required permission policies. For more information, see Permission policies configuration.
-
Go to Administration at the bottom of the sidebar in the Developer Hub.
The RBAC tab appears, displaying all the created roles in the Developer Hub.
-
(Optional) Click any role to view the role information on the OVERVIEW page.
-
Click CREATE to create a role.
-
Enter the name and description of the role in the given fields and click NEXT.
-
Add users and groups using the search field, and click NEXT.
-
Select Plugin and Permission from the drop-downs in the Add permission policies section.
-
Select or clear the Policy that you want to set in the Add permission policies section, and click NEXT.
-
Review the added information in the Review and create section.
-
Click CREATE.
The created role appears in the list available in the RBAC tab.
5.2.2. Editing a role in the Red Hat Developer Hub Web UI
You can edit a role in the Red Hat Developer Hub using the Web UI.
Note
|
The policies generated from a |
-
You have an administrator role in the Developer Hub.
-
You have installed the
@janus-idp/backstage-plugin-rbac
plugin in Developer Hub. For more information, see Dynamic plugin installation. -
You have configured the required permission policies. For more information, see Permission policies configuration.
-
The role that you want to edit is created in the Developer Hub.
-
Go to Administration at the bottom of the sidebar in the Developer Hub.
The RBAC tab appears, displaying all the created roles in the Developer Hub.
-
(Optional) Click any role to view the role information on the OVERVIEW page.
-
Select the edit icon for the role that you want to edit.
-
Edit the details of the role, such as name, description, users and groups, and permission policies, and click NEXT.
-
Review the edited details of the role and click SAVE.
After editing a role, you can view the edited details of a role on the OVERVIEW page of a role. You can also edit a role’s users and groups or permissions by using the edit icon on the respective cards on the OVERVIEW page.
5.2.3. Deleting a role in the Red Hat Developer Hub Web UI
You can delete a role in the Red Hat Developer Hub using the Web UI.
Note
|
The policies generated from a |
-
You have an administrator role in the Developer Hub.
-
You have installed the
@janus-idp/backstage-plugin-rbac
plugin in Developer Hub. For more information, see Dynamic plugin installation. -
You have configured the required permission policies. For more information, see Permission policies configuration.
-
The role that you want to delete is created in the Developer Hub.
-
Go to Administration at the bottom of the sidebar in the Developer Hub.
The RBAC tab appears, displaying all the created roles in the Developer Hub.
-
(Optional) Click any role to view the role information on the OVERVIEW page.
-
Select the delete icon from the Actions column for the role that you want to delete.
A Delete this role? pop-up appears on the screen.
-
Click DELETE.
5.3. Role-based Access Control (RBAC) REST API
Red Hat Developer Hub provides an RBAC REST API that you can use to manage the permissions and roles in the Developer Hub. This API helps you automate the maintenance of Developer Hub permission policies and roles.
Using the RBAC REST API, you can perform the following actions:
-
Retrieve information about all permission policies, specific permission policies, or roles
-
Create, update, or delete a permission policy or a role
-
Retrieve permission policy information about static plugins
The RBAC REST API requires the following components:
Authorization
The RBAC REST API requires Bearer token authorization for the permitted user role. For development purposes, you can retrieve the token from the web console in a browser: inspect a token request in the list of network requests and find the token in the response JSON.
Authorization: Bearer $token
For example, on the Homepage of the Developer Hub, you can navigate to the Network tab and search for the query?term=
network call. Alternatively, you can go to the Catalog page and select any network call with entity-facets
to acquire the Bearer token.
HTTP methods
The RBAC REST API supports the following HTTP methods for API requests:
-
GET
: Retrieves specified information from a specified resource endpoint -
POST
: Creates or updates a resource -
PUT
: Updates a resource -
DELETE
: Deletes a resource
Base URL
The base URL for RBAC REST API requests is http://SERVER:PORT/api/permission/policies
, such as http://localhost:7007/api/permission/policies
.
Endpoints
RBAC REST API endpoints, such as /api/permission/policies/[kind]/[namespace]/[name]
for a specified kind, namespace, and username, are the URIs that you append to the base URL to access the corresponding resource.
Example request URL for /api/permission/policies/[kind]/[namespace]/[name]
endpoint is:
Note
|
If at least one permission is assigned to |
Request data
HTTP POST
requests in the RBAC REST API may require a JSON request body with data to accompany the request.
Example POST
request URL and JSON request body data for
http://localhost:7007/api/permission/policies
:
{
"entityReference": "role:default/test",
"permission": "catalog-entity",
"policy": "delete",
"effect": "allow"
}
HTTP status codes
The RBAC REST API supports the following HTTP status codes to return as responses:
-
200
OK: The request was successful. -
201
Created: The request resulted in a new resource being successfully created. -
204
No Content: The request was successful, but there is no additional content to send in the response payload. -
400
Bad Request: The request contains an input error. -
401
Unauthorized: The request lacks valid authentication for the requested resource. -
403
Forbidden: The server refuses to authorize the request. -
404
Not Found: The server cannot find the requested resource. -
409
Conflict: The request conflicts with the current state of the target resource.
5.3.1. Sending requests with the RBAC REST API using a REST client or curl utility
The RBAC REST API enables you to interact with the permission policies and roles in Developer Hub without using the user interface. You can send RBAC REST API requests using any REST client or curl utility.
-
Red Hat Developer Hub is installed and running. For more information about installing Red Hat Developer Hub, see Installing Red Hat Developer Hub using Helm Chart.
-
You have access to the Developer Hub.
-
Identify a relevant API endpoint to which you want to send a request, such as
POST /api/permission/policies
. Adjust any request details according to your use case.
For REST client:
-
Authorization: Enter the generated token from the web console.
-
HTTP method: Set to
POST
. -
URL: Enter the RBAC REST API base URL and endpoint such as
http://localhost:7007/api/permission/policies
.
For curl utility:
-
-X
: Set toPOST
-
-H
: Set the following headers:
Content-Type: application/json
Authorization: Bearer $token
$token
is the requested token from the web console in a browser. -
URL: Enter the following RBAC REST API base URL endpoint, such as
http://localhost:7007/api/permission/policies
-
-d
: Add a request JSON body
Example request:
curl -X POST "http://localhost:7007/api/permission/policies" \
  -d '{"entityReference":"role:default/test", "permission": "catalog-entity", "policy": "read", "effect":"allow"}' \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $token" \
  -v
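The same request can be prepared with Python's standard library. This sketch only builds the request object, because actually sending it requires a running Developer Hub instance and a valid Bearer token (the token value below is a placeholder):

```python
import json
import urllib.request

token = "REPLACE_WITH_YOUR_TOKEN"  # placeholder, obtain as described above
policy = {
    "entityReference": "role:default/test",
    "permission": "catalog-entity",
    "policy": "read",
    "effect": "allow",
}
# Build a POST request equivalent to the curl example.
req = urllib.request.Request(
    "http://localhost:7007/api/permission/policies",
    data=json.dumps(policy).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {token}",
    },
    method="POST",
)
# To send it against a live instance: urllib.request.urlopen(req)
print(req.method, req.full_url)
```

Any HTTP client works here; the essential parts are the JSON body, the Content-Type header, and the Bearer token.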
-
-
Execute the request and review the response.
5.3.2. Supported RBAC REST API endpoints
The RBAC REST API provides the following endpoints for managing permission policies and roles in the Developer Hub and for retrieving information about the policies and roles.
Permission policies
The RBAC REST API supports the following endpoints for managing permission policies in the Red Hat Developer Hub.
- [GET] /api/permission/policies
-
Returns permission policies list for all users.
Example response (JSON)

[
  {
    "entityReference": "role:default/test",
    "permission": "catalog-entity",
    "policy": "read",
    "effect": "allow"
  },
  {
    "entityReference": "role:default/test",
    "permission": "catalog.entity.create",
    "policy": "use",
    "effect": "allow"
  }
]
- [GET] /api/permission/policies/{kind}/{namespace}/{name}
-
Returns permission policies related to the specified entity reference.
Table 1. Request parameters

Name | Description | Type | Requirement
---|---|---|---
kind | Kind of the entity | String | Required
namespace | Namespace of the entity | String | Required
name | Username related to the entity | String | Required

Example response (JSON)

[
  {
    "entityReference": "role:default/test",
    "permission": "catalog-entity",
    "policy": "read",
    "effect": "allow"
  },
  {
    "entityReference": "role:default/test",
    "permission": "catalog.entity.create",
    "policy": "use",
    "effect": "allow"
  }
]
- [POST] /api/permission/policies
-
Creates a permission policy for a specified entity.
Table 2. Request parameters

Name | Description | Type | Requirement
---|---|---|---
entityReference | Reference values of an entity including namespace and name | String | Required
permission | Type of the permission | String | Required
policy | Read or write policy to the permission | String | Required
effect | Indication of allowing or not allowing the policy | String | Required

Example request body (JSON)

{
  "entityReference": "role:default/test",
  "permission": "catalog-entity",
  "policy": "read",
  "effect": "allow"
}

Example response

201 Created
- [PUT] /api/permission/policies/{kind}/{namespace}/{name}
-
Updates a permission policy for a specified entity.
Request parameters

The request body contains the oldPolicy and newPolicy objects:

Name | Description | Type | Requirement
---|---|---|---
permission | Type of the permission | String | Required
policy | Read or write policy to the permission | String | Required
effect | Indication of allowing or not allowing the policy | String | Required

Example request body (JSON)

{
  "oldPolicy": {
    "permission": "catalog-entity",
    "policy": "read",
    "effect": "deny"
  },
  "newPolicy": {
    "permission": "policy-entity",
    "policy": "read",
    "effect": "allow"
  }
}

Example response

200
- [DELETE] /api/permission/policies/{kind}/{namespace}/{name}?permission={value1}&policy={value2}&effect={value3}

  Deletes a permission policy added to the specified entity.

  Table 3. Request parameters

  Name | Description | Type | Requirement
  ---|---|---|---
  kind | Kind of the entity | String | Required
  namespace | Namespace of the entity | String | Required
  name | Username related to the entity | String | Required
  permission | Type of the permission | String | Required
  policy | Read or write policy to the permission | String | Required
  effect | Indication of allowing or not allowing the policy | String | Required

  Example response: 204 No Content
- [GET] /api/permission/plugins/policies

  Returns permission policies for all static plugins.

  Example response (JSON):

  [
    {
      "pluginId": "catalog",
      "policies": [
        { "permission": "catalog-entity", "policy": "read" },
        { "permission": "catalog.entity.create", "policy": "create" },
        { "permission": "catalog-entity", "policy": "delete" },
        { "permission": "catalog-entity", "policy": "update" },
        { "permission": "catalog.location.read", "policy": "read" },
        { "permission": "catalog.location.create", "policy": "create" },
        { "permission": "catalog.location.delete", "policy": "delete" }
      ]
    },
    ...
  ]
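The response above groups policies by plugin. As a sketch of how a client might consume that shape (the sample data below is a truncated, hypothetical response, not live output from Developer Hub):

```python
# Hypothetical response in the shape returned by
# [GET] /api/permission/plugins/policies (truncated).
plugin_policies = [
    {
        "pluginId": "catalog",
        "policies": [
            {"permission": "catalog-entity", "policy": "read"},
            {"permission": "catalog.entity.create", "policy": "create"},
        ],
    },
]

def policies_for(plugin_id, data):
    """Collect the (permission, policy) pairs declared by one plugin."""
    return [
        (p["permission"], p["policy"])
        for entry in data
        if entry["pluginId"] == plugin_id
        for p in entry["policies"]
    ]

print(policies_for("catalog", plugin_policies))
# [('catalog-entity', 'read'), ('catalog.entity.create', 'create')]
```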
Roles
The RBAC REST API supports the following endpoints for managing roles in the Red Hat Developer Hub.
- [GET] /api/permission/roles

  Returns all roles in Developer Hub.

  Example response (JSON):

  [
    {
      "memberReferences": ["user:default/pataknight"],
      "name": "role:default/guests"
    },
    {
      "memberReferences": [
        "group:default/janus-authors",
        "user:default/matt"
      ],
      "name": "role:default/rbac_admin"
    }
  ]
- [POST] /api/permission/roles

  Creates a role in Developer Hub.

  Table 4. Request parameters

  Name | Description | Type | Requirement
  ---|---|---|---
  body | The memberReferences, group, namespace, and name of the new role to be created | Request body | Required

  Example request body (JSON):

  {
    "memberReferences": ["group:default/test"],
    "name": "role:default/test_admin"
  }

  Example response: 201 Created
- [PUT] /api/permission/roles/{kind}/{namespace}/{name}

  Updates memberReferences, kind, namespace, or name for a role in Developer Hub.

  Request parameters: the request body contains the oldRole and newRole objects:

  Name | Description | Type | Requirement
  ---|---|---|---
  body | The memberReferences, group, namespace, and name of the role to be updated | Request body | Required

  Example request body (JSON):

  {
    "oldRole": {
      "memberReferences": ["group:default/test"],
      "name": "role:default/test_admin"
    },
    "newRole": {
      "memberReferences": ["group:default/test", "user:default/test2"],
      "name": "role:default/test_admin"
    }
  }

  Example response: 200 OK
- [DELETE] /api/permission/roles/{kind}/{namespace}/{name}?memberReferences=<VALUE>

  Deletes the specified user or group from a role in Developer Hub.

  Table 5. Request parameters

  Name | Description | Type | Requirement
  ---|---|---|---
  kind | Kind of the entity | String | Required
  namespace | Namespace of the entity | String | Required
  name | Username related to the entity | String | Required
  memberReferences | Associated group information | String | Required

  Example response: 204 No Content
- [DELETE] /api/permission/roles/{kind}/{namespace}/{name}

  Deletes a specified role from Developer Hub.

  Table 6. Request parameters

  Name | Description | Type | Requirement
  ---|---|---|---
  kind | Kind of the entity | String | Required
  namespace | Namespace of the entity | String | Required
  name | Username related to the entity | String | Required

  Example response: 204 No Content
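Putting the pieces together, the following hedged Python sketch prepares, but does not send, a request to create a role. The host name and token are placeholders, and the helper itself is illustrative rather than part of any product SDK; passing the resulting object to `urllib.request.urlopen()` would perform the actual call:

```python
import json
import urllib.request

def build_create_role_request(base_url, token, member_references, name):
    """Prepare a POST /api/permission/roles request as a
    urllib.request.Request; nothing is sent on the network here."""
    body = json.dumps({
        "memberReferences": member_references,
        "name": name,
    }).encode()
    return urllib.request.Request(
        url=f"{base_url}/api/permission/roles",
        data=body,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {token}",  # placeholder token
        },
    )

req = build_create_role_request(
    "https://developer-hub.example.com",  # placeholder host
    "<token>",
    ["group:default/test"],
    "role:default/test_admin",
)
print(req.method, req.full_url)
# POST https://developer-hub.example.com/api/permission/roles
```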
6. Dynamic plugin installation
The dynamic plugin support is based on the backend plugin manager package, which is a service that scans a configured root directory (dynamicPlugins.rootDirectory
in the app config) for dynamic plugin packages and loads them dynamically.
You can use the dynamic plugins that come preinstalled with Red Hat Developer Hub or install external dynamic plugins from a public NPM registry.
6.1. Viewing installed plugins
Using the Dynamic Plugins Info front-end plugin, you can view plugins that are currently installed in your Red Hat Developer Hub application. This plugin is enabled by default.
-
Open your Developer Hub application and click Administration.
-
Go to the PLUGINS tab.
The PLUGINS tab contains a list of installed plugins and related information, such as NAME, VERSION, and ROLE.
6.2. Preinstalled dynamic plugins
Red Hat Developer Hub is preinstalled with a selection of dynamic plugins. The dynamic plugins that require custom configuration are disabled by default.
For a complete list of dynamic plugins that are preinstalled in this release of Developer Hub, see the Dynamic plugins support matrix.
Upon application startup, for each plugin that is disabled by default, the install-dynamic-plugins init container
within the Developer Hub pod log displays a message similar to the following:
======= Skipping disabled dynamic plugin ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-dynamic
To enable this plugin, add a package with the same name to the Helm chart and change the value in the disabled field to false. For example:
global:
dynamic:
includes:
- dynamic-plugins.default.yaml
plugins:
- package: ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-dynamic
disabled: false
Note
The default configuration for a plugin is extracted from the dynamic-plugins.default.yaml file; however, you can use a pluginConfig entry to override the default configuration.
6.2.1. Preinstalled dynamic plugin descriptions and details
Important
Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information on Red Hat Technology Preview features, see Technology Preview Features Scope. Additional detail on how Red Hat provides support for bundled community dynamic plugins is available on the Red Hat Developer Support Policy page.
There are 56 plugins available in Red Hat Developer Hub. See the Dynamic plugins support matrix for details.
Name | Role | Plugin | Description | Version | Support Level | Path | Required Variables | Default
---|---|---|---|---|---|---|---|---
3scale | Backend | @janus-idp/backstage-plugin-3scale-backend | The 3scale Backstage provider plugin synchronizes the 3scale content into the Backstage catalog. | 1.4.7 | Red Hat Tech Preview | ./dynamic-plugins/dist/janus-idp-backstage-plugin-3scale-backend-dynamic | | Disabled
AAP | Backend | @janus-idp/backstage-plugin-aap-backend | | 1.5.5 | Red Hat Tech Preview | ./dynamic-plugins/dist/janus-idp-backstage-plugin-aap-backend-dynamic | | Disabled
ACR | Frontend | @janus-idp/backstage-plugin-acr | | 1.2.28 | Red Hat Tech Preview | ./dynamic-plugins/dist/janus-idp-backstage-plugin-acr | | Disabled
Analytics Provider Segment | Frontend | @janus-idp/backstage-plugin-analytics-provider-segment | This plugin provides an implementation of the Backstage Analytics API for Segment. Once installed and configured, analytics events will be sent to Segment as your users navigate and use your Backstage instance. | 1.2.11 | Production | ./dynamic-plugins/dist/janus-idp-backstage-plugin-analytics-provider-segment | | Disabled
Argo CD | Frontend | @roadiehq/backstage-plugin-argo-cd | Backstage plugin to view and interact with Argo CD. | 2.6.2 | Production | ./dynamic-plugins/dist/roadiehq-backstage-plugin-argo-cd | | Disabled
Argo CD | Backend | @roadiehq/backstage-plugin-argo-cd-backend | Backstage plugin Argo CD backend | 2.14.5 | Production | ./dynamic-plugins/dist/roadiehq-backstage-plugin-argo-cd-backend-dynamic | | Disabled
Argo CD | Backend | @roadiehq/scaffolder-backend-argocd | | 1.1.23 | Community Support | ./dynamic-plugins/dist/roadiehq-scaffolder-backend-argocd-dynamic | | Disabled
Azure Devops | Frontend | @backstage/plugin-azure-devops | | 0.3.12 | Community Support | ./dynamic-plugins/dist/backstage-plugin-azure-devops | | Disabled
Azure Devops | Backend | @backstage/plugin-azure-devops-backend | Azure DevOps backend plugin that contains the API for retrieving builds, pull requests, etc., which is used by the Azure DevOps frontend plugin. | 0.5.5 | Community Support | ./dynamic-plugins/dist/backstage-plugin-azure-devops-backend-dynamic | | Disabled
Azure Devops | Backend | @backstage/plugin-scaffolder-backend-module-azure | The azure module for @backstage/plugin-scaffolder-backend | 0.1.5 | Community Support | dynamic-plugins/wrappers/backstage-plugin-scaffolder-backend-module-azure-dynamic | | Enabled
Bitbucket | Backend | @backstage/plugin-catalog-backend-module-bitbucket-cloud | A Backstage catalog backend module that helps integrate towards Bitbucket Cloud. | 0.1.28 | Community Support | dynamic-plugins/wrappers/backstage-plugin-catalog-backend-module-bitbucket-cloud | | Disabled
Bitbucket | Backend | @backstage/plugin-catalog-backend-module-bitbucket-server | A Backstage catalog backend module that helps integrate towards Bitbucket Server. | 0.1.26 | Community Support | dynamic-plugins/wrappers/backstage-plugin-catalog-backend-module-bitbucket-server | | Disabled
Bitbucket | Backend | @backstage/plugin-scaffolder-backend-module-bitbucket-cloud | The Bitbucket Cloud module for @backstage/plugin-scaffolder-backend | 0.1.3 | Community Support | dynamic-plugins/wrappers/backstage-plugin-scaffolder-backend-module-bitbucket-cloud | | Enabled
Bitbucket | Backend | @backstage/plugin-scaffolder-backend-module-bitbucket-server | The Bitbucket Server module for @backstage/plugin-scaffolder-backend. | 0.1.3 | Community Support | dynamic-plugins/wrappers/backstage-plugin-scaffolder-backend-module-bitbucket-server | | Enabled
Datadog | Frontend | @roadiehq/backstage-plugin-datadog | Embed Datadog graphs and dashboards into Backstage. | 2.2.6 | Community Support | ./dynamic-plugins/dist/roadiehq-backstage-plugin-datadog | | Disabled
Dynatrace | Frontend | @backstage/plugin-dynatrace | A Backstage plugin that integrates towards Dynatrace. | 9.0.0 | Community Support | ./dynamic-plugins/dist/backstage-plugin-dynatrace | | Disabled
Dynamic Plugins | Frontend | @janus-idp/backstage-plugin-dynamic-plugins-info | Dynamic Plugins Info plugin for Backstage. | 1.0.2 | Production | @janus-idp/backstage-plugin-dynamic-plugins-info | | Enabled
Gerrit | Backend | @backstage/plugin-scaffolder-backend-module-gerrit | The gerrit module for @backstage/plugin-scaffolder-backend. | 0.1.5 | Community Support | dynamic-plugins/wrappers/backstage-plugin-scaffolder-backend-module-gerrit-dynamic | | Enabled
Github | Backend | @backstage/plugin-catalog-backend-module-github | A Backstage catalog backend module that helps integrate towards Github | 0.5.3 | Community Support | ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-dynamic | | Disabled
Github | Backend | @backstage/plugin-catalog-backend-module-github-org | The github-org backend module for the catalog plugin. | 0.1.0 | Community Support | ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-github-org-dynamic | | Disabled
Github | Frontend | @backstage/plugin-github-actions | A Backstage plugin that integrates towards GitHub Actions | 0.6.11 | Community Support | ./dynamic-plugins/dist/backstage-plugin-github-actions | | Disabled
Github | Frontend | @backstage/plugin-github-issues | A Backstage plugin that integrates towards GitHub Issues | 0.2.19 | Community Support | ./dynamic-plugins/dist/backstage-plugin-github-issues | | Disabled
Github | Backend | @backstage/plugin-scaffolder-backend-module-github | The github module for @backstage/plugin-scaffolder-backend. | 0.2.3 | Community Support | dynamic-plugins/wrappers/backstage-plugin-scaffolder-backend-module-github-dynamic | | Enabled
Github | Frontend | @roadiehq/backstage-plugin-github-insights | Backstage plugin to provide Readmes, Top Contributors and other widgets. | 2.3.27 | Community Support | ./dynamic-plugins/dist/roadiehq-backstage-plugin-github-insights | | Disabled
Github | Frontend | @roadiehq/backstage-plugin-github-pull-requests | Backstage plugin to view and interact with GitHub pull requests. | 2.5.24 | Community Support | ./dynamic-plugins/dist/roadiehq-backstage-plugin-github-pull-requests | | Disabled
Github | Frontend | @roadiehq/backstage-plugin-security-insights | Backstage plugin to add security insights for GitHub repos. | 2.3.15 | Community Support | ./dynamic-plugins/dist/roadiehq-backstage-plugin-security-insights | | Disabled
Gitlab | Backend | @backstage/plugin-catalog-backend-module-gitlab | Extracts repositories out of a GitLab instance. | 0.3.10 | Community Support | ./dynamic-plugins/dist/backstage-plugin-catalog-backend-module-gitlab-dynamic | | Disabled
Gitlab | Backend | @backstage/plugin-scaffolder-backend-module-gitlab | A module for the scaffolder backend that lets you interact with gitlab | 0.2.16 | Community Support | ./dynamic-plugins/dist/backstage-plugin-scaffolder-backend-module-gitlab-dynamic | | Disabled
Gitlab | Frontend | @immobiliarelabs/backstage-plugin-gitlab | Backstage plugin to interact with GitLab | 6.4.0 | Community Support | ./dynamic-plugins/dist/immobiliarelabs-backstage-plugin-gitlab | | Disabled
Gitlab | Backend | @immobiliarelabs/backstage-plugin-gitlab-backend | Backstage plugin to interact with GitLab | 6.4.0 | Community Support | ./dynamic-plugins/dist/immobiliarelabs-backstage-plugin-gitlab-backend-dynamic | | Disabled
Jenkins | Frontend | @backstage/plugin-jenkins | A Backstage plugin that integrates towards Jenkins | 0.9.5 | Community Support | ./dynamic-plugins/dist/backstage-plugin-jenkins | | Disabled
Jenkins | Backend | @backstage/plugin-jenkins-backend | A Backstage backend plugin that integrates towards Jenkins | 0.3.7 | Community Support | ./dynamic-plugins/dist/backstage-plugin-jenkins-backend-dynamic | | Disabled
Jfrog Artifactory | Frontend | @janus-idp/backstage-plugin-jfrog-artifactory | The Jfrog Artifactory plugin displays information about your container images within the Jfrog Artifactory registry. | 1.2.28 | Red Hat Tech Preview | ./dynamic-plugins/dist/janus-idp-backstage-plugin-jfrog-artifactory | | Disabled
Jira | Frontend | @roadiehq/backstage-plugin-jira | Backstage plugin to view and interact with Jira | 2.5.4 | Community Support | ./dynamic-plugins/dist/roadiehq-backstage-plugin-jira | | Disabled
Keycloak | Backend | @janus-idp/backstage-plugin-keycloak-backend | The Keycloak backend plugin integrates Keycloak into Backstage. | 1.8.5 | Production | ./dynamic-plugins/dist/janus-idp-backstage-plugin-keycloak-backend-dynamic | | Disabled
Kubernetes | Frontend | @backstage/plugin-kubernetes | A Backstage plugin that integrates towards Kubernetes | 0.11.5 | Community Support | ./dynamic-plugins/dist/backstage-plugin-kubernetes | | Enabled
Kubernetes | Backend | @backstage/plugin-kubernetes-backend | A Backstage backend plugin that integrates towards Kubernetes | 0.15.3 | Production | ./dynamic-plugins/dist/backstage-plugin-kubernetes-backend-dynamic | | Enabled
Kubernetes | Frontend | @janus-idp/backstage-plugin-topology | The Topology plugin enables you to visualize the workloads such as Deployment, Job, Daemonset, Statefulset, CronJob, and Pods powering any service on the Kubernetes cluster. | 1.18.7 | Production | ./dynamic-plugins/dist/janus-idp-backstage-plugin-topology | | Enabled
Lighthouse | Frontend | @backstage/plugin-lighthouse | A Backstage plugin that integrates towards Lighthouse | 0.4.15 | Community Support | ./dynamic-plugins/dist/backstage-plugin-lighthouse | | Disabled
Nexus Repository Manager | Frontend | @janus-idp/backstage-plugin-nexus-repository-manager | The Nexus Repository Manager plugin displays the information about your build artifacts that are available in the Nexus Repository Manager in your Backstage application. | 1.4.28 | Red Hat Tech Preview | ./dynamic-plugins/dist/janus-idp-backstage-plugin-nexus-repository-manager | | Disabled
OCM | Frontend | @janus-idp/backstage-plugin-ocm | The Open Cluster Management (OCM) plugin integrates your Backstage instance with the … | 3.7.5 | Production | ./dynamic-plugins/dist/janus-idp-backstage-plugin-ocm | | Disabled
OCM | Backend | @janus-idp/backstage-plugin-ocm-backend | | 3.5.6 | Production | ./dynamic-plugins/dist/janus-idp-backstage-plugin-ocm-backend-dynamic | | Disabled
Pagerduty | Frontend | @pagerduty/backstage-plugin | A Backstage plugin that integrates towards PagerDuty | 0.9.3 | Community Support | ./dynamic-plugins/wrappers/pagerduty-backstage-plugin | | Disabled
Quay | Frontend | @janus-idp/backstage-plugin-quay | The Quay plugin displays the information about your container images within the Quay registry in your Backstage application. | 1.5.9 | Production | ./dynamic-plugins/dist/janus-idp-backstage-plugin-quay | | Disabled
Quay | Backend | @janus-idp/backstage-scaffolder-backend-module-quay | This module provides Backstage template actions for Quay. | 1.3.5 | Production | ./dynamic-plugins/dist/janus-idp-backstage-scaffolder-backend-module-quay-dynamic | | Enabled
RBAC | Frontend | @janus-idp/backstage-plugin-rbac | RBAC frontend plugin for Backstage. | 1.15.3 | Production | @janus-idp/backstage-plugin-rbac | | Disabled
Regex | Backend | @janus-idp/backstage-scaffolder-backend-module-regex | This plugin provides Backstage template actions for RegExp. | 1.3.5 | Production | ./dynamic-plugins/dist/janus-idp-backstage-scaffolder-backend-module-regex-dynamic | | Enabled
Scaffolder | Backend | @roadiehq/scaffolder-backend-module-utils | This contains a collection of actions to use in scaffolder templates. | 1.13.6 | Community Support | ./dynamic-plugins/dist/roadiehq-scaffolder-backend-module-utils-dynamic | | Enabled
ServiceNow | Backend | @janus-idp/backstage-scaffolder-backend-module-servicenow | This plugin provides Backstage template actions for ServiceNow. | 1.3.5 | Red Hat Tech Preview | ./dynamic-plugins/dist/janus-idp-backstage-scaffolder-backend-module-servicenow-dynamic | | Disabled
SonarQube | Frontend | @backstage/plugin-sonarqube | A Backstage plugin to display SonarQube code quality and security results. | 0.7.12 | Community Support | ./dynamic-plugins/dist/backstage-plugin-sonarqube | | Disabled
SonarQube | Backend | @backstage/plugin-sonarqube-backend | | 0.2.15 | Community Support | ./dynamic-plugins/dist/backstage-plugin-sonarqube-backend-dynamic | | Disabled
SonarQube | Backend | @janus-idp/backstage-scaffolder-backend-module-sonarqube | This module provides Backstage template actions for SonarQube. | 1.3.5 | Red Hat Tech Preview | ./dynamic-plugins/dist/janus-idp-backstage-scaffolder-backend-module-sonarqube-dynamic | | Disabled
Tech Radar | Frontend | @backstage/plugin-tech-radar | A Backstage plugin that lets you display a Tech Radar for your organization | 0.6.13 | Community Support | ./dynamic-plugins/dist/backstage-plugin-tech-radar | | Disabled
Techdocs | Frontend | @backstage/plugin-techdocs | The Backstage plugin that renders technical documentation for your components | 1.10.0 | Production | ./dynamic-plugins/dist/backstage-plugin-techdocs | | Disabled
Techdocs | Backend | @backstage/plugin-techdocs-backend | The Backstage backend plugin that renders technical documentation for your components | 1.9.6 | Production | ./dynamic-plugins/dist/backstage-plugin-techdocs-backend-dynamic | | Disabled
Tekton | Frontend | @janus-idp/backstage-plugin-tekton | The Tekton plugin enables you to visualize the PipelineRun resources available on the Kubernetes cluster. | 3.5.10 | Production | ./dynamic-plugins/dist/janus-idp-backstage-plugin-tekton | | Disabled
6.3. Installation of dynamic plugins using the Helm Chart
You can deploy a Developer Hub instance using a Helm Chart, which is a flexible installation method. With the Helm chart, you can sideload dynamic plugins into your Developer Hub instance without having to recompile your code or rebuild the container.
To install dynamic plugins in Developer Hub using Helm, add the following global.dynamic
parameters in your Helm Chart:
- plugins: the list of dynamic plugins intended for installation. By default, the list is empty. You can populate the plugins list with the following fields:
  - package: a package specification for the dynamic plugin package that you want to install. You can use a package for either an internal or external dynamic plugin installation. For an internal installation, use a local path to the folder containing the dynamic plugin. For an external installation, use a package specification from a public NPM repository.
  - integrity (required for external packages): an integrity checksum in the form of <alg>-<digest> specific to the package. Supported algorithms include sha256, sha384, and sha512.
  - pluginConfig: an optional plugin-specific app-config YAML fragment. See plugin configuration for more details.
  - disabled: disables the dynamic plugin if set to true. Default: false.
- includes: a list of YAML files utilizing the same syntax.
[NOTE] The plugins list in the includes file is merged with the plugins list in the main Helm values. If a plugin package is mentioned in both plugins lists, the plugins fields in the main Helm values override the plugins fields in the includes file. The default configuration includes the dynamic-plugins.default.yaml file, which contains all of the dynamic plugins preinstalled in Developer Hub, whether enabled or disabled by default.
6.3.1. Obtaining the integrity checksum
To obtain the integrity checksum, enter the following command:
npm view <package name>@<version> dist.integrity
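The integrity value follows the Subresource Integrity format: the algorithm name, a dash, and the base64-encoded digest of the package tarball. The following Python sketch is illustrative only, to show how such a string is formed; in practice, use the npm view command above to obtain the published value:

```python
import base64
import hashlib

def integrity(data: bytes, alg: str = "sha512") -> str:
    """Return an <alg>-<digest> integrity string, where <digest> is the
    base64-encoded hash of the given package tarball bytes."""
    digest = hashlib.new(alg, data).digest()
    return f"{alg}-{base64.b64encode(digest).decode()}"

# Hypothetical tarball bytes, for illustration only.
print(integrity(b"example tarball bytes"))
```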
6.3.2. Example Helm chart configurations for dynamic plugin installations
The following examples demonstrate how to configure the Helm chart for specific types of dynamic plugin installations.
global:
  dynamic:
    plugins:
      - package: <a local package-spec used by npm pack>
      - package: <external package-spec used by npm pack>
        integrity: sha512-<some hash>
        pluginConfig: ...
global:
dynamic:
includes:
- dynamic-plugins.default.yaml
plugins:
- package: <some imported plugins listed in dynamic-plugins.default.yaml>
disabled: true
global:
  dynamic:
    includes:
      - dynamic-plugins.default.yaml
      - dynamic-plugins.custom.yaml
    plugins:
      - package: <some imported plugins listed in dynamic-plugins.custom.yaml>
        disabled: false
global:
  dynamic:
    includes:
      - dynamic-plugins.default.yaml
      - dynamic-plugins.custom.yaml
    plugins:
      - package: <some imported plugins listed in dynamic-plugins.custom.yaml>
        disabled: true
6.3.3. Installing external dynamic plugins using a Helm chart
The NPM registry contains external dynamic plugins that you can use for demonstration purposes. For example, the following community plugins are available in the janus-idp organization in the NPM registry:
* Notifications (frontend and backend)
* kubernetes-actions (scaffolder actions)
To install the Notifications and kubernetes-actions plugins, include them in the Helm chart values in the global.dynamic.plugins list as shown in the following example:
global:
  dynamic:
    plugins:
      - package: '@janus-idp/plugin-notifications-backend-dynamic@1.3.6'
        # Integrity can be found at https://registry.npmjs.org/@janus-idp/plugin-notifications-backend-dynamic
        integrity: 'sha512-Qd8pniy1yRx+x7LnwjzQ6k9zP+C1yex24MaCcx7dGDPT/XbTokwoSZr4baSSn8jUA6P45NUUevu1d629mG4JGQ=='
      - package: '@janus-idp/plugin-notifications@1.1.12'
        # https://registry.npmjs.org/@janus-idp/plugin-notifications
        integrity: 'sha512-GCdEuHRQek3ay428C8C4wWgxjNpNwCXgIdFbUUFGCLLkBFSaOEw+XaBvWaBGtQ5BLgE3jQEUxa+422uzSYC5oQ=='
        pluginConfig:
          dynamicPlugins:
            frontend:
              janus-idp.backstage-plugin-notifications:
                appIcons:
                  - name: notificationsIcon
                    module: NotificationsPlugin
                    importName: NotificationsActiveIcon
                dynamicRoutes:
                  - path: /notifications
                    importName: NotificationsPage
                    module: NotificationsPlugin
                    menuItem:
                      icon: notificationsIcon
                      text: Notifications
                    config:
                      pollingIntervalMs: 5000
      - package: '@janus-idp/backstage-scaffolder-backend-module-kubernetes-dynamic@1.3.5'
        # https://registry.npmjs.org/@janus-idp/backstage-scaffolder-backend-module-kubernetes-dynamic
        integrity: 'sha512-19ie+FM3QHxWYPyYzE0uNdI5K8M4vGZ0SPeeTw85XPROY1DrIY7rMm2G0XT85L0ZmntHVwc9qW+SbHolPg/qRA=='
        pluginConfig:
          proxy:
            endpoints:
              /explore-backend-completed:
                target: 'http://localhost:7017'
      - package: '@dfatwork-pkgs/search-backend-module-explore-wrapped-dynamic@0.1.3-next.1'
        # https://registry.npmjs.org/@dfatwork-pkgs/search-backend-module-explore-wrapped-dynamic
        integrity: 'sha512-mv6LS8UOve+eumoMCVypGcd7b/L36lH2z11tGKVrt+m65VzQI4FgAJr9kNCrjUZPMyh36KVGIjYqsu9+kgzH5A=='
      - package: '@dfatwork-pkgs/plugin-catalog-backend-module-test-dynamic@0.0.0'
        # https://registry.npmjs.org/@dfatwork-pkgs/plugin-catalog-backend-module-test-dynamic
        integrity: 'sha512-YsrZMThxJk7cYJU9FtAcsTCx9lCChpytK254TfGb3iMAYQyVcZnr5AA/AU+hezFnXLsr6gj8PP7z/mCZieuuDA=='
6.4. Installing external plugins in an air-gapped environment
You can install external plugins in an air-gapped environment by setting up a custom NPM registry. To configure the NPM registry URL and authentication information for dynamic plugin packages, see Using a custom NPM registry for dynamic plugin packages.
6.5. Using a custom NPM registry for dynamic plugin packages
You can configure the NPM registry URL and authentication information for dynamic plugin packages using a Helm chart. For dynamic plugin packages obtained through npm pack
, you can use a .npmrc
file.
Using the Helm chart, provide the .npmrc file for the NPM registry by creating a secret named dynamic-plugins-npmrc with the following content:
apiVersion: v1
kind: Secret
metadata:
  name: dynamic-plugins-npmrc
type: Opaque
stringData:
  .npmrc: |
    registry=<registry-url>
    //<registry-url>:_authToken=<auth-token>
    ...
6.6. Basic configuration of dynamic plugins
Some dynamic plugins require environment variables to be set. If a mandatory environment variable is not set, and the plugin is enabled, then the application might fail at startup.
The mandatory environment variables for each plugin are listed in the Dynamic plugins support matrix.
Note
Zip bomb detection: when installing a dynamic plugin that contains large files, if the installation script considers the package archive to be a zip bomb, the installation fails. To increase the maximum permitted size of a file inside a package archive, you can increase the relevant environment variable in the Developer Hub deployment.
6.7. Installation and configuration of Ansible Automation Platform
The Ansible Automation Platform (AAP) plugin synchronizes the accessible templates including job templates and workflow job templates from AAP into your Developer Hub catalog.
Important
The Ansible Automation Platform plugin is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information on Red Hat Technology Preview features, see Technology Preview Features Scope. Additional detail on how Red Hat provides support for bundled community dynamic plugins is available on the Red Hat Developer Support Policy page.
6.7.1. For administrators
Installing and configuring the AAP Backend plugin
The AAP backend plugin allows you to configure one or multiple providers using your app-config.yaml
configuration file in Developer Hub.
- Your Developer Hub application is installed and running.
- You have created an account in Ansible Automation Platform.
The AAP backend plugin is pre-loaded in Developer Hub with basic configuration properties. To enable it, set the disabled
property to false
as follows:
global:
dynamic:
includes:
- dynamic-plugins.default.yaml
plugins:
- package: ./dynamic-plugins/dist/janus-idp-backstage-plugin-aap-backend-dynamic
disabled: false
To enable the AAP plugin, you must set the following environment variables:
- AAP_BASE_URL: Base URL of the service
- AAP_AUTH_TOKEN: Authentication token for the service
- You can use the aap marker to configure the app-config.yaml file of Developer Hub as follows:

  catalog:
    providers:
      aap:
        dev:
          baseUrl: ${AAP_BASE_URL}
          authorization: 'Bearer ${AAP_AUTH_TOKEN}'
          owner: <owner>
          system: <system>
          schedule: # optional; same options as in TaskScheduleDefinition
            # supports cron, ISO duration, "human duration" as used in code
            frequency: { minutes: 1 }
            # supports ISO duration, "human duration" as used in code
            timeout: { minutes: 1 }
Log lines for AAP backend plugin troubleshooting
When you start your Developer Hub application, you can see the following log lines:
[1] 2023-02-13T15:26:09.356Z catalog info Discovered ResourceEntity API type=plugin target=AapResourceEntityProvider:dev
[1] 2023-02-13T15:26:09.423Z catalog info Discovered ResourceEntity Red Hat Event (DEV, v1.2.0) type=plugin target=AapResourceEntityProvider:dev
[1] 2023-02-13T15:26:09.620Z catalog info Discovered ResourceEntity Red Hat Event (TEST, v1.1.0) type=plugin target=AapResourceEntityProvider:dev
[1] 2023-02-13T15:26:09.819Z catalog info Discovered ResourceEntity Red Hat Event (PROD, v1.1.0) type=plugin target=AapResourceEntityProvider:dev
[1] 2023-02-13T15:26:09.819Z catalog info Applying the mutation with 3 entities type=plugin target=AapResourceEntityProvider:dev
6.7.2. For users
Accessing templates from AAP in Developer Hub
When you have configured the AAP backend plugin successfully, it synchronizes the templates including job templates and workflow job templates from AAP and displays them on the Developer Hub Catalog page as Resources.
- Your Developer Hub application is installed and running.
- You have installed the AAP backend plugin. For installation and configuration instructions, see Installing and configuring the AAP Backend plugin.
- Open your Developer Hub application and go to the Catalog page.
- Select Resource from the Kind drop-down and job template or workflow job template from the Type drop-down on the left side of the page.
  A list of all the available templates from AAP appears on the page.
- Select a template from the list.
  The OVERVIEW tab appears containing different cards, such as:
  - About: Provides detailed information about the template.
  - Relations: Displays the visual representation of the template and associated aspects.
  - Links: Contains links to the AAP dashboard and the details page of the template.
  - Has subcomponents: Displays a list of associated subcomponents.
6.8. Installation and configuration of Keycloak
The Keycloak backend plugin, which integrates Keycloak into Developer Hub, has the following capabilities:
- Synchronization of Keycloak users in a realm.
- Synchronization of Keycloak groups and their users in a realm.
6.8.1. For administrators
Installation
The Keycloak plugin is pre-loaded in Developer Hub with basic configuration properties. To enable it, set the disabled
property to false
as follows:
global:
dynamic:
includes:
- dynamic-plugins.default.yaml
plugins:
- package: ./dynamic-plugins/dist/janus-idp-backstage-plugin-keycloak-backend-dynamic
disabled: false
Basic configuration
To enable the Keycloak plugin, you must set the following environment variables:
- KEYCLOAK_BASE_URL
- KEYCLOAK_LOGIN_REALM
- KEYCLOAK_REALM
- KEYCLOAK_CLIENT_ID
- KEYCLOAK_CLIENT_SECRET
Advanced configuration
You can configure a schedule in the app-config.yaml
file, as follows:
catalog:
  providers:
    keycloakOrg:
      default:
        # ...
        # highlight-add-start
        schedule: # optional; same options as in TaskScheduleDefinition
          # supports cron, ISO duration, "human duration" as used in code
          frequency: { minutes: 1 }
          # supports ISO duration, "human duration" as used in code
          timeout: { minutes: 1 }
          initialDelay: { seconds: 15 }
        # highlight-add-end
Note
If you have made any changes to the schedule in the app-config.yaml file, then restart to apply the changes.
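Because the schedule options follow Backstage's TaskScheduleDefinition, the frequency can also be given as a cron expression. The following sketch shows a hypothetical every-30-minutes schedule, assuming the same keycloakOrg provider block as above:

```yaml
schedule:
  # cron form of the frequency; runs at minute 0 and 30 of every hour
  frequency: { cron: '*/30 * * * *' }
  timeout: { minutes: 3 }
  initialDelay: { seconds: 15 }
```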
You can override the default Keycloak query parameters in the app-config.yaml file, as follows:
catalog:
  providers:
    keycloakOrg:
      default:
        # ...
        # highlight-add-start
        userQuerySize: 500 # Optional
        groupQuerySize: 250 # Optional
        # highlight-add-end
Communication between Developer Hub and Keycloak is enabled by using the Keycloak API. Username and password, or client credentials, are the supported authentication methods.
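For context, the client credentials flow resolves to Keycloak's standard OpenID Connect token endpoint. The sketch below only composes that endpoint URL from placeholder values; the commented curl line shows what a token request would look like against a reachable instance:

```shell
# Placeholder values; replace with your Keycloak host and realm.
KEYCLOAK_BASE_URL="https://keycloak.example.com/auth"
KEYCLOAK_REALM="master"

# Standard OpenID Connect token endpoint exposed by Keycloak.
TOKEN_URL="${KEYCLOAK_BASE_URL}/realms/${KEYCLOAK_REALM}/protocol/openid-connect/token"
echo "$TOKEN_URL"

# Against a reachable instance, a client-credentials token request would be:
# curl -s -X POST "$TOKEN_URL" \
#   -d "grant_type=client_credentials" \
#   -d "client_id=<KEYCLOAK_CLIENT_ID>" \
#   -d "client_secret=<KEYCLOAK_CLIENT_SECRET>"
```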
The following table describes the parameters that you can configure to enable the plugin under the catalog.providers.keycloakOrg.<ENVIRONMENT_NAME> object in the app-config.yaml file:
Name | Description | Default Value | Required
---|---|---|---
baseUrl | Location of the Keycloak server, such as https://localhost:8443/auth | "" | Yes
realm | Realm to synchronize | master | No
loginRealm | Realm used to authenticate | master | No
username | Username to authenticate | "" | Yes if using password based authentication
password | Password to authenticate | "" | Yes if using password based authentication
clientId | Client ID to authenticate | "" | Yes if using client credentials based authentication
clientSecret | Client Secret to authenticate | "" | Yes if using client credentials based authentication
userQuerySize | Number of users to query at a time | 100 | No
groupQuerySize | Number of groups to query at a time | 100 | No
When using client credentials, the access type must be set to confidential and service accounts must be enabled. You must also add the following roles from the realm-management client role:
-
query-groups
-
query-users
-
view-users
Limitations
If you have self-signed or corporate certificate issues, you can set the following environment variable before starting Developer Hub:
NODE_TLS_REJECT_UNAUTHORIZED=0
Note
Setting this environment variable disables TLS certificate verification entirely and is not recommended.
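A safer alternative to disabling verification entirely is Node.js's NODE_EXTRA_CA_CERTS variable, which adds your certificate authority to the trusted set instead. The certificate path below is a placeholder:

```shell
# NOT recommended: disables all TLS certificate verification in Node.js.
# export NODE_TLS_REJECT_UNAUTHORIZED=0

# Safer: trust the self-signed or corporate CA explicitly.
# The path below is a placeholder for your CA bundle.
export NODE_EXTRA_CA_CERTS="/etc/pki/tls/certs/corporate-ca.pem"
```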
6.8.2. For users
Import of users and groups in Developer Hub using the Keycloak plugin
After configuring the plugin successfully, the plugin imports the users and groups each time it starts.
Note
If you set up a schedule, users and groups are also imported on that schedule.
After the first import is complete, you can select User from the Kind drop-down on the Catalog page to list the imported users. When you select a user, you can view the information imported from Keycloak. Similarly, you can select a group to view the list of groups and the information imported from Keycloak for each group.
6.9. Installation and configuration of Nexus Repository Manager
The Nexus Repository Manager plugin displays information about the build artifacts stored in Nexus Repository Manager in your Developer Hub application.
Important
The Nexus Repository Manager plugin is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information on Red Hat Technology Preview features, see Technology Preview Features Scope. Additional detail on how Red Hat provides support for bundled community dynamic plugins is available on the Red Hat Developer Support Policy page.
6.9.1. For administrators
Installing and configuring the Nexus Repository Manager plugin
The Nexus Repository Manager plugin is pre-loaded in Developer Hub with basic configuration properties. To enable it, set the disabled property to false as follows:
global:
  dynamic:
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - package: ./dynamic-plugins/dist/janus-idp-backstage-plugin-nexus-repository-manager
        disabled: false
-
Set the proxy to the desired Nexus Repository Manager server in the app-config.yaml file as follows:
proxy:
  '/nexus-repository-manager':
    target: 'https://<NEXUS_REPOSITORY_MANAGER_URL>'
    headers:
      X-Requested-With: 'XMLHttpRequest'
      # Uncomment the following line to access a private Nexus Repository Manager using a token
      # Authorization: 'Bearer <YOUR TOKEN>'
    changeOrigin: true
    # Change to "false" in case of using self hosted Nexus Repository Manager instance with a self-signed certificate
    secure: true
-
Optional: Change the base URL of the Nexus Repository Manager proxy as follows:
nexusRepositoryManager:
  # default path is `/nexus-repository-manager`
  proxyPath: /custom-path
-
Optional: Enable the following experimental annotations:
nexusRepositoryManager:
  experimentalAnnotations: true
-
Annotate your entity using the following annotations:
metadata:
  annotations:
    # insert the chosen annotations here
    # example
    nexus-repository-manager/docker.image-name: `<ORGANIZATION>/<REPOSITORY>`
For additional information about installing and configuring dynamic plugins, see the Dynamic plugin installation section.
6.9.2. For users
Using the Nexus Repository Manager plugin in Developer Hub
The Nexus Repository Manager plugin is a front-end plugin that enables you to view information about build artifacts.
-
Your Developer Hub application is installed and running.
-
You have installed the Nexus Repository Manager plugin. For the installation process, see Installing and configuring the Nexus Repository Manager plugin.
-
Open your Developer Hub application and select a component from the Catalog page.
-
Go to the BUILD ARTIFACTS tab.
The BUILD ARTIFACTS tab contains a list of build artifacts and related information, such as VERSION, REPOSITORY, REPOSITORY TYPE, MANIFEST, MODIFIED, and SIZE.
6.10. Installation and configuration of Tekton
You can use the Tekton plugin to visualize the results of CI/CD pipeline runs on your Kubernetes or OpenShift clusters. The plugin allows users to see, at a glance, the high-level status of all associated tasks in the pipeline for their applications.
6.10.1. For administrators
Installation
-
You have installed and configured the @backstage/plugin-kubernetes and @backstage/plugin-kubernetes-backend dynamic plugins. For more information about installing dynamic plugins, see Dynamic plugin installation.
-
You have configured the Kubernetes plugin to connect to the cluster using a ServiceAccount.
-
The ClusterRole must be granted for custom resources (PipelineRuns and TaskRuns) to the ServiceAccount accessing the cluster.
Note
If you have the RHDH Kubernetes plugin configured, then the ClusterRole is already granted.
-
To view the pod logs, you have granted permissions for pods/log.
-
You can use the following code to grant the ClusterRole for custom resources and pod logs:
kubernetes:
  ...
  customResources:
    - group: 'tekton.dev'
      apiVersion: 'v1'
      plural: 'pipelineruns'
    - group: 'tekton.dev'
      apiVersion: 'v1'
      ...
...
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backstage-read-only
rules:
  - apiGroups:
      - ""
    resources:
      - pods/log
    verbs:
      - get
      - list
      - watch
  ...
  - apiGroups:
      - tekton.dev
    resources:
      - pipelineruns
      - taskruns
    verbs:
      - get
      - list
You can use the prepared manifest for a read-only ClusterRole, which provides access for both the Kubernetes plugin and the Tekton plugin.
-
Add the following annotation to the entity’s catalog-info.yaml file to identify whether an entity contains the Kubernetes resources:
annotations:
  ...
  backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>
-
You can also add the backstage.io/kubernetes-namespace annotation to identify the Kubernetes resources using the defined namespace.
annotations:
  ...
  backstage.io/kubernetes-namespace: <RESOURCE_NS>
-
Add the following annotation to the catalog-info.yaml file of the entity to enable the Tekton related features in RHDH. The value of the annotation identifies the name of the RHDH entity:
annotations:
  ...
  janus-idp.io/tekton: <BACKSTAGE_ENTITY_NAME>
-
Add a custom label selector, which RHDH uses to find the Kubernetes resources. The label selector takes precedence over the ID annotations.
annotations:
  ...
  backstage.io/kubernetes-label-selector: 'app=my-app,component=front-end'
-
Add the following label to the resources so that the Kubernetes plugin gets the Kubernetes resources from the requested entity:
labels:
  ...
  backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>
Note
When you use the label selector, the mentioned labels must be present on the resource.
-
The Tekton plugin is pre-loaded in RHDH with basic configuration properties. To enable it, set the disabled property to false as follows:
global:
  dynamic:
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - package: ./dynamic-plugins/dist/janus-idp-backstage-plugin-tekton
        disabled: false
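Putting the annotations above together, a minimal catalog-info.yaml for a component might look like the following sketch. The entity name, namespace, and owner are placeholders, not values from this guide:

```yaml
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: my-app
  annotations:
    # lets the Kubernetes plugin find resources for this entity
    backstage.io/kubernetes-id: my-app
    # optional: restrict lookups to a defined namespace
    backstage.io/kubernetes-namespace: my-app-ns
    # enables the Tekton related features for this entity
    janus-idp.io/tekton: my-app
spec:
  type: service
  lifecycle: production
  owner: team-a
```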
6.10.2. For users
Using the Tekton plugin in RHDH
You can use the Tekton front-end plugin to view PipelineRun resources.
-
You have installed the Red Hat Developer Hub (RHDH).
-
You have installed the Tekton plugin. For the installation process, see Installing and configuring the Tekton plugin.
-
Open your RHDH application and select a component from the Catalog page.
-
Go to the CI tab.
The CI tab displays the list of PipelineRun resources associated with a Kubernetes cluster. The list contains pipeline run details, such as NAME, VULNERABILITIES, STATUS, TASK STATUS, STARTED, and DURATION.
-
Click the expand row button beside the PipelineRun name in the list to view the PipelineRun visualization. The pipeline run resource includes tasks to complete. When you hover the mouse pointer over a task card, you can view the steps to complete that particular task.