Configuring dynamic plugins
Configuring dynamic plugins in Red Hat Developer Hub
Abstract
- 1. Installing Ansible plugins for Red Hat Developer Hub
- 2. Install and configure Argo CD
- 3. Enable and configure the JFrog plugin
- 4. Enable and configure the Keycloak plugin
- 5. Enable and configure the Nexus Repository Manager plugin
- 6. Enable the Tekton plugin
- 7. Install the Topology plugin
- 8. Bulk importing in Red Hat Developer Hub
- 8.1. Enable and authorize Bulk Import capabilities in Red Hat Developer Hub
- 8.2. Import multiple GitHub repositories
- 8.3. Import multiple GitLab repositories
- 8.4. Monitor Bulk Import actions using audit logs
- 8.5. Input parameters for Bulk Import Scaffolder template
- 8.6. Set up a custom Scaffolder workflow for Bulk Import
- 8.7. Run Orchestrator workflows for bulk imports
- 8.8. Data handoff and custom workflow design
- 9. ServiceNow custom actions in Red Hat Developer Hub
- 10. Kubernetes custom actions in Red Hat Developer Hub
- 11. Configure Red Hat Developer Hub events module
- 12. Override Core Backend Service Configuration
As a platform engineer, you can configure dynamic plugins in Red Hat Developer Hub (RHDH) to access your development infrastructure or software development tools.
1. Installing Ansible plugins for Red Hat Developer Hub
Access Ansible-specific portal experience with curated learning paths, push-button content creation, and integrated development tools.
Ansible plugins for Red Hat Developer Hub deliver an Ansible-specific portal experience with curated learning paths, push-button content creation, integrated development tools, and other opinionated resources.
Additional resources
2. Install and configure Argo CD
You can use the Argo CD plugin to visualize the Continuous Delivery (CD) workflows in OpenShift GitOps.
2.1. Enable the Argo CD plugin
The Argo CD plugin provides a visual overview of the application's status, deployment details, commit messages, commit authors, the container image promoted to the environment, and the deployment history.
Prerequisites
- Add Argo CD instance information to your `app-config.yaml` configmap as shown in the following example:

  ```yaml
  argocd:
    appLocatorMethods:
      - type: 'config'
        instances:
          - name: argoInstance1
            url: https://argoInstance1.com
            username: ${ARGOCD_USERNAME}
            password: ${ARGOCD_PASSWORD}
          - name: argoInstance2
            url: https://argoInstance2.com
            username: ${ARGOCD_USERNAME}
            password: ${ARGOCD_PASSWORD}
  ```

  Note: Avoid using a trailing slash in the `url`, as it might cause unexpected behavior.

- Add the following annotation to the entity's `catalog-info.yaml` file to identify the Argo CD applications:

  ```yaml
  annotations:
    ...
    # The label that Argo CD uses to fetch all the applications. The format to be used is label.key=label.value. For example, rht-gitops.com/janus-argocd=quarkus-app.
    argocd/app-selector: '${ARGOCD_LABEL_SELECTOR}'
  ```

- (Optional) Add the following annotation to the entity's `catalog-info.yaml` file to switch between Argo CD instances as shown in the following example:

  ```yaml
  annotations:
    ...
    # The Argo CD instance name used in `app-config.yaml`.
    argocd/instance-name: '${ARGOCD_INSTANCE}'
  ```

  Note: If you do not set this annotation, the Argo CD plugin defaults to the first Argo CD instance configured in `app-config.yaml`.
Procedure
Add the following to your dynamic-plugins ConfigMap to enable the Argo CD plugin.
```yaml
global:
  dynamic:
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/roadiehq-backstage-plugin-argo-cd-backend:<tag>
        disabled: false
      - package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/backstage-community-plugin-argocd:<tag>
        disabled: false
```
The `<tag>` variable combines your RHDH application's Backstage version and the plugin version, in the format `bs_<backstage-version>__<plugin-version>` (note the double underscore delimiter).
To find the correct image tag for `<tag>`:

- Look in the RHDH release notes preface for your Backstage version.
- Locate the plugin version for paths starting with `oci://ghcr.io` within one of the tables in the Dynamic Plugins Reference guide.
For example, because RHDH 1.9 is based on Backstage 1.45.3, the tag will have the format bs_1.45.3__<plugin-version>.
To ensure environment stability, use a SHA256 digest instead of a version tag. See Determining SHA256 Digests.
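As a sketch of digest pinning (the digest shown is a placeholder, not a real value), the package reference replaces the tag with an immutable `@sha256:` digest:

```yaml
global:
  dynamic:
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      # Pin by digest instead of a mutable tag for reproducible deployments.
      # <digest> is a placeholder; resolve the real value from the tagged image,
      # for example with `skopeo inspect --format '{{.Digest}}'`.
      - package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/backstage-community-plugin-argocd@sha256:<digest>
        disabled: false
```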
2.2. Enable Argo CD Rollouts
Enable advanced deployment strategies such as blue-green and canary deployments by integrating Argo CD Rollouts with the Red Hat Developer Hub Kubernetes plugin.
The optional Argo CD Rollouts feature enhances Kubernetes by providing advanced deployment strategies, such as blue-green and canary deployments, for your applications. When integrated into the Backstage Kubernetes plugin, it allows developers and operations teams to visualize and manage Argo CD Rollouts seamlessly within the Developer Hub interface.
Prerequisites
- The Developer Hub Kubernetes plugin (`@backstage/plugin-kubernetes`) is installed and configured. To install and configure the Kubernetes plugin in Developer Hub, see the Installation and Configuration guide.
- You have access to the Kubernetes cluster with the necessary permissions to create and manage custom resources and `ClusterRoles`.
- The Kubernetes cluster has the `argoproj.io` group resources (for example, Rollouts and AnalysisRuns) installed.
Procedure
In the `app-config.yaml` file in your Developer Hub instance, add the following `customResources` component under the `kubernetes` configuration to enable Argo Rollouts and AnalysisRuns:

```yaml
kubernetes:
  ...
  customResources:
    - group: 'argoproj.io'
      apiVersion: 'v1alpha1'
      plural: 'Rollouts'
    - group: 'argoproj.io'
      apiVersion: 'v1alpha1'
      plural: 'analysisruns'
```

Grant `ClusterRole` permissions for custom resources.

Note:
- If the Developer Hub Kubernetes plugin is already configured, the `ClusterRole` permissions for Rollouts and AnalysisRuns might already be granted.
- You can use the prepared manifest to give read-only `ClusterRole` access to both the Kubernetes and ArgoCD plugins.

If the `ClusterRole` permission is not granted, use the following YAML manifest to create the `ClusterRole`:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backstage-read-only
rules:
  - apiGroups:
      - argoproj.io
    resources:
      - rollouts
      - analysisruns
    verbs:
      - get
      - list
```

Apply the manifest to the cluster using `kubectl`:

```shell
$ kubectl apply -f <your_cluster_role_file>.yaml
```

Ensure the `ServiceAccount` accessing the cluster has this `ClusterRole` assigned.
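One way to assign the `ClusterRole` to the `ServiceAccount` is a `ClusterRoleBinding`; the sketch below is illustrative, and the `ServiceAccount` name and namespace are assumptions you must adapt to your installation:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: backstage-read-only-binding
subjects:
  - kind: ServiceAccount
    name: backstage   # assumption: the ServiceAccount your Kubernetes plugin uses
    namespace: rhdh   # assumption: the namespace where that ServiceAccount lives
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: backstage-read-only
```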
Add annotations to `catalog-info.yaml` to identify Kubernetes resources for Backstage.

For identifying resources by entity ID:

```yaml
annotations:
  ...
  backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>
```

(Optional) For identifying resources by namespace:

```yaml
annotations:
  ...
  backstage.io/kubernetes-namespace: <RESOURCE_NAMESPACE>
```

For using custom label selectors, which override resource identification by entity ID or namespace:

```yaml
annotations:
  ...
  backstage.io/kubernetes-label-selector: 'app=my-app,component=front-end'
```

Note: Ensure you specify the labels declared in `backstage.io/kubernetes-label-selector` on your Kubernetes resources. This annotation overrides entity-based and namespace-based identification annotations, such as `backstage.io/kubernetes-id` and `backstage.io/kubernetes-namespace`.
Add labels to Kubernetes resources to enable Developer Hub to find the appropriate Kubernetes resources.

Developer Hub Kubernetes plugin label: add this label to map resources to specific Developer Hub entities:

```yaml
labels:
  ...
  backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>
```

GitOps application mapping: add this label to map Argo CD Rollouts to a specific GitOps application:

```yaml
labels:
  ...
  app.kubernetes.io/instance: <GITOPS_APPLICATION_NAME>
```

Note: If you use the label selector annotation (`backstage.io/kubernetes-label-selector`), ensure the specified labels are present on the resources. The label selector overrides other annotations such as `kubernetes-id` or `kubernetes-namespace`.
Verification
- Push the updated configuration to your GitOps repository to trigger a rollout.
- Open Red Hat Developer Hub interface and navigate to the entity you configured.
- Select the CD tab and then select the GitOps application. The side panel opens.
- In the Resources table of the side panel, verify that the following resources are displayed:
- Rollouts
- Analysis Runs (optional)
- Expand a rollout resource and review the following details:
- The Revisions row displays traffic distribution details for different rollout versions.
- The Analysis Runs row displays the status of analysis tasks that evaluate rollout success.
Additional resources
3. Enable and configure the JFrog plugin
Enable and configure the JFrog Artifactory plugin to display container images from your repository in Red Hat Developer Hub.
3.1. Enable the JFrog Artifactory plugin
To enable the JFrog Artifactory plugin, set the disabled property to false.
Procedure
Set the value to `false` as follows:

```yaml
global:
  dynamic:
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/backstage-community-plugin-jfrog-artifactory:<tag>
        disabled: false
```

The `<tag>` variable combines your RHDH application's Backstage version and the plugin version, in the format `bs_<backstage-version>__<plugin-version>` (note the double underscore delimiter).

To find the correct image tag for `<tag>`:

- Look in the RHDH release notes preface for your Backstage version.
- Locate the plugin version for paths starting with `oci://ghcr.io` within one of the tables in the Dynamic Plugins Reference guide.
For example, because RHDH 1.9 is based on Backstage 1.45.3, the tag will have the format bs_1.45.3__<plugin-version>.
To ensure environment stability, use a SHA256 digest instead of a version tag. See Determining SHA256 Digests.
3.2. Configure the JFrog Artifactory plugin
Configure proxy settings and annotations to display container images stored in your JFrog Artifactory repository.
Procedure
Set the proxy to the required JFrog Artifactory server in the app-config.yaml file as follows:
```yaml
proxy:
  endpoints:
    '/jfrog-artifactory/api':
      target: http://<hostname>:8082 # or https://<customer>.jfrog.io
      headers:
        # Authorization: 'Bearer <YOUR TOKEN>'
      # Change to "false" in case of using a self-hosted Artifactory instance with a self-signed certificate
      secure: true
```

Add the following annotation to the entity's `catalog-info.yaml` file to enable the JFrog Artifactory plugin features in RHDH components:

```yaml
metadata:
  annotations:
    'jfrog-artifactory/image-name': '<IMAGE-NAME>'
```
4. Enable and configure the Keycloak plugin
Integrate Keycloak into Red Hat Developer Hub to synchronize users and groups from your Red Hat Build of Keycloak (RHBK) realm. The supported RHBK version is 26.0.
4.1. Enable the Keycloak plugin
Enable the Keycloak plugin to synchronize users and groups from your Red Hat Build of Keycloak realm into Red Hat Developer Hub.
Prerequisites
To enable the Keycloak plugin, you must set the following environment variables:
- `KEYCLOAK_BASE_URL`
- `KEYCLOAK_LOGIN_REALM`
- `KEYCLOAK_REALM`
- `KEYCLOAK_CLIENT_ID`
- `KEYCLOAK_CLIENT_SECRET`
Procedure
The Keycloak plugin is pre-loaded in Developer Hub with basic configuration properties. To enable it, set the `disabled` property to `false` as follows:

```yaml
global:
  dynamic:
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - package: ./dynamic-plugins/dist/backstage-community-plugin-catalog-backend-module-keycloak-dynamic
        disabled: false
```
4.2. Configure the Keycloak plugin
Configure schedule frequency, query parameters, and authentication methods for synchronizing Keycloak users and groups.
Procedure
To configure the Keycloak plugin, add the following in your `app-config.yaml` file:

schedule: Configure the schedule frequency, timeout, and initial delay. The fields support cron, ISO duration, and "human duration" formats.

```yaml
catalog:
  providers:
    keycloakOrg:
      default:
        schedule:
          frequency: { minutes: 1 }
          timeout: { minutes: 1 }
          initialDelay: { seconds: 15 }
```

userQuerySize and groupQuerySize: Optionally, configure the Keycloak query parameters to define the number of users and groups to query at a time. The default value is 100 for both fields.

```yaml
catalog:
  providers:
    keycloakOrg:
      default:
        userQuerySize: 100
        groupQuerySize: 100
```

Authentication: Communication between Developer Hub and Keycloak is enabled by using the Keycloak API. Username and password, or client credentials, are supported authentication methods.
The following table describes the parameters that you can configure to enable the plugin under the `catalog.providers.keycloakOrg.<ENVIRONMENT_NAME>` object in the `app-config.yaml` file:

| Name | Description | Default Value | Required |
|---|---|---|---|
| `baseUrl` | Location of the Keycloak server, such as `https://localhost:8443/auth` | "" | Yes |
| `realm` | Realm to synchronize | `master` | No |
| `loginRealm` | Realm used to authenticate | `master` | No |
| `username` | Username to authenticate | "" | Yes if using password-based authentication |
| `password` | Password to authenticate | "" | Yes if using password-based authentication |
| `clientId` | Client ID to authenticate | "" | Yes if using client credentials-based authentication |
| `clientSecret` | Client Secret to authenticate | "" | Yes if using client credentials-based authentication |
| `userQuerySize` | Number of users to query at a time | `100` | No |
| `groupQuerySize` | Number of groups to query at a time | `100` | No |
When using client credentials:

- Set the access type to `confidential`.
- Enable service accounts.
- Add the following roles from the `realm-management` client role:
  - `query-groups`
  - `query-users`
  - `view-users`

Optionally, if you have self-signed or corporate certificate issues, you can set the following environment variable before starting Developer Hub:

`NODE_TLS_REJECT_UNAUTHORIZED=0`

Warning: Setting the environment variable is not recommended.
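Putting the parameters above together, a hedged `app-config.yaml` sketch for client credentials-based authentication might look like the following; the environment variables match the ones listed in the prerequisites, and the `default` environment name is an assumption:

```yaml
catalog:
  providers:
    keycloakOrg:
      default:
        baseUrl: ${KEYCLOAK_BASE_URL}
        loginRealm: ${KEYCLOAK_LOGIN_REALM}
        realm: ${KEYCLOAK_REALM}
        # Client credentials authentication; username/password are omitted.
        clientId: ${KEYCLOAK_CLIENT_ID}
        clientSecret: ${KEYCLOAK_CLIENT_SECRET}
```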
4.3. Keycloak plugin metrics
Monitor Keycloak fetch operations and diagnose issues by using OpenTelemetry metrics with Prometheus or Grafana.
The Keycloak backend plugin supports OpenTelemetry metrics that you can use to monitor fetch operations and diagnose potential issues.
4.3.1. Available Counters
Table 1. Keycloak metrics
| Metric Name | Description |
|---|---|
| `backend_keycloak_fetch_task_failure_count_total` | Counts fetch task failures where no data was returned due to an error. |
| `backend_keycloak_fetch_data_batch_failure_count_total` | Counts partial data batch failures. Even if some batches fail, the plugin continues fetching others. |
4.3.2. Labels
All counters include the taskInstanceId label, which uniquely identifies each scheduled fetch task. You can use this label to trace failures back to individual task executions.
Users can enter queries in the Prometheus UI or Grafana to explore and manipulate metric data.
In the following examples, a Prometheus Query Language (PromQL) expression returns the number of backend failures.
Example: get the number of backend failures associated with a `taskInstanceId`:

```
backend_keycloak_fetch_data_batch_failure_count_total{taskInstanceId="df040f82-2e80-44bd-83b0-06a984ca05ba"} 1
```

Example: get the number of backend failures during the last hour:

```
sum(backend_keycloak_fetch_data_batch_failure_count_total) - sum(backend_keycloak_fetch_data_batch_failure_count_total offset 1h)
```
PromQL supports arithmetic operations, comparison operators, logical/set operations, aggregation, and various functions. Users can combine these features to analyze time-series data effectively.
Additionally, the results can be visualized using Grafana.
4.3.3. Exporting Metrics
You can export metrics by using any OpenTelemetry-compatible backend, such as Prometheus.
Additional resources
5. Enable and configure the Nexus Repository Manager plugin
Use the Nexus Repository Manager plugin to view build artifacts in your Developer Hub application. You can find this community-sourced plugin in the Community plugins migration table.
5.1. Enable the Nexus Repository Manager plugin
To enable the Nexus Repository Manager plugin, set the disabled property to false.
Procedure
Set the value to `false` as follows:

```yaml
global:
  dynamic:
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/backstage-community-plugin-nexus-repository-manager:<tag>
        disabled: false
```

The `<tag>` variable combines your RHDH application's Backstage version and the plugin version, in the format `bs_<backstage-version>__<plugin-version>` (note the double underscore delimiter).

To find the correct image tag for `<tag>`:

- Look in the RHDH release notes preface for your Backstage version.
- Locate the plugin version for paths starting with `oci://ghcr.io` within one of the tables in the Dynamic Plugins Reference guide.
For example, because RHDH 1.9 is based on Backstage 1.45.3, the tag will have the format bs_1.45.3__<plugin-version>.
To ensure environment stability, use a SHA256 digest instead of a version tag. See Determining SHA256 Digests.
5.2. Configure the Nexus Repository Manager plugin
Configure the Nexus Repository Manager plugin to display artifact information from your Nexus Repository Manager instance.
Procedure
Set the proxy to the required Nexus Repository Manager server in the `app-config.yaml` file as follows:

```yaml
proxy:
  '/nexus-repository-manager':
    target: 'https://<NEXUS_REPOSITORY_MANAGER_URL>'
    headers:
      X-Requested-With: 'XMLHttpRequest'
      # Uncomment the following line to access a private Nexus Repository Manager using a token
      # Authorization: 'Bearer <YOUR TOKEN>'
    changeOrigin: true
    # Change to "false" in case of using a self-hosted Nexus Repository Manager instance with a self-signed certificate
    secure: true
```

Optional: Change the base URL of the Nexus Repository Manager proxy as follows:

```yaml
nexusRepositoryManager:
  # default path is `/nexus-repository-manager`
  proxyPath: /custom-path
```

Optional: Enable the following experimental annotations:

```yaml
nexusRepositoryManager:
  experimentalAnnotations: true
```

Annotate your entity using the following annotations:

```yaml
metadata:
  annotations:
    # insert the chosen annotations here
    # example
    nexus-repository-manager/docker.image-name: '<ORGANIZATION>/<REPOSITORY>'
```
6. Enable the Tekton plugin
The Tekton plugin enables you to monitor CI/CD pipeline results across your Kubernetes or OpenShift clusters. It provides a high-level overview of all associated tasks, allowing you to track the real-time status of your application pipelines.
Prerequisites
- You have installed and configured the `@backstage/plugin-kubernetes` and `@backstage/plugin-kubernetes-backend` dynamic plugins.
- You have configured the Kubernetes plugin to connect to the cluster using a `ServiceAccount`.
- The `ClusterRole` must be granted for custom resources (PipelineRuns and TaskRuns) to the `ServiceAccount` accessing the cluster.

  Note: If you have the RHDH Kubernetes plugin configured, then the `ClusterRole` is already granted.

- To view the pod logs, you have granted permissions for `pods/log`.

You can use the following code to grant the `ClusterRole` for custom resources and pod logs:

```yaml
kubernetes:
  ...
  customResources:
    - group: 'tekton.dev'
      apiVersion: 'v1'
      plural: 'pipelineruns'
    - group: 'tekton.dev'
      apiVersion: 'v1'
      ...
...
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backstage-read-only
rules:
  - apiGroups:
      - ""
    resources:
      - pods/log
    verbs:
      - get
      - list
      - watch
  ...
  - apiGroups:
      - tekton.dev
    resources:
      - pipelineruns
      - taskruns
    verbs:
      - get
      - list
```

You can use the prepared manifest for a read-only `ClusterRole`, which provides access for both the Kubernetes plugin and the Tekton plugin.

Add the following annotation to the entity's `catalog-info.yaml` file to identify whether an entity contains Kubernetes resources:

```yaml
annotations:
  ...
  backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>
```

You can also add the `backstage.io/kubernetes-namespace` annotation to identify the Kubernetes resources using the defined namespace:

```yaml
annotations:
  ...
  backstage.io/kubernetes-namespace: <RESOURCE_NS>
```

Add the following annotation to the `catalog-info.yaml` file of the entity to enable the Tekton-related features in RHDH. The value of the annotation identifies the name of the RHDH entity:

```yaml
annotations:
  ...
  janus-idp.io/tekton: <BACKSTAGE_ENTITY_NAME>
```

Add a custom label selector, which RHDH uses to find the Kubernetes resources. The label selector takes precedence over the ID annotations:

```yaml
annotations:
  ...
  backstage.io/kubernetes-label-selector: 'app=my-app,component=front-end'
```

Add the following label to the resources so that the Kubernetes plugin gets the Kubernetes resources from the requested entity:

```yaml
labels:
  ...
  backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>
```

Note: When you use the label selector, the mentioned labels must be present on the resource.
Procedure
To enable the Tekton plugin, set the `disabled` property to `false` as follows:

```yaml
global:
  dynamic:
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/backstage-community-plugin-tekton:<tag>
        disabled: false
```
The `<tag>` variable combines your RHDH application's Backstage version and the plugin version, in the format `bs_<backstage-version>__<plugin-version>` (note the double underscore delimiter).
To find the correct image tag for `<tag>`:

- Look in the RHDH release notes preface for your Backstage version.
- Locate the plugin version for paths starting with `oci://ghcr.io` within one of the tables in the Dynamic Plugins Reference guide.
For example, because RHDH 1.9 is based on Backstage 1.45.3, the tag will have the format bs_1.45.3__<plugin-version>.
To ensure environment stability, use a SHA256 digest instead of a version tag. See Determining SHA256 Digests.
7. Install the Topology plugin
Install and configure the Topology plugin to visualize Kubernetes workloads and manage labels and annotations.
7.1. Install the Topology plugin
Visualize Kubernetes workloads like Deployments, Pods, and Virtual Machines by enabling the Topology plugin.
The Topology plugin enables you to visualize the workloads such as Deployment, Job, Daemonset, Statefulset, CronJob, Pods and Virtual Machines powering any service on your Kubernetes cluster.
Prerequisites
- You have installed and configured the `@backstage/plugin-kubernetes-backend` dynamic plugin.
- You have configured the Kubernetes plugin to connect to the cluster using a ServiceAccount.
- The `ClusterRole` must be granted to the `ServiceAccount` accessing the cluster.

  Note: If you have the Developer Hub Kubernetes plugin configured, then the `ClusterRole` is already granted.
Procedure
The Topology plugin is pre-loaded in Developer Hub with basic configuration properties. To enable it, set the `disabled` property to `false` as follows:

`app-config.yaml` fragment:

```yaml
global:
  dynamic:
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - package: ./dynamic-plugins/dist/backstage-community-plugin-topology
        disabled: false
```
7.2. Configure the Topology plugin
Configure the Topology plugin to view OpenShift routes, pod logs, Tekton PipelineRuns, and virtual machines.
7.2.1. View OpenShift routes
Grant read access to routes resource in ClusterRole to view OpenShift routes in the Topology plugin.
Procedure
To view OpenShift routes, grant read access to the `routes` resource in the `ClusterRole`:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backstage-read-only
rules:
  ...
  - apiGroups:
      - route.openshift.io
    resources:
      - routes
    verbs:
      - get
      - list
```

Also add the following in the `kubernetes.customResources` property in your `app-config.yaml` file:

```yaml
kubernetes:
  ...
  customResources:
    - group: 'route.openshift.io'
      apiVersion: 'v1'
      plural: 'routes'
```
7.2.2. View pod logs
Grant ClusterRole permissions to pods and pods/log resources to view pod logs in the Topology plugin.
Procedure
To view pod logs, you must grant the following permission to the `ClusterRole`:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backstage-read-only
rules:
  ...
  - apiGroups:
      - ''
    resources:
      - pods
      - pods/log
    verbs:
      - get
      - list
      - watch
```
7.2.3. View Tekton PipelineRuns
Grant ClusterRole access to Tekton resources to view PipelineRuns status in the Topology plugin.
Procedure
To view the Tekton `PipelineRuns`, grant read access to the `pipelines`, `pipelineruns`, and `taskruns` resources in the `ClusterRole`:

```yaml
...
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backstage-read-only
rules:
  ...
  - apiGroups:
      - tekton.dev
    resources:
      - pipelines
      - pipelineruns
      - taskruns
    verbs:
      - get
      - list
```

To view the Tekton `PipelineRuns` list in the side panel and the latest `PipelineRuns` status in the Topology node decorator, add the following code to the `kubernetes.customResources` property in your `app-config.yaml` file:

```yaml
kubernetes:
  ...
  customResources:
    - group: 'tekton.dev'
      apiVersion: 'v1'
      plural: 'pipelines'
    - group: 'tekton.dev'
      apiVersion: 'v1'
      plural: 'pipelineruns'
    - group: 'tekton.dev'
      apiVersion: 'v1'
      plural: 'taskruns'
```
7.2.4. View virtual machines
Grant ClusterRole access to VirtualMachines resources to view virtual machine nodes in the Topology plugin.
Prerequisites
- The OpenShift Virtualization operator is installed and configured on a Kubernetes cluster.
Procedure
Grant read access to the `VirtualMachines` resource in the `ClusterRole`:

```yaml
...
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backstage-read-only
rules:
  ...
  - apiGroups:
      - kubevirt.io
    resources:
      - virtualmachines
      - virtualmachineinstances
    verbs:
      - get
      - list
```

To view the virtual machine nodes on the Topology plugin, add the following code to the `kubernetes.customResources` property in the `app-config.yaml` file:

```yaml
kubernetes:
  ...
  customResources:
    - group: 'kubevirt.io'
      apiVersion: 'v1'
      plural: 'virtualmachines'
    - group: 'kubevirt.io'
      apiVersion: 'v1'
      plural: 'virtualmachineinstances'
```
7.2.5. Enable the source code editor
Enable the source code editor to allow developers to open source code directly from RHDH.
Procedure
Grant read access to the `CheClusters` resource in the `ClusterRole`:

```yaml
...
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backstage-read-only
rules:
  ...
  - apiGroups:
      - org.eclipse.che
    resources:
      - checlusters
    verbs:
      - get
      - list
```

Add the following configuration to the `kubernetes.customResources` property in your `app-config.yaml` file:

```yaml
kubernetes:
  ...
  customResources:
    - group: 'org.eclipse.che'
      apiVersion: 'v2'
      plural: 'checlusters'
```
7.3. Manage labels and annotations for Topology plugins
Configure labels and annotations to customize Kubernetes resource identification and visualization in the Topology plugin.
7.3.1. Link to the source code editor or the source
Add annotations to workload resources to enable navigation to the Git repository of the associated application using the source code editor.
Procedure
Add the following annotations to workload resources, such as Deployments, to navigate to the Git repository of the associated application using the source code editor:
```yaml
annotations:
  app.openshift.io/vcs-uri: <GIT_REPO_URL>
```
Optional: Add the following annotation to navigate to a specific branch:
```yaml
annotations:
  app.openshift.io/vcs-ref: <GIT_REPO_BRANCH>
```
Optional: Add the `app.openshift.io/edit-link` annotation with the edit URL that you want to access using the decorator.

Note: If Red Hat OpenShift Dev Spaces is installed and configured, and Git URL annotations are also added to the workload YAML file, then clicking the edit code decorator redirects you to the Red Hat OpenShift Dev Spaces instance.

Note: When you deploy your application using the OCP Git import flows, you do not need to add the labels because the import flows add them. Otherwise, you must add the labels manually to the workload YAML file.
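As an illustration of the edit-link annotation, the URL below is a hypothetical placeholder pointing at a repository's web editor; substitute whichever edit URL the decorator should open:

```yaml
annotations:
  # Hypothetical edit URL for illustration; replace with your own.
  app.openshift.io/edit-link: 'https://github.com/<ORGANIZATION>/<REPOSITORY>/edit/main/'
```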
7.3.2. Add entity annotations and labels for the Kubernetes plugin
Add annotations and labels to enable RHDH to detect that an entity has Kubernetes components.
Procedure
Add the following annotation to the `catalog-info.yaml` file of the entity:

```yaml
annotations:
  backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>
```

Add the following label to the resources so that the Kubernetes plugin gets the Kubernetes resources from the requested entity:

```yaml
labels:
  backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>
```
NoteWhen using the label selector, the mentioned labels must be present on the resource.
7.3.3. Namespace annotation
Identify Kubernetes resources by namespace using the backstage.io/kubernetes-namespace annotation.
Procedure
To identify the Kubernetes resources using the defined namespace, add the `backstage.io/kubernetes-namespace` annotation:

```yaml
annotations:
  backstage.io/kubernetes-namespace: <RESOURCE_NS>
```

The Red Hat OpenShift Dev Spaces instance is not accessible using the source code editor if the `backstage.io/kubernetes-namespace` annotation is added to the `catalog-info.yaml` file. To retrieve the instance URL, you require the CheCluster custom resource (CR). Because the CheCluster CR is created in the `openshift-devspaces` namespace, the instance URL is not retrieved if the namespace annotation value is not `openshift-devspaces`.
7.3.4. Add a label selector query annotation
Add a custom label selector annotation so that RHDH uses your custom labels to find Kubernetes resources.
Procedure
Add the `backstage.io/kubernetes-label-selector` annotation to the `catalog-info.yaml` file of the entity. The label selector takes precedence over the ID annotations:

```yaml
annotations:
  backstage.io/kubernetes-label-selector: 'app=my-app,component=front-end'
```

Optional: If you have many entities while Red Hat Dev Spaces is configured, and you want those entities to support the edit code decorator that redirects to the Red Hat Dev Spaces instance, add the `backstage.io/kubernetes-label-selector` annotation to the `catalog-info.yaml` file for each entity:

```yaml
annotations:
  backstage.io/kubernetes-label-selector: 'component in (<BACKSTAGE_ENTITY_NAME>,che)'
```

If you are using the previous label selector, add the following labels to your resources so that the Kubernetes plugin gets the Kubernetes resources from the requested entity:

```yaml
labels:
  component: che # add this label to your Che cluster instance
labels:
  component: <BACKSTAGE_ENTITY_NAME> # add this label to the other resources associated with your entity
```
You can also write your own custom query for the label selector with unique labels to differentiate your entities. However, you need to ensure that you add those labels to the resources associated with your entities including your
CheClusterinstance.
7.3.5. Display a runtime icon in the topology node
Add a label to workload resources to display a runtime icon in the topology nodes.
Procedure
Add the following label to workload resources, such as Deployments:

```yaml
labels:
  app.openshift.io/runtime: <RUNTIME_NAME>
```

Alternatively, you can include the following label to display the runtime icon:

```yaml
labels:
  app.kubernetes.io/name: <RUNTIME_NAME>
```

Supported values of `<RUNTIME_NAME>` include `django`, `dotnet`, `drupal`, `go-gopher`, `golang`, `grails`, `jboss`, `jruby`, `js`, `nginx`, `nodejs`, `openjdk`, `perl`, `phalcon`, `php`, `python`, `quarkus`, `rails`, `redis`, `rh-spring-boot`, `rust`, `java`, `rh-openjdk`, `ruby`, `spring`, and `spring-boot`. Other values result in icons not being rendered for the node.
7.3.6. Group applications in the topology view
Add a label to workload resources to display them in a visual group in the topology view.
Procedure
Add the following label to workload resources, such as deployments or pods, to display them in a visual group:
labels: app.kubernetes.io/part-of: <GROUP_NAME>
7.3.7. Node connector
Display visual connectors between workload resources like deployments and pods using annotations.
Procedure
To display the workload resources such as deployments or pods with a visual connector, add the following annotation:
```yaml
annotations:
  app.openshift.io/connects-to: '[{"apiVersion": <RESOURCE_APIVERSION>,"kind": <RESOURCE_KIND>,"name": <RESOURCE_NAME>}]'
```
Additional resources
8. Bulk importing in Red Hat Developer Hub
Automate onboarding of GitHub repositories and GitLab projects to Red Hat Developer Hub catalog, and monitor import status by using bulk import capabilities.
These features are for Technology Preview only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information on Red Hat Technology Preview features, see Technology Preview Features Scope.
8.1. Enable and authorize Bulk Import capabilities in Red Hat Developer Hub
Enable Bulk Import plugins and configure RBAC permissions to allow users to import multiple GitHub repositories and GitLab projects into the catalog.
Prerequisites
- For GitHub only: You have enabled GitHub repository discovery.
Procedure
The Bulk Import plugins are installed but disabled by default. To enable the ./dynamic-plugins/dist/red-hat-developer-hub-backstage-plugin-bulk-import-backend-dynamic and ./dynamic-plugins/dist/red-hat-developer-hub-backstage-plugin-bulk-import plugins, edit your dynamic-plugins.yaml file with the following content:
plugins:
  - package: ./dynamic-plugins/dist/red-hat-developer-hub-backstage-plugin-bulk-import-backend-dynamic
    disabled: false
  - package: ./dynamic-plugins/dist/red-hat-developer-hub-backstage-plugin-bulk-import
    disabled: false
See Installing and viewing plugins in Red Hat Developer Hub.
Configure the required bulk.import RBAC permission for users who are not administrators, as shown in the following rbac-policy.csv fragment:
p, role:default/bulk-import, bulk.import, use, allow
g, user:default/<your_user>, role:default/bulk-import
Note that only Developer Hub administrators or users with the bulk.import permission can use the Bulk Import feature. See Permission policies in Red Hat Developer Hub.
Verification
- The sidebar displays a Bulk Import option.
- The Bulk Import page shows a list of added GitHub repositories and GitLab projects.
8.2. Import multiple GitHub repositories
Select and import multiple GitHub repositories to the Red Hat Developer Hub catalog, automatically creating pull requests with required catalog-info.yaml files.
Prerequisites
Procedure
- Click Bulk Import in the Developer Hub left sidebar.
- If your RHDH instance has multiple source control tools configured, select GitHub from the Source control tool list.
Select the repositories to import, and validate.
Developer Hub creates a pull request in each selected repository to add the required catalog-info.yaml file.
- For each repository to import, click PR to review and merge the changes in GitHub.
Verification
- Click Bulk Import in the Developer Hub left sidebar.
- Verify that each imported GitHub repository in the Selected repositories list has the status Waiting for approval or Imported.
- For each Waiting for approval repository, click the pull request link to review and merge the catalog-info.yaml file in the corresponding repository.
8.3. Import multiple GitLab repositories
Select and import multiple GitLab projects to the Red Hat Developer Hub catalog by using Technology Preview bulk import capabilities.
These features are for Technology Preview only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information on Red Hat Technology Preview features, see Technology Preview Features Scope.
Prerequisites
- You have enabled the Bulk Import feature and have been granted access to it.
- You have set up a GitLab personal access token (PAT).
- You configured the GitLab integration by adding the following section to your RHDH app-config.yaml file:
  integrations:
    gitlab:
      - host: ${GITLAB_HOST}
        token: ${GITLAB_TOKEN}
- You enabled the GitLab catalog provider plugin in your dynamic-plugins.yaml file to import GitLab users and groups:
  plugins:
    - package: './dynamic-plugins/dist/backstage-plugin-catalog-backend-module-gitlab-org-dynamic'
      disabled: false
Procedure
- In the Developer Hub left sidebar, click Bulk Import.
- If your RHDH instance has multiple source control tools configured, select GitLab as your Source control tool option.
Select the projects to import, and validate.
Developer Hub creates a merge request in each selected project to add the required catalog-info.yaml file.
- For each project to import, click PR to review and merge the changes in GitLab.
Verification
- Click Bulk Import in the Developer Hub left sidebar.
- Verify that each imported GitLab project in the Selected projects list has the status Waiting for approval or Imported.
- For projects with the Waiting for approval status, click the merge request link to add the catalog-info.yaml file to the project repository.
8.4. Monitor Bulk Import actions using audit logs
Review Bulk Import backend plugin audit log events to monitor repository import operations, track API requests, and troubleshoot import issues.
Procedure
- Access your Developer Hub backend logs where audit log events are recorded.
Review the following Bulk Import audit log events to monitor repository operations:
- BulkImportUnknownEndpoint: Tracks requests to unknown endpoints.
- BulkImportPing: Tracks GET requests to the /ping endpoint, which verifies that the bulk import backend is up and running.
- BulkImportFindAllOrganizations: Tracks GET requests to the /organizations endpoint, which returns the list of organizations accessible from all configured GitHub integrations.
- BulkImportFindRepositoriesByOrganization: Tracks GET requests to the /organizations/:orgName/repositories endpoint, which returns the list of repositories for the specified organization (accessible from any of the configured GitHub integrations).
- BulkImportFindAllRepositories: Tracks GET requests to the /repositories endpoint, which returns the list of repositories accessible from all configured GitHub integrations.
- BulkImportFindAllImports: Tracks GET requests to the /imports endpoint, which returns the list of existing import jobs along with their statuses.
- BulkImportCreateImportJobs: Tracks POST requests to the /imports endpoint, which submit requests to bulk import one or more repositories into the Developer Hub catalog, eventually creating import pull requests in the target repositories.
- BulkImportFindImportStatusByRepo: Tracks GET requests to the /import/by-repo endpoint, which fetches details about the import job for the specified repository.
- BulkImportDeleteImportByRepo: Tracks DELETE requests to the /import/by-repo endpoint, which deletes any existing import job for the specified repository by closing any open import pull request that was created.
Example audit log output:
{
  "actor": {
    "actorId": "user:default/myuser",
    "hostname": "localhost",
    "ip": "::1",
    "userAgent": "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/128.0.0.0 Safari/537.36"
  },
  "eventName": "BulkImportFindAllOrganizations",
  "isAuditLog": true,
  "level": "info",
  "message": "'get /organizations' endpoint hit by user:default/myuser",
  "meta": {},
  "plugin": "bulk-import",
  "request": {
    "body": {},
    "method": "GET",
    "params": {},
    "query": {
      "pagePerIntegration": "1",
      "sizePerIntegration": "5"
    },
    "url": "/api/bulk-import/organizations?pagePerIntegration=1&sizePerIntegration=5"
  },
  "response": {
    "status": 200
  },
  "service": "backstage",
  "stage": "completion",
  "status": "succeeded",
  "timestamp": "2024-08-26 16:41:02"
}
8.5. Input parameters for Bulk Import Scaffolder template
Define Scaffolder template parameters such as repository URL, name, organization, and branch details to customize bulk import automation workflows for your repositories.
As an administrator, you can use the Bulk Import plugin to run a Scaffolder template task with specified parameters, which you must define within the template.
The Bulk Import plugin analyzes Git repository information and provides the following parameters for the Scaffolder template task:
- repoUrl: Normalized repository URL in the following format: ${gitProviderHost}?owner=${owner}&repo=${repository-name}
- name: The repository name.
- organization: The repository owner, which can be a user nickname or an organization name.
- branchName: The proposed repository branch. By default, the proposed repository branch is bulk-import-catalog-entity.
- targetBranchName: The default branch of the Git repository.
- gitProviderHost: The Git provider host parsed from the repository URL. You can use this parameter to write Git-provider-agnostic templates.
Example of a Scaffolder template:
parameters:
  - title: Repository details
    required:
      - repoUrl
      - branchName
      - targetBranchName
      - name
      - organization
    properties:
      repoUrl:
        type: string
        title: Repository URL (RHDH format)
        description: github.com?owner=Org&repo=repoName
      organization:
        type: string
        title: Owner of the repository
      name:
        type: string
        title: Name of the repository
      branchName:
        type: string
        title: Branch to add the catalog entity to
      targetBranchName:
        type: string
        title: Branch to target the PR/MR to
      gitProviderHost:
        type: string
        title: Git provider host
8.6. Set up a custom Scaffolder workflow for Bulk Import
Create custom Scaffolder templates aligned with your organization’s repository conventions to automate bulk import tasks such as entity imports, pull request creation, and webhook integration.
As an administrator, you can create a custom Scaffolder template in line with the repository conventions of your organization and add the template to the Red Hat Developer Hub catalog for use by the Bulk Import plugin on many selected repositories.
You can define various custom tasks, including, but not limited to, the following:
- Importing existing catalog entities from a repository
- Creating pull requests for cleanup
- Calling webhooks for external system integration
Prerequisites
- You created a custom Scaffolder template for the Bulk Import plugin.
You have run your RHDH instance with the following environment variable enabled to allow the use of the Scaffolder functionality:
export NODE_OPTIONS=--no-node-snapshot
Procedure
Configure your app-config.yaml configuration to instruct the Bulk Import plugin to use your custom template as shown in the following example:
bulkImport:
  importTemplate: <your_template_entity_reference_or_template_name>
  importAPI: scaffolder
where:
- importTemplate: Enter your Scaffolder template entity reference.
- importAPI: Defines the import workflow and currently supports the following two options. Set this field to scaffolder to trigger the defined workflow for high-fidelity automation.
  - open-pull-requests: The default import workflow, which includes the logic for creating pull requests for every selected repository.
  - scaffolder: This workflow uses an import scenario defined in the Scaffolder template to create import jobs. Select this option to use the custom import scenario defined in your Scaffolder template.
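For example, assuming a hypothetical custom template named bulk-import-custom registered in the default namespace, the resulting configuration might read:

```yaml
bulkImport:
  # Entity reference of the hypothetical custom template
  importTemplate: template:default/bulk-import-custom
  importAPI: scaffolder
```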
Optional: You can direct the Bulk Import plugin to hand off the entire list of selected repositories to a custom Orchestrator workflow.
Important: For your custom Scaffolder template to run successfully for every repository in the bulk list, the template must be generic and not specific to a single repository.
Verification
- The Bulk Import plugin runs the custom Scaffolder template for the list of repositories by using the /task-imports API endpoint.
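As a concrete starting point, the following is a minimal sketch of a generic custom template that consumes the Bulk Import parameters and opens an import pull request. It assumes the standard Backstage fetch:template and publish:github:pull-request scaffolder actions are installed; the template name and skeleton path are hypothetical:

```yaml
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: bulk-import-custom            # hypothetical template name
  title: Custom Bulk Import workflow
spec:
  type: import
  parameters:
    - title: Repository details
      required: [repoUrl, name, organization, branchName, targetBranchName]
      properties:
        repoUrl: { type: string }
        name: { type: string }
        organization: { type: string }
        branchName: { type: string }
        targetBranchName: { type: string }
        gitProviderHost: { type: string }
  steps:
    # Render a catalog-info.yaml for this repository from a local skeleton directory.
    - id: fetch
      action: fetch:template
      input:
        url: ./skeleton               # hypothetical skeleton location
        values:
          name: ${{ parameters.name }}
          owner: ${{ parameters.organization }}
    # Open the import pull request on the proposed branch.
    - id: open-pr
      action: publish:github:pull-request
      input:
        repoUrl: ${{ parameters.repoUrl }}
        branchName: ${{ parameters.branchName }}
        targetBranchName: ${{ parameters.targetBranchName }}
        title: Add catalog-info.yaml
        description: Bulk import of ${{ parameters.name }}
```

Because the template takes all repository details as parameters, the same definition can run unchanged for every repository in the bulk list.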
8.7. Run Orchestrator workflows for bulk imports
Configure Bulk Import to use Orchestrator workflows for advanced bulk operations across multiple repositories, enabling automated pull request creation and configuration publishing at scale.
As a platform engineer, you can configure the Bulk Import plugin to run Orchestrator workflows for bulk import operations. This mode uses the Orchestrator engine to provide advanced capabilities, such as creating pull requests or publishing configurations across multiple repositories.
Prerequisites
- You have installed and configured the Orchestrator plugin in your Developer Hub instance.
- You have registered a generic custom workflow (for example, universal-pr) in the Orchestrator plugin.
- You have role-based access control (RBAC) permissions to configure the Bulk Import plugin.
Procedure
Configure the Bulk Import plugin by editing your app-config.yaml file to enable Orchestrator mode:
bulkImport:
  orchestratorWorkflow: your_workflow_id
  importAPI: 'orchestrator'
where:
- orchestratorWorkflow: The ID of the workflow to run for each repository.
- importAPI: The execution mode for the workflow. Enter orchestrator to enable workflow execution.
Verify that the Orchestrator workflow receives the following input:
{
  "inputData": {
    "owner": "redhat-developer",
    "repo": "rhdh-plugins",
    "baseBranch": "main",
    "targetBranch": "bulk-import-orchestrator"
  },
  "authTokens": [
    {
      "token": "<github_token>",
      "provider": "github"
    }
  ]
}
where:
- owner: Specifies the repository owner (organization or user name).
- repo: Specifies the repository name.
- baseBranch: Specifies the default branch of the Git repository (for example, main).
- targetBranch: Specifies the target branch for the import operation. By default, this is set to bulk-import-orchestrator.
- authTokens: Specifies the authentication tokens for the Git provider:
  - For GitHub: {"token": "<github_token>", "provider": "github"}
  - For GitLab: {"token": "<gitlab_token>", "provider": "gitlab"}
Navigate to the Bulk Import page in the sidebar and complete the following steps:
- Select your Git provider (for example, GitHub or GitLab).
- Select the projects you want to import.
- Click Import to run the workflow.
Verification
- Locate your repository and confirm that the status is COMPLETED.
8.8. Data handoff and custom workflow design
Design Scaffolder templates to receive repository data as parameters and automate repository-specific tasks when using Scaffolder mode for bulk imports.
When you configure the Bulk Import plugin by setting the importAPI field to scaffolder, the Bulk Import Backend passes all necessary context directly to the Scaffolder API.
As an administrator, you can define the Scaffolder template workflow and structure the workflow to do the following:
- Define template parameters to consume input
- Structure the Scaffolder template to receive the repository data as template parameters for the current workflow run. The template must be generic, and not specific to a single repository, so that it can successfully run for every repository in the bulk list.
- Automate processing for each repository
- Implement the custom logic needed for a single repository within the template. The Orchestrator iterates through the repository list, launching the template once for each repository and passing only the data for that single repository to the template run. This allows you to automate tasks such as creating the catalog-info.yaml file, running compliance checks, or registering the entity with the catalog.
9. ServiceNow custom actions in Red Hat Developer Hub
In Red Hat Developer Hub, you can use ServiceNow custom actions to fetch and register resources within the catalog.
The custom actions in Developer Hub help you automate the management of records. By using the custom actions, you can:
- Create, update, or delete a record
- Retrieve information about a single record or many records
The ServiceNow custom actions plugin is community-sourced.
9.1. Enable ServiceNow custom actions plugin in Red Hat Developer Hub
To use ServiceNow custom actions, you must first activate the plugin.
Prerequisites
- Red Hat Developer Hub is installed and running.
- You have created a project in the Developer Hub.
Procedure
Add a package with the plugin name and update the disabled field in your Helm chart as follows:
global:
  dynamic:
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - package: oci://ghcr.io/redhat-developer/rhdh-plugin-export-overlays/backstage-community-plugin-scaffolder-backend-module-servicenow:<tag>
        disabled: false
The <tag> variable is your RHDH application's version of Backstage and the plugin version, in the format bs_<backstage-version>__<plugin-version> (note the double underscore delimiter).
To find the correct image tag for <tag>:
<tag>:- Look in the RHDH release notes preface for your Backstage version.
- Locate the plugin version for paths starting with oci://ghcr.io within one of the tables in the Dynamic Plugins Reference guide.
For example, because RHDH 1.9 is based on Backstage 1.45.3, the tag has the format bs_1.45.3__<plugin-version>.
Tip: To ensure environment stability, use a SHA256 digest instead of a version tag. See Determining SHA256 Digests.
Note: The default configuration for a plugin is extracted from the dynamic-plugins.default.yaml file; however, you can use a pluginConfig entry to override the default configuration.
Set the following variables in the Helm chart to access the custom actions:
servicenow:
  # The base url of the ServiceNow instance.
  baseUrl: ${SERVICENOW_BASE_URL}
  # The username to use for authentication.
  username: ${SERVICENOW_USERNAME}
  # The password to use for authentication.
  password: ${SERVICENOW_PASSWORD}
9.2. Supported ServiceNow custom actions in Red Hat Developer Hub
The ServiceNow custom actions enable you to manage records in the Red Hat Developer Hub.
The custom actions support the following HTTP methods for API requests:
- GET: Retrieves specified information from a specified resource endpoint
- POST: Creates or updates a resource
- PUT: Modifies a resource
- PATCH: Updates a resource
- DELETE: Deletes a resource
- [GET] servicenow:now:table:retrieveRecord
Retrieves information of a specified record from a table in the Developer Hub.
Table 2. Input parameters
| Name | Type | Requirement | Description |
|---|---|---|---|
| tableName | string | Required | Name of the table to retrieve the record from |
| sysId | string | Required | Unique identifier of the record to retrieve |
| sysparmDisplayValue | enum("true", "false", "all") | Optional | Returns field display values as true, actual values as false, or both. The default value is false. |
| sysparmExcludeReferenceLink | boolean | Optional | Set as true to exclude Table API links for reference fields. The default value is false. |
| sysparmFields | string[] | Optional | Array of fields to return in the response |
| sysparmView | string | Optional | Renders the response according to the specified UI view. You can override this parameter using sysparm_fields. |
| sysparmQueryNoDomain | boolean | Optional | Set as true to access data across domains if authorized. The default value is false. |

Table 3. Output parameters
| Name | Type | Description |
|---|---|---|
| result | Record<PropertyKey, unknown> | The response body of the request |
- [GET] servicenow:now:table:retrieveRecords
  Retrieves information about multiple records from a table in the Developer Hub.
Table 4. Input parameters
| Name | Type | Requirement | Description |
|---|---|---|---|
| tableName | string | Required | Name of the table to retrieve the records from |
| sysparamQuery | string | Optional | Encoded query string used to filter the results |
| sysparmDisplayValue | enum("true", "false", "all") | Optional | Returns field display values as true, actual values as false, or both. The default value is false. |
| sysparmExcludeReferenceLink | boolean | Optional | Set as true to exclude Table API links for reference fields. The default value is false. |
| sysparmSuppressPaginationHeader | boolean | Optional | Set as true to suppress the pagination header. The default value is false. |
| sysparmFields | string[] | Optional | Array of fields to return in the response |
| sysparmLimit | int | Optional | Maximum number of results returned per page. The default value is 10,000. |
| sysparmView | string | Optional | Renders the response according to the specified UI view. You can override this parameter using sysparm_fields. |
| sysparmQueryCategory | string | Optional | Name of the query category to use for queries |
| sysparmQueryNoDomain | boolean | Optional | Set as true to access data across domains if authorized. The default value is false. |
| sysparmNoCount | boolean | Optional | Does not run a select count(*) on the table. The default value is false. |

Table 5. Output parameters
| Name | Type | Description |
|---|---|---|
| result | Record<PropertyKey, unknown> | The response body of the request |
- [POST] servicenow:now:table:createRecord
  Creates a record in a table in the Developer Hub.
Table 6. Input parameters
| Name | Type | Requirement | Description |
|---|---|---|---|
| tableName | string | Required | Name of the table to save the record in |
| requestBody | Record<PropertyKey, unknown> | Optional | Field name and associated value for each parameter to define in the specified record |
| sysparmDisplayValue | enum("true", "false", "all") | Optional | Returns field display values as true, actual values as false, or both. The default value is false. |
| sysparmExcludeReferenceLink | boolean | Optional | Set as true to exclude Table API links for reference fields. The default value is false. |
| sysparmFields | string[] | Optional | Array of fields to return in the response |
| sysparmInputDisplayValue | boolean | Optional | Set field values using their display value as true or actual value as false. The default value is false. |
| sysparmSuppressAutoSysField | boolean | Optional | Set as true to suppress auto-generation of system fields. The default value is false. |
| sysparmView | string | Optional | Renders the response according to the specified UI view. You can override this parameter using sysparm_fields. |

Table 7. Output parameters
| Name | Type | Description |
|---|---|---|
| result | Record<PropertyKey, unknown> | The response body of the request |
- [PUT] servicenow:now:table:modifyRecord
  Modifies a record in a table in the Developer Hub.
Table 8. Input parameters
| Name | Type | Requirement | Description |
|---|---|---|---|
| tableName | string | Required | Name of the table to change the record from |
| sysId | string | Required | Unique identifier of the record to change |
| requestBody | Record<PropertyKey, unknown> | Optional | Field name and associated value for each parameter to define in the specified record |
| sysparmDisplayValue | enum("true", "false", "all") | Optional | Returns field display values as true, actual values as false, or both. The default value is false. |
| sysparmExcludeReferenceLink | boolean | Optional | Set as true to exclude Table API links for reference fields. The default value is false. |
| sysparmFields | string[] | Optional | Array of fields to return in the response |
| sysparmInputDisplayValue | boolean | Optional | Set field values using their display value as true or actual value as false. The default value is false. |
| sysparmSuppressAutoSysField | boolean | Optional | Set as true to suppress auto-generation of system fields. The default value is false. |
| sysparmView | string | Optional | Renders the response according to the specified UI view. You can override this parameter using sysparm_fields. |
| sysparmQueryNoDomain | boolean | Optional | Set as true to access data across domains if authorized. The default value is false. |

Table 9. Output parameters
| Name | Type | Description |
|---|---|---|
| result | Record<PropertyKey, unknown> | The response body of the request |
- [PATCH] servicenow:now:table:updateRecord
  Updates a record in a table in the Developer Hub.
Table 10. Input parameters
| Name | Type | Requirement | Description |
|---|---|---|---|
| tableName | string | Required | Name of the table to update the record in |
| sysId | string | Required | Unique identifier of the record to update |
| requestBody | Record<PropertyKey, unknown> | Optional | Field name and associated value for each parameter to define in the specified record |
| sysparmDisplayValue | enum("true", "false", "all") | Optional | Returns field display values as true, actual values as false, or both. The default value is false. |
| sysparmExcludeReferenceLink | boolean | Optional | Set as true to exclude Table API links for reference fields. The default value is false. |
| sysparmFields | string[] | Optional | Array of fields to return in the response |
| sysparmInputDisplayValue | boolean | Optional | Set field values using their display value as true or actual value as false. The default value is false. |
| sysparmSuppressAutoSysField | boolean | Optional | Set as true to suppress auto-generation of system fields. The default value is false. |
| sysparmView | string | Optional | Renders the response according to the specified UI view. You can override this parameter using sysparm_fields. |
| sysparmQueryNoDomain | boolean | Optional | Set as true to access data across domains if authorized. The default value is false. |

Table 11. Output parameters
| Name | Type | Description |
|---|---|---|
| result | Record<PropertyKey, unknown> | The response body of the request |
- [DELETE] servicenow:now:table:deleteRecord
  Deletes a record from a table in the Developer Hub.
Table 12. Input parameters
| Name | Type | Requirement | Description |
|---|---|---|---|
| tableName | string | Required | Name of the table to delete the record from |
| sysId | string | Required | Unique identifier of the record to delete |
| sysparmQueryNoDomain | boolean | Optional | Set as true to access data across domains if authorized. The default value is false. |
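To illustrate how these actions are used, a Scaffolder template step calling servicenow:now:table:createRecord might look like the following sketch; the table name and field values are hypothetical:

```yaml
steps:
  - id: create-incident
    name: Create a ServiceNow incident record
    action: servicenow:now:table:createRecord
    input:
      tableName: incident               # hypothetical target table
      requestBody:
        short_description: Onboarding request from Developer Hub
      sysparmDisplayValue: 'false'
      sysparmFields: [sys_id, number]
```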
10. Kubernetes custom actions in Red Hat Developer Hub
You can create and manage Kubernetes resources by using custom scaffolder actions in Red Hat Developer Hub templates. The Kubernetes custom actions plugin is preinstalled in a disabled state.
10.1. Enable Kubernetes custom actions plugin in Red Hat Developer Hub
Enable the preinstalled Kubernetes custom actions plugin by updating the Helm chart configuration.
In Red Hat Developer Hub, the Kubernetes custom actions are provided as a preinstalled plugin, which is disabled by default. You can enable the Kubernetes custom actions plugin by updating the disabled key value in your Helm chart.
Prerequisites
- You have installed Red Hat Developer Hub with the Helm chart.
Procedure
In your Helm chart, add a package with the Kubernetes custom action plugin name and update the disabled field to enable the plugin. For example:
global:
  dynamic:
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - package: ./dynamic-plugins/dist/backstage-community-plugin-scaffolder-backend-module-kubernetes-dynamic
        disabled: false
Note: The default configuration for a plugin is extracted from the dynamic-plugins.default.yaml file; however, you can use a pluginConfig entry to override the default configuration.
10.2. Use Kubernetes custom actions plugin in Red Hat Developer Hub
Add Kubernetes actions to your custom templates to create namespaces and manage cluster resources.
In Red Hat Developer Hub, the Kubernetes custom actions enable you to run template actions for Kubernetes.
Procedure
To use a Kubernetes custom action in your custom template, add the following Kubernetes actions to your template:
action: kubernetes:create-namespace
id: create-kubernetes-namespace
name: Create kubernetes namespace
input:
  namespace: my-rhdh-project
  clusterRef: bar
  token: TOKEN
  skipTLSVerify: false
  caData: Zm9v
  labels: app.io/type=ns; app.io/managed-by=org;
Additional resources
10.3. Create a template using Kubernetes custom actions in Red Hat Developer Hub
Define a Template object with Kubernetes actions to automate namespace creation and resource management.
Procedure
To create a template, define a Template object as a YAML file.
The Template object describes the template and its metadata. It also contains required input variables and a list of actions that are executed by the scaffolding service.
apiVersion: scaffolder.backstage.io/v1beta3
kind: Template
metadata:
  name: create-kubernetes-namespace
  title: Create a kubernetes namespace
  description: Create a kubernetes namespace
spec:
  type: service
  parameters:
    - title: Information
      required: [namespace, token]
      properties:
        namespace:
          title: Namespace name
          type: string
          description: Name of the namespace to be created
        clusterRef:
          title: Cluster reference
          type: string
          description: Cluster resource entity reference from the catalog
          ui:field: EntityPicker
          ui:options:
            catalogFilter:
              kind: Resource
        url:
          title: Url
          type: string
          description: Url of the kubernetes API, will be used if clusterRef is not provided
        token:
          title: Token
          type: string
          ui:field: Secret
          description: Bearer token to authenticate with
        skipTLSVerify:
          title: Skip TLS verification
          type: boolean
          description: Skip TLS certificate verification, not recommended to use in production environment, default to false
        caData:
          title: CA data
          type: string
          ui:field: Secret
          description: Certificate Authority base64 encoded certificate
        labels:
          title: Labels
          type: string
          description: Labels to be applied to the namespace
          ui:widget: textarea
          ui:options:
            rows: 3
          ui:help: 'Hint: Separate multiple labels with a semicolon!'
          ui:placeholder: 'kubernetes.io/type=namespace; app.io/managed-by=org'
  steps:
    - id: create-kubernetes-namespace
      name: Create kubernetes namespace
      action: kubernetes:create-namespace
      input:
        namespace: ${{ parameters.namespace }}
        clusterRef: ${{ parameters.clusterRef }}
        url: ${{ parameters.url }}
        token: ${{ secrets.token }}
        skipTLSVerify: ${{ parameters.skipTLSVerify }}
        caData: ${{ secrets.caData }}
        labels: ${{ parameters.labels }}
10.4. Supported Kubernetes custom actions in Red Hat Developer Hub
Access parameter specifications and requirements for the kubernetes:create-namespace scaffolder action.
In Red Hat Developer Hub, you can use custom Kubernetes actions in Scaffolder templates.
- Action: kubernetes:create-namespace
  Creates a namespace for the Kubernetes cluster in the Developer Hub.
| Parameter name | Type | Requirement | Description | Example |
|---|---|---|---|---|
| namespace | string | Required | Name of the Kubernetes namespace | |
| clusterRef | string | Required only if url is not defined | Cluster resource entity reference from the catalog | |
| url | string | Required only if clusterRef is not defined | API url of the Kubernetes cluster | |
| token | string | Required | Kubernetes API bearer token used for authentication | |
| skipTLSVerify | boolean | Optional | If true, certificate verification is skipped | false |
| caData | string | Optional | Base64 encoded certificate data | |
| labels | string | Optional | Labels applied to the namespace | app.io/type=ns; app.io/managed-by=org; |
11. Configure Red Hat Developer Hub events module
You can enable real-time updates for GitHub entities by configuring the Events Module with webhooks together with scheduled updates.
These features are for Technology Preview only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information on Red Hat Technology Preview features, see Technology Preview Features Scope.
11.1. Configure the GitHub Events Module plugin
Configure GitHub webhooks to trigger real-time updates for GitHub Discovery and organizational data.
Learn how to configure Events Module for use with the RHDH GitHub Discovery feature and GitHub organization data.
Prerequisites
- You have added your GitHub integration credentials in the app-config.yaml file.
- You have defined the schedule.frequency in the app-config.yaml file as a longer time period, such as 24 hours.
- For GitHub Discovery only: You have enabled GitHub Discovery.
- For GitHub Organizational Data only: You have enabled GitHub authentication with user ingestion.
Procedure
Add the GitHub Events Module to your dynamic-plugins.yaml configuration file as follows:
data:
  dynamic-plugins.yaml: |
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - package: oci://registry.access.redhat.com/rhdh/backstage-plugin-events-backend-module-github@sha256:2c1ccc4fb01883dc4da1aa0c417d6e28d944c6ce941454ee41698f2c1812035c
        disabled: false
To create HTTP endpoints to receive events for the github topic, add the following to your app-config.yaml file:
events:
  http:
    topics:
      - github
  modules:
    github:
      webhookSecret: ${GITHUB_WEBHOOK_SECRET}
Important: Secure your workflow by adding a webhook secret token to validate webhook deliveries.
Create a GitHub webhook with the following specifications:
- For GitHub Discovery Events: push, repository
- For GitHub Organizational Data Events: organization, team and membership
- Content Type: application/json
- Payload URL: https://<my_developer_hub_domain>/api/events/http/github
Note: The payload URL is the URL exposed after configuring the HTTP endpoint.
Verification
Check the log for an entry that confirms that the HTTP endpoint was set up successfully to receive events from the GitHub webhook.
- Example of a log entry for a successfully set up HTTP endpoint
{"level":"\u001b[32minfo\u001b[39m","message":"Registered /api/events/http/github to receive events","plugin":"events","service":"backstage","timestamp":"2025-11-03 02:19:12"}
For GitHub Discovery only:
Trigger a GitHub push event by adding, modifying, or deleting the catalog-info.yaml file in the repository where you set up your webhook. A record of this event should appear in the pod logs of your RHDH instance.
- Example of a log entry with changes to the catalog-info.yaml file
{"level":"\u001b[32minfo\u001b[39m","message":"Processed Github push event: added 0 - removed 0 - modified 1","plugin":"catalog","service":"backstage","span_id":"47534b96c4afc654","target":"github-provider:providerId","timestamp":"2025-06-15 21:33:14","trace_flags":"01","trace_id":"ecc782deb86aed2027da0ae6b1999e5c"}
For GitHub Organizational Data only:

- Newly added users and teams appear in the RHDH catalog.
12. Override Core Backend Service Configuration
Customize core backend services by installing them as BackendFeatures using dynamic plugin functionality.
The Red Hat Developer Hub (RHDH) backend platform consists of several core services that are well encapsulated. The RHDH backend installs these default core services statically during initialization.
You can customize a core service by installing it as a `BackendFeature` using the dynamic plugin functionality.
Procedure
Configure Developer Hub to allow a core service override by setting the corresponding core service ID environment variable to `true` in the Developer Hub `app-config.yaml` configuration file.

Table 13. Environment variables and core service IDs

| Variable | Overrides the related service |
| --- | --- |
| `ENABLE_CORE_AUTH_OVERRIDE` | `core.auth` |
| `ENABLE_CORE_CACHE_OVERRIDE` | `core.cache` |
| `ENABLE_CORE_ROOTCONFIG_OVERRIDE` | `core.rootConfig` |
| `ENABLE_CORE_DATABASE_OVERRIDE` | `core.database` |
| `ENABLE_CORE_DISCOVERY_OVERRIDE` | `core.discovery` |
| `ENABLE_CORE_HTTPAUTH_OVERRIDE` | `core.httpAuth` |
| `ENABLE_CORE_HTTPROUTER_OVERRIDE` | `core.httpRouter` |
| `ENABLE_CORE_LIFECYCLE_OVERRIDE` | `core.lifecycle` |
| `ENABLE_CORE_LOGGER_OVERRIDE` | `core.logger` |
| `ENABLE_CORE_PERMISSIONS_OVERRIDE` | `core.permissions` |
| `ENABLE_CORE_ROOTHEALTH_OVERRIDE` | `core.rootHealth` |
| `ENABLE_CORE_ROOTHTTPROUTER_OVERRIDE` | `core.rootHttpRouter` |
| `ENABLE_CORE_ROOTLIFECYCLE_OVERRIDE` | `core.rootLifecycle` |
| `ENABLE_CORE_SCHEDULER_OVERRIDE` | `core.scheduler` |
| `ENABLE_CORE_USERINFO_OVERRIDE` | `core.userInfo` |
| `ENABLE_CORE_URLREADER_OVERRIDE` | `core.urlReader` |
| `ENABLE_EVENTS_SERVICE_OVERRIDE` | `events.service` |

Install your custom core service as a `BackendFeature`, as shown in the following example:

Example of a `BackendFeature` middleware function to handle incoming HTTP requests:

```typescript
// Create the BackendFeature
export const customRootHttpServerFactory: BackendFeature =
  rootHttpRouterServiceFactory({
    configure: ({ app, routes, middleware, logger }) => {
      logger.info(
        'Using custom root HttpRouterServiceFactory configure function',
      );
      app.use(middleware.helmet());
      app.use(middleware.cors());
      app.use(middleware.compression());
      app.use(middleware.logging());
      // Add the custom middleware function before all
      // of the route handlers
      app.use(addTestHeaderMiddleware({ logger }));
      app.use(routes);
      app.use(middleware.notFound());
      app.use(middleware.error());
    },
  });

// Export the BackendFeature as the default entrypoint
export default customRootHttpServerFactory;
```

In the previous example, because the `BackendFeature` overrides the default implementation of the HTTP router service, you must set the `ENABLE_CORE_ROOTHTTPROUTER_OVERRIDE` environment variable to `true` so that Developer Hub does not install the default implementation automatically.
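The example references an `addTestHeaderMiddleware` helper without defining it. The following is a hypothetical sketch of such a factory: an Express-style middleware that stamps a custom header on each response. The minimal `Logger`, response, and middleware types below are stand-ins so the sketch is self-contained; in a real plugin you would use the `LoggerService` type from `@backstage/backend-plugin-api` and the request handler types from `express`.

```typescript
// Minimal stand-in types (replace with LoggerService and Express types
// in a real plugin).
type Logger = { info: (message: string) => void };
type MinimalResponse = { setHeader: (name: string, value: string) => void };
type Middleware = (
  req: unknown,
  res: MinimalResponse,
  next: () => void,
) => void;

// Hypothetical factory for the middleware used in the example above:
// sets a custom header on every response before the route handlers run.
export function addTestHeaderMiddleware(options: {
  logger: Logger;
}): Middleware {
  return (_req, res, next) => {
    res.setHeader('X-Test-Header', 'rhdh-custom-middleware');
    options.logger.info('Added X-Test-Header to response');
    next();
  };
}
```

Registering this middleware before `app.use(routes)` ensures the header is applied to every route the backend serves; the header name and value here are purely illustrative.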