Orchestrator in Red Hat Developer Hub
Orchestrator enables serverless workflows for cloud migration, onboarding, and customization in Red Hat Developer Hub.
Abstract
- 1. About Orchestrator in Red Hat Developer Hub
- 2. Build and deploy serverless workflows
- 3. Installing Red Hat Developer Hub with Orchestrator
- 3.1. Enabling the Orchestrator plugin using Operator
- 3.2. Installing Red Hat Developer Hub (RHDH) on OpenShift Container Platform with the Orchestrator using the Helm CLI
- 3.3. Installing Red Hat Developer Hub (RHDH) using Helm from the OpenShift Container Platform web console
- 3.4. Resource limits for installing Red Hat Developer Hub with the Orchestrator plugin when using Helm
1. About Orchestrator in Red Hat Developer Hub
You can streamline and automate your work by using the Orchestrator in Red Hat Developer Hub. It enables you to:
- Design, run, and monitor workflows to simplify multi-step processes across applications and services.
- Standardize onboarding, migration, and integration workflows to reduce manual effort and improve consistency.
- Extend RHDH with enterprise-grade orchestration features to support collaboration and scalability.
Orchestrator currently supports only Red Hat OpenShift Container Platform (OpenShift Container Platform); it is not available on Microsoft Azure Kubernetes Service (AKS), Amazon Elastic Kubernetes Service (EKS), or Google Kubernetes Engine (GKE).
To start using Orchestrator in RHDH, you must:
- Install the required infrastructure components, such as Red Hat OpenShift Serverless Operator, Knative Serving, Knative Eventing, and OpenShift Serverless Logic Operator
- Configure your Backstage custom resource (CR) or Helm values file for Orchestrator
- Import the Orchestrator software templates into the Red Hat Developer Hub catalog
1.1. Understand Orchestrator architecture
The Orchestrator architecture is composed of several components, each contributing to the running and management of workflows.
- Red Hat Developer Hub (RHDH)
Serves as the primary interface. It contains the following subcomponents:
- Orchestrator frontend plugins
- Provide the interface for users to run and monitor workflows within RHDH.
- Orchestrator backend plugins
- Get workflow data into Developer Hub.
- Notifications plugins
- Inform users about workflow events.
- SonataFlow
SonataFlow and its subcomponents handle the workflows. The Red Hat Developer Hub Operator and the Red Hat Developer Hub Helm chart manage the lifecycle of the following subcomponents:
- OpenShift Serverless Logic Operator
- Manages the Sonataflow custom resource (CR), where each CR represents a deployed workflow.
- Sonataflow Runtime/Workflow Application
- Functions as a deployed workflow. Operates as an HTTP server, handling requests for running workflow instances. It is managed as a Kubernetes (K8s) deployment by the OpenShift Serverless Logic Operator.
- Data Index Service
- Serves as a repository for workflow definitions, instances, and associated jobs. It exposes a GraphQL API used by the Orchestrator backend plugin to retrieve workflow definitions and instances.
- Job Service
- Orchestrates scheduled tasks for workflows.
- OpenShift Serverless
- Provides serverless capabilities essential for workflow communication. It employs Knative Eventing to interface with the Data Index service and uses Knative Functions to introduce more complex logic to workflows.
- PostgreSQL Server
- Provides a database solution essential for data persistence within the Orchestrator ecosystem. The system uses PostgreSQL Server for storing both Sonataflow information and Developer Hub data.
- Keycloak
- Provides authentication and security services within applications. Keycloak must be provisioned externally to manage authentication, as the Orchestrator Operator does not install it.
- OpenShift AMQ Streams (Strimzi/Kafka)
Provides enhanced reliability for the eventing system. Eventing can work without Kafka by using direct HTTP calls; however, this approach is not reliable.
Optional: The current deployment iteration does not natively integrate or include the AMQ Streams Operator. However, you can add the Operator after installation for enhanced reliability if you require it.
1.2. Compatibility guide for Orchestrator
The following table lists the RHDH Orchestrator plugin versions and their compatible infrastructure versions.

| Orchestrator plugin version | Red Hat Developer Hub (RHDH) version | OpenShift version | OpenShift Serverless Logic (OSL) version | OpenShift Serverless version |
|---|---|---|---|---|
| Orchestrator | | | OSL | |
| Orchestrator | | | OSL | |
| Orchestrator 1.7 | 1.7 | Same versions as RHDH | OSL 1.36 | 1.36 |

The Orchestrator plugin supports the same OpenShift Container Platform versions as RHDH. See the Life Cycle page.
1.3. Orchestrator plugin dependencies for Operator installation
When you enable the Orchestrator plugin in your Backstage custom resource (CR), the Operator automatically provisions the following required dependencies:
- A `SonataflowPlatform` CR
- `NetworkPolicies` that allow traffic between infrastructure resources (Knative, Serverless Logic Operator), monitoring traffic, and intra-namespace traffic

The Orchestrator plugin requires these components to run. For example, to communicate with the SonataFlow platform, the Orchestrator plugin uses the `sonataflow-platform-data-index-service`, which is created by the `SonataFlowPlatform` CR.
The `SonataFlowPlatform` CR contains the Data Index service, which requires a PostgreSQL database, as shown in the following example:
```yaml
persistence:
  postgresql:
    secretRef:
      name: backstage-psql-secret-{{backstage-name}}
      userKey: POSTGRES_USER
      passwordKey: POSTGRES_PASSWORD
    serviceRef:
      name: backstage-psql-{{backstage-name}}
      # Namespace where the Backstage CR is created
      namespace: {{backstage-ns}}
      databaseName: backstage_plugin_orchestrator
```
By default, the Orchestrator plugin dependencies use the following:
- The PostgreSQL database named `backstage_plugin_orchestrator` created by Backstage
- A Secret, created by the Backstage Operator for PostgreSQL, with `POSTGRES_USER` and `POSTGRES_PASSWORD` keys as the database credentials in the Backstage CR namespace
- A Service, created by the Backstage Operator for the PostgreSQL database, with the name `backstage-psql-{{backstage-name}}` in the Backstage CR namespace
For more information about automatic plugin dependency creation when the Backstage CR is applied to the cluster, see Dynamic plugins dependency management.
To enable the Backstage Operator to work with the SonataFlow platform, its `ServiceAccount` must have the appropriate permissions. The Operator automatically creates the required Role and RoleBinding resources, which are defined in the `profile/rhdh/plugin-rbac` directory.
1.4. Enabling the Orchestrator plugin components
To use the Orchestrator, enable the Orchestrator plugins for Red Hat Developer Hub, which are disabled by default:
- Orchestrator frontend plugins
backstage-plugin-orchestrator
- Provides the interface for users to run and monitor workflows within RHDH. You can run and track the execution status of processes.
backstage-plugin-orchestrator-form-widgets
- Provides custom widgets for the workflow execution form, allowing you to customize input fields and streamline the process of launching workflows.
backstage-plugin-orchestrator-form
- Provides the workflow execution form where you can define and submit the necessary input data required to start a new workflow instance.
backstage-plugin-orchestrator-form-api
- Defines the API for extending the workflow execution form.
- Orchestrator backend plugins
backstage-plugin-orchestrator-backend
- Gets workflow data into Developer Hub, ensuring that RHDH ingests critical workflow metadata and runtime status for visibility.
backstage-plugin-orchestrator-common
- Contains the backend OpenAPI specification along with autogenerated API documentation and client libraries.
scaffolder-backend-module-orchestrator
- Provides callable actions from scaffolder templates, such as `orchestrator:workflow:run` or `orchestrator:workflow:get_params`.
- Notification plugins
backstage-plugin-notifications
- Provides notification frontend components that display immediate, visible alerts about key workflow state changes, enabling real-time status tracking.
backstage-plugin-signals
- Provides user experience enhancements for the notification frontend components so you can process real-time lifecycle events.
backstage-plugin-notifications-backend-dynamic
- Provides notification backend components allowing you to manage and store the stream of workflow events, making sure that critical notifications are ready to be served to the front-end user interface.
backstage-plugin-signals-backend-dynamic
- Provides the backend components for notification user experience enhancements, allowing you to establish the necessary communication channels for the event-driven orchestration that is core to serverless workflows.
Prerequisites
- When using the Red Hat Developer Hub Helm chart, you have installed the necessary OpenShift Serverless Operators.
  Note: When using the Red Hat Developer Hub Operator, the Operator installs the necessary OpenShift Serverless Operators automatically. For specific use cases, install the dependencies manually or use helper utilities.
- (Optional) For managing the Orchestrator project, you have an instance of Argo CD or Red Hat OpenShift GitOps in the cluster. It is disabled by default.
- (Optional) To use Tekton tasks and the build pipeline, you have an instance of Tekton or Red Hat OpenShift Pipelines in the cluster. These features are disabled by default.
Procedure
Locate your Developer Hub configuration and enable the Orchestrator plugins and the supporting notification plugins.
```yaml
plugins:
  - package: "@redhat/backstage-plugin-orchestrator@1.7.1"
    disabled: false
  - package: "@redhat/backstage-plugin-orchestrator-backend-dynamic@1.7.1"
    disabled: false
  - package: "@redhat/backstage-plugin-scaffolder-backend-module-orchestrator-dynamic@1.7.1"
    disabled: false
  - package: "@redhat/backstage-plugin-orchestrator-form-widgets@1.7.1"
    disabled: false
  - package: "@redhat/backstage-plugin-orchestrator-common@1.7.1"
    disabled: false
  - package: "@redhat/backstage-plugin-orchestrator-form@1.7.1"
    disabled: false
  - package: "@redhat/backstage-plugin-orchestrator-form-api@1.7.1"
    disabled: false
  - package: "./dynamic-plugins/dist/backstage-plugin-notifications"
    disabled: false
  - package: "./dynamic-plugins/dist/backstage-plugin-signals"
    disabled: false
  - package: "./dynamic-plugins/dist/backstage-plugin-notifications-backend-dynamic"
    disabled: false
  - package: "./dynamic-plugins/dist/backstage-plugin-signals-backend-dynamic"
    disabled: false
```
1.4.1. Installing components using the Orchestrator Infrastructure for Red Hat Developer Hub Helm chart
You can use the Orchestrator Infrastructure for Red Hat Developer Hub Helm chart to install components for the Orchestrator plugins.
Procedure
- Run the `helm install` command for the `orchestrator-infra` chart. This command initiates the installation of the OpenShift Serverless Operator and OpenShift Serverless Logic Operator components.
- Manually approve the install plans for the Operators. You must run the `oc patch installplan` commands provided in the output to approve their installation.

By default, the Orchestrator Infrastructure for Red Hat Developer Hub Helm chart does not auto-approve the required Serverless Operators. You must manually approve the install plans; a sketch of the approval commands follows this note.
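The following is a minimal sketch of what manual approval typically looks like. The actual InstallPlan names and namespaces come from the chart output; `openshift-serverless` is shown as an assumed example namespace:

```bash
# List pending install plans in the Operator namespace (assumed: openshift-serverless)
oc get installplan -n openshift-serverless

# Approve a specific install plan; substitute the name printed by the chart output
oc patch installplan <install_plan_name> -n openshift-serverless \
  --type merge --patch '{"spec":{"approved":true}}'
```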
1.4.2. Installing Orchestrator components manually on OpenShift Container Platform
Use manual installation when you want full control of the setup process and component versions. The manual installation method focuses on setting up the underlying infrastructure.
Procedure
- Install the OpenShift Serverless components manually by following the instructions in the Red Hat OpenShift Serverless documentation.
- You must also configure workflow persistence to prevent workflow context from being lost when the Pod restarts. You can do this configuration at the namespace level by using the `SonataFlowPlatform` or `SonataFlow` custom resources (CRs). For detailed instructions on configuring persistence by using these CRs, see Managing workflow persistence; a minimal sketch follows this list.
- (Optional) If required, deploy a custom PostgreSQL database.
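For example, the following is a minimal namespace-level persistence sketch using a `SonataFlowPlatform` CR. It assumes the `sonataflow.org/v1alpha08` API version and uses placeholder Secret and Service names; verify the exact fields against Managing workflow persistence for your OSL version:

```yaml
apiVersion: sonataflow.org/v1alpha08
kind: SonataFlowPlatform
metadata:
  name: sonataflow-platform
  namespace: <your_namespace>
spec:
  persistence:
    postgresql:
      secretRef:
        name: <postgres_credentials_secret>  # placeholder Secret holding the credentials
        userKey: POSTGRES_USER
        passwordKey: POSTGRES_PASSWORD
      serviceRef:
        name: <postgres_service>             # placeholder Service for the PostgreSQL instance
        port: 5432
        databaseName: sonataflow
```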
1.4.3. Installing components using the RHDH helper script
You can use the RHDH helper script `plugin-infra.sh` to quickly install the OpenShift Serverless infrastructure and OpenShift Serverless Logic infrastructure required by the Orchestrator plugin.
Do not use `plugin-infra.sh` in production.
Procedure
Download the `plugin-infra.sh` script as shown in the following example:
```bash
curl -sSLO https://raw.githubusercontent.com/redhat-developer/rhdh-operator/refs/heads/release-${PRODUCT_VERSION}/config/profile/rhdh/plugin-infra/plugin-infra.sh
# Specify the Red Hat Developer Hub version in the URL or use main
```
Run the script:
```bash
./plugin-infra.sh
```
2. Build and deploy serverless workflows
To deploy a workflow and make it available in the Orchestrator plugin, follow these main steps:
- Building workflow images
- Generating workflow manifests
- Deploying workflows to a cluster
This process moves the workflow from your local machine to deployment on a cluster.
2.1. Benefits of workflow images
While the OpenShift Serverless Logic Operator supports building workflows dynamically, this approach is primarily for experimentation. For production deployments, building images is the preferred method for the following reasons:
- Production readiness: Prebuilt images can be scanned, secured, and tested before going live.
- GitOps compatibility: The Orchestrator relies on a central OpenShift Serverless Logic Operator instance to track workflows and their state. To use this tracking service, you must deploy workflows with the `gitops` profile, which expects a prebuilt image.
- Testing and quality: Building an image gives you more control over the testing process.
2.1.1. Project structure overview
The project uses the Quarkus project layout (Maven project structure). This structure is illustrated by the following `01_basic` workflow example:
```
01_basic
├── pom.xml
├── README.md
└── src
    └── main
        ├── docker
        │   ├── Dockerfile.jvm
        │   ├── Dockerfile.legacy-jar
        │   ├── Dockerfile.native
        │   └── Dockerfile.native-micro
        └── resources
            ├── application.properties
            ├── basic.svg
            ├── basic.sw.yaml
            ├── schemas
            │   ├── basic__main-schema.json
            │   └── workflow-output-schema.json
            └── secret.properties
```
The main workflow resources are located under the `src/main/resources/` directory.
The `kn-workflow` CLI generated this project structure. You can try generating the structure yourself by following the Getting Started guide. For more information about the Quarkus project, see Creating your first application.
2.1.2. Creating and running your serverless workflow project locally
The `kn-workflow` CLI is an essential tool that generates workflow manifests and project structures. To ensure successful development and immediate testing, begin developing a new serverless workflow locally by completing the following steps:
Procedure
- Use the `kn-workflow` CLI to create a new workflow project, which adheres to the Quarkus structure, as shown in the following example:
```bash
# For example, use --name 00_new_project
kn-workflow quarkus create --name <project_name>
```
- Edit the workflow, add the schema and any specific files, and run it locally from the project folder as shown in the following example:
```bash
kn-workflow quarkus run
```
Running the workflow locally with `kn-workflow quarkus run` pulls the following image:
```
registry.redhat.io/openshift-serverless-1/logic-swf-devmode-rhel8:1.36.0
```
For building the workflow image, the `kn-workflow` CLI pulls the following images:
```
registry.redhat.io/openshift-serverless-1/logic-swf-builder-rhel8:1.36.0-8
registry.access.redhat.com/ubi9/openjdk-17:1.21-2
```
2.2. Building workflow images locally
You can use the build script (`build.sh`) to build workflow images. You can run it either locally or inside a container. This section highlights how to build workflow images locally.
Procedure
Clone the project as shown in the following example:
```bash
git clone git@github.com:rhdhorchestrator/orchestrator-demo.git
cd orchestrator-demo
```
Check the help menu of the script:
```bash
./scripts/build.sh --help
```
Run the `build.sh` script, providing the required flags, for example, the image path (`-i`), the workflow source directory (`-w`), and the manifests output directory (`-m`).
Important: You must specify the full target image path with a tag, as shown in the following example:
```bash
./scripts/build.sh --image=quay.io/orchestrator/demo-basic:test -w 01_basic/ -m 01_basic/manifests
```
2.2.1. The `build.sh` script functionality and important flags
The `build.sh` script does the following tasks in order:
1. Generates workflow manifests using the `kn-workflow` CLI.
2. Builds the workflow image using `podman` or `docker`.
3. Optional: Pushes the image to an image registry and deploys the workflow using `kubectl`.
You can review the script configuration options and see available flags and their functions by accessing the help menu:
```bash
./scripts/build.sh --help
```
The following flags are essential for running the script:
| Flag | Description |
|---|---|
| `-i`, `--image` | Required: Full image path, for example, `quay.io/orchestrator/demo-basic:test` |
| `-w` | Workflow source directory (default is the current directory) |
| `-m` | Where to save generated manifests |
| `--push` | Push the image to the registry |
| | Deploy the workflow |
| `--help` | Show the help message |
The script also supports builder and runtime image overrides, namespace targeting, and persistence flags.
2.2.2. Environment variables supported by the build script
The `build.sh` script supports the following environment variables to customize the workflow build process without modifying the script itself:
QUARKUS_EXTENSIONS
The `QUARKUS_EXTENSIONS` variable specifies additional Quarkus extensions required by the workflow. This variable takes the format of a comma-separated list of fully qualified extension IDs, as shown in the following example:
```bash
export QUARKUS_EXTENSIONS="io.quarkus:quarkus-smallrye-reactive-messaging-kafka"
```
Use this variable to add Kafka messaging support or other integrations at build time.
MAVEN_ARGS_APPEND
The `MAVEN_ARGS_APPEND` variable appends additional arguments to the Maven build command. This variable takes the format of a string of Maven CLI arguments, as shown in the following example:
```bash
export MAVEN_ARGS_APPEND="-DmaxYamlCodePoints=35000000"
```
Use this variable to control build behavior. For example, the `maxYamlCodePoints` parameter controls the maximum input size for YAML input files; setting it to 35000000 characters allows files of roughly 33 MB in UTF-8.
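Because the script reads both variables from the environment, you can also set them inline for a single run. This sketch reuses the flags from the build example earlier in this section:

```bash
# Inline environment variables apply only to this invocation of the script
QUARKUS_EXTENSIONS="io.quarkus:quarkus-smallrye-reactive-messaging-kafka" \
MAVEN_ARGS_APPEND="-DmaxYamlCodePoints=35000000" \
./scripts/build.sh --image=quay.io/orchestrator/demo-basic:test -w 01_basic/ -m 01_basic/manifests
```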
2.2.3. Required tools
To run the `build.sh` script locally and manage the workflow lifecycle, you must install the following command-line tools:
| Tool | Purpose |
|---|---|
| `podman` or `docker` | Container runtime required for building the workflow images |
| `kubectl` | Kubernetes CLI |
| `yq` | YAML processor |
| `jq` | JSON processor |
| | Shell utilities |
| `kn-workflow` | CLI for generating workflow manifests |
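A quick way to verify that the required tools are on your PATH before running the script; a sketch assuming `yq` and `jq` are the YAML and JSON processors listed in the table above:

```bash
# Report any missing command-line tools
for tool in podman kubectl yq jq kn-workflow; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
```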
2.2.4. Building the `01_basic` workflow
To run the script from the root directory of the repository, you must use the `-w` flag to point to the workflow directory. Additionally, specify the output directory with the `-m` flag.
Prerequisites
- You have specified the target image with a tag.
Procedure
Run the following command:
```bash
./scripts/build.sh --image=quay.io/orchestrator/demo-basic:test -w 01_basic/ -m 01_basic/manifests
```
This build command produces the following two artifacts:
- A workflow image, `quay.io/orchestrator/demo-basic:test`, which is also tagged as `latest`
- Kubernetes manifests under `01_basic/manifests/`

Optional: You can add the `--push` flag to automatically push the image after building. Otherwise, you must push the image manually before deploying; a sketch follows this list.
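For example, a manual push with podman looks like the following, assuming you are already logged in to quay.io:

```bash
podman push quay.io/orchestrator/demo-basic:test
```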
2.3. Generated workflow manifests
The following example illustrates what is generated under `01_basic/manifests`:
```
01_basic/manifests
├── 00-secret_basic-secrets.yaml
├── 01-configmap_basic-props.yaml
├── 02-configmap_01-basic-resources-schemas.yaml
└── 03-sonataflow_basic.yaml
```
`00-secret_basic-secrets.yaml`
- Contains secrets from `01_basic/src/main/resources/secret.properties`. Values are not required at this stage because you can set them later, after applying the CRs or when using GitOps.
Note: In OpenShift Serverless Logic v1.36, after updating a Secret, you must manually restart the workflow Pod for changes to apply.
`01-configmap_basic-props.yaml`
- Holds application properties from `application.properties`. Any change to this ConfigMap triggers an automatic Pod restart.
`02-configmap_01-basic-resources-schemas.yaml`
- Contains JSON schemas from `src/main/resources/schemas`.
Note: You do not need to deploy certain configuration resources when using the GitOps profile.
`03-sonataflow_basic.yaml`
- The SonataFlow custom resource (CR) that defines the workflow.
```yaml
podTemplate:
  container:
    image: quay.io/orchestrator/demo-basic
    resources: {}
    envFrom:
      - secretRef:
          name: basic-secrets
```
The CR also contains the persistence configuration:
```yaml
persistence:
  postgresql:
    secretRef:
      name: sonataflow-psql-postgresql
      userKey: <your_postgres_username>
      passwordKey: <your_postgres_password>
    serviceRef:
      name: sonataflow-psql-postgresql
      port: 5432
      databaseName: sonataflow
      databaseSchema: basic
```
where:
`postgresql:secretRef:name`
- Enter the Secret name for your deployment.
`postgresql:secretRef:userKey`
- Enter the Secret key that stores the database username for your deployment.
`postgresql:secretRef:passwordKey`
- Enter the Secret key that stores the database password for your deployment.
`postgresql:serviceRef:name`
- Enter the Service name for your deployment.

If you must connect to an external database, replace `serviceRef` with `jdbcUrl`. See Managing workflow persistence.
By default, the script generates all the manifests without a namespace. You can specify a namespace to the script by using the `--namespace` flag if you know the target namespace in advance; an example follows. Otherwise, you must provide the namespace when applying the manifests to the cluster. See Configuring workflow services.
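For example, to generate namespaced manifests up front, add the `--namespace` flag to the build command used earlier:

```bash
./scripts/build.sh --image=quay.io/orchestrator/demo-basic:test \
  -w 01_basic/ -m 01_basic/manifests --namespace <your_namespace>
```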
2.4. Deploying workflows on a cluster
You can deploy the workflow on a cluster after the image is pushed to the image registry and the deployment manifests are available.
Prerequisites
- You have an OpenShift Container Platform cluster with the following versions of components installed:
  - Red Hat Developer Hub (RHDH) v1.7
  - Orchestrator plugins v1.7.1
  - OpenShift Serverless v1.36
  - OpenShift Serverless Logic v1.36

  For instructions on how to install these components, see the Orchestrator plugin components on OpenShift Container Platform.
- You must apply the workflow manifests in a namespace that contains a `SonataflowPlatform` custom resource (CR), which manages the supporting services.
Procedure
Use the `kubectl create` command, specifying the target namespace, to apply the Kubernetes manifests as shown in the following example:
```bash
kubectl create -n <your_namespace> -f ./01_basic/manifests/.
```
After deployment, monitor the status of the workflow Pods as shown in the following example:
```bash
kubectl get pods -n <your_namespace> -l app=basic
```
The Pod may initially appear in an `Error` state because of missing or incomplete configuration in the Secret or ConfigMap.
Inspect the Pod logs as shown in the following example:
```bash
oc logs -n <your_namespace> basic-f7c6ff455-vwl56
```
The following is an example of the output:
```
SRCFG00040: The config property quarkus.openapi-generator.notifications.auth.BearerToken.bearer-token is defined as the empty String ("") which the following Converter considered to be null: io.smallrye.config.Converters$BuiltInConverter
java.lang.RuntimeException: Failed to start quarkus
...
Caused by: io.quarkus.runtime.configuration.ConfigurationException: Failed to read configuration properties
```
The error indicates a missing property: `quarkus.openapi-generator.notifications.auth.BearerToken.bearer-token`.
If the logs show the `ConfigurationException: Failed to read configuration properties` error or indicate a missing value, retrieve the ConfigMap as shown in the following example:
```bash
oc get -n <your_namespace> configmaps basic-props -o yaml
```
The following is an example of the output:
```yaml
apiVersion: v1
data:
  application.properties: |
    # Backstage notifications service
    quarkus.rest-client.notifications.url=${BACKSTAGE_NOTIFICATIONS_URL}
    quarkus.openapi-generator.notifications.auth.BearerToken.bearer-token=${NOTIFICATIONS_BEARER_TOKEN}
    ...
```
Resolve the placeholders by providing values through a Secret. Edit the corresponding Secret and provide appropriate base64-encoded values to resolve the placeholders in `application.properties`, as shown in the following example:
```bash
kubectl edit secrets -n <your_namespace> basic-secrets
```
- Restart the workflow Pod for the Secret changes to take effect in OpenShift Serverless Logic v1.36; a minimal restart sketch follows this step.
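Because the workflow runs as a standard Kubernetes Deployment (see the architecture overview), one minimal way to restart it is to delete the Pod and let the Deployment recreate it. This sketch reuses the `app=basic` label from the earlier examples:

```bash
# The Deployment recreates the Pod, which picks up the updated Secret
kubectl delete pod -n <your_namespace> -l app=basic
```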
Verification
Verify the deployment status by checking the Pods again as shown in the following example:
```bash
oc get pods -n <your_namespace> -l app=basic
```
The expected status for a successfully deployed workflow Pod is as shown in the following example:
```
NAME                    READY   STATUS    RESTARTS   AGE
basic-f7c6ff455-grkxd   1/1     Running   0          47s
```
- After the Pod is in the `Running` state, the workflow appears in the Orchestrator plugin inside Red Hat Developer Hub.
Next steps
- Inspect the provided build script to extract the actual steps and implement them in your preferred CI/CD tool, for example, GitHub Actions, GitLab CI, Jenkins, and Tekton.
3. Installing Red Hat Developer Hub with Orchestrator
To install Red Hat Developer Hub, use one of the following methods:
- The Red Hat Developer Hub Operator
- The Red Hat Developer Hub Helm chart
3.1. Enabling the Orchestrator plugin using the Operator
You can enable the Orchestrator plugin in RHDH by configuring dynamic plugins in your Backstage custom resource (CR).
Prerequisites
- You have installed RHDH on OpenShift Container Platform.
- You have access to edit or create ConfigMaps in the namespace where the Backstage CR is deployed.
Procedure
To enable the Orchestrator plugin with default settings, set `disabled: false` for the package, as shown in the following example:
```yaml
- package: "@redhat/backstage-plugin-orchestrator@<plugin_version>"
  disabled: false
```
Example: Complete configuration of the Orchestrator plugin
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: orchestrator-plugin
data:
  dynamic-plugins.yaml: |
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - package: "@redhat/backstage-plugin-orchestrator@1.7.1"
        disabled: false
        pluginConfig:
          dynamicPlugins:
            frontend:
              red-hat-developer-hub.backstage-plugin-orchestrator:
                appIcons:
                  - importName: OrchestratorIcon
                    name: orchestratorIcon
                dynamicRoutes:
                  - importName: OrchestratorPage
                    menuItem:
                      icon: orchestratorIcon
                      text: Orchestrator
                    path: /orchestrator
                entityTabs:
                  - path: /workflows
                    title: Workflows
                    mountPoint: entity.page.workflows
                mountPoints:
                  - mountPoint: entity.page.workflows/cards
                    importName: OrchestratorCatalogTab
                    config:
                      layout:
                        gridColumn: '1 / -1'
                      if:
                        anyOf:
                          - IsOrchestratorCatalogTabAvailable
      - package: "@redhat/backstage-plugin-orchestrator-backend-dynamic@1.7.1"
        disabled: false
        pluginConfig:
          orchestrator:
            dataIndexService:
              url: http://sonataflow-platform-data-index-service
        dependencies:
          - ref: sonataflow
      - package: "@redhat/backstage-plugin-scaffolder-backend-module-orchestrator-dynamic@1.7.1"
        disabled: false
        pluginConfig:
          orchestrator:
            dataIndexService:
              url: http://sonataflow-platform-data-index-service
      - package: "@redhat/backstage-plugin-orchestrator-form-widgets@1.7.1"
        disabled: false
        pluginConfig:
          dynamicPlugins:
            frontend:
              red-hat-developer-hub.backstage-plugin-orchestrator-form-widgets: { }
---
apiVersion: rhdh.redhat.com/v1alpha3
kind: Backstage
metadata:
  name: orchestrator
spec:
  application:
    appConfig:
      configMaps:
        - name: app-config-rhdh
    dynamicPluginsConfigMapName: orchestrator-plugin
```
Create a Secret containing the `BACKEND_SECRET` value as shown in the following example:
```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-rhdh
data:
  app-config-rhdh.yaml: |-
    auth:
      environment: development
      providers:
        guest: # using the guest user to query the '/api/dynamic-plugins-info/loaded-plugins' endpoint.
          dangerouslyAllowOutsideDevelopment: true
    backend:
      auth:
        externalAccess:
          - type: static
            options:
              token: ${BACKEND_SECRET}
              subject: orchestrator
---
apiVersion: v1
kind: Secret
metadata:
  name: backend-auth-secret
stringData:
  # generated with the command below (from https://backstage.io/docs/auth/service-to-service-auth/#setup):
  # node -p 'require("crypto").randomBytes(24).toString("base64")'
  # notsecret
  BACKEND_SECRET: "R2FxRVNrcmwzYzhhN3l0V1VRcnQ3L1pLT09WaVhDNUEK"
```
Configure your Backstage CR to update the secret name in the `extraEnvs` field as shown in the following example:
```yaml
apiVersion: rhdh.redhat.com/v1alpha4
kind: Backstage
metadata:
  name: orchestrator
spec:
  application:
    appConfig:
      configMaps:
        - name: app-config-rhdh
    dynamicPluginsConfigMapName: orchestrator-plugin
    extraEnvs:
      secrets:
        # secret that contains the BACKEND_SECRET key
        - name: backend-auth-secret
```
Verification
- In the RHDH console, confirm that the Orchestrator frontend and backend features are available.
3.2. Installing Red Hat Developer Hub (RHDH) on OpenShift Container Platform with the Orchestrator using the Helm CLI
You can install Red Hat Developer Hub (RHDH) on OpenShift Container Platform with the Orchestrator by using the Helm CLI. The installation automatically enables the required dynamic plugins and integrates workflow infrastructure.
Prerequisites
- You are logged in as an administrator and have access to the Red Hat Developer Hub Helm chart repository.
- You can install the necessary infrastructure resources, such as SonataFlow, alongside RHDH in the same namespace. This is a one-off requirement that you must complete before enabling the Orchestrator plugin.
Procedure
As an administrator, install the relevant cluster-wide resources:
```bash
helm repo add openshift-helm-charts https://charts.openshift.io/
helm install <release_name> openshift-helm-charts/redhat-developer-hub-orchestrator-infra
```
Important: You must be an administrator to install the `redhat-developer-hub-orchestrator-infra` Helm chart because it deploys additional cluster-scoped OpenShift Serverless and OpenShift Serverless Logic Operators. As an administrator, you must manually approve the install plans for the OpenShift Serverless and Serverless Logic Operators.

Install the Backstage chart with the Orchestrator enabled as shown in the following example:
```bash
helm install <release_name> openshift-helm-charts/redhat-developer-hub --version 1.7.1 \
  --set orchestrator.enabled=true
```
(Optional) Enable the Notifications and Signals plugins by adding them to the `global.dynamic.plugins` list in your `values.yaml` file, as shown in the following example:
```yaml
global:
  dynamic:
    plugins:
      - disabled: false
        package: "./dynamic-plugins/dist/backstage-plugin-notifications"
      - disabled: false
        package: "./dynamic-plugins/dist/backstage-plugin-signals"
      - disabled: false
        package: "./dynamic-plugins/dist/backstage-plugin-notifications-backend-dynamic"
      - disabled: false
        package: "./dynamic-plugins/dist/backstage-plugin-signals-backend-dynamic"
```
(Optional) You can disable the Serverless Logic and Serverless Operators individually or together by setting their values to `false`, as shown in the following example:
```bash
helm install <release_name> openshift-helm-charts/redhat-developer-hub \
  --version 1.7.1 \
  --set orchestrator.enabled=true \
  --set orchestrator.serverlessOperator=false \
  --set orchestrator.serverlessLogicOperator=false
```
(Optional) If you are using an external database, add the following configuration under `orchestrator.sonataflowPlatform` in your `values.yaml` file:
```yaml
orchestrator:
  sonataflowPlatform:
    externalDBsecretRef: "<cred-secret>"
    externalDBName: "<database_name>" # The name of the user-configured existing database (not the database that the Orchestrator and SonataFlow resources use).
    externalDBHost: "<database_host>"
    externalDBPort: "<database_port>"
```
Note: This step only configures the Orchestrator's use of an external database. To configure Red Hat Developer Hub to use an external PostgreSQL instance, follow the steps in Configuring a PostgreSQL instance using Helm.
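The `<cred-secret>` Secret must exist before the installation. The following is a minimal sketch; the `POSTGRES_USER` and `POSTGRES_PASSWORD` key names are an assumption that mirrors the PostgreSQL credential keys used elsewhere in this document, so verify the expected keys against your chart version:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: <cred-secret>
  namespace: <your_namespace>
stringData:
  POSTGRES_USER: <database_user>          # assumed key name; verify against your chart version
  POSTGRES_PASSWORD: <database_password>  # assumed key name; verify against your chart version
```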
Verification
- Verify that the Orchestrator plugin is visible in the Red Hat Developer Hub UI.
- Create and run sample workflows to confirm the orchestration is functioning correctly.
3.3. Installing Red Hat Developer Hub (RHDH) using Helm from the OpenShift Container Platform web console
You can install Red Hat Developer Hub (RHDH) with the Orchestrator by using the OpenShift Container Platform web console. This method is useful if you prefer a graphical interface or want to deploy cluster-wide resources without using the Helm CLI.
Prerequisites
- You are logged in to the OpenShift Container Platform web console as an administrator.
- You have access to the Red Hat Developer Hub Helm chart repository.
- Your cluster has internet access or the Helm charts are mirrored in a disconnected environment.
Procedure
- In the OpenShift Container Platform web console, go to the Helm Charts and verify that the Red Hat Developer Hub Helm chart repository is available.
- Search for the Orchestrator infrastructure for Red Hat Developer Hub chart and select Install.
  Important: You must be an administrator to install the Orchestrator Infrastructure for Red Hat Developer Hub Helm chart because it deploys cluster-scoped resources. As an administrator, you must manually approve the install plans for the OpenShift Serverless and Serverless Logic Operators.
- As a regular user, search for the Red Hat Developer Hub chart and install it by setting the value of `orchestrator.enabled` to `true`. Otherwise, the Orchestrator is not deployed.
- Wait until the charts are successfully deployed.
- Monitor the deployment status by navigating to Pods or releases.
Verification
After deployment completes:
- The orchestrator-related pods are running in the selected namespace.
- Cluster-wide resources are present.
- You can start connecting the orchestrator to your Red Hat Developer Hub UI.
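For example, you can confirm the deployment from the CLI. This sketch assumes the namespace you installed into and that the SonataFlowPlatform CRD is present on the cluster:

```bash
# Check that the orchestrator-related Pods are running
oc get pods -n <your_namespace>

# Check that the platform CR was created
oc get sonataflowplatform -n <your_namespace>
```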
3.4. Resource limits for installing Red Hat Developer Hub with the Orchestrator plugin when using Helm
When installing Red Hat Developer Hub (RHDH) with the Orchestrator plugin using Helm, the chart defines default CPU and memory limits for the SonataFlowPlatform
component.
These limits are enforced by the cluster so that pods do not exceed their allocated resources.
- Default resource limits

| Resource | Default value |
|---|---|
| CPU limits | |
| Memory limits | |
You can override these values in any of the following ways:
- With `values.yaml`
- With `--set` flags
Override the defaults with `values.yaml` as shown in the following example:
```yaml
orchestrator:
  enabled: true
  sonataflowPlatform:
    resources:
      limits:
        cpu: "500m"
        memory: "1Gi"
```
Override with `--set` as shown in the following example:
```bash
helm upgrade --install <release_name> openshift-helm-charts/redhat-developer-hub \
  --set orchestrator.enabled=true \
  --set orchestrator.sonataflowPlatform.resources.requests.cpu=500m \
  --set orchestrator.sonataflowPlatform.resources.requests.memory=128Mi \
  --set orchestrator.sonataflowPlatform.resources.limits.cpu=1 \
  --set orchestrator.sonataflowPlatform.resources.limits.memory=2Gi
```
Note: The `--set` settings apply only when `orchestrator.enabled` is `true`. By default, it is set to `false`.