Configuring dynamic plugins
Abstract
- 1. Installing Ansible plug-ins for Red Hat Developer Hub
- 2. Enabling the Argo CD plugin
- 3. Installing and configuring Keycloak
- 4. Installing and configuring the Nexus Repository Manager plugin
- 5. Installing and configuring the Tekton plugin
- 6. Installing and configuring the Topology plugin
- 7. Using the dynamic plugins cache
- 8. Using Redis Cache with dynamic plugins
1. Installing Ansible plug-ins for Red Hat Developer Hub
Ansible plug-ins for Red Hat Developer Hub deliver an Ansible-specific portal experience with curated learning paths, push-button content creation, integrated development tools, and other opinionated resources.
The Ansible plug-ins are a Technology Preview feature only.
Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information on Red Hat Technology Preview features, see Technology Preview Features Scope.
Additional detail on how Red Hat provides support for bundled community dynamic plugins is available on the Red Hat Developer Support Policy page.
To install and configure the Ansible plug-ins, see Installing Ansible plug-ins for Red Hat Developer Hub.
2. Enabling the Argo CD plugin
You can use the Argo CD plugin to visualize the Continuous Delivery (CD) workflows in OpenShift GitOps. This plugin provides a visual overview of the application's status, deployment details, commit message, commit author, the container image promoted to the environment, and the deployment history.
Prerequisites
Add Argo CD instance information to your app-config.yaml ConfigMap, as shown in the following example:

argocd:
  appLocatorMethods:
    - type: 'config'
      instances:
        - name: argoInstance1
          url: https://argoInstance1.com
          username: ${ARGOCD_USERNAME}
          password: ${ARGOCD_PASSWORD}
        - name: argoInstance2
          url: https://argoInstance2.com
          username: ${ARGOCD_USERNAME}
          password: ${ARGOCD_PASSWORD}
Add the following annotation to the entity's catalog-info.yaml file to identify the Argo CD applications:

annotations:
  ...
  # The label that Argo CD uses to fetch all the applications. The format to be used is label.key=label.value. For example, rht-gitops.com/janus-argocd=quarkus-app.
  argocd/app-selector: '${ARGOCD_LABEL_SELECTOR}'
(Optional) Add the following annotation to the entity's catalog-info.yaml file to switch between Argo CD instances, as shown in the following example:

annotations:
  ...
  # The Argo CD instance name used in `app-config.yaml`.
  argocd/instance-name: '${ARGOCD_INSTANCE}'

Note: If you do not set this annotation, the Argo CD plugin defaults to the first Argo CD instance configured in app-config.yaml.
Procedure
Add the following to your dynamic-plugins ConfigMap to enable the Argo CD plugin:

global:
  dynamic:
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - package: ./dynamic-plugins/dist/roadiehq-backstage-plugin-argo-cd-backend-dynamic
        disabled: false
      - package: ./dynamic-plugins/dist/backstage-community-plugin-redhat-argocd
        disabled: false
2.1. (Optional) Enabling Argo CD Rollouts feature
The Argo CD Rollouts feature enhances Kubernetes by providing advanced deployment strategies, such as blue-green and canary deployments, for your applications. When integrated into the Backstage Kubernetes plugin, it allows developers and operations teams to visualize and manage rollouts seamlessly within the Backstage interface.
Prerequisites
- Backstage Kubernetes plugin installed: Verify that the Kubernetes plugin in Backstage (@backstage/plugin-kubernetes) is installed and configured. To install and configure the Kubernetes plugin in Backstage, see the Installation and Configuration guide.
- Kubernetes cluster access: Ensure that you have access to the Kubernetes cluster with the necessary permissions to create and manage custom resources and ClusterRoles.
- Argo Rollouts resources: Confirm that the Kubernetes cluster has the argoproj.io group resources (for example, Rollouts and AnalysisRuns) installed.
Procedure
Update app-config.yaml for custom resources:

- Open the app-config.yaml file in your Backstage instance.
- Add the following customResources component under the kubernetes configuration to enable Argo Rollouts and AnalysisRuns:

kubernetes:
  ...
  customResources:
    - group: 'argoproj.io'
      apiVersion: 'v1alpha1'
      plural: 'rollouts'
    - group: 'argoproj.io'
      apiVersion: 'v1alpha1'
      plural: 'analysisruns'
Grant ClusterRole permissions for custom resources:

- Verify whether the Backstage Kubernetes plugin is already configured. If it is, the ClusterRole for Rollouts and AnalysisRuns might already be granted.
  Note: Use a prepared manifest for a read-only ClusterRole that provides access for both the Kubernetes plugin and the Argo CD plugin.
- If not already granted, use the following YAML manifest to create the ClusterRole:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backstage-read-only
rules:
  - apiGroups:
      - argoproj.io
    resources:
      - rollouts
      - analysisruns
    verbs:
      - get
      - list
- Apply the manifest to the cluster using kubectl:

kubectl apply -f <your-clusterrole-file>.yaml
- Ensure that the ServiceAccount accessing the cluster has this ClusterRole assigned.
Add annotations to catalog-info.yaml to identify Kubernetes resources for Backstage.

For identifying resources by entity ID:

annotations:
  ...
  backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>

For identifying resources by namespace (optional):

annotations:
  ...
  backstage.io/kubernetes-namespace: <RESOURCE_NAMESPACE>

For using custom label selectors (take precedence over the preceding annotations):

annotations:
  ...
  backstage.io/kubernetes-label-selector: 'app=my-app,component=front-end'
Add labels to Kubernetes resources to enable Backstage to find the appropriate Kubernetes resources.

Backstage Kubernetes plugin label: Add this label to map resources to specific Backstage entities:

labels:
  ...
  backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>

GitOps application mapping: Add this label to map Argo CD Rollouts to a specific GitOps application:

labels:
  ...
  app.kubernetes.io/instance: <GITOPS_APPLICATION_NAME>
Note: If you use the label selector annotation (backstage.io/kubernetes-label-selector), ensure that the specified labels are present on the resources. The label selector overrides other annotations, such as kubernetes-id or kubernetes-namespace.
Verification
- Push code changes to your GitOps repository to trigger a deployment.
- Open the Backstage interface and navigate to the entity you configured.
Confirm that:
- Kubernetes resources, such as Rollouts and AnalysisRuns, are visible under the Kubernetes tab.
- The Argo CD Rollouts feature is functioning as expected, allowing you to monitor rollout status and manage deployments.
Additional resources
- The package path, scope, and name of the Red Hat ArgoCD plugin have changed since 1.2. For more information, see Breaking Changes in the Release notes for Red Hat Developer Hub.
- For more information on installing dynamic plugins, see Installing and viewing dynamic plugins.
3. Installing and configuring Keycloak
The Keycloak backend plugin, which integrates Keycloak into Developer Hub, has the following capabilities:
- Synchronization of Keycloak users in a realm.
- Synchronization of Keycloak groups and their users in a realm.
3.1. Installation
The Keycloak plugin is pre-loaded in Developer Hub with basic configuration properties. To enable it, set the disabled property to false as follows:

global:
  dynamic:
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - package: ./dynamic-plugins/dist/janus-idp-backstage-plugin-keycloak-backend-dynamic
        disabled: false
3.2. Basic configuration
To enable the Keycloak plugin, you must set the following environment variables:
- KEYCLOAK_BASE_URL
- KEYCLOAK_LOGIN_REALM
- KEYCLOAK_REALM
- KEYCLOAK_CLIENT_ID
- KEYCLOAK_CLIENT_SECRET
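For reference, the following minimal sketch shows how these variables are typically wired into the keycloakOrg catalog provider in the app-config.yaml file; verify the exact parameter names against the configuration table in the next section:

catalog:
  providers:
    keycloakOrg:
      default:
        baseUrl: ${KEYCLOAK_BASE_URL}
        loginRealm: ${KEYCLOAK_LOGIN_REALM}
        realm: ${KEYCLOAK_REALM}
        clientId: ${KEYCLOAK_CLIENT_ID}
        clientSecret: ${KEYCLOAK_CLIENT_SECRET}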
3.3. Advanced configuration
Schedule configuration
You can configure a schedule in the app-config.yaml file, as follows:

catalog:
  providers:
    keycloakOrg:
      default:
        # ...
        # highlight-add-start
        schedule: # optional; same options as in TaskScheduleDefinition
          # supports cron, ISO duration, "human duration" as used in code
          frequency: { minutes: 1 }
          # supports ISO duration, "human duration" as used in code
          timeout: { minutes: 1 }
          initialDelay: { seconds: 15 }
        # highlight-add-end
If you have made any changes to the schedule in the app-config.yaml file, restart Developer Hub to apply the changes.
Keycloak query parameters
You can override the default Keycloak query parameters in the app-config.yaml file, as follows:

catalog:
  providers:
    keycloakOrg:
      default:
        # ...
        # highlight-add-start
        userQuerySize: 500 # Optional
        groupQuerySize: 250 # Optional
        # highlight-add-end
Communication between Developer Hub and Keycloak is enabled by using the Keycloak API. Username and password, or client credentials, are the supported authentication methods.
The following table describes the parameters that you can configure to enable the plugin under the catalog.providers.keycloakOrg.<ENVIRONMENT_NAME> object in the app-config.yaml file:
Name | Description | Default Value | Required |
---|---|---|---|
baseUrl | Location of the Keycloak server, such as https://localhost:8443/auth | "" | Yes |
realm | Realm to synchronize | master | No |
loginRealm | Realm used to authenticate | master | No |
username | Username to authenticate | "" | Yes if using password based authentication |
password | Password to authenticate | "" | Yes if using password based authentication |
clientId | Client ID to authenticate | "" | Yes if using client credentials based authentication |
clientSecret | Client Secret to authenticate | "" | Yes if using client credentials based authentication |
userQuerySize | Number of users to query at a time | 100 | No |
groupQuerySize | Number of groups to query at a time | 100 | No |
When using client credentials, the access type must be set to confidential and service accounts must be enabled. You must also add the following roles from the realm-management client role:

- query-groups
- query-users
- view-users
3.4. Limitations
If you have self-signed or corporate certificate issues, you can set the following environment variable before starting Developer Hub:
NODE_TLS_REJECT_UNAUTHORIZED=0
Setting this environment variable is not recommended.
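If you install with the Operator, one way to set this variable is through the Backstage custom resource's deployment patch, in the same style as the persistent volume patch shown later in this document. The following is a hedged sketch; the container name backstage-backend is an assumption that you must verify against your deployment:

apiVersion: rhdh.redhat.com/v1alpha2
kind: Backstage
metadata:
  name: developer-hub
spec:
  deployment:
    patch:
      spec:
        template:
          spec:
            containers:
              - name: backstage-backend # assumed container name; check your deployment
                env:
                  - name: NODE_TLS_REJECT_UNAUTHORIZED
                    value: "0"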
4. Installing and configuring the Nexus Repository Manager plugin
The Nexus Repository Manager plugin displays information about your build artifacts in your Developer Hub application. The build artifacts are available in the Nexus Repository Manager.
The Nexus Repository Manager plugin is a Technology Preview feature only.
Technology Preview features are not supported with Red Hat production service level agreements (SLAs), might not be functionally complete, and Red Hat does not recommend using them for production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.
For more information on Red Hat Technology Preview features, see Technology Preview Features Scope.
Additional detail on how Red Hat provides support for bundled community dynamic plugins is available on the Red Hat Developer Support Policy page.
4.1. Installation
The Nexus Repository Manager plugin is pre-loaded in Developer Hub with basic configuration properties. To enable it, set the disabled property to false as follows:

global:
  dynamic:
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - package: ./dynamic-plugins/dist/janus-idp-backstage-plugin-nexus-repository-manager
        disabled: false
4.2. Configuration
Set the proxy to the desired Nexus Repository Manager server in the app-config.yaml file as follows:

proxy:
  '/nexus-repository-manager':
    target: 'https://<NEXUS_REPOSITORY_MANAGER_URL>'
    headers:
      X-Requested-With: 'XMLHttpRequest'
      # Uncomment the following line to access a private Nexus Repository Manager using a token
      # Authorization: 'Bearer <YOUR TOKEN>'
    changeOrigin: true
    # Change to "false" in case of using self hosted Nexus Repository Manager instance with a self-signed certificate
    secure: true
Optional: Change the base URL of the Nexus Repository Manager proxy as follows:

nexusRepositoryManager:
  # default path is `/nexus-repository-manager`
  proxyPath: /custom-path
Optional: Enable the following experimental annotations:

nexusRepositoryManager:
  experimentalAnnotations: true
Annotate your entity using the following annotations:

metadata:
  annotations:
    # insert the chosen annotations here
    # example
    nexus-repository-manager/docker.image-name: '<ORGANIZATION>/<REPOSITORY>'
5. Installing and configuring the Tekton plugin
You can use the Tekton plugin to visualize the results of CI/CD pipeline runs on your Kubernetes or OpenShift clusters. The plugin allows users to visually see the high-level status of all associated tasks in the pipeline for their applications.
5.1. Installation
Prerequisites
- You have installed and configured the @backstage/plugin-kubernetes and @backstage/plugin-kubernetes-backend dynamic plugins.
- You have configured the Kubernetes plugin to connect to the cluster using a ServiceAccount. The ClusterRole must be granted for custom resources (PipelineRuns and TaskRuns) to the ServiceAccount accessing the cluster.
  Note: If you have the RHDH Kubernetes plugin configured, then the ClusterRole is already granted.
- To view the pod logs, you have granted permissions for pods/log.

You can use the following code to grant the ClusterRole for custom resources and pod logs:

kubernetes:
  ...
  customResources:
    - group: 'tekton.dev'
      apiVersion: 'v1'
      plural: 'pipelineruns'
    - group: 'tekton.dev'
      apiVersion: 'v1'
      plural: 'taskruns'

...
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backstage-read-only
rules:
  - apiGroups:
      - ""
    resources:
      - pods/log
    verbs:
      - get
      - list
      - watch
...
  - apiGroups:
      - tekton.dev
    resources:
      - pipelineruns
      - taskruns
    verbs:
      - get
      - list
You can use the prepared manifest for a read-only ClusterRole, which provides access for both the Kubernetes plugin and the Tekton plugin.

Add the following annotation to the entity's catalog-info.yaml file to identify whether an entity contains the Kubernetes resources:

annotations:
  ...
  backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>
You can also add the backstage.io/kubernetes-namespace annotation to identify the Kubernetes resources using the defined namespace:

annotations:
  ...
  backstage.io/kubernetes-namespace: <RESOURCE_NS>
Add the following annotation to the catalog-info.yaml file of the entity to enable the Tekton-related features in RHDH. The value of the annotation identifies the name of the RHDH entity:

annotations:
  ...
  janus-idp.io/tekton: <BACKSTAGE_ENTITY_NAME>
Add a custom label selector, which RHDH uses to find the Kubernetes resources. The label selector takes precedence over the ID annotations:

annotations:
  ...
  backstage.io/kubernetes-label-selector: 'app=my-app,component=front-end'
Add the following label to the resources so that the Kubernetes plugin gets the Kubernetes resources from the requested entity:

labels:
  ...
  backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>
Note: When you use the label selector, the mentioned labels must be present on the resource.
Procedure
The Tekton plugin is pre-loaded in RHDH with basic configuration properties. To enable it, set the disabled property to false as follows:
global:
  dynamic:
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - package: ./dynamic-plugins/dist/janus-idp-backstage-plugin-tekton
        disabled: false
6. Installing and configuring the Topology plugin
6.1. Installation
The Topology plugin enables you to visualize workloads, such as Deployment, Job, DaemonSet, StatefulSet, CronJob, Pod, and VirtualMachine resources, powering any service on your Kubernetes cluster.
Prerequisites
- You have installed and configured the @backstage/plugin-kubernetes-backend dynamic plugin.
- You have configured the Kubernetes plugin to connect to the cluster using a ServiceAccount.
- The ClusterRole must be granted to the ServiceAccount accessing the cluster.
  Note: If you have the Developer Hub Kubernetes plugin configured, then the ClusterRole is already granted.
Procedure
The Topology plugin is pre-loaded in Developer Hub with basic configuration properties. To enable it, set the disabled property to false as follows:
app-config.yaml fragment:

global:
  dynamic:
    includes:
      - dynamic-plugins.default.yaml
    plugins:
      - package: ./dynamic-plugins/dist/janus-idp-backstage-plugin-topology
        disabled: false
6.2. Configuration
6.2.1. Viewing OpenShift routes
To view OpenShift routes, you must grant read access to the routes resource in the ClusterRole:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backstage-read-only
rules:
  ...
  - apiGroups:
      - route.openshift.io
    resources:
      - routes
    verbs:
      - get
      - list
You must also add the following in the kubernetes.customResources property in your app-config.yaml file:

kubernetes:
  ...
  customResources:
    - group: 'route.openshift.io'
      apiVersion: 'v1'
      plural: 'routes'
6.2.2. Viewing pod logs
To view pod logs, you must grant the following permission to the ClusterRole:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backstage-read-only
rules:
  ...
  - apiGroups:
      - ''
    resources:
      - pods
      - pods/log
    verbs:
      - get
      - list
      - watch
6.2.3. Viewing Tekton PipelineRuns
To view the Tekton PipelineRuns, you must grant read access to the pipelines, pipelineruns, and taskruns resources in the ClusterRole:

...
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backstage-read-only
rules:
  ...
  - apiGroups:
      - tekton.dev
    resources:
      - pipelines
      - pipelineruns
      - taskruns
    verbs:
      - get
      - list
To view the Tekton PipelineRuns list in the side panel and the latest PipelineRun status in the Topology node decorator, you must add the following code to the kubernetes.customResources property in your app-config.yaml file:

kubernetes:
  ...
  customResources:
    - group: 'tekton.dev'
      apiVersion: 'v1'
      plural: 'pipelines'
    - group: 'tekton.dev'
      apiVersion: 'v1'
      plural: 'pipelineruns'
    - group: 'tekton.dev'
      apiVersion: 'v1'
      plural: 'taskruns'
6.2.4. Viewing virtual machines
To view virtual machines, the OpenShift Virtualization operator must be installed and configured on a Kubernetes cluster. You must also grant read access to the VirtualMachines resource in the ClusterRole:

...
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backstage-read-only
rules:
  ...
  - apiGroups:
      - kubevirt.io
    resources:
      - virtualmachines
      - virtualmachineinstances
    verbs:
      - get
      - list
To view the virtual machine nodes on the Topology plugin, you must add the following code to the kubernetes.customResources property in the app-config.yaml file:

kubernetes:
  ...
  customResources:
    - group: 'kubevirt.io'
      apiVersion: 'v1'
      plural: 'virtualmachines'
    - group: 'kubevirt.io'
      apiVersion: 'v1'
      plural: 'virtualmachineinstances'
6.2.5. Enabling the source code editor
To enable the source code editor, you must grant read access to the CheClusters resource in the ClusterRole, as shown in the following example code:

...
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: backstage-read-only
rules:
  ...
  - apiGroups:
      - org.eclipse.che
    resources:
      - checlusters
    verbs:
      - get
      - list
To use the source code editor, you must add the following configuration to the kubernetes.customResources property in your app-config.yaml file:

kubernetes:
  ...
  customResources:
    - group: 'org.eclipse.che'
      apiVersion: 'v2'
      plural: 'checlusters'
6.2.6. Labels and annotations
6.2.6.1. Linking to the source code editor or the source
Add the following annotation to workload resources, such as Deployments, to navigate to the Git repository of the associated application using the source code editor:

annotations:
  app.openshift.io/vcs-uri: <GIT_REPO_URL>
Add the following annotation to navigate to a specific branch:

annotations:
  app.openshift.io/vcs-ref: <GIT_REPO_BRANCH>
If Red Hat OpenShift Dev Spaces is installed and configured, and the Git URL annotations are also added to the workload YAML file, clicking the edit code decorator redirects you to the Red Hat OpenShift Dev Spaces instance.

When you deploy your application using the OpenShift Container Platform Git import flows, you do not need to add the annotations, because the import flows add them. Otherwise, you need to add the annotations manually to the workload YAML file.
You can also add the app.openshift.io/edit-url annotation with the edit URL that you want to access using the decorator.
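For example, the annotation might look like the following sketch, where the URL is a hypothetical value that you replace with your own editor location:

annotations:
  app.openshift.io/edit-url: 'https://example.com/edit/my-app' # hypothetical URL; replace with your editor location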
6.2.6.2. Entity annotation/label
For RHDH to detect that an entity has Kubernetes components, add the following annotation to the entity's catalog-info.yaml file:

annotations:
  backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>
Add the following label to the resources so that the Kubernetes plugin gets the Kubernetes resources from the requested entity:

labels:
  backstage.io/kubernetes-id: <BACKSTAGE_ENTITY_NAME>
When using the label selector, the mentioned labels must be present on the resource.
6.2.6.3. Namespace annotation
To identify the Kubernetes resources using the defined namespace, add the backstage.io/kubernetes-namespace annotation:

annotations:
  backstage.io/kubernetes-namespace: <RESOURCE_NS>
The Red Hat OpenShift Dev Spaces instance is not accessible using the source code editor if the backstage.io/kubernetes-namespace annotation is added to the catalog-info.yaml file.
To retrieve the instance URL, you require the CheCluster Custom Resource (CR). As the CheCluster CR is created in the openshift-devspaces namespace, the instance URL is not retrieved if the namespace annotation value is not openshift-devspaces.
6.2.6.4. Label selector query annotation
You can write your own custom label selector, which RHDH uses to find the Kubernetes resources. The label selector takes precedence over the ID annotations:

annotations:
  backstage.io/kubernetes-label-selector: 'app=my-app,component=front-end'
If Red Hat OpenShift Dev Spaces is configured and you want multiple entities to support the edit code decorator that redirects to the Red Hat OpenShift Dev Spaces instance, add the backstage.io/kubernetes-label-selector annotation to the catalog-info.yaml file for each entity:

annotations:
  backstage.io/kubernetes-label-selector: 'component in (<BACKSTAGE_ENTITY_NAME>,che)'
If you are using the previous label selector, you must add the following labels to your resources so that the Kubernetes plugin gets the Kubernetes resources from the requested entity:
labels:
  component: che # add this label to your CheCluster instance

labels:
  component: <BACKSTAGE_ENTITY_NAME> # add this label to the other resources associated with your entity
You can also write your own custom query for the label selector with unique labels to differentiate your entities. However, you need to ensure that you add those labels to the resources associated with your entities including your CheCluster instance.
6.2.6.5. Icon displayed in the node
To display a runtime icon in the topology nodes, add the following label to workload resources, such as Deployments:
labels:
  app.openshift.io/runtime: <RUNTIME_NAME>
Alternatively, you can include the following label to display the runtime icon:
labels:
  app.kubernetes.io/name: <RUNTIME_NAME>
Supported values of <RUNTIME_NAME> include:
- django
- dotnet
- drupal
- go-gopher
- golang
- grails
- jboss
- jruby
- js
- nginx
- nodejs
- openjdk
- perl
- phalcon
- php
- python
- quarkus
- rails
- redis
- rh-spring-boot
- rust
- java
- rh-openjdk
- ruby
- spring
- spring-boot
Other values result in icons not being rendered for the node.
6.2.6.6. App grouping
To display workload resources such as deployments or pods in a visual group, add the following label:
labels:
  app.kubernetes.io/part-of: <GROUP_NAME>
6.2.6.7. Node connector
To display the workload resources such as deployments or pods with a visual connector, add the following annotation:
annotations:
  app.openshift.io/connects-to: '[{"apiVersion": <RESOURCE_APIVERSION>,"kind": <RESOURCE_KIND>,"name": <RESOURCE_NAME>}]'
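For example, a hypothetical frontend Deployment that draws a connector to a Deployment named my-database might carry the following annotation; the resource names are illustrative:

annotations:
  app.openshift.io/connects-to: '[{"apiVersion": "apps/v1", "kind": "Deployment", "name": "my-database"}]'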
For more information about the labels and annotations, see Guidelines for labels and annotations for OpenShift applications.
7. Using the dynamic plugins cache
The dynamic plugins cache in Red Hat Developer Hub (RHDH) enhances the installation process and reduces platform boot time by storing previously installed plugins. If the configuration remains unchanged, this feature prevents the need to re-download plugins on subsequent boots.
When you enable the dynamic plugins cache:

- The system calculates a checksum of each plugin's YAML configuration (excluding pluginConfig).
- The checksum is stored in a file named dynamic-plugin-config.hash within the plugin's directory.
- During boot, if a plugin's package reference matches the previous installation and the checksum is unchanged, the download is skipped.
- Plugins that have been disabled since the previous boot are automatically removed.
7.1. Enabling the dynamic plugins cache
To enable the dynamic plugins cache in RHDH, the plugins directory dynamic-plugins-root must be a persistent volume.

For Helm chart installations, a persistent volume named dynamic-plugins-root is automatically created.

For operator-based installations, you must manually create the PersistentVolumeClaim (PVC) as follows:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: dynamic-plugins-root
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
---
apiVersion: rhdh.redhat.com/v1alpha2
kind: Backstage
metadata:
  name: developer-hub
spec:
  deployment:
    patch:
      spec:
        template:
          spec:
            volumes:
              - $patch: replace
                name: dynamic-plugins-root
                persistentVolumeClaim:
                  claimName: dynamic-plugins-root
Future versions of the RHDH operator are planned to automatically create the PVC.
7.2. Configuring the dynamic plugins cache
You can set the following optional dynamic plugin cache parameters:
- forceDownload: Set to true to force a reinstall of the plugin, bypassing the cache. Default is false. For example, modify your dynamic-plugins.yaml file as follows:

plugins:
  - disabled: false
    forceDownload: true
    package: 'oci://quay.io/example-org/example-plugin:v1.0.0!internal-backstage-plugin-example'
8. Using Redis Cache with dynamic plugins
You can use the Redis cache store to improve RHDH performance and reliability. Plugins in RHDH receive dedicated cache connections, which are powered by Keyv.
8.1. Installing Redis Cache in Red Hat Developer Hub
Prerequisites
- You have installed Red Hat Developer Hub by using either the Operator or Helm chart.
- You have an active Redis server. For more information on setting up an external Redis server, see the Redis official documentation.
Procedure
Add the following code to your app-config.yaml file:

backend:
  cache:
    store: redis
    connection: redis://user:pass@cache.example.com:6379
    useRedisSets: true
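To avoid hard-coding credentials, you can reference an environment variable instead; app-config.yaml supports ${VAR} substitution, and REDIS_CONNECTION here is a variable name chosen for this sketch:

backend:
  cache:
    store: redis
    # REDIS_CONNECTION is an environment variable you define,
    # for example redis://user:pass@cache.example.com:6379
    connection: ${REDIS_CONNECTION}
    useRedisSets: true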
8.2. Configuring Redis Cache in Red Hat Developer Hub
8.2.1. useRedisSets
The useRedisSets option lets you decide whether to use Redis sets for key management. By default, this option is set to true.

When useRedisSets is enabled (true):
- A namespace for the Redis sets is created, and all generated keys are added to that namespace, enabling group management of the keys.
- When a key is deleted, it is removed from the main storage and from the Redis set.
- When using the clear function to delete all keys, every key in the Redis set is checked for deletion, and the set itself is also removed.
In high-performance scenarios, enabling useRedisSets can result in memory leaks. If you are running a high-performance application or service, you must set useRedisSets to false.
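For example, a minimal sketch that disables Redis sets, reusing the connection setting shown earlier:

backend:
  cache:
    store: redis
    connection: redis://user:pass@cache.example.com:6379
    useRedisSets: false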
When you set useRedisSets to false, the keys are handled individually and Redis sets are not utilized. This configuration might lead to performance issues in production when using the clear function, because it requires iterating over all keys for deletion.