Installing Red Hat Developer Hub on Amazon Elastic Kubernetes Service
Preface
You can install Red Hat Developer Hub on Amazon Elastic Kubernetes Service (EKS) using one of the following methods:
- The Red Hat Developer Hub Operator
- The Red Hat Developer Hub Helm chart
Chapter 1. Installing Developer Hub on EKS with the Operator
You can install the Red Hat Developer Hub Operator with or without the Operator Lifecycle Manager (OLM) framework.
Additional resources
- For information about the OLM, see Operator Lifecycle Manager (OLM) documentation.
1.1. Installing the Developer Hub Operator with the OLM framework
You can install the Developer Hub Operator on EKS using the Operator Lifecycle Manager (OLM) framework. After the Operator is installed, you can deploy your Developer Hub instance on EKS.
Prerequisites
- You have set the context to the EKS cluster in your current kubeconfig. For more information, see Creating or updating a kubeconfig file for an Amazon EKS cluster.
- You have installed kubectl. For more information, see Installing or updating kubectl.
- You have subscribed to registry.redhat.io. For more information, see Red Hat Container Registry Authentication.
- You have installed the Operator Lifecycle Manager (OLM). For more information about installation and troubleshooting, see How do I get Operator Lifecycle Manager?
Procedure
Run the following command in your terminal to create the rhdh-operator namespace where the Operator is installed:

kubectl create namespace rhdh-operator
Create a pull secret using the following command:
kubectl -n rhdh-operator create secret docker-registry rhdh-pull-secret \
  --docker-server=registry.redhat.io \
  --docker-username=<user_name> \
  --docker-password=<password> \
  --docker-email=<email>
The created pull secret is used to pull the Developer Hub images from the Red Hat Ecosystem.
Create a CatalogSource resource that contains the Operators from the Red Hat Ecosystem:

cat <<EOF | kubectl -n rhdh-operator apply -f -
apiVersion: operators.coreos.com/v1alpha1
kind: CatalogSource
metadata:
  name: redhat-catalog
spec:
  sourceType: grpc
  image: registry.redhat.io/redhat/redhat-operator-index:v4.16
  secrets:
  - "rhdh-pull-secret"
  displayName: Red Hat Operators
EOF
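Optionally, before creating the OperatorGroup and Subscription, you can check that the catalog source is being served. This check is not part of the documented procedure; it assumes the redhat-catalog name used above and prints READY once the index image is available:

kubectl -n rhdh-operator get catalogsource redhat-catalog \
  -o jsonpath='{.status.connectionState.lastObservedState}{"\n"}'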
Create an OperatorGroup resource as follows:

cat <<EOF | kubectl apply -n rhdh-operator -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: rhdh-operator-group
EOF
Create a Subscription resource using the following code:

cat <<EOF | kubectl apply -n rhdh-operator -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: rhdh
  namespace: rhdh-operator
spec:
  channel: fast
  installPlanApproval: Automatic
  name: rhdh
  source: redhat-catalog
  sourceNamespace: rhdh-operator
  startingCSV: rhdh-operator.v1.3.0
EOF
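Optionally, you can confirm that OLM has resolved the Subscription and installed the corresponding ClusterServiceVersion (CSV). This is an extra check, not part of the documented procedure; it assumes the rhdh Subscription name used above:

kubectl -n rhdh-operator get subscriptions.operators.coreos.com rhdh \
  -o jsonpath='{.status.installedCSV}{"\n"}'
kubectl -n rhdh-operator get csv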
Run the following command to verify that the created Operator is running:
kubectl -n rhdh-operator get pods -w
If the operator pod shows an ImagePullBackOff status, then you might need to add the pull secret directly to the Operator deployment's manifest so that the image can be pulled.

Tip: You can include the required secret name in the deployment.spec.template.spec.imagePullSecrets list and verify the deployment name using the kubectl get deployment -n rhdh-operator command:

kubectl -n rhdh-operator patch deployment \
  rhdh.fast --patch '{"spec":{"template":{"spec":{"imagePullSecrets":[{"name":"rhdh-pull-secret"}]}}}}' \
  --type=merge
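After applying the patch, you can optionally confirm that the pull secret is now referenced by the Operator deployment. The following command assumes the rhdh.fast deployment name used in the patch above and prints rhdh-pull-secret when the patch has been applied:

kubectl -n rhdh-operator get deployment rhdh.fast \
  -o jsonpath='{.spec.template.spec.imagePullSecrets[*].name}{"\n"}'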
Update the default configuration of the operator to ensure that Developer Hub resources can start correctly in EKS using the following steps:
Edit the backstage-default-config ConfigMap in the rhdh-operator namespace using the following command:

kubectl -n rhdh-operator edit configmap backstage-default-config
Locate the db-statefulset.yaml string and add the fsGroup to its spec.template.spec.securityContext, as shown in the following example:

db-statefulset.yaml: |
  apiVersion: apps/v1
  kind: StatefulSet
  --- TRUNCATED ---
  spec:
    --- TRUNCATED ---
      restartPolicy: Always
      securityContext:
        # You can assign any random value as fsGroup
        fsGroup: 2000
      serviceAccount: default
      serviceAccountName: default
  --- TRUNCATED ---
Locate the deployment.yaml string and add the fsGroup to its specification, as shown in the following example:

deployment.yaml: |
  apiVersion: apps/v1
  kind: Deployment
  --- TRUNCATED ---
  spec:
    securityContext:
      # You can assign any random value as fsGroup
      fsGroup: 3000
    automountServiceAccountToken: false
  --- TRUNCATED ---
Locate the service.yaml string and change the type to NodePort as follows:

service.yaml: |
  apiVersion: v1
  kind: Service
  spec:
    # NodePort is required for the ALB to route to the Service
    type: NodePort
  --- TRUNCATED ---
Save and exit.
Wait for a few minutes until the changes are automatically applied to the operator pods.
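Optionally, you can spot-check that your edits were saved before moving on. This is not part of the documented procedure; it simply searches the ConfigMap for the values you added or changed:

kubectl -n rhdh-operator get configmap backstage-default-config -o yaml \
  | grep -E 'fsGroup|type: NodePort'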
1.2. Installing the Developer Hub Operator without the OLM framework
Prerequisites
You have installed the following commands:
- git
- make
- sed
Procedure
Clone the Operator repository to your local machine using the following command:
git clone --depth=1 https://github.com/redhat-developer/rhdh-operator.git rhdh-operator && cd rhdh-operator
Run the following command and generate the deployment manifest:
make deployment-manifest
The previous command generates a file named rhdh-operator-<VERSION>.yaml, which must be updated manually in the following steps.

Run the following commands to apply replacements in the generated deployment manifest:

sed -i "s/backstage-operator/rhdh-operator/g" rhdh-operator-*.yaml
sed -i "s/backstage-system/rhdh-operator/g" rhdh-operator-*.yaml
sed -i "s/backstage-controller-manager/rhdh-controller-manager/g" rhdh-operator-*.yaml
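Optionally, you can confirm that the replacements were applied. This check is not part of the documented procedure; it prints any remaining occurrences of the old names, so no output means the substitutions succeeded:

grep -nE 'backstage-operator|backstage-system|backstage-controller-manager' rhdh-operator-*.yaml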
Open the generated deployment manifest file in an editor and perform the following steps:
Locate the db-statefulset.yaml string and add the fsGroup to its spec.template.spec.securityContext, as shown in the following example:

db-statefulset.yaml: |
  apiVersion: apps/v1
  kind: StatefulSet
  --- TRUNCATED ---
  spec:
    --- TRUNCATED ---
      restartPolicy: Always
      securityContext:
        # You can assign any random value as fsGroup
        fsGroup: 2000
      serviceAccount: default
      serviceAccountName: default
  --- TRUNCATED ---
Locate the deployment.yaml string and add the fsGroup to its specification, as shown in the following example:

deployment.yaml: |
  apiVersion: apps/v1
  kind: Deployment
  --- TRUNCATED ---
  spec:
    securityContext:
      # You can assign any random value as fsGroup
      fsGroup: 3000
    automountServiceAccountToken: false
  --- TRUNCATED ---
Locate the service.yaml string and change the type to NodePort as follows:

service.yaml: |
  apiVersion: v1
  kind: Service
  spec:
    # NodePort is required for the ALB to route to the Service
    type: NodePort
  --- TRUNCATED ---
Replace the default images with the images that are pulled from the Red Hat Ecosystem:

sed -i "s#gcr.io/kubebuilder/kube-rbac-proxy:.*#registry.redhat.io/openshift4/ose-kube-rbac-proxy:v4.16#g" rhdh-operator-*.yaml
sed -i -E "s#(quay.io/janus-idp/operator:.*|quay.io/rhdh-community/operator:.*)#registry.redhat.io/rhdh/rhdh-rhel9-operator:1.3#g" rhdh-operator-*.yaml
sed -i "s#quay.io/janus-idp/backstage-showcase:.*#registry.redhat.io/rhdh/rhdh-hub-rhel9:1.3#g" rhdh-operator-*.yaml
sed -i "s#quay.io/fedora/postgresql-15:.*#registry.redhat.io/rhel9/postgresql-15:latest#g" rhdh-operator-*.yaml

Note: The second command uses the -E flag so that the grouped alternation pattern is treated as an extended regular expression.
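Optionally, you can verify that none of the upstream image references remain in the manifest. This check is not part of the documented procedure; no output means all of the images now point to registry.redhat.io:

grep -nE 'gcr\.io/kubebuilder|quay\.io/janus-idp/(operator|backstage-showcase)|quay\.io/rhdh-community/operator|quay\.io/fedora/postgresql-15' rhdh-operator-*.yaml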
Add the image pull secret to the manifest in the Deployment resource as follows:

--- TRUNCATED ---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app.kubernetes.io/component: manager
    app.kubernetes.io/created-by: rhdh-operator
    app.kubernetes.io/instance: controller-manager
    app.kubernetes.io/managed-by: kustomize
    app.kubernetes.io/name: deployment
    app.kubernetes.io/part-of: rhdh-operator
    control-plane: controller-manager
  name: rhdh-controller-manager
  namespace: rhdh-operator
spec:
  replicas: 1
  selector:
    matchLabels:
      control-plane: controller-manager
  template:
    metadata:
      annotations:
        kubectl.kubernetes.io/default-container: manager
      labels:
        control-plane: controller-manager
    spec:
      imagePullSecrets:
        - name: rhdh-pull-secret
--- TRUNCATED ---
Apply the manifest to deploy the operator using the following command:
kubectl apply -f rhdh-operator-<VERSION>.yaml
Run the following command to verify that the Operator is running:
kubectl -n rhdh-operator get pods -w
1.3. Deploying the Developer Hub instance on EKS with the Operator
Prerequisites
- A cluster administrator has installed the Red Hat Developer Hub Operator.
- You have an EKS cluster with AWS Application Load Balancer (ALB) add-on installed. For more information, see Application load balancing on Amazon Elastic Kubernetes Service and Installing the AWS Load Balancer Controller add-on.
- You have configured a domain name for your Developer Hub instance. The domain name can be a hosted zone entry on Route 53 or managed outside of AWS. For more information, see Configuring Amazon Route 53 as your DNS service documentation.
- You have an entry in the AWS Certificate Manager (ACM) for your preferred domain name. Make sure to keep a record of your Certificate ARN.
- You have subscribed to registry.redhat.io. For more information, see Red Hat Container Registry Authentication.
- You have set the context to the EKS cluster in your current kubeconfig. For more information, see Creating or updating a kubeconfig file for an Amazon EKS cluster.
- You have installed kubectl. For more information, see Installing or updating kubectl.
Procedure
Create a ConfigMap named app-config-rhdh containing the Developer Hub configuration using the following template:

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config-rhdh
data:
  "app-config-rhdh.yaml": |
    app:
      title: Red Hat Developer Hub
      baseUrl: https://<rhdh_dns_name>
    backend:
      auth:
        externalAccess:
          - type: legacy
            options:
              subject: legacy-default-config
              secret: "${BACKEND_SECRET}"
      baseUrl: https://<rhdh_dns_name>
      cors:
        origin: https://<rhdh_dns_name>
Create a Secret named secrets-rhdh and add a key named BACKEND_SECRET with a Base64-encoded string as the value:

apiVersion: v1
kind: Secret
metadata:
  name: secrets-rhdh
stringData:
  # TODO: See https://backstage.io/docs/auth/service-to-service-auth/#setup
  BACKEND_SECRET: "xxx"
Important: Ensure that you use a unique value of BACKEND_SECRET for each Developer Hub instance.

You can use the following command to generate a key:

node -p 'require("crypto").randomBytes(24).toString("base64")'
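After setting BACKEND_SECRET, apply both resources to the namespace where you plan to deploy Developer Hub. The file names below are only examples; they assume you saved the ConfigMap and Secret templates above to local files with those names:

kubectl apply -f app-config-rhdh.yaml -n <your_namespace>
kubectl apply -f secrets-rhdh.yaml -n <your_namespace>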
To enable pulling the PostgreSQL image from the Red Hat Ecosystem Catalog, add the image pull secret in the default service account within the namespace where the Developer Hub instance is being deployed:

kubectl patch serviceaccount default \
  -p '{"imagePullSecrets": [{"name": "rhdh-pull-secret"}]}' \
  -n <your_namespace>
Create a Custom Resource file using the following template:
apiVersion: rhdh.redhat.com/v1alpha1
kind: Backstage
metadata:
  # TODO: this is the name of your Developer Hub instance
  name: my-rhdh
spec:
  application:
    imagePullSecrets:
      - "rhdh-pull-secret"
    route:
      enabled: false
    appConfig:
      configMaps:
        - name: "app-config-rhdh"
    extraEnvs:
      secrets:
        - name: "secrets-rhdh"
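Apply the Custom Resource to the same namespace and watch the Developer Hub pods start. The file name below is only an example of where you saved the template above; pod names can vary, but they are derived from the my-rhdh instance name:

kubectl apply -f my-rhdh.yaml -n <your_namespace>
kubectl -n <your_namespace> get pods -w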
Create an Ingress resource using the following template, customizing the names as needed:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  # TODO: this is the name of your Developer Hub Ingress
  name: my-rhdh
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    # TODO: Using an ALB HTTPS Listener requires a certificate for your own domain. Fill in the ARN of your certificate, e.g.:
    alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:us-xxx:xxxx:certificate/xxxxxx
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
    alb.ingress.kubernetes.io/ssl-redirect: '443'
    # TODO: Set your application domain name.
    external-dns.alpha.kubernetes.io/hostname: <rhdh_dns_name>
spec:
  ingressClassName: alb
  rules:
    # TODO: Set your application domain name.
    - host: <rhdh_dns_name>
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                # TODO: my-rhdh is the name of your Backstage Custom Resource.
                # Adjust if you changed it!
                name: backstage-my-rhdh
                port:
                  name: http-backend
In the previous template, replace <rhdh_dns_name> with your Developer Hub domain name and update the value of alb.ingress.kubernetes.io/certificate-arn with your certificate ARN.
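Optionally, once the Ingress is created, you can check that the AWS Load Balancer Controller has provisioned an ALB for it. This is an extra check that assumes the my-rhdh Ingress name from the template above; the ADDRESS column shows the ALB host name when provisioning is complete:

kubectl -n <your_namespace> get ingress my-rhdh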
Verification
Wait until the DNS name is responsive, indicating that your Developer Hub instance is ready for use.
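While you wait, one way to check readiness, assuming <rhdh_dns_name> is your configured domain name, is to request the Developer Hub URL and confirm that it returns an HTTP response:

curl -I https://<rhdh_dns_name>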
Chapter 2. Installing Developer Hub on EKS with the Helm chart
When you install the Developer Hub Helm chart in Elastic Kubernetes Service (EKS), it orchestrates the deployment of a Developer Hub instance, which provides a robust developer platform within the AWS ecosystem.
Prerequisites
- You have an EKS cluster with AWS Application Load Balancer (ALB) add-on installed. For more information, see Application load balancing on Amazon Elastic Kubernetes Service and Installing the AWS Load Balancer Controller add-on.
- You have configured a domain name for your Developer Hub instance. The domain name can be a hosted zone entry on Route 53 or managed outside of AWS. For more information, see Configuring Amazon Route 53 as your DNS service documentation.
- You have an entry in the AWS Certificate Manager (ACM) for your preferred domain name. Make sure to keep a record of your Certificate ARN.
- You have subscribed to registry.redhat.io. For more information, see Red Hat Container Registry Authentication.
- You have set the context to the EKS cluster in your current kubeconfig. For more information, see Creating or updating a kubeconfig file for an Amazon EKS cluster.
- You have installed kubectl. For more information, see Installing or updating kubectl.
- You have installed Helm 3 or later. For more information, see Using Helm with Amazon EKS.
Procedure
Go to your terminal and run the following command to add the Helm chart repository containing the Developer Hub chart to your local Helm registry:
helm repo add openshift-helm-charts https://charts.openshift.io/
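Optionally, you can refresh the repository and list the chart versions that are available before installing. These commands are not required by the procedure; they only help you pick the version to pass to helm install later:

helm repo update
helm search repo openshift-helm-charts/redhat-developer-hub --versions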
Create a pull secret using the following command:
kubectl create secret docker-registry rhdh-pull-secret \
  --docker-server=registry.redhat.io \
  --docker-username=<user_name> \
  --docker-password=<password> \
  --docker-email=<email>
The created pull secret is used to pull the Developer Hub images from the Red Hat Ecosystem.
Create a file named values.yaml using the following template:

global:
  # TODO: Set your application domain name.
  host: <your Developer Hub domain name>

route:
  enabled: false

upstream:
  service:
    # NodePort is required for the ALB to route to the Service
    type: NodePort

  ingress:
    enabled: true
    annotations:
      kubernetes.io/ingress.class: alb
      alb.ingress.kubernetes.io/scheme: internet-facing
      # TODO: Using an ALB HTTPS Listener requires a certificate for your own domain. Fill in the ARN of your certificate, e.g.:
      alb.ingress.kubernetes.io/certificate-arn: arn:aws:acm:xxx:xxxx:certificate/xxxxxx
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS":443}]'
      alb.ingress.kubernetes.io/ssl-redirect: '443'
      # TODO: Set your application domain name.
      external-dns.alpha.kubernetes.io/hostname: <your rhdh domain name>

  backstage:
    image:
      pullSecrets:
      - rhdh-pull-secret
    podSecurityContext:
      # you can assign any random value as fsGroup
      fsGroup: 2000

  postgresql:
    image:
      pullSecrets:
      - rhdh-pull-secret
    primary:
      podSecurityContext:
        enabled: true
        # you can assign any random value as fsGroup
        fsGroup: 3000
    volumePermissions:
      enabled: true
Run the following command in your terminal to deploy Developer Hub using the latest version of the Helm chart and the values.yaml file that you created in the previous step:

helm install rhdh \
  openshift-helm-charts/redhat-developer-hub \
  [--version 1.3.0] \
  --values /path/to/values.yaml
For the latest chart version, see https://github.com/openshift-helm-charts/charts/tree/main/charts/redhat/redhat/redhat-developer-hub
Verification
Wait until the DNS name is responsive, indicating that your Developer Hub instance is ready for use.
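While you wait, the following commands can help confirm that the release deployed correctly. They assume the rhdh release name used above and the namespace that you installed into:

helm status rhdh
kubectl get pods
kubectl get ingress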