1 - MicroShift with Advanced Cluster Management

Manage the MicroShift cluster through Red Hat Advanced Cluster Management (RHACM).

MicroShift with Advanced Cluster Management

Managing through RHACM (Red Hat Advanced Cluster Management) works just as it does for any other imported managed cluster (see [docs]). However, because secure production deployments do not provide any form of remote access to the cluster via ssh or kubectl, the recommended approach is to define a new cluster with ACM to obtain managed cluster credentials, then use the device (configuration) management agent of your choice to synchronize those credentials to the device and have MicroShift apply them automatically.

Using RHACM to manage the lifecycle of applications running on MicroShift is currently only available on AMD64-based systems. Starting with RHACM 2.5, this application management functionality will also be available on ARM-based architectures.

The steps below assume that RHACM has been installed on a cluster recognized as the hub cluster and that MicroShift is installed on a separate cluster referred to as the managed cluster.

Defining the managed cluster in hub cluster

The following steps can be performed in the RHACM UI or on the CLI. On the RHACM hub cluster, run the following commands to define the MicroShift cluster as the managed cluster:

NOTE: Ensure you set the CLUSTER_NAME to a unique value that relates to the MicroShift cluster.

export CLUSTER_NAME=microshift

oc new-project ${CLUSTER_NAME}

oc label namespace ${CLUSTER_NAME} cluster.open-cluster-management.io/managedCluster=${CLUSTER_NAME}

Apply the following to define the managed MicroShift cluster.

cat <<EOF | oc apply -f -
apiVersion: agent.open-cluster-management.io/v1
kind: KlusterletAddonConfig
metadata:
  name: ${CLUSTER_NAME}
  namespace: ${CLUSTER_NAME}
spec:
  clusterName: ${CLUSTER_NAME}
  clusterNamespace: ${CLUSTER_NAME}
  applicationManager:
    enabled: true
  certPolicyController:
    enabled: true
  clusterLabels:
    cloud: auto-detect
    vendor: auto-detect
  iamPolicyController:
    enabled: true
  policyController:
    enabled: true
  searchCollector:
    enabled: true
EOF

cat <<EOF | oc apply -f -
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: ${CLUSTER_NAME}
spec:
  hubAcceptsClient: true
EOF

This will generate a secret named ${CLUSTER_NAME}-import in the ${CLUSTER_NAME} namespace. Extract the import.yaml from that secret and pull the bootstrap kubeconfig out of it; this requires yq to be installed.

IMPORT=$(oc get secret "${CLUSTER_NAME}-import" -n "${CLUSTER_NAME}" -o jsonpath='{.data.import\.yaml}' | base64 --decode)
IMPORT_KUBECONFIG=$(echo "${IMPORT}" | yq eval-all '. | select(.metadata.name == "bootstrap-hub-kubeconfig") | .data.kubeconfig' -)

Importing the managed MicroShift cluster to the hub cluster

The import can be completed automatically by the RHACM components running on the hub cluster once the following steps are performed on the managed MicroShift cluster. A detailed explanation can be found in the RHACM documentation.

Prepare the manifests

A set of Kustomize-based Kubernetes manifests can be found in this repo. The repo contains more than just manifests, but here we only focus on the manifests folder. Before syncing the manifests to the MicroShift node, run the following commands to render them:

sed -i "s/{{ .clustername }}/${CLUSTER_NAME}/g" manifests/klusterlet.yaml
sed -i "s/{{ .kubeconfig }}/${IMPORT_KUBECONFIG}/g" manifests/klusterlet-kubeconfighub.yaml

Sync manifests to MicroShift node

The next step is to sync the manifests to the MicroShift node. MicroShift can auto-apply manifests: once it finds a kustomization.yaml file in ${DATADIR}/manifests (which defaults to /var/lib/microshift/manifests), it runs kubectl apply -k automatically on start-up. The rendered manifests therefore need to be synced to ${DATADIR}/manifests.

Syncing the manifests to the managed MicroShift cluster can be done with any GitOps tool that fetches the Kubernetes Kustomize manifests and places them in the directory described above; for example, the Transmission tool can pull updates and apply them transactionally on ostree-based Linux operating systems.
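
For a quick manual test, the rendered manifests can simply be copied into place and MicroShift restarted. A minimal sketch, assuming the default data directory and that the manifests folder includes its kustomization.yaml:

sudo mkdir -p /var/lib/microshift/manifests
sudo cp manifests/* /var/lib/microshift/manifests/
sudo systemctl restart microshift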

MicroShift then auto-applies those manifests and registers with the RHACM hub cluster.

The cluster should now have all add-ons enabled and be in a READY state within RHACM.
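
As a minimal check from the hub cluster, the import status can be inspected with the command below; the add-on pods on the managed cluster may take a few minutes to come up:

oc get managedcluster ${CLUSTER_NAME}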

2 - Deploying MicroShift behind Proxy

How to configure the host OS so MicroShift can work behind a proxy.

When deploying MicroShift behind a proxy, configure the host OS to use the proxy for both yum and CRI-O.

Configuring HTTP(S) proxy for yum

To configure yum to use a proxy, add the following to /etc/yum.conf:

proxy=http://$PROXY_SERVER:$PROXY_PORT
proxy_username=$PROXY_USER
proxy_password=$PROXY_PASSWORD

Configuring HTTP(S) proxy for CRI-O or Podman

CRI-O and Podman are Go programs that use the built-in net/http package. To use an HTTP(S) proxy, set the HTTP_PROXY and HTTPS_PROXY environment variables (and optionally the NO_PROXY variable to exclude a list of hosts from being proxied). For CRI-O, add the following to /etc/systemd/system/crio.service.d/00-proxy.conf:

[Service]
Environment=NO_PROXY="localhost,127.0.0.1,10.42.0.0/16,10.43.0.0/16"
Environment=HTTP_PROXY="http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"
Environment=HTTPS_PROXY="http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"
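
Podman, in contrast, is usually invoked directly rather than run as a systemd service, so exporting the same variables in the invoking shell (or a profile script) is sufficient. A minimal sketch, assuming the placeholder values are replaced with real ones:

# same proxy variables as above, exported for the current shell
export HTTP_PROXY="http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"
export HTTPS_PROXY="${HTTP_PROXY}"
export NO_PROXY="localhost,127.0.0.1,10.42.0.0/16,10.43.0.0/16"
podman pull registry.access.redhat.com/ubi8/ubi-minimal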

Reload systemd and restart CRI-O so the new drop-in configuration takes effect:

sudo systemctl daemon-reload
sudo systemctl restart crio

3 - Private registries and pull secrets

MicroShift may need access to a private registry. Access can be granted via a registry login or a pull secret.

MicroShift may not have the pull secret for the registry that you are trying to use. For example, MicroShift does not ship with the pull secret for registry.redhat.io. There are a few approaches to using such a registry.

Pulling Container Images From Private Registries

Use Podman to Authenticate to a Registry

podman login registry.redhat.io

Once the podman login is complete, MicroShift will be able to pull images from this registry. This approach works across namespaces.

This approach assumes podman is installed, which might not be true for all MicroShift environments. For example, if MicroShift is installed via RPM, CRI-O is installed as a dependency but podman is not. In that case, one can either install podman separately or use one of the other approaches described below.
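
On Fedora or RHEL-based systems, installing podman separately is a single command:

sudo dnf install -y podman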

Authenticate to a Registry With a Pull-Secret

The second approach is to create a pull secret and then let the service account use it. This approach works within a single namespace. For example, if the pull secret is stored in a JSON-formatted file secret.json:

# First create the secret in a namespace
oc create secret generic my-pull-secret \
    --from-file=.dockerconfigjson=secret.json \
    --type=kubernetes.io/dockerconfigjson

Alternatively, you can use your container manager configuration file to create the secret:

# First create the secret in a namespace using the container manager configuration file
oc create secret generic my-pull-secret \
    --from-file=.dockerconfigjson=.docker/config.json \
    --type=kubernetes.io/dockerconfigjson

Finally, set the secret as the default for pulling:

# Then attach the secret to a service account in the namespace
oc secrets link default my-pull-secret --for=pull

Instead of attaching the secret to a service account, you can also specify the pull secret in the pod spec. Refer to this Kubernetes document for more details.
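
For reference, a minimal sketch of such a pod spec; the pod and container names are placeholders, the image is just an example of one requiring authentication, and the secret name matches the one created above:

apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod
spec:
  containers:
  - name: app
    image: registry.redhat.io/ubi8/ubi-minimal  # example image from a registry requiring authentication
  imagePullSecrets:
  - name: my-pull-secret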

4 - Deploy a basic application

MicroShift operates similarly to many other Kubernetes providers. This means that you can use the same tools to deploy and manage your applications.

All of the standard Kubernetes management tools can be used to maintain and modify your MicroShift applications. Below we will show some examples using oc, kustomize, and helm to deploy and maintain applications.

Example Applications

MetalLB

MetalLB is a load balancer that can be used to route traffic to a number of backends.

Create the MetalLB namespace and deployment:

oc apply -f https://raw.githubusercontent.com/metallb/metallb/v0.11.0/manifests/namespace.yaml
oc apply -f https://raw.githubusercontent.com/metallb/metallb/v0.11.0/manifests/metallb.yaml

Once the components are available, a ConfigMap is required to define the address pool for the load balancer to use.

Create the MetalLB ConfigMap:

oc create -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250    
EOF

Now we can deploy a test application to verify things are working as expected.

oc create ns test
oc create deployment nginx -n test --image nginx

Create a service:

oc create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: test
  annotations:
    metallb.universe.tf/address-pool: default
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
EOF

Verify the service exists and that an IP address has been assigned.

oc get svc -n test
NAME    TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx   LoadBalancer   10.43.183.104   192.168.1.241   80:32434/TCP   29m

Using your browser, you can now access the NGINX application at the EXTERNAL-IP provided by the service.
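
The same check can be done from the command line; the address below is the EXTERNAL-IP from the example output and will differ in your environment:

curl http://192.168.1.241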

5 - Dynamic Provisioning of PVs

MicroShift's storage solution can provision persistent volumes dynamically based on claims.

MicroShift deploys the hostpath provisioner as its solution for providing persistent storage to pods. The hostpath provisioner pod mounts the /var/hpvolumes directory in order to provision volumes. It also has the ability to dynamically provision PVs when a PVC is created, waiting until a pod uses that specific PVC.

Let's see how to create a PVC so the hostpath provisioner creates the persistent volume for us.

Create a Persistent Volume Claim

MicroShift's hostpath provisioner creates a StorageClass named kubevirt-hostpath-provisioner by default.
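
You can confirm the StorageClass exists, and look up the node name needed for the annotation described below:

oc get storageclass
oc get nodes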

The PVC manifest must reference this StorageClass via the storageClassName spec field, and it must carry an annotation pointing at the node where the PV is going to be created (the node name as reported by oc get nodes). This annotation is crucial for dynamic provisioning of PVs:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
  annotations:
    kubevirt.io/provisionOnNode: ricky-fedora.oglok.net
spec:
  storageClassName: kubevirt-hostpath-provisioner
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

Applying this manifest creates the following Persistent Volume Claim and a Persistent Volume backed by a directory under /var/hpvolumes/.

$ oc get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                    AGE
task-pv-claim   Bound    pvc-58a28c40-7726-4830-ba70-32d18188a8b4   39Gi       RWO            kubevirt-hostpath-provisioner   8m43s
$ oc get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS                    REASON   AGE
pvc-58a28c40-7726-4830-ba70-32d18188a8b4   39Gi       RWO            Delete           Bound    default/task-pv-claim   kubevirt-hostpath-provisioner            8m43s

$ ll /var/hpvolumes/
total 0
drwxrwxrwx. 1 root root 8 Apr  5 10:26 pvc-58a28c40-7726-4830-ba70-32d18188a8b4

For the sake of clarity, we will instantiate a sample NGINX pod that mounts that volume:

apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage

Any HTML file located at /var/hpvolumes/pvc-58a28c40-7726-4830-ba70-32d18188a8b4 can then be served by the NGINX instance running in the pod and exposed through a regular service.
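
As a quick check (a sketch only; the PVC directory name is taken from the example output above and will differ on your system), drop an index.html into the volume on the node and reach the pod through a port-forward:

echo 'Hello from MicroShift' | sudo tee /var/hpvolumes/pvc-58a28c40-7726-4830-ba70-32d18188a8b4/index.html
oc port-forward pod/task-pv-pod 8080:80 &
curl http://localhost:8080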

6 - Offline/disconnected container images

Offline containers are container images which are stored in the operating system image and made available to CRI-O via the additionalimagestores list in /etc/containers/storage.conf.

What are offline container images

Offline containers are container images which are stored in the operating system (or, for an ostree-based system, in the operating system image) and made available to CRI-O via the additionalimagestores list in /etc/containers/storage.conf.

Those container images are accessible to CRI-O for creating containers. They cannot be deleted, but newer versions of them can be downloaded normally, and CRI-O stores those in the system's general read/write container storage.
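
For illustration, the read-only store is declared under [storage.options] in /etc/containers/storage.conf; the path shown here is only an example, the actual path is added by the packaging described further below:

[storage.options]
additionalimagestores = [
  "/usr/lib/containers/storage",
]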

When to use offline container images

Offline containers are useful when the edge device will have restricted connectivity, or no connectivity at all. They also help improve MicroShift and application startup on first boot, since no images need to be downloaded from the network and the applications are readily available to CRI-O.

RPM packaging of container images

RPM packaging of container images into read-only container storage is offered via the paack tool as an experimental method that lets users create ostree images containing the desired containers. RPM was not designed for storing files with numeric uids/gids or with extended attributes; although several workarounds make this possible, we are looking for better ways to provide it.

Offline MicroShift containers images

MicroShift uses a set of container images for its minimal components, which can be installed on the operating system image. These images are published here and can also be built manually using packaging/rpm/make-microshift-images-rpm.sh.

To install the MicroShift container images you can use:

curl -L -o /etc/yum.repos.d/microshift-containers.repo \
          https://copr.fedorainfracloud.org/coprs/g/redhat-et/microshift-containers/repo/fedora-35/group_redhat-et-microshift-containers-fedora-35.repo

rpm-ostree install microshift-containers

Or simply include this package when using image-builder.
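
After installation, the images in the read-only store should be visible to CRI-O; a quick check, assuming crictl is available on the host:

sudo crictl images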

How to package your application and manifests as RPMs for offline container storage

To package workload application container images, we provide packaging/rpm/paack.py. This tool accepts a YAML definition; an example can be found here.

The tool can produce an srpm, rpm, or push a build to a copr repository.

Some example usages:

./paack.py rpm example-user-containers.yaml centos-stream-9-aarch64

The target OS (centos-stream-9 in this example) is not important, but the chosen target must be compatible with the destination architecture.

./paack.py srpm example-user-containers.yaml

The produced srpm contains the repository binaries and manifests for each architecture; the build system then unpacks the specific architecture for the build. The RPM post-install step configures additionalimagestores in /etc/containers/storage.conf.

./paack.py copr example-user-containers.yaml mangelajo/my-app-containers