User Documentation

After MicroShift is up and running, what's next? Follow the user documentation to explore. Here you'll find a collection of HowTos to get you going.

1 - Disconnected deployment

MicroShift can run without internet connectivity.

WIP: Content coming soon.

Pre-load MicroShift image tarball into CRI-O
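
A sketch of one possible approach (not yet the official procedure), assuming an exported image tarball named microshift-images.tar (hypothetical filename) and the default container storage under /var/lib/containers shared by root podman and CRI-O:

# Load the tarball into the root container storage that CRI-O reads from
# (microshift-images.tar is a placeholder for your exported image tarball).
sudo podman load -i microshift-images.tar

# Verify that the pre-loaded images are visible to CRI-O (requires cri-tools).
sudo crictl images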

2 - Configuring MicroShift

Configuration options with MicroShift

Configuration

MicroShift can be configured in three simple ways, listed in order of precedence:

  • Command-line arguments
  • Environment variables
  • Configuration file

An example configuration can be found here.

Below is a table of the configuration settings presently offered in MicroShift, along with the ways they can be set, what they mean, and their default values. A sample configuration file is sketched after the table.

MicroshiftConfig field | CLI Argument | Environment Variable | Configuration File | Meaning | Default
DataDir | --data-dir | MICROSHIFT_DATADIR | .dataDir | Data directory for MicroShift | "~/.microshift/data"
LogDir | --log-dir | MICROSHIFT_LOGDIR | .logDir | Directory to output logfiles to | ""
LogVLevel | --v | MICROSHIFT_LOGVLEVEL | .logVLevel | Log verbosity level | 0
LogVModule | --vmodule | MICROSHIFT_LOGVMODULE | .logVModule | Log verbosity module | ""
LogAlsotostderr | --alsologtostderr | MICROSHIFT_LOGALSOTOSTDERR | .logAlsotostderr | Log into standard error as well | false
Roles | --roles | MICROSHIFT_ROLES | .roles | Roles available on the cluster | ["controlplane", "node"]
NodeName | n/a | MICROSHIFT_NODENAME | .nodeName | Name of the node to run MicroShift on | os.Hostname()
NodeIP | n/a | MICROSHIFT_NODEIP | .nodeIP | Node's IP | util.GetHostIP()
Cluster.URL | n/a | n/a | .cluster.url | URL that the cluster will run on | "https://127.0.0.1:6443"
Cluster.ClusterCIDR | n/a | n/a | .cluster.clusterCIDR | Cluster's CIDR | "10.42.0.0/16"
Cluster.ServiceCIDR | n/a | n/a | .cluster.serviceCIDR | Service CIDR | "10.43.0.0/16"
Cluster.DNS | n/a | n/a | .cluster.dns | Cluster's DNS server | "10.43.0.10"
Cluster.Domain | n/a | n/a | .cluster.domain | Cluster's domain | "cluster.local"
ConfigFile | --config | n/a | n/a | Path to a config file used to populate the rest of the values | "~/.microshift/config.yaml" if the file exists, else /etc/microshift/config.yaml if it exists, else ""
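
For illustration, a minimal sketch of a configuration file using the field names from the Configuration File column above; the values shown are examples, not recommendations:

mkdir -p ~/.microshift

cat <<EOF > ~/.microshift/config.yaml
dataDir: ~/.microshift/data
logVLevel: 2
roles:
  - controlplane
  - node
cluster:
  url: https://127.0.0.1:6443
  clusterCIDR: 10.42.0.0/16
  serviceCIDR: 10.43.0.0/16
  dns: 10.43.0.10
  domain: cluster.local
EOF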

3 - Auto-applying Manifests

Automatically applying manifests for bootstrapping cluster services.

A common use case after bringing up a new cluster is applying manifests for bootstrapping a management agent like the Open Cluster Management's klusterlet or for starting up services when running disconnected.

MicroShift leverages kustomize for Kubernetes-native templating and declarative management of resource objects. Upon start-up, it searches /etc/microshift/manifests, /usr/lib/microshift/manifests and ${DATADIR}/manifests (which defaults to /var/lib/microshift/manifests) for a kustomization.yaml file. If it finds one, it automatically runs the equivalent of kubectl apply -k on that kustomization.

Example:

cat <<EOF >/etc/microshift/manifests/nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: NGINX_IMAGE
        ports:
        - containerPort: 8080
EOF

cat <<EOF >/etc/microshift/manifests/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
  - nginx.yaml
images:
  - name: NGINX_IMAGE
    newName: nginx:1.21
EOF
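
Since manifests are applied at start-up, restarting MicroShift picks up the example above; a quick way to verify, assuming MicroShift runs as the microshift systemd service:

sudo systemctl restart microshift
oc get deployment nginx-deployment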

The reason for providing multiple directories is to allow a flexible way of managing MicroShift workloads.

Location | Intent
/etc/microshift/manifests | R/W location for configuration management systems or development
/usr/lib/microshift/manifests | RO location, for embedding configuration manifests on ostree based systems
${DATADIR}/manifests | R/W location for backwards compatibility (deprecated)

The list of manifest locations can be customized via configuration using the manifests section (see here) or via the MICROSHIFT_MANIFESTS environment variable as a comma-separated list of directories.
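
For example, to add a custom manifest directory to the defaults (the extra path is illustrative):

export MICROSHIFT_MANIFESTS=/etc/microshift/manifests,/usr/lib/microshift/manifests,/opt/my-app/manifests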

4 - Networking

Understanding and configuring networking in MicroShift.

4.1 - Overview

Overview of the MicroShift networking.

MicroShift uses the host-configured networking, either statically configured or via DHCP. In the case of dynamic addresses, MicroShift will restart if an IP change is detected at runtime.

Connectivity to the K8s API endpoint is provided on the default port 6443 on the master host(s) IP addresses. If other services in the network must interact with the MicroShift API, connectivity can be established in any of the following ways:

  • DNS discovery, pre-configured on the network servers.
  • Direct IP address connectivity.
  • mDNS discovery via .local domain, see mDNS

Connectivity between Pods is handled by the CNI plugin on the Pod network range, which defaults to 10.42.0.0/16 and can be modified via the Cluster.ClusterCIDR configuration parameter; see more details in the corresponding sections.

Connectivity to services of type ClusterIP is provided by the embedded kube-proxy iptables-based implementation on the 10.43.0.0/16 range, which can be modified via the Cluster.ServiceCIDR configuration parameter.

4.2 - Exposing Services

Exposing services in MicroShift.

Services deployed in MicroShift can be exposed in multiple ways.

Routes and Ingresses

By default, an OpenShift router is created and exposed on host network ports 80/443. Routes or Ingresses can be used to expose HTTP or HTTPS services through the router.

Example

oc create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine
oc expose deployment nginx --port=8080
oc expose service/nginx --hostname=my-hostname.com

# assuming my-hostname.com being mapped to the MicroShift node IP
curl http://my-hostname.com

Route with mDNS host example

The hostname of a route can be a mDNS (.local) hostname, which would be then announced via mDNS, see the mDNS section for more details.

oc expose service/nginx --hostname=my-hostname.local
curl http://my-hostname.local

Service of type NodePort

Services of type NodePort are exposed on a dedicated port on all the cluster nodes; this port is routed internally to the active pods backing the service.

Example

oc create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine
oc expose deployment nginx --type=NodePort --name=nodeport-nginx --port 8080
NODEPORT=$(oc get service nodeport-nginx -o jsonpath='{.spec.ports[0].nodePort}')
IP=$(oc get node -A -o jsonpath='{.items[0].status.addresses[0].address}')
curl http://$IP:$NODEPORT/

To use NodePort services, open the 30000-32767 port range; see the firewall section.

Service of type LoadBalancer

Services of type LoadBalancer are not supported yet; this kind of service is normally backed by a load balancer in the underlying cloud.

Multiple alternatives are being explored to provide LoadBalancer VIPs on the LAN.

4.3 - Firewall

Firewall considerations for MicroShift

MicroShift does not require a firewall to run, but using one is recommended. If firewalld is used, the following ports should be considered:

Port(s) | Protocol(s) | Description
80 | TCP | HTTP port used to serve applications through the OpenShift router.
443 | TCP | HTTPS port used to serve applications through the OpenShift router.
6443 | TCP | HTTPS API port for the MicroShift API
5353 | UDP | mDNS service to respond for OpenShift route mDNS hosts
30000-32767 | TCP/UDP | Port range reserved for NodePort type of services, can be used to expose applications on the LAN

Additionally, pods need to be able to contact the internal CoreDNS server. One way to allow such connectivity, assuming the Pod IP range is 10.42.0.0/16, is:

sudo firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16

Firewalld

An example of enabling firewalld and opening all the above-mentioned ports is:

sudo dnf install -y firewalld
sudo systemctl enable firewalld --now
sudo firewall-cmd --zone=trusted --add-source=10.42.0.0/16 --permanent
sudo firewall-cmd --zone=public --add-port=80/tcp --permanent
sudo firewall-cmd --zone=public --add-port=443/tcp --permanent
sudo firewall-cmd --zone=public --add-port=6443/tcp --permanent
sudo firewall-cmd --zone=public --add-port=5353/udp --permanent
sudo firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30000-32767/udp --permanent
sudo firewall-cmd --reload

4.4 - mDNS

Embedded Multicast DNS support in MicroShift.

MicroShift includes an embedded mDNS server for deployment scenarios in which the authoritative DNS server cannot be reconfigured to point clients to services on MicroShift.

mDNS is a protocol used to allow name resolution and service discovery within a LAN using multicast exposed on the 5353/UDP port.

This allows .local domains exposed by MicroShift to be discovered by other elements on the Local Area Network.

Notes for Linux

mDNS resolution on Linux is provided by the avahi-daemon. For other Linux hosts to discover MicroShift services, or for workers to locate the master node via mDNS, avahi should be enabled:

sudo dnf install -y nss-mdns avahi
sudo hostnamectl set-hostname microshift-vm.local
sudo systemctl enable --now avahi-daemon.service

By default, only the minimal IPv4 mDNS resolver is enabled, which will only resolve top-level mDNS domains like hostname.local. If you want to use hostnames in the form of subdomain.domain.local, you need to enable the full mDNS resolver on the host trying to resolve those DNS entries:

# Run as root: allow the full resolver to handle any .local name
echo .local > /etc/mdns.allow
echo .local. >> /etc/mdns.allow
# Switch from the minimal to the full mDNS NSS module
sed -i 's/mdns4_minimal/mdns/g' /etc/nsswitch.conf
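
To check that multi-label .local names now resolve, a simple test (the hostname is illustrative):

getent hosts my-route.microshift-vm.local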

4.5 - CNI Plugin

The CNI Plugin used in MicroShift.

MicroShift uses the Flannel CNI network plugin as a lightweight (but less featureful) alternative to OpenShiftSDN or OVNKubernetes.

This provides pod connectivity between worker nodes via VXLAN tunnels.

For single-node operation, the crio-bridge plugin can be used for additional resource savings.

Neither flannel nor crio-bridge supports NetworkPolicy.

5 - HowTos

Solving common use cases with MicroShift.

5.1 - MicroShift with Advanced Cluster Management

Manage the MicroShift cluster through Red Hat Advanced Cluster Management (RHACM).

MicroShift with Advanced Cluster Management

Managing through RHACM (Red Hat Advanced Cluster Management) works just like for any other imported managed cluster (see the RHACM docs). However, as secure production deployments don't provide any form of remote access to the cluster via ssh or kubectl, the recommended approach is to define a new cluster with ACM to get managed cluster credentials, then use the device (configuration) management agent of your choice to synchronise those credentials to the device and have MicroShift apply them automatically.

The feature of using RHACM to manage the lifecycle of applications running on MicroShift is only available for AMD64-based systems. Starting with RHACM 2.5, the management functionality for applications running on MicroShift will also be available on ARM-based architectures.

The steps below assume that RHACM has been installed on a cluster recognized as the hub cluster and that MicroShift is installed on a separate cluster referred to as the managed cluster.

Defining the managed cluster in hub cluster

The following steps can be performed in the RHACM UI or on the CLI. On the RHACM hub cluster, run the following commands to define the MicroShift cluster as the managed cluster:

NOTE: Ensure you set the CLUSTER_NAME to a unique value that relates to the MicroShift cluster.

export CLUSTER_NAME=microshift

oc new-project ${CLUSTER_NAME}

oc label namespace ${CLUSTER_NAME} cluster.open-cluster-management.io/managedCluster=${CLUSTER_NAME}

Apply the following to define the managed MicroShift cluster.

cat <<EOF | oc apply -f -
apiVersion: agent.open-cluster-management.io/v1
kind: KlusterletAddonConfig
metadata:
  name: ${CLUSTER_NAME}
  namespace: ${CLUSTER_NAME}
spec:
  clusterName: ${CLUSTER_NAME}
  clusterNamespace: ${CLUSTER_NAME}
  applicationManager:
    enabled: true
  certPolicyController:
    enabled: true
  clusterLabels:
    cloud: auto-detect
    vendor: auto-detect
  iamPolicyController:
    enabled: true
  policyController:
    enabled: true
  searchCollector:
    enabled: true
EOF

cat <<EOF | oc apply -f -
apiVersion: cluster.open-cluster-management.io/v1
kind: ManagedCluster
metadata:
  name: ${CLUSTER_NAME}
spec:
  hubAcceptsClient: true
EOF

This will generate a secret named ${CLUSTER_NAME}-import in the ${CLUSTER_NAME} namespace. Extract the import.yaml and the crds.yaml from it; this requires yq to be installed.

IMPORT=$(oc get secret "$CLUSTER_NAME"-import -n "$CLUSTER_NAME" -o jsonpath='{.data.import\.yaml}' | base64 --decode)
IMPORT_KUBECONFIG=$(echo "$IMPORT" | yq eval-all '. | select(.metadata.name == "bootstrap-hub-kubeconfig") | .data.kubeconfig' -)

Importing the managed MicroShift cluster into the hub cluster

The importing process can be done automatically by the RHACM components running on the hub cluster once the following steps are performed on the managed MicroShift cluster. A detailed explanation can be found in the RHACM documentation.

Prepare the manifests

A list of the K8s manifests, based on Kustomize, can be found in this repo. The repo contains more than just manifests; however, we will only focus on the manifests folder. Before syncing the manifests to the MicroShift node, the following commands need to be run to render the manifests:

sed -i "s/{{ .clustername }}/${CLUSTER_NAME}/g" manifests/klusterlet.yaml
sed -i "s/{{ .kubeconfig }}/${IMPORT_KUBECONFIG}/g" manifests/klusterlet-kubeconfighub.yaml

Sync manifests to MicroShift node

The next step is to sync manifests to the MicroShift node. MicroShift has the feature of auto-applying manifests. Once it finds a kustomization.yaml file in ${DATADIR}/manifests (which defaults to /var/lib/microshift/manifests), kubectl apply -k will be run automatically upon start-up. The rendered manifests then need to be synced to ${DATADIR}/manifests.

The syncing of manifests to the managed MicroShift cluster can be done with any GitOps tool that fetches the Kubernetes Kustomize manifests and puts them in the directory described above; e.g. the Transmission tool can be used to pull updates and apply them transactionally on ostree-based Linux operating systems.
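
For a quick manual test without a GitOps tool, the rendered manifests can simply be copied into the default data directory and MicroShift restarted (paths assume the defaults mentioned above):

sudo mkdir -p /var/lib/microshift/manifests
sudo cp manifests/* /var/lib/microshift/manifests/
sudo systemctl restart microshift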

MicroShift auto-applies those manifests to register with the ACM cluster.

The cluster now should have all add-ons enabled and be in a READY state within RHACM.

5.2 - Deploying MicroShift behind Proxy

How to configure the host OS so MicroShift can work behind a proxy.

When deploying MicroShift behind a proxy, configure the host OS to use the proxy for both yum and CRI-O.

Configuring HTTP(S) proxy for yum

To configure yum to use a proxy, add the following to /etc/yum.conf:

proxy=http://$PROXY_SERVER:$PROXY_PORT
proxy_username=$PROXY_USER
proxy_password=$PROXY_PASSWORD

Configuring HTTP(S) proxy for CRI-O or Podman

CRI-O and Podman are Go programs that use the built-in net/http package. To use an HTTP(S) proxy, you need to set the HTTP_PROXY and HTTPS_PROXY environment variables, and optionally the NO_PROXY variable (to exclude a list of hosts from being proxied). For example, add the following to /etc/systemd/system/crio.service.d/00-proxy.conf:

[Service]
Environment=NO_PROXY="localhost,127.0.0.1,10.42.0.0/16,10.43.0.0/16"
Environment=HTTP_PROXY="http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"
Environment=HTTPS_PROXY="http://$PROXY_USER:$PROXY_PASSWORD@$PROXY_SERVER:$PROXY_PORT/"

Reload systemd and restart CRI-O:

sudo systemctl daemon-reload
sudo systemctl restart crio

5.3 - Private registries and pull secrets

MicroShift may need access to a private registry. Access can be granted via registry login or via a pull secret.

MicroShift may not have the pull secret for the registry that you are trying to use. For example, MicroShift does not have the pull secret for registry.redhat.io. In order to use this registry, there are a few approaches.

Pulling Container Images From Private Registries

Use Podman to Authenticate to a Registry

podman login registry.redhat.io

Once the podman login is complete, MicroShift will be able to pull images from this registry. This approach works across namespaces.

This approach assumes podman is installed. This might not be true for all MicroShift environments. For example, if MicroShift is installed through RPM, CRI-O will be installed as a dependency, but not podman. In this case, one can choose to install podman separately, or use other approaches described below.

Authenticate to a Registry With a Pull-Secret

The second approach is to create a pull secret, then let the service account use this pull secret. This approach works within a namespace. For example, if the pull secret is stored in a JSON-formatted file "secret.json":

# First create the secret in a namespace
oc create secret generic my-pull-secret \
    --from-file=.dockerconfigjson=secret.json \
    --type=kubernetes.io/dockerconfigjson

Alternatively, you can use your container manager configuration file to create the secret:

# First create the secret in a namespace using our configuration file
oc create secret generic my-pull-secret \
    --from-file=.dockerconfigjson=.docker/config.json \
    --type=kubernetes.io/dockerconfigjson

Finally, we set the secret as the default for pulling:

# Then attach the secret to a service account in the namespace
oc secrets link default my-pull-secret --for=pull

Instead of attaching the secret to a service account, one can also specify the pull secret under the pod spec. Refer to this Kubernetes document for more details.
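
For illustration, a minimal sketch of a pod spec referencing the pull secret directly (the pod name and image are placeholders):

cat <<EOF | oc apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: private-image-pod
spec:
  containers:
  - name: app
    image: registry.redhat.io/ubi8/ubi:latest
  imagePullSecrets:
  - name: my-pull-secret
EOF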

5.4 - Deploy a basic application

MicroShift operates similarly to many other Kubernetes providers. This means that you can use the same tools to deploy and manage your applications.

All of the standard Kubernetes management tools can be used to maintain and modify your MicroShift applications. Below we will show some examples using oc, kustomize, and helm to deploy and maintain applications.

Example Applications

MetalLB

MetalLB is a load balancer that can be used to route traffic to a number of backends.

Create the MetalLB namespace and deployment:

oc apply -f https://raw.githubusercontent.com/metallb/metallb/v0.11.0/manifests/namespace.yaml
oc apply -f https://raw.githubusercontent.com/metallb/metallb/v0.11.0/manifests/metallb.yaml

Once the components are available, a ConfigMap is required to define the address pool for the load balancer to use.

Create the MetalLB ConfigMap:

oc create -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.240-192.168.1.250    
EOF

Now we are able to deploy a test application to verify things are working as expected.

oc create ns test
oc create deployment nginx -n test --image nginx

Create a service:

oc create -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: test
  annotations:
    metallb.universe.tf/address-pool: default
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer
EOF

Verify that the service exists and that an external IP address has been assigned:

oc get svc -n test
NAME    TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx   LoadBalancer   10.43.183.104   192.168.1.241   80:32434/TCP   29m

Using your browser, you can now access the NGINX application via the EXTERNAL-IP provided by the service.
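
For example, from another host on the same LAN (using the EXTERNAL-IP from the output above):

curl http://192.168.1.241/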

5.5 - Dynamic Provisioning of PVs

MicroShift's storage solution can provision persistent volumes dynamically based on claims.

MicroShift deploys the hostpath provisioner as a solution for providing persistent storage to pods. The hostpath provisioner pod mounts the /var/hpvolumes directory in order to provision volumes. It also has the ability to dynamically provision PVs when a PVC is created and wait until a pod uses that specific PVC.

Let's see how to create a PVC so the hostpath provisioner creates the persistent volume for us.

Create a Persistent Volume Claim

MicroShift's hostpath provisioner creates a StorageClass named kubevirt-hostpath-provisioner by default.

The PVC manifest must reference this StorageClass using the storageClassName spec parameter, and there should be an annotation pointing at the node where the PV is going to be created. This annotation is crucial if we want dynamic provisioning of PVs:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
  annotations:
    # Set this to your node's hostname (see: oc get nodes)
    kubevirt.io/provisionOnNode: ricky-fedora.oglok.net
spec:
  storageClassName: kubevirt-hostpath-provisioner
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

This manifest will create the following Persistent Volume Claim and a Persistent Volume located at /var/hpvolumes/.
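
Assuming the manifest above is saved as task-pv-claim.yaml (the filename is arbitrary), it can be applied with:

oc apply -f task-pv-claim.yaml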

$ oc get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS                    AGE
task-pv-claim   Bound    pvc-58a28c40-7726-4830-ba70-32d18188a8b4   39Gi       RWO            kubevirt-hostpath-provisioner   8m43s
$ oc get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS                    REASON   AGE
pvc-58a28c40-7726-4830-ba70-32d18188a8b4   39Gi       RWO            Delete           Bound    default/task-pv-claim   kubevirt-hostpath-provisioner            8m43s

$ ll /var/hpvolumes/
total 0
drwxrwxrwx. 1 root root 8 Apr  5 10:26 pvc-58a28c40-7726-4830-ba70-32d18188a8b4

For the sake of clarity, we will instantiate a sample NGINX pod that mounts that volume:

apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage

Any HTML file located at /var/hpvolumes/pvc-58a28c40-7726-4830-ba70-32d18188a8b4 can be served by the NGINX instance running in the pod and exposed using a regular service.
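
For example, a sketch run on the MicroShift host, assuming the pod manifest above is saved as task-pv-pod.yaml and using the PV directory from the earlier output:

oc apply -f task-pv-pod.yaml
echo 'Hello from MicroShift' | sudo tee /var/hpvolumes/pvc-58a28c40-7726-4830-ba70-32d18188a8b4/index.html
# Check directly against the pod IP; a Service or Route can be added as described in Exposing Services
curl http://$(oc get pod task-pv-pod -o jsonpath='{.status.podIP}')/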

5.6 - Offline/disconnected container images

Offline containers are container images which are stored in the operating system image and made available to CRI-O via the additionalimagestores list in /etc/containers/storage.conf.

What are offline container images

Offline containers are container images which are stored in the operating system, or in the operating system image for an ostree-based system, and made available to CRI-O via the additionalimagestores list in /etc/containers/storage.conf.

Those container images are accessible for CRI-O to create containers from. They cannot be deleted, but newer versions of those containers can be downloaded normally, and CRI-O will store them in the general R/W container storage of the system.

When to use offline container images

Offline containers are useful when the edge device will have restricted connectivity, or no connectivity at all. They also help improve general MicroShift and application startup on first boot, since no images need to be downloaded from the network and the applications are readily available to CRI-O.

RPM packaging of container images

RPM packaging of container images into read-only container storage is offered via the paack tool as an experimental method to allow users to create ostree images containing the desired containers. RPM was not designed for storing files with numeric uids/gids or extended attributes; although several workarounds allow this, we are looking for better ways to provide it.

Offline MicroShift container images

MicroShift uses a set of containers for the minimal components which can be installed on the operating system image; those are published here, and can also be manually built using packaging/rpm/make-microshift-images-rpm.sh.

To install the MicroShift container images, you can use:

sudo curl -L -o /etc/yum.repos.d/microshift-containers.repo \
          https://copr.fedorainfracloud.org/coprs/g/redhat-et/microshift-containers/repo/fedora-35/group_redhat-et-microshift-containers-fedora-35.repo

sudo rpm-ostree install microshift-containers

Or simply include this package when using image-builder.

How to package your application and manifests as RPMs for offline container storage

To package workload application container images, we provide packaging/rpm/paack.py. This tool accepts a YAML definition, for which an example can be found here.

The tool can produce an srpm or rpm, or push a build to a copr repository.

Some example usages:

./paack.py rpm example-user-containers.yaml centos-stream-9-aarch64

The target OS (centos-stream-9) is not important, but we need an OS target compatible with the destination architecture.

./paack.py srpm example-user-containers.yaml

The produced srpm format contains the repository binaries and manifests for each architecture; the build system then unpacks the specific architecture for the build. The post-install step of the RPM configures additionalimagestores in /etc/containers/storage.conf.

./paack.py copr example-user-containers.yaml mangelajo/my-app-containers

6 - Troubleshooting

MicroShift known issues and troubleshooting tips.

On EC2 with RHEL 8.4

service-ca can't be created

If you want to run MicroShift on EC2 with RHEL 8.4 (check with cat /etc/os-release), you might find that the ingress and service-ca pods will not stay online.

Inside the failing pods, you might find errors such as: 10.43.0.1:443: read: connection timed out.

This is a known issue on RHEL 8.4 and will be resolved in 8.5.

To work around it on RHEL 8.4, you can disable the NetworkManager cloud setup service and timer, then reboot.

Example:

sudo systemctl disable nm-cloud-setup.service nm-cloud-setup.timer
sudo reboot

You can find the details of this EC2 NetworkManager issue tracked in the linked issue.

OpenShift pods CrashLoopBackOff

A few minutes after MicroShift starts, the OpenShift pods fall into CrashLoopBackOff.

If you check journalctl | grep iptables, you may see the following:


Sep 21 19:12:54 ip-172-31-85-30.ec2.internal microshift[1297]: I0921 19:12:54.399365    1297 server_others.go:185] Using iptables Proxier.
Sep 21 19:13:50 ip-172-31-85-30.ec2.internal kernel: iptables[2438]: segfault at 88 ip 00007feaf5dc0e47 sp 00007fff6f2fea08 error 4 in libnftnl.so.11.3.0[7feaf5dbc000+16000]
Sep 21 19:13:50 ip-172-31-85-30.ec2.internal systemd-coredump[2442]: Process 2438 (iptables) of user 0 dumped core.
Sep 21 20:35:57 ip-172-31-85-30.ec2.internal microshift[1297]: E0921 20:35:57.914558    1297 remote_runtime.go:143] StopPodSandbox "1ae45abde0b46d8ea5176b6a00f0e5b4291e6bb496762ca25a4196a5f18d0475" from runtime service failed: rpc error: code = Unknown desc = failed to destroy network for pod sandbox k8s_service-ca-64547678c6-2nxnp_openshift-service-ca_6236deba-fc5f-4915-817d-f8699a4accfc_0(1ae45abde0b46d8ea5176b6a00f0e5b4291e6bb496762ca25a4196a5f18d0475): error removing pod openshift-service-ca_service-ca-64547678c6-2nxnp from CNI network "crio": running [/usr/sbin/iptables -t nat -D POSTROUTING -s 10.42.0.3 -j CNI-d5d0edec163ce01e4591c1c4 -m comment --comment name: "crio" id: "1ae45abde0b46d8ea5176b6a00f0e5b4291e6bb496762ca25a4196a5f18d0475" --wait]: exit status 2: iptables v1.8.4 (nf_tables): Chain 'CNI-d5d0edec163ce01e4591c1c4' does not exist

Also, the openshift-ingress pod will fail on:

I0921 17:36:17.811391       1 router.go:262] router "msg"="router is including routes in all namespaces"
E0921 17:36:17.914638       1 haproxy.go:418] can't scrape HAProxy: dial unix /var/lib/haproxy/run/haproxy.sock: connect: no such file or directory
I0921 17:36:17.948417       1 router.go:579] template "msg"="router reloaded"  "output"=" - Checking http://localhost:80 ...\n - Health check ok : 0 retry attempt(s).\n"

As a workaround, you can follow the steps below:

  • Delete the flannel daemonset:

    oc delete ds -n kube-system kube-flannel-ds

  • Restart all the OpenShift pods, as sketched after this list.
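
One way to restart the affected pods is to delete them and let their controllers recreate them; a sketch using the namespaces seen in the logs above:

oc delete pods --all -n openshift-service-ca
oc delete pods --all -n openshift-ingress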

This workaround won't affect single-node MicroShift functionality, since the flannel daemonset is only used for multi-node MicroShift.

This issue is tracked at: #296