Networking

Understanding and configuring networking in MicroShift.

1 - Overview

Overview of the MicroShift networking.

MicroShift uses the host-configured networking, either statically configured or assigned via DHCP. In the case of dynamic addresses, MicroShift restarts if an IP change is detected at runtime.

Connectivity to the Kubernetes API endpoint is provided on the default port 6443 on the master host(s) IP addresses. If other services on the network must interact with the MicroShift API, connectivity can be established in any of the following ways:

  • DNS discovery, pre-configured on the network servers.
  • Direct IP address connectivity.
  • mDNS discovery via the .local domain; see the mDNS section.
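
For example, with direct IP address connectivity any Kubernetes-aware client on the network can reach the API endpoint. A minimal sketch, where both the node IP 192.168.1.100 and the kubeconfig path are placeholder assumptions:

```shell
# Hypothetical node IP and kubeconfig path; replace with your own values.
# The kubeconfig must be copied from the MicroShift host beforehand.
export KUBECONFIG=~/kubeconfig
oc get nodes --server=https://192.168.1.100:6443
```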

Connectivity between Pods is handled by the CNI plugin on the Pod network range, which defaults to 10.42.0.0/16 and can be modified via the Cluster.ClusterCIDR configuration parameter; see the corresponding sections for more details.

Connectivity to services of type ClusterIP is provided by the embedded kube-proxy iptables-based implementation on the 10.43.0.0/16 range, which can be modified via the Cluster.ServiceCIDR configuration parameter.
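
Both ranges can be overridden in the MicroShift configuration file. A sketch, assuming a YAML configuration with a cluster section whose field names mirror the Cluster.ClusterCIDR and Cluster.ServiceCIDR parameters above (verify the exact file path and key names against your MicroShift version):

```yaml
# Hypothetical config fragment; defaults shown explicitly.
cluster:
  clusterCIDR: 10.42.0.0/16
  serviceCIDR: 10.43.0.0/16
```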

2 - Exposing Services

Exposing services in MicroShift.

Services deployed in MicroShift can be exposed in multiple ways.

Routes and Ingresses

By default an OpenShift router is created and exposed on host network ports 80/443. Routes or Ingresses can be used to expose HTTP or HTTPS services through the router.

Example

oc create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine
oc expose deployment nginx --port=8080
oc expose service/nginx --hostname=my-hostname.com

# assuming my-hostname.com is mapped to the MicroShift node IP
curl http://my-hostname.com

Route with mDNS host example

The hostname of a route can be an mDNS (.local) hostname, which is then announced via mDNS; see the mDNS section for more details.

oc expose service/nginx --hostname=my-hostname.local
curl http://my-hostname.local

Service of type NodePort

Services of type NodePort expose a service on a dedicated port on all cluster nodes; that port is routed internally to the active pods backing the service.

Example

oc create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine
oc expose deployment nginx --type=NodePort --name=nodeport-nginx --port 8080
NODEPORT=$(oc get service nodeport-nginx -o jsonpath='{.spec.ports[0].nodePort}')
IP=$(oc get node -o jsonpath='{.items[0].status.addresses[?(@.type=="InternalIP")].address}')
curl http://$IP:$NODEPORT/

To use NodePort services, open the 30000-32767 port range; see the firewall section.

Service of type LoadBalancer

Services of type LoadBalancer are not supported yet; this kind of service is normally backed by a load balancer in the underlying cloud.

Multiple alternatives are being explored to provide LoadBalancer VIPs on the LAN.

3 - Firewall

Firewall considerations for MicroShift

MicroShift does not require a firewall to run, but using one is recommended. When using firewalld, the following ports should be considered:

Port(s)        Protocol(s)   Description
80             TCP           HTTP port used to serve applications through the OpenShift router.
443            TCP           HTTPS port used to serve applications through the OpenShift router.
6443           TCP           HTTPS port for the MicroShift API.
5353           UDP           mDNS service responding for OpenShift route mDNS hosts.
30000-32767    TCP/UDP       Port range reserved for services of type NodePort; can be used to expose applications on the LAN.

Additionally, pods need to be able to contact the internal CoreDNS server. One way to allow such connectivity, assuming the Pod IP range is 10.42.0.0/16, is the following:

sudo firewall-cmd --permanent --zone=trusted --add-source=10.42.0.0/16

Firewalld

An example for enabling firewalld and opening all the above mentioned ports is:

sudo dnf install -y firewalld
sudo systemctl enable firewalld --now
sudo firewall-cmd --zone=trusted --add-source=10.42.0.0/16 --permanent
sudo firewall-cmd --zone=public --add-port=80/tcp --permanent
sudo firewall-cmd --zone=public --add-port=443/tcp --permanent
sudo firewall-cmd --zone=public --add-port=6443/tcp --permanent
sudo firewall-cmd --zone=public --add-port=5353/udp --permanent
sudo firewall-cmd --zone=public --add-port=30000-32767/tcp --permanent
sudo firewall-cmd --zone=public --add-port=30000-32767/udp --permanent
sudo firewall-cmd --reload
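
After the reload, the resulting configuration can be verified with firewall-cmd's listing options:

```shell
# Show the ports opened in the public zone and the sources trusted for pod traffic
sudo firewall-cmd --zone=public --list-ports
sudo firewall-cmd --zone=trusted --list-sources
```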

4 - mDNS

Embedded Multicast DNS support in MicroShift.

MicroShift includes an embedded mDNS server for deployment scenarios in which the authoritative DNS server cannot be reconfigured to point clients to services on MicroShift.

mDNS is a protocol used to allow name resolution and service discovery within a LAN using multicast exposed on the 5353/UDP port.

This allows .local domains exposed by MicroShift to be discovered by other elements on the Local Area Network.
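
As a quick check from another Linux machine on the LAN, mDNS resolution of a name exposed by MicroShift can be verified with standard tools (a sketch, assuming avahi is running and my-hostname.local is a placeholder for an announced name):

```shell
# Resolve an mDNS name via the system resolver (nss-mdns must be configured)
getent hosts my-hostname.local

# Or query avahi directly
avahi-resolve-host-name my-hostname.local
```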

Notes for Linux

mDNS resolution on Linux is provided by the avahi-daemon. For other Linux hosts to discover MicroShift services, or for workers to locate the master node via mDNS, avahi should be enabled:

sudo dnf install -y nss-mdns avahi
sudo hostnamectl set-hostname microshift-vm.local
sudo systemctl enable --now avahi-daemon.service

By default only the minimal IPv4 mDNS resolver (mdns4_minimal) is enabled, which resolves only top-level mDNS domains like hostname.local. If you want to use hostnames of the form subdomain.domain.local, you need to enable the full mDNS resolver on the host trying to resolve those DNS entries:

echo .local | sudo tee /etc/mdns.allow
echo .local. | sudo tee -a /etc/mdns.allow
sudo sed -i 's/mdns4_minimal/mdns/g' /etc/nsswitch.conf

5 - CNI Plugin

The CNI Plugin used in MicroShift.

MicroShift uses the Flannel CNI network plugin as a lightweight (but less featureful) alternative to OpenShiftSDN or OVNKubernetes.

This provides worker node to worker node pod connectivity via vxlan tunnels.
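
On a multi-node setup, the vxlan tunnel interface created by Flannel can be inspected on each node. A sketch, assuming the default interface name flannel.1 and Flannel's usual runtime file locations (verify both on your deployment):

```shell
# Show the vxlan device Flannel creates for inter-node pod traffic
ip -d link show flannel.1

# The per-node pod subnet Flannel leased is recorded in its runtime config
cat /run/flannel/subnet.env
```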

For single-node operation, the crio-bridge plugin could be used for additional resource savings.

Neither Flannel nor crio-bridge supports NetworkPolicy.