K3s: Using loxilb as external service lb

CloudyBytes · 7 min read · Jan 10, 2023

In this blog, we will see how to deploy loxilb as a service LB on a K3s-based Kubernetes cluster. Setting up a K3s cluster had been on my to-do list for a long time, so this post is essentially about my experience with K3s and loxilb.

For starters, K3s is a mini, light-weight Kubernetes distribution. Other mini Kubernetes distributions you may have come across include MicroK8s, Minikube, kind, and k0s.

Even though "mini" might sound like "iPhone mini", the fact is that most of these distros fully conform to the Kubernetes spec. K3s in particular is also CNCF-backed. It is light because it ditches a lot of optional bloat.

K3s is one of the few distributions that comes with an in-built external service LB provider, called Klipper, which I had not actually heard about before. Nonetheless, it can easily be replaced with other options. Why would anybody want to do that, and what is the upside? Well, Kubernetes is about flexibility and freedom: it empowers users to choose any component as per their requirements. I chose loxilb out of curiosity to learn about it. In this setup, there are 3 separate nodes: k3s (node1), loxilb (node2), and client (node3). All of them run Ubuntu 20.04 LTS.

Figure: K3s with loxilb topology

Install k3s

The first step is to install k3s. loxilb uses its own custom cloud-provider, which provides the service LB, so the installation step is as follows:

curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server --disable traefik --disable servicelb --disable-cloud-controller --kubelet-arg cloud-provider=external" sh -

The above command disables traefik (the default ingress), the default service LB (to be replaced with loxilb), and the default cloud-controller. It felt a little strange to disable the default cloud-provider, but when I checked loxilb's implementation, I found that it provides a mini cloud-provider consisting of just the load-balancer service life-cycle management. Full cloud-providers usually implement a superset that includes LB service management, so I guess anybody who wants loxilb inside a full cloud-provider implementation would need to integrate the loxi-ccm provider with it as well.
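Once the install script finishes, the node should register with the cluster. A quick look (my own check, not part of the original flow):

$ sudo kubectl get nodes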

As a result of removing the cloud-provider, the node gets tainted as a no-schedule entity. Since my current setup is a single-node k3s, this taint needs to be removed so that workloads can be scheduled:

sudo kubectl taint nodes --all node.cloudprovider.kubernetes.io/uninitialized=false:NoSchedule-
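To double-check that the taint is gone before scheduling anything (another quick verification of my own):

$ sudo kubectl describe nodes | grep -i taints

It should report <none>.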

Install loxilb

loxilb is installed on its own node as a Docker container. Is there any reason I chose to install it on a separate node? Not particularly, apart from the fact that I wanted to measure its raw performance (to be covered in a later blog). Its installation is pretty straightforward:

$ docker run -u root --cap-add SYS_ADMIN  --restart unless-stopped --privileged -dit -v /dev/log:/dev/log --name loxilb ghcr.io/loxilb-io/loxilb:latest
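A trivial check that the container came up:

$ docker ps --filter name=loxilb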

But there is a catch here: we need to attach macvlan interfaces, parented on the underlying node interfaces, to the container. Load-balancers usually need at least an ingress and an egress interface to work with (depending on the situation they might need more, but not in the current scenario). On loxilb's node, there are two network interfaces: enp0s7, connected to the client node, and enp0s3, connected to the K3s LAN.

After going through layers of loxilb documentation, I figured out what needs to be done:

$ docker network create -d macvlan -o parent=enp0s3 --subnet 12.12.12.0/24 --gateway 12.12.12.254 --aux-address 'host=12.12.12.253' llbnet1
$ docker network create -d macvlan -o parent=enp0s7 --subnet 11.11.11.0/24 --gateway 11.11.11.254 --aux-address 'host=11.11.11.253' llbnet2
$ docker network connect llbnet1 loxilb --ip=12.12.12.1
$ docker network connect llbnet2 loxilb --ip=11.11.11.1

After this, two interfaces, "eth0" and "eth1", appear inside the loxilb docker with the specified IP addresses (you may get different ethX names depending on your setup). This can be verified with the following commands:

$ docker exec -it loxilb ifconfig eth0
$ docker exec -it loxilb ifconfig eth1

Interested readers can learn more about Docker and macvlan interfaces in Docker's macvlan networking documentation.

Install loxilb cloud-provider

Now that we have k3s and loxilb running, the missing piece is loxi-ccm, which is loxilb's cloud-provider implementation containing its service LB management. On node1, i.e. the k3s node, we need to execute the following command:

$ wget https://github.com/loxilb-io/loxi-ccm/raw/master/manifests/loxi-ccm-k3s.yaml

We then need to find and change the following parameters in this yaml:

Before :

data:
  loxiccmConfigs: |
    apiServerURL:
    - "http://14.14.14.1:11111"
    externalCIDR: "123.123.123.0/24"
    setBGP: true

After:

data:
  loxiccmConfigs: |
    apiServerURL:
    - "http://12.12.12.1:11111"
    externalCIDR: "123.123.123.0/24"

Please note that "12.12.12.1" is the IP address of loxilb in the k3s LAN. It seems to me that loxi-ccm uses this IP address to communicate with the loxilb docker.
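Before applying the manifest, we can quickly confirm from the k3s node that loxilb's API port is reachable (a sanity check of my own; any HTTP status code in the output means the port is open, while 000 indicates no connectivity):

$ curl -s -o /dev/null -w "%{http_code}\n" http://12.12.12.1:11111

Now apply the modified yaml: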

$ sudo kubectl apply -f loxi-ccm-k3s.yaml

You would probably see something like this, which means everything worked fine:

serviceaccount/loxi-cloud-controller-manager created
clusterrolebinding.rbac.authorization.k8s.io/system:cloud-controller-manager created
configmap/loxiccm-config created
daemonset.apps/loxi-cloud-controller-manager created

We can also quickly verify that the loxi-ccm component has spawned:

$ sudo kubectl get pods -n kube-system
NAME                                      READY   STATUS    RESTARTS   AGE
local-path-provisioner-79f67d76f8-x6m9n   1/1     Running   0          150m
coredns-597584b69b-bnsvw                  1/1     Running   0          150m
metrics-server-5f9f776df5-6vwn4           1/1     Running   0          150m
loxi-cloud-controller-manager-x55qw       1/1     Running   0          20s

As a last setup step on node1, we install a route towards node3's network:

$ sudo ip route add 11.11.11.0/24 via 12.12.12.1

Why is it needed? To provide return-path connectivity when the node3 client makes an external request to the k3s cluster. Remember that we are not running any BGP for automatic route exchange.
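With the route installed, loxilb's client-side address should now be reachable from node1 (a quick sanity check of my own):

$ ping -c 2 11.11.11.1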

Client node setup (node3)

There is nothing specific to install, apart from configuring the IP address as per the figure and setting the default route towards 11.11.11.1 (loxilb).
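A minimal sketch of that client configuration, assuming the client's interface is enp0s3 and picking 11.11.11.2 as its address (both are my assumptions; use the values from your own topology):

$ sudo ip addr add 11.11.11.2/24 dev enp0s3
$ sudo ip route add default via 11.11.11.1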

Workload setup in k3s

Now we have reached the business end of things, where we run our workload in the form of an nginx pod. Just use the following nginx.yml and apply it (I am not using helm for this):

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app.kubernetes.io/name: proxy
spec:
  containers:
  - name: nginx
    image: nginx:stable
    ports:
    - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app.kubernetes.io/name: proxy
  ports:
  - name: nginx-service-port
    protocol: TCP
    port: 8080
    targetPort: 80

Apply it:

$ sudo kubectl apply -f nginx.yml
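A quick check that the pod is up before moving on (an extra step of my own; your AGE will differ):

$ sudo kubectl get pods
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          30s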

The above creates a ClusterIP service, which is accessible inside K3s but not from outside.
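Before exposing it, we can exercise the ClusterIP service from within the cluster using a throwaway curl pod (a quick check of my own; curlimages/curl is an assumption, any image with curl will do):

$ sudo kubectl run curl-test --rm -it --image=curlimages/curl --restart=Never --command -- curl -s http://nginx-service:8080

To make the service reachable from outside, we now spin up a LoadBalancer service using the following nginx-svc-lb.yml: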

apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
spec:
  selector:
    app.kubernetes.io/name: proxy
  ports:
  - protocol: TCP
    port: 80
  type: LoadBalancer

Again, apply it:

$ sudo kubectl apply -f nginx-svc-lb.yml

This is it! Now we can check whether the external LB service has been allocated by loxilb:

$ sudo kubectl get svc
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)        AGE
kubernetes      ClusterIP      10.43.0.1      <none>          443/TCP        150m
nginx-service   ClusterIP      10.43.189.50   <none>          8080/TCP       149m
nginx-lb        LoadBalancer   10.43.8.175    123.123.123.1   80:32755/TCP   149m

The above confirms that a service of type LoadBalancer has been created and that the external endpoint 123.123.123.1:80 has been allocated to map to the nginx K3s service. Furthermore, we can verify the created LB rule in the loxilb docker (on node2). Note the endpoint in the rule below: loxilb load-balances towards 12.12.12.254:32755, i.e. it appears to reach the service through the NodePort (32755) that Kubernetes allocated, as seen in the 80:32755/TCP mapping above:


$ docker exec -it loxilb loxicmd get lb -o wide
| EXTERNAL IP   | PORT | PROTOCOL | BLOCK | SELECT | MODE    | ENDPOINT IP  | TARGET PORT | WEIGHT | STATE  |
|---------------|------|----------|-------|--------|---------|--------------|-------------|--------|--------|
| 123.123.123.1 | 80   | tcp      | 0     | rr     | default | 12.12.12.254 | 32755       | 1      | active |

Final check

We fire some curl requests from client node (node3) to verify whether this external LB service resource can be accessed without any issues:

$ curl http://123.123.123.1:80

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

There we go, mission accomplished!

Key learning points and takeaways:

  1. K3s works like a charm with minimal effort
  2. It is easy to set up an external LB of your choice in K3s
  3. An external LB usually runs as part of a cloud-provider implementation
  4. loxilb is also pretty easy to use
  5. It also packs in serious performance numbers (stay tuned for the next part of this blog) [Edit: It looks like there is already a nice video about this, so there is no need for my second post :-) ]

Hidden Easter egg:

While toying with loxilb, I found a loxilb command:

$ docker exec -it loxilb loxicmd get ep

| HOST         | DESC         | PTYPE | PORT  | DURATION | RETRIES | MINDELAY  | AVGDELAY  | MAXDELAY  | STATE |
|--------------|--------------|-------|-------|----------|---------|-----------|-----------|-----------|-------|
| 12.12.12.254 | 12.12.12.254 | ping: | 32698 | 60       | 2       | 106.445µs | 156.177µs | 205.909µs | ok    |

This provides endpoint health and other useful metric information. Frankly, this was beyond what I expected. Cool!

Last but not least, a shout-out to the loxilb project for helping me with my queries during this PoC setup.

Resources and References:

https://futuredon.medium.com/5g-sctp-loadbalancer-using-loxilb-b525198a9103
https://github.com/loxilb-io/loxilb
https://docs.k3s.io/installation
https://docs.k3s.io/architecture
