wtf is k3s?

k3s is a lightweight Kubernetes distribution optimized for edge devices. In my opinion it is also perfect for local development of your k8s microservices.

But is it really lightweight?

YES! The folks from Rancher Labs did a great job. They removed all Kubernetes features which are not needed by default for local development and edge deployments. This means the following features are removed:

  • Legacy and non-default features
  • Alpha features (behind feature gates)
  • In-tree cloud providers
  • In-tree storage drivers
  • Docker (optional)

In most use cases the stable v1 features (Deployments, Services) are enough, and you do not need the alpha features k8s provides. They also added/changed some k8s features:

  • simplified the installation - perfect combination with k3d ;)
  • replaced etcd with SQLite3, a lower-overhead database well suited for storing the cluster state
  • TLS management - auto-generated certificates for cluster-wide secure communication
  • uses Flannel and CoreDNS for the cluster network

k3s provides two ways to set up a cluster. The first one is the classic way: bootstrap your cluster directly on your host, much like you would with kubeadm, just for k3s. This way of bootstrapping may be interesting for k3s production clusters in combination with Ansible. For more information and a quick start guide, follow the instructions in the GitHub repository.
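For reference, the host route is basically a one-liner using the official install script. The agent join line below is just a sketch of the documented pattern (placeholders for your server IP and node token), so double-check the k3s README before using it:

```shell
# Bootstrap a k3s server directly on the host (official install script).
curl -sfL https://get.k3s.io | sh -

# Sketch: join an additional node as an agent. K3S_URL and K3S_TOKEN are
# the documented env vars; fill in your server's address and node token.
# curl -sfL https://get.k3s.io | K3S_URL=https://<server-ip>:6443 K3S_TOKEN=<token> sh -
```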

The second way of bootstrapping a k3s cluster is in combination with docker-in-docker. This means you can deploy a multi-node k3s cluster directly on your host. All k3s nodes (master and workers) are fully encapsulated in their own Docker containers. Isn't that cool? Yeah?? I LOVE IT. On my journey with Kubernetes I have always looked for a local-dev solution that bootstraps fast and carries less k8s overhead that I don't really need. I tested minikube and kind. Both of them worked as intended but did not meet my requirements for a perfect local-dev solution.

To bootstrap a k3s docker-in-docker (dind) cluster, k3s provides some configuration options which do the magic. But the folks from Rancher are smart: they provide an even simpler solution. Let me introduce you to k3d.
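To get a feeling for what k3d automates away, here is a rough sketch of running the k3s server in plain Docker. The container name and flags here are my assumptions based on the image version used below, not an exact recipe:

```shell
# Sketch: run a k3s server as a Docker container and expose the API port.
docker run -d --name k3s-server \
  --privileged \
  -p 6443:6443 \
  rancher/k3s:v0.4.0 server
```

You would then have to wire up worker containers and extract the kubeconfig yourself, which is exactly the bookkeeping k3d takes care of.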

k3d

k3d is not a new complex solution for bootstrapping a k3s dind cluster. It is a CLI for your terminal to manage k3s dind clusters on your host. Below I will show you the power of k3d with a demo. This includes the installation, bootstrapping a cluster with 10 workers, and deploying a minimal HTTP server with a service.

install k3d

I used the installation script from the GitHub repository, which downloads the binary and moves it to /usr/local/bin

$ wget -q -O - https://raw.githubusercontent.com/rancher/k3d/master/install.sh | bash

Verify your installation

$ k3d --version

bootstrap a 10 workers cluster

My local machine has the following specs:
CPU: i7-4770
MEMORY: 16 GB

Sure, in most test cases you don't need a 10-worker cluster, but for the demo I think this is okay ;-).

$ k3d create --workers 10

which should print output like the following:

2019/05/07 23:10:17 Created cluster network with ID 3b2cade80cb560c90e6e1f2638ab116404877854659668c7e038d7e1557f8b7c
2019/05/07 23:10:17 Creating cluster [k3s_default]
2019/05/07 23:10:17 Creating server using docker.io/rancher/k3s:v0.4.0...
2019/05/07 23:10:20 Booting 10 workers for cluster k3s_default
Created worker with ID f7799681167c6944270761767e87a2487d7b41add9f295b8b24d6119174c3bb8
Created worker with ID 13ff1c3b748a35fa4fb21221057b689f46026f6253e08f24de0fc380d67e4ed6
Created worker with ID 6cfab77968f960d910aae851cf6fe0b415654e156c315a8e9d15a7e3316964ad
Created worker with ID 081f0fa4c0467734a27e423a64ee6efb90059f9834058b4d8e1c82174afccf40
Created worker with ID 81eb8b453b129d60c49df7a8ab1881e3fe80314e3711df9bc476a7bd99bd53fb
Created worker with ID 23ae2d85652435abd9072c3a4aa6dc0e02ee50396caa065f789cb3fc2083c365
Created worker with ID 95a039d8bf7e9c53935fc7b8509ac55cfd8fd8a7e85c843ee2be4ff4729befc6
Created worker with ID 2e5ff38b3f41a46d6448303a53becfaff9491de5f9dde0d9b8aba2e7c5378ff5
Created worker with ID a67686d39b0942c832dc4cf654393a0a0101874f9c597ada966c46f83fbcfbfd
Created worker with ID d711d6fdbffa17154f5db4770e880c9f98f41c2bfc2b72d5d112b40df4ebe38b
2019/05/07 23:10:43 SUCCESS: created cluster [k3s_default]
2019/05/07 23:10:43 You can now use the cluster with:

export KUBECONFIG="$(k3d get-kubeconfig --name='k3s_default')"
kubectl cluster-info

Looks like our cluster is provisioned. Let's see if that's true.

$ docker ps
CONTAINER ID        IMAGE                COMMAND                  CREATED             STATUS              PORTS                    NAMES
d711d6fdbffa        rancher/k3s:v0.4.0   "/bin/k3s agent"         2 minutes ago       Up 2 minutes                                 k3d-k3s_default-worker-9
a67686d39b09        rancher/k3s:v0.4.0   "/bin/k3s agent"         2 minutes ago       Up 2 minutes                                 k3d-k3s_default-worker-8
2e5ff38b3f41        rancher/k3s:v0.4.0   "/bin/k3s agent"         2 minutes ago       Up 2 minutes                                 k3d-k3s_default-worker-7
95a039d8bf7e        rancher/k3s:v0.4.0   "/bin/k3s agent"         2 minutes ago       Up 2 minutes                                 k3d-k3s_default-worker-6
23ae2d856524        rancher/k3s:v0.4.0   "/bin/k3s agent"         2 minutes ago       Up 2 minutes                                 k3d-k3s_default-worker-5
81eb8b453b12        rancher/k3s:v0.4.0   "/bin/k3s agent"         2 minutes ago       Up 2 minutes                                 k3d-k3s_default-worker-4
081f0fa4c046        rancher/k3s:v0.4.0   "/bin/k3s agent"         2 minutes ago       Up 2 minutes                                 k3d-k3s_default-worker-3
6cfab77968f9        rancher/k3s:v0.4.0   "/bin/k3s agent"         2 minutes ago       Up 2 minutes                                 k3d-k3s_default-worker-2
13ff1c3b748a        rancher/k3s:v0.4.0   "/bin/k3s agent"         2 minutes ago       Up 2 minutes                                 k3d-k3s_default-worker-1
f7799681167c        rancher/k3s:v0.4.0   "/bin/k3s agent"         2 minutes ago       Up 2 minutes                                 k3d-k3s_default-worker-0
9adfbeb78408        rancher/k3s:v0.4.0   "/bin/k3s server --h…"   2 minutes ago       Up 2 minutes        0.0.0.0:6443->6443/tcp   k3d-k3s_default-server

Okay, we see that there are 11 containers: 10 workers plus the server, which exposes the default k8s API port. Now let's check if we can communicate with the API via kubectl. But first we need the kubeconfig.

$ export KUBECONFIG="$(k3d get-kubeconfig --name='k3s_default')"

Have a look at the kubeconfig: you can see that it points to localhost:6443, directly to the k3d-k3s_default-server container.

$ cat $KUBECONFIG

...
nQzMDkKcWVqR0FuSkdxTlhSU0tNZnI1d0ZMZVNJekJyc0M4MUlSdXNIZ3R3eW4wV0dGZWFweUlwUW5XWEgrNkxwWkw3NApKbDFRT3N0MnVlc3lBK1h1b1o1UlFHK1hzWDhzMGwzQlVKWTd3SXVWVGpJeWFReDV6NnNtUE9vUEFnTUJBQUdqCkl6QWhNQTRHQTFVZER3RUIvd1FFQXdJQ3BEQVBCZ05WSFJNQkFmOEVCVEFEQVFIL01BMEdDU3FHU0liM0RRRUIKQ3dVQUE0SUJBUUJDZTBuaGhPUEZHRXo4OVFCVWxuc3d5aUtGZzV5eTBaWVZMQUZFalFRV3FvZ2l4UU4zak5iMAp3cUx4MytYVi9DZnREZklSM01nRGxwQlBNUlpqa0RvWHRBamozTEhWMzBKY0ZRMzVGdjZaT2lpMFBrYkk4MTNtCmlPKzNpdUh1Si9nNDV6Q2FzNUdKbmpGOW1vcHBGSkVicjZpL05CdWVUdWJGa3hZN3Y5UnluNHFFRlZuN2dwdW8KSmRBT0trYm9neWN4YSsvSjBKdTJNdWhYaHo4NDU3NHZTODM3RVJwazk1MjI0ekFneXIvUy9lTEVCeU54cnE0KwpTWVdpWFJJaUFDZndzbHdRL2xIVWZOVnZMSDd2V1lHQ2lMZkJERGRaQ0RNRlRtZ1JFQlQxQXpiRWZHZjUzZ3BqCm9IRU0rYTc1eFh1L1VoNkdwblpIeTlubVNYZm5GNkJKCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://localhost:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
....

Let's get some information via kubectl:

$ kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
081f0fa4c046   Ready    <none>   14m   v1.14.1-k3s.4
13ff1c3b748a   Ready    <none>   14m   v1.14.1-k3s.4
23ae2d856524   Ready    <none>   14m   v1.14.1-k3s.4
2e5ff38b3f41   Ready    <none>   14m   v1.14.1-k3s.4
6cfab77968f9   Ready    <none>   14m   v1.14.1-k3s.4
81eb8b453b12   Ready    <none>   14m   v1.14.1-k3s.4
95a039d8bf7e   Ready    <none>   14m   v1.14.1-k3s.4
9adfbeb78408   Ready    <none>   14m   v1.14.1-k3s.4
a67686d39b09   Ready    <none>   14m   v1.14.1-k3s.4
d711d6fdbffa   Ready    <none>   14m   v1.14.1-k3s.4
f7799681167c   Ready    <none>   14m   v1.14.1-k3s.4

$ kubectl get pods --all-namespaces
NAMESPACE     NAME                            READY   STATUS      RESTARTS   AGE
kube-system   coredns-857cdbd8b4-89t44        1/1     Running     0          15m
kube-system   helm-install-traefik-7njd7      0/1     Completed   0          15m
kube-system   svclb-traefik-547d54bc4-9xf5c   2/2     Running     1          14m
kube-system   svclb-traefik-547d54bc4-n5l97   2/2     Running     1          14m
kube-system   traefik-55bd9646fc-pgjbr        1/1     Running     0          14m

Okay, it seems our cluster is up and running. Now let's deploy our demo workload.

deploy workload

For the demo I deploy a minimalistic HTTP server written in Go which provides a health endpoint. The deployment consists of a k8s Deployment with a replica count of 20 and a k8s Service.

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k3d-demo-deployment
  namespace: k3d-demo
spec:
  replicas: 20
  selector:
    matchLabels:
      app: k3d-demo
  template:
    metadata:
      labels:
        app: k3d-demo
    spec:
      containers:
      - name: k3d-demo
        image: agabert/beacon
        resources:
          requests:
            memory: "32Mi"
            cpu: "10m"
          limits:
            memory: "128Mi"
            cpu: "500m"
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: k3d-demo
  namespace: k3d-demo
spec:
  ports:
  - port: 80
    targetPort: 80
    name: http
  selector:
    app: k3d-demo

Now let's see what happens when we deploy this:


$ kubectl create namespace k3d-demo

$ kubectl apply -f demo.yaml

deployment.apps/k3d-demo-deployment created
service/k3d-demo created

Watch the replicas getting ready:

$ watch kubectl get pods -n k3d-demo

Every 2,0s: kubectl get pods -n k3d-demo                                                                                                                                                      dekstop: Tue May  7 23:44:36 2019

NAME                                   READY   STATUS    RESTARTS   AGE
k3d-demo-deployment-55797686f8-6642x   1/1     Running   0          18s
k3d-demo-deployment-55797686f8-6r5z4   1/1     Running   0          18s
k3d-demo-deployment-55797686f8-75dxk   1/1     Running   0          18s
k3d-demo-deployment-55797686f8-85x5x   1/1     Running   0          18s
k3d-demo-deployment-55797686f8-86fdq   1/1     Running   0          18s
k3d-demo-deployment-55797686f8-bh2w7   1/1     Running   0          18s
k3d-demo-deployment-55797686f8-c449d   1/1     Running   0          18s
...

Make an HTTP request against the service via a helper container:

$ kubectl --namespace=k3d-demo run -it --image=alpine helper-container

$ wget -SO- k3d-demo/metrics

Connecting to k3d-demo (10.43.109.98:80)
  HTTP/1.1 200 OK
  Content-Type: text/plain; version=0.0.4; charset=utf-8
  Date: Tue, 07 May 2019 21:50:16 GMT
  Connection: close
  Transfer-Encoding: chunked
  
# HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
go_gc_duration_seconds{quantile="0.5"} 0
...

Bonus: Access services from outside the cluster

Let's try to make the HTTP server reachable from outside the cluster. Here you can use the NodePort or LoadBalancer service type.

---
apiVersion: v1
kind: Service
metadata:
  name: k3d-demo
  namespace: k3d-demo
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
    name: http
  selector:
    app: k3d-demo
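If you prefer NodePort over LoadBalancer, you don't even have to edit the YAML; patching the live service does the same job. A sketch, using the service name from the demo above:

```shell
# Switch the demo service to NodePort in place.
kubectl -n k3d-demo patch service k3d-demo -p '{"spec": {"type": "NodePort"}}'

# Print the node port Kubernetes allocated (30000-32767 by default).
kubectl -n k3d-demo get service k3d-demo -o jsonpath='{.spec.ports[0].nodePort}'
```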

Kubernetes will now reconfigure your service type from ClusterIP to the given new type.

$ kubectl -n k3d-demo get service
NAME       TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
k3d-demo   ClusterIP   10.43.76.231   <none>        80/TCP    10s
$ kubectl apply -f test.yml 
deployment.apps/k3d-demo-deployment unchanged
service/k3d-demo configured
$ kubectl -n k3d-demo get service
NAME       TYPE           CLUSTER-IP     EXTERNAL-IP               PORT(S)          AGE
k3d-demo   LoadBalancer   10.43.10.180   172.19.0.10,172.19.0.11   8080:31995/TCP   30s

Kubernetes assigned two IPs to the LoadBalancer. They come from the Docker network of k3s.

$ docker network inspect k3s_default
[
    {
        "Name": "k3s_default",
        "Id": "db7ab3451ca0058255d084b5a8819ead8757b37ff7a1e333ad8c389331fd5b22",
        "Created": "2019-05-08T23:57:41.173744627+02:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.19.0.0/16",
                    "Gateway": "172.19.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "0edfb125443002ed05ab16cd019158098f936e76f961dbbd457b9e729bdd9f58": {
                "Name": "k3d-k3s_default-worker-6",
                "EndpointID": "9f7fc1aa3d1814d0d7ac6dd1538e1add04ed622261b050a1cdcdd64df43437d5",
                "MacAddress": "02:42:ac:13:00:09",
                "IPv4Address": "172.19.0.9/16",
                "IPv6Address": ""
            },
            "2db4c1b45db71fbec3a7998ad5f836de61fd84ce75506934b10b75b6b4b22fa9": {
                "Name": "k3d-k3s_default-worker-2",
                "EndpointID": "5c7f9d48e21288824aca72c60a893bbea303082b5c267f97b64be0253b4245b7",
                "MacAddress": "02:42:ac:13:00:05",
                "IPv4Address": "172.19.0.5/16",
                "IPv6Address": ""
          ....

Finally, let's check if we can access the endpoint via one of the provided IPs.

$ wget -SO- 172.19.0.10:80/metrics
--2019-05-09 00:07:39--  http://172.19.0.10/metrics
Connecting to 172.19.0.10:80... connected.
HTTP request sent, awaiting response... 
  HTTP/1.1 200 OK
  Content-Type: text/plain; version=0.0.4; charset=utf-8
  Date: Wed, 08 May 2019 22:07:39 GMT
  Transfer-Encoding: chunked
Length: unspecified [text/plain]
Saving to: ‘STDOUT’

-                                                  [<=>                                                                                                ]       0  --.-KB/s               # HELP go_gc_duration_seconds A summary of the GC invocation durations.
# TYPE go_gc_duration_seconds summary
go_gc_duration_seconds{quantile="0"} 0
go_gc_duration_seconds{quantile="0.25"} 0
go_gc_duration_seconds{quantile="0.5"} 0
go_gc_duration_seconds{quantile="0.75"} 0
go_gc_duration_seconds{quantile="1"} 0

Works perfectly!
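When you are done playing around, the whole cluster can be thrown away with one command (flag as used by the k3d version in this demo; check k3d --help if yours differs):

```shell
# Stop and remove the cluster and all of its containers.
k3d delete --name k3s_default
```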


I hope this little demo proves the simplicity of k3s and k3d. Big thanks to the developers of k3s and k3d. You did a great job! If you have any open questions, feel free to DM me on Twitter.

Links:

k3s official website
k3s GitHub
k3d GitHub
Rancher