Kubernetes – with Minikube and Helm – part 2

This is the second half of the Kubernetes with Minikube and Helm presentation; the first half explains all of the steps we went through to get to this point, and is available here:

In this section we cover the following:

  • Helm and Tiller – what they are, when & why you’d maybe use them
  • Helm and Tiller – prep, install and Helm Charts
  • Deploying Jenkins via Helm Charts
  • and WordPress w/MariaDB too
  • Wrap up

The below are mostly my technical notes from this session, with some added blurb/explanation.

Helm and Tiller – what they are, when & why you’d maybe use them

From the Helm site:

“Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application. Charts are easy to create, version, share, and publish — so start using Helm and stop the copy-and-paste.”

https://helm.sh/

Helm is basically a package manager for Kubernetes applications. You can choose from a large list of Stable (or not so!) ready-made packages and use the Helm Charts to quickly and easily deploy them to your own Kubernetes Cluster.

This makes light work of some very complex deployment tasks, and it’s also possible to extend these ready-made charts to suit your needs, and to write your own Charts from scratch, or pass your own values to override default ones, or… many other interesting options!
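As a taste of what overriding values looks like – redis here is purely an example chart name, and this assumes Helm 2 as used in this session – you can dump a chart’s default settings, edit them, then install with your overrides:

helm inspect values stable/redis > myvalues.yaml   # dump the chart's default settings to a file
# edit myvalues.yaml to suit, then install using it:
helm install -f myvalues.yaml stable/redis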

For this session we are looking at installing Helm, reviewing some example Helm Charts and deploying a few “vanilla” ones to the cluster we created in the first half of the session. We also touch upon the life-cycle of Helm Charts – it’s similar to Docker’s – and point out some of the ways this could be extended and customised to suit your needs – more on this at a later date hopefully.

Helm and Tiller – prep, install and Helm Charts

First, installing Helm – it’s as easy as this, run on your laptop/host that’s running the Minikube k8s we set up earlier:

Get & chmod the get_helm script, then run it:

curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh

chmod 700 get_helm.sh

./get_helm.sh

Tiller is the server-side part of Helm and is deployed inside your k8s cluster. It’s set to be removed with the release of Helm 3, but the basic functionality won’t really change. More details here: https://helm.sh/blog/helm-3-preview-pt1/

Next we do the Tiller prep & install – add RBAC for tiller, deploy via helm and take a look at the running pods:

kubectl create serviceaccount -n kube-system tiller

kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

helm init --service-account tiller

kubectl --namespace kube-system get pods

Helm Charts – look at the list of available stable Charts, then deploy a couple. The GitHub repo is here:

https://github.com/helm/charts

Update the local helm repo info:

helm repo update

then, for example, install Redis from its Helm Chart to the k8s cluster as easily as this:

helm install stable/redis

or helm install stable/mysql and check the console output that explains how to access the newly deployed app.

keep an eye on the pods to see what’s going on: watch kubectl get pods -o wide

Deploying Jenkins via Helm Charts

helm ls

helm delete <things you don't want any more to free up resources>

helm install --set serviceType=NodePort --name jenki stable/jenkins

again, watch kubectl get pods -o wide

now get the URL for the Jenkins service from Minikube:

minikube service --url=true jenki-jenkins

Hit that URL in your browser, grab the password in the UI from Pods > Jenki, and log in to Jenkins with the user “admin”:
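If you’d rather grab the password from the command line, the chart stores it in a Kubernetes secret named after the release – depending on the chart version, something like this should retrieve it:

printf $(kubectl get secret --namespace default jenki-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode); echo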

That’s a Jenkins instance deployed via Helm and Tiller and a Helm Chart to our Kubernetes Cluster running inside Minikube via a VirtualBox VM… all done in a few minutes. And it’s all customisable, repeatable, highly scalable and awesome.

and WordPress w/MariaDB too

This was the “bonus demo” to run if my laptop wasn’t on fire – and thanks to some rapid cleaning up it managed fine – showing how quickly we could deploy a functional WordPress site with a MariaDB backend to our k8s cluster using the Helm Chart.

To prepare for this I did a helm ls to see all the things I had running, then helm delete --purge jenki, gave it a while to recover, and then had to do

kubectl delete pods <jenkinpod>

before starting the WordPress Chart deployment with

helm install --set serviceType=NodePort --name wp-k8s stable/wordpress

watch kubectl get pods -o wide for a while – note the chart is configured with the mariadb pod as a prerequisite of the wordpress instance:

Once it started, we requested the service URL from Minikube again, making ingress nice and easy:

minikube service --url=true wp-k8s-wordpress

Hit that in the browser, using https and accepting the cert warning…

then logged in as `user`, having queried for the password in the k8s secret…

echo Password: $(kubectl get secret wp-k8s-wordpress -o jsonpath="{.data.wordpress-password}" | base64 --decode)

and logged in to WordPress:

Wrap up

That’s it – we covered a lot in this session, and plan to use this as a platform to explore Helm in more detail later, writing our own Helm Charts and providing our own customisations to them.

Running

minikube delete; rm -rf ~/.minikube

cleans up everything we’d done, leaving just the local tools to remove if you want to – see the first half for a reminder.

Cheers,

Don

Kubernetes – with Minikube and Helm – part 1

Intro:

This is the first of two posts on Kubernetes and Helm Charts, focusing on setting up a local development environment for Kubernetes using Minikube, then exploring Helm for package management and quickly and easily deploying several applications to the cluster – NGINX, Jenkins, WordPress with a MariaDB backend, MySQL and Redis.

The content is taken from the practical/demo session I wrote and published on GitHub here:

https://github.com/AutomatedIT/presentations/blob/master/minikube_demo.md

for this Meetup session we ran in Edinburgh in June 2019:

“Kubernetes – getting started with Minikube, Helm and Tiller” https://www.meetup.com/Automated-IT-Solutions/events/261623765/

<ramble>

One of the key objectives and challenges here was getting a useful local Kubernetes environment up and running as quickly and easily as possible for as wide an audience as we could – there’s so much to the Kubernetes ecosystem that it’s very easy to get side-tracked, and we could have (happily) spent a long time discussing the myriad of alternative possible solutions.

We plan to go “deeper” on all of this in future sessions and have an in-depth Helm session in the works, but for this session we were focused on creating a practical starting point.

</ramble>

Don

What is covered here:

  • Minikube – what it is (& isn’t) & why you’d use it (or not)
  • Kubernetes and Minikube components and concepts
  • setup for Mac and Linux
  • creating a first Kubernetes cluster in Minikube
  • minikube addons – what they are and how they can help you
  • minikube docker env – using DOCKER_HOST with minikube VM
  • Kubernetes dashboard with Heapster and Metrics Server – made easy by Minikube
  • kubectl – some examples and alternatives
  • example app – “hello (Kubernetes) world” minikube style with NGINX, scaling your world

and the second post covers:

  • Helm and Tiller – what they are, when & why you’d maybe use them
  • Helm and Tiller – prep, install and Helm Charts
  • Deploying Jenkins via Helm Charts
  • and WordPress w/MariaDB too
  • wrap up

Minikube – what it is (& isn’t) & why you’d use it (or not)


What it is, why you’d use it etc.

Local development of k8s – runs a single node Kubernetes cluster in a Virtual Machine on your laptop/PC.

All about making things easy for local development, it is not a production solution, or even close to it.

There are many other ways to run k8s, they all have their pros and cons and use cases. The slides form the Meetup covered this on more detail and include links for further info – they are available here:

Kubernetes and Minikube components and concepts

The (above) slides also cover this section:

  • Kubernetes components and concepts
  • what it solves
  • how Minikube works


Setup for Mac and Linux

There are three things you need to set up for this:

  • VirtualBox: https://www.virtualbox.org/wiki/Downloads
  • Minikube: https://kubernetes.io/docs/tasks/tools/install-minikube/
  • kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/

Using Ubuntu for example:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/v1.1.0/minikube-linux-amd64 && chmod +x minikube && sudo cp minikube /usr/local/bin/ && rm minikube

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubectl

chmod +x ./kubectl

sudo mv ./kubectl /usr/local/bin/kubectl

Cleanup/prep – if required, remove any previous cluster & settings

`minikube delete; rm -rf ~/.minikube`

Creating a first Kubernetes cluster in Minikube

Here we create a first Kubernetes cluster with Minikube, then take a look around in & outside of the VM.

With the above initial setup done, it’s as simple as running this in a shell:

minikube start

Note that you could optionally give this Cluster a name, if you are likely to have more than one – for different branches of development, for example. This is also where you could specify the VM provider if you want to use something other than VirtualBox – there are more details here:

https://kubernetes.io/docs/setup/learning-environment/minikube/#starting-a-cluster
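For example – flag names vary a little between Minikube versions, but with the v1.x releases used here a named cluster on VirtualBox would look something like:

minikube start -p my-dev-cluster --vm-driver=virtualbox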

This should produce output like the following, and it may well take a few minutes as the VM is downloaded and started, then a stack of Docker images are started up inside that….

At this point you should be able to see the minikube VM running in the VirtualBox GUI:

Now that it’s running, we can connect from our local shell directly to the one inside the running VM by simply issuing:

minikube ssh

This will put you inside the VM where the Kubernetes Cluster is being run, and we can see and interact with the running components, for example:

docker images

should show all of the downloaded images:

and you could do this to see the running containers:

docker ps

Quitting out of the VM puts us back on the local host, where we can use kubectl to query the status of the Minikube cluster – the initial setup has told kubectl about the Minikube-managed Kubernetes Cluster, meaning there’s no other setup required here:

kubectl cluster-info

kubectl get nodes

kubectl describe nodes

minikube addons – what they are and how they can help you

Show some of the ways minikube makes things easier for local dev

First, take a moment to look around these two local folders:

ls -al ~/.minikube; ls -al ~/.kube

These are where Minikube keeps its settings and the VM Image, and where kubectl settings are persisted – and updated by Minikube.

With Minikube you’ve often got the option to either use kubectl directly, or to use some Minikube built-in features to make your life easier.

Addons are one of these features, allowing you to very easily add – or remove – functionality from the cluster like this:

minikube addons list

minikube addons enable heapster

minikube addons enable metrics-server

With those three lines we’ve taken a look at the available addons and their current status, and selected to enable both heapster and the metrics server. This was done to give us cpu and mem stats in the Kubernetes Dashboard, which we will set up in a moment. The output should look something like this:

minikube config view

shows the current state of the config – i.e. what changes have been made, so we can keep track of them easily.

kubectl --namespace kube-system get pods

now we can enable the dashboard:

minikube addons enable dashboard

and check again to see the current state

minikube addons list

we’ll connect to the Dashboard and take a look around in a moment, but first…

minikube docker env – using the DOCKER_HOST in your minikube VM – how & why


Minikube docker-env – set up the local docker client to use the minikube docker host

We’re going to look at connecting our local docker client to the docker host inside the Minikube VM. This is made easy by:

minikube docker-env

if you run that command on its own it will show you what settings it would export, and you can set them by doing:

eval $(minikube docker-env)

From then on, in that shell, your local docker commands will use the docker host inside Minikube.

This is very useful for debugging and local development – when you change and deploy anything to your Kubernetes Cluster, you can easily tail the logs or check for errors or issues. You can also do all of this via the dashboard or kubectl too if you prefer, but it’s another handy and powerful feature from Minikube.

The following image shows the result of running this command:

eval $(minikube docker-env) && docker ps | grep -i metrics

so we can now use our local docker client to run docker commands like…

docker ps

docker ps | grep -i metrics

docker logs -f <some container id>

etc.

Kubernetes dashboard with Heapster and Metrics Server – made easy by Minikube

Minikube k8s dashboard – here we will start up the k8s dashboard and take a look around.

We’ve delayed starting the dashboard up until after we enabled the metrics-server & heapster components we deployed earlier. By doing it in this order, the dashboard will automatically detect and use these components, giving us cpu & mem stats and a nicer looking dash, with no additional config required.

Starting the dashboard simply involved running

minikube dashboard

and waiting for a minute…

That should fire up your browser automatically, then you can take a look around at things like Default namespace > Nodes

and in the namespace kube-system > Deployments

and kube-system > Pods

You can see the logs and statuses of everything running in your k8s cluster – from the core components we covered at the start, to the dashboard, metrics and heapster we enabled recently, and the application we’re going to deploy and scale up soon.

kubectl – some examples and alternatives

# kubectl command line – look at kubectl and keep an eye on things
kubectl get deployment -n kube-system

kubectl get pods -o wide -n kube-system

kubectl get services

kubectl

example app – “hello (Kubernetes) world” minikube style with NGINX, scaling your world

Now we’ll deploy the most basic application we can – a “Hello World” style NGINX docker image.

It’s as simple as this, where nginx is the name of the docker image you want to deploy, hello-nginx is the name you want to give the deployment, and port 80 is where you want it to listen:

kubectl run hello-nginx --image=nginx --port=80

that shouldn’t take long, and you can watch the progress like this:

kubectl get pods -o wide

We can then expose the deployment using NodePort:

kubectl expose deployment hello-nginx --type=NodePort

then we can ask Minikube to provide the URL for Ingress:

minikube service --url=true hello-nginx

and hitting that URL in your browser should show the obvious:

“Welcome to nginx!

If you see this page, the nginx web server is successfully installed and working. Further configuration is required.”

you can keep an eye on the Service with

kubectl get svc

while we scale to x3 replicas:

kubectl scale --replicas=3 deployment/hello-nginx

and take a look at what happens with

kubectl get deployment

kubectl get pods -o wide

or check in the Dashboard to see something like this:

and monitor what’s going on in our “hello world” NGINX app with kubectl then scale it down to 0 or 1 or whatever you like…

kubectl get deployment

kubectl get pods -o wide

kubectl scale --replicas=0 deployment/hello-nginx

Next post – Helm & Tiller onwards…

Jenkins Global Pipeline Libraries – a v.quick start guide

This post runs through the steps required to start using Global Pipeline Libraries in your Jenkins Pipelines.

There are many posts about these all over the ‘net, but they mostly seemed overly complex and not too helpful to me – I just wanted to know how to get the most basic example possible working quickly on my dev Jenkins instance, so I could see how they work in practice and take it from there.

That’s what this post covers – getting a simple “Hello World” type example library published and made available in Jenkins, then calling it very easily from within a Pipeline job with the expected results. More detail and advanced usage to come later… these are a very powerful addition to Jenkins pipelines.

This is done in three simple and logical steps:

Create a Library and Publish it

Tell Jenkins about this nice new library

Calling the Global Library from my Jenkins Pipeline


The first step is to…

Create your Library and publish it somewhere.

I have reused one of my existing GitHub repos: https://github.com/DonaldSimpson/groovy.git for this example, but most version control systems should do just as well.

That’s all that’s needed for this most-basic example – here is the code in plain text, as taken from the guide here:

#!/usr/bin/env groovy
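// saved as vars/sayHello.groovy in the library repo – the file name becomes the step name you call from your Pipeline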
def call(String name = 'human') {
    // Any valid steps can be called from this code, just like in other
    // Scripted Pipeline
    echo "Hello, ${name}."
}

It is important to note that the file is in a “vars” directory – this is the naming convention Jenkins expects to find your groovy libraries within, and is best followed.

A. Note

Next step is to:

Tell Jenkins about this nice new library

This is done by going to Manage Jenkins then Configure System, then scrolling down to Global Pipeline Libraries and defining a new instance of one, just like this:

The settings used here are:

Name: mycommonlibs // any name you’d like to reference these libraries by

Default version: master // or use a branch or version number if you prefer

I then checked the three tick boxes, especially Load implicitly, which removes the need to load Libraries explicitly in your Jenkinsfile (explicit loading may be very useful depending on your needs, but I want simple and easy for now).

The final section tells Jenkins where this Library is:

https://github.com/DonaldSimpson/groovy.git

and I provide a user to access GitHub with.

That is all that is needed to set up a Library and tell Jenkins all about it.

Note that anyone with write access to the location of your defined Libraries will effectively have full access to your Jenkins instance

W. Arning

And finally, it’s time for a test drive…

Calling the Global Library from my Jenkins Pipeline:

    sayHello ()
    sayHello 'Donald'

To end up with a mega-basic Pipeline that looks like this:

When this Jenkins Pipeline job is run, it generates the following output:

Summary

As you can see, this means that Jenkins has pulled in the Shared Library from GitHub, resolved and called the sayHello() method from the remote common library, called it again with a passed parameter (‘Donald’) and produced the expected results. Yay. How neat and how easy was that?

There’s a whole lot more you can do with Global Pipeline Libraries in Jenkins. From this point you can easily add complexity and functionality to build up a library of powerful and useful utilities that will greatly improve the quality and manageability of your Pipelines.

I plan to expand on some of these points in a later post, but hopefully this shows how to quickly and easily start using them.

Cheers,

Don

Kubernetes – adding persistent storage to the Cluster

Previously

In the last Kubernetes post…

I wrote about getting Helm and Tiller working on the Kubernetes Cluster I set up here…

There was an obvious flaw in the example MySQL Chart I deployed via Helm and Tiller, in that the required Persistent Volume Claims could not be satisfied, so the pod was stuck in a “Pending” state forever.

Adding Persistent Storage

In this post I will sort that out, by adding Persistent Storage to the Cluster and redeploying and testing the same Chart deployed via “helm install stable/mysql“. This time, it should be able to claim all of the resources it needs with no tweaking or hints supplied…

First a few notes on some of the commands and tools I used for troubleshooting what was wrong with the mysql deploy.

watch -d 'sudo kubectl get pods --all-namespaces -o wide'

watch -d kubectl describe pod wise-mule-mysql

kubectl attach wise-mule-mysql-d69788f48-zq5gz -i

The above commands showed a pod that generally wasn’t happy or connectable, but little detail.

Running “kubectl get events -w” is much more informative:

LAST SEEN   TYPE      REASON              KIND                    MESSAGE
17m         Warning   FailedScheduling    Pod                     pod has unbound immediate PersistentVolumeClaims
17m         Normal    SuccessfulCreate    ReplicaSet              Created pod: quaffing-turkey-mysql-65969c88fd-znwl9
2m38s       Normal    FailedBinding       PersistentVolumeClaim   no persistent volumes available for this claim and no storage class is set
17m         Normal    ScalingReplicaSet   Deployment              Scaled up replica set quaffing-turkey-mysql-65969c88fd to 1

and doing “kubectl describe pod <pod name>” is also very useful:

<snip a whole load of events and details>
  Type     Reason            Age                    From               Message
  ----     ------            ----                   ----               -------
  Warning  FailedScheduling  5m26s (x2 over 5m26s)  default-scheduler  pod has unbound immediate PersistentVolumeClaims

Making it pretty clear what’s going on and exactly what is noticeably absent from the Cluster.

My initial plan had been to use GlusterFS and Heketi, but having dabbled with that before I knew it wasn’t really something I wanted to do for this use case – it was a bit of Yak Shaving I’d really like to avoid if possible.

So, I had a look around and found “Rook“. This sounded much simpler and more suited to my needs. It’s also open source, Apache licensed, and works on multi-node clusters. I’d previously considered using hostPath storage but it’s a bit too basic even for here, and would restrict me to a single node cluster due to the (lack of) replication, missing a lot of the point of a Cluster, so I thought I’d give Rook a shot.

Here’s the guide on deploying Rook that I used:

https://github.com/hobby-kube/guide#deploying-rook

Which says to

Apply the storage manifests in the following order:

storage/00-namespace.yml

storage/operator.yml (wait for the rook-agent pods to be deployed kubectl -n rook get pods before continuing)

storage/cluster.yml

storage/storageclass.yml

storage/tools.yml

I tried to follow this but had some issues, which I will try and clarify when I run through this again – I’d made a bit of a mess trying a bit of Gluster and some hostPath and messing about with the default storage class etc, so it was quite possibly “just me”, and not Rook to blame here 🙂 This is some of my shell history:

kubectl apply -f https://raw.githubusercontent.com/rook/rook/release-0.5/cluster/examples/kubernetes/rook-operator.yaml
kubectl apply -f https://raw.githubusercontent.com/rook/rook/release-0.5/cluster/examples/kubernetes/rook-cluster.yaml
kubectl apply -f https://raw.githubusercontent.com/rook/rook/release-0.5/cluster/examples/kubernetes/rook-storageclass.yaml
kubectl -n rook get pods
kubectl apply -f https://github.com/hobby-kube/manifests/blob/master/storage/00-namespace.yml
kubectl apply -f https://github.com/hobby-kube/manifests/blob/master/storage/00-namespace.yml
kubectl apply -f https://github.com/hobby-kube/manifests/blob/master/storage/00-namespace.yml
kubectl apply -f https://raw.githubusercontent.com/rook/rook/release-0.5/cluster/examples/kubernetes/rook-operator.yaml
kubectl apply -f https://raw.githubusercontent.com/rook/rook/release-0.5/cluster/examples/kubernetes/rook-cluster.yaml
watch -d 'sudo kubectl get pods --all-namespaces -o wide'
kubectl apply -f https://raw.githubusercontent.com/rook/rook/release-0.5/cluster/examples/kubernetes/rook-storageclass.yaml

I definitely ran through this more than once, and I think it also took a while for things to start up and work – the subsequent runs went much better than the initial ones anyway. I also applied a few patches to the rook user and storage class (below) – these and many other alternatives were recommended by others facing similar sounding issues, but I think for me the fundamental is solved further below, re the rbd binary missing from $PATH, and installing ceph:


kubectl get secret rook-rook-user -oyaml | sed "/resourceVer/d;/uid/d;/self/d;/creat/d;/namespace/d" | kubectl -n kube-system apply -f -

kubectl get secret rook-rook-user -oyaml | sed "/resourceVer/d;/uid/d;/self/d;/creat/d;/namespace/d" | kubectl -n default -f -
 kubectl get secret rook-rook-user -oyaml | sed "/resourceVer/d;/uid/d;/self/d;/creat/d;/namespace/d" | kubectl -n default apply -f -
kubectl patch storageclass rook-block -p '{"metadata":{"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'

That all done, I still had issues with my pods, specifically this error:

MountVolume.WaitForAttach failed for volume "pvc-4895a379-104b-11e9-9d98-000c29702bc8" : fail to check rbd image status with: (executable file not found in $PATH), rbd output: ()

which took me a little while to figure out. I think reading this page on RBD gave me the hint that there was something (well yeah, the rbd binary specifically) missing on the hosts, but there’s a lot of talk of folk solving this by creating custom images with the rbd binary added to the $PATH in them, replacing core k8s containers with them, which didn’t sound too appealing to me. I had assumed that the images would include the binaries, but hadn’t checked this in any way.
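A quick way to check for this on each host is simply:

which rbd || echo "rbd not found in PATH"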

This issue may well be part or possibly all of the reason why I ran the above commands repeatedly and applied all of those patches.

The simple yet not too obvious solution to this – in my case anyway – was to ensure that the ceph-common package was available both on the master:

apt-get update && apt-get install ceph-common -y

and critically that it was also available on each of the worker nodes too.
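A quick way to do that in one go – the hostnames and user here are from my setup, so adjust to match your own inventory – is something like:

for node in ubuntu01 ubuntu02 ubuntu03 ubuntu04; do
  ssh ansible@$node "sudo apt-get update && sudo apt-get install -y ceph-common"
done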

Once that was done, I think I deleted and reapplied everything rook-related again, then things started working as they should, finally.

A quick check:

ansible@umaster:~$ kubectl get sc
NAME                   PROVISIONER     AGE
rook-block (default)   rook.io/block   22h

And things are looking much better now.

Checking the Dashboard I can see a Rook namespace with a number of Rook pods all looking green, and Persistent Volume Claims in the default namespace too:

Test with an example – “helm install stable/mysql”, take 2…

To verify this I re-ran the same Helm Chart for mysql, with no changes or overrides, to ensure that rook provisioning was working, and that it was properly detected and used as the default storage class in the Cluster with no args/hints needed.

The output from running “helm install stable/mysql” includes this info:


MySQL can be accessed via port 3306 on the following DNS name from within your cluster:
donmysql.default.svc.cluster.local

To get your root password run:

    MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default donmysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)

To connect to your database:

1. Run an Ubuntu pod that you can use as a client:

    kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il

2. Install the mysql client:

    $ apt-get update && apt-get install mysql-client -y

3. Connect using the mysql cli, then provide your password:
    $ mysql -h donmysql -p

So I tried the above, opting to create an ubuntu client pod, installing mysql utils to that then connecting to the above MySQL instance with the root password like so:

ansible@umaster:~$  MYSQL_ROOT_PASSWORD=$(kubectl get secret --namespace default donmysql  -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo)
ansible@umaster:~$ echo $MYSQL_ROOT_PASSWORD
<THE ROOT PASSWORD WAS HERE>
ansible@umaster:~$ kubectl run -i --tty ubuntu --image=ubuntu:16.04 --restart=Never -- bash -il
If you don't see a command prompt, try pressing enter.
root@ubuntu:/#
root@ubuntu:/# apt-get update && apt-get install mysql-client -y
Get:1 http://archive.ubuntu.com/ubuntu xenial InRelease [247 kB]
Get:2 http://security.ubuntu.com/ubuntu xenial-security InRelease [107 kB]
<snip a load of boring apt stuff>
Setting up mysql-common (5.7.24-0ubuntu0.16.04.1) ...
update-alternatives: using /etc/mysql/my.cnf.fallback to provide /etc/mysql/my.cnf (my.cnf) in auto mode
Setting up mysql-client-5.7 (5.7.24-0ubuntu0.16.04.1) ...
Setting up mysql-client (5.7.24-0ubuntu0.16.04.1) ...
Processing triggers for libc-bin (2.23-0ubuntu10) ...
root@ubuntu:/# mysql -h donmysql -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 67
Server version: 5.7.14 MySQL Community Server (GPL)
<snip some more boring stuff>
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)
mysql> exit
Bye
root@ubuntu:/

In the Kubernetes Dashboard (loads more on that little adventure coming soon!) I can also see that the MySQL Pod is Running and looks happy, no more Pending or Init issues for me now:

and that the Rook Persistent Volume Claims are present and looking healthy too:

Conclusion & next steps

That’s storage sorted, kind of – I’m not totally happy everything I did was needed, correct and repeatable yet, or that I know enough about this.

Rook.io looks very good and I’m happy it’s the best solution for my current needs, but I can see that I should have spent more time reading the documentation and thinking about prerequisites, yadda yadda. To be honest when it comes to storage I’m a bit of a Luddite – I just want it to be there and work as I’d expect it to, and I was keen to move on to the next steps….

I plan to scrub the k8s cluster shortly and run through this again from scratch to make sure I’ve got it clear enough to add to my provisioning pipeline process.

Next, a probably not-too-brief post on how I got Heapster stats working with an InfluxDB backend monitoring stats for both the Master and Nodes, installing a usable Kubernetes Dashboard, and getting that working with suitable access/permissions, aaaaand getting the k8s Dashboard showing the CPU and Memory stats from Heapster as seen in the Dashboard pic of the pod statuses above…. phew!

Kubernetes – adding Helm and Tiller and deploying a Chart

Introduction

This is Step 2 in my recent series of Kubernetes blog posts.

Step 0 covers the initial host creation and basic provisioning with Ansible: https://www.donaldsimpson.co.uk/2019/01/03/kubernetes-setting-up-the-hosts/

Step 1 details the Kubernetes install and putting the cluster together, as well as reprovisioning it: https://www.donaldsimpson.co.uk/2018/12/29/kubernetes-from-cluster-reset-to-up-and-running/

Caveat

My aim here is to create a Kubernetes environment on my home lab that allows me to play with k8s and related technologies, then quickly and easily rebuild the cluster and start over.

The focus here in on trying out new technologies and solutions and in automating processes, so in this particular context I am not at all bothered with security, High Availability, redundancy or any of the usual considerations.

Helm and Tiller

The quick start guide is very good: https://docs.helm.sh/using_helm/ and I used this as I went through the process of installing Helm, initializing Tiller and deploying it to my Kubernetes cluster, then deploying a first example Chart to the Cluster. The following are my notes from doing this, as I plan to repeat then automate the entire process and am bound to forget something later 🙂

From the Helm home page, Helm describes itself as

The package manager for Kubernetes

and states that

Helm is the best way to find, share, and use software built for Kubernetes.

I have been following this project for a while and it looks to live up to the hype – there’s a rapidly growing and pretty mature collection of Helm Charts available here: https://github.com/helm/charts/tree/master/stable which as you can see covers an impressive amount of things you may want to use in your own Kubernetes cluster.

Get the Helm and Tiller binaries

This is as easy as described – for my architecture it meant simply

wget https://storage.googleapis.com/kubernetes-helm/helm-v2.12.1-linux-amd64.tar.gz

and extract and copy the 2 binaries (helm & tiller) to somewhere in your path
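The tarball extracts to a linux-amd64 directory, so that looks something like:

tar -zxvf helm-v2.12.1-linux-amd64.tar.gz
sudo cp linux-amd64/helm linux-amd64/tiller /usr/local/bin/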

I usually do a quick sanity test or 2 – e.g. running “which helm” as a non-root user and maybe check “helm --help” and “helm version” all say something sensible too.

Install Tiller

Helm is the Client side app that directs Tiller, which is the Server side part. Just like steering a ship… and stretching the Kubernetes nautical metaphors to the max.

Tiller can be installed to your k8s Cluster simply by running “helm init“, which should produce output like the following:


ansible@umaster:~/helm$ helm init
Creating /home/ansible/.helm
Creating /home/ansible/.helm/repository
Creating /home/ansible/.helm/repository/cache
Creating /home/ansible/.helm/repository/local
Creating /home/ansible/.helm/plugins
Creating /home/ansible/.helm/starters
Creating /home/ansible/.helm/cache/archive
Creating /home/ansible/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /home/ansible/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
To prevent this, run `helm init` with the --tiller-tls-verify flag.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming

That should do it, and a quick check of running pods confirms we now have a tiller pod running inside the kubernetes cluster in the kube-system namespace:

ansible@umaster:~/helm$ sudo kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                              READY   STATUS    RESTARTS   AGE     IP             NODE       NOMINATED NODE   READINESS GATES
kube-system   coredns-86c58d9df4-mg8b9          1/1     Running   0          22h     10.244.0.11    umaster    <none>           <none>
kube-system   coredns-86c58d9df4-zv24d          1/1     Running   0          22h     10.244.0.10    umaster    <none>           <none>
kube-system   etcd-umaster                      1/1     Running   0          22h     192.168.0.46   umaster    <none>           <none>
kube-system   kube-apiserver-umaster            1/1     Running   0          22h     192.168.0.46   umaster    <none>           <none>
kube-system   kube-controller-manager-umaster   1/1     Running   0          22h     192.168.0.46   umaster    <none>           <none>
kube-system   kube-flannel-ds-amd64-2npnw       1/1     Running   0          14h     192.168.0.46   umaster    <none>           <none>
kube-system   kube-flannel-ds-amd64-lpphn       1/1     Running   0          7m13s   192.168.0.43   ubuntu01   <none>           <none>
kube-system   kube-proxy-b7rwv                  1/1     Running   0          22h     192.168.0.46   umaster    <none>           <none>
kube-system   kube-proxy-wqw8c                  1/1     Running   0          7m13s   192.168.0.43   ubuntu01   <none>           <none>
kube-system   kube-scheduler-umaster            1/1     Running   0          22h     192.168.0.46   umaster    <none>           <none>
kube-system   tiller-deploy-6f8d4f6c9c-v8k9x    1/1     Running   0          112s    10.244.1.21    ubuntu01   <none>           <none>

So far so nice and easy, and as per the docs the next steps are to do a repo update and a test chart install…

ansible@umaster:~/helm$ helm repo update
Hang tight while we grab the latest from your chart repositories…
…Skip local chart repository
…Successfully got an update from the "stable" chart repository
Update Complete. ⎈ Happy Helming!⎈
ansible@umaster:~/helm$ helm install stable/mysql
Error: no available release name found
ansible@umaster:~/helm$

Doh. A quick google makes that “Error: no available release name found” look like a k8s/helm version conflict, but the fix is pretty easy and detailed here: https://github.com/helm/helm/issues/3055


So I did as suggested, creating a service account and cluster role binding, then patching the Tiller deployment in the kube-system namespace:

kubectl create serviceaccount --namespace kube-system tiller 
kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

and all then went ok:

ansible@umaster:~/helm$ kubectl create serviceaccount --namespace kube-system tiller
serviceaccount/tiller created

ansible@umaster:~/helm$ kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
clusterrolebinding.rbac.authorization.k8s.io/tiller-cluster-rule created

ansible@umaster:~/helm$ kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'
deployment.extensions/tiller-deploy patched
ansible@umaster:~/helm$

From then on everything went perfectly and as described:

I tried the example mysql chart from the guide (https://docs.helm.sh/using_helm/) like this:

helm install stable/mysql

and checked it with "helm ls":

ansible@umaster:~/helm$ helm ls
NAME              REVISION  UPDATED                   STATUS    CHART         APP VERSION  NAMESPACE
dunking-squirrel  1         Thu Jan 3 15:38:37 2019   DEPLOYED  mysql-0.12.0  5.7.14       default
ansible@umaster:~/helm$

and all is groovy. Then list the pods with:

ansible@umaster:~/helm$ sudo kubectl get pods --all-namespaces -o wide

NAMESPACE NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
default dunking-squirrel-mysql-bb478fc54-4c69r 0/1 Pending 0 105s
kube-system coredns-86c58d9df4-mg8b9 1/1 Running 0 22h 10.244.0.11 umaster
kube-system coredns-86c58d9df4-zv24d 1/1 Running 0 22h 10.244.0.10 umaster
kube-system etcd-umaster 1/1 Running 0 22h 192.168.0.46 umaster
kube-system kube-apiserver-umaster 1/1 Running 0 22h 192.168.0.46 umaster
kube-system kube-controller-manager-umaster 1/1 Running 0 22h 192.168.0.46 umaster
kube-system kube-flannel-ds-amd64-2npnw 1/1 Running 0 15h 192.168.0.46 umaster
kube-system kube-flannel-ds-amd64-lpphn 1/1 Running 0 45m 192.168.0.43 ubuntu01
kube-system kube-proxy-b7rwv 1/1 Running 0 22h 192.168.0.46 umaster
kube-system kube-proxy-wqw8c 1/1 Running 0 45m 192.168.0.43 ubuntu01
kube-system kube-scheduler-umaster 1/1 Running 0 22h 192.168.0.46 umaster
kube-system tiller-deploy-8485766469-62c22 1/1 Running 0 2m17s 10.244.1.22 ubuntu01
ansible@umaster:~/helm$

The MySQL pod is failing to start as it has persistent volume claims defined, and I’ve not set up default storage for that yet – that’s covered in the next step/post 🙂

If you want to use or delete that MySQL deployment all the details are in the rest of the getting started guide – for the above it would mean doing a ‘helm ls‘ then a ‘ helm delete <release-name> ‘ where <release-name> is ‘dunking-squirrel’ or whatever you have.

A little more on Helm

Just running out of the box Helm Charts is great, but obviously there’s a lot more you can do with Helm, from customising the existing Stable Charts to suit your needs, to writing and deploying your own Charts from scratch. I plan to expand on this in more detail later on, but will add and update some notes and examples here as I do:

You can clone the Helm github repo locally:

git clone https://github.com/kubernetes/charts.git

and edit the values for a given Chart:

vi charts/stable/mysql/values.yaml

then use your settings to override the defaults:

helm install --name=donmysql -f charts/stable/mysql/values.yaml stable/mysql

using a specified name makes installing and deleting much easier to automate:

helm del donmysql

and the Helm ‘release’ lifecycle is quite docker-like:

helm ls -a
helm del --purge donmysql

There are some Helm tips & tricks here that I’m working my way through:

https://github.com/helm/helm/blob/master/docs/charts_tips_and_tricks.md

in conjunction with this Bitnami doc:

https://docs.bitnami.com/kubernetes/how-to/create-your-first-helm-chart/


Conclusion

For me and for now, I’m just happy that Helm, Tiller and Charts are working, and I can move on to automating these setup steps and adding some testing to my overall pipelines. And sorting out the persistent volumes too. After that’s all done I plan to start playing around with some of the stable (and perhaps not so stable) Helm charts.

As they said, this could well be “the best way to find, share, and use software built for Kubernetes” – it’s very slick!

Kubernetes – setting up the hosts

Introduction

This is Step 0 in my recent Kubernetes setup where I very quickly describe the process followed to build and configure the basic requirements for a simple Kubernetes cluster.

Step 1 is here https://www.donaldsimpson.co.uk/2018/12/29/kubernetes-from-cluster-reset-to-up-and-running/

and Step 2 where I set up Helm and Tiller and deploy an initial chart to the cluster is coming very soon

The TL/DR

A quick summary should cover 99% of this, but I wanted to make sure I’d recorded my process/journey to get there – to cut a long story short, I ended up using this Ansible project:

https://github.com/DonaldSimpson/ansible-kubeadm


which I forked from the original here:

https://github.com/ben-st/ansible-kubeadm

on the 5 Ubuntu linux hosts I created by hand (the horror) on my VMWare ESX home lab server. I started off writing my own ansible playbook which did the job, then went looking for improvements and found the above fitted my needs perfectly.

The inventory file here: https://github.com/DonaldSimpson/ansible-kubeadm/blob/master/inventory details the addresses and functions of the 5 hosts – 4 x workers and a single master, which I’m planning on keeping solely for the master role.

My notes:

Host prerequisites are in my rough notes below – simple things like ssh keys, passwordless sudo from the ansible user, installing required tools like python, setting suitable ip addresses and adding the users you want to use. Also allocating suitable amounts of mem, cpu and disk – all of which are down to your preference, availability and expectations.

https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/

ubuntumaster is 192.168.0.46
su - ansible
check history

ansible setup

https://www.howtoforge.com/tutorial/setup-new-user-and-ssh-key-authentication-using-ansible/
1 x master:
 - sudo apt-get install open-vm-tools-desktop
 - sudo apt install openssh-server vim whois python ansible
 - export TERM=linux (re https://stackoverflow.com/questions/49643357/why-p-appears-at-the-first-line-of-vim-in-iterm)
 - /etc/hosts:
127.0.1.1       umaster
192.168.0.43    ubuntu01
192.168.0.44    ubuntu02
192.168.0.45    ubuntu03
// slave nodes need: ssh-rsa AAAAB3NzaC1y<snip>fF2S6X/RehyyJ24VhDd2N+Dh0n892rsZmTTSYgGK8+pfwCH/Vv2m9OHESC1SoM+47A0iuXUlzdmD3LJOMSgBLoQt ansible@umaster
// added to the root user's authorized keys in .ssh, and:
apt install python ansible -y
useradd -m -s /bin/bash ansible
passwd ansible <type the password you want>

echo -e 'ansible\tALL=(ALL)\tNOPASSWD:\tALL' > /etc/sudoers.d/ansible
echo -e 'don\tALL=(ALL)\tNOPASSWD:\tALL' > /etc/sudoers.d/don
mkpasswd --method=SHA-512 <type password "secret">
Password:
$6$dqxHiCXHN<snip>rGA2mvE.d9gEf2zrtGizJVxrr3UIIL9Qt6JJJt5IEkCBHCnU3nPYH/
su - ansible
ssh-keygen -t rsa

cd ansible01/
vim inventory.ini
ansible@umaster:~/ansible01$ cat inventory.ini
[webserver]
ubuntu01 ansible_host=192.168.0.43
ubuntu02 ansible_host=192.168.0.44
ubuntu03 ansible_host=192.168.0.45

ansible@umaster:~/ansible01$ cat ansible.cfg
[defaults]
 inventory = /home/ansible/ansible01/inventory.ini
ansible@umaster:~/ansible01$ ssh-keyscan 192.168.0.43 >> ~/.ssh/known_hosts
# 192.168.0.43:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4
# 192.168.0.43:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4
# 192.168.0.43:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4
ansible@umaster:~/ansible01$ ssh-keyscan 192.168.0.44 >> ~/.ssh/known_hosts
# 192.168.0.44:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4
# 192.168.0.44:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4
# 192.168.0.44:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4
ansible@umaster:~/ansible01$ ssh-keyscan 192.168.0.45 >> ~/.ssh/known_hosts
# 192.168.0.45:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4
# 192.168.0.45:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4
# 192.168.0.45:22 SSH-2.0-OpenSSH_7.6p1 Ubuntu-4
ansible@umaster:~/ansible01$ cat ~/.ssh/known_hosts
or could have done:
for i in $(cat list-hosts.txt)
do
ssh-keyscan $i >> ~/.ssh/known_hosts
done
cat deploy-ssh.yml

 - hosts: all
   vars:
     - ansible_password: '$6$dqxHiCXH<kersnip>l.urCyfQPrGA2mvE.d9gEf2zrtGizJVxrr3UIIL9Qt6JJJt5IEkCBHCnU3nPYH/'
   gather_facts: no
   remote_user: root

   tasks:

   - name: Add a new user named provision
     user:
          name=ansible
          password={{ ansible_password }}

   - name: Add provision user to the sudoers
     copy:
          dest: "/etc/sudoers.d/ansible"
          content: "ansible ALL=(ALL)  NOPASSWD: ALL"

   - name: Deploy SSH Key
     authorized_key: user=ansible
                     key="{{ lookup('file', '/home/ansible/.ssh/id_rsa.pub') }}"
                     state=present

   - name: Disable Password Authentication
     lineinfile:
           dest=/etc/ssh/sshd_config
           regexp='^PasswordAuthentication'
           line="PasswordAuthentication no"
           state=present
           backup=yes
     notify:
       - restart ssh

   - name: Disable Root Login
     lineinfile:
           dest=/etc/ssh/sshd_config
           regexp='^PermitRootLogin'
           line="PermitRootLogin no"
           state=present
           backup=yes
     notify:
       - restart ssh

   handlers:
   - name: restart ssh
     service:
       name=sshd
       state=restarted

// end of the above file

ansible-playbook deploy-ssh.yml --ask-pass

results in:

PLAY [all] *************************************************************************************

TASK [Add a new user named provision] **********************************************************
fatal: [ubuntu02]: FAILED! => {"msg": "to use the 'ssh' connection type with passwords, you must install the sshpass program"}

so for each node/slave/host:

sudo apt-get install -y sshpass
ubuntu01 ansible_host=192.168.0.43
ubuntu02 ansible_host=192.168.0.44
ubuntu03 ansible_host=192.168.0.45

kubernetes setup
https://www.techrepublic.com/article/how-to-quickly-install-kubernetes-on-ubuntu/

run install_apy.yml against all hosts and localhost too
on master:

kubeadm init

results in:

root@umaster:~# kubeadm init
[init] using Kubernetes version: v1.11.1
[preflight] running pre-flight checks
I0730 15:17:50.330589   23504 kernel_validator.go:81] Validating kernel version
I0730 15:17:50.330701   23504 kernel_validator.go:96] Validating kernel config
    [WARNING SystemVerification]: docker version is greater than the most recently validated version. Docker version: 17.12.1-ce. Max validated version: 17.03
[preflight] Some fatal errors occurred:
    [ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
root@umaster:~#
so do swapoff -a, then try again:

kubeadm init… wait for images to be pulled etc – takes a while

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run “kubectl apply -f [podnetwork].yaml” with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.0.46:6443 --token 9e85jo.77nzvq1eonfk0ar6 --discovery-token-ca-cert-hash sha256:61d4b5cd0d7c21efbdf2fd64c7bca8f7cb7066d113daff07a0ab6023236fa4bc
root@umaster:~#

Next up…

The next post in the series is here: https://www.donaldsimpson.co.uk/2018/12/29/kubernetes-from-cluster-reset-to-up-and-running/ and details an automated process to scrub my cluster and reprovision it (form a Kubernetes point of view – the hosts are left intact).

Kubernetes – from cluster reset to up and running

These are notes on going from a freshly reset kubernetes cluster to a running & healthy cluster with a pod network applied and worker nodes connected.

To get to this starting point I provisioned 4 Ubuntu hosts (1 master & 3 workers) on my VMWare server – a Dell Poweredge R710 with 128GB RAM.

I then used this Ansible project:

https://github.com/DonaldSimpson/ansible-kubeadm

to configure the hosts and prep for Kubernetes with kubeadm:

I’ll write about this in more detail in another post…

Please note that none of this is production grade or recommended, it’s simply what I have done to suit my needs in my home lab. My focus is on automating Kubernetes processes and deployments, not creating highly available bullet-proof production systems.

To reset and restore a ‘new’ cluster, first on the master instance – reboot and as a normal user (I’m using an “ansible” user with sudo throughout):


sudo kubeadm reset
(y)
sudo swapoff -a
sudo kubeadm init --pod-network-cidr=10.244.0.0/16

I’m passing that CIDR address as I’m using Flannel for pod networking (details follow) – if you use something else you may not need that, but may well need something else.

That should be the MASTER started, with a message to add nodes with:


  kubeadm join 192.168.0.46:6443 --token 9w09pn.9i9uu1ht8gzv36od --discovery-token-ca-cert-hash sha256:4bb0bbb1033a96347c6dd888c769ec9c5f6caa1b699066a58720ffdb97a0f3d7

which all sounds good, but the first most basic check produces the following error:


ansible@umaster:~$ kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Unable to connect to the server: x509: certificate signed by unknown authority (possibly because of "crypto/rsa: verification error" while trying to verify candidate authority certificate "kubernetes")

which I think is due to the kubeadm reset cleaning up the previous config, but can be easily fixed with this:


mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

then it works and MASTER is up and running ok:


ansible@umaster:~$ sudo kubectl cluster-info
Kubernetes master is running at https://192.168.0.46:6443
KubeDNS is running at https://192.168.0.46:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

————- ADD NODES ——————

Use the command and token provided by the master on the worker node(s) (in my case that’s “ubuntu01” to “ubuntu04”). Again I’m running as the ansible user everywhere, and I’m disabling swap and doing a kubeadm reset first as I want this repeatable:

sudo swapoff -a
sudo kubeadm reset
sudo  kubeadm join 192.168.0.46:6443 --token 9w09pn.9i9uu1ht8gzv36od --discovery-token-ca-cert-hash sha256:4bb0bbb1033a96347c6dd888c769ec9c5f6caa1b699066a58720ffdb97a0f3d7

I think the token expires after a few hours. If you want to get a new one you can query the Master using:

https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm-token/

Or, as I’ve just found out, the more recent versions of k8s provide “kubeadm token create --print-join-command”, which produces output like the following example that you can save to a file/variable/whatever:

kubeadm join 192.168.0.46:6443 --token 8z5obf.2pwftdav48rri16o --discovery-token-ca-cert-hash sha256:2fabde5ad31a6f911785500730084a0e08472bdcb8cf935727c409b1e94daf44

I believe options to specify json or alternative output formatting are in the works too.

That’s all that is needed – if you’ve not used this node before it may take a while to pull things in, but if you have it should be pretty much instant.

When ready, running a quick check on the MASTER shows the connected node (ubuntu01) and the Master (umaster) and their status:


ansible@umaster:~$ sudo kubectl get nodes --all-namespaces
NAME       STATUS     ROLES    AGE     VERSION
ubuntu01   NotReady   <none>   27s     v1.13.1
umaster    NotReady   master   8m26s   v1.13

The NotReady status is because there’s no pod network available – see here for details and options:

https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network

so apply a pod network (I’m using flannel) like this on the Master only:


ansible@umaster:~$ sudo kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created

Then check again and things should look better now they can communicate…


ansible@umaster:~$ sudo kubectl get nodes --all-namespaces
NAME       STATUS   ROLES    AGE     VERSION
ubuntu01   Ready    <none>   2m23s   v1.13.1
umaster    Ready    master   10m     v1.13.1
ansible@umaster:~$

Adding any number of subsequent nodes is very easy and exactly the same (the pod networking setup is a one-off step on the master only). I added all 4 of my worker vms and checked they were all Ready and “schedulable”. My server coped with this no problem at all. Note that by default you can’t schedule tasks on the Master, but this can be changed if you want to.
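If you do want to run workloads on the master too, removing the default master taint should allow it – something like:

kubectl taint nodes --all node-role.kubernetes.io/master-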

That’s the very basic “reset and restore” steps done. I plan to add this process to a Jenkins Pipeline, so that I can chain a complete cluster destroy/reprovision and application build, deploy and test process together.

The next steps I did were to:

  • install the Kubernetes Dashboard to the cluster
  • configure the Kubernetes Dashboard and fix permissions
  • deploy a sample application, replicaset & service and expose it to the network
  • configure Heapster

which I’ll post more on soonish… and I’ll add the precursor to this post on the host provisioning and kubeadm setup too.

Jenkins and Docker – Part 1 of 3

This post is the first in a series of 3 introducing the combined power of Jenkins, Docker, and the Jenkins DSL.

They should hopefully provide enough information to get to grips with both Docker and Jenkins – what they both do and how to use them – by showing some practical examples of them working together.

The first step, if you haven’t already, is to download and install Docker on your platform – the Docker website covers this in good detail for most platforms…

Docker for Mac

Docker for Windows

Docker for Linux

Once that’s done, you can try it out with the customary “Hello World” example…
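If you want to type along, the standard check is:

docker run hello-world

which pulls Docker’s tiny test image and prints a confirmation message if everything is working.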

I’m running Docker on an Ubuntu VM, but the commands and the results are the same regardless of platform – that’s one of the main Docker concepts.

You can then check which processes (docker containers) are running using the “docker ps” command – in my example you can see that there’s one Jenkins container running. If you run “docker ps -a” you will see all containers (including stopped ones, of which I have a few on this host):

and you can check your Docker version with:

root@ubuntud:~# docker --version
Docker version 1.13.0, build 49bf474

Now that the basic setup is done, we can move on to something a little more interesting – downloading and running a “Dockerised” Jenkins container.

I’m going to use my own Dockerised Jenkins Image, and there will be more detail on that in the next post – you’re welcome to try it out too, just run this command in your terminal:

docker run -d -p 8080:8080 donaldsimpson/dockerjenkins

if you don’t happen to have my docker image cached locally (like I do) then docker will automatically download it for you from Docker Hub then run it:

That command did quite a few important things; here’s a quick explanation of them all:

docker run -d

tells docker that we want to run the container in the background so that we can carry on and do other things while it runs. The alternative is -it, for an interactive/foreground session.

docker run -d -p 8080:8080

The -p 8080:8080 tells docker to map port 8080 on the local host to port 8080 in the running container. This means that when we visit localhost:8080 the request will be passed through to the container.

docker run -d -p 8080:8080 donaldsimpson/dockerjenkins

and finally, we have the namespace and name of the Docker image we want to run – my “donaldsimpson/dockerjenkins” one – more on this later!

You can now visit port 8080 on your Docker host and see that Jenkins is up and running….
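If your Docker host is headless and you don’t have a browser to hand, a quick curl from the host should show Jenkins answering on that port – any HTTP response (even a redirect to the login or setup page) confirms it’s listening:

curl -I http://localhost:8080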

 

That’s Jenkins up and running and being happily served from the Docker container that was just pulled from Docker Hub – how easy was that?!

And the best thing is, it’s entirely and reliably repeatable, it’s guaranteed to work the same on all platforms that can run Docker, and you can quickly and easily update, delete, replace, change or share it with others! Ok, that’s more than one thing, but the point is that there’s a lot to like here 🙂

That’s it for this post – in the next one we will look in to the various elements that came together to make this work – the code and configuration files in my Git repo, the automated build process on Docker Hub that builds and updates the Docker Image, and how the two are related.

Tunneling out of Carrier Grade NAT (CGNAT) with SSH and AWS

Update: there’s a new & improved solution here too.
Intro

After switching to a 4G broadband provider, who shall (pretty much) remain nameless, I discovered they were using Carrier-Grade NAT (aka CGNAT) on me.

There are more details on that here and here but in short, the ISP is ‘saving’ IPv4 addresses by sharing them out amongst several users and NAT’ing their connections – in much the same way as you do at home, when you port forward multiple devices using one external IP address: my home network is just one ‘device’ in a pool of their users, who are all sharing the same external IP address.

The impact of this for me is that I can no longer NAT my internal network services, as I have been given a shared public-facing IPv4 address. This approach may be practical for a bunch of mobile phone users wanting to check Twitter and Facebook, but it sucks big time for gamers or anyone else wanting to connect things from their home network to the internet. So, rather than having “Everything Everywhere” through my very expensive new 4G connection – with 12 months contract – it turns out I get “not much to anywhere”.

The Aim

Point being: I would like to be able to check my internal servers and websites when I’m away – especially my ZoneMinder CCTV setup – but my home broadband no longer has its own internet address. So an alternative solution had to be found…

The “TL;DR” summary

I basically use 2 servers, the one at home (unhelpfully now stuck behind my ISP’s CGNAT) and one in the Amazon Cloud (my public facing AWS web server with DNS), and create a reverse SSH Tunnel between them. Plus a couple of essential tweaks you won’t find out about if you don’t read any further 🙂

The Steps
Step 1 – create the reverse SSH tunnel:

This is initiated on the internal/home server, and connects outwards to the AWS host on the internet, like so:

ssh -N -R 8888:localhost:80 -i /home/don/DonKey.pem awsuser@ec2-xx-xx-xx-xx.compute-x.amazonaws.com

Here is an explanation of each part of this command:

-N (from the SSH man page) “Do not execute a remote command.  This is useful for just forwarding ports.”

-R (from the SSH man page)  “Specifies that connections to the given TCP port or Unix socket  on the remote (server) host are to be forwarded to the given host and port, or Unix socket, on the local side.”

8888:localhost:80 – means that connections to port 8888 on the remote (AWS) host are forwarded to port 80 on localhost (my ZoneMinder web app) back at home. The remote port comes first, which looks backwards at first glance, but that’s how the -R syntax works for a reverse tunnel.

the -i and everything after it is just me connecting to my AWS host as my user with an identity file. YMMV; whatever you normally do should be fine.

When you run this command you should not see any issues or warnings. You need to leave it running using whatever method you like – personally I like screen for this kind of thing, and will also be setting up Jenkins jobs later (below).
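For example, a detached screen session does the job nicely while testing – a quick sketch, where the session name is just a label I made up:

# start the tunnel inside a detached screen session
screen -dmS cgnat-tunnel ssh -N -R 8888:localhost:80 -i /home/don/DonKey.pem awsuser@ec2-xx-xx-xx-xx.compute-x.amazonaws.com

# reattach later to check on it
screen -r cgnat-tunnel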

Step 2 – check on the AWS host

With that SSH command still running on your local server you should now be able to connect to the web app from your remote AWS Web Server, by reading from port 8888 with curl or wget.

This is a worthwhile check to perform at this point, before moving on to the next 2 steps – for example:

don@MyAWSHost:~$ wget -q -O- localhost:8888/zm | grep -i ZoneMinder
      <h1>ZoneMinder Login</h1>
don@MyAWSHost:~$

This shows that port 8888 on my AWS server is currently connected to the ZoneMinder application that’s running on port 80 of my home web server. A good sign.

Step 3 – configure AWS Security & Ports

Progress is being made, but in order to be able to hit that port with a browser and have things work as I’d like, I still need to configure AWS to allow incoming connections to the newly chosen port 8888.

This is done through the Amazon EC2 Management Console using the left hand menu item “Network & Security” then “Security Groups”:

[screenshot: awsmenu]

This should load your current Security Groups, which you can click on to Edit. You may have a few to check.

Now select Add and configure a new Inbound rule something like so:

[screenshot: awsinboundrule]

It’s the “Custom TCP Rule” second from the bottom, with port 8888 and “Anywhere” and “0.0.0.0/0” as the source in my picture. Don’t go for the HTTP option – unless you’re sure that’s what you want 🙂

Step 4 – configure SSH on AWS host

At this point I thought I was done… but it didn’t work and I couldn’t immediately see why, as the wget check was all good.

Some head scratching and checking of firewalls later, I realised it was most likely to be permissions on the port I was tunneling – it’s not very likely to be exposed and world readable by default, is it? Doh.

After a quick google I found a site that explained the changes I needed to make to my sshd_config file, so:

vim /etc/ssh/sshd_config

and add a new line that says:

GatewayPorts yes

to that file, checking that there’s no existing reference to GatewayPorts – edit this file carefully and at your own risk.
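A quick way to check for any existing GatewayPorts entry before editing – and to confirm the file still parses afterwards – is something like this:

# look for any existing GatewayPorts setting
grep -i gatewayports /etc/ssh/sshd_config

# ask sshd to validate the config without restarting anything
sudo sshd -t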

As I understand it – which may best be described as ‘loosely’ – the reason this worked when I tested with wget earlier is because I was connecting to the loopback interface; this change to sshd binds the port to all interfaces. See the detailed answer on this post for further detail, including ways to limit this to specific users.

Once that’s done, restart sshd with

service ssh restart

and you should now be able to connect by pointing a web browser at port 8888 (or whatever you set) of your AWS web server and see your app responding from the other end:

[screenshot: zmlogin]
Step 5 – automate it with Jenkins

The final step for me is to wrap this (the ssh tunnel creation part) up in a Jenkins job running on my home server.

This is useful for a number of reasons, such as avoiding and resetting defunct/stale connections and enabling scheduling – i.e. I can have the port forwarded when I want it, and have it shut down during the hours I don’t.
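As a rough sketch of what the job’s “Execute shell” step could contain – this isn’t the finished article, and the ServerAlive options are just one way of spotting dead connections, but it shows the general idea of killing any stale tunnel before starting a fresh one (the port, key and host are the example values from earlier):

# drop any previous instance of this particular tunnel, if one is still hanging around
pkill -f "ssh -N -R 8888:localhost:80" || true

# recreate the tunnel; the ServerAlive options make ssh notice and drop a dead connection
ssh -N -R 8888:localhost:80 \
  -o ServerAliveInterval=60 -o ServerAliveCountMax=3 \
  -i /home/don/DonKey.pem awsuser@ec2-xx-xx-xx-xx.compute-x.amazonaws.com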

CCTV with Tenvis cameras and ZoneMinder

This post details the processes I went through to create my own DIY home CCTV system.

Topics covered include:
1. Hardware – some cheap but impressive Tenvis TH692 720p IP cameras, and some Power over Ethernet (PoE) injectors and extractors to go with them
2. Camera setup – how to set them up, connect to them, and a quick summary of basic functions
3. Clients – info on a few different ways to attach to and use the cameras – VLC, Kodi/XBMC, RTSP and the built-in app and web interfaces
4. Jenkins – using Jenkins jobs to capture and record from Tenvis cameras
5. ZoneMinder – installation, OVA and manual install, settings used
6. Summary, links and general info

1. Hardware
On recommendation from a friend, the cameras I went for are these:
Tenvis TH692’s
“720P HD Outdoor Network Wireless CCTV IP Camera with 15M Night Vision”

these cameras are currently available on Amazon for only £27 each!

The cameras can happily run over WiFi but, as they still need a power connection, I have opted to run them over Ethernet and to send the power over the CAT6 cable as well – this way there’s still only one cable required, and I get a faster network connection too.

To do this I have used these:
AKORD® POE Passive Power Over Ethernet Adapter Injector Extractor Kit

These clever little beasties work with the power adapter that comes with the Tenvis TH692’s, and come complete with both a PoE Injector and Extractor, for only £3.99 – another mega-bargain! I haven’t tested them for outdoor use in bad weather yet, but suspect they may require some protection from the elements, which is fair enough.

2. Camera Setup
Connecting the cameras to your home network and getting them up and running is pretty easy. You need to connect them wired initially and use DHCP to assign an address. With that done, you can then use the supplied software to find, connect to and configure the cameras. After that’s complete, you can connect them to your WiFi, change the name/label for the camera, set up users and passwords, set up Email and FTP alerts and settings and so on.

3. Clients
I found the supplied software sufficient for the initial setup, and the phone app (search for “NEW Tenvis” in the App store) works very well, allowing you to monitor your camera(s) from anywhere in the world assuming you’ve got an internet connection at both ends. Here’s a picture from my iPhone:

[screenshot: iphone_tenvis]

 

The web interface relies on browser plugins and didn’t work on my Mac under Chrome, Firefox or Safari – it wanted an outdated QuickTime plugin which I couldn’t get working, though I confess I didn’t try too hard. It worked ok on my Windows VM, but I don’t want to use that interface or that OS. Luckily there are plenty of alternative options though, as these cameras use RTSP…

The Real Time Streaming Protocol (RTSP) is a network control protocol designed for use in entertainment and communications systems to control streaming media servers. The protocol is used for establishing and controlling media sessions between end points.

[ Source: Real Time Streaming Protocol – Wikipedia, the free encyclopedia ]

This opens up several options for connecting to the cameras, and means that you don’t need to rely on the supplied software and interfaces. For me, this is what makes these cameras so good.

Here are the solutions I use, though there are many more available…

VLC (VideoLAN) – as you’ll probably know, this great free and open-source, cross-platform multimedia player plays pretty much anything, on pretty much every platform. Not surprisingly, I found I could point it at the cameras’ RTSP feeds, enabling me to view the video content from any device that VLC runs on.

I use this approach on my Mac laptop mostly, and it’s as easy as creating a small config file for each camera feed then clicking on it to open the live feed. The files can be saved with “.m3u” extensions, as long as you’ve set that file type to be handled by VLC.

For example, here are the contents of the “cctv_driveway.m3u” file I currently have on my OSX Desktop, and that I click to connect to that feed:

#EXTM3U
#EXTINF:0, Driveway CCTV
rtsp://USERNAME:YOURPASSWORD@192.168.0.151:554/1

that’s it – just 3 lines.

Line 1, “#EXTM3U” is the file header which must be the first line of the file – like a Bash “shebang”.

Line 2, “#EXTINF:0, Driveway CCTV” contains the track information (just a zero here) and the title of the feed. This is displayed as “Driveway CCTV” in the VLC Window title, which is a handy feature.

Line 3, “rtsp://USERNAME:YOURPASSWORD@192.168.0.151:554/1” is simply the RTSP URL for the camera feed you want to stream from.

The RTSP URL contains the protocol (rtsp://), then user and password details, then the address of the camera (192.168.0.151 in this case), which is followed by the port the feed is served on: 554. This port can be seen in the camera config during the initial setup, but if you are unsure you can run a simple nmap scan against your camera like this:
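Something along these lines does the job – no extra flags needed, as the default scan covers the ports we care about here, and the IP address is just the example camera address used above, so substitute your own:

nmap 192.168.0.151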

[screenshot: nmap scan output]

Here we can see ports 80 and 8080 are open for the web interfaces (viewing and configuration respectively), along with 554, which is the standard RTSP port.

This useful web page can also generate the correct RTSP URL for many popular cameras:
Tenvis IP camera URL

The final part of the URL is the endpoint to connect to on the remote camera/host – you can see in the config above that I am connecting to “/1” at the very end of the third line in my M3U file; this is the location of the full 720p HD feed for these particular cameras. There are also lower resolution feeds available, which can be useful to know about, especially when monitoring multiple cameras or connecting remotely (e.g. with lower bandwidth).

For these Tenvis cameras, changing to the “/12” endpoint will fetch the lower quality feed, and there are other options in between that you can use to suit your requirements. These endpoints can also be modified further through the Tenvis settings app (which is running on port 8080).

Kodi (formerly XBMC) – from a quick google it looks like there are several ways in which Kodi can be set up to consume and view RTSP feeds. The simple option I’ve gone for is, again, to create a tiny config file containing the settings for each camera, and to place these files on my NAS storage. This means that watching a camera live on my TV is as simple as selecting the corresponding file in Kodi, and it will launch the stream just like you had clicked on a movie.

The files I use have the “.strm” extension and simply contain the full URL for the RTSP stream:

rtsp://user:password@192.168.0.156:554/1

Using this simple approach, I can click on files like “cctv_driveway.strm” in Kodi to launch the various streams. Because I only ever use this on my TV or Projector, I go for the full 720p HD feed in these files via the “/1” endpoint.

4. Jenkins

Disclaimer: I have a tendency to use Jenkins to automate everything. 
Sometimes this extends to things that don't really need it, just to see if/how it can be done. 
This section and idea is driven from that personal tendency/obsession.
The ZoneMinder solution (described below) is by far the more sensible option for most cases :-)

After setting up some cameras and connecting to them, I then wanted to record and archive the footage. The provided software enables you to set up FTP archiving and email alerts, but I wanted to do something more flexible that would allow me to easily change & tune the retention, housekeeping and archiving. The approach I used is slightly unusual, but it’s very simple, effective and flexible, allowing me to easily tweak things to suit my requirements.

To use Jenkins for recording and managing my CCTV Camera feeds, I went through the following high-level steps:

1. Create a new ‘Freestyle’ Jenkins job, set to run on my Ubuntu host

2. Add an ‘Execute shell’ step. To this I added the following shell commands:

# timestamp used in this capture's filename
export MY_DATE=`date +"%Y%m%d%H%M%S"`
# clear out any old captures left lying around in the workspace
rm -f *.ts
# capture the camera's lower-res "/12" feed to a .ts file, stopping at the 1800 second (30 minute) mark, then quit VLC
/usr/bin/vlc -vvv rtsp://USER:PASSWORD@192.168.0.151:554/12 --sout=file/ts:/home/don/cctv/recording-${MY_DATE}.ts -I dummy --stop-time=1800 vlc://quit
# move the finished recording into the current (Jenkins workspace) directory so it can be archived with the build
mv /home/don/cctv/recording-${MY_DATE}.ts .

This is cleaning up any previous/old files, then capturing 30 minutes of output from the camera via VLC and writing the data stream to a file. After 30 minutes VLC quits, and I move the newly captured file into the current working directory with a timestamp in the filename.

3. Archive files
After the shell command above is complete, I have configured the Jenkins job to archive the captured file along with this job run. This makes it nice and easy to browse through previous (date & timestamped) jobs and simply click to view the corresponding video capture from that time.

4. Create a Jenkins job loop
At the end of every 30 minute run, I set the “Build other projects” option for this build to trigger another run of this same job, creating an infinite loop of 30 minute runs. There’s a tiny pause between the job ending and the next build starting, but it’s only a second or two at most, which I can live with.

Once I was happy that the data was being captured and archived ok, I was then able to configure and tune the retention through Jenkins – there are loads of Jenkins built-in options that enable you to do things like ‘keep the last x builds’, or ‘keep builds for n days’, or whatever you would like. You can also mark certain builds as ‘keep forever’ if you wanted to preserve anything interesting.

This process works well for me, and the CPU and memory usage created by having 3 of these jobs running constantly is, to my surprise, next to nothing, thanks to the impressive efficiency of VLC.

The disk usage is the main issue here; with this approach I’m constantly recording, and you can fill up a LOT of space by writing several HD video streams to disk! One plan I was considering is to re-encode older footage at a lower bitrate (to reduce the file size) as it ages, using another Jenkins job, but I think that may be overkill: for me, 2 weeks’ retention with the ability to archive/keep anything I want quite easily is more than enough.
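If I ever do go down that route, the re-encode step itself would be simple enough – something like this with ffmpeg, where the filenames and bitrate are purely illustrative:

# re-encode an older capture at a lower video bitrate to save space
ffmpeg -i recording-20190101120000.ts -c:v libx264 -b:v 500k recording-20190101120000-small.mp4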

5. ZoneMinder
Nearly every search I did when looking for software to manage my new CCTV cameras led me to the same place – https://zoneminder.com/

Like VLC, Kodi and Jenkins, ZoneMinder is a fantastic bit of software; it’s free, there’s loads of documentation, and it’s extremely configurable. For managing CCTV video recordings I’ve not yet seen anything that compares to it, even if you are willing to spend serious money.

Initially I tried installing everything in a ready-made VM Template – an OVA file – I think it was this one:
http://blog.waldrondigital.com/2012/09/23/zoneminder-virtual-machine-appliance-for-vmware-esxi-workstation-fusion/

This is a great solution and can be a real timesaver to get you up and running, especially if you don’t have a VM with Ubuntu and a LAMP stack to hand. It took something like 2 minutes to deploy this to my ESX server, and it was working a few minutes after that. The software in the VM I downloaded and deployed was out of date, but there are clear and easy instructions on that page explaining how to update to the latest versions.

I decided I didn’t want the overhead of running another VM just for this one function and, as I already have a few running, I looked into installing ZoneMinder from scratch on an existing Ubuntu VM, which is actually pretty easy, as detailed here:

http://zoneminder.readthedocs.io/en/stable/installationguide/ubuntu.html

This went quite smoothly; I had to do a couple of MySQL tweaks, but it took about 20 minutes from start to finish, and I ended up with ZoneMinder running on an existing Ubuntu host, which will mean less update and maintenance grief for me (as opposed to running a separate, dedicated VM just for ZoneMinder).

It took a little experimenting to get the Tenvis TH692 cameras working in ZoneMinder, but nothing complex – here’s what I used for the “General” settings with the Tenvis TH692 cameras:

[screenshot: ZoneMinderGeneral]

and here are the “Source” settings for the RTSP Stream, using the same basic details we’ve used to set up VLC, Kodi etc previously:

[screenshot: ZoneMinderSource]
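In short, the “Source” tab boils down to the same RTSP details used for VLC and Kodi earlier – roughly along these lines, though the field names may vary a little between ZoneMinder versions, so treat this as a guide rather than gospel:

Source Path:    rtsp://USERNAME:YOURPASSWORD@192.168.0.151:554/1
Capture Width:  1280
Capture Height: 720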

Once that’s done, you can tweak the settings to your liking. You can have ZoneMinder record events as they happen and archive them, and/or use it to act as a nicer web interface to your cameras. You get the option to cycle through your different cameras automatically, or you can watch several feeds on one page – the options and possibilities are great.

One of the main points of using ZoneMinder for me is that it serves the camera feeds to the browser without the need for plugins like QuickTime, and it works well on all operating systems I’ve tried – and all devices.

Note that it’s advisable to set up a ZoneMinder Filter to archive your old footage – preferably before your disks get full!

This link explains how to do this in a variety of ways:

http://zoneminder.readthedocs.io/en/latest/faq.html

After some initial experimenting I have gone for both a “Purge after x days” filter and a “Purge when disk over 50% full” one – both types of Filter are detailed in that FAQ.

Summary

I can now connect to all of my cameras from all of my devices – my Nexus tablet, mobile phone, Mac and Linux computers, television, projector – quickly and easily, and from anywhere. I can also monitor, record, replay and generate alerts whenever they are required, and tune each camera to suit my needs. I think these cameras are a total bargain, the HD picture quality is excellent, and the night-time IR is good too. If you are happy to set up your own connectivity and monitoring solutions like ZoneMinder (or Jenkins) you can quite easily create a sophisticated system for very little cost, and it’s good fun too!