Frigate – object detection and notifications

Intro

My notes on setting up Frigate NVR for a home CCTV setup.

The main focus of this post is on object detection (utilising a Google Coral TPU) and configuring notifications to Amazon Fire TVs (and other devices) via integration with HomeAssistant.

There’s a lot to cover and no point in reproducing the existing documentation; you can find full details & info on setting up the main components here:

ZoneMinder
Frigate
HomeAssistant

Background

I used ZoneMinder for many years to capture and display my home CCTV cameras. There are several posts – going back to around 2016 – on this site under the ZoneMinder category here.

This worked really well for me all that time, but I was never able to set up Object Detection in a way I liked. It can be done in a number of different ways, but everything I tried was either very resource intensive, required linking to cloud services (e.g. hosted TensorFlow processing), or was just too flaky and unreliable – none of which suited my needs. Integration and notification options were also possible, but not straightforward.

So, I eventually took the plunge and switched to Frigate along with HomeAssistant. There was a lot to learn and figure out, so I’m posting some general info here in case it helps other people – or myself in future when I wonder why/how I did things this way….

Hardware

I have 4 CCTV cameras; these are generic and cheap 1080p network IP cameras, connected via Ethernet. I don’t permit them any direct access to the Internet for notifications, updates, event analysis or anything else.

I ran ZoneMinder (the server software that manages and presents the feeds from the cameras) on various hardware over the years, but for the Frigate and HomeAssistant setup I have gone for an energy-efficient and quiet little “server” – an HP ProDesk 600 G1 Mini – it’s very very basic and very low powered… and cost £40 on eBay:

I have added a Google Coral Edge TPU to that via USB so I can offload the detection/inference work and spare the little CPU’s energy for other tasks:

Objectives

My key goals here were to:

  1. Setup and trial Frigate – to see if it could fit my requirements and replace ZoneMinder
  2. Add Object Detection – without having to throw a lot of hardware at it or use Cloud Services like TensorFlow
  3. Integrate with HomeAssistant – I’d been wanting to try this for a while, to integrate my HomeKit devices with other things like Sonos, Amazon Fire TVs, etc

Setup and trial Frigate: setting up Frigate was easy. I went for Ubuntu on my host, installed Docker on that, then configured the Frigate and MQTT containers to communicate. The MQTT broker and the Coral detector are both simply declared in the Frigate config like this:

mqtt:
  host: 192.168.0.27
detectors:
  coral:
    type: edgetpu
    device: usb

Add Object Detection: with Frigate, this can be handled by a Google Coral Edge TPU (above) – more info here: https://coral.ai/products/accelerator/ and details on my config below. I first trialled this using the host CPU and it ‘worked’, but was very CPU intensive; adding the dedicated TPU makes a massive difference, and inference speeds are usually around 10ms for analysis of 4 HD feeds. This means the host CPU is free to focus on other things (which is just as well given the size of the thing).

    objects:
      track:
        - person
        - dog
        - car
        - bird
        - cat

https://docs.frigate.video/configuration/objects

Integrate with HomeAssistant: I added the HomeAssistant Docker instance to my host, ran and configured the MQTT container for Frigate, then configured Frigate + HomeAssistant to work together. This was done by first installing HACS in HA, then using the Frigate Integration as explained here:
https://docs.frigate.video/integrations/home-assistant/
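A quick way to sanity-check that Frigate is actually publishing to the broker before wiring up the integration is to subscribe to its events topic – this assumes the mosquitto clients are installed somewhere on your network, and uses the broker address from the config above:

mosquitto_sub -h 192.168.0.27 -t 'frigate/events' -v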

Setup Notifications

Phone notifications – I have previously had (and posted about) issues with CGNAT, and expected I would need to set up an ngrok tunnel and certs and jump through all sorts of hoops to get HA working remotely.

HA offers a very simple Cloud Integration via https://www.nabucasa.com/

I trialled this and was so impressed I have already signed up for a year – it’s well worth it for me and makes things much simpler. Phone notifications can be set up under HomeAssistant > Settings > Automations and Scenes > Frigate Notifications – after installing the Frigate Notifications Blueprint via HACS.


Amazon FireTV notifications – I have just set up the sending of notifications to the screen of my Amazon Fire TV. This was done by first installing this app on the device:
https://www.amazon.com/Christian-Fees-Notifications-for-Fire/dp/B00OESCXEK
Then installing
https://www.home-assistant.io/integrations/nfandroidtv/
on HA and configuring Notifications as described there. I now get a pop-up window on my projector screen whenever there’s someone at my front gate.
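For reference, this is roughly what the resulting notification action looks like in a HomeAssistant automation – the notify target name depends on what you called the integration, and “duration” is one of the optional Fire TV attributes (how long the pop-up stays on screen):

service: notify.fire_tv   # your nfandroidtv notifier name - an assumption
data:
  message: "Person detected at the front gate"
  data:
    duration: 8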

This is a quick pic of my projector screen with an Amazon Fire TV 4k displaying a pop-up notification in the bottom-right corner:

This means I now don’t need to leave a monitor on showing my CCTV feeds any more, as I am notified either via my mobile or on screen. And my notifications are only set up for specific object types – people & cars, and not for birds or sheep!

Minor Apple Watch update – these notifications are also picked up on Apple Watch, if it’s set to display your phone notifications. So I can also get a short video clip of the key frames on mine.


My Frigate Config – here’s an example from the main “driveway” camera feed; this is the one I want to be monitoring & notified about most. It’s using RTSP to connect, record and detect the listed object types that I am interested in:

  driveway:
    birdseye:
      order: 1  
    enabled: True
    ffmpeg:
      inputs:
        - path: rtsp://THEUSER:THEPASSWORD@192.168.0.123:554/1
          roles:
            - detect
            - rtmp
        - path: rtsp://THEUSER:THEPASSWORD@192.168.0.123:554/1
          roles:
            - record        
    detect:
      width: 1280
      height: 720
      fps: 5
      stationary:
        interval: 0
        threshold: 50  
    objects:
      track:
        - person
        - dog
        - car
        - bird
        - cat

The full 24/7 recordings are all kept (one file/hour) for a few days then deleted and can be seen via HA under
Media > Frigate > Recordings > {camera name} > {date} > {hour}
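That retention is controlled in the Frigate config – a minimal sketch of the record section, where the number of days is just an example (check the record options for your Frigate version):

    record:
      enabled: True
      retain:
        days: 3
        mode: all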

Docker container start scripts

A note of the scripts I use to start the various docker containers.

This would be much better managed under Docker Compose or something – there are plenty of examples of that online, and a rough Compose sketch is included after the scripts below – but I’d like to look at setting all of this up on Kubernetes, so I’m leaving this as rough as it is for now.

I am also running Grafana and NodeExporter at the moment to keep an eye on the stats, although things would probably look less worrying if I wasn’t adding to the load just to monitor them:


I’ll need to do something about that system load; it’s tempting to just get a second HP host & Coral TPU and put some of the load and half of the cameras on that – will see… a k8s cluster of them would be neat.

# Start Frigate container
docker run -d \
--name frigate \
--restart=unless-stopped \
--mount type=tmpfs,target=/tmp/cache,tmpfs-size=1000000000 \
--device /dev/bus/usb:/dev/bus/usb \
--device /dev/dri/renderD128 \
--shm-size=80m \
-v /root/frigate/storage:/media/frigate \
-v /root/frigate/config.yml:/config/config.yml \
-v /etc/localtime:/etc/localtime:ro \
-e FRIGATE_RTSP_PASSWORD='password' \
-p 5000:5000 \
-p 8554:8554 \
-p 8555:8555/tcp \
-p 8555:8555/udp \
ghcr.io/blakeblackshear/frigate:stable

# Start homeassistant container
docker run -d \
--name homeassistant \
--privileged \
--restart=unless-stopped \
-e TZ=Europe/Belfast \
-v /root/ha_files:/config \
--network=host \
ghcr.io/home-assistant/home-assistant:stable

# Start MQTT container
docker run -itd \
--name=mqtt \
--restart=unless-stopped \
--network=host \
-v /storage/mosquitto/config:/mosquitto/config \
-v /storage/mosquitto/data:/mosquitto/data \
-v /storage/mosquitto/log:/mosquitto/log \
eclipse-mosquitto

# Start NodeExporter container
docker run -d \
--name node_exporter \
--privileged \
--restart=unless-stopped \
-e TZ=Europe/Belfast \
-p 9100:9100 \
prom/node-exporter

# Start Grafana container
docker run -d \
--name grafana \
--privileged \
--restart=unless-stopped \
-e TZ=Europe/Belfast \
-p 3000:3000 \
grafana/grafana
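As mentioned above, here’s a rough Docker Compose sketch of the Frigate and MQTT services, using the same images, paths and ports as the scripts – untested as written, adjust to suit:

version: "3.9"
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    restart: unless-stopped
    shm_size: "80mb"
    devices:
      - /dev/bus/usb:/dev/bus/usb
      - /dev/dri/renderD128
    volumes:
      - /root/frigate/storage:/media/frigate
      - /root/frigate/config.yml:/config/config.yml
      - /etc/localtime:/etc/localtime:ro
      - type: tmpfs
        target: /tmp/cache
        tmpfs:
          size: 1000000000
    environment:
      FRIGATE_RTSP_PASSWORD: "password"
    ports:
      - "5000:5000"
      - "8554:8554"
      - "8555:8555/tcp"
      - "8555:8555/udp"
  mqtt:
    image: eclipse-mosquitto
    restart: unless-stopped
    network_mode: host
    volumes:
      - /storage/mosquitto/config:/mosquitto/config
      - /storage/mosquitto/data:/mosquitto/data
      - /storage/mosquitto/log:/mosquitto/log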

Kind – local Kubernetes with Docker nodes made quick and easy

Quick notes on trying out Kind for a local and lightweight Kubernetes cluster.

The “getting started” steps for Kind are easy and well documented on the Kind site, but I didn’t find a good guide on adding the Kubernetes Dashboard to a newly created Kind cluster. I’m planning on using this as the basis for a few local projects, so wanted to capture it here, plus check out using the Lens IDE to manage and monitor a local “Kind” cluster.

As it says on the Kind website… if you have go 1.16+ and docker or podman installed, “go install sigs.k8s.io/kind@v0.20.0 && kind create cluster” is all you need!

Here’s me doing just that to create a new kind cluster on my Mac in 21 seconds….

All very quick and very easy, and it is incredibly light on resources too.

Notes on adding the Kubernetes Dashboard to a new Kind cluster

Apply the dashboard yaml

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

Give it a moment or two to start up, checking with:

kubectl get pod -n kubernetes-dashboard

then create the admin user & cluster role bindings

kubectl create serviceaccount -n kubernetes-dashboard admin-user

kubectl create clusterrolebinding -n kubernetes-dashboard admin-user --clusterrole cluster-admin --serviceaccount=kubernetes-dashboard:admin-user

Next, get the auth token:

kubectl -n kubernetes-dashboard create token admin-user

Start up the local proxy

kubectl proxy

Browse to the local dashboard login endpoint via the proxy – typically http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ – and pass it the token from above

and you should see the Kubernetes Dashboard for your (very new) Kind cluster like this…

The Kubernetes Dashboard could also be deployed via Helm:

helm install stable/kubernetes-dashboard --name my-release

or via the Lens UI (more on that below):

Setting up Lens is even simpler

Download Lens and log in:

https://k8slens.dev/

then select your “kind-kind” cluster from Lens > Catalogue > Clusters and you can see & do a whole load more with your cluster via Lens:

The missing metrics warning in Lens – “Metrics are not available due to missing or invalid prometheus configuration.” – can be sorted by installing Prometheus using Helm, via the CLI or from Lens:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack

Alternatively, you can enable the inbuilt Metrics options in Lens:

Either way, things should soon look much better:

Adding multiple worker nodes to a kind cluster

This can be done in Kind by defining a cluster manifest with multiple worker nodes like this:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker

then creating the cluster specifying that file, e.g.

kind create cluster --config my-kind-cluster-config.yaml

“docker ps” should then show your multiple nodes, as will Lens:
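You can also confirm the nodes are up from kubectl:

kubectl get nodes -o wide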

Hey Siri, manage my server…

Intro

I use Siri and Apple Homekit to automate some basic things – switching lights and heaters on/off, etc – and was wondering if there was some way I could use Siri to run tasks on my computers and servers at home.

Some googling showed me this was possible and also reasonably easy to set up – these are my notes on the process and some examples of what I’ve done with it so far.

Setup on iPhone

There’s a free Apple “Shortcuts” app for iPhones:

https://apps.apple.com/us/app/shortcuts/id915249334

which can perform a wide range of tasks, including – as of reasonably recently – the ability to run scripts over SSH.

Open the Shortcuts App, click + and then Add action. These pics show the process from that point on:

Click on Add Action….

From here you fill out the details – the IP address of the remote computer, the user and password, and the path to the script you want to run.

Requirements

You need to have SSH setup and a working script you can run over SSH first.

On Ubuntu that means installing and configuring SSH as described here:

https://linuxize.com/post/how-to-enable-ssh-on-ubuntu-20-04/

On MacOS you need to enable Remote Login under Sharing here:
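If you prefer the command line, Remote Login can also be toggled on the Mac itself with:

sudo systemsetup -setremotelogin on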

You also need a script that is executable as the user you are connecting with.

Obviously, be aware of the security risk of enabling tasks to be run remotely, etc.

Examples

Here are some I made earlier.

This one connects to my old Mac Pro (it runs Ubuntu) and runs a ‘shutdown’ script.

My /home/don/shutdown script simply contains “sudo init 0” and the ‘don‘ user is enabled for passwordless sudo.
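For reference, a minimal sketch of that setup – the sudoers entry is one way to enable passwordless sudo for the user; you could restrict it to specific commands instead:

#!/bin/bash
# /home/don/shutdown - the script the Shortcut runs over SSH
sudo init 0

# and in /etc/sudoers.d/don (edit with visudo):
# don ALL=(ALL) NOPASSWD: ALL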

and this one connects to the same host and powers on the attached monitor, which runs Firefox showing my CCTV/ZoneMinder console:

The “/home/don/screenon” script contains this:

xset -display :0.0 dpms force on

and there’s a ‘screenoff’ script that switches the display off when I don’t want it on.

For my iMac running MacOS I’ve added a shutdown script – useful when I don’t want to go and power it off manually.

I’ve ended up with a selection of shortcuts to power things on & off, and can now say “Hey Siri, CCTV on please“, or “Hey Siri, shutdown iMac please“, and Siri makes it so….

This setup enables me to run pretty much anything on a Linux or Mac host simply by asking Siri – it could trigger deployment pipelines, perform updates, start/stop/restart services…. anything you can put in a shell script.

If you have any interesting ideas or suggestions please let me know below.

AWS CodeCommit – prep for AWS CDK & CodePipelines

This is the next step in a series on using the AWS CDK and AWS CodePipeline.

In the previous post I set up a new local AWS CDK environment and a remote AWS Cloud account, user etc, and connected the two. That got as far as deploying a simple local AWS CDK application to my AWS account and then cleaning it up. This post looks at the next step which is setting up CodeCommit – AWS’s managed and git-based version control system, much like github or gitlab – in preparation for some AWS CodePipeline and AWS CodeBuild posts that will follow on.

The first step is to add permissions to AWS CodeCommit for your IAM user – I’m using the “cdk-user” that was created previously – as detailed here:

https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-gc.html

In the AWS UI, go to IAM > User > Security Credentials:
Select the “HTTPS Git credentials for AWS CodeCommit (Generate)” option then download the newly generated credentials:

In CodeCommit, create a new Repo if you don’t already have one, click Clone and select/copy the HTTPS link

In your local cli, do a “git clone” of the HTTPS repo

when prompted, supply the credentials from above.

You should now be able to interact with the AWS CodeCommit repo in your AWS account using your local git cli in the same way you would for github, bitbucket or gitlab – an example clone, add, merge and push to master (!) as a quick test:
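Something along these lines – the region and repo name here are placeholders, use your own CodeCommit HTTPS URL from the Clone button:

git clone https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/my-cdk-repo
cd my-cdk-repo
echo "# my-cdk-repo" > README.md
git add README.md
git commit -m "initial commit - quick CodeCommit test"
git push origin master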

In the next post, this setup will be used to manage and host the source code for new AWS CDK applications, and to manage and trigger the AWS CodePipelines (also written in CDK!) that will build and deploy them.

Installing APKs on Amazon Fire HD with ADB

My notes on “sideloading” APK files to an Amazon Fire TV HD using ADB.

I don’t do this often and had forgotten how, so this may help me out next time.

Getting and using Android Debug Bridge (adb)

Useful info here:

https://developer.android.com/studio/command-line/adb

download the stable binaries for Mac, linux or Windows from here:

https://developer.android.com/studio/releases/platform-tools

you can either add the location of the binaries to your PATH, or cd to them and run them directly like I did, e.g.

./adb help

Download the APK files you want to install

For example

https://smartyoutubetv.github.io/en/

or Kodi

https://mirrors.kodi.tv/releases/android/arm/

I put the downloaded APK files in the same dir as the adb tools to keep things very simple.

Connect to your Amazon Fire TV

Find the IP address of your Amazon Fire device from Network Settings (From Settings, go to Device (or My Fire TV) > About > Network), for example mine was 192.168.0.176.

Enable ADB debugging in your Amazon Fire device via Settings.

connect from client laptop/pc to Fire TV, for example:

./adb connect 192.168.0.176:5555

you can also list local devices:

donaldsimpson@Donalds-iMac adb-tools % ./adb devices
List of devices attached
192.168.0.176:5555 unauthorized
192.168.0.59:5555 unauthorized

Install APK to connected device

Once connected, installing a new app should be as simple as

./adb install yourapp.apk

Note that if you have multiple devices you may get this message:

➜ adb-tools ./adb install smartyoutubetv_latest.apk
Performing Push Install
adb: error: failed to get feature set: more than one device/emulator

check the list of attached devices:

➜ adb-tools ./adb devices
List of devices attached
G070VM1904950F5U device
192.168.0.18:5555 device

then specify the device you are aiming for with “-s <address:port>” like this:

➜ adb-tools ./adb -s 192.168.0.18:5555 install smartyoutubetv_latest.apk
Performing Streamed InstallSuccess

I also had this response at one point:

./adb install smartyoutubetv_latest.apk
Performing Push Install
adb: error: failed to get feature set: device unauthorized.
This adb server's $ADB_VENDOR_KEYS is not set
Try 'adb kill-server' if that seems wrong.
Otherwise check for a confirmation dialog on your device.

… the last line prompted me to look at the Fire TV screen and notice it was asking me to approve the connection request from my laptop.
Doh.
Once approved the app installed no problem:

./adb install smartyoutubetv_latest.apk
Performing Push Install
smartyoutubetv_latest.apk: 1 file pushed, 0 skipped. 3.3 MB/s (7901934 bytes in 2.261s)
pkg: /data/local/tmp/smartyoutubetv_latest.apk
Success

Updating an existing app

I’ve had an outdated Kodi install for ages and wanted to update that while I was here. The process is simple, just add an -r for “replace existing application”:

./adb install -r kodi-18.8-Leia-armeabi-v7a.apk
Performing Push Install
kodi-18.8-Leia-armeabi-v7a.apk: 1 file pushed, 0 skipped. 3.5 MB/s (63508040 bytes in 17.391s)
pkg: /data/local/tmp/kodi-18.8-Leia-armeabi-v7a.apk
Success

This went very smoothly, all my settings, connections and shares etc were still there after the upgrade, and it looks a lot nicer for it too.

That’s it – there’s a ton of useful info on other commands and options from

./adb help

I found some more useful info on connecting from ADB to Fire TV here:

https://developer.amazon.com/docs/fire-app-builder/connecting-adb-to-fire-tv.html

Starting up Kodi on Amazon FireTV remotely

After getting the above sorted out, I wanted to find a way to start Kodi on my FireTV without having to switch my projector on & off to do so.

I use Kodi as an AirPlay target for music during the day, and it switches itself off overnight. I could probably change that.

Using ADB tools, I connect to the device remotely, as before, with:

./adb connect 192.168.0.176:5555

though normally that comes back with “already connected to…”.

then start up Kodi using the “Android activity manager”, “am“:

./adb shell am start -n org.xbmc.kodi/.Splash

this takes a little while to start, but after about 30 seconds I can connect to the Kodi web interface on port 8080 of my FireTV, and the AirPlay target becomes available.

It looks like there are many other interesting things you can do with “am”.
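For example, the same package can be stopped again remotely:

./adb shell am force-stop org.xbmc.kodi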

Uninstalling packages with adb

List installed packages

./adb shell pm list packages

and filter for whatever you’re looking for (e.g. “guard“)

./adb shell pm list packages | grep -i guard

then uninstall that package name:

./adb uninstall com.adguard.vpn

Update on smartyoutube to fix ads

Quick update specifically on Smart Youtube TV on Android. This was brought on by my initial install of Smart Youtube TV starting to show adverts (a lot).

I had installed Smart Youtube TV, version 6.17.739 (at time of writing this is still the latest stable release available) on my Amazon Fire – details above. This worked very well for months, but has started to not filter out youtube advertisements.

Having not found an update, and while looking for another solution, I found “SmartTubeNext Beta”, which looks to be pretty stable and widely used for a beta version:

https://www.apklinker.com/apk/liskovsoft/

From that site, it looks like it’s been around 4 months since SmartYouTube was updated, but SmartTubeNext is actively being developed, so it could be worth a try – here’s how:

Get the latest smarttube beta APK (via wget, or download via browser from here: https://smartyoutubetv.github.io/)

wget https://github.com/yuliskov/SmartTubeNext/releases/download/latest/smarttube_beta.apk

connect to your Android device (update the IP to match yours):

./adb connect 192.168.0.176:5555

install the APK:

./adb -s 192.168.0.18:5555 install -r smarttube_beta.apk

All done.

I wasn’t sure if this would replace the existing SmartYouTube (which is why I added the -r switch, though it turned out not to be necessary), but it’s ok: it’s installed as a different app, so the stable version is kept and available should there be any issues with the beta version.

This version of SmartTubeNext looks a lot better than the previous/stable SmartYoutube one.

List of improvements from their site:

  • 4K support
  • runs without Google Services
  • designed for TV screens
  • stock controller support
  • external keyboard support

Personally I really like the better controller support, and the overall look is much more suitable for a large screen. It’s also a lot more customisable. And, most importantly, it removes all the adverts.

Kubernetes Operators for Monitoring with Prometheus and Grafana Dashboards

Introduction

This post takes a look at setting up monitoring and alerting in Kubernetes, using Helm and Kubernetes Operators to deploy and configure Prometheus and Grafana.

This platform is quickly and easily deployed to the cluster using a Helm Chart, which in turn uses a Kubernetes Operator to set up all of the required resources in an existing Kubernetes Cluster.

I’m re-using the Minikube Kubernetes cluster with Helm that was built and described in previous posts here and here, but the same steps should work for any working Kubernetes & Helm setup.

An example Grafana Dashboard for Kubernetes monitoring is then imported, and we take a quick look at monitoring of Cluster components with other dashboards.

Kubernetes Operators & Helm combo

K8s Operators are described ‘in plain English’ here:
https://enterprisersproject.com/article/2019/2/kubernetes-operators-plain-english

and defined by CoreOS as “a method of packaging, deploying and managing a Kubernetes application”.

The Operator used in this post can be seen here:

https://github.com/coreos/prometheus-operator

and this is deployed to the Cluster using this Helm Chart:

https://github.com/helm/charts/tree/master/stable/prometheus-operator

It may sound like Helm and Operators do much the same thing, but they are different and complementary.

Helm and Operators are complementary technologies. Helm is geared towards performing day-1 operations of templatization and deployment of Kubernetes YAMLs — in this case Operator deployment. Operator is geared towards handling day-2 operations of managing application workloads on Kubernetes.

from https://medium.com/@cloudark/kubernetes-operators-and-helm-it-takes-two-to-tango-3ff6dcf65619

Let’s get (re)started

I’m reusing the Minikube cluster from previous posts, so start it back up with:

minikube start

which outputs the following in the console

🎉  minikube 1.10.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.10.1
💡  To disable this notice, run: ‘minikube config set WantUpdateNotification false’

🙄  minikube v1.9.2 on Darwin 10.13.6
✨  Using the virtualbox driver based on existing profile
👍  Starting control plane node m01 in cluster minikube
🔄  Restarting existing virtualbox VM for “minikube” …
🐳  Preparing Kubernetes v1.18.0 on Docker 19.03.8 …
🌟  Enabling addons: dashboard, default-storageclass, helm-tiller, metrics-server, storage-provisioner
🏄  Done! kubectl is now configured to use “minikube”

this all looks ok, and includes the minikube addons I’d selected previously.
Now a quick check to make sure my local helm repo is up to date:

helm repo update

I then used this command to find the latest version of the stable prometheus-operator via a helm search:
helm search stable/prometheus-operator --versions | head -2

there’s no doubt a neater/builtin way to find out the latest version, but this did the job – I’m going to install 8.13.8:

Installing the Prometheus Operator using Helm, into a new dedicated “monitoring” namespace, takes just this one command:
helm install stable/prometheus-operator --version=8.13.8 --name=monitoring --namespace=monitoring

Ooops

that should normally be it, but for me, this resulted in some issues along these lines:

Error: Get http://localhost:8080/version?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused

– looks like Helm can’t communicate with Tiller any more; I confirmed this with a simple helm ls which also failed with the same message. This shouldn’t be a problem when v3 of Helm goes “tillerless”, but to fix this quickly I simply re-enabled Tiller in my cluster via Minikube Addons:


➞  minikube addons disable helm-tiller
➞  minikube addons enable helm-tiller

verified things worked again with helm ls, then the helm install... command worked and started to do its thing…

New Operator and Namespace

Keeping an eye on progress in my k8s dashboard, I can see the new “monitoring” namespace has been created, and the various Operator components are being downloaded, started up and configured:

you can also keep an eye on progress with:
watch -d kubectl get po --namespace=monitoring

this takes a while on my machine, but eventually completes with this console output:

NOTES:
The Prometheus Operator has been installed. Check its status by running:
  kubectl --namespace monitoring get pods -l "release=monitoring"

Visit https://github.com/coreos/prometheus-operator for instructions on how
to create & configure Alertmanager and Prometheus instances using the Operator.

kubectl get po --namespace=monitoring shows the pods now running in the cluster, and for this quick example the easiest way to get access to the new Grafana instance is to forward the pods port 3000 to localhost like this:

➞  kubectl --namespace monitoring port-forward monitoring-grafana-64d4f6fcf7-t5zkv 3000:3000

(check and adjust the above to use the full/correct name of your monitoring-grafana-* pod)
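Alternatively, you can port-forward via the Grafana service and avoid looking up the exact pod name – the service name and port below assume the chart defaults for a release called “monitoring”:

kubectl --namespace monitoring port-forward svc/monitoring-grafana 3000:80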

Connecting to Grafana

now I can hit http://localhost:3000 and have that connect to port 3000 in the Grafana pod:


from the documentation on the Helm Chart and Operator here:

https://github.com/helm/charts/tree/master/stable/prometheus-operator

the default user for this Grafana is “admin” and the password for that user is “prom-operator“, so log in with those credentials…

Grafana Dashboards for Kubernetes

We can now use the ready-made Grafana dashboards, or add/import ones from the extensive online collection, like this one here for example: https://grafana.com/grafana/dashboards/6417 – simply save the JSON file

then go to Grafana and import it with these settings:

and you should now have a dashboard showing some pretty helpful stats on your kubernetes cluster, its health and resource usage:

Finally a very quick look at some of the other inbuilt dashboards – you can use and adjust these to monitor all of the components that comprise your cluster and set up alerting when limits or triggers are reached:

All done & next steps

There’s a whole lot more that can be done here, and many other ways to get to this point, but I found this pretty quick and easy.

I’ve only been looking at monitoring of k8s resources here, but you can obviously set up grafana dashboards for many other things, like monitoring your deployed applications. Many applications (and charts and operators) come with prom endpoints built in, and can easily and automatically be added to your monitoring and alerting dashboards along with other datasources.

Cheers,

Don

Kubernetes – Jenkins Pipelines with Docker Agents

This is the second post on Jenkins Pipelines on Kubernetes with Minikube, following on from the initial setup steps here:

That post went as far as having a Kubernetes cluster up and running for local development. That was primarily focused on Mac, but once you reach the point of having a running Kubernetes Cluster with kubectl configured to talk to it, the hosting platform/OS makes little difference.

This second section takes a more detailed look at running Jenkins Pipelines inside the Kubernetes Cluster, and automatically provisioning Jenkins JNLP Agents via Kubernetes, then takes an in-depth look at what we can do with all of that, with a complete working example.

This post covers quite a lot:

  • Adding Helm to the Kubernetes cluster for package management
  • Deploying Jenkins on Kubernetes with Helm
  • Connecting to the Jenkins UI
  • Setting up a first Jenkins Pipeline job
  • Running our pipeline and taking a look at the results
  • What Next

Adding Helm to the Kubernetes cluster for package management

Helm is a package manager for Kubernetes, and like Minikube it is ideal for quickly setting up development environments, plus much more if you want to. Take a look through the Helm hub to see just some of the other things it can do.

On Mac you can use brew to install the local helm component:

brew install helm

and again you can use minikube addons for the k8s cluster side – note that helm v3 removes the requirement for tiller.

minikube addons enable helm-tiller

you should then see a tiller pod start up in your Kubernetes kube-system namespace:

Before you can use Helm we first need to initialise the local Helm client, so simply run:

helm init --client-only

as our earlier minikube addons command has configured the connectivity and cluster already. Before we can use Helm to install Jenkins (or any of the many other things it can do), we need to update the local repo that contains the Helm Charts:

helm repo update

Hang tight while we grab the latest from your chart repositories…
…Skip local chart repository
…Successfully got an update from the "stable" chart repository
Update Complete.

That should be Helm setup complete and ready to use now.

Deploying Jenkins on Kubernetes with Helm

Now that Helm is setup and can speak to our k8s instance, installing hundreds of software packages suddenly becomes very simple – including Jenkins. We’ll just give the install a friendly name “jenki” and use NodePort to simplify the networking; nothing more is required for this dev setup:

helm install --set serviceType=NodePort --name jenki stable/jenkins

obviously we’re skipping over all the for-real things you may want for a longer lived Jenkins instance, like backups, persistence, resilience, authentication and authorisation etc., but this bare-bones setup is sufficient for now.

Connect to the Jenkins UI

The Helm install should spit out some helpful info like this, explaining how to get the Jenkins Admin password and how to connect to the UI:

  1. Get your ‘admin’ user password by running:
    printf $(kubectl get secret --namespace default jenki-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
  2. Get the Jenkins URL to visit by running these commands in the same shell:
    export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/component=jenkins-master" -l "app.kubernetes.io/instance=jenki" -o jsonpath="{.items[0].metadata.name}")
    echo http://127.0.0.1:8080
    kubectl --namespace default port-forward $POD_NAME 8080:8080
  3. Login with the password from step 1 and the username: admin

For more information on running Jenkins on Kubernetes, visit:
https://cloud.google.com/solutions/jenkins-on-container-engine

looking something like this in the console:


going back to the Kubernetes Dashboard we can now see the “jenki” Jenkins deployment in the default namespace:

and you can monitor the pods via the console with:

watch kubectl get pods -o wide

Note: I install the useful ‘watch‘ command via brew too, along with the zsh plugin for minikube

After following the steps to get the admin password and hit the Jenkins URL http://127.0.0.1:8080 in your desktop browser, you should see the familiar “Welcome to Jenkins!” page…

Pause a moment to appreciate that this Jenkins is running in a JVM inside a Docker container on a Kubernetes Pod as a Service in a Namespace in a Kubernetes Instance that’s running inside a Virtual Machine running under a Hypervisor on a host device….

turtles all the way down

There are many things I’ve skipped over here, including looking at storage, auth, security and all the usual considerations, but the aim has been to quickly and easily get to this point so we can start developing the pipelines and processes we’re really wanting to focus on.

Navigating to Manage Jenkins then Plugins Manager should show some updates already available – this proves we have connectivity to the public Jenkins Update Centre out of the box. The Kubernetes Jenkins plugin is the key thing I’m looking for – select and update if required:

If you go to http://127.0.0.1:8080/configure you should see a link at the foot of the page to the new location for “Clouds”: http://127.0.0.1:8080/configureClouds/ – that should already be configured with sufficient settings for Jenkins to use your Kubernetes cluster, but it’s worthwhile taking a look through the settings and options there. No changes should be required here now though.

Setup a first Jenkins Pipeline job

Create a new Jenkins Pipeline job and add the following settings as shown in the picture below…

In the job config page under “Pipeline”, for “Definition” select “Pipeline script from SCM” and enter the URL of this github project which contains my example pipeline code:

https://github.com/DonaldSimpson/minikube-pipelines.git

everything else can be left as the default, and should look something like this:

This means that your Job will checkout my example repo and run the pipeline Groovy code in the Jenkinsfile, which you can see here:

https://github.com/DonaldSimpson/minikube-pipelines/blob/master/Jenkinsfile

This file has been heavily commented to explain every part of the pipeline and shows what each step is doing. Taking a read through it should show you how pipelines work, how Jenkins is creating Docker Containers for the different Stages, and give you some ideas on how you could develop this simple example further.

Run it and take a look at the results

Save and run the job, and you should (eventually) see something like this:

The job’s Console Output will have a ton of info, showing everything from the container images being pulled, the git repo being cloned, the very verbose gradle build output and all the local files.

So in summary, what just happened?

Jenkins connected to Kubernetes via the Kubernetes plugin and its settings

The required Docker images (git and gradle, as specified at the top of the Jenkinsfile pipeline) were pulled from Docker Hub

A git Docker container was started up (as a new pod in k8s) and connected to Jenkins as an Agent using JNLP

A ‘git clone’ was run inside that container to check out the source code from an example repo

A gradle Docker container was started and connected as a Jenkins JNLP Agent, running as another k8s pod

The gradle build stage was run inside that gradle container, using the source files checked out from git in the previous Stage

The newly built JAR file was archived so we could use it later if wanted

The pipeline ends, and k8s will clean up the containers

This pipeline could easily be expanded to run that new JAR file as an application as demonstrated here: https://github.com/AutomatedIT/springbootjenkinspipelinedemo/blob/master/Jenkinsfile#L5, or, you could build a new Docker image containing this version of the JAR file and start that up and test it and so on. You could also automate this so that whenever the source code is changed a build is triggered that does all of this automatically and records the result… hello CI/CD!

What next?

From the above demo you can hopefully see how easy it is to create an end to end pipeline that will automatically provision Jenkins Agents running on Kubernetes for you.

You can use this functionality to quickly and safely develop pipeline processes like the one we have examined, that run across multiple Agents, using each for a particular function/step in your workflow, leaving the provisioning and housekeeping work to the underlying Kubernetes cluster. With this, you can build or pull docker images, run them, test them, start them up as other Jenkins JNLP Agents and so on, all “as code” and all fully automated.

And after all that… ?

Being able to fire up Docker containers and use them as Jenkins Agents running on a Kubernetes platform is extremely powerful in itself, but you can go a step further and start using this setup to build, deploy and manage Kubernetes resources directly, too – from Jenkins Pipelines running on the same Kubernetes Cluster – or even from one Kubernetes to another.

We’ve seen during setup that we can use kubectl to manage the k8s cluster and its components – we can also do that from within containers and stages in our pipelines, wherever they are.

This example project demonstrates just that:

https://github.com/DonaldSimpson/devdoncoin

and contains an example pipeline and supporting files to build, lint, security scan, push to registry, deploy to Kubernetes, run, test and clean up the example “doncoin” application via a Jenkins pipeline running on Kubernetes.

It also includes outlines and suggestions for expanding things even further, in to a more mature and production-ready setup, introducing things like Jenkins shared libraries, linting and testing, automating vulnerability scanning within the pipeline, and so on.

Note the docker containers used there, the kubernetes yaml file and shell script, and the simple container with kubectl inside it.

Cheers,

Don

Kubernetes on Mac with Minikube

Intro

This is a follow-on to the previous writeup on Kubernetes with Minikube and shows how to quickly and easily get a Kubernetes cluster up and running using VirtualBox and Minikube.

The setup is very similar for all platforms, but this post is specifically focused on Mac, as I’m planning on using this as the basis for a more complex post on Jenkins & Kubernetes Pipelines (and that post is now posted, here!).

Installing required components

There are three main components required:

VirtualBox is a free and open source hypervisor. It is a lightweight app that allows you to run Virtual Machines on most platforms (Mac, Windows, Linux). We will use it here to run the Minikube Virtual Machine.

Kubectl is a command line tool for controlling Kubernetes clusters; we install this on the host (Mac) and use it to control and interact with the Kubernetes cluster we will be running inside the Minikube VM.

Minikube is a tool that runs a single-node Kubernetes cluster in a virtual machine on your personal computer. We’re using this to provision our k8s cluster and will also take advantage of some of the developer friendly addons it offers.

Downloads and Instructions

Here are links to the required files and detailed instructions on setting each of these components up – I went for the ‘brew install‘ options but there are many alternatives in these links. The whole process is very simple and took about 10 minutes.

VirtualBox: https://www.virtualbox.org/wiki/Downloads

simply download the Mac VirtualBox .dmg image file and install it

kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/

brew install kubectl

Minikube: https://kubernetes.io/docs/tasks/tools/install-minikube/

brew install minikube

Starting up Kubernetes via Minikube in VirtualBox on Mac

From the Mac terminal (iTerm2 or whatever you use), running minikube start should kick off the download of the Minikube virtual machine image.

If you would prefer to use another hypervisor (VMWare, kvm etc) you may need to specify the driver from this list:
https://kubernetes.io/docs/setup/learning-environment/minikube/#specifying-the-vm-driver

most popular hypervisors are well supported by Minikube.

Here’s what that looks like on my Mac – this may take a few minutes as it’s downloading a VM (if not already available locally), starting it up and configuring a Kubernetes Cluster inside it:

there’s quite a lot going on and not very much to see; you don’t even need to look at VirtualBox as it’s running ‘headless’, but if you open it up you can see the new running VM and its settings:

these values are all set to sensible defaults, but you may want to tweak things like memory or cpu allocations – running

minikube config -h

should help you see what to do, for example

minikube start --memory 1024

to change the allocated memory.

If you then take a look at the config file in ~/.minikube/config/config.json you will see how your preferences – resource limits, addons etc – are persisted and managed there.
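For example, to persist resource preferences so every future start picks them up:

minikube config set memory 4096
minikube config set cpus 2
minikube config view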

Looking back at VirtualBox, if you click on “Show” or the running VM you can open that up to see the console for the Minikube VM:

to stop the vm simply do a minikube stop, or just type minikube to see a list of args and options to manage the lifecycle, e.g. minikube delete, status, pause, ssh and so on.

Minikube Addons

One of the handy features Minikube provides is its selection of easy-to-use addons. As explained in the official docs here, you can see the list and current status of each addon by typing minikube addons list

the storage-provisioner and default-storageclass addons were automatically enabled on startup, but I usually like to add the metrics server and dashboard too, like so:

minikube addons enable metrics-server
minikube addons enable dashboard

I often use helm & tiller, efk, istio and the registry too – this feature saves me a lot of time and messing about!

Accessing the Kubernetes Dashboard – all done!

Once that’s completed you can run minikube dashboard to open up the Kubernetes dashboard on your host.

Minikube makes this all very easy; we didn’t have to forward ports, configure firewalls, consider ingress and egress, create RBAC roles, manage accounts/passwords/keys or set up DNS, or any of the many things you would normally want or have to consider to get to this point.

These features make Minikube a great choice for development work, where you don’t want to care about things like this as you would in a “for real” environment.

Your browser should open up the Kubernetes Dashboard, and you can click around and see the status of the many components that comprise your new Kubernetes cluster:

And then…

Next up I’ll be building on this setup by deploying a Jenkins instance inside the Kubernetes Cluster, then configuring that to use Kubernetes to build, manage and deploy applications on the same Kubernetes Cluster.

This is now covered in the next post, here:

HTTPS Certs for WordPress Multisite with Let’s Encrypt

Intro

This post looks at creating and maintaining HTTPS/SSL/TLS Certificates for multiple WordPress sites running on the same host.

Some background…

This website is one of several different domains/sites/blogs hosted on my single Google Cloud server, with one public IP address shared for all websites. I’m using WordPress Multisite to do this, based on a very well put together Appliance provided by Bitnami.

WordPress Multisite allows me to cheaply, easily and efficiently serve multiple sites from the one host and IP address, sharing the same host resources (CPU, Mem, Disk), which is great but makes setting up HTTPS/SSL Certificates a little different to the norm – the same cert has to validate multiple sites in multiple domains.

I’d banged my head against this for a while and looked at many different tools and tech (some of which are mentioned below) to try and sort this out previously, but finally settled on the following process which works very well for my situation.

There is some good info on why you may want an SSL/TLS certificate for your website(s), background info and some popular providers reviewed here: https://makeawebsitehub.com/free-ssl-tls-certificate/

“WordPress is the world’s most popular blogging and content management platform. With WordPress Multisite, conserve resources by managing multiple blogs and websites from the same server and interface.”

CERT PROVIDER

Let’s Encrypt is a free, automated, and open Certificate Authority provided by the nonprofit Internet Security Research Group (ISRG), launched as a Linux Foundation collaborative project. There are many other certificate providers available, but I’m using this one.

LEGO – the Let’s Encrypt Go Client

Here’s the high level plan:

  • Install the Lego client – see Step 1 here
  • Generate a Let’s Encrypt certificate for your domain
  • Configure the Web server to use the Let’s Encrypt certificate – see Apache or Nginx options on Bitnami site
  • Add a cron job to run every <90 days

I used this excellent Bitnami article to work through the process, it explains the steps in greater detail:

https://docs.bitnami.com/aws/how-to/generate-install-lets-encrypt-ssl/

Stop services

sudo /opt/bitnami/ctlscript.sh stop

Get/renew certificates

Once lego is set up, you can request multiple certs like this – just make sure to change the --domains="whatever" entries and add as many as you need. Remember all of your subdomains (www. etc) too.

sudo lego --tls --email="my@email.com" --domains="donaldsimpson.co.uk" --domains="www.donaldsimpson.co.uk" --domains="www.someothersite.com" --domains="someothersite.com" --path="/etc/lego" run

Now that you’ve got the certs, move them in to place and chmod them etc:

sudo mv /opt/bitnami/apache2/conf/server.crt /opt/bitnami/apache2/conf/server.crt.old
sudo mv /opt/bitnami/apache2/conf/server.key /opt/bitnami/apache2/conf/server.key.old
sudo mv /opt/bitnami/apache2/conf/server.csr /opt/bitnami/apache2/conf/server.csr.old
sudo ln -sf /opt/bitnami/letsencrypt/certificates/DOMAIN.key /opt/bitnami/apache2/conf/server.key
sudo ln -sf /opt/bitnami/letsencrypt/certificates/DOMAIN.crt /opt/bitnami/apache2/conf/server.crt
sudo chown root:root /opt/bitnami/apache2/conf/server*
sudo chmod 600 /opt/bitnami/apache2/conf/server*

Restart services

sudo /opt/bitnami/ctlscript.sh start

PLUGIN – JSM’s Force SSL / HTTPS

By this point I was happy that the nice new HTTPS certs were finally working reliably for all of my sites, but was aware that Google and external links would still try to get in through HTTP URLs.

After trying a few WordPress plugins that sounded like they should correct this neatly for me, I settled on JSM’s Force SSL/HTTPS plugin. As the name suggested, it quickly and easily redirects all incoming HTTP requests to HTTPS. It was simple to install and setup and works very well with WordPress Multisite too – thanks very much JSM!

CRONJOB

Now that the process works, the certificates need to be renewed every 90 days (at most), which would be a bit of a pain to remember and do manually, so adding a simple script to a cron job saves some hassle.
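Something like this in root’s crontab would do it – the script path is a placeholder for a small wrapper around the lego run and copy/restart steps above:

# 04:00 on the 1st of every second month - comfortably inside the 90-day validity
0 4 1 */2 * /root/renew-certs.sh >> /var/log/renew-certs.log 2>&1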

OTHER OPTIONS and things I found interesting…

Many other clients are available, there’s a large list here:
https://letsencrypt.org/docs/client-options/

One of the more popular is Certbot: https://certbot.eff.org/

Tech links

SNI – Server Name Indication:
https://www.digicert.com/ssl-support/apache-multiple-ssl-certificates-using-sni.htm

SAN – Subject Alternative Name:
https://www.digicert.com/subject-alternative-name.htm

Kubernetes – with Minikube and Helm – part 2

This is the second half of the Kubernetes with Minikube and Helm presentation, the first half explains all of the steps we went through to get to this point, and is available here:

In this section we cover the following:

  • Helm and Tiller – what they are, when & why you’d maybe use them
  • Helm and Tiller – prep, install and Helm Charts
  • Deploying Jenkins via Helm Charts
  • and WordPress w/MariaDB too
  • Wrap up

The below are mostly my technical notes from this session, with some added blurb/explanation.

Helm and Tiller – what they are, when & why you’d maybe use them

From the Helm site:

“Helm helps you manage Kubernetes applications — Helm Charts help you define, install, and upgrade even the most complex Kubernetes application. Charts are easy to create, version, share, and publish — so start using Helm and stop the copy-and-paste.”

https://helm.sh/

Helm is basically a package manager for Kubernetes applications. You can choose from a large list of Stable (or not so!) ready made packages and use the Helm Charts to quickly and easily deploy them to your own Kubernetes Cluster.

This makes light work of some very complex deployment tasks, and it’s also possible to extend these ready-made charts to suit your needs, and to write your own Charts from scratch, or pass your own values to override default ones, or… many other interesting options!

For this session we are looking at installing Helm, reviewing some example Helm Charts and deploying a few “vanilla” ones to the cluster we created in the first half of the session. We also touch upon the life-cycle of Helm Charts – it’s similar to Docker’s – and point out some of the ways this could be extended and customised to suit your needs – more on this at a later date hopefully.

Helm and Tiller – prep, install and Helm Charts

First, installing Helm – it’s as easy as this, run on your laptop/host that’s running the Minikube k8s we setup earlier:

Get & chmod the get_helm script, then run it:

curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh

chmod 700 get_helm.sh

./get_helm.sh

Tiller is the server-side component of Helm and is deployed inside your k8s cluster. It’s set to be removed with the release of Helm 3, but the basic functionality won’t really change. More details here https://helm.sh/blog/helm-3-preview-pt1/

Next we do the Tiller prep & install – add RBAC for tiller, deploy via helm and take a look at the running pods:

kubectl create serviceaccount -n kube-system tiller

kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

helm init --service-account tiller

kubectl --namespace kube-system get pods

Helm Charts – look at the list of available stable Charts, then deploy a couple. The github repo is here

https://github.com/helm/charts

Update the local helm repo info:

helm repo update

then, for example, install Redis from its Helm Chart to the k8s cluster as easily as this:

helm install stable/redis

or helm install stable/mysql and check the console output that explains how to access the newly deployed app.

keep an eye on the pods to see what’s going on: watch kubectl get pods -o wide

Deploying Jenkins via Helm Charts

helm ls

helm delete <things you don't want any more to free up resources>

helm install --set serviceType=NodePort --name jenki stable/jenkins

again, watch kubectl get pods -o wide

now get the URL for the Jenkins service from Minikube:

minikube service --url=true jenki-jenkins

Hit that URL in your browser, and grab the password in UI from Pods > Jenki and log in to Jenkins with the user “admin”:

That’s a Jenkins instance deployed via Helm and Tiller and a Helm Chart to our Kubernetes Cluster running inside Minikube via a VirtualBox VM… all done in a few minutes. And it’s all customisable, repeatable, highly scalable and awesome.

and WordPress w/MariaDB too

This was the “bonus demo” if my laptop wasn’t on fire – and thanks to some rapid cleaning up it managed fine – showing how quickly we could deploy a functional WordPress with MariaDB backend to our k8s cluster using the Helm Chart.

To prepare for this I did a helm ls to see all the things I had running, then helm delete --purge jenki, gave it a while to recover, then had to do

kubectl delete pods <jenkinpod>

before starting the WordPress Chart deployment with

helm install --set serviceType=NodePort --name wp-k8s stable/wordpress

watch kubectl get pods -o wide for a while – note the chart is configured with the mariadb pod as a pre requisite of the wordpress instance:

Once it’s started we requested the service URL from Minikube again, making ingress nice and easy:

minikube service --url=true wp-k8s-wordpress

Hit that in the browser, using https and accepting the cert warning…

then logged in as `user` and queried for the password in the k8s secret…

echo Password: $(kubectl get secret wp-k8s-wordpress -o jsonpath="{.data.wordpress-password}" | base64 --decode)

and logged in to WordPress:

Wrap up

That’s it – we covered a lot in this session, and plan to use this as a platform to explore Helm in more detail later, writing our own Helm Charts and providing our own customisations to them.

This cleans up everything we’d done:

minikube delete; rm -rf ~/.minikube

leaving just the local tools to remove if you want to – see the first half for a reminder.

Cheers,

Don

Update: this follow-on post runs through setting up Jenkins with Helm then creating Jenkins Pipelines that dynamically provision dockerised Jenkins Agents:
