Setting up remote access to Frigate NVR with Nginx Proxy Manager LXC on Proxmox

This took me a little while to piece together, so I thought I’d write it up here in case it’s of use to anyone else, or if I ever need to go through it again….

Background

I use Frigate to access and manage my home CCTV cameras. It is awesome, and I would like to be able to access it securely from outside my local network/LAN.

I also use HomeAssistant (“HA”) to process the feeds and notifications from Frigate, but would like to directly access the Frigate web UI. I’ll keep HA mostly out of this post.

Setup

A quick overview on my current setup:

  • Nginx Proxy Manager running on a Proxmox host as an LXC

  • HomeAssistant, also on Proxmox but as a VM (HAOS)

  • Frigate and MQTT run as Docker containers on Ubuntu, on an old HP ProDesk. I may eventually migrate these over to Proxmox too, but they are working happily on this device and there may be issues migrating them to a VM or LXC due to hardware; I use a USB Coral TPU for processing, and while I know you can pass that through to an LXC or VM, I haven’t gotten around to it.

Installing Nginx Proxy Manager on Proxmox

Thanks to Proxmox and the amazing community scripts, this was very quick and easy. I used this script to deploy it as an LXC:

https://community-scripts.github.io/ProxmoxVE/scripts?id=nginxproxymanager

When that was completed I opened up a firewall rule on my router to allow traffic via HTTPS/443 to the new Nginx LXC’s address.

Configure Nginx Proxy Manager and Frigate

The next step – and the crux of this post – was to set up Nginx Proxy Manager to allow access through to Frigate and handle authentication:

  • create a new Proxy Host

This is reasonably simple; specify a domain name that resolves to your host/router, then set the local IP your Frigate runs on and the port. I gather Websocket Support is required, and you only need HTTPS here if your Frigate endpoint is using it. Nginx will serve this connection as HTTPS once set up to do so.

After some googling I found the following Nginx config was also recommended:
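
I haven’t reproduced the exact snippet here, but the custom config typically suggested (pasted into the proxy host’s “Advanced” tab) is along these lines – websocket upgrade plus the usual forwarding headers. Treat it as a generic example to adapt rather than a definitive config; with Websockets Support ticked, Nginx Proxy Manager adds most of this for you anyway:

proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
proxy_buffering off;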

Once done, you should have an “Online” Proxy Host combining your domain name, your Frigate (destination) IP & listening port, with SSL option (I use Let’s Encrypt):

  • A simple Access List was defined prior to the above, just containing a user & password set under ‘Authorisation’. You will need to use these credentials to log in.
  • Frigate updates for TLS?
  • The trusted_proxies below were also recommended, but I didn’t need them in my case:


When I eventually got things working using port 8971 (instead of 5000) I was prompted for a login by Frigate, but I hadn’t set up auth in Frigate, just Nginx.

Nginx has the option to pass auth through to the destination, which may be nice, but for now I just disabled the feature in Frigate and after a restart things worked as expected, with the basic Nginx auth only:

auth:
  enabled: False
tls:
  enabled: false

It may be better/safer/nicer to have the auth passed through, enabled and managed in Frigate, along with TLS, but I haven’t done so yet.

  • port issues – 5000 or 8971?

When I was testing this I started off using port 8971 which is recommended here:
https://docs.frigate.video/configuration/authentication

This didn’t work for me; I then discovered I couldn’t connect to that port at all (even locally) so I went with 5000 initially as I knew that did work locally at least.

Eventually I realised that I’d never needed or opened up that port to my Frigate container! I updated my config to map port 8971 to 8971:

-p 8971:8971 
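
For context, that’s just an extra -p flag on the docker run command that starts the Frigate container – a cut-down sketch, not the full command:

# expose Frigate's authenticated port (8971) alongside the unauthenticated 5000
docker run -d \
  --name frigate \
  -p 5000:5000 \
  -p 8971:8971 \
  ghcr.io/blakeblackshear/frigate:stable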

After that little oversight was corrected, it all worked properly!

When testing via a Browser (behind a VPN to emulate external access) I was prompted once for a login and then everything just worked; perfect!

I then went to check via my mobile phone, and that kept asking me to log in, with the message “Authorization required”.


This was fixed by updating the Nginx Access List and setting “Satisfy Any” to be On/checked. That small change seems to have sorted the issue and everything now works perfectly on my phone too.

Integrating Solana with GitHub Workflows for Enhanced CI/CD

Intro

Being a fan of Solana and interested in exploring and using the technology, I wanted to find some practical use for it in my role as a DevOps Engineer.

This post attempts to do that, by integrating Solana into a CI/CD workflow to provide an audit of build artefacts. Yes, there are many other ways & tools you could use to do this, but I found this particular combination interesting.

Overview

Solana is a high-performance blockchain platform known for its speed and scalability.

Integrating Solana with GitHub Workflows can bring a new level of security, transparency, and efficiency to your CI/CD pipelines.

This blog post demonstrates how to leverage Solana in a GitHub Workflow to enhance your development and deployment processes.

What is Solana?

Solana is a decentralised blockchain platform designed for high throughput and low latency. It supports smart contracts and decentralized applications (dApps) with a focus on scalability and performance. Solana’s unique consensus mechanism, Proof of History (PoH), allows it to process thousands of transactions per second.

Why Integrate Solana with GitHub Workflows?

Integrating Solana with GitHub Workflows can provide several benefits:

  • Immutable Build Artifacts: Store cryptographic hashes of build artifacts on the Solana blockchain to ensure their integrity and immutability.
  • Automated Smart Contract Deployment: Use Solana smart contracts to automate deployment processes.
  • Transparent Audit Trails: Record CI/CD pipeline activities on the blockchain for transparency and auditability.

Setting Up Solana in a GitHub Workflow

Let’s walk through an example of how to integrate Solana with a GitHub Workflow to store build artifact hashes on the Solana blockchain.

Step 1: Install Solana CLI

Ensure you have the Solana CLI installed on your local machine or CI environment:

sh -c "$(curl -sSfL https://release.solana.com/v1.8.0/install)"

Step 2: Set Up a Solana Wallet

Next, you need a Solana wallet to interact with the blockchain. You can use the Solana CLI to create a new wallet:

solana-keygen new --outfile ~/my-solana-wallet.json

This command generates a new wallet and saves the keypair to ~/my-solana-wallet.json.
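
A few optional follow-up commands can be handy at this point – a quick sketch, assuming you want to work against devnet rather than mainnet:

# Show the public key of the new wallet (safe to share; never share the JSON file itself)
solana-keygen pubkey ~/my-solana-wallet.json

# Point the CLI at devnet and use this keypair by default
solana config set --url https://api.devnet.solana.com --keypair ~/my-solana-wallet.json

# Request some devnet SOL to cover transaction fees (devnet/testnet only)
solana airdrop 1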

Step 3: Create a GitHub Workflow

Create a new GitHub Workflow file in your repository at .github/workflows/solana.yml:

name: Solana Integration

on:
  push:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest

    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: Set up Solana CLI
        run: |
          sh -c "$(curl -sSfL https://release.solana.com/v1.8.0/install)"
          # make the Solana CLI available to this and all subsequent steps
          echo "$HOME/.local/share/solana/install/active_release/bin" >> "$GITHUB_PATH"
          export PATH="$HOME/.local/share/solana/install/active_release/bin:$PATH"
          solana --version

      - name: Build project
        run: |
          # Replace with your build commands
          echo "Building project..."
          echo "Build complete" > build-artifact.txt

      - name: Generate SHA-256 hash
        run: |
          sha256sum build-artifact.txt > build-artifact.txt.sha256
          cat build-artifact.txt.sha256

      - name: Store hash on Solana blockchain
        env:
          SOLANA_WALLET: ${{ secrets.SOLANA_WALLET }}
        run: |
          echo "$SOLANA_WALLET" > ~/my-solana-wallet.json
          solana config set --keypair ~/my-solana-wallet.json
          # airdrops only work on devnet/testnet, so point the CLI at devnet for this example
          solana config set --url https://api.devnet.solana.com
          solana airdrop 1
          HASH=$(cat build-artifact.txt.sha256 | awk '{print $1}')
          solana transfer <RECIPIENT_ADDRESS> 0.001 --allow-unfunded-recipient --memo "$HASH"

Step 4: Configure GitHub Secrets

To securely store your Solana wallet keypair, add it as a secret in your GitHub repository:

  1. Go to your repository on GitHub.
  2. Click on Settings.
  3. Click on Secrets in the left sidebar.
  4. Click on New repository secret.
  5. Add a secret with the name SOLANA_WALLET and the content of your ~/my-solana-wallet.json file.

Step 5: Run the Workflow

Push your changes to the main branch to trigger the workflow. The workflow will:

  1. Check out the code.
  2. Set up the Solana CLI.
  3. Build the project.
  4. Generate a SHA-256 hash of the build artifact.
  5. Store the hash on the Solana blockchain.

Example Output and Actions

After the workflow runs, you can verify the transaction on the Solana blockchain using a block explorer like Solscan. The memo field of the transaction will contain the SHA-256 hash of the build artifact, ensuring its integrity and immutability.

Example Output:

Run sha256sum build-artifact.txt > build-artifact.txt.sha256
b1946ac92492d2347c6235b4d2611184a1e3d9e6 build-artifact.txt
Run solana transfer <RECIPIENT_ADDRESS> 0.001 --allow-unfunded-recipient --memo "b1946ac92492d2347c6235b4d2611184a1e3d9e6"
Signature: 5G9f8k9... (shortened for brevity)

Possible Actions:
  • Verify Artifact Integrity: Use the stored hash to verify the integrity of the build artifact before deployment (see the sketch after this list).
  • Audit Trail: Maintain a transparent and immutable audit trail of all build artifacts.
  • Automate Deployments: Extend the workflow to trigger automated deployments based on the stored hashes.
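
As a concrete example of that first action, here’s a minimal verification sketch – it assumes you have the Solana CLI installed locally and have noted the transaction signature from the workflow output:

#!/usr/bin/env bash
# SIG is the transaction signature printed by the workflow (the "Signature:" line)
SIG="<TRANSACTION_SIGNATURE>"

# Show the full transaction details, including the memo holding the artifact hash
solana confirm -v "$SIG"

# Recompute the artifact's hash locally and compare it with the memo value,
# either by eye or scripted as part of a deployment gate
sha256sum build-artifact.txt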

Conclusion

Integrating Solana with GitHub Workflows provides a powerful way to enhance the security, transparency, and efficiency of your CI/CD pipelines.

By leveraging Solana’s blockchain technology, you can ensure the integrity and immutability of your build artifacts, automate deployment processes, and maintain transparent audit trails.

I have used solutions similar to this previously: by automatically adding a container’s hash to an immutable database when it passes testing, and ensuring that only images on that list are permissible for deployment in the next environment up (e.g. Production), you can (at least help to) ensure that only approved code is deployed.

If you’d like to learn more about Solana they have some great documentation and examples: https://solana.com/docs/intro/quick-start

Enhancing CI/CD Pipelines with Immutable Build Artifacts Using Crypto Technologies

Intro:

In the ever-evolving landscape of software development, ensuring the integrity and security of build artifacts is paramount. As CI/CD pipelines become more sophisticated, integrating cryptocurrency technologies can provide a robust solution for managing and securing build artifacts. This blog post delves into the concept of immutable build artifacts and how crypto technologies can enhance CI/CD pipelines.

Understanding CI/CD Pipelines

CI/CD pipelines are automated workflows that streamline the process of integrating, testing, and deploying code changes. They aim to:

  • Continuous Integration (CI): Automatically integrate code changes from multiple contributors into a shared repository, ensuring a stable and functional codebase.
  • Continuous Deployment (CD): Automatically deploy integrated code to production environments, delivering new features and fixes to users quickly and reliably.

The Importance of Immutable Build Artifacts

Build artifacts are the compiled binaries, libraries, and other files generated during the build process. Ensuring these artifacts are immutable—unchangeable once created—is crucial for several reasons:

  • Security: Prevents tampering and unauthorized modifications.
  • Reproducibility: Ensures that the same artifact can be deployed consistently across different environments.
  • Auditability: Provides a clear and verifiable history of artifacts.

Leveraging Crypto Technologies for Immutable Build Artifacts

Cryptocurrency technologies, particularly blockchain, offer unique advantages for managing build artifacts:

  • Decentralization: Distributes data across multiple nodes, reducing the risk of a single point of failure.
  • Immutability: Ensures that once data is written, it cannot be altered or deleted.
  • Transparency: Provides a transparent and auditable history of all transactions.

Implementing Immutable Build Artifacts in CI/CD Pipelines

  1. Generate Build Artifacts: During the CI process, generate the build artifacts as usual.
   # Example: Building a Docker image and saving it as a file-based artifact
   docker build -t my-app:latest .
   docker save my-app:latest | gzip > my-app.tar.gz
  2. Create a Cryptographic Hash: Generate a cryptographic hash (e.g., SHA-256) of the build artifact to ensure its integrity.
   # Example: Generating a SHA-256 hash of the saved image
   sha256sum my-app.tar.gz
  3. Store the Hash on a Blockchain: Store the cryptographic hash on a blockchain to ensure immutability and transparency.
   # Example: Using a (hypothetical) blockchain-based storage service
   blockchain-store --hash <generated-hash> --metadata "Build #123"
  4. Retrieve and Verify the Hash: When deploying the artifact, retrieve the hash from the blockchain and verify it against the artifact to ensure integrity.
   # Example: Verifying the hash against the saved artifact
   retrieved_hash=$(blockchain-retrieve --metadata "Build #123")
   echo "${retrieved_hash}  my-app.tar.gz" | sha256sum -c -

Example Workflow

  1. CI Pipeline:
  • Build the artifact (e.g., Docker image).
  • Generate a cryptographic hash of the artifact.
  • Store the hash on a blockchain.
  2. CD Pipeline:
  • Retrieve the hash from the blockchain.
  • Verify the artifact’s integrity using the retrieved hash.
  • Deploy the verified artifact to the production environment.
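
Putting the CD side together, a minimal gate script might look like the sketch below. Note that blockchain-retrieve is the same hypothetical placeholder tool used in the steps above, not a real CLI:

#!/usr/bin/env bash
# Refuse to deploy unless the artifact's hash matches the one recorded on-chain
set -euo pipefail

expected_hash=$(blockchain-retrieve --metadata "Build #123")   # hypothetical tool
actual_hash=$(sha256sum my-app.tar.gz | awk '{print $1}')

if [ "$expected_hash" = "$actual_hash" ]; then
  echo "Artifact verified - deploying"
  # deployment commands go here, e.g. docker load < my-app.tar.gz
else
  echo "Hash mismatch - refusing to deploy" >&2
  exit 1
fi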

Benefits of Using Immutable Build Artifacts

  • Enhanced Security: Blockchain’s immutable nature ensures that build artifacts are secure and tamper-proof.
  • Improved Reproducibility: Immutable artifacts guarantee consistent deployments across different environments.
  • Increased Transparency: Blockchain provides a transparent and auditable history of all build artifacts.

Conclusion

Integrating cryptocurrency technologies with CI/CD pipelines to manage immutable build artifacts offers a range of benefits that enhance security, reproducibility, and transparency. By leveraging blockchain’s decentralized and immutable nature, organizations can ensure the integrity and authenticity of their build artifacts, providing a robust foundation for their CI/CD processes.

As the software development landscape continues to evolve, embracing these cutting-edge technologies will be crucial for maintaining a competitive edge and ensuring the reliability and security of software deployments. By implementing immutable build artifacts, organizations can build a more secure and efficient CI/CD pipeline, paving the way for future innovations.

ArgoCD and K8sGPT with KinD

This post covers a lot (very quickly and reasonably easily):

It starts with using Kubernetes in Docker (KinD) to create a minimal but functional local Kubernetes cluster.
Then, ArgoCD is set up and a sample app is deployed to the cluster.
Finally, k8sgpt is configured and a basic analysis of the cluster is run.

The main point of all of this was to try out k8sgpt in a safe and disposable environment.

Step 1: Install kind

First, ensure you have kind installed.

KinD can be installed quickly and easily with just the following commands:

curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.17.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind

Check out this older post for more detail on KinD:
https://www.donaldsimpson.co.uk/2023/08/09/kind-local-kubernetes-with-docker-nodes-made-quick-and-easy/

Step 2: Create a kind Cluster

Create a new kind cluster:

kind create cluster --name argocd-cluster

Step 3: Install kubectl

Ensure you have kubectl installed. You can install it using the following command:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
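
With kubectl installed, it’s worth a quick check that it can see the new cluster (kind names the kubeconfig context “kind-<cluster name>”):

kubectl cluster-info --context kind-argocd-cluster
kubectl get nodes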

Step 4: Install ArgoCD

  1. Create the argocd namespace:
kubectl create namespace argocd
  2. Install ArgoCD using the official manifests:
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

Step 5: Access the ArgoCD API Server

  1. Forward the ArgoCD server port to localhost:
kubectl port-forward svc/argocd-server -n argocd 8080:443
  2. Retrieve the initial admin password:
kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath="{.data.password}" | base64 -d; echo

Step 6: Login to ArgoCD

  1. Open your browser and navigate to https://localhost:8080.
  2. Login with the username admin and the password retrieved in the previous step.

Step 7: Install argocd CLI

  1. Download the argocd CLI:
curl -sSL -o argocd https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
chmod +x argocd
sudo mv argocd /usr/local/bin/
  2. Login using the argocd CLI:
argocd login localhost:8080
  3. Set the admin password (optional):
argocd account update-password

Step 8: Deploy an Application with ArgoCD

  1. Create a new application:
argocd app create guestbook \
    --repo https://github.com/argoproj/argocd-example-apps.git \
    --path guestbook \
    --dest-server https://kubernetes.default.svc \
    --dest-namespace default
  2. Sync the application:
argocd app sync guestbook
  3. Check the application status (see also the wait example below):
argocd app get guestbook
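
If you want to block until the app is both synced and healthy (handy in scripts or pipelines), the argocd CLI can wait for it – something like this should work, though check argocd app wait --help on your version:

# wait for the guestbook app to be synced and healthy
argocd app wait guestbook --sync --health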

Step 9: Install K8sGPT

  1. Install K8sGPT CLI:
curl -Lo k8sgpt https://github.com/k8sgpt-ai/k8sgpt/releases/latest/download/k8sgpt-linux-amd64
chmod +x k8sgpt
sudo mv k8sgpt /usr/local/bin/
  2. Configure K8sGPT:
k8sgpt auth --kubeconfig ~/.kube/config

Step 10: Inspect the Cluster with K8sGPT

  1. Run K8sGPT to inspect the cluster:
k8sgpt analyze
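
By default k8sgpt analyze just lists the issues it finds; if you have also configured an AI backend (via k8sgpt auth), you can ask it to explain them and suggest fixes – roughly like this, though the exact flags may vary between versions:

# ask the configured AI backend to explain each issue and suggest fixes
k8sgpt analyze --explain

# limit the analysis to a single namespace
k8sgpt analyze --namespace default --explain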

Example Output and Possible Associated Actions

Example Output:

[INFO] Analyzing cluster…

[INFO] Found 3 issues in namespace default:

[WARNING] Pod guestbook-frontend-5d8d4f5d6f-abcde is in CrashLoopBackOff state

[WARNING] Service guestbook-frontend is not reachable

[INFO] Deployment guestbook-frontend has 1 unavailable replica

Associated Actions:

  1. Pod in CrashLoopBackOff State:
    • Action: Check the logs of the pod to identify the cause of the crash.
    • Command: kubectl logs guestbook-frontend-5d8d4f5d6f-abcde -n default
    • Possible Fix: Resolve any issues found in the logs, such as missing environment variables, incorrect configurations, or application errors.
  2. Service Not Reachable:
    • Action: Verify the service configuration and ensure it is correctly pointing to the appropriate pods.
    • Command: kubectl describe svc guestbook-frontend -n default
    • Possible Fix: Ensure the service selector matches the labels of the pods and that the pods are running and ready.
  3. Deployment with Unavailable Replica:
    • Action: Check the deployment status and events to understand why the replica is unavailable.
    • Command: kubectl describe deployment guestbook-frontend -n default
    • Possible Fix: Address any issues preventing the deployment from scaling, such as resource constraints or scheduling issues.

Conclusion

Ok, admittedly that was a bit of a whirlwind, but if you followed it you have successfully deployed ArgoCD to a kind cluster, deployed an application using ArgoCD to that new cluster, then inspected the cluster & app using K8sGPT.

The example output and associated actions above provide guidance on how to address common issues identified by K8sGPT.

This setup allows you to manage your applications and monitor the health of your Kubernetes cluster effectively, and being able to spin up a disposable cluster like this is handy for many reasons.

Frigate – object detection and notifications

Intro

My notes on setting up Frigate NVR for a home CCTV setup.

The main focus of this post is on object detection (utilising a Google Coral TPU) and configuring notifications to Amazon Fire TVs (and other devices) via integration with HomeAssistant.

There’s a lot to cover and no point in reproducing the existing documentation; you can find full details & info on setting up the main components here:

ZoneMinder
Google Coral Edge TPU
Frigate
HomeAssistant

Background

I used Zoneminder for many years to capture and display my home CCTV cameras. There are several posts – going back to around 2016 – on this site under the ZoneMinder category here

This worked really well for me all that time, but I was never able to set up Object Detection in a way I liked – it can be done in a number of different ways, but everything I tried out was either very resource intensive, required linking to Cloud services like TensorFlow for processing, or was just too flaky and unreliable. It was fun trying them out, but none of them ever suited my needs. Integration and notification options were also possible, but were not straightforward.

So, I eventually took the plunge and switched to Frigate along with HomeAssistant. There was a lot to learn and figure out, so I’m posting some general info here in case it helps other people – or myself in future when I wonder why/how I did things this way….

Hardware

I have 4 CCTV cameras, these are generic and cheap 1080p Network IP cameras, connected via Ethernet. I don’t permit them any direct access to the Internet for notifications, updates, event analysis or anything.

I ran ZoneMinder (the server software that manages and presents the feeds from the cameras) on various hardware over the years, but for the Frigate and HomeAssistant setup I have gone for an energy-efficient and quiet little “server” – an HP ProDesk 600 G1 Mini – it’s very very basic and very low powered… and cost £40 on eBay:

After testing Object Detection using the CPU (this is waaaay too much load for the CPU to cope with longer-term, but really helps to test and prove the concept) I have since added a Google Coral Edge TPU to the host via USB. This enables me to offload the detection/inference work to the TPU and spare the little CPU’s energy for other tasks:

Objectives

My key goals here were to:

  1. Setup and trial Frigate – to see if it could fit my requirements and replace ZoneMinder
  2. Add Object Detection – without having to throw a lot of hardware at it or use Cloud Services like TensorFlow
  3. Integrate with HomeAssistant – I’d been wanting to try this for a while, to integrate my HomeKit devices with other things like Sonos, Amazon Fire TVs, etc

Note that you do not need to use HomeAssistant or MQTT in order to use or try Frigate; it can run as a standalone instance if you like. Frigate also comes with its own web interface which is very good, and I run this full-screen/kiosk mode on one of my monitors.

Setup and trial Frigate: setting up Frigate was easy; I went for Ubuntu on my host and installed Docker on that, then configured Frigate and MQTT containers to communicate. These are both simply declared in the Frigate config like this:

mqtt:
  host: 192.168.0.27
detectors:
  coral:
    type: edgetpu
    device: usb

Add Object Detection: with Frigate, this can be done by a Google Coral Edge TPU (pic above) – more info here: https://coral.ai/products/accelerator/ and details on my config below. I first trialled this using the host CPU and it ‘worked’ but was very CPU intensive: adding the dedicated TPU makes a massive difference and inference speeds are usually around 10ms for analysis of 4 HD feeds. This means the host CPU is free to focus on running other things (which is just as well given the size of the thing).

    objects:
      track:
        - person
        - dog
        - car
        - bird
        - cat

https://docs.frigate.video/configuration/objects

Integrate with HomeAssistant: I added the HomeAssistant Docker instance to my host, ran and configured the MQTT container for Frigate, then configured Frigate + HomeAssistant to work together. This was done by first installing HACS in HA, then using the Frigate Integration as explained here:
https://docs.frigate.video/integrations/home-assistant/

Setup Notifications

Phone notifications – I have previously had (and posted about my) issues with CGNAT and expected I would need to set up an ngrok tunnel and certs and jump through all sorts of hoops to get HA working remotely.

HA offers a very simple Cloud Integration via https://www.nabucasa.com/

I trialled this and was so impressed I have already signed up for a year – it’s well worth it for me and makes things much simpler. Phone notifications can be set up under HomeAssistant > Settings > Automations and Scenes > Frigate Notifications – after installing the Frigate Notifications Blueprint via HACS.

I can now open HomeAssistant on my phone from anywhere in the World and view a dashboard that has live feeds from my CCTV cameras at home. I have also set it up to show recently detected objects from certain cameras too.


Amazon FireTV notifications – I have just setup the sending of notifications to the screen of my Amazon Fire TV, this was done by first installing this app on the device:
https://www.amazon.com/Christian-Fees-Notifications-for-Fire/dp/B00OESCXEK
Then installing
https://www.home-assistant.io/integrations/nfandroidtv/
on HA and configuring Notifications as described there. I now get a pop-up window on my projector screen whenever there’s someone at my front gate.

This is a quick (and poor quality) pic of my projector screen (and chainsaw collection) with an Amazon Fire TV 4k displaying a pop-up notification in the bottom-right corner:

This means I now don’t need to leave a monitor on showing my CCTV feeds any more, as I am notified either via my mobile or on screen. And my notifications are only set up for specific object types – people & cars, and not for things it picks up frequently that I don’t want to be alerted on, like birds or passing sheep or cows.

Minor Apple Watch update – these notifications are also picked up on my Apple Watch, which is set to display my phone notifications. So I also get a short video clip of the key frames which is pretty awesome and works well.


My Frigate Config – here’s an example from the main “driveway” camera feed; this is the one I want to be monitoring & notified about most. It’s using RTSP to connect, record and detect the listed object types that I am interested in:

  driveway:
    birdseye:
      order: 1  
    enabled: True
    ffmpeg:
      inputs:
        - path: rtsp://THEUSER:THEPASSWORD@192.168.0.123:554/1
          roles:
            - detect
            - rtmp
        - path: rtsp://THEUSER:THEPASSWORD@192.168.0.123:554/1
          roles:
            - record        
    detect:
      width: 1280
      height: 720
      fps: 5
      stationary:
        interval: 0
        threshold: 50  
    objects:
      track:
        - person
        - dog
        - car
        - bird
        - cat

The full 24/7 recordings are all kept (one file/hour) for a few days then deleted and can be seen via HA under
Media > Frigate > Recordings > {camera name} > {date} > {hour}

Docker container start scripts

A note of the scripts I use to start the various docker containers.

This would be much better managed under Docker Compose or something; there are plenty of examples of that online, but I’d like to look at setting all of this up on Kubernetes, so I’m leaving this as rough as it is for now.

I am also running Grafana and NodeExporter at the moment to keep an eye on the stats, although things would probably look less worrying if I wasn’t adding to the load just to monitor them:


I’ll need to do something about that system load; it’s tempting to just get a second HP host & Coral TPU and put some of the load and half of the cameras on that – will see… a k8s cluster of them would be neat.

# Start Frigate container
docker run -d \
--name frigate \
--restart=unless-stopped \
--mount type=tmpfs,target=/tmp/cache,tmpfs-size=1000000000 \
--device /dev/bus/usb:/dev/bus/usb \
--device /dev/dri/renderD128 \
--shm-size=80m \
-v /root/frigate//storage:/media/frigate \
-v /root/frigate/config.yml:/config/config.yml \
-v /etc/localtime:/etc/localtime:ro \
-e FRIGATE_RTSP_PASSWORD='password' \
-p 5000:5000 \
-p 8554:8554 \
-p 8555:8555/tcp \
-p 8555:8555/udp \
ghcr.io/blakeblackshear/frigate:stable

# Start homeassistant container
docker run -d \
--name homeassistant \
--privileged \
--restart=unless-stopped \
-e TZ=Europe/Belfast \
-v /root/ha_files:/config \
--network=host \
ghcr.io/home-assistant/home-assistant:stable

# Start MQTT container
docker run -itd \
--name=mqtt \
--restart=unless-stopped \
--network=host \
-v /storage/mosquitto/config:/mosquitto/config \
-v /storage/mosquitto/data:/mosquitto/data \
-v /storage/mosquitto/log:/mosquitto/log \
eclipse-mosquitto

# Start NodeExporter container
docker run -d \
--name node_exporter \
--privileged \
--restart=unless-stopped \
-e TZ=Europe/Belfast \
-p 9100:9100 \
prom/node-exporter

# Start Grafana container
docker run -d \
--name grafana \
--privileged \
--restart=unless-stopped \
-e TZ=Europe/Belfast \
-p 3000:3000 \
grafana/grafana

Kind – local kubernetes with docker nodes made quick and easy

Quick notes on trying out Kind for a local and lightweight Kubernetes cluster.

The “getting started” steps for Kind are easy and well documented on the Kind site, but I didn’t find a good guide on adding the Kubernetes Dashboard to a newly created Kind cluster… I’m planning on using this as the basis for a few local projects so wanted to capture it here, plus check out using the Lens IDE to manage and monitor a local “Kind” cluster.

As it says on the Kind website… if you have go 1.16+ and docker or podman installed go install sigs.k8s.io/kind@v0.20.0 && kind create cluster is all you need!

Here’s me doing just that to create a new kind cluster on my Mac in 21 seconds….

all very quick and very easy, and it is incredibly light on resources too.

Notes on adding the Kubernetes Dashboard to a new Kind cluster

Apply the dashboard yaml

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml

Give it a moment or two to start up, checking with:

kubectl get pod -n kubernetes-dashboard

then create the admin user & cluster role bindings

kubectl create serviceaccount -n kubernetes-dashboard admin-user

kubectl create clusterrolebinding -n kubernetes-dashboard admin-user --clusterrole cluster-admin --serviceaccount=kubernetes-dashboard:admin-user

Next, get the auth token:

kubectl -n kubernetes-dashboard create token admin-user

Start up the local proxy

kubectl proxy

Browse to the local login endpoint here and pass it the token from above
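
For a default install, that login endpoint should be the standard dashboard proxy URL:

http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/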

and you should see the Kubernetes Dashboard for your (very new) Kind cluster like this…

The Kubernetes Dashboard could also be deployed via Helm:

helm install stable/kubernetes-dashboard --name my-release

or via the Lens UI (more on that below):

Setting up Lens is even simpler

Download Lens and log in:

https://k8slens.dev/

then select your “kind-kind” cluster from Lens > Catalogue > Clusters and you can see & do a whole load more with your cluster via Lens:

The missing metrics warning in Lens saying “Metrics are not available due to missing or invalid prometheus configuration.”

can be sorted by installing Prometheus using Helm via the CLI or from Lens:

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack

Alternatively, you can enable the inbuilt Metrics options in Lens:

Either way, things should soon look much better:

Adding multiple worker nodes to a kind cluster

This can be done in Kind by defining a cluster manifest with multiple worker nodes like this:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker

then creating the cluster specifying that file, e.g.

kind create cluster --config my-kind-cluster-config.yaml

“docker ps” should then show your multiple nodes, as will Lens:
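
You can also check the nodes from the command line:

# list the node containers kind created (assumes the default cluster name "kind")
kind get nodes
kubectl get nodes -o wide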

Hey Siri, manage my server…

Intro

I use Siri and Apple Homekit to automate some basic things – switching lights and heaters on/off, etc – and was wondering if there was some way I could use Siri to run tasks on my computers and servers at home.

Some googling showed me this was possible and also reasonably easy to set up – these are my notes on the process and some examples of what I’ve done with it so far.

Setup on iPhone

There’s a free Apple “Shortcuts” app for iPhones:

https://apps.apple.com/us/app/shortcuts/id915249334

which can perform a wide range of tasks, including – as of reasonably recently – the ability to run scripts over SSH.

Open the Shortcuts App, click + and then Add action. These pics show the process from that point on:

Click on Add Action….

From here you fill out the details – the IP address of the remote computer, the user and password, and the path to the script you want to run.

Requirements

You need to have SSH setup and a working script you can run over SSH first.

On Ubuntu that means installing and configuring SSH as described here:

https://linuxize.com/post/how-to-enable-ssh-on-ubuntu-20-04/

On MacOS you need to enable Remote Login under Sharing here:

You also need a script that is executable as the user you are connecting with.

Obviously, be aware of the security risk of enabling tasks to be run remotely, etc.

Examples

Here are some I made earlier.

This one connects to my old Mac Pro (it runs Ubuntu) and runs a ‘shutdown’ script.

My /home/don/shutdown script simply contains “sudo init 0” and the ‘don‘ user is enabled for passwordless sudo.
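
The passwordless sudo part is just a sudoers entry; that can be as broad or as narrow as you like, but a narrow example (edited via visudo, and assuming /sbin/init is the right path on your system) would be:

# /etc/sudoers.d/don - allow the 'don' user to run the shutdown command without a password
don ALL=(ALL) NOPASSWD: /sbin/init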

and this one connects to the same host and powers on the attached monitor, which runs Firefox showing my CCTV/ZoneMinder console:

The “/home/don/screenon” script contains this:

xset -display :0.0 dpms force on

and there’s a ‘screenoff’ script that switches the display off when I don’t want it on.
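
For completeness, the ‘screenoff’ side just needs the inverse command:

xset -display :0.0 dpms force off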

For my iMac running macOS I’ve added a shutdown script – useful when I don’t want to go and power it off manually.

I’ve ended up with a selection of shortcuts to power things on & off, and can now say “Hey Siri, CCTV on please“, or “Hey Siri, shutdown iMac please“, and Siri makes it so….

This setup enables me to run pretty much anything on a Linux or Mac host simply by asking Siri – it could trigger deployment pipelines, perform updates, start/stop/restart services…. anything you can put in a shell script.

If you have any interesting ideas or suggestions please let me know below.

AWS CodeCommit – prep for AWS CDK & CodePipelines

This is the next step in a series on using the AWS CDK and AWS CodePipeline.

In the previous post I set up a new local AWS CDK environment and a remote AWS Cloud account, user etc, and connected the two. That got as far as deploying a simple local AWS CDK application to my AWS account and then cleaning it up. This post looks at the next step which is setting up CodeCommit – AWS’s managed and git-based version control system, much like github or gitlab – in preparation for some AWS CodePipeline and AWS CodeBuild posts that will follow on.

The first step is to add permissions to AWS CodeCommit for your IAM user – I’m using the “cdk-user” that was created previously – as detailed here:

https://docs.aws.amazon.com/codecommit/latest/userguide/setting-up-gc.html

In the AWS UI, go to IAM > User > Security Credentials:
Select the “HTTPS Git credentials for AWS CodeCommit (Generate)” option then download the newly generated credentials:

In CodeCommit, create a new Repo if you don’t already have one, click Clone and select/copy the HTTPS link

In your local cli, do a “git clone” of the HTTPS repo

when prompted, supply the credentials from above.

You should now be able to interact with the AWS CodeCommit repo in your AWS account using your local git cli in the same way you would for github, bitbucket or gitlab – an example clone, add, merge and push to master (!) as a quick test:
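
For example (assuming a repo called my-cdk-repo in eu-west-1 – adjust the region and repo name to match yours):

# clone the CodeCommit repo over HTTPS - you'll be prompted for the generated Git credentials
git clone https://git-codecommit.eu-west-1.amazonaws.com/v1/repos/my-cdk-repo
cd my-cdk-repo

# make a trivial change, commit it and push it back to master as a quick end-to-end test
echo "# my-cdk-repo" > README.md
git add README.md
git commit -m "Initial commit - testing CodeCommit access"
git push origin master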

In the next post, this setup will be used to manage and host the source code for new AWS CDK applications, and to manage and trigger the AWS CodePipelines (also written in CDK!) that will build and deploy them.

Installing APKs on Amazon Fire HD with ADB

My notes on “sideloading” APK files to an Amazon Fire TV HD using ADB.

I don’t do this often and had forgotten how, so this may help me out next time.

Getting and using Android Debug Bridge (adb)

Useful info here:

https://developer.android.com/studio/command-line/adb

download the stable binaries for Mac, linux or Windows from here:

https://developer.android.com/studio/releases/platform-tools

you can either add the location of the binaries to your PATH, or cd to them and run them directly like I did, e.g.

./adb help

Download the APK files you want to install

For example

https://smartyoutubetv.github.io/en/

or Kodi

https://mirrors.kodi.tv/releases/android/arm/

I put the downloaded APK files in the same dir as the adb tools to keep things very simple.

Connect to your Amazon Fire TV

Find the IP address of your Amazon Fire device from Network Settings (From Settings, go to Device (or My Fire TV) > About > Network), for example mine was 192.168.0.176.

Enable ADB debugging in your Amazon Fire device via Settings.

connect from client laptop/pc to Fire TV, for example:

./adb connect 192.168.0.176:5555

you can also list local devices:

donaldsimpson@Donalds-iMac adb-tools % ./adb devices
List of devices attached
192.168.0.176:5555 unauthorized
192.168.0.59:5555 unauthorized

Install APK to connected device

Once connected, installing a new app should be as simple as

./adb install yourapp.apk

Note that if you have multiple devices you may get this message:

➜ adb-tools ./adb install smartyoutubetv_latest.apk
Performing Push Install
adb: error: failed to get feature set: more than one device/emulator

check the list of attached devices:

➜ adb-tools ./adb devices
List of devices attached
G070VM1904950F5U device
192.168.0.18:5555 device

then specify the device you are aiming for with “-s <address:port>” like this:

➜ adb-tools ./adb -s 192.168.0.18:5555 install smartyoutubetv_latest.apk
Performing Streamed Install
Success

I also had this response at one point:

./adb install smartyoutubetv_latest.apk
Performing Push Install
adb: error: failed to get feature set: device unauthorized.
This adb server's $ADB_VENDOR_KEYS is not set
Try 'adb kill-server' if that seems wrong.
Otherwise check for a confirmation dialog on your device.

… the last line prompted me to look at the Fire TV screen and notice it was asking me to approve the connection request from my laptop.
Doh.
Once approved the app installed no problem:

./adb install smartyoutubetv_latest.apk
Performing Push Install
smartyoutubetv_latest.apk: 1 file pushed, 0 skipped. 3.3 MB/s (7901934 bytes in 2.261s)
pkg: /data/local/tmp/smartyoutubetv_latest.apk
Success

Also, when doing this:

./adb -s 192.168.0.88:5555 install FlixVision_v2.9.2r.apk
adb: device '192.168.0.88:5555' not found

despite adb appearing to connect, the device was listed as offline:

error: device offline

this again turned out to be the FireTV having prompted me for approval on-screen, which I didn’t see when connecting from my laptop.

Note to self – check for a dialog on-screen when having issues!

Updating an existing app

I’ve had an outdated Kodi install for ages and wanted to update that while I was here. The process is simple: just add an -r for “replace existing application”:

./adb install -r kodi-18.8-Leia-armeabi-v7a.apk
Performing Push Install
kodi-18.8-Leia-armeabi-v7a.apk: 1 file pushed, 0 skipped. 3.5 MB/s (63508040 bytes in 17.391s)
pkg: /data/local/tmp/kodi-18.8-Leia-armeabi-v7a.apk
Success

This went very smoothly, all my settings, connections and shares etc were still there after the upgrade, and it looks a lot nicer for it too.

That’s it – there’s a ton of useful info on other commands and options from

./adb help

I found some more useful info on connecting from ADB to Fire TV here:

https://developer.amazon.com/docs/fire-app-builder/connecting-adb-to-fire-tv.html

Starting up Kodi on Amazon FireTV remotely

After getting the above sorted out, I wanted to find a way to start Kodi on my FireTV without having to switch my projector on & off to do so.

I use Kodi as an AirPlay target for music during the day, and it switches itself off overnight. I could probably change that.

Using ADB tools, I connect to the device remotely, as before, with:

./adb connect 192.168.0.176:5555

though normally that comes back with “already connected to…”

then start up Kodi using the “Android activity manager”, “am“:

./adb shell am start -n org.xbmc.kodi/.Splash

this takes a little while to start, but after about 30 seconds I can connect to the Kodi web interface on port 8080 of my FireTV, and the AirPlay target becomes available.
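
A quick way to check it’s up without switching the projector on is to poll the web interface from the laptop (assuming Kodi’s web server is enabled on the default port 8080):

# returns 200 (or 401 if the web interface is password protected) once Kodi is running
curl -s -o /dev/null -w "%{http_code}\n" http://192.168.0.176:8080/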

It looks like there are many other interesting things you can do with “am”.

Uninstalling packages with adb

List installed packages

./adb shell pm list packages

and filter for whatever you’re looking for (e.g. “guard“)

./adb shell pm list packages | grep -i guard

then uninstall that package name:

./adb uninstall com.adguard.vpn

Update on smartyoutube to fix ads

Quick update specifically on Smart Youtube TV on Android. This was brought on by my initial install of Smart Youtube TV starting to show adverts (a lot).

I had installed Smart Youtube TV, version 6.17.739 (at time of writing this is still the latest stable release available) on my Android Fire – details above. This worked very well for months, but has started to not filter out youtube advertisements.

Having not found an update and while looking for another solution, I found “SmartTubeNext Beta”, which looks to be pretty stable and widely used, for a beta version:

https://www.apklinker.com/apk/liskovsoft/

From that site, it looks like it’s been around 4 months since SmartYouTube was updated, but SmartTubeNext is actively being developed, so could be worth a try – here’s how:

Get the latest smarttube beta APK (via wget, or download via browser from here: https://smartyoutubetv.github.io/)

wget https://github.com/yuliskov/SmartTubeNext/releases/download/latest/smarttube_beta.apk

connect to your Android device (update the IP to match yours):

./adb connect 192.168.0.176:5555

install the APK:

./adb -s 192.168.0.18:5555 install -r smarttube_beta.apk

All done.

I wasn’t sure if this would replace the existing SmartYouTube (which is why I added the -r switch, which turned out not to be necessary), but it’s ok: it’s installed as a different app so the stable version is kept and available should there be any issues with the beta version.

This version of SmartYoutube looks a lot better than the previous/stable one.

List of improvements from their site:

  • 4K support
  • runs without Google Services
  • designed for TV screens
  • stock controller support
  • external keyboard support

Personally I really like the better controller support, and the overall look is much more suitable for a large screen. It’s also a lot more customisable. And, most importantly, it removes all the adverts.

Kubernetes Operators for Monitoring with Prometheus and Grafana Dashboards

Introduction

This post takes a look at setting up monitoring and alerting in Kubernetes, using Helm and Kubernetes Operators to deploy and configure Prometheus and Grafana.

This platform is quickly and easily deployed to the cluster using a Helm Chart, which in turn uses a Kubernetes Operator, to setup all of the required resources in an existing Kubernetes Cluster.

I’m re-using the Minikube Kubernetes cluster with Helm that was built and described in previous posts here and here, but the same steps should work for any working Kubernetes & Helm setup.

An example Grafana Dashboard for Kubernetes monitoring is then imported, and we take a quick look at monitoring of Cluster components with other dashboards.

Kubernetes Operators & Helm combo

K8s Operators are described ‘in plain English’ here:
https://enterprisersproject.com/article/2019/2/kubernetes-operators-plain-english

and defined by CoreOS as “a method of packaging, deploying and managing a Kubernetes application”.

The Operator used in this post can be seen here:

https://github.com/coreos/prometheus-operator

and this is deployed to the Cluster using this Helm Chart:

https://github.com/helm/charts/tree/master/stable/prometheus-operator

It may sound like Helm and Operators do much the same thing, but they are different and complementary.

Helm and Operators are complementary technologies. Helm is geared towards performing day-1 operations of templatization and deployment of Kubernetes YAMLs — in this case Operator deployment. Operator is geared towards handling day-2 operations of managing application workloads on Kubernetes.

from https://medium.com/@cloudark/kubernetes-operators-and-helm-it-takes-two-to-tango-3ff6dcf65619

Let’s get (re)started

I’m reusing the Minikube cluster from previous posts, so start it back up with:

minikube start

which outputs the following in the console

🎉  minikube 1.10.1 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.10.1
💡  To disable this notice, run: ‘minikube config set WantUpdateNotification false’

🙄  minikube v1.9.2 on Darwin 10.13.6
✨  Using the virtualbox driver based on existing profile
👍  Starting control plane node m01 in cluster minikube
🔄  Restarting existing virtualbox VM for “minikube” …
🐳  Preparing Kubernetes v1.18.0 on Docker 19.03.8 …
🌟  Enabling addons: dashboard, default-storageclass, helm-tiller, metrics-server, storage-provisioner
🏄  Done! kubectl is now configured to use “minikube”

this all looks ok, and includes the minikube addons I’d selected previously.
Now a quick check to make sure my local helm repo is up to date:

helm repo update

I then used this command to find the latest version of the stable prometheus-operator via a helm search:
helm search stable/prometheus-operator --versions | head -2

there’s no doubt a neater/builtin way to find out the latest version, but this did the job – I’m going to install 8.13.8:

install the prometheus operator using Helm into a new dedicated “monitoring” namespace – it just takes this one command:
helm install stable/prometheus-operator --version=8.13.8 --name=monitoring --namespace=monitoring

Ooops

that should normally be it, but for me, this resulted in some issues along these lines:

Error: Get http://localhost:8080/version?timeout=32s: dial tcp 127.0.0.1:8080: connect: connection refused

– looks like Helm can’t communicate with Tiller any more; I confirmed this with a simple helm ls which also failed with the same message. This shouldn’t be a problem when v3 of Helm goes “tillerless”, but to fix this quickly I simply re-enabled Tiller in my cluster via Minikube Addons:


➞  minikube addons disable helm-tiller
➞  minikube addons enable helm-tiller

verified things worked again with helm ls, then the helm install... command worked and started to do its thing…

New Operator and Namespace

Keeping an eye on progress in my k8s dashboard, I can see the new “monitoring” namespace has been created, and the various Operator components are being downloaded, started up and configured:

you can also keep an eye on progress with:
watch -d kubectl get po --namespace=monitoring

this takes a while on my machine, but eventually completes with this console output:

NOTES:
The Prometheus Operator has been installed. Check its status by running:
  kubectl --namespace monitoring get pods -l "release=monitoring"

Visit https://github.com/coreos/prometheus-operator for instructions on how
to create & configure Alertmanager and Prometheus instances using the Operator.

kubectl get po --namespace=monitoring shows the pods now running in the cluster, and for this quick example the easiest way to get access to the new Grafana instance is to forward the pods port 3000 to localhost like this:

➞  kubectl --namespace monitoring port-forward monitoring-grafana-64d4f6fcf7-t5zkv 3000:3000

(check and adjust the above to use the full/correct name of your monitoring-grafana-* pod)
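
Rather than copying the pod name by hand, you can look it up – something like this should work, although the exact label depends on the chart version (older Grafana charts use app=grafana rather than app.kubernetes.io/name=grafana):

# find the Grafana pod and forward its port 3000 to localhost
GRAFANA_POD=$(kubectl get pods -n monitoring -l "app.kubernetes.io/name=grafana" \
  -o jsonpath='{.items[0].metadata.name}')
kubectl --namespace monitoring port-forward "$GRAFANA_POD" 3000:3000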

Connecting to Grafana

now I can hit http://localhost:3000 and have that connect to port 3000 in the Grafana pod:


from the documentation on the Helm Chart and Operator here:

https://github.com/helm/charts/tree/master/stable/prometheus-operator

the default user for this Grafana is “admin” and the password for that user is “prom-operator“, so log in with those credentials…

Grafana Dashboards for Kubernetes

We can now use the ready-made Grafana dashboards, or add/import ones from the extensive online collection, like this one here for example: https://grafana.com/grafana/dashboards/6417 – simply save the JSON file

then go to Grafana and import it with these settings:

and you should now have a dashboard showing some pretty helpful stats on your kubernetes cluster, its health and resource usage:

Finally a very quick look at some of the other inbuilt dashboards – you can use and adjust these to monitor all of the components that comprise your cluster and set up alerting when limits or triggers are reached:

All done & next steps

There’s a whole lot more that can be done here, and many other ways to get to this point, but I found this pretty quick and easy.

I’ve only been looking at monitoring of k8s resources here, but you can obviously set up grafana dashboards for many other things, like monitoring your deployed applications. Many applications (and charts and operators) come with prom endpoints built in, and can easily and automatically be added to your monitoring and alerting dashboards along with other datasources.

Cheers,

Don
