This (awesome and huge) tree fell down in a storm at the start of 2025:
Stump cut:
Trunk cut into sections for milling:
This took me a little while to piece together, so I thought I’d write it up here in case it’s of use to anyone else, or if I ever need to go through it again….
I use Frigate to access and manage my home CCTV cameras. It is awesome, and I would like to be able to access it securely from outside my local network/LAN.
I also use HomeAssistant (“HA”) to process the feeds and notifications from Frigate, but would like to directly access the Frigate web UI. I’ll keep HA mostly out of this post.
A quick overview on my current setup:
– Nginx Proxy Manager running on a Proxmox host as an LXC
– Homeassistant also on Proxmox but as a VM (HAOS)
– Frigate and MQTT run as Docker containers on Ubuntu, on an old HP ProDesk. I may eventually migrate these over to Proxmox too, but they are working happily on this device and there may be issues migrating them to a VM or LXC due to hardware; I use a USB Coral TPU for processing, and while I know you can pass that through to an LXC or VM, I haven’t gotten around to it.
Thanks to Proxmox and the amazing community scripts, this was very quick and easy. I used this script to deploy it as an LXC:
https://community-scripts.github.io/ProxmoxVE/scripts?id=nginxproxymanager
When that was completed I opened up a firewall rule on my router to allow traffic via HTTPS/443 to the new Nginx LXC’s address.
The next step – and the crux of this post – was to set up Nginx Proxy Manager to allow access through to Frigate and handle authentication:
This is reasonably simple; specify a domain name that resolves to your host/router, then set the local IP your Frigate runs on and the port. I gather Websocket Support is required, and you only need HTTPS here if your Frigate endpoint is using it. Nginx will serve this connection as HTTPS once set up to do so.
After some googling I found the following Nginx config was also recommended:
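(The snippet itself hasn’t survived the copy here, but it was along these lines – the usual proxy and websocket headers, pasted into the Proxy Host’s “Advanced” tab; the IP and port below are illustrative, not necessarily what I use:)

location / {
    proxy_pass http://192.168.0.50:8971;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}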
Once done, you should have an “Online” Proxy Host combining your domain name, your Frigate (destination) IP & listening port, with SSL option (I use Let’s Encrypt):
When I eventually got things working using port 8971 (instead of 5000) I was prompted for a login by Frigate, but I hadn’t set up auth in Frigate, just Nginx.
Nginx has the option to pass auth through to the destination, which may be nice, but for now I just disabled the feature in Frigate and after a restart things worked as expected, with the basic Nginx auth only:
auth:
  enabled: False
tls:
  enabled: false
It may be better/safer/nicer to have the auth passed through, enabled and managed in Frigate, along with TLS, but I haven’t done so yet.
When I was testing this I started off using port 8971 which is recommended here:
https://docs.frigate.video/configuration/authentication
This didn’t work for me; I then discovered I couldn’t connect to that port at all (even locally) so I went with 5000 initially as I knew that did work locally at least.
Eventually I realised that I’d never needed or opened up that port to my Frigate container! I updated my config to map port 8971 to 8971:
-p 8971:8971
After that little oversight was corrected, it worked correctly!
When testing via a Browser (behind a VPN to emulate external access) I was prompted once for a login and then everything just worked; perfect!
I then went to check via my mobile phone, and that kept asking me to log in, with the message “Authorization required”.
This was fixed by updating the Nginx Access List and setting “Satisfy Any” to be On/checked. That small change seems to have sorted the issue and everything now works perfectly on my phone too.
Being a fan of Solana and interested in exploring and using the technology, I wanted to find some practical use for it in my role as a DevOps Engineer.
This post attempts to do that, by integrating Solana into a CI/CD workflow to provide an audit of build artefacts. Yes, there are many other ways & tools you could do this, but I found this particular combination interesting.
Solana is a high-performance blockchain platform known for its speed and scalability.
Integrating Solana with GitHub Workflows can bring a new level of security, transparency, and efficiency to your CI/CD pipelines.
This blog post demonstrates how to leverage Solana in a GitHub Workflow to enhance your development and deployment processes.
Solana is a decentralised blockchain platform designed for high throughput and low latency. It supports smart contracts and decentralized applications (dApps) with a focus on scalability and performance. Solana’s unique consensus mechanism, Proof of History (PoH), allows it to process thousands of transactions per second.
Integrating Solana with GitHub Workflows can provide several benefits: a tamper-evident audit trail for build artifacts, transparency for anyone verifying a release, and automation of the record-keeping itself.
Let’s walk through an example of how to integrate Solana with a GitHub Workflow to store build artifact hashes on the Solana blockchain.
Ensure you have the Solana CLI installed on your local machine or CI environment:
sh -c "$(curl -sSfL https://release.solana.com/v1.8.0/install)"
Then, you need a Solana wallet to interact with the blockchain. You can use the Solana CLI to create a new wallet:
solana-keygen new --outfile ~/my-solana-wallet.json
This command generates a new wallet and saves the keypair to ~/my-solana-wallet.json.
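Since the workflow below relies on solana airdrop to fund transaction fees, the CLI needs to be pointed at a development cluster (airdrops aren’t available on mainnet). A quick sanity check along these lines – assuming the wallet file created above – confirms the address and starting balance:

# Point the CLI at devnet, then show the new wallet's address and balance
solana config set --url https://api.devnet.solana.com
solana address -k ~/my-solana-wallet.json
solana balance -k ~/my-solana-wallet.json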
Create a new GitHub Workflow file in your repository at .github/workflows/solana.yml:
name: Solana Integration

on:
  push:
    branches:
      - main

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2

      - name: Set up Solana CLI
        run: |
          sh -c "$(curl -sSfL https://release.solana.com/v1.8.0/install)"
          export PATH="/home/runner/.local/share/solana/install/active_release/bin:$PATH"
          solana --version

      - name: Build project
        run: |
          # Replace with your build commands
          echo "Building project..."
          echo "Build complete" > build-artifact.txt

      - name: Generate SHA-256 hash
        run: |
          sha256sum build-artifact.txt > build-artifact.txt.sha256
          cat build-artifact.txt.sha256

      - name: Store hash on Solana blockchain
        env:
          SOLANA_WALLET: ${{ secrets.SOLANA_WALLET }}
        run: |
          echo $SOLANA_WALLET > ~/my-solana-wallet.json
          solana config set --keypair ~/my-solana-wallet.json
          solana airdrop 1
          HASH=$(cat build-artifact.txt.sha256 | awk '{print $1}')
          solana transfer <RECIPIENT_ADDRESS> 0.001 --allow-unfunded-recipient --memo "$HASH"
To securely store your Solana wallet keypair, add it as a secret in your GitHub repository:
– Go to your repository’s Settings.
– Select Secrets in the left sidebar.
– Click New repository secret.
– Name it SOLANA_WALLET and paste in the content of your ~/my-solana-wallet.json file.
Push your changes to the main branch to trigger the workflow. The workflow will check out the code, build the project, generate a SHA-256 hash of the build artifact, and store that hash on the Solana blockchain in a transaction memo.
After the workflow runs, you can verify the transaction on the Solana blockchain using a block explorer like Solscan. The memo field of the transaction will contain the SHA-256 hash of the build artifact, ensuring its integrity and immutability.
Example output from the workflow steps:
Run sha256sum build-artifact.txt > build-artifact.txt.sha256
b1946ac92492d2347c6235b4d2611184a1e3d9e6 build-artifact.txt
Run solana transfer <RECIPIENT_ADDRESS> 0.001 --allow-unfunded-recipient --memo "b1946ac92492d2347c6235b4d2611184a1e3d9e6"
Signature: 5G9f8k9... (shortened for brevity)
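If you’d rather verify from the command line than a block explorer, the signature printed in the workflow log can be looked up with the Solana CLI; the verbose output includes the transaction’s instructions, memo included (the signature below is a placeholder):

# Look up the transaction by its signature; -v prints the full details
solana confirm -v <TRANSACTION_SIGNATURE>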
Integrating Solana with GitHub Workflows provides a powerful way to enhance the security, transparency, and efficiency of your CI/CD pipelines.
By leveraging Solana’s blockchain technology, you can ensure the integrity and immutability of your build artifacts, automate deployment processes, and maintain transparent audit trails.
I have used solutions similar to this previously; by automatically adding a container’s hash to an immutable database when it passes testing, while at the same time ensuring that the only images permissible for deployment in the next environment up (e.g. Production) exist on that list, you can (at least help to) ensure that only approved code is deployed.
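A minimal sketch of that sort of deployment gate is below; the image tag, the flat approved-digests.txt file and the surrounding workflow are all hypothetical – in practice the approved list would live in whatever immutable store you’re using:

# Hypothetical pre-deployment check: only proceed if the candidate image's digest is on the approved list
DIGEST=$(docker inspect --format '{{index .RepoDigests 0}}' my-app:candidate)
if grep -qxF "$DIGEST" approved-digests.txt; then
  echo "Image approved - deploying"
else
  echo "Image not on the approved list - refusing to deploy" >&2
  exit 1
fi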
If you’d like to learn more about Solana they have some great documentation and examples: https://solana.com/docs/intro/quick-start
In the ever-evolving landscape of software development, ensuring the integrity and security of build artifacts is paramount. As CI/CD pipelines become more sophisticated, integrating cryptocurrency technologies can provide a robust solution for managing and securing build artifacts. This blog post delves into the concept of immutable build artifacts and how crypto technologies can enhance CI/CD pipelines.
CI/CD pipelines are automated workflows that streamline the process of integrating, testing, and deploying code changes. They aim to catch issues early, reduce manual effort, and deliver changes quickly and reliably.
Build artifacts are the compiled binaries, libraries, and other files generated during the build process. Ensuring these artifacts are immutable—unchangeable once created—is crucial for several reasons: it keeps builds reproducible, makes releases traceable, and prevents artifacts from being tampered with between build and deployment.
Cryptocurrency technologies, particularly blockchain, offer unique advantages for managing build artifacts: records are immutable once written, anyone can verify them independently, and no single party controls the audit trail. The examples below sketch how this might look in practice; note that blockchain-store and blockchain-retrieve are placeholder commands rather than real tools:
# Example: Building a Docker image
docker build -t my-app:latest .
# Example: Generating a SHA-256 hash of a Docker image
docker save my-app:latest | sha256sum
# Example: Using a blockchain-based storage service
blockchain-store --hash <generated-hash> --metadata "Build #123"
# Example: Verifying the current artifact against the stored hash (placeholder commands again)
retrieved_hash=$(blockchain-retrieve --metadata "Build #123")
current_hash=$(docker save my-app:latest | sha256sum | awk '{print $1}')
[ "$retrieved_hash" = "$current_hash" ] && echo "Artifact verified" || echo "Hash mismatch!"
Integrating cryptocurrency technologies with CI/CD pipelines to manage immutable build artifacts offers a range of benefits that enhance security, reproducibility, and transparency. By leveraging blockchain’s decentralized and immutable nature, organizations can ensure the integrity and authenticity of their build artifacts, providing a robust foundation for their CI/CD processes.
As the software development landscape continues to evolve, embracing these cutting-edge technologies will be crucial for maintaining a competitive edge and ensuring the reliability and security of software deployments. By implementing immutable build artifacts, organizations can build a more secure and efficient CI/CD pipeline, paving the way for future innovations.
This post covers a lot (very quickly and reasonably easily):
It starts with using Kubernetes in Docker (KinD) to create a minimal but functional local Kubernetes Cluster.
Then, ArgoCD is set up and a sample app is deployed to the cluster.
Finally, k8sgpt is configured and a basic analysis of the cluster is run.
The main point of all of this was to try out k8sgpt in a safe and disposable environment.
First, ensure you have kind installed.
KinD can be installed quickly and easily with just the following commands:
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.17.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/local/bin/kind
Check out this older post for more detail on KinD:
https://www.donaldsimpson.co.uk/2023/08/09/kind-local-kubernetes-with-docker-nodes-made-quick-and-easy/
Create a new kind cluster:
kind create cluster --name argocd-cluster
Ensure you have kubectl installed. You can install it using the following command:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
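With kubectl installed, it’s worth confirming the new cluster is reachable before going any further (kind prefixes the context name with kind-, so the context here is kind-argocd-cluster):

kubectl cluster-info --context kind-argocd-cluster
kubectl get nodes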
Create the argocd namespace and install ArgoCD into it:
kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
Port-forward the ArgoCD API server so you can reach the UI locally:
kubectl port-forward svc/argocd-server -n argocd 8080:443
Retrieve the initial admin password:
kubectl get secret argocd-initial-admin-secret -n argocd -o jsonpath="{.data.password}" | base64 -d; echo
Browse to https://localhost:8080 and log in with the username admin and the password retrieved in the previous step.
Next, install the argocd CLI:
curl -sSL -o argocd https://github.com/argoproj/argo-cd/releases/latest/download/argocd-linux-amd64
chmod +x argocd
sudo mv argocd /usr/local/bin/
Log in with the argocd CLI and (optionally) update the admin password:
argocd login localhost:8080
argocd account update-password
Deploy the sample guestbook application from the ArgoCD examples repo:
argocd app create guestbook \
--repo https://github.com/argoproj/argocd-example-apps.git \
--path guestbook \
--dest-server https://kubernetes.default.svc \
--dest-namespace default
Sync the app and check its status:
argocd app sync guestbook
argocd app get guestbook
Install k8sgpt:
curl -Lo k8sgpt https://github.com/k8sgpt-ai/k8sgpt/releases/latest/download/k8sgpt-linux-amd64
chmod +x k8sgpt
sudo mv k8sgpt /usr/local/bin/
Authenticate k8sgpt against your cluster config, then run an analysis:
k8sgpt auth --kubeconfig ~/.kube/config
k8sgpt analyze
Example output:
[INFO] Analyzing cluster…
[INFO] Found 3 issues in namespace default:
[WARNING] Pod guestbook-frontend-5d8d4f5d6f-abcde is in CrashLoopBackOff state
[WARNING] Service guestbook-frontend is not reachable
[INFO] Deployment guestbook-frontend has 1 unavailable replica
The reported issues can then be investigated with the usual kubectl commands:
kubectl logs guestbook-frontend-5d8d4f5d6f-abcde -n default
kubectl describe svc guestbook-frontend -n default
kubectl describe deployment guestbook-frontend -n default
Ok, admittedly that was a bit of a whirlwind, but if you followed it you have successfully deployed ArgoCD to a kind cluster, deployed an application using ArgoCD to that new cluster, then inspected the cluster & app using K8sGPT.
The example output and associated actions above provide guidance on how to address common issues identified by K8sGPT.
This setup allows you to manage your applications and monitor the health of your Kubernetes cluster effectively, and being able to spin up a disposable cluster like this is handy for many reasons.
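And when you’re finished experimenting, the whole disposable cluster can be thrown away just as quickly as it was created:

kind delete cluster --name argocd-cluster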
Quick notes on using the No Man’s Sky Save Editor on Mac
—
Make a local directory to install and keep the required files in
mkdir NMSEditor; cd NMSEditor
Download the jar file from the GitHub repo:
https://github.com/goatfungus/NMSSaveEditor/blob/master/NMSSaveEditor.jar
or if you’re happier using the command line:
curl -L -O https://github.com/goatfungus/NMSSaveEditor/raw/master/NMSSaveEditor.jar
Next, you need Java installed to run the jar file. The easiest way to get it (and to add lots of other useful tools to your Mac) is to use Homebrew; install instructions for that are at https://brew.sh
When you’ve got brew setup, you can then install java with this command:
brew install java
As recommended by brew, run the following (or similar, depending on your shell) to update your shell’s PATH:
echo 'export PATH="/opt/homebrew/opt/openjdk/bin:$PATH"' >> ~/.zshrc
Remember to open a new shell/terminal session to pick up this change
NMS Save file location
The Steam save file location on Mac is the equivalent of this; you need to update it for your user name and whatever your “st_xxx” numbers are…
/Users/<YOURUSERNAME>/Library/Application Support/HelloGames/NMS/st_<my_numbers>
To run the save editor, you can now just do:
java -jar NMSSaveEditor.jar
When it opens, select the path to your save file based on the above, choose a save slot, modify as desired… and remember to take frequent backups!
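For example, a quick way to take one of those backups is simply to copy the whole save directory before editing – adjust the path for your user name and st_ numbers as above:

# Copy the save directory to a timestamped backup folder
cp -R "$HOME/Library/Application Support/HelloGames/NMS/st_<my_numbers>" \
      "$HOME/NMS_backup_$(date +%Y%m%d_%H%M%S)"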
Storm Isha left an obstacle that needed cleared up in a hurry, here are some pics!
In the early hours of the morning, blocking our way out past our neighbours farm at the end of our track:
This was the start of my “lunch break”:
Things went quickly; this was about halfway through:
All packed up (nearly – I went back for those big bits later!) and off back to work with about 10 mins to spare….
My notes on setting up Frigate NVR for a home CCTV setup.
The main focus of this post is on object detection (utilising a Google Coral TPU) and configuring notifications to Amazon Fire TVs (and other devices) via integration with HomeAssistant.
There’s a lot to cover and no point in reproducing the existing documentation; you can find full details & info on setting up the main components here:
ZoneMinder
Google Coral Edge TPU
Frigate
HomeAssistant
I used Zoneminder for many years to capture and display my home CCTV cameras. There are several posts – going back to around 2016 – on this site under the ZoneMinder category here
This worked really well for me all that time, but I was never able to set up Object Detection in a way I liked – it can be done in a number of different ways, but everything I tried was either very resource intensive, reliant on Cloud services like TensorFlow for processing, or just too flaky and unreliable. It was fun trying them out, but none of them ever suited my needs. Integration and notification options were also possible, but were not straightforward.
So, I eventually took the plunge and switched to Frigate along with HomeAssistant. There was a lot to learn and figure out, so I’m posting some general info here in case it helps other people – or myself in future when I wonder why/how I did things this way….
I have 4 CCTV cameras, these are generic and cheap 1080p Network IP cameras, connected via Ethernet. I don’t permit them any direct access to the Internet for notifications, updates, event analysis or anything.
I ran ZoneMinder (the server software that manages and presents the feeds from the cameras) on various hardware over the years, but for the Frigate and HomeAssistant setup I have gone for an energy-efficient and quiet little “server” – an HP ProDesk 600 G1 Mini – it’s very very basic and very low powered… and cost £40 on eBay:
After testing Object Detection using the CPU (this is waaaay too much load for the CPU to cope with longer-term, but really helps to prove the concept) I have since added a Google Coral Edge TPU to the host via USB. This enables me to offload the detection/inference work to the TPU and spare the little CPU’s energy for other tasks:
My key goals here were to: replace ZoneMinder with Frigate for recording and viewing my cameras, add reliable local object detection (without sending anything to the cloud), and get useful notifications to my phone and TV via HomeAssistant.
Note that you do not need to use HomeAssistant or MQTT in order to use or try Frigate; it can run as a standalone instance if you like. Frigate also comes with its own web interface, which is very good, and I run this in full-screen/kiosk mode on one of my monitors.
Setup and trial Frigate: setting up Frigate was easy, I went for Ubuntu on my host and installed Docker on that, then configured Frigate and MQTT containers to communicate. These are both simply declared in the Frigate config like this:
mqtt:
  host: 192.168.0.27
detectors:
  coral:
    type: edgetpu
    device: usb
Add Object Detection: with Frigate, this can be done by a Google Coral Edge TPU (pic above) – more info here: https://coral.ai/products/accelerator/ and details on my config below. I first trialled this using the host CPU and it ‘worked’ but was very CPU intensive: adding the dedicated TPU makes a massive difference and inference speeds are usually around 10ms for analysis of 4 HD feeds. This means the host CPU is free to focus on running other things (which is just as well given the size of the thing).
objects:
  track:
    - person
    - dog
    - car
    - bird
    - cat
https://docs.frigate.video/configuration/objects
Integrate with HomeAssistant: I added the HomeAssistant Docker instance to my host, ran and configured the MQTT container for Frigate, then configured Frigate and HomeAssistant to work together. This was done by first installing HACS in HA, then using the Frigate Integration as explained here:
https://docs.frigate.video/integrations/home-assistant/
Phone notifications – I have previously had (and posted about my) issues with CGNAT, and expected I would need to set up an ngrok tunnel and certs and jump through all sorts of hoops to get HA working remotely.
HA offers a very simple Cloud Integration via https://www.nabucasa.com/
I trialled this and was so impressed I have already signed up for a year – it’s well worth it for me and makes things much simpler. Phone notifications can be set up under HomeAssistant > Settings > Automations and Scenes > Frigate Notifications – after installing the Frigate Notifications Blueprint via HACS.
I can now open HomeAssistant on my phone from anywhere in the World and view a dashboard that has live feeds from my CCTV cameras at home. I have also set it up to show recently detected objects from certain cameras too.
Amazon FireTV notifications – I have just set up the sending of notifications to the screen of my Amazon Fire TV; this was done by first installing this app on the device:
https://www.amazon.com/Christian-Fees-Notifications-for-Fire/dp/B00OESCXEK
Then installing
https://www.home-assistant.io/integrations/nfandroidtv/
on HA and configuring Notifications as described there. I now get a pop-up window on my projector screen whenever there’s someone at my front gate.
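For reference, the HA side of that integration boils down to a notify platform pointing at the Fire TV’s IP address – roughly like this in configuration.yaml (the name and IP here are just examples, not my actual values):

notify:
  - platform: nfandroidtv
    name: fire_tv
    host: 192.168.0.150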
This is a quick (and poor quality) pic of my projector screen (and chainsaw collection) with an Amazon Fire TV 4k displaying a pop-up notification in the bottom-right corner:
This means I now don’t need to leave a monitor on showing my CCTV feeds any more, as I am notified either via my mobile or on screen. And my notifications are only set up for specific object types – people & cars, and not for things it picks up frequently that I don’t want to be alerted on, like birds or passing sheep or cows.
Minor Apple Watch update – these notifications are also picked up on my Apple Watch, which is set to display my phone notifications. So I also get a short video clip of the key frames which is pretty awesome and works well.
My Frigate Config – here’s an example from the main “driveway” camera feed; this is the one I want to be monitoring & notified about most. It’s using RTSP to connect, record and detect the listed object types that I am interested in:
driveway:
  birdseye:
    order: 1
    enabled: True
  ffmpeg:
    inputs:
      - path: rtsp://THEUSER:THEPASSWORD@192.168.0.123:554/1
        roles:
          - detect
          - rtmp
      - path: rtsp://THEUSER:THEPASSWORD@192.168.0.123:554/1
        roles:
          - record
  detect:
    width: 1280
    height: 720
    fps: 5
    stationary:
      interval: 0
      threshold: 50
  objects:
    track:
      - person
      - dog
      - car
      - bird
      - cat
The full 24/7 recordings are all kept (one file/hour) for a few days then deleted and can be seen via HA under
Media > Frigate > Recordings > {camera name} > {date} > {hour}
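That retention is driven by Frigate’s record settings – something along these lines, with the number of days set to whatever suits your storage (illustrative values):

record:
  enabled: True
  retain:
    days: 3
    mode: all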
A note of the scripts I use to start the various docker containers.
This would be much better managed under Docker Compose or something similar – there are plenty of examples of that online – but I’d like to look at setting all of this up on Kubernetes, so I’m leaving it as rough as it is for now.
I am also running Grafana and NodeExporter at the moment to keep an eye on the stats, although things would probably look less worrying if I wasn’t adding to the load just to monitor them:
I’ll need to do something about that system load; it’s tempting to just get a second HP host & Coral TPU and put some of the load and half of the cameras on that – will see… a k8s cluster of them would be neat.
# Start Frigate container
docker run -d \
--name frigate \
--restart=unless-stopped \
--mount type=tmpfs,target=/tmp/cache,tmpfs-size=1000000000 \
--device /dev/bus/usb:/dev/bus/usb \
--device /dev/dri/renderD128 \
--shm-size=80m \
-v /root/frigate/storage:/media/frigate \
-v /root/frigate/config.yml:/config/config.yml \
-v /etc/localtime:/etc/localtime:ro \
-e FRIGATE_RTSP_PASSWORD='password' \
-p 5000:5000 \
-p 8554:8554 \
-p 8555:8555/tcp \
-p 8555:8555/udp \
ghcr.io/blakeblackshear/frigate:stable
# Start homeassistant container
docker run -d \
--name homeassistant \
--privileged \
--restart=unless-stopped \
-e TZ=Europe/Belfast \
-v /root/ha_files:/config \
--network=host \
ghcr.io/home-assistant/home-assistant:stable
# Start MQTT container
docker run -itd \
--name=mqtt \
--restart=unless-stopped \
--network=host \
-v /storage/mosquitto/config:/mosquitto/config \
-v /storage/mosquitto/data:/mosquitto/data \
-v /storage/mosquitto/log:/mosquitto/log \
eclipse-mosquitto
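One thing worth noting: recent eclipse-mosquitto images ship locked down, so the config directory mapped above needs a minimal mosquitto.conf for Frigate to be able to connect over the network – something like this (paths assume the volume mappings in the run command, and the open anonymous access is only sensible on a trusted LAN):

listener 1883
allow_anonymous true
persistence true
persistence_location /mosquitto/data/
log_dest file /mosquitto/log/mosquitto.log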
# Start NodeExporter container
docker run -d \
--name node_exporter \
--privileged \
--restart=unless-stopped \
-e TZ=Europe/Belfast \
-p 9100:9100 \
prom/node-exporter
# Start Grafana container
docker run -d \
--name grafana \
--privileged \
--restart=unless-stopped \
-e TZ=Europe/Belfast \
-p 3000:3000 \
grafana/grafana
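As mentioned above, Docker Compose would tidy all of this up. A minimal sketch covering just the Frigate and MQTT services might look something like the below – same image, devices, ports and volume paths as the run commands above, but untested here, so treat it as a starting point rather than a drop-in replacement:

# docker-compose.yml sketch mirroring the docker run commands above (untested)
services:
  frigate:
    image: ghcr.io/blakeblackshear/frigate:stable
    container_name: frigate
    restart: unless-stopped
    shm_size: "80m"
    devices:
      - /dev/bus/usb:/dev/bus/usb
      - /dev/dri/renderD128:/dev/dri/renderD128
    volumes:
      - /root/frigate/storage:/media/frigate
      - /root/frigate/config.yml:/config/config.yml
      - /etc/localtime:/etc/localtime:ro
      - type: tmpfs
        target: /tmp/cache
        tmpfs:
          size: 1000000000
    environment:
      FRIGATE_RTSP_PASSWORD: "password"
    ports:
      - "5000:5000"
      - "8554:8554"
      - "8555:8555/tcp"
      - "8555:8555/udp"
  mqtt:
    image: eclipse-mosquitto
    container_name: mqtt
    restart: unless-stopped
    network_mode: host
    volumes:
      - /storage/mosquitto/config:/mosquitto/config
      - /storage/mosquitto/data:/mosquitto/data
      - /storage/mosquitto/log:/mosquitto/log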
Quick notes on trying out Kind for a local and lightweight Kubernetes cluster.
The “getting started” steps for Kind are easy and well documented on the Kind site, but I didn’t find a good guide on adding the Kubernetes Dashboard to a newly created Kind cluster… I’m planning on using this as the basis for a few local projects so wanted to capture it here, plus checkout using the Lens IDE to manage and monitor a local “Kind” cluster.
As it says on the Kind website… if you have go 1.16+ and docker or podman installed, go install sigs.k8s.io/kind@v0.20.0 && kind create cluster is all you need!
Here’s me doing just that to create a new kind cluster on my Mac in 21 seconds….
all very quick and very easy, and it is incredibly light on resources too.
Apply the dashboard yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
Give it a moment or two to start up, checking with:
kubectl get pod -n kubernetes-dashboard
then create the admin user & cluster role bindings
kubectl create serviceaccount -n kubernetes-dashboard admin-user
kubectl create clusterrolebinding -n kubernetes-dashboard admin-user --clusterrole cluster-admin --serviceaccount=kubernetes-dashboard:admin-user
Next, get the auth token:
kubectl -n kubernetes-dashboard create token admin-user
Start up the local proxy
kubectl proxy
Browse to the local login endpoint at http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/ and pass it the token from above
and you should see the Kubernetes Dashboard for your (very new) Kind cluster like this…
The Kubernetes Dashboard could also be deployed via Helm (note that the old stable/ chart repo is deprecated; the chart now lives in the kubernetes-dashboard repo):
helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/
helm install my-release kubernetes-dashboard/kubernetes-dashboard
or via the Lens UI (more on that below):
Download Lens and log in:
then select your “kind-kind” cluster from Lens > Catalogue > Clusters and you can see & do a whole load more with your cluster via Lens:
The missing metrics warning in Lens saying “Metrics are not available due to missing or invalid prometheus configuration.”
can be sorted by installing Prometheus using Helm via the CLI or from Lens:
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install prometheus prometheus-community/kube-prometheus-stack
Alternatively, you can enable the inbuilt Metrics options in Lens:
Either way, things should soon look much better:
This can be done in Kind by defining a cluster manifest with multiple worker nodes like this:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
- role: worker
then creating the cluster specifying that file, e.g.
kind create cluster --config my-kind-cluster-config.yaml
“docker ps” should then show your multiple nodes, as will Lens:
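You can also check this with kubectl – each node listed corresponds to one of the kind node containers shown by docker ps:

kubectl get nodes -o wide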
This one caused me some grief for the last few weeks, thought I’d share in case it helps anyone else…
Firefox is my preferred browser on Mac – it’s set up with all my passwords, bookmarks, extensions etc, and I use Chrome for work – so when it stopped loading and started crashing for no good reason it was VERY annoying. I couldn’t find any cure for it, found little mention of the issue, and nothing more helpful than the usual advice: update it, restart, disable extensions, clear your profile, reinstall, etc.
The issue being that Firefox would hang on startup – nothing would load, not even the basic home page, and I had to force-restart the browser on a regular basis until it would eventually decide to work like normal again…. ARGH!
The solution to my issue was to disable support for HTTP3. That’s very easily done and has completely fixed things on my Mac (in this case it’s an iMac Retina 5k, 27-inch with macOS Monterey).
Here’s how – it takes 5 seconds + a restart.
Enter “about:config” in the address bar and then hit the “Accept the Risk and Continue” button…
Filter the settings for “http3” via the search bar, then click on the bottom-right icon seen in this pic to change the value from true to false:
so that it looks like this:
then just restart the browser and if you had the same issue as me, it should now be sorted! Happy surfing.
Update: There is some more detailed info here:
https://bugzilla.mozilla.org/show_bug.cgi?id=1749908
including reports that disabling all of the data collection features may solve this (without the HTTP3 disabling update). Looks like this has been around for a while but only started affecting me a couple of weeks ago, and it was hard to debug as there’s no obvious/searchable error message.