Some pics of my first attempts at turning “natural edge” wooden bowls
These were made from a very nice chunk of Rowan wood I was kindly given a while back
And a random pic of our Newfie, Ula 🙂
This post is the first in a series of 3 introducing the combined power of Jenkins, Docker, and the Jenkins DSL.
They should hopefully provide enough information to get to grips with both Docker and Jenkins – what they both do and how to use them – by showing some practical examples of them working together.
The first step, if you haven’t already, is to download and install Docker on your platform – the Docker website covers this in good detail for most platforms…
Once that’s done, you can try it out with the customary “Hello World” example…
I’m running Docker on an Ubuntu VM, but the commands and the results are the same regardless of platform – that’s one of the main Docker concepts.
You can then check which processes (docker containers) are running using the “docker ps” command – in my example you can see that there’s one Jenkins image running. If you run “docker ps -a” you will see all containers (including stopped ones, of which I have a few on this host):
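If you want to follow along, those first checks look like this – a minimal sketch, and the container list you see will of course differ from mine:

```shell
# Run the customary "Hello World" container to confirm Docker is working
docker run hello-world

# List the currently running containers
docker ps

# List ALL containers, including stopped ones
docker ps -a
```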
root@ubuntud:~# docker --version
Docker version 1.13.0, build 49bf474
Now that the basic setup is done, we can move on to something a little more interesting – downloading and running a “Dockerised” Jenkins container.
I’m going to use my own Dockerised Jenkins Image, and there will be more detail on that in the next post – you’re welcome to try it out too, just run this command in your terminal:
docker run -d -p 8080:8080 donaldsimpson/dockerjenkins
If you don’t happen to have my Docker image cached locally (as I do), Docker will automatically download it for you from Docker Hub and then run it:
docker run -d
tells docker that we want to run the container in the background so that we can carry on and do other things while it runs. The alternative is -it, for an interactive/foreground session.
docker run -d -p 8080:8080
The -p 8080:8080 tells docker to map port 8080 on the local host to port 8080 in the running container. This means that when we visit localhost:8080 the request will be passed through to the container.
docker run -d -p 8080:8080 donaldsimpson/dockerjenkins
and finally, we have the namespace and name of the Docker image we want to run – my “donaldsimpson/dockerjenkins” one – more on this later!
You can now visit port 8080 on your Docker host and see that Jenkins is up and running….
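If you’d rather check from a terminal than a browser, a quick curl against the Docker host should show Jenkins responding – the /login path is just a page I’d expect Jenkins to serve unauthenticated, so treat this as a rough check:

```shell
# Fail on HTTP errors (-f), stay quiet (-sS), and look for "jenkins" in the page
curl -fsS http://localhost:8080/login | grep -i jenkins
```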
That’s Jenkins up and running and being happily served from the Docker container that was just pulled from Docker Hub – how easy was that?!
And the best thing is, it’s entirely and reliably repeatable, it’s guaranteed to work the same on all platforms that can run Docker, and you can quickly and easily update, delete, replace, change or share it with others! Ok, that’s more than one thing, but the point is that there’s a lot to like here 🙂
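To illustrate that last point, updating or removing the whole thing is only a couple more commands – a quick sketch, where the container ID comes from docker ps on your own host:

```shell
# Find the ID (or name) of the running container
docker ps

# Stop and remove that container
docker stop <container_id>
docker rm <container_id>

# Pull the latest version of the image, then run it again as before
docker pull donaldsimpson/dockerjenkins
docker run -d -p 8080:8080 donaldsimpson/dockerjenkins
```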
That’s it for this post – in the next one we will look in to the various elements that came together to make this work – the code and configuration files in my Git repo, the automated build process on Docker Hub that builds and updates the Docker Image, and how the two are related.
I wrote a while back about my troubles with Carrier Grade Nat (CGNAT), and described a solution that involved tunneling out of CGNAT using a combination of SSH and an AWS server – the full article is here.
That worked ok, but it was pretty fragile and not ideal – connections could be dropped, sessions expired, hosts rebooted, and so on. Passing all of my data through my EC2 host was also less than ideal.
My “new and improved” solution to this is to use a local tool like ngrok to create the tunnel for me. This is proving far simpler to manage and more reliable, and ngrok provides a load of handy additional features too.
Here’s a very quick run through of getting it up and running on my Ubuntu VM, which sits behind CGNAT and hosts a webserver I’d like to be able to access from the outside occasionally. This is the front end to my ZoneMinder CCTV interface, but it could be anything you want to host and on any port.
First off, don’t use the default Ubuntu install – that will give you version 1.x, which is out of date and didn’t work for me at all. It’s better, quicker and easier to get the latest binary for your platform directly from the ngrok website, extract it on your host, and either run it directly or add it to your PATH.
wget http://<YourDownloadURL>/ngrok-stable-linux-amd64.zip
unzip ngrok-stable-linux-amd64.zip
Once that’s downloaded and extracted, you can optionally add your auth token, which you get when you register on the ngrok site – you get some worthwhile extra features for doing so.
./ngrok authtoken <YourAuthTokenFromTheNgrokWebsite>
Then you simply run ngrok like so:
./ngrok http 80
which should give you a console something like this:
Note I’m using this command:
screen ./ngrok http -region eu 80
to start up ngrok using screen, so I can CTRL+A+D out of that session and resume it whenever I want with screen -r.
Here’s a pic of the console running, showing requests, and Apache being served by the ngrok URL:
That’s it – quick and easy, more stable, and far less faffing too.
There are tons of other options worth exploring, like specifying basic HTTP auth, saving your config to a local file, running other ports and so on – all of them are explained in the documentation.
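For example, something like the following – the exact flags and config file location can vary between ngrok versions, and the tunnel name is one you’d define yourself, so check ngrok help and the docs for your version:

```shell
# Protect the tunnel with basic HTTP auth
./ngrok http -auth "user:password" 80

# Tunnel a different local port, e.g. 8080
./ngrok http -region eu 8080

# Tunnels can also be defined in a local config file (~/.ngrok2/ngrok.yml
# for ngrok 2.x) and started by a name of your choosing:
./ngrok start zoneminder
```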
There’s a handy review of ngrok and several very similar tools here: http://john-sheehan.com/blog/a-survey-of-the-localhost-proxying-landscape
And some good tips & tricks with ngrok here:
As noted in the comments on that page: you obviously need to be safe and sensible when opening up ports to the internet…
After switching to a 4G broadband provider, who shall (pretty much) remain nameless, I discovered they were using Carrier-Grade NAT (aka CGNAT) on me.
There are more details on that here and here but in short, the ISP is ‘saving’ IPv4 addresses by sharing them out amongst several users and NAT’ing their connections – in much the same way as you do at home, when you port forward multiple devices using one external IP address: my home network is just one ‘device’ in a pool of their users, who are all sharing the same external IP address.
The impact of this for me is that I can no longer NAT my internal network services, as I have been given a shared public-facing IPv4 address. This approach may be practical for a bunch of mobile phone users wanting to check Twitter and Facebook, but it sucks big time for gamers or anyone else wanting to connect things from their home network to the internet. So, rather than having “Everything Everywhere” through my very expensive new 4G connection – with 12 months contract – it turns out I get “not much to anywhere”.
Point being: I would like to be able to check my internal servers and websites when I’m away – especially my ZoneMinder CCTV setup – but my home broadband no longer has its own internet address. So an alternative solution had to be found…
I basically use 2 servers, the one at home (unhelpfully now stuck behind my ISP’s CGNAT) and one in the Amazon Cloud (my public-facing AWS web server with DNS), and create a reverse SSH tunnel between them. Plus a couple of essential tweaks you won’t find out about if you don’t read any further 🙂
This is initiated on the internal/home server, and connects outwards to the AWS host on the internet, like so:
ssh -N -R 8888:localhost:80 -i /home/don/DonKey.pem firstname.lastname@example.org
Here is an explanation of each part of this command:
-N (from the SSH man page) “Do not execute a remote command. This is useful for just forwarding ports.”
-R (from the SSH man page) “Specifies that connections to the given TCP port or Unix socket on the remote (server) host are to be forwarded to the given host and port, or Unix socket, on the local side.”
8888:localhost:80 – this means: forward connections to port 8888 on the remote (AWS) host back to port 80 on localhost, which is my ZoneMinder web app. The order looks backwards at first glance, but it’s what’s needed for a reverse tunnel
the -i and everything after it is just me connecting to my AWS host as my user with an identity file. YMMV, whatever you normally do should be fine.
When you run this command you should not see any issues or warnings. You need to leave it running using whatever method you like – personally I like screen for this kind of thing, and will also be setting up Jenkins jobs later (below).
With that SSH command still running on your local server you should now be able to connect to the web app from your remote AWS Web Server, by reading from port 8888 with curl or wget.
This is a worthwhile check to perform at this point, before moving on to the next 2 steps – for example:
don@MyAWSHost:~$ wget -q -O- localhost:8888/zm | grep -i ZoneMinder
<h1>ZoneMinder Login</h1>
don@MyAWSHost:~$
This shows that port 8888 on my AWS server is currently connected to the ZoneMinder application that’s running on port 80 of my home web server. A good sign.
Progress is being made, but in order to be able to hit that port with a browser and have things work as I’d like, I still need to configure AWS to allow incoming connections to the newly chosen port 8888.
This is done through the Amazon EC2 Management Console using the left hand menu item “Network & Security” then “Security Groups”:
Now select Add and configure a new Inbound rule something like so:
At this point I thought I was done… but it didn’t work and I couldn’t immediately see why, as the wget check was all good.
Some head scratching and checking of firewalls later, I realised it was most likely to be permissions on the port I was tunneling – it’s not very likely to be exposed and world readable by default, is it? Doh.
After a quick google I found a site that explained the change I needed to make to my sshd_config file (usually /etc/ssh/sshd_config) – open it and add a new line that says:
GatewayPorts yes
checking first that there’s no existing reference to GatewayPorts – edit this file carefully and at your own risk.
As I understand it – which may best be described as ‘loosely’ – the reason this worked when I tested with wget earlier is because I was connecting to the loopback interface; this change to sshd binds the port to all interfaces. See the detailed answer on this post for further detail, including ways to limit this to specific users.
Once that’s done, restart sshd with
service ssh restart
and you should now be able to connect by pointing a web browser at port 8888 (or whatever you set) of your AWS web server and see your app responding from the other end:
The final step for me is to wrap this (the ssh tunnel creation part) up in a Jenkins job running on my home server.
This is useful for a number of reasons, such as avoiding and resetting defunct/stale connections and enabling scheduling – i.e. I can have the port forwarded when I want it, and have it shutdown during the hours I don’t.
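As a rough sketch of that, the Jenkins job can be a simple “Execute shell” build step using the same command from above, with a couple of keep-alive options added – ports, key path and hostname here are just my examples, adjust to suit:

```shell
#!/bin/bash
# Tidy up any existing/stale tunnel for this port before creating a new one
pkill -f "ssh -N -R 8888:localhost:80" || true

# Recreate the reverse tunnel; the ServerAlive options make a dead connection
# fail fast, so Jenkins notices and the job can be rerun on a schedule
ssh -N -R 8888:localhost:80 \
    -o ServerAliveInterval=60 \
    -o ServerAliveCountMax=3 \
    -i /home/don/DonKey.pem firstname.lastname@example.org
```

A pair of scheduled jobs – one running this, one killing the ssh process – gives the “port forwarded only during the hours I want it” behaviour mentioned above.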
The people of eBay kindly provided me with a Sun 900 38u rack cabinet for much, much cheapness. They also chucked in a whopping v890, a couple of StorEdge 3300s and something 2u-sized and servery that I’ve not yet managed to identify or attempt to power on.
Seeing a Sun cabinet being towed across the countryside by a mad man in a Landrover Defender is not a regular occurrence, so I thought I’d share pics of the process…
It was delivered on a pallet, which was collapsing under the incredible weight of all the steel inside the cabinet; it must weigh about the same as a small car:
The Landy won the tug of war, but only just…
I had to partially dismantle the thing:
but it was soon restocked with some new additions when it was safely indoors – my old Cobalt 550 server and a SunBlade 100 I had sitting around.
I’ve not had a chance to fire up the v890 yet, need to speak to eBay about some disks first, but I did power on the Sun Microsystems light on the top – my wife now refers to it as “that geeky vending machine thingy”…
Here’s the finished beastie:
The blank I started off with was so large I had to swivel the head of my lathe sideways to get it turning. Once it was roughed and made round, I was just able to rotate it back over the bed of the lathe – there was no room to spare.
I kept as much of the width as I could, so the finished bowl is very nearly 12″ in diameter at its widest point.
Here are some pics of it mounted on the chuck & on the lathe.
I used several thin coats of Wood Wax 22 for the finish on this one, which is a mix of beeswax and carnauba wax. It’s easy to apply and when you apply a little friction on the lathe (to generate some heat) it really brings out the grain and details of the sycamore.
After the ’22 was done I then applied a thin layer of Liberon Wax to seal it and provide a deep gloss finish, which should be reasonably hard-wearing.
Much like the Woodturning – New Ash bowl I turned and posted about recently, there are some nice contrasts on the underside of this one.
I’m aware that if I’d gone a bit deeper/thinner on the inside this colour would have started to show through there too, which may have worked well, but I didn’t want to risk making it too thin. Or risk meeting the jaws of the chuck with the tip of my bowl gouge…
Ordered some new woodturning blanks from Home of Wood recently. Very pleased with what they sent – sizes and variety all exactly what I was after.
Here are some pics of the first one I’ve finished turning, an Ash bowl with some nice grain and range of colour, finished off with my homemade beeswax and oil mix: